\section{Introduction}

Since the observation of gravitational waves \cite{TheLIGOScientific:2016wyq, Abbott:2017xzu}, we are entering a new era of cosmology. More precise measurements will be provided by current and future experiments such as LIGO \cite{TheLIGOScientific:2014jea}, Virgo \cite{TheVirgo:2014hva}, LISA \cite{Audley:2017drz}, DECIGO/BBO \cite{Yagi:2011wg}, the Einstein Telescope (ET) \cite{Punturo:2010zz}, Cosmic Explorer (CE) \cite{Evans:2016mbw}, Taiji \cite{Guo:2018npi}, and TianQin \cite{Mei:2020lrl}. This is bound to have profound ramifications upon cosmology. The idea of cosmic inflation \cite{Starobinsky:1980te, Sato:1980yn, Guth:1980zm, Linde:1981mu} solves many problems (such as the horizon, flatness, and unwanted relics problems\footnote{What kind of relics are unwanted depends on whether there are monopoles, gravitinos, or something else beyond the standard model of particle physics which may be over-abundantly produced in a hot big bang.}) of the old hot big bang model and explains why our universe is so big (compared with the Planck scale), so long-lived (compared with the Planck time), geometrically so flat (and yet not perfectly flat), and has such a large entropy with a variety of contents. It is intriguing that in this scenario, the origin of all the structures of the universe (such as galaxies and planets) ultimately lies in quantum fluctuations. It is arguably the standard scenario of the very early universe. Yet we are still searching for the best inflation model.

Among the inflationary models, D-term inflation \cite{Binetruy:1996xj, Halyo:1996pp} is a supersymmetric (SUSY) realisation of hybrid inflation \cite{Linde:1993cn}. Hybrid inflation provides an effective way to produce small field inflation, defined as inflation models with an inflaton field value smaller than the Planck scale (at least when our observable universe is leaving the horizon). Although small field inflation models also produce primordial gravitational waves, the value of the tensor-to-scalar ratio $r$ is typically too small to be observed in near-future experiments \cite{Lyth:1996im}. Instead, one can probe a class of small field inflation models in which cosmic strings are produced after inflation via the gravitational waves generated by those cosmic strings \cite{Lin:2021wbn}. A salient feature of D-term inflation is the production of cosmic strings after inflation. The energy per unit length of cosmic strings (also known as the string tension $\mu$) can be significantly large, and it can leave observable signatures. The string tension is commonly expressed via the dimensionless combination $G\mu$, where $G$ is Newton's constant. The current constraint from CMB (Cosmic Microwave Background) measurements is $G\mu<1.1 \times 10^{-7}$ \cite{Charnock:2016nzm}. More precise measurements can be obtained through observations of gravitational waves, since the vibrations of cosmic strings generate gravitational radiation \cite{Hindmarsh:1994re}. The current limit from the European Pulsar Timing Array (EPTA) implies $G\mu \lesssim 10^{-11}$ \cite{vanHaasteren:2011ni}. Recently, a possible signal of a stochastic gravitational wave background was reported by the NANOGrav Collaboration \cite{Arzoumanian:2020vkk}. This corresponds to a string tension $G\mu \in (4\times 10^{-11}, 10^{-10})$ at the $68 \%$ confidence level \cite{Ellis:2020ena}\footnote{In \cite{Blasi:2020mfx}, a slightly different value of $G\mu \in (6\times 10^{-11}, 1.7\times 10^{-10})$ at the $68 \%$ confidence level is obtained.}.
In the calculations of the following sections, we will use $G\mu=10^{-11}$.

On the other hand, there is a discrepancy in the measured value of the Hubble constant $H_0$ between high redshift observations (such as those of the CMB and baryon acoustic oscillations (BAO)) and low redshift measurements using the local distance ladder (such as Cepheids and SNe Ia). The measurement from the CMB is \cite{Planck:2018vyg} \begin{equation} H_0=67.4 \pm 0.5 \mbox{ km s}^{-1}\mbox{ Mpc}^{-1}. \label{high} \end{equation} However, the measurement from the SH0ES collaboration is \cite{Riess:2021jrx} \begin{equation} H_0=73.04 \pm 1.04 \mbox{ km s}^{-1}\mbox{ Mpc}^{-1}. \label{low} \end{equation} There are also other experiments, and the discrepancy cannot easily be explained by systematic errors \cite{Efstathiou:2013via, Addison:2015wyg, Planck:2016tof, Aylor:2018drw, Verde:2019ivm}. This discrepancy is known as the Hubble tension (see \cite{DiValentino:2021izs, Schoneberg:2021qvd, Abdalla:2022yfr, Perivolaropoulos:2021jda} for recent reviews), and currently the disagreement is about $4\sigma$ to $6\sigma$. Since the result from the CMB measurement is based on the $\Lambda$CDM model, many proposed resolutions of the Hubble tension assume some modification of the $\Lambda$CDM model. In this case, there may be corresponding modifications of the spectral index \cite{DiValentino:2018zjj, Ye:2021nej, Jiang:2022uyg}. According to observations based on the $\Lambda$CDM model, the spectral index is given by $n_s=0.965 \pm 0.004$ \cite{Planck:2018vyg}. On the other hand, in pre-recombination resolutions of the Hubble tension (such as early dark energy), the Markov chain Monte Carlo analysis of \cite{Ye:2021nej} shows \begin{equation} \delta n_s \simeq 0.4 \frac{\delta H_0}{H_0}, \end{equation} for lifting the high redshift $H_0$, and it seems to point to\footnote{Very roughly, from Eqs.~(\ref{high}) and (\ref{low}) we can calculate $n_s=0.96+0.4\frac{73.04-67.4}{70}=0.99 \sim 1$.} \begin{equation} n_s=1. \label{n1} \end{equation} This increase of $n_s$ relative to the $\Lambda$CDM value is consistent with some earlier works \cite{Poulin:2018cxd, Agrawal:2019lmo, Lin:2019qug, Ye:2020oix}, where the results indicate that in order to have $H_0 \gtrsim 71 \mbox{ km s}^{-1}\mbox{ Mpc}^{-1}$, the spectral index should satisfy \begin{equation} n_s \gtrsim 0.98. \label{n98} \end{equation} The physical meaning of the increase of $n_s$ is to compensate for the suppression of small-scale fluctuations in those models. In \cite{Takahashi:2021bti}, cosmological implications of $n_s \sim 1$ are considered, in particular for axion curvaton models. Here we focus on the implication of $n_s \sim 1$ for D-term inflation.

\section{D-term Inflation}
\label{sec2}

D-term inflation is a SUSY realisation of hybrid inflation. The superpotential is given by \cite{Binetruy:1996xj, Halyo:1996pp} \begin{equation} W_D=\lambda S \Phi_+ \Phi_- \end{equation} where $S$ is the inflaton superfield, $\lambda$ is the superpotential coupling, and $\Phi_\pm$ are chiral superfields charged under the $U(1)_{FI}$ gauge symmetry responsible for the Fayet-Iliopoulos term. The corresponding SUSY tree-level effective scalar potential is \begin{equation} V(S, \Phi_+, \Phi_-)=\lambda^2 \left[ |S|^2\left( |\Phi_+|^2+|\Phi_-|^2 \right)+|\Phi_+|^2|\Phi_-|^2 \right]+\frac{g^2}{2}\left( |\Phi_+|^2-|\Phi_-|^2 +\xi \right)^2, \end{equation} where $\xi$ is the Fayet-Iliopoulos term and $g$ is the $U(1)_{FI}$ gauge coupling.
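As a quick consistency check of this potential (anticipating the vacuum structure discussed just below), the following minimal numerical sketch (ours, not part of the original analysis; the parameter values are placeholders, in units $M_P=1$) verifies that the potential vanishes at $\langle S \rangle=0$, $\langle \Phi_+ \rangle=0$, $\langle \Phi_- \rangle=\sqrt{\xi}$, and equals $V_0=g^2\xi^2/2$ along the inflationary valley $\Phi_\pm=0$:
\begin{verbatim}
# Minimal sketch (ours): evaluate the tree-level D-term potential.
# Units M_P = 1; g, lam, xi are illustrative placeholder values.
import numpy as np

g, lam, xi = 0.1, 0.1, 1.1e-5

def V(S, phi_p, phi_m):
    F = lam**2 * (abs(S)**2 * (abs(phi_p)**2 + abs(phi_m)**2)
                  + abs(phi_p)**2 * abs(phi_m)**2)            # F-term part
    D = 0.5 * g**2 * (abs(phi_p)**2 - abs(phi_m)**2 + xi)**2  # D-term part
    return F + D

print(V(0.0, 0.0, np.sqrt(xi)))          # true vacuum: 0
print(V(1.0, 0.0, 0.0), 0.5*g**2*xi**2)  # valley: V_0 = g^2 xi^2 / 2
\end{verbatim}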
From the potential, the true vacuum is given by \begin{equation} \langle S \rangle=0,\;\;\;\; \langle \Phi_+ \rangle =0,\;\;\;\; \langle \Phi_- \rangle =\sqrt{\xi}. \end{equation} During inflation, when the inflaton field value is larger than the critical value, $|S| \gg |S|_c=g\xi^{1/2}/\lambda$, the field value of $S$ provides effective masses to $\Phi_+$ and $\Phi_-$, which drives their field values to zero. The potential minimum is along a flat valley and is given by \begin{equation} V=V_0=\frac{g^2\xi^2}{2}. \label{eq7} \end{equation} The 1-loop inflaton potential is \begin{equation} V(S)=V_0+\frac{g^4\xi^2}{32\pi^2}\left[2 \ln \left( \frac{\lambda^2 |S|^2}{M^2} \right) +(z+1)^2 \ln (1+z^{-1})+(z-1)^2 \ln (1-z^{-1}) \right], \end{equation} where $z=\lambda^2|S|^2/g^2\xi=|S|^2/|S|^2_c$ and $M$ is a renormalisation scale. If $z \gg 1$, the 1-loop potential can be approximated by \begin{equation} V(s)=V_0+\frac{g^4 \xi^2}{16 \pi^2}\ln \left( \frac{s^2}{2M^2} \right), \label{eq2} \end{equation} where $S \equiv s/\sqrt{2}$. In the following calculations, we assume $V_0$ dominates the potential, namely $V \simeq V_0$, as hybrid inflation requires. The slow-roll parameters are given by \begin{equation} \epsilon\equiv \frac{M_P^2}{2}\left( \frac{V^\prime}{V} \right)^2=\frac{M_P^2 g^4}{32 \pi^4 s^2}, \label{epsilon} \end{equation} and \begin{equation} \eta \equiv M_P^2\frac{V^{\prime\prime}}{V}=-\frac{M_P^2g^2}{4\pi^2s^2}, \label{eta} \end{equation} where $M_P=2.4 \times 10^{18}$ GeV is the reduced Planck mass. The number of e-folds $N$ is given by \begin{equation} N=\frac{2\pi^2}{g^2}\left(s^2-s_e^2 \right), \label{efolds} \end{equation} where $s_e$ marks the end of inflation. Inflation ends either when the inflaton drops below its critical value $s_c=g\sqrt{2}\sqrt{\xi}/\lambda$, or when the second slow-roll parameter reaches $|\eta|=1$ at $s_{s.r.}=(g/2\pi)M_P$, as can be seen from Eq.~(\ref{eta}). Namely, \begin{equation} s_e=\mbox{max}(s_{s.r.},s_c). \end{equation} Depending on the mechanism of reheating, $N$ is roughly $50 \lesssim N \lesssim 60$. We will take $N=60$ when a numerical calculation is needed in the following. It will be useful to calculate \begin{equation} \frac{s_c^2}{s_{s.r.}^2}=\frac{8\pi^2 \xi}{\lambda^2 M_P^2}. \label{ss} \end{equation} When $\lambda$ is small, we may have $s_c>s_{s.r.}$. In this case, from Eqs.~(\ref{eta}), (\ref{efolds}), and (\ref{ss}), \begin{equation} \eta=-\frac{1}{\frac{s_c^2}{s_{s.r.}^2}+2N}. \label{etasc} \end{equation} It is a novel feature of our calculation to express the results in terms of $s_c^2/s_{s.r.}^2$. On the other hand, if $\lambda$ is large, we may have $s_c<s_{s.r.}$. In this case, \begin{equation} \eta=-\frac{1}{1+2N} \simeq -\frac{1}{2N}. \label{etasr} \end{equation} Let us make a mnemonic rule here: when $s_c<s_{s.r.}$, we just set $s_c^2/s_{s.r.}^2=0$, so that Eq.~(\ref{etasc}) includes Eq.~(\ref{etasr}) and there is no need to duplicate equations in the following discussion\footnote{Practically there is no difference between $1+2N$ and $2N$. This is not only because $121 \simeq 120$, but also because we could have chosen, say, $N=59.5$ from the beginning instead of $N=60$, and this choice is equally good given the uncertainty of $N$.}. From Eqs.~(\ref{epsilon}) and (\ref{eta}), we can see that \begin{equation} |\epsilon|=\frac{g^2}{8\pi^2}|\eta|. \label{smalle} \end{equation} Therefore we neglect $\epsilon$ compared with $\eta$, as we always consider $g \leq 0.1$.
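For concreteness, here is a minimal numerical sketch (ours; units $M_P=1$, with placeholder values of $g$, $\lambda$, and $\xi$) that evaluates the quantities above and checks Eqs.~(\ref{ss}) and (\ref{smalle}):
\begin{verbatim}
# Minimal sketch (ours): slow-roll quantities in units M_P = 1.
# g, lam, xi are placeholder values; N = 60 as in the text.
import numpy as np

g, lam, xi, N = 0.1, 0.1, 1.1e-5, 60

s_sr = g/(2*np.pi)                        # where |eta| = 1
s_c  = g*np.sqrt(2*xi)/lam                # critical field value
s_e  = max(s_sr, s_c)                     # end of inflation
s_N  = np.sqrt(s_e**2 + g**2*N/(2*np.pi**2))  # Eq. (efolds) inverted

eps = g**4/(32*np.pi**4*s_N**2)           # Eq. (epsilon)
eta = -g**2/(4*np.pi**2*s_N**2)           # Eq. (eta)
print(eps/abs(eta), g**2/(8*np.pi**2))    # Eq. (smalle): equal
print(s_c**2/s_sr**2, 8*np.pi**2*xi/lam**2)  # Eq. (ss): equal
\end{verbatim}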
In particular, we have the spectral index given by \begin{equation} n_s = 1+ 2\eta-6\epsilon \simeq 1+2 \eta=1-\frac{2}{\frac{s_c^2}{s_{s.r.}^2}+2N}. \label{index} \end{equation} The spectrum is \begin{equation} P_R=\frac{1}{12\pi^2M_P^6}\frac{V^3}{V^{\prime 2}}=\frac{\xi^2}{6M_P^4}\left(\frac{s_c^2}{s_{s.r.}^2}+2N \right), \label{spectrum} \end{equation} with CMB normalisation given by $P_R^{1/2}=5 \times 10^{-5}$. After inflation, the $U(1)$ gauge symmetry is spontaneously broken and cosmic strings form. The string tension $\mu$ (mass per unit length) is \begin{equation} \mu=2\pi \xi. \end{equation} Experimental constraints on cosmic strings are usually expressed in terms of $G\mu$, which by using the reduced Planck mass $M_P$ can be written as \begin{equation} G\mu=\frac{2\pi}{8\pi}\frac{\xi}{M_P^2}=\frac{\xi}{4 M_P^2}. \label{string} \end{equation} Note that $\xi$ determines the scale of inflation via Eq.~(\ref{eq7}).

\section{Conventional D-term Inflation}

The coupling constants $g$ and $\lambda$ are free parameters in D-term inflation. However, aesthetically we may start from $g = \lambda = 0.1$. We refer to this as conventional D-term inflation. In this case we have $s_c<s_{s.r.}$ due to the smallness of $\xi$ and the largeness of $g$. This statement will be verified in the following calculation. From Eq.~(\ref{spectrum}) (and the mnemonic rule), we have \begin{equation} P_R=\frac{N\xi^2}{3 M_P^4}=\frac{20 \xi^2}{M_P^4}=(5 \times 10^{-5})^2. \end{equation} This gives $\xi=1.1 \times 10^{-5} M_P^2$. By using Eq.~(\ref{ss}), we can now calculate \begin{equation} \frac{s^2_c}{s^2_{s.r.}}=8.8 \times 10^{-2} \end{equation} to verify our previous assumption of $s_c<s_{s.r.}$. From Eq.~(\ref{index}) (and the mnemonic rule), we obtain \begin{equation} n_s=1-\frac{1}{N}=0.98, \label{conven} \end{equation} which satisfies Eq.~(\ref{n98}). The running of the spectral index can be obtained from $n_s$ as \begin{equation} \alpha=-\frac{dn_s}{dN}=-\frac{1}{N^2}=-0.00028. \end{equation} This is compatible with the Planck data $|\alpha|<0.01$ \cite{Planck:2018jri}. From Eq.~(\ref{string}), we have $G\mu=2.8 \times 10^{-6}$. However, as discussed in the Introduction, we need $G\mu \lesssim 10^{-11}$ to satisfy the current experimental bound. Therefore conventional D-term inflation is ruled out by experimental searches for cosmic strings. We refer to this as the cosmic string problem.

\section{D-term Inflation with small coupling constants}

It is shown in \cite{Endo:2003fr, Rocher:2004my} that the cosmic string problem with the constraint from the CMB can be evaded if we relax the requirement $g = \lambda = 0.1$. In \cite{Endo:2003fr}, the authors obtain $\lambda \lesssim O(10^{-4}-10^{-5})$. In \cite{Rocher:2004my}, the authors obtain $g \lesssim 2 \times 10^{-2}$ and $\lambda \lesssim 3 \times 10^{-5}$. In this section, we calculate the coupling constants $g$ and $\lambda$ by using the much more stringent constraint on cosmic strings from gravitational waves. If $G\mu=10^{-11}$, from Eq.~(\ref{string}) we have $\xi=4 \times 10^{-11}M_P^2$, or $\sqrt{\xi}=6.3 \times 10^{-6}M_P$. Contrary to conventional D-term inflation, we have $s_c^2/s_{s.r.}^2 \gg 2N$ due to the smallness of $\lambda$, which will be verified later. From Eqs.~(\ref{ss}) and (\ref{spectrum}), we have \begin{equation} P_R=\frac{4\pi^2 \xi^3}{3M_P^6 \lambda^2}=25 \times 10^{-10}. \label{e17} \end{equation} This implies \begin{equation} \lambda=1.8 \times 10^{-11}, \label{eq23} \end{equation} which is much smaller than those obtained in \cite{Endo:2003fr, Rocher:2004my} because we have updated the experimental constraint. In this case, from Eq.~(\ref{ss}), we have \begin{equation} \frac{s_c^2}{s_{s.r.}^2}=9.4 \times 10^{12} \gg 2N=120, \label{e19} \end{equation} which is in accordance with our assumption. For the allowed values of the coupling constant $g$, since we are considering a small field inflation model, it is required that \begin{equation} s_c=\frac{g\sqrt{2}\sqrt{\xi}}{\lambda} < 0.1M_P. \end{equation} This implies \begin{equation} g<2.0 \times 10^{-7}. \label{eq26} \end{equation} In addition, by noticing that Eq.~(\ref{e17}) is independent of $N$, we can conclude that the spectrum is scale-invariant, namely the spectral index is $n_s=1$, which satisfies Eq.~(\ref{n1})!\footnote{With $|n_s-1|$ smaller than $10^{-12}$, which can be seen from Eq.~(\ref{index}) and Eq.~(\ref{e19}).} The requirement of the cosmic string constraint drives the spectrum to be scale-invariant. Interestingly, this is in accordance with the proposals to alleviate the Hubble tension mentioned in the Introduction. One may not be satisfied with the small coupling constants given in Eqs.~(\ref{eq23}) and (\ref{eq26}). In particular, a very small gauge coupling $g$ seems difficult to connect to the known gauge couplings of particle physics. In the following section, we propose a model to make the coupling constants larger.
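The numbers obtained in this section and the previous one follow from $N$, the CMB normalisation, and $G\mu$ alone; as a cross-check, the following minimal sketch (ours, not part of the original analysis; units $M_P=1$) reproduces them:
\begin{verbatim}
# Minimal sketch (ours): reproduce the numbers of the last two
# sections in units M_P = 1, with N = 60 and P_R = (5e-5)^2.
import numpy as np

N, PR = 60, (5e-5)**2

# Conventional case (g = lam = 0.1): P_R = N xi^2 / 3, G mu = xi / 4.
xi_c = np.sqrt(3*PR/N)
print(xi_c, xi_c/4)                  # xi = 1.1e-5, G mu = 2.8e-6

# Small-coupling case: G mu = 1e-11 fixes xi = 4e-11, and
# P_R = 4 pi^2 xi^3 / (3 lam^2) then fixes lam.
xi = 4e-11
lam = np.sqrt(4*np.pi**2*xi**3/(3*PR))
print(lam)                           # lam = 1.8e-11
print(8*np.pi**2*xi/lam**2)          # s_c^2/s_sr^2 = 9.4e12 >> 2N
print(0.1*lam/np.sqrt(2*xi))         # small field bound: g < 2.0e-7
\end{verbatim}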
\section{D-term Inflation on the brane}

If our four-dimensional world is a 3-brane embedded in a higher-dimensional bulk, the Friedmann equation becomes \cite{Cline:1999ts, Csaki:1999jh, Binetruy:1999ut, Binetruy:1999hy, Freese:2002sq, Freese:2002gv, Maartens:1999hf} \begin{equation} H^2=\frac{\rho}{3M_P^2} \left[ 1+ \frac{\rho}{2 \Lambda} \right], \end{equation} where $\Lambda$ provides a relation between the four-dimensional Planck scale $M_4=\sqrt{8 \pi}M_P$ and the five-dimensional Planck scale $M_5$ via \begin{equation} M_4=\sqrt{\frac{3}{4\pi}}\left( \frac{M_5^2}{\sqrt{\Lambda}} \right)M_5. \end{equation} The nucleosynthesis limit implies that $\Lambda \gtrsim (1\mbox{ MeV})^4 \sim (10^{-21}M_P)^4$. A more stringent constraint, $M_5 \gtrsim 10^5$ TeV, can be obtained by requiring that the theory reduce to Newtonian gravity on scales larger than $1$ mm; this corresponds to \begin{equation} \Lambda \gtrsim 5.0 \times 10^{-53} M_P^4. \label{ls} \end{equation} For D-term inflation on the brane, the slow-roll parameters\footnote{We use the same notations for the slow-roll parameters as in the previous sections, but no confusion should arise from the context.} are given by \cite{Maartens:1999hf, Lee:2009mj} \begin{equation} \epsilon=\frac{M_P^2}{2}\left( \frac{V^\prime}{V} \right)^2\frac{1}{\left( 1+\frac{V}{2\Lambda} \right)^2}\left( 1+\frac{V}{\Lambda} \right), \end{equation} and \begin{equation} \eta=M_P^2 \frac{V^{\prime\prime}}{V} \left( \frac{1}{1+\frac{V}{2\Lambda}} \right). \end{equation} In this case, instead of Eqs.~(\ref{epsilon}) and (\ref{eta}), we have \begin{equation} \epsilon=\frac{M_P^2 g^4}{32 \pi^4 s^2}\frac{1}{\left( 1+\frac{g^2 \xi^2}{4\Lambda} \right)^2}\left( 1+\frac{g^2 \xi^2}{2\Lambda} \right), \label{e36} \end{equation} and \begin{equation} \eta=-\frac{M_P^2 g^2}{4\pi^2 s^2}\frac{1}{\left( 1+\frac{g^2 \xi^2}{4\Lambda} \right)}. \label{eq25} \end{equation} We will consider the case $g^2\xi^2/2 \gg \Lambda$; therefore, from Eqs.~(\ref{e36}) and (\ref{eq25}), we have \begin{equation} |\epsilon| \simeq \frac{g^2}{4\pi^2}|\eta|, \end{equation} which can be compared with Eq.~(\ref{smalle}); therefore $\epsilon$ will again be neglected in the following discussion. The inflaton field value $s_{s.r.}$ when slow-roll fails can be obtained by solving $|\eta|=1$ as \begin{equation} s^2_{s.r.}=\frac{M_P^2 g^2}{4\pi^2 \left( 1+\frac{g^2 \xi^2}{4\Lambda} \right)}. \label{bsr} \end{equation} The number of e-folds is \begin{equation} N=\frac{2\pi^2}{g^2}\left( 1+\frac{g^2\xi^2}{4 \Lambda} \right) \left(s^2-s_e^2 \right), \label{bn} \end{equation} where the end of inflation is determined by the inflaton field value \begin{equation} s_e=\mbox{max}(s_{s.r.},s_c). \end{equation} By using $s_c=g\sqrt{2}\sqrt{\xi}/\lambda$, we obtain \begin{equation} \frac{s_c^2}{s_{s.r.}^2}=\frac{8 \pi^2 \xi}{\lambda^2 M_P^2} \left( 1+\frac{g^2 \xi^2}{4 \Lambda} \right). \label{bss} \end{equation} This can be compared with Eq.~(\ref{ss}). When $s_c>s_{s.r.}$, from Eqs.~(\ref{eq25}), (\ref{bn}), and (\ref{bss}), $\eta$ can be written as \begin{equation} \eta=-\frac{1}{\frac{s_c^2}{s_{s.r.}^2}+2N}. \label{eq27} \end{equation} The spectral index is given by \begin{equation} n_s = 1+ 2\eta-6\epsilon \simeq 1+2 \eta=1-\frac{2}{\frac{s_c^2}{s_{s.r.}^2}+2N}. \label{bindex} \end{equation} Note that Eqs.~(\ref{eq27}) and (\ref{bindex}) appear to have the same form as Eqs.~(\ref{etasc}) and (\ref{index}), but the corresponding $s_{s.r.}$ are different. When $s_c<s_{s.r.}$, our previous rule to neglect the factor $s_c^2/s_{s.r.}^2$ still applies. The spectrum is \begin{equation} P_R=\frac{1}{12\pi^2M_P^6}\frac{V^3}{V^{\prime 2}}\left( 1+ \frac{V}{2\Lambda} \right)^3=\frac{\xi^2}{6M_P^4}\left( 1+\frac{g^2\xi^2}{4 \Lambda} \right)^2 \left(\frac{s_c^2}{s_{s.r.}^2}+2N \right). \label{bspectrum} \end{equation} In the following, we define \begin{equation} \left( 1+\frac{g^2\xi^2}{4 \Lambda} \right) \equiv L, \end{equation} since this factor appears repeatedly.

Our purpose here is to avoid small couplings; therefore we start by assuming $g=\lambda=0.1$. In this case, $s_c<s_{s.r.}$, as will be verified later. From Eq.~(\ref{bspectrum}) (and setting $s_c^2/s_{s.r.}^2=0$), the spectrum is \begin{equation} P_R=\frac{\xi^2L^2N}{3M_P^4}=25 \times 10^{-10}. \end{equation} By using $N=60$ and $\xi=4 \times 10^{-11}M_P^2$, we obtain $L=2.8 \times 10^5$. This can be achieved if $\Lambda=1.4 \times 10^{-29}M_P^4$, which satisfies Eq.~(\ref{ls}). We can now calculate (by using Eq.~(\ref{bss})) \begin{equation} \frac{s^2_c}{s^2_{s.r.}}=\frac{8\pi^2 \xi L}{\lambda^2 M_P^2}=8.8 \times 10^{-2} \end{equation} to verify our previous assumption of $s_c<s_{s.r.}$. From Eq.~(\ref{eq25}), one may naively guess that $\eta$ would be very small, since we have $L \sim O(10^5)$ in the denominator. However, from Eq.~(\ref{eq27}) (and our mnemonic rule of setting $s_c^2/s_{s.r.}^2=0$ when $s_c<s_{s.r.}$), we actually have $\eta=-1/2N$, and this implies $n_s=1+2\eta=0.98$. This spectral index is the same as Eq.~(\ref{conven}) of conventional D-term inflation, which satisfies Eq.~(\ref{n98}).
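As a cross-check of this case, the following minimal sketch (ours; units $M_P=1$) reproduces $L$, $\Lambda$, and $s_c^2/s_{s.r.}^2$:
\begin{verbatim}
# Minimal sketch (ours): the brane case with g = lam = 0.1,
# in units M_P = 1, with xi fixed by G mu = 1e-11.
import numpy as np

N, PR, g, lam, xi = 60, (5e-5)**2, 0.1, 0.1, 4e-11

L   = np.sqrt(3*PR/(xi**2*N))     # from P_R = xi^2 L^2 N / 3
Lam = g**2*xi**2/(4*(L - 1))      # from L = 1 + g^2 xi^2/(4 Lambda)
print(L, Lam)                     # L = 2.8e5, Lambda = 1.4e-29
print(8*np.pi**2*xi*L/lam**2)     # Eq. (bss): 8.8e-2, so s_c < s_sr
print(1 - 1/N)                    # n_s = 1 - 2/(2N) = 0.98
\end{verbatim}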
Here we have assumed $g=\lambda=0.1$, but if we allow $\lambda$ to be smaller, it is possible to obtain the scale-invariant spectrum $n_s =1$. In order to show this, we first assume $g=0.1$, $\lambda=10^{-3}$, and $s_c^2/s_{s.r.}^2 \gg 2N$. The spectrum is then \begin{equation} P_R=\frac{4\pi^2 \xi^3 L^3}{3M_P^6 \lambda^2}=25 \times 10^{-10}. \label{eq51} \end{equation} By using $\xi=4 \times 10^{-11}M_P^2$, we obtain $L=1.4 \times 10^5$ (which corresponds to $\Lambda=2.9 \times 10^{-29}M_P^4$). From Eq.~(\ref{bss}), \begin{equation} \frac{s^2_c}{s^2_{s.r.}}=\frac{8\pi^2 \xi L}{\lambda^2 M_P^2}=442, \end{equation} which is larger (although not too much larger) than $2N=120$. Secondly, we can consider $g=0.1$ and $\lambda=10^{-4}$. Through similar calculations, we have $L=3.1 \times 10^4$ (which corresponds to $\Lambda=1.3 \times 10^{-28}M_P^4$) and $s_c^2/s_{s.r.}^2=9790$. Note that for $\lambda \sim O(10^{-3}-10^{-4})$, the requirement of small field inflation, $s_c=g\sqrt{2}\sqrt{\xi}/\lambda < 0.1M_P$, is satisfied even with $g$ as large as $g=0.1$. From Eq.~(\ref{bindex}), we can see that $n_s$ is driven to $n_s=1$ (which satisfies Eq.~(\ref{n1})) by taking a smaller $\lambda$. Comparing Eq.~(\ref{eq51}) with Eq.~(\ref{e17}), we can understand why $\lambda$ need not be as small as in the previous section: the condition of a small $\lambda^2$ is now changed to a condition of a small $\lambda^2/L^3$. Thanks to the large $L$, $\lambda$ need not be so small.

\section{Conclusion and Discussion}
\label{con}

The simplest D-term inflation can be consistent with the cosmic string bound provided by observations of gravitational waves only with very small coupling constants. This drives the spectral index to $n_s=1$, which may be an interesting result in light of the Hubble tension. We show that in the case of D-term inflation on the brane, the coupling constants can be $g=\lambda=0.1$. In this case we have $n_s=0.98$. If we lower one coupling constant to $\lambda<10^{-3}$, the spectral index $n_s=1$ can be achieved. To some extent, the requirement of small coupling constants for the simplest model to work is intuitively expected. From $s_{s.r.}=(g/2\pi) M_P$, we can see that although D-term inflation is a small field inflation model, the field value is not far below $M_P$ for a large $g$. This means the energy scale of the potential is about the scale of grand unified theories (GUTs). Therefore the tension of cosmic strings would also be large. On the other hand, D-term inflation on the brane can effectively reduce the inflation scale via the $L$ factor.

\acknowledgments This work is supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant No. MOST 110-2112-M-167-001.
\section{Introduction} \label{sec:intro} The point of departure for this paper is the following result obtained in \cite{ssv1, ssv2}. Let $N_n^0$ denote the semi-algebraic set of all unipotent upper-triangular $n \times n$ matrices $x$ with real entries such that, for every $k = 1, \ldots, n-1$, the minor of $x$ with rows $1, \ldots, k$ and columns $n-k+1, \ldots, n$ is non-zero. Then the number $\#_n$ of connected components of $N_n^0$ is given as follows: $\#_2 = 2, \#_3 = 6, \#_4 = 20, \#_5 = 52$, and $\#_n = 3 \cdot 2^{n-1}$ for $n \geq 6$. An interesting feature of this answer is that every case which one can check by hand turns out to be exceptional. But the method of the proof seems to be even more interesting than the answer itself: it is shown that the connected components of $N_n^0$ are in a bijection with the orbits of a certain group $\Gamma_n$ that acts in a vector space of dimension $n (n-1)/2$ over the two-element field $\mathbb{F}_2$, and is generated by symplectic transvections. Such groups appeared earlier in singularity theory, see e.g., \cite{Janssen} and references therein. The construction of $\Gamma_n$ given in \cite{ssv1, ssv2} uses the combinatorial machinery (developed in \cite{BFZ}) of pseudo-line arrangements associated with reduced expressions in the symmetric group. In this paper we present the following far-reaching generalization of this construction. Let $W$ be an arbitrary Coxeter group of simply-laced type (possibly infinite but of finite rank). Let $u$ and $v$ be any two elements in $W$, and $\mathbf{i}$ be a reduced word (of length $m = \ell (u) + \ell (v)$) for the pair $(u,v)$ in the Coxeter group $W \times W$ (see Section~\ref{sec:definitions} for more details). We associate to $\mathbf{i}$ a subgroup $\Gamma_\mathbf{i}$ in $GL_{m} (\mathbb{Z})$ generated by symplectic transvections. We prove among other things that the subgroups corresponding to different reduced words for the same pair $(u,v)$ are conjugate to each other inside $GL_{m} (\mathbb{Z})$. To recover the group $\Gamma_n$ from this general construction, one needs several specializations and reductions: take $W$ to be the symmetric group $S_n$; take $(u,v) = (w_0, e)$, where $w_0$ is the longest permutation in $S_n$, and $e$ is the identity permutation; take $\mathbf{i}$ to be the lexicographically minimal reduced word $1, 2, 1, \ldots, n-1, n-2, \ldots, 1$ for $w_0$; and finally, take the group $\Gamma_\mathbf{i} (\mathbb{F}_2)$ obtained from $\Gamma_\mathbf{i}$ by reducing the linear transformations from $\mathbb{Z}$ to $\mathbb{F}_2$. We also generalize the enumeration result of \cite{ssv1, ssv2} by showing that, under certain assumptions on $u$ and $v$, the number of $\Gamma_\mathbf{i} (\mathbb{F}_2)$-orbits in $\mathbb{F}_2^{m}$ is equal to $3 \cdot 2^s$, where $s$ is the number of simple reflections in $W$ that appear in a reduced decomposition for $u$ or $v$. We deduce this from a description of orbits in an even more general situation which sharpens the results in \cite{Janssen,ssv2} (see Section~\ref{sec:proofs-enumeration} below). Although the results and methods of this paper are purely algebraic and combinatorial, our motivation for the study of the groups $\Gamma_\mathbf{i}$ and their orbits comes from geometry. In the case when $W$ is the (finite) Weyl group of a simply-laced root system, we expect that the $\Gamma_\mathbf{i} (\mathbb{F}_2)$-orbits in $\mathbb{F}_2^{m}$ enumerate connected components of the real part of the reduced double Bruhat cell corresponding to $(u,v)$. 
Double Bruhat cells were introduced and studied in \cite{FZ} as a natural framework for the study of total positivity in semisimple groups; as explained to us by N.~Reshetikhin, they also appear naturally in the study of symplectic leaves in semisimple groups (see \cite{HKKR}). Let us briefly recall their definition. Let $G$ be a split simply connected semisimple algebraic group defined over $\mathbb{R}$ with the Weyl group $W$; thus $W = {\rm Norm}_G (H)/H$, where $H$ is an $\mathbb{R}$-split maximal torus in $G$. Let $B$ and $B_-$ be two (opposite) Borel subgroups in $G$ such that $B \cap B_- = H$. The \emph{double Bruhat cells}~$G^{u,v}$ are defined as the intersections of ordinary Bruhat cells taken with respect to $B$ and $B_-$: $$G^{u,v} = B u B \cap B_- v B_- \ .$$ In view of the well-known Bruhat decomposition, the group $G$ is the disjoint union of all $G^{u,v}$ for $(u,v) \in W \times W$. The term ``cell'' might be misleading because the topology of $G^{u,v}$ can be quite complicated. The torus $H$ acts freely on $G^{u,v}$ by left (as well as right) translations, and there is a natural section $L^{u,v}$ for this action which we call the \emph{reduced double Bruhat cell}. These sections are introduced and studied in a forthcoming paper \cite{BZ99} (for the definition see Section~\ref{sec:components-claims} below). We seem to be very close to a proof of the fact that the connected components of the real part of $L^{u,v}$ are in a natural bijection with the $\Gamma_\mathbf{i} (\mathbb{F}_2)$-orbits in $\mathbb{F}_2^{m}$; but some details are still missing. This question will be treated in a separate publication.

The special case when $(u,v) = (e,w)$ for some element $w \in W$ is of particular geometric interest. In this case, $L^{u,v}$ is biregularly isomorphic to the so-called \emph{opposite Schubert cell} \begin{equation*} C_w^0 := C_w \cap w_0 C_{w_0} \ , \end{equation*} where $w_0$ is the longest element of $W$, and $C_w = (B w B)/B \subset G/B$ is the \emph{Schubert cell} corresponding to $w$. These opposite cells have appeared in the literature in various contexts, and were studied (in various degrees of generality) in \cite{BFZ,BZ,rie,rie2,ssv1,ssv2}. In particular, the variety $N_n^0$ which was the main object of study in \cite{ssv1, ssv2} is naturally identified with the real part of the opposite cell $C_{w_0}^0$ for $G = SL_n$. By the informal ``complexification principle'' of Arnold, if the group $\Gamma_\mathbf{i} (\mathbb{F}_2)$ enumerates connected components of the real part of $L^{u,v}$, the group $\Gamma_\mathbf{i}$ itself (which acts in $\mathbb{Z}^{m}$ rather than in $\mathbb{F}_2^{m}$) should provide information about the topology of the complex variety $L^{u,v}$. So far we have not found a totally satisfactory ``complexification'' along these lines.

The paper is organized as follows. Main definitions, notations and conventions are collected in Section~\ref{sec:definitions}. Our main results are formulated in Section~\ref{sec:results} and proved in the next three sections. We conclude by discussing in more detail the geometric connection outlined above.

\section{Definitions}
\label{sec:definitions}

\subsection{Simply-laced Coxeter groups}
\label{sec:Coxeter groups general}

Let $\Pi$ be an arbitrary finite graph without loops and multiple edges. Throughout the paper, we use the following notation: write $i \in \Pi$ if $i$ is a vertex of $\Pi$, and $\{i,j\} \in \Pi$ if the vertices $i$ and $j$ are adjacent in $\Pi$.
The (simply-laced) Coxeter group $W = W(\Pi)$ associated with $\Pi$ is generated by the elements $s_i$ for $i \in \Pi$ subject to the relations \begin{equation} \label{eq:Coxeter} s_i^2 = e; \quad s_i s_j = s_j s_i \, \, (\{i,j\} \notin \Pi); \quad s_i s_j s_i = s_j s_i s_j \, \, (\{i,j\} \in \Pi) \ . \end{equation} A word $\mathbf{i} = (i_1, \ldots, i_m)$ in the alphabet $\Pi$ is a \emph{reduced word} for $w \in W$ if $w = s_{i_1} \cdots s_{i_m}$, and $m$ is the smallest length of such a factorization. The length $m$ of any reduced word for $w$ is called the \emph{length} of $w$ and denoted by $m = \ell (w)$. Let $R(w)$ denote the set of all reduced words for $w$.

The ``double" group $W \times W$ is also a Coxeter group; it corresponds to the graph $\tilde \Pi$ which is the union of two disconnected copies of $\Pi$. We identify the vertex set of $\tilde \Pi$ with $\{+1, -1\} \times \Pi$, and write a vertex $(\pm 1, i) \in \tilde \Pi$ simply as $\pm i$. For each $\pm i \in \tilde \Pi$, we set $\varepsilon (\pm i) = \pm 1$ and $|\pm i| = i \in \Pi$. Thus two vertices $i$ and $j$ of $\tilde \Pi$ are joined by an edge if and only if $\varepsilon (i) = \varepsilon (j)$ and $\{|i|, |j|\} \in \Pi$. In this notation, a reduced word for a pair $(u,v) \in W \times W$ is an arbitrary shuffle of a reduced word for $u$ written in the alphabet $-\Pi$ and a reduced word for $v$ written in the alphabet $\Pi$. In view of the defining relations (\ref{eq:Coxeter}), the set of reduced words $R(u,v)$ is equipped with the following operations:
\begin{itemize}
\item \emph{2-move}. Interchange two consecutive entries $i_{k-1},i_k$ in a reduced word $\mathbf{i} = (i_1, \ldots, i_m)$ provided $\{i_{k-1},i_k\} \notin \tilde \Pi$.
\item \emph{3-move}. Replace three consecutive entries $i_{k-2},i_{k-1},i_k$ in $\mathbf{i}$ by $i_{k-1},i_{k-2},i_{k-1}$ if $i_k = i_{k-2}$ and $\{i_{k-1},i_k\} \in \tilde \Pi$.
\end{itemize}
In each case, we will refer to the index $k \in [1,m]$ as the \emph{position} of the corresponding move. Using these operations, we make $R(u,v)$ the set of vertices of a graph whose edges correspond to $2$- and $3$-moves. It is a well-known result due to Tits that this graph is \emph{connected}, i.e., any two reduced words in $R(u,v)$ can be obtained from each other by a sequence of $2$- and $3$-moves. We will say that a $2$-move interchanging the entries $i_{k-1}$ and $i_k$ is \emph{trivial} if $i_k \neq -i_{k-1}$; the remaining $2$-moves and all $3$-moves will be referred to as \emph{non-trivial}.

\subsection{Groups generated by symplectic transvections}
\label{sec:symplectic transvections general}

Let $\Sigma$ be a finite directed graph. As before, we shall write $k \in \Sigma$ if $k$ is a vertex of $\Sigma$, and $\{k,l\} \in \Sigma$ if the vertices $k$ and $l$ are adjacent in the underlying graph obtained from $\Sigma$ by forgetting directions of edges. We also write $(k \to l) \in \Sigma$ if $k \to l$ is a directed edge of $\Sigma$. Let $V = \mathbb{Z}^\Sigma$ be the lattice with a fixed $\mathbb{Z}$-basis $(e_k)_{k \in \Sigma}$ labeled by vertices of $\Sigma$. Let $\xi_k \in V^*$ denote the corresponding coordinate functions, i.e., every vector $v \in V$ can be written as $$v = \sum_{k \in \Sigma} \xi_k (v) e_k \ .$$ We define a skew-symmetric bilinear form $\Omega$ on $V$ by \begin{equation} \label{eq:Omega} \Omega = \Omega_\Sigma = \sum_{(k \to l) \in \Sigma} \xi_k \wedge \xi_l \ . \end{equation} For each $k \in \Sigma$, we define the symplectic transvection $\tau_k = \tau_{k,\Sigma}: V \to V$ by \begin{equation} \label{eq:transvections} \tau_k (v) = v - \Omega(v, e_k) e_k \ . \end{equation} (The word ``symplectic'' might be misleading since $\Omega$ is allowed to be degenerate; still we prefer to keep this terminology from \cite{Janssen}.) In the coordinate form, we have $\xi_l (\tau_k (v)) = \xi_l (v)$ for $l \neq k$, and \begin{equation} \label{eq:transvections2} \xi_k (\tau_k (v)) = \xi_k (v) - \sum_{(a \to k) \in \Sigma} \xi_a (v)+ \sum_{(k \to b) \in \Sigma} \xi_b (v) \ . \end{equation} For any subset $B$ of vertices of $\Sigma$, we denote by $\Gamma_{\Sigma, B}$ the group of linear transformations of $V = \mathbb{Z}^\Sigma$ generated by the transvections $\tau_k$ for $k \in B$. Note that all transformations from $\Gamma_{\Sigma, B}$ are represented by integer matrices in the standard basis $e_k$. Let $\Gamma_{\Sigma, B}(\mathbb{F}_2)$ denote the group of linear transformations of the $\mathbb{F}_2$-vector space $V(\mathbb{F}_2) = \mathbb{F}_2^\Sigma$ obtained from $\Gamma_{\Sigma, B}$ by reduction modulo $2$ (recall that $\mathbb{F}_2$ is the $2$-element field).
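For readers who want to experiment with these groups, here is a minimal computational sketch (ours, not part of the paper) of the transvection $\tau_k$ in the coordinate form (\ref{eq:transvections2}), with a directed graph encoded as a set of pairs $(a,b)$ representing edges $a \to b$:
\begin{verbatim}
# Minimal sketch (ours): the symplectic transvection tau_k of
# Eq. (transvections2) on Z^Sigma, with vertices 1..m and a set
# of directed edges (a, b) meaning a -> b.
def tau(k, edges, v, mod=None):
    # xi_k(tau_k v) = xi_k(v) - sum_{a->k} xi_a(v) + sum_{k->b} xi_b(v)
    w = list(v)
    w[k - 1] += (sum(v[b - 1] for (a, b) in edges if a == k)
                 - sum(v[a - 1] for (a, b) in edges if b == k))
    if mod:
        w = [x % mod for x in w]   # e.g. mod=2 for the F_2 reduction
    return w
\end{verbatim}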
\section{Main results}
\label{sec:results}

\subsection{The graph $\Sigma(\mathbf{i})$}
\label{sec:main construction}

We now present our main combinatorial construction that brings together simply-laced Coxeter groups and groups generated by symplectic transvections. Let $W = W(\Pi)$ be the simply-laced Coxeter group associated to a graph $\Pi$ (see Section~\ref{sec:Coxeter groups general}). Fix a pair $(u,v) \in W \times W$, and let $m = \ell (u) + \ell(v)$. Let $\mathbf{i} = (i_1, \ldots, i_m) \in R(u,v)$ be any reduced word for $(u,v)$. We shall construct a directed graph $\Sigma (\mathbf{i})$ and a subset $B(\mathbf{i})$ of its vertices, thus giving rise to a group $\Gamma_{\Sigma (\mathbf{i}), B(\mathbf{i})}$ generated by symplectic transvections.

First of all, the set of vertices of $\Sigma (\mathbf{i})$ is just the set $[1,m] = \{1, 2, \ldots, m\}$. For $l \in [1,m]$, we denote by $l^- = l^-_\mathbf{i}$ the maximal index $k$ such that $1 \leq k < l$ and $|i_k| = |i_l|$; if $|i_k| \neq |i_l|$ for $1 \leq k < l$ then we set $l^- = 0$. We define $B(\mathbf{i}) \subset [1,m]$ as the subset of indices $l \in [2,m]$ such that $l^- > 0$. The indices $l \in B(\mathbf{i})$ will be called \emph{$\mathbf{i}$-bounded}. It remains to define the edges of $\Sigma(\mathbf{i})$.

\begin{definition} \label{def:edges} {\rm A pair $\{k,l\} \subset [1,m]$ with $k<l$ is an edge of $\Sigma(\mathbf{i})$ if it satisfies one of the following three conditions: (i) $k=l^-$; (ii) $k^- < l^- < k$, $\{|i_k|, |i_l|\} \in \Pi$, and $\varepsilon(i_{l^-})=\varepsilon(i_{k})$; (iii) $l^-< k^- < k$, $\{|i_k|, |i_l|\} \in \Pi$, and $\varepsilon(i_{k^-})=-\varepsilon(i_{k})$.

\noindent The edges of type (i) are called \emph{horizontal}, and those of types (ii) and (iii) \emph{inclined}. A horizontal (resp. inclined) edge $\{k,l\}$ with $k < l$ is directed from $k$ to $l$ if and only if $\varepsilon (i_{k}) = +1$ (resp. $\varepsilon (i_{k}) = -1$).} \end{definition}

We will give a few examples at the end of Section~\ref{sec:graphs}.
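This construction is completely explicit and easy to implement. The following minimal sketch (ours, not part of the paper) builds the directed edges of $\Sigma(\mathbf{i})$ and the set $B(\mathbf{i})$ from a reduced word encoded as a list of nonzero integers (an entry $\pm i$ of $\tilde \Pi$ is written as the integer $\pm i$), with $\Pi$ given as a set of unordered pairs:
\begin{verbatim}
# Minimal sketch (ours): Sigma(i) and B(i) of Definition (edges).
# word: reduced word as nonzero ints; Pi: set of frozensets {i, j}.
def sigma_graph(word, Pi):
    m = len(word)
    eps = lambda x: 1 if x > 0 else -1
    def minus(l):                        # l^- (1-based, 0 if none)
        for k in range(l - 1, 0, -1):
            if abs(word[k - 1]) == abs(word[l - 1]):
                return k
        return 0
    B = {l for l in range(2, m + 1) if minus(l) > 0}
    edges = set()                        # directed edges (k, l): k -> l
    for l in range(2, m + 1):
        for k in range(1, l):
            km, lm = minus(k), minus(l)
            adj = frozenset({abs(word[k - 1]), abs(word[l - 1])}) in Pi
            if k == lm:                  # (i): horizontal edge
                edges.add((k, l) if eps(word[k - 1]) > 0 else (l, k))
            elif adj and (
                    (km < lm < k and eps(word[lm - 1]) == eps(word[k - 1]))
                    or (lm < km < k and eps(word[km - 1]) == -eps(word[k - 1]))):
                edges.add((k, l) if eps(word[k - 1]) < 0 else (l, k))
    return edges, B
\end{verbatim}
For instance, the graph of Fig.~\ref{fig:graph} below corresponds to \verb|word = [2,1,-4,-2,-1,3,-2,2,-3,-2,4,1,-4,-1,3,2,1]| and \verb|Pi = {frozenset({1,2}), frozenset({2,3}), frozenset({3,4})}|.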
\subsection{Properties of graphs $\Sigma(\mathbf{i})$}
\label{sec:graphs}

We start with the following property of $\Sigma(\mathbf{i})$ and $B(\mathbf{i})$.

\begin{proposition} \label{pr:boundary vertex} For any non-empty subset $S \subset B(\mathbf{i})$, there exists a vertex $a \in [1,m] \setminus S$ such that $\{a,b\} \in \Sigma (\mathbf{i})$ for a unique $b \in S$. \end{proposition}

For any edge $\{i,j\} \in \Pi$, let $\Sigma_{i,j}(\mathbf{i})$ denote the induced directed subgraph of $\Sigma({\mathbf{i}})$ with vertices $k \in [1,m]$ such that $|i_k|=i$ or $|i_k|=j$. We shall use the following planar realization of $\Sigma_{i,j}(\mathbf{i})$ which we call the $(i,j)$-\emph{strip} of $\Sigma(\mathbf{i})$. Consider the infinite horizontal strip $\mathbb{R}\times [-1,1] \subset\mathbb{R}^2$, and identify each vertex $k \in \Sigma_{i,j}(\mathbf{i})$ with the point $A = A_k = (k,y)$, where $y = -1$ for $|i_k| = i$, and $y = 1$ for $|i_k| = j$. We represent each (directed) edge $(k \to l)$ by a straight line segment from $A_k$ to $A_l$. (This justifies the terms ``horizontal'' and ``inclined'' edges in Definition~\ref{def:edges}.) Note that every edge of $\Sigma({\mathbf{i}})$ belongs to some $(i,j)$-strip, so we can think of $\Sigma({\mathbf{i}})$ as the union of all its strips glued together along horizontal lines.

\begin{theorem} \label{th:strip} (a) The $(i,j)$-strip of $\Sigma(\mathbf{i})$ is a planar graph; equivalently, no two inclined edges cross each other inside the strip.

\noindent (b) The boundary of any triangle or trapezoid formed by two consecutive inclined edges and horizontal segments between them is a directed cycle in $\Sigma_{i,j}(\mathbf{i})$. \end{theorem}

Our next goal is to compare the directed graphs $\Sigma(\mathbf{i})$ and $\Sigma({\mathbf{i}'})$ when two reduced words $\mathbf{i}$ and $\mathbf{i}'$ are related by a $2$- or $3$-move. To do this, we associate to $\mathbf{i}$ and $\mathbf{i}'$ a permutation $\sigma_{\mathbf{i}',\mathbf{i}}$ of $[1,m]$ defined as follows. If $\mathbf{i}$ and $\mathbf{i}'$ are related by a trivial $2$-move in position $k$ then $\sigma_{\mathbf{i}',\mathbf{i}} = (k-1,k)$, the transposition of $k-1$ and $k$; if $\mathbf{i}$ and $\mathbf{i}'$ are related by a non-trivial $2$-move then $\sigma_{\mathbf{i}',\mathbf{i}} = e$, the identity permutation of $[1,m]$; finally, if $\mathbf{i}$ and $\mathbf{i}'$ are related by a $3$-move in position $k$ then $\sigma_{\mathbf{i}',\mathbf{i}} = (k-2,k-1)$. The following properties of $\sigma_{\mathbf{i}',\mathbf{i}}$ are immediate from the definitions.

\begin{proposition} \label{pr:sigma} The permutation $\sigma_{\mathbf{i}',\mathbf{i}}$ sends $\mathbf{i}$-bounded indices to $\mathbf{i}'$-bounded ones. If the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is non-trivial then its position $k$ is $\mathbf{i}$-bounded, and $\sigma_{\mathbf{i}',\mathbf{i}}(k) = k$. \end{proposition}

The relationship between the graphs $\Sigma(\mathbf{i})$ and $\Sigma(\mathbf{i}')$ is now given as follows.

\begin{theorem} \label{th:graph change} Suppose two reduced words $\mathbf{i}$ and $\mathbf{i}'$ are related by a $2$- or $3$-move in position $k$, and $\sigma = \sigma_{\mathbf{i}',\mathbf{i}}$ is the corresponding permutation of $[1,m]$. Let $a$ and $b$ be two distinct elements of $[1,m]$ such that at least one of them is $\mathbf{i}$-bounded. Then \begin{equation} \label{eq:preserving edges} (a \to b) \in \Sigma(\mathbf{i}) \Leftrightarrow (\sigma (a) \to \sigma (b)) \in \Sigma({\mathbf{i}'}) \ , \end{equation} with the following two exceptions.
\begin{enumerate} \item If the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is non-trivial then $(a \to k) \in \Sigma(\mathbf{i}) \Leftrightarrow (k \to \sigma (a)) \in \Sigma({\mathbf{i}'})$. \item If the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is non-trivial, and $a \to k \to b$ in $\Sigma(\mathbf{i})$ then $\{a,b\} \in \Sigma(\mathbf{i}) \Leftrightarrow \{\sigma (a), \sigma (b)\} \notin \Sigma({\mathbf{i}'})$; furthermore, the edge $\{a,b\}\in\Sigma(\mathbf{i})$ can only be directed as $b\to a$. \end{enumerate}

The following example illustrates the above results.

\begin{example} \label{ex:graph} {\rm Let $\Pi$ be the Dynkin graph $A_4$, i.e., the chain formed by vertices $1, 2, 3$, and $4$. Let $u = s_4 s_2 s_1 s_2 s_3 s_2 s_4 s_1$ and $v = s_2 s_1 s_3 s_2 s_4 s_1 s_3 s_2 s_1$ (in the standard realization of $W$ as the symmetric group $S_5$, with the generators $s_i = (i, i+1)$ (adjacent transpositions), the permutations $u$ and $v$ can be written in the one-line notation as $u=53241$ and $v=54312$). The graph $\Sigma (\mathbf{i})$ corresponding to the reduced word $\mathbf{i}=(2,1,-4,-2,-1,3,-2,2,-3,-2,4,1,-4,-1,3,2,1)$ of $(u,v)$ is shown in Fig.~\ref{fig:graph}. Here white (resp. black) vertices of each horizontal level $i$ correspond to entries of $\mathbf{i}$ that are equal to $-i$ (resp. to $i$). Horizontal edges are shown by solid lines, inclined edges of type (ii) in Definition~\ref{def:edges} by dashed lines, and inclined edges of type (iii) by dotted lines.

\begin{figure} \vskip 10pt \centerline{\hbox{\epsfxsize=10cm\epsfbox{s5xs5.eps}}} \vskip 5pt \caption[]{\label{fig:graph} Graph $\Sigma(\mathbf{i})$ for type $A_4$. } \end{figure}

\begin{figure}[t] \vskip 10pt \centerline{\hbox{\epsfxsize=10cm\epsfbox{s522.eps}}} \vskip 5pt \caption[]{\label{fig:2-move} Graph transformation under a non-trivial 2-move. } \end{figure}

\begin{figure}[b] \vskip 10pt \centerline{\hbox{\epsfxsize=10cm\epsfbox{s5232.eps}}} \vskip 5pt \caption[]{\label{fig:3-move} Graph transformation under a 3-move. } \end{figure}

Now let $\mathbf{i}'$ be obtained from $\mathbf{i}$ by the (non-trivial) 2-move in position 8, i.e., by interchanging $i_7 = -2$ with $i_8 = 2$. The corresponding graph $\Sigma (\mathbf{i}')$ is shown in Fig.~\ref{fig:2-move}. Notice that the edges of $\Sigma(\mathbf{i})$ that fall into the first exceptional case in Theorem~\ref{th:graph change} are $A\to B$, $C\to A$, and $A\to D$; by reversing their orientation, one obtains the edges $B'\to A'$, $A'\to C'$, and $D'\to A'$ of $\Sigma(\mathbf{i}')$. The second exceptional case in Theorem~\ref{th:graph change} applies to two edges $B \to E$ and $D\to E$ of $\Sigma(\mathbf{i})$ and two ``non-edges'' $\{C,B\}$ and $\{C,D\}$; the corresponding edges and non-edges of $\Sigma(\mathbf{i}')$ are $C'\to B'$, $C'\to D'$, $\{E',B'\}$, and $\{E',D'\}$. Finally, consider the reduced word $\mathbf{i}''$ obtained from $\mathbf{i}'$ by the 3-move in position 10, i.e., by replacing $(i'_8, i'_9, i'_{10}) = (-2,-3,-2)$ with $(-3, -2, -3)$. The corresponding graph $\Sigma (\mathbf{i}'')$ is shown in Fig.~\ref{fig:3-move}. Now the first exceptional case in Theorem~\ref{th:graph change} covers the edges $D'\to A'$, $C'\to D'$, $D'\to F'$, and $G'\to D'$ of $\Sigma(\mathbf{i}')$, and the corresponding edges $A''\to D''$, $D''\to C''$, $F''\to D''$, and $D''\to G''$ of $\Sigma(\mathbf{i}'')$.
The second exceptional case covers the edges $F'\to C'$ and $A'\to C'$, and the non-edges $\{G',F'\}$ and $\{G',A'\}$ of $\Sigma(\mathbf{i}')$; the corresponding edges and non-edges of $\Sigma(\mathbf{i}'')$ are $G''\to F''$, $G''\to A''$, $\{C'',F''\}$, and $\{A'',C''\}$.} \end{example}

\subsection{The groups $\Gamma_\mathbf{i}$ and conjugacy theorems}
\label{sec:conjugacy}

As before, let $\mathbf{i} = (i_1, \dots, i_m)$ be a reduced word for a pair $(u,v)$ of elements in a simply-laced Coxeter group $W$. By the general construction in Section~\ref{sec:symplectic transvections general}, the pair $(\Sigma(\mathbf{i}), B(\mathbf{i}))$ gives rise to a skew-symmetric form $\Omega_{\Sigma(\mathbf{i})}$ on $\mathbb{Z}^m$, and to a subgroup $\Gamma_{\Sigma (\mathbf{i}), B(\mathbf{i})} \subset GL_m (\mathbb{Z})$ generated by symplectic transvections. We denote these symplectic transvections by $\tau_{k,\mathbf{i}}$, and also abbreviate $\Omega_\mathbf{i} = \Omega_{\Sigma(\mathbf{i})}$, and $\Gamma_\mathbf{i} = \Gamma_{\Sigma (\mathbf{i}), B(\mathbf{i})}$.

\begin{theorem} \label{th:conjugacy} For any two reduced words $\mathbf{i}$ and $\mathbf{i}'$ for the same pair $(u,v) \in W \times W$, the groups $\Gamma_\mathbf{i}$ and $\Gamma_{\mathbf{i}'}$ are conjugate to each other inside $GL_{m}(\mathbb{Z})$. \end{theorem}

Our proof of Theorem~\ref{th:conjugacy} is constructive. In view of the Tits result quoted in Section~\ref{sec:Coxeter groups general}, it is enough to prove Theorem~\ref{th:conjugacy} in the case when $\mathbf{i}$ and $\mathbf{i}'$ are related by a 2- or 3-move. We shall construct the corresponding conjugating linear transformations explicitly. To do this, let us define two linear maps $\varphi^\pm_{\mathbf{i}',\mathbf{i}}: \mathbb{Z}^{m} \to \mathbb{Z}^{m}$. For $v \in \mathbb{Z}^{m}$, the vectors $\varphi^+_{\mathbf{i}',\mathbf{i}} (v) = v^+$ and $\varphi^-_{\mathbf{i}',\mathbf{i}} (v) = v^-$ are defined as follows. If $\mathbf{i}$ and $\mathbf{i}'$ are related by a trivial $2$-move and $l$ is arbitrary, or if $\mathbf{i}$ and $\mathbf{i}'$ are related by a non-trivial move in position $k$ and $l \neq k$, then we set \begin{equation} \label{eq:phi non-critical} \xi_l (v^+) = \xi_l (v^-) = \xi_{\sigma_{\mathbf{i}',\mathbf{i}} (l)} (v) \ ; \end{equation} for $l = k$ in the case of a non-trivial move, we set \begin{equation} \label{eq:phi critical} \xi_k (v^+) = \sum_{(a \to k) \in \Sigma(\mathbf{i})} \xi_a (v) - \xi_k (v) \ ; \quad \xi_k (v^-) = \sum_{(k \to b) \in \Sigma(\mathbf{i})} \xi_b (v) - \xi_k (v) \ . \end{equation}

\begin{theorem} \label{th:groupoid} If two reduced words $\mathbf{i}$ and $\mathbf{i}'$ for the same pair $(u,v) \in W \times W$ are related by a $2$- or $3$-move then the corresponding linear maps $\varphi^+_{\mathbf{i}',\mathbf{i}}$ and $\varphi^-_{\mathbf{i}',\mathbf{i}}$ are invertible, and \begin{equation} \label{eq:conjugacy} \Gamma_{\mathbf{i}'} = \varphi^+_{\mathbf{i}',\mathbf{i}} \circ \Gamma_{\mathbf{i}} \circ (\varphi^+_{\mathbf{i}',\mathbf{i}})^{-1} = \varphi^-_{\mathbf{i}',\mathbf{i}} \circ \Gamma_{\mathbf{i}} \circ (\varphi^-_{\mathbf{i}',\mathbf{i}})^{-1}\ . \end{equation} \end{theorem}
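The maps $\varphi^\pm_{\mathbf{i}',\mathbf{i}}$ are also straightforward to implement. In the following minimal sketch (ours, not part of the paper), \verb|perm| encodes $\sigma_{\mathbf{i}',\mathbf{i}}$ as a dictionary on $\{1,\ldots,m\}$ (with fixed points omitted), \verb|edges| are the directed edges of $\Sigma(\mathbf{i})$, and \verb|k| is the position of a non-trivial move (\verb|None| for a trivial $2$-move):
\begin{verbatim}
# Minimal sketch (ours): phi^{+/-}_{i',i} of Eqs. (phi non-critical)
# and (phi critical); sign = +1 gives phi^+, sign = -1 gives phi^-.
def phi(v, edges, perm, k, sign):
    # non-critical coordinates: xi_l(v^{+/-}) = xi_{sigma(l)}(v)
    w = [v[perm.get(l, l) - 1] for l in range(1, len(v) + 1)]
    if k is not None:                     # critical coordinate l = k
        nbrs = ([a for (a, b) in edges if b == k] if sign > 0
                else [b for (a, b) in edges if a == k])
        w[k - 1] = sum(v[a - 1] for a in nbrs) - v[k - 1]
    return w
\end{verbatim}
With this encoding, the identities of Theorem~\ref{th:phi^2} below can be checked directly on small examples.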
Our proof of Theorem~\ref{th:groupoid} is based on the following properties of the maps $\varphi^\pm_{\mathbf{i}',\mathbf{i}}$, which might be of independent interest.

\begin{theorem} \label{th:phi^2} (a) The linear maps $\varphi^\pm_{\mathbf{i}',\mathbf{i}}$ satisfy: \begin{equation} \label{eq:phi inverses} \varphi^-_{\mathbf{i},\mathbf{i}'} \circ \varphi^+_{\mathbf{i}',\mathbf{i}} = \varphi^+_{\mathbf{i},\mathbf{i}'} \circ \varphi^-_{\mathbf{i}',\mathbf{i}} = {\rm Id} \ . \end{equation}

\noindent (b) If the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is non-trivial in position $k$ then \begin{equation} \label{eq: tau = phi^2} \varphi^+_{\mathbf{i},\mathbf{i}'} \circ \varphi^+_{\mathbf{i}',\mathbf{i}} = \tau_{k,\mathbf{i}} \ . \end{equation}

\noindent (c) For any $\mathbf{i}$-bounded index $l \in [1,m]$, we have \begin{equation} \label{eq:phi-intertwiner} \varphi^+_{\mathbf{i}',\mathbf{i}} \circ \tau_{l,\mathbf{i}} = \tau_{\sigma_{\mathbf{i}',\mathbf{i}} (l),\mathbf{i}'} \circ \varphi^+_{\mathbf{i}',\mathbf{i}} \end{equation} unless the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is non-trivial in position $k$, and $(l \to k) \in \Sigma_\mathbf{i}$. \end{theorem}

\subsection{Enumerating $\Gamma_{\Sigma, B}( \mathbb{F}_2)$-orbits in $\mathbb{F}_2^\Sigma$}
\label{sec:orbit enumeration}

Let $\Sigma$ and $B$ have the same meaning as in Section~\ref{sec:symplectic transvections general}, and let $\Gamma = \Gamma_{\Sigma, B}( \mathbb{F}_2)$ be the corresponding group of linear transformations of the vector space $\mathbb{F}_2^\Sigma$. The following definition is motivated by the results in \cite{Janssen,ssv1,ssv2}.

\begin{definition} \label{def:E6-compatible graphs} {\rm A finite (non-directed) graph is $E_6$-compatible if it is connected, and it contains an induced subgraph with $6$ vertices isomorphic to the Dynkin graph $E_6$ (see Fig.~\ref{fig:E6}).} \end{definition}

\begin{figure} \vskip 10pt \centerline{\hbox{\epsfxsize=6cm\epsfbox{e6.eps}}} \vskip 5pt \caption[]{\label{fig:E6} Dynkin graph $E_6$. } \end{figure}

\begin{theorem} \label{th:number of orbits} Suppose that the induced subgraph of $\Sigma$ with the set of vertices $B$ is $E_6$-compatible. Then the number of $\Gamma$-orbits in $\mathbb{F}_2^\Sigma$ is equal to $$2^{\#(\Sigma \setminus B)} \cdot (2 + 2^{\dim (\mathbb{F}_2^B \cap {\rm Ker} \ \overline \Omega)}) \ ,$$ where $\overline \Omega$ denotes the $\mathbb{F}_2$-valued bilinear form on $\mathbb{F}_2^\Sigma$ obtained by reduction modulo $2$ from the form $\Omega = \Omega_\Sigma$ in (\ref{eq:Omega}). \end{theorem}

Theorem~\ref{th:number of orbits} has the following corollary which generalizes the main enumeration result in \cite{ssv1,ssv2}.

\begin{corollary} \label{cor:number of components} Let $u$ and $v$ be two elements of a simply-laced Coxeter group $W$, and suppose that for some reduced word $\mathbf{i} \in R(u,v)$, the induced subgraph of $\Sigma(\mathbf{i})$ with the set of vertices $B(\mathbf{i})$ is $E_6$-compatible. Then the number of $\Gamma_\mathbf{i} (\mathbb{F}_2)$-orbits in $\mathbb{F}_2^{m}$ is equal to $3 \cdot 2^s$, where $s$ is the number of indices $i \in \Pi$ such that some (equivalently, any) reduced word for $(u,v)$ has an entry $\pm i$. \end{corollary}
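Theorem~\ref{th:number of orbits} and Corollary~\ref{cor:number of components} can be tested on small examples by brute force; the following minimal sketch (ours, not part of the paper) counts the $\Gamma_{\Sigma,B}(\mathbb{F}_2)$-orbits in $\mathbb{F}_2^\Sigma$ directly (over $\mathbb{F}_2$ the signs in (\ref{eq:transvections2}) are immaterial), and its output can be compared with the formula of Theorem~\ref{th:number of orbits}:
\begin{verbatim}
# Minimal sketch (ours): brute-force orbit count of
# Gamma_{Sigma,B}(F_2) acting on F_2^Sigma (vertices 1..m).
from itertools import product

def count_orbits(m, edges, B):
    def tau2(k, v):                   # transvection mod 2 at k
        w = list(v)
        w[k - 1] = (w[k - 1]
                    + sum(v[a - 1] for (a, b) in edges if b == k)
                    + sum(v[b - 1] for (a, b) in edges if a == k)) % 2
        return tuple(w)
    seen, orbits = set(), 0
    for v in product((0, 1), repeat=m):
        if v in seen:
            continue
        orbits += 1
        stack = [v]                   # explore the orbit of v
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(tau2(k, u) for k in B)
    return orbits
\end{verbatim}
Combined with the function \verb|sigma_graph| sketched above, this allows a direct check of the count $3 \cdot 2^s$ for small pairs $(u,v)$.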
\section{Proofs of results in Section~\ref{sec:graphs}}
\label{sec:proofs-graphs}

\subsection{Proof of Proposition~\ref{pr:boundary vertex}}
\label{sec:ProofBoundary}

By the definition of $\mathbf{i}$-bounded indices, we have $k^- \in [1,m]$ for any $k \in S$. Now pick $b \in S$ with the smallest value of $b^-$, and set $a = b^-$. Clearly, $a \notin S$, and $\{a,b\}$ is a horizontal edge in $\Sigma (\mathbf{i})$. We claim that $b$ is the only vertex in $S$ such that $\{a,b\} \in \Sigma (\mathbf{i})$. Indeed, if $\{a,c\} \in \Sigma (\mathbf{i})$ for some $c \neq b$ then $c^- < a$, in view of Definition~\ref{def:edges}. Because of the way $b$ was chosen, we have $c \notin S$, as required. \endproof

\subsection{Proof of Theorem~\ref{th:strip}}
\label{sec:ProofStrips}

In the course of the proof, we fix a reduced word $\mathbf{i} \in R(u,v)$, and an edge $\{i,j\} \in \Pi$; we shall refer to the $(i,j)$-strip of $\Sigma(\mathbf{i})$ as simply the strip. For any vertex $A = A_k = (k,y)$ in the strip, we set $y(A) = y$, and $\varepsilon (A) = \varepsilon (i_k)$; we call $y(A)$ the \emph{level}, and $\varepsilon (A)$ the \emph{sign} of $A$. We also set $$c(A) = y(A) \varepsilon(A) \ ,$$ and call $c(A)$ the \emph{charge} of a vertex $A$. Finally, we linearly order the vertices by setting $A_k \prec A_l$ if $k < l$, i.e., if the vertex $A_k$ is to the left of $A_l$. In these terms, one can describe inclined edges in the strip as follows.

\begin{lemma} \label{lem:inclined edges} A vertex $B$ is the left end of an inclined edge in the strip if and only if it satisfies the following two conditions:

\noindent (1) $B$ is not the leftmost vertex in the strip, and the preceding vertex $A$ has opposite charge $c(A) = - c(B)$.

\noindent (2) there is a vertex $C$ of opposite level $y(C) = - y(B)$ that lies to the right of $B$.

Under these conditions, an inclined edge with the left end $B$ is unique, and its right end is the leftmost vertex $C$ satisfying (2). \end{lemma}

This is just a reformulation of conditions (ii) and (iii) in Definition~\ref{def:edges}. \endproof

\begin{lemma} \label{lem:inclined edges 2} Suppose $A \prec C \prec C'$ are three vertices in the strip such that $c(A) = - c(C)$, and $y(C) = - y(C')$. Then there exists a vertex $B$ such that $A \prec B \preceq C$, and $B$ is the left end of an inclined edge in the strip. \end{lemma}

\proof Let $B$ be the leftmost vertex such that $A \prec B \preceq C$ and $c(B) = - c(A)$. Clearly, $B$ satisfies condition (1) in Lemma~\ref{lem:inclined edges}. It remains to show that $B$ also satisfies condition (2); that is, we need to find a vertex of opposite level to $B$ that lies to the right of $B$. Depending on the level of $B$, either $C$ or $C'$ is such a vertex, and we are done. \endproof

Now everything is ready for the proof of Theorem~\ref{th:strip}. To prove part (a), assume that $\{B,C\}$ and $\{B',C'\}$ are two inclined edges that cross each other inside the strip. Without loss of generality, assume that $B \prec C$, $B' \prec C'$, and $C \prec C'$. Then we must have $B' \prec C$ (otherwise, our inclined edges would not cross). Since $\{B',C'\}$ is an inclined edge, and $B' \prec C \prec C'$, Lemma~\ref{lem:inclined edges} implies that $y(C) = y(B')$. Therefore, $y(B) = - y(C) = - y(B')$. Again applying Lemma~\ref{lem:inclined edges} to the inclined edge $\{B',C'\}$, we conclude that $B \prec B'$, i.e., we must have $B \prec B' \prec C \prec C'$. But then, by the same lemma, $\{B,C\}$ cannot be an inclined edge, providing the desired contradiction.

To prove part (b), consider two consecutive inclined edges $\{B,C\}$ and $\{B',C'\}$. Again we can assume without loss of generality that $B \prec C$, $B' \prec C'$, and $C \prec C'$. Let $P$ be the boundary of the polygon with vertices $B, C, B'$, and $C'$.
By Lemma~\ref{lem:inclined edges}, the leftmost vertex of $P$ is $B$, the rightmost vertex is $C'$, and $P$ does not contain a vertex $D$ such that $B' \prec D \prec C'$; in particular, we have either $C \preceq B'$ or $C = C'$. Now we make the following crucial observation: all the vertices $D$ on $P$ such that $B \prec D \prec B'$ must have the same charge $c(D) = c(B)$. Indeed, assume that $c(D) = - c(B)$ for some $D$ with $B \prec D \prec B'$. Then Lemma~\ref{lem:inclined edges 2} implies that some $B''$ with $B \prec B'' \preceq D$ is the left end of an inclined edge; but this contradicts our assumption that $\{B,C\}$ and $\{B',C'\}$ are two \emph{consecutive} inclined edges. We see that $c(D) = c(B)$ for any vertex $D \in P \setminus \{B',C'\}$. Combining this fact with condition (1) in Lemma~\ref{lem:inclined edges} applied to the inclined edge $\{B',C'\}$ with the left end $B'$, we conclude that $c(B') = - c(B)$. Remembering the definition of charge, the above statements can be reformulated as follows: $B'$ has the same (resp. opposite) sign as all vertices of opposite (resp. the same) level in $P \setminus \{C'\}$. Using the definition of directions of edges in Definition~\ref{def:edges}, we obtain:

\noindent 1. Horizontal edges on opposite sides of $P$ are directed in opposite ways, since their left ends have opposite signs.

\noindent 2. Suppose $B'$ is the right end of a horizontal edge $\{A,B'\}$ in $P$. Then exactly one of the edges $\{A,B'\}$ and $\{B',C'\}$ is directed towards $B'$, since their left ends $A$ and $B'$ have opposite signs.

\noindent 3. The same argument shows that if $C'$ is the right end of a horizontal edge $\{A,C'\}$ in $P$ then exactly one of the edges $\{A,C'\}$ and $\{B',C'\}$ is directed towards $C'$.

\noindent 4. Finally, if $B$ is the left end of a horizontal edge $\{B,D\}$ in $P$ then exactly one of the edges $\{B,C\}$ and $\{B,D\}$ is directed towards $B$.

These facts imply that $P$ is a directed cycle, which completes the proof of Theorem~\ref{th:strip}. \endproof

\subsection{Proof of Theorem~\ref{th:graph change}}
\label{sec:ProofGraphChange}

Let us call a pair of indices $\{a,b\}$ \emph{exceptional} (for $\mathbf{i}$ and $\mathbf{i}'$) if it violates (\ref{eq:preserving edges}). We need to show that exceptional pairs are precisely those in the two exceptional cases in Theorem~\ref{th:graph change}; to do this, we shall examine the relationship between the corresponding strips in $\Sigma(\mathbf{i})$ and $\Sigma(\mathbf{i}')$. Let us consider the following three cases:

\noindent {\bf Case 1 (trivial $2$-move).} Suppose $i_k = i'_{k-1} = i_0$, $i_{k-1} = i'_{k} = j_0$, and $i_l = i'_l$ for $l \notin \{k-1,k\}$, where $i_0, j_0 \in \tilde \Pi$ are such that $|i_0| \neq |j_0|$ and $\{i_0,j_0\} \notin \tilde \Pi$. If both $i$ and $j$ are different from $|i_0|$ and $|j_0|$ then the strip $\Sigma_{i,j}(\mathbf{i})$ is identical to $\Sigma_{i,j}(\mathbf{i}')$, and so does not contain exceptional pairs. If, say, $i = |i_0|$ but $j \neq |j_0|$ then the only vertex in $\Sigma_{i,j}(\mathbf{i})$ but not in $\Sigma_{i,j}(\mathbf{i}')$ is $A_k$ (in the notation of Section~\ref{sec:ProofStrips}), while the only vertex in $\Sigma_{i,j}(\mathbf{i}')$ but not in $\Sigma_{i,j}(\mathbf{i})$ is $A'_{k-1} = A'_{\sigma(k)}$. The vertex $A_k$ has the same level and sign, and so the same charge, as the vertex $A'_{\sigma(k)}$ in $\Sigma_{i,j}(\mathbf{i}')$; by Lemma~\ref{lem:inclined edges}, there are no exceptional pairs in the strip $\Sigma_{i,j}(\mathbf{i})$.
Finally, suppose that $\{i,j\} = \{|i_0|, |j_0|\}$; in particular, in this case we have $\{|i_0|, |j_0|\} \in \Pi$, hence $\varepsilon (i_0) = - \varepsilon (j_0)$. Now the only vertices in $\Sigma_{i,j}(\mathbf{i})$ but not in $\Sigma_{i,j}(\mathbf{i}')$ are $A_k$ and $A_{k-1}$, while the only vertices in $\Sigma_{i,j}(\mathbf{i}')$ but not in $\Sigma_{i,j}(\mathbf{i})$ are $A'_{k-1} = A'_{\sigma(k)}$ and $A'_{k} = A'_{\sigma(k-1)}$. Since $A_k$ and $A_{k-1}$ are of opposite level and opposite sign, they have the same charge, which is also equal to the charge of $A'_{\sigma(k-1)}$ and $A'_{\sigma(k)}$. Again using Lemma~\ref{lem:inclined edges}, we see that the strip in question also does not contain exceptional pairs. \noindent {\bf Case 2 (non-trivial $2$-move).} Suppose $i_k = i'_{k-1} = i_0 \in \tilde \Pi$, $i_{k-1} = i'_{k} = - i_0$, and $i_l = i'_l$ for $l \notin \{k-1,k\}$. Interchanging if necessary $\mathbf{i}$ and $\mathbf{i}'$, we can and will assume that $i_0 \in \Pi$. Clearly, an exceptional pair can only belong to an $(i,j)$-strip with $i = i_0$. In our case, the location of all vertices in $\Sigma_{i,j}(\mathbf{i})$ and $\Sigma_{i,j}(\mathbf{i}')$ is the same; the only difference between the two strips is that the vertices $A_{k-1}$ and $A_k$ in $\Sigma_{i,j}(\mathbf{i})$ have opposite signs and hence opposite charges to their counterparts in $\Sigma_{i,j}(\mathbf{i}')$. It follows that exceptional pairs of vertices of the same level are precisely horizontal edges containing $A_k$, i.e., $\{A_{k-1}, A_k\}$ and $\{A_k, C\}$, where $C$ is the right neighbor of $A_k$ of the same level (note that $C$ does not necessarily exist). Since $\varepsilon (i_k) = \varepsilon (i'_{k-1}) = +1$, and $\varepsilon (i_{k-1}) = \varepsilon (i'_{k}) = -1$, we have $$(A_k \to A_{k-1}) \in \Sigma(\mathbf{i}), \,\, (A_k \to C) \in \Sigma(\mathbf{i}),$$ $$(A'_{k-1} \to A'_{k}) \in \Sigma(\mathbf{i}'), \,\, (C' \to A'_k) \in\Sigma(\mathbf{i}'),$$ so both pairs $\{A_{k-1}, A_k\}$ and $\{A_k, C\}$ fall into the first exceptional case in Theorem~\ref{th:graph change}. Let us now describe exceptional pairs corresponding to inclined edges. Let $B$ be the vertex of the opposite level to $A_k$ and closest to $A_k$ from the right (as the vertex $C$ above, $B$ does not necessarily exist). By Lemma~\ref{lem:inclined edges}, the left end of an exceptional inclined pair can only be $A_{k-1}$, $A_k$, or the leftmost of $B$ and $C$; furthermore, the corresponding inclined edges can only be $\{A_{k-1},B\}$, $\{A_k,B\}$, or $\{B,C\}$. We claim that all these three pairs are indeed exceptional, and each of them falls into one of the exceptional cases in Theorem~\ref{th:graph change}. Let us start with $\{B,C\}$. Since $A_k$ is the preceding vertex to the leftmost member of $\{B,C\}$, and it has opposite charges in the two strips, Lemma~\ref{lem:inclined edges} implies that $\{B,C\}$ is an edge in precisely one of the strips. By Theorem~\ref{th:strip} (b), the triangle with vertices $A_k$, $B$, and $C$ is a directed cycle in the corresponding strip. Thus the pair $\{B,C\}$ falls into the second exceptional case in Theorem~\ref{th:graph change}. The same argument shows that $\{A_{k-1},B\}$ falls into the second exceptional case in Theorem~\ref{th:graph change} provided one of $A_{k-1}$ and $B$ is $\mathbf{i}$-bounded, i.e., $A_{k-1}$ is not the leftmost vertex in the strip. As for $\{A_k,B\}$, it is an edge in both strips, and it has opposite directions in them because its left end $A_k$ has opposite signs there. 
Thus $\{A_k,B\}$ falls into the first exceptional case in Theorem~\ref{th:graph change}. It remains to show that the exceptional pairs (horizontal and inclined) just discussed exhaust all possibilities for the two exceptional cases in Theorem~\ref{th:graph change}. This is clear because by the above analysis, the only possible edges through $A_k$ in $\Sigma (\mathbf{i})$ are $(A_k \to A_{k-1})$, $(A_k \to C)$, and $(B \to A_k)$ with $B$ of the kind described above. \noindent {\bf Case 3 ($3$-move).} Suppose $i_k = i_{k-2} = i'_{k-1} = i_0$, $i_{k-1} = i'_{k} = i'_{k-2} = j_0$ for some $\{i_0, j_0\} \in \Pi$, and $i_l = i'_l$ for $l \notin \{k-2, k-1,k\}$ (the case when $\{i_0, j_0\} \in -\Pi$ is totally similar). As in the previous case, we need to describe all exceptional pairs. First, an exceptional pair can only belong to an $(i,j)$-strip with at least one of $i$ and $j$ equal to $i_0$ or $j_0$. Next, let us compare the $(i_0,j_0)$-strips in $\Sigma(\mathbf{i})$ and $\Sigma(\mathbf{i}')$. The location of all vertices in these two strips is the same with the exception of $A_{k-2}, A_{k-1}$, and $A_k$ in the former strip, and their counterparts $A'_{k-2} = A'_{\sigma(k-1)}, A'_{k-1} = A'_{\sigma(k-2)}$, and $A'_k$ in the latter strip. Each of the six exceptional vertices has sign $+ 1$; so its level is equal to its charge. These charges (or levels) are given as follows: $$c(A_{k-2}) = c(A'_{\sigma (k-2)}) = c(A_{k}) = -1, \,\, c(A_{k-1}) = c(A'_{\sigma(k-1)}) = c(A'_{k}) = 1 \ .$$ Let $B$ (resp. $B'$) denote the vertex in both strips which is the closest from the right to $A_k$ on the same (resp. opposite) level; note that $B$ or $B'$ may not exist. Since the trapezoid $T$ with vertices $A_{k-2}, A_{k-1}, B'$, and $B$ in $\Sigma_{i_0,j_0}(\mathbf{i})$ is in the same relative position to all outside vertices as the trapezoid $T'$ with vertices $A'_{\sigma(k-2)}, A'_{\sigma(k-1)}, B'$, and $B$ in $\Sigma_{i_0,j_0}(\mathbf{i}')$, it follows that every exceptional pair is contained in $T$. An inspection using Lemma~\ref{lem:inclined edges} shows that $T$ contains the directed edges $$A_{k-2} \to A_k \to A_{k-1} \to B' \to A_k \to B$$ and does not contain any of the edges $\{A_{k-2}, B\}$, $\{A_{k-2}, B'\}$, or $\{A_{k-1}, B\}$. Similarly (or by interchanging $\mathbf{i}$ and $\mathbf{i}'$), we conclude that $T'$ contains the directed edges $$A'_{\sigma(k-1)} \to A'_k \to A'_{\sigma(k-2)} \to B \to A'_k \to B'$$ and does not contain any of the edges $\{A'_{\sigma(k-1)}, B\}$, $\{A'_{\sigma(k-1)}, B'\}$, or $\{A'_{\sigma(k-2)}, B'\}$. Furthermore, $\{B,B'\}$ is an edge in precisely one of the strips (since the preceding vertices $A_k$ and $A'_k$ have opposite charges); and precisely one of the pairs $\{A_{k-2}, A_{k-1}\}$ and $\{A'_{\sigma (k-1)}, A'_{\sigma (k-2)}\}$ is an edge in its strip provided $A_{k-2}$ is not the leftmost vertex (since their left ends $A_{k-2}$ and $A'_{\sigma (k-1)}$ have opposite charges). Comparing this information for the two trapezoids, we see that the exceptional pairs in $T$ are all pairs of vertices in $T$ with the exception of the two diagonals $\{A_{k-2}, B'\}$ and $\{A_{k-1}, B\}$ (and also of $\{A_{k-2}, A_{k-1}\}$ if $A_{k-2}$ is the leftmost vertex in the strip). By inspection based on Theorem~\ref{th:strip} (b), all these exceptional pairs fall into the two exceptional cases in Theorem~\ref{th:graph change}.
A similar (but much simpler) analysis shows that any $(i,j)$-strip with precisely one of $i$ and $j$ belonging to $\{i_0,j_0\}$ does not contain extra exceptional pairs, and also has no inclined edges through $A_k$ or $A'_k$. We conclude that all the exceptional pairs are contained in the above trapezoid $T$. The fact that these exceptional pairs exhaust all possibilities for the two exceptional cases in Theorem~\ref{th:graph change} is clear because by the above analysis, the only edges through $A_k$ in $\Sigma (\mathbf{i})$ are those connecting $A_k$ with the vertices of $T$. Theorem~\ref{th:graph change} is proved. \endproof \section{Proofs of results in Section~\ref{sec:conjugacy}} \label{sec:proofs-abstract} We have already noticed that Theorem~\ref{th:conjugacy} follows from Theorem~\ref{th:groupoid}. Let us first prove Theorem~\ref{th:phi^2} and then deduce Theorem~\ref{th:groupoid} from it. \subsection{Proof of Theorem~\ref{th:phi^2}} We fix reduced words $\mathbf{i}$ and $\mathbf{i}'$ related by a 2- or 3-move, and abbreviate $\sigma = \sigma_{\mathbf{i}',\mathbf{i}} = \sigma_{\mathbf{i},\mathbf{i}'}$ and $\varphi^+ = \varphi^+_{\mathbf{i}',\mathbf{i}}$. Let us first prove parts (a) and (b). We shall only prove the first equality in (\ref{eq:phi inverses}); the proof of the second one and of (\ref{eq: tau = phi^2}) is completely similar. Let $v \in \mathbb{Z}^m$, $v^+ = \varphi^+ (v)$, and $v' = \varphi^-_{\mathbf{i},\mathbf{i}'} (v^+)$; thus we need to show that $v = v'$, i.e., that $\xi_l (v) = \xi_l (v')$ for all $l \in [1,m]$. Note that the permutation $\sigma$ is an involution. In view of (\ref{eq:phi non-critical}), this implies the desired equality $\xi_l (v) = \xi_l (v')$ in all the cases except the following one: the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is non-trivial in position $k$, and $l = k$. To deal with this case, we use the first exceptional case in Theorem~\ref{th:graph change}, which we can write as $$(k \to b) \in \Sigma(\mathbf{i}') \Leftrightarrow (\sigma (b) \to k) \in \Sigma({\mathbf{i}}) \ .$$ Combining this with the definitions (\ref{eq:phi non-critical}) and (\ref{eq:phi critical}), we obtain $$\xi_k (v') = \sum_{(k \to b) \in \Sigma(\mathbf{i}')} \xi_b (v^+) - \xi_k (v^+)$$ $$ = \sum_{(\sigma (b) \to k) \in \Sigma(\mathbf{i})} \xi_{\sigma (b)}(v) - (\sum_{(a \to k) \in \Sigma(\mathbf{i})} \xi_{a}(v) - \xi_k (v)) = \xi_k (v) \ ,$$ as required. We deduce part (c) from the following lemma, which says that the maps $(\varphi^\pm_{\mathbf{i}',\mathbf{i}})^*$ induced by $\varphi^\pm_{\mathbf{i}',\mathbf{i}}$ ``almost'' transform the form $\Omega_{\mathbf{i}'}$ into $\Omega_\mathbf{i}$. \begin{lemma} \label{lem:Omega-transform} If the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is trivial then $$(\varphi^+_{\mathbf{i}',\mathbf{i}})^* (\Omega_{\mathbf{i}'}) = (\varphi^-_{\mathbf{i}',\mathbf{i}})^* (\Omega_{\mathbf{i}'}) = \Omega_{\mathbf{i}} \ .$$ If the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is non-trivial in position $k$ then \begin{equation} \label{eq:Omega-transform} (\varphi^+_{\mathbf{i}',\mathbf{i}})^* (\Omega_{\mathbf{i}'}) = (\varphi^-_{\mathbf{i}',\mathbf{i}})^* (\Omega_{\mathbf{i}'}) = \Omega_{\mathbf{i}} - \doublesubscript{\sum}{(a \to k \to b) \in \Sigma(\mathbf{i})}{a,b \notin B(\mathbf{i})} \xi_a \wedge \xi_b \ .
\end{equation} \end{lemma} \proof We will only deal with $(\varphi^+)^* (\Omega_{\mathbf{i}'})=(\varphi^+_{\mathbf{i}',\mathbf{i}})^* (\Omega_{\mathbf{i}'})$; the form $(\varphi^-_{\mathbf{i}',\mathbf{i}})^*(\Omega_{\mathbf{i}'})$ can be treated in the same way. By the definition, $$(\varphi^+)^* (\Omega_{\mathbf{i}'}) = \sum_{(a' \to b') \in \Sigma(\mathbf{i}')} (\varphi^+)^* \xi_{a'} \wedge (\varphi^+)^* \xi_{b'} \ .$$ The forms $(\varphi^+)^* \xi_{a'}$ are given by (\ref{eq:phi non-critical}) and (\ref{eq:phi critical}). In particular, if $\mathbf{i}$ and $\mathbf{i}'$ are related by a trivial move then $(\varphi^+)^* \xi_{a'} = \xi_{\sigma(a')}$ for any $a' \in [1,m]$; by Theorem~\ref{th:graph change}, in this case we have $$(\varphi^+)^* (\Omega_{\mathbf{i}'}) = \sum_{(a \to b) \in \Sigma(\mathbf{i})} \xi_{a} \wedge \xi_{b} \,$$ as claimed. Now suppose that $\mathbf{i}$ and $\mathbf{i}'$ are related by a non-trivial move in position $k$. Then we have \begin{eqnarray*} (\varphi^+)^* (\Omega_{\mathbf{i}'}) &=& \doublesubscript{\sum}{(\sigma(a) \to \sigma(b)) \in \Sigma(\mathbf{i}')}{a,b \neq k} \xi_a \wedge \xi_b \\ &+& \sum_{(k \to \sigma (a')) \in \Sigma(\mathbf{i}')} (\sum_{(a \to k) \in \Sigma(\mathbf{i})} \xi_{a} - \xi_k) \wedge \xi_{a'} \\ &+& \sum_{(\sigma (b) \to k) \in \Sigma(\mathbf{i}')} \xi_{b} \wedge (\sum_{(a \to k) \in \Sigma(\mathbf{i})} \xi_{a} - \xi_k) \ . \end{eqnarray*} Using the second exceptional case in Theorem~\ref{th:graph change}, we can rewrite the first summand as $$\doublesubscript{\sum}{(a \to b) \in \Sigma(\mathbf{i})}{a,b \neq k} \xi_a \wedge \xi_b + \doublesubscript{\sum}{(a \to k \to b) \in \Sigma(\mathbf{i})}{\{a,b\} \cap B(\mathbf{i}) \neq \emptyset} \xi_a \wedge \xi_b \ .$$ Similarly, using the first exceptional case in Theorem~\ref{th:graph change}, we can rewrite the last two summands as $$\sum_{(a \to k) \in \Sigma(\mathbf{i})} \xi_{a} \wedge \xi_k + \sum_{(k \to b) \in \Sigma(\mathbf{i})} \xi_{k} \wedge \xi_b - \sum_{(a \to k \to b) \in \Sigma(\mathbf{i})} \xi_a \wedge \xi_b$$ (note that the missing term $$\doublesubscript{\sum}{(a \to k) \in \Sigma(\mathbf{i})}{(a' \to k) \in \Sigma(\mathbf{i})} \xi_a \wedge \xi_{a'}$$ is equal to $0$: the diagonal terms $\xi_a \wedge \xi_a$ vanish, and each unordered pair $\{a,a'\}$ contributes $\xi_a \wedge \xi_{a'} + \xi_{a'} \wedge \xi_{a} = 0$). Adding up the last two sums, we obtain (\ref{eq:Omega-transform}). \endproof Now everything is ready for the proof of Theorem~\ref{th:phi^2} (c). Since $l$ is assumed to be $\mathbf{i}$-bounded, Lemma~\ref{lem:Omega-transform} implies that $\Omega_{\mathbf{i}} (v, e_l) = \Omega_{\mathbf{i}'} (\varphi^+(v), \varphi^+(e_l))$ for any $v \in \mathbb{Z}^m$. On the other hand, since the case when the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is non-trivial in position $k$, and $(l \to k) \in \Sigma_\mathbf{i}$, is excluded, we have $\varphi^+(e_l) = \pm e_{\sigma(l)}$ (with the minus sign for $l = k$ only). Therefore, our assumptions on $l$ imply that $$\Omega_{\mathbf{i}} (v, e_l) \varphi^+ (e_l) = \Omega_{\mathbf{i}'} (\varphi^+(v), e_{\sigma(l)}) e_{\sigma(l)} \ .$$ Remembering the definition (\ref{eq:transvections}) of symplectic transvections, we conclude that $$(\tau_{\sigma (l),\mathbf{i}'} \circ \varphi^+)(v) = \varphi^+(v) - \Omega_{\mathbf{i}'} (\varphi^+(v), e_{\sigma(l)}) e_{\sigma(l)}$$ $$= \varphi^+(v) - \Omega_{\mathbf{i}} (v, e_l) \varphi^+ (e_l) = (\varphi^+ \circ \tau_{l,\mathbf{i}})(v) \ ,$$ as required. This completes the proof of Theorem~\ref{th:phi^2}.
\endproof \begin{remark} {\rm It is possible to modify all skew-symmetric forms $\Omega_\mathbf{i}$ without changing the corresponding groups $\Gamma_\mathbf{i}$ in such a way that the modified forms will be preserved by the maps $(\varphi^\pm_{\mathbf{i}',\mathbf{i}})^*$. There are several ways to do it. Here is one ``canonical'' solution: replace each $\Omega_\mathbf{i}$ by the form $$\tilde \Omega_\mathbf{i} = \Omega_\mathbf{i} - \frac{1}{2} \sum \varepsilon(i_k) \xi_k \wedge \xi_l \ ,$$ where the sum is over all pairs of $\mathbf{i}$-unbounded indices $k < l$ such that $\{|i_k|, |i_l|\} \in \Pi$. It follows easily from Lemma~\ref{lem:Omega-transform} that $(\varphi^\pm_{\mathbf{i}',\mathbf{i}})^* (\tilde \Omega_{\mathbf{i}'}) = \tilde \Omega_{\mathbf{i}}$. Unfortunately, the forms $\tilde \Omega_\mathbf{i}$ are not defined over $\mathbb{Z}$; in particular, they cannot be reduced to bilinear forms over $\mathbb{F}_2$.} \end{remark} \subsection{Proof of Theorem~\ref{th:groupoid}} The fact that $\varphi^+_{\mathbf{i}',\mathbf{i}}$ and $\varphi^-_{\mathbf{i}',\mathbf{i}}$ are invertible follows from (\ref{eq:phi inverses}). To prove (\ref{eq:conjugacy}), it remains to show that $\varphi^+_{\mathbf{i}',\mathbf{i}} \circ \tau_{l,\mathbf{i}} \circ (\varphi^+_{\mathbf{i}',\mathbf{i}})^{-1} \in \Gamma_{\mathbf{i}'}$ for any $\mathbf{i}$-bounded index $l \in [1,m]$. This follows from (\ref{eq:phi-intertwiner}) unless the move that relates $\mathbf{i}$ and $\mathbf{i}'$ is non-trivial in position $k$, and $(l \to k) \in \Sigma_\mathbf{i}$. In this exceptional case, we conclude by interchanging $\mathbf{i}$ and $\mathbf{i}'$ in (\ref{eq:phi-intertwiner}) that $$\varphi^+_{\mathbf{i},\mathbf{i}'} \circ \tau_{\sigma_{\mathbf{i}',\mathbf{i}} (l),\mathbf{i}'} = \tau_{l,\mathbf{i}} \circ \varphi^+_{\mathbf{i},\mathbf{i}'} \ .$$ Using (\ref{eq: tau = phi^2}), we obtain that $$\varphi^+_{\mathbf{i}',\mathbf{i}} \circ \tau_{l,\mathbf{i}} \circ (\varphi^+_{\mathbf{i}',\mathbf{i}})^{-1} = (\varphi^+_{\mathbf{i}',\mathbf{i}} \circ \varphi^+_{\mathbf{i},\mathbf{i}'}) \circ \tau_{\sigma_{\mathbf{i}',\mathbf{i}} (l),\mathbf{i}'} \circ (\varphi^+_{\mathbf{i}',\mathbf{i}} \circ \varphi^+_{\mathbf{i},\mathbf{i}'})^{-1} = \tau_{k,\mathbf{i}'} \circ \tau_{\sigma_{\mathbf{i}',\mathbf{i}} (l),\mathbf{i}'} \circ \tau_{k,\mathbf{i}'}^{-1} \in \Gamma_{\mathbf{i}'},$$ as required. This completes the proofs of Theorems~\ref{th:groupoid} and \ref{th:conjugacy}. \endproof \section{Proofs of results in Section~\ref{sec:orbit enumeration}} \label{sec:proofs-enumeration} \subsection{Description of $\Gamma$-orbits} In this section we shall only work over the field $\mathbb{F}_2$. Therefore we find it convenient to change our notation a little bit. Let $V$ be a finite-dimensional vector space over $\mathbb{F}_2$ with a skew-symmetric $\mathbb{F}_2$-valued form $\Omega$ (i.e., $\Omega (v,v) = 0$ for any $v \in V$). For any $v \in V$, let $\tau_v: V \to V$ denote the corresponding symplectic transvection acting by $\tau_v (x) = x - \Omega (x,v) v$. Fix a linearly independent subset $B \subset V$, and let $\Gamma$ be the subgroup of $GL(V)$ generated by the transvections $\tau_b$ for $b \in B$. We make $B$ the set of vertices of a graph with $\{b,b'\}$ an edge whenever $\Omega (b,b') = 1$. We shall deduce Theorem~\ref{th:number of orbits} from the following description of the $\Gamma$-orbits in $V$ in the case when the graph $B$ is $E_6$-compatible (see Definition~\ref{def:E6-compatible graphs}).
Let $U \subset V$ be the linear span of $B$. The group $\Gamma$ preserves each parallel translate $(v + U) \in V/U$ of $U$ in $V$, so we only need to describe $\Gamma$-orbits in each $v + U$. Let us first describe one-element orbits, i.e., $\Gamma$-fixed points in each ``slice'' $v + U$. Let $V^\Gamma \subset V$ denote the subspace of $\Gamma$-invariant vectors, and $K \subset U$ denote the kernel of the restriction $\Omega|_U$. \begin{proposition} \label{pr:Gamma-invariants} If $\Omega(K,v + U) = 0$ then $(v + U) \cap V^\Gamma$ is a parallel translate of $K$; otherwise, this intersection is empty. \end{proposition} \proof Suppose the intersection $(v + U) \cap V^\Gamma$ is non-empty; without loss of generality, we can assume that $v$ is $\Gamma$-invariant. By the definition, $v \in V^\Gamma$ if and only if $\Omega (u,v) = 0$ for all $u \in U$. In particular, $\Omega(K,v) = 0$, hence $\Omega(K,v + U) = 0$. Furthermore, an element $v + u$ of $v + U$ is $\Gamma$-invariant if and only if $u \in K$, and we are done. \endproof Following \cite{Janssen}, we choose a function $Q: V \to \mathbb{F}_2$ satisfying the following properties: \begin{equation} \label{eq:Q} Q(u + v) = Q(u) + Q(v) + \Omega (u,v) \,\, (u,v \in V), \quad Q(b) = 1 \,\, (b \in B) \ . \end{equation} (Clearly, these properties uniquely determine the restriction of $Q$ to $U$.) An easy check shows that $Q(\tau_v (x)) = Q(x)$ whenever $Q(v) = 1$: if $\Omega (x,v) = 0$ then $\tau_v (x) = x$, while if $\Omega (x,v) = 1$ then $Q(\tau_v (x)) = Q(x + v) = Q(x) + Q(v) + \Omega (x,v) = Q(x)$. In particular, the function $Q$ is $\Gamma$-invariant. Now everything is ready for a description of $\Gamma$-orbits in $V$. \begin{theorem} \label{th:orbits} If the graph $B$ is $E_6$-compatible then $\Gamma$ has precisely two orbits in each set $(v + U) \setminus V^\Gamma$. These two orbits are intersections of $(v + U) \setminus V^\Gamma$ with the level sets $Q^{-1} (0)$ and $Q^{-1} (1)$ of $Q$. \end{theorem} The proof will be given in the next section. Let us show that this theorem implies Theorem~\ref{th:number of orbits} and Corollary~\ref{cor:number of components}. \begin{corollary} \label{cor:number of orbits} If the graph $B$ is $E_6$-compatible then the number of $\Gamma$-orbits in $V$ is equal to $2^{\dim (V/U)} \cdot (2 + 2^{\dim (U \cap {\rm Ker} \ \Omega)})$; in particular, if $U \cap {\rm Ker} \ \Omega = \{0\}$ then this number is $3 \cdot 2^{\dim (V/U)}$. \end{corollary} \proof By Proposition~\ref{pr:Gamma-invariants} and Theorem~\ref{th:orbits}, each slice $v+ U$ with $\Omega (K,v + U) = 0$ splits into $2^{\dim K} + 2$ $\Gamma$-orbits, while each of the remaining slices splits into $2$ orbits. There are $2^{\dim (V^\Gamma/K)}$ slices of the first kind and $2^{\dim (V/U)} - 2^{\dim (V^\Gamma/K)}$ slices of the second kind. Thus the number of $\Gamma$-orbits in $V$ is equal to $$2^{\dim (V^\Gamma/K)} \cdot (2^{\dim K} + 2) + (2^{\dim (V/U)} - 2^{\dim (V^\Gamma/K)}) \cdot 2 \ .$$ Our statement follows by simplifying this answer. \endproof Now Theorem~\ref{th:number of orbits} is just a reformulation of this Corollary. As for Corollary~\ref{cor:number of components}, one only needs to show that its assumptions imply that $U \cap {\rm Ker} \ \Omega = \{0\}$. But this follows at once from Proposition~\ref{pr:boundary vertex}. \subsection{Proof of Theorem~\ref{th:orbits}} We split the proof into several lemmas. Let $E \subset U$ be the linear span of $6$ vectors from $B$ that form an induced subgraph isomorphic to $E_6$. The restriction of $\Omega$ to $E$ is nondegenerate; in particular, $E \cap K = \{0\}$.
\begin{lemma} \label{lem:Q on E} (a) Every $4$-dimensional vector subspace of $E$ contains at least two non-zero vectors with $Q = 0$. \noindent (b) Every $5$-dimensional vector subspace of $E$ contains at least two vectors with $Q = 1$. \end{lemma} \proof (a) It suffices to show that every $3$-dimensional subspace of $E$ contains a non-zero vector with $Q = 0$. Let $e_1, e_2$, and $e_3$ be three linearly independent vectors. If we assume that $Q = 1$ on each of the $6$ vectors $e_1, e_2, e_3, e_1 + e_2, e_1 + e_3$, and $e_2 + e_3$ then, in view of (\ref{eq:Q}), we must have $\Omega(e_1, e_2) = \Omega(e_1, e_3) = \Omega(e_2, e_3) = 1$. But then $Q(e_1 + e_2 + e_3) = 0$, as required. (b) It follows from the results in~\cite{Janssen} (or by direct counting) that $E$ consists of $28$ vectors with $Q = 0$ and $36$ vectors with $Q = 1$. Since the cardinality of every $5$-dimensional subspace of $E$ is $32$, our claim follows. \endproof \begin{lemma} \label{lem:Q nonconstant} The function $Q$ is nonconstant on each set $(v + U) \setminus V^\Gamma$. \end{lemma} \proof Suppose $v \in V \setminus V^\Gamma$. By Lemma~\ref{lem:Q on E} (b), there exist two vectors $e \neq e'$ in $E$ such that $$\Omega (v, e) = \Omega (v,e') = 0, \,\, Q(e) = Q(e') = 1 \ .$$ In view of (\ref{eq:Q}), we have $Q(v + e) = Q(v + e') = Q(v) + 1$, and it is clear that at least one of the vectors $v + e$ and $v + e'$ is not $\Gamma$-invariant (otherwise we would have $\Omega(e-e', u) = 0$ for all $u \in U$, which contradicts the fact that $\Omega|_E$ is nondegenerate). \endproof To prove Theorem~\ref{th:orbits}, it remains to show that $\Gamma$ acts transitively on each level set of $Q$ in $(v + U) \setminus V^\Gamma$. To do this, we shall need the following important result due to Janssen~\cite[Theorem~3.5]{Janssen}. \begin{lemma} \label{lem:Q=1 transvections} If $u$ is a vector in $U \setminus K$ such that $Q(u) = 1$ then the symplectic transvection $\tau_u$ belongs to $\Gamma$. \end{lemma} We also need the following result from~\cite[Lemma~4.3]{ssv2}. \begin{lemma} \label{lem:Janssen} If the graph $B$ is $E_6$-compatible then $\Gamma$ acts transitively on each of the level sets of $Q$ in $U \setminus K$. \end{lemma} To continue the proof, let us introduce some terminology. For a linear form $\xi \in U^*$, denote $$T_\xi = \{u \in U \setminus K: Q(u) = \xi (u) = 1\} \ .$$ We shall call a family of vectors $(u_1, u_2, \dots, u_s)$ \emph{weakly orthogonal} if $\Omega(u_1 + \cdots + u_{i-1}, u_i) = 0$ for $i = 2, \cdots, s$. \begin{lemma} \label{lem:straight sums} Let $\xi \in U^*$ be a linear form on $U$ such that $\xi|_K \neq 0$. Then every nonzero vector $u \in U$ such that $Q(u) = \xi (u)$ can be expressed as the sum $u = u_1 + \cdots + u_s$ of some weakly orthogonal family of vectors $(u_1, u_2, \dots, u_s)$ from $T_\xi$. \end{lemma} \proof We need to construct a required weakly orthogonal family $(u_1, u_2, \dots, u_s)$ in each of the following three cases. \noindent {\bf Case 1.} Let $0 \neq u = k \in K$ be such that $Q(k) = \xi(k) = 0$. Since $\xi \neq 0$, we have $\xi (b) = 1$ for some $b \in B$. By (\ref{eq:Q}), we also have $Q(b) = 1$. Since $b \notin K$, we can take $(u_1, u_2) = (b, k-b)$ as a desired weakly orthogonal family. \noindent {\bf Case 2.} Let $u = k \in K$ be such that $Q(k) = \xi (k) = 1$. By Lemma~\ref{lem:Q on E} (a), there exist distinct nonzero vectors $e$ and $e'$ in $E$ such that $Q(e) = \xi (e) = Q(e') = \xi (e') = \Omega (e, e') = 0$. 
Then we can take $(u_1, u_2, u_3) = (k - e, k - e', e + e' - k)$ as a desired weakly orthogonal family. \noindent {\bf Case 3.} Let $u \in U \setminus K$ be such that $Q(u) = \xi(u) = 0$. Since $\xi|_K \neq 0$, we can choose $k \in K$ so that $\xi (k) = 1$. If $Q(k) = 1$ then a desired weakly orthogonal family for $u$ can be chosen as $(u_1, u_2, u_3, u_4)$, where $(u_1, u_2, u_3)$ is a weakly orthogonal family for $k$ constructed in Case 2 above, and $u_4 = u - k$. If $Q(k) = 0$, choose $e \in E$ such that $Q(e) =1, \Omega (u,e) = 0$, and $u - e \notin K$ (the existence of such a vector $e$ follows from Lemma~\ref{lem:Q on E} (b)). If $\xi (e) = 1$ then a desired weakly orthogonal family for $u$ can be chosen as $(u_1, u_2) = (e, u-e)$. Finally, if $\xi (e) = 0$ then a desired weakly orthogonal family for $u$ can be chosen as $(u_1, u_2) = (e + k, u-e-k)$. \endproof Now everything is ready for completing the proof of Theorem~\ref{th:orbits}. Take any slice $v + U \in V/U$; we need to show that $\Gamma$ acts transitively on each of the level sets of $Q$ in $(v + U) \setminus V^\Gamma$. First suppose that $(v + U) \cap V^\Gamma \neq \emptyset$; by Proposition~\ref{pr:Gamma-invariants}, this means that $\Omega (K,v + U) = 0$. Without loss of generality, we can assume that $v$ is $\Gamma$-invariant. Then $\Omega (u, v) = 0$ for any $u \in U$, so we have $Q(v + u) = Q(v) + Q(u)$. On the other hand, we have $g(v+u) = v + g(u)$ for any $g \in \Gamma$ and $u \in U$. Thus the correspondence $u \mapsto v+u$ is a $\Gamma$-equivariant bijection between $U$ and $v + U$ preserving partitions into the level sets of $Q$. Therefore our statement follows from Lemma~\ref{lem:Janssen}. It remains to treat the case when $\Omega (K,v + U) \neq 0$. In other words, if we choose any representative $v$ and define the linear form $\xi \in U^*$ by $\xi (u) = \Omega (u, v)$ then $\xi|_K \neq 0$. Let $u \in U$ be such that $Q(v) = Q(v + u)$; we need to show that $v+u$ belongs to the $\Gamma$-orbit $\Gamma (v)$. In view of (\ref{eq:Q}), we have $Q(u) = \xi (u)$. In view of Lemma~\ref{lem:straight sums}, it suffices to show that $\Gamma (v)$ contains $v + u_1 + \cdots + u_s$ for any weakly orthogonal family of vectors $(u_1, u_2, \dots, u_s)$ from $T_\xi$. We proceed by induction on $s$. The statement is true for $s =1$ because $v+ u_1 = \tau_{u_1} (v)$, and $\tau_{u_1} \in \Gamma$ by Lemma~\ref{lem:Q=1 transvections}. Now let $s \geq 2$, and assume that $v' = v + u_1 + \cdots + u_{s-1} \in \Gamma (v)$. The definition of a weakly orthogonal family implies that $$v + u_1 + \cdots + u_s = v' + u_s = \tau_{u_s} (v') \in \Gamma (v) \ ,$$ and we are done. This completes the proof of Theorem~\ref{th:orbits}. \endproof \section{Connected components of real double Bruhat cells} \label{sec:components-claims} In this section we give a (conjectural) geometric application of the above constructions. We assume that $\Pi$ is a Dynkin graph of simply-laced type, i.e., every connected component of $\Pi$ is the Dynkin graph of type $A_n, D_n, E_6, E_7$, or $E_8$. Let $G$ be a simply connected semisimple algebraic group with the Dynkin graph $\Pi$. We fix a pair of opposite Borel subgroups $B_-$ and~$B$ in $G$; thus $H=B_-\cap B$ is a maximal torus in~$G$. Let $N$ and $N_-$ be the unipotent radicals of $B$ and~$B_-$, respectively. Let $\{\alpha_i: i \in \Pi\}$ be the system of simple roots for which the corresponding root subgroups are contained in~$N$. 
For every $i \in \Pi$, let $\varphi_i: SL_2 \to G$ be the canonical embedding corresponding to $\alpha_i\,$. The (split) real part of $G$ is defined as the subgroup $G(\mathbb{R})$ of $G$ generated by all the subgroups $\varphi_i (SL_2(\mathbb{R}))$. For any subset $L \subset G$ we define its real part by $L(\mathbb{R}) = L \cap G(\mathbb{R})$. The \emph{Weyl group} $W$ of $G$ is defined by $W = {\rm Norm}_G (H)/H$. It is canonically identified with the Coxeter group $W(\Pi)$ (as defined in Section~\ref{sec:Coxeter groups general}) via $s_i = \overline {s_i} H$, where $$\overline {s_i} = \varphi_i \mat{0}{-1}{1}{0} \in {\rm Norm}_G (H) \ .$$ The representatives $\overline {s_i} \in G$ satisfy the braid relations in~$W$; thus the representative $\overline w$ can be unambiguously defined for any $w \in W$ by requiring that $\overline {uv} = \overline {u} \cdot \overline {v}$ whenever $\ell (uv) = \ell (u) + \ell (v)$. The group $G$ has two \emph{Bruhat decompositions}, with respect to $B$ and $B_-\,$: $$G = \bigcup_{u \in W} B u B = \bigcup_{v \in W} B_- v B_- \ . $$ The \emph{double Bruhat cells}~$G^{u,v}$ are defined by $G^{u,v} = B u B \cap B_- v B_- \,$. Following \cite{BZ99}, we define the \emph{reduced double Bruhat cell} $L^{u,v} \subset G^{u,v}$ as follows: \begin{equation} \label{eq: left sections} L^{u,v} = N \overline u N \cap B_- v B_- \ . \end{equation} The maximal torus $H$ acts freely on $G^{u,v}$ by left (or right) translations, and $L^{u,v}$ is a section of this action. Thus $G^{u,v}$ is biregularly isomorphic to $H \times L^{u,v}$, and all properties of $G^{u,v}$ can be translated in a straightforward way into the corresponding properties of $L^{u,v}$ (and vice versa). In particular, Theorem~1.1 in \cite{FZ} implies that $L^{u,v}$ is biregularly isomorphic to a Zariski open subset of an affine space of dimension $\ell(u)+\ell(v)$. \begin{conjecture} \label{con:components} {\rm For every two elements $u$ and $v$ in $W$, and every reduced word $\mathbf{i} \in R(u,v)$, the connected components of $L^{u,v}(\mathbb{R})$ are in a natural bijection with the $\Gamma_\mathbf{i} (\mathbb{F}_2)$-orbits in $\mathbb{F}_2^{\ell(u) + \ell(v)}$.} \end{conjecture} The precise form of this conjecture comes from the ``calculus of generalized minors'' developed in \cite{FZ} and in a forthcoming paper \cite{BZ99}. If $u$ is the identity element $e \in W$ then $L^{e,v} = N \cap B_- v B_-$ is the variety $N^v$ studied in \cite{BZ}. When $G = SL_n$, and $v = w_0$, the longest element in $W$, the real part $N^{w_0}(\mathbb{R})$ is the semi-algebraic set $N_n^0$ discussed in the introduction; in this case, the conjecture was proved in \cite{ssv1, ssv2} (for a special reduced word $\mathbf{i} = (1, 2, 1, \dots, n-1, n-2, \dots, 2, 1) \in R(w_0)$).
\section*{Introduction} Du Val showed in \cite{Du-Val} how multiplicity sequences of the successive blow-ups of a curve can be used to classify singularities. His approach was geometric in nature, and while he was presenting his results at the University of Istanbul, he asked if there was an algebraic counterpart to his findings. Arf, who was attending Du Val's lecture, said that Du Val's characters could be computed by algebraic means, and after a week he showed how to do this. The results were published in \cite{arf}, and later these characters were called the Arf characters of a curve. Arf's idea was to calculate what Lipman later called in \cite{lipman} the Arf ring closure of the coordinate ring of the curve, and then its value semigroup (which is an Arf numerical semigroup). The minimal generators of this semigroup are the Arf characters. The idea of using valuations and numerical semigroups to study curves was not new (this was already carried out by Zariski for plane algebraic curves \cite{zariski}), nor was the idea of producing successive blow-ups of the curve (in three dimensions, this had already been done by Semple in \cite{semple}). Sert\"oz in \cite{sertoz} presents a historical overview and motivation of the problem (see also his appendix to \cite{arf-cw}). Arf numerical semigroups are always of maximal embedding dimension (see \cite{b-d-f} for this and other maximal properties of numerical semigroups and one-dimensional local rings). This family of numerical semigroups has several nice properties; we summarize some of them. They are closed under finite intersections, and if we adjoin to an Arf numerical semigroup its Frobenius number, then the resulting semigroup is again an Arf semigroup (\cite{arf-num-sem}; in other words, the set of Arf numerical semigroups is a Frobenius variety, see \cite{houston}). Arf numerical semigroups can also be defined by the patterns $x+y-z$ or $2x-y$ \cite{maria1, patterns}: for all nonnegative integers $x$, $y$ and $z$ in the semigroup with $x\ge y\ge z$, the integer $x+y-z$ is again in the semigroup. Moreover, quotients (or fractions) of Arf numerical semigroups by positive integers are again Arf, see \cite{d-s}. Also, the parameters of algebro-geometric codes associated to these semigroups are well understood \cite{arf-codes}. Arf semigroups are acute semigroups, that is, the last interval of gaps before the conductor is smaller than the previous interval of gaps \cite{maria-acute}. In this manuscript we describe a way to calculate all Arf numerical semigroups with a prescribed genus and/or conductor. This is accomplished by means of Arf sequences, by associating to each of these sequences an Arf numerical semigroup. We also characterize the Kunz coordinates of an Arf numerical semigroup. In the last section, with the use of Ap\'ery sets, we show how to parametrically describe all Arf numerical semigroups with fixed Frobenius number and multiplicity up to six. The algorithms presented have been implemented in \texttt{GAP} \cite{GAP} and will appear in a forthcoming version of the accepted \texttt{GAP} package \texttt{numericalsgps} \cite{numericalsgps}. The development version of \texttt{numericalsgps} is freely available at \url{https://bitbucket.org/gap-system/numericalsgps}. The reader interested in the implementation may have a look at the manual and the file \texttt{arf-med.gi} in the \texttt{gap} folder of the package. \section{Notation} We will follow the notation of \cite{ns}.
The reader interested in plane curves and numerical semigroups can have a look at \cite[Chapter 4]{ns-app}. A nice description of one-dimensional analytically irreducible local rings and their value semigroups can be found in \cite{b-d-f} (we also recommend this manuscript for a good explanation of how the terminology used in numerical semigroups comes from Algebraic Geometry). A \emph{numerical semigroup} $S$ is a submonoid of $\mathbb N$, the set of nonnegative integers, under addition and with finite complement in $\mathbb N$. A nonnegative integer $g$ not in $S$ is known as a \emph{gap} of $S$, and the cardinality of the set of gaps of $S$, $\mathbb N\setminus S$, is the \emph{genus} of $S$ (or degree of singularity of $S$, \cite{b-d-f}), denoted $\mathrm g(S)$. As $\mathbb N\setminus S$ has finitely many elements, the set $\mathbb Z\setminus S$ (with $\mathbb Z$ the set of integers) has a maximum, which is known as the \emph{Frobenius number} of $S$, $\mathrm F(S)$. In fact, the \emph{conductor} of $S$, denoted here by $\mathrm c(S)$, is the Frobenius number of $S$ plus one (\cite{b-d-f} explains the relationship with the conductor of the semigroup ring associated to $S$). For a nonempty set of nonnegative integers $A$ we denote by \[ \langle A\rangle =\left\{ \sum\nolimits_{a\in A} \lambda_a a\mid \lambda_a\in \mathbb N \hbox{ for all } a\in A\right\}, \] the submonoid of $\mathbb N$ generated by $A$, where the sums have all but finitely many $\lambda_a$ equal to zero. We say that $A$ \emph{generates} $S$ if $\langle A\rangle =S$, and that $A$ is a \emph{minimal generating system} of $S$ if $A$ generates $S$ and no proper subset of $A$ has this property. Every numerical semigroup $S$ has a unique minimal generating system: $S^*\setminus (S^*+S^*)$, where $S^* =S\setminus\{0\}$ \cite[Chapter 1]{ns}. This minimal generating system must contain the \emph{multiplicity} of $S$, denoted $\mathrm m(S)$, which is the least positive integer in $S$. The cardinality of $S^*\setminus (S^*+S^*)$ is the \emph{embedding dimension} of $S$. Since two minimal generators cannot be congruent modulo the multiplicity of $S$, it follows that the embedding dimension of $S$ is less than or equal to its multiplicity. Numerical semigroups attaining this upper bound are called \emph{maximal embedding dimension numerical semigroups}. There are many characterizations of the maximal embedding dimension property. One of them is the following (see for instance \cite{b-d-f} or \cite[Chapter 2]{ns}): a numerical semigroup $S$ has maximal embedding dimension if and only if for every $x,y\in S\setminus\{0\}$, the integer $x+y-\mathrm m(S)$ is in $S$. In this manuscript we are interested in a subfamily of maximal embedding dimension numerical semigroups: the set of Arf numerical semigroups. A numerical semigroup $S$ has the \emph{Arf property} if for every $x,y,z\in S$ with $x\ge y\ge z$, we have $x+y-z\in S$ (from this definition it follows easily that Arf numerical semigroups have maximal embedding dimension). The Arf property on $S$ is equivalent to: $2x-y\in S$ for every $x,y\in S$ with $x\ge y$. Let $I\subseteq \mathbb Z$. We say that $I$ is a \emph{relative ideal} of $S$ if $I+S\subseteq I$ and there exists an integer $i$ such that $i+I\subseteq S$. Given relative ideals $I$ and $J$ of $S$, the set \[ I-_\mathbb Z J=\{ z\in \mathbb Z\mid z+J\subseteq I\} \] is again a relative ideal of $S$, as is $nI=\{i_1+\cdots+i_n\mid i_1,\ldots, i_n\in I\}$ \cite{b-d-f}.
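The characterization of the Arf property via $2x-y$ can be tested directly on the small elements of a semigroup (its elements up to the conductor): if $x$ is at least the conductor and $y\le x$, then $2x-y=x+(x-y)\ge x$ automatically belongs to $S$. The following \texttt{GAP} sketch (ours; the function name is hypothetical and not part of \texttt{numericalsgps}) implements this naive test on the list of small elements, assumed to contain $0$ and to end in the conductor.
\begin{verbatim}
# Naive test of the Arf property: 2x - y must lie in S for all x >= y in S.
# It suffices to let x and y run over the small elements of S.
IsArfByDefinition := function(smalls)
  local c, inS, x, y;
  c := Maximum(smalls);                    # the conductor of S
  inS := z -> z >= c or z in smalls;
  for x in smalls do
    for y in Filtered(smalls, t -> t <= x) do
      if not inS(2*x - y) then return false; fi;
    od;
  od;
  return true;
end;
\end{verbatim}
For instance, \texttt{IsArfByDefinition([0,3,5])} returns \texttt{true} (this is the semigroup $\{0,3,5,\to\}$), while \texttt{IsArfByDefinition([0,3,5,6,8])} returns \texttt{false}, since for $x=5$ and $y=3$ we get $2\cdot 5-3=7\notin\langle 3,5\rangle$.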
The \emph{Lipman semigroup} of $S$ with respect to $I$ is defined as \[ \mathrm L(S,I)=\bigcup_{n\in\mathbb N} (nI-_\mathbb Z nI), \] and it is also called the \emph{semigroup obtained from $S$ by blowing-up $I$} \cite[Section I.2]{b-d-f}. An ideal $I$ is \emph{proper} if $I\subseteq S$. There is only one maximal proper ideal of $S$ with respect to set inclusion, and this ideal is precisely $\mathrm M(S)=S^*$ (so numerical semigroups are ``local''). We will refer to $\mathrm L(S)=\mathrm L(S,S^*)$ as the \emph{Lipman semigroup} of $S$. It can be shown (see for instance \cite[I.2.4]{b-d-f}) that if $\{n_1,\ldots,n_e\}=S^*\setminus (S^*+S^*)$ is the minimal generating set of $S$ with $n_1<\cdots < n_e$, then \[ \mathrm L(S)= \langle n_1,n_2-n_1,\ldots, n_e-n_1\rangle. \] \begin{example}\label{ex-3-5-7} Let $S=\langle 3,5,7\rangle = \{0,3,5,\to \}$ (here $\to$ denotes that all integers larger than $5$ are in the semigroup; we are denoting in this way that the conductor of $S$ is $5$). Then $\mathrm L(S)=\langle 2,3\rangle$ and $\mathrm L(\mathrm L(S))=\mathbb N$. We obtain in this way a multiplicity sequence $3,2,1$ of the successive blowing-ups with respect to the maximal ideal. Observe that in this setting $S=\{0, 3, 3+2, 3+2+1,\to\}$. If we repeat these calculations with $T=\langle 3,5\rangle$, then we have again $\mathrm L(T)=\langle 2,3\rangle$ and $\mathrm L(\mathrm L(T))=\mathbb N$; whence the multiplicity sequence here is the same. However $T$ is not the semigroup ``spanned'' by this multiplicity sequence, which in this case is $S$. \end{example} The property that pops up in the above example is not accidental. Indeed by \cite[Theorem I.3.4]{b-d-f}, a numerical semigroup $S$ has the Arf property if and only if \begin{equation}\label{arf-mult-seq} S=\left\{0, \mathrm{m}(S), \mathrm{m}(S)+\mathrm{m}(\mathrm L(S)),\dots, \sum\nolimits_{i=1}^n \mathrm m(\mathrm L^i(S)),\to\right\}, \end{equation} where $\mathrm L^i(S)$ is defined recursively as follows: $\mathrm L^0(S)=S$ and for every positive integer $i$, $\mathrm L^i(S)=\mathrm L(\mathrm L^{i-1}(S))$. The integer $n$ can be taken to be the minimum such that $\mathrm L^{n+1}(S)=\mathbb N$; and so $\mathrm m(\mathrm L^n(S))\ge 2$. \section{Arf sequences} We say that a sequence of integers $(x_1,\ldots,x_n)$ is an \emph{Arf sequence} provided that \begin{itemize} \item $x_n\ge \cdots \ge x_1\ge 2$ and \item $x_{i+1}\in \{x_i,x_i+x_{i-1},\ldots, x_i+\cdots+x_1, \to\}$. \end{itemize} The following result (rephrased to our needs) supports this notation. \begin{proposition}[{\cite[Corollary 39]{belga}}]\label{seq-ns} Let $S$ be a nonempty proper subset of $\mathbb N$. Then $S$ is an Arf numerical semigroup if and only if there exists an Arf sequence $(x_1,\ldots, x_n)$ such that $S=\{0,x_n, x_n+x_{n-1},\ldots, x_n+\cdots+x_1,\to\}$. \end{proposition} \begin{proof} Notice that in \cite[Corollary 39]{belga} the condition on $x_1$ is $x_1 \ge 1$. We can omit all the terms $x_i=1$ from the sequence, since the resulting semigroup remains the same; this corresponds to considering the multiplicity sequence only up to the stage where $\mathrm L^n(S)\neq \mathbb N$ and $\mathrm L^{n+1}(S)=\mathbb N$. \end{proof} Given an Arf sequence $(x_1,\ldots,x_n)$, we will denote by $\mathrm S(x_1,\ldots, x_n)$ the associated Arf numerical semigroup given in Proposition \ref{seq-ns}, and we will say that it is the \emph{Arf numerical semigroup associated to $(x_1,\ldots, x_n)$}.
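Both the notion of Arf sequence and the map $\mathrm S$ are easy to prototype. The following \texttt{GAP} sketch (ours; the function names are hypothetical and not part of the package) tests the defining conditions of an Arf sequence and returns the small elements of $\mathrm S(x_1,\ldots,x_n)$.
\begin{verbatim}
# An Arf sequence satisfies x_n >= ... >= x_1 >= 2, and x_{i+1} belongs
# to {x_i, x_i+x_{i-1}, ..., x_i+...+x_1} or is at least x_i+...+x_1.
IsArfSequence := function(x)
  local i, sums;
  if x[1] < 2 then return false; fi;
  for i in [1 .. Length(x)-1] do
    sums := List([0 .. i-1], j -> Sum(x{[i-j .. i]}));
    if not (x[i+1] in sums or x[i+1] >= Sum(x{[1 .. i]})) then
      return false;
    fi;
  od;
  return true;
end;

# Small elements of S(x_1,...,x_n) = {0, x_n, x_n+x_{n-1}, ..., ->}
ArfSequenceToSmallElements := x -> Concatenation([0],
  List([1 .. Length(x)], k -> Sum(x{[Length(x)-k+1 .. Length(x)]})));
\end{verbatim}
For instance, \texttt{ArfSequenceToSmallElements([2,3])} yields \texttt{[ 0, 3, 5 ]}, recovering the semigroup $\{0,3,5,\to\}=\langle 3,5,7\rangle$ of Example~\ref{ex-3-5-7}.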
Hence for every Arf sequence $(x_1,\ldots,x_n)$, $\mathrm S(x_1,\ldots,x_n)$ is a numerical semigroup not equal to $\mathbb N$ and with the Arf property. And given an Arf numerical semigroup $S\neq \mathbb N$, according to \eqref{arf-mult-seq} and Proposition \ref{seq-ns}, if $n$ is a nonnegative integer such that $\mathrm L^n(S)\subsetneq \mathrm L^{n+1}(S)=\mathbb N$, the sequence $(\mathrm m(\mathrm L^n(S)), \mathrm m(\mathrm L^{n-1}(S)), \ldots, \mathrm m(S))$ is an Arf sequence. This proves the following. \begin{corollary}\label{bijection} Let $\mathcal S$ be the set of Arf sequences, and let $\mathcal A$ be the set of all Arf numerical semigroups. The mapping \[ \begin{matrix} \mathrm S: \mathcal S\to \mathcal A\setminus \{\mathbb N\},\\ (x_1,\ldots,x_n)\mapsto \mathrm S(x_1,\ldots, x_n) \end{matrix} \] is a bijection, and its inverse is the map $S\mapsto (\mathrm m(\mathrm L^n(S)), \mathrm m(\mathrm L^{n-1}(S)), \ldots, \mathrm m(S))$. \end{corollary} It is then clear that counting Arf numerical semigroups is tightly related to counting Arf sequences. Moreover, if we are looking for numerical semigroups with a prescribed genus or Frobenius number, the following result will be of great help. \begin{proposition}\label{genus-frob-seq} Let $(x_1,\ldots, x_n)$ be an Arf sequence. Then \begin{enumerate}[(i)] \item $\mathrm F(\mathrm S(x_1,\ldots,x_n))=x_1+\cdots + x_n-1$ (and thus $\mathrm c(\mathrm S(x_1,\ldots,x_n))=x_1+\cdots + x_n$), \item $\mathrm g(\mathrm S(x_1,\ldots,x_n))=x_1+\cdots+x_n-n$. \end{enumerate} \end{proposition} \begin{proof} In order to ease the notation, set $S=\mathrm S(x_1,\ldots, x_n)$, which by Proposition \ref{seq-ns} we know is an Arf numerical semigroup. From the very construction of $S$, we have that the conductor of $S$ is at most $x_1+\cdots+x_n$. \begin{enumerate}[(i)] \item From the above paragraph, it suffices to show that $x_1+\cdots+x_n-1\not \in S$. But this follows easily from the fact that $x_1\ge 2$. \item We can explicitly write the set of gaps of $S$, \begin{multline*} \mathbb N\setminus S=\{1,\ldots,x_n-1,x_n+1,\ldots, x_n+x_{n-1}-1,\ldots,x_n+\cdots+x_2-1,\\ x_n+\cdots+x_2+1,\ldots, x_n+\cdots+x_1-1\}. \end{multline*} It follows that $\mathrm g(S)= (x_n-1)+(x_{n-1}-1)+\cdots+(x_1-1)=x_1+\cdots+x_n-n$.\qedhere \end{enumerate} \end{proof} \section{The set of Arf numerical semigroups with given conductor} Let $c$ be a positive integer. In light of Corollary \ref{bijection} and Proposition \ref{genus-frob-seq}, in order to calculate the set of Arf numerical semigroups with conductor $c$ we only have to calculate some specific integer partitions of $c$, and then their images via the map $\mathrm S$. We can compute the set of integer partitions of $c$ with the help of \cite{partitions} or the built-in \texttt{GAP} command \texttt{Partitions}, and either filter out those having 1's or avoid 1's while constructing the partitions. However, the number of partitions grows exponentially (for instance \texttt{NrPartitions(100)} in \texttt{GAP} yields 190569292), and we must then select which partitions are Arf sequences. We do not have, as in the case of saturated numerical semigroups, a ``next'' function that, given an Arf sequence, computes the next one in a prescribed ordering \cite{sat}. In this section we present an alternative to the approach of computing all partitions and filtering those that are Arf sequences. The procedure dynamically calculates the set of all Arf numerical semigroups with conductor less than or equal to $c$.
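As an illustration, the brute-force approach just described can be sketched as follows, reusing \texttt{IsArfSequence} from the previous section (and assuming \texttt{GAP}'s \texttt{Partitions}, which lists the parts of each partition in decreasing order); it is only practical for small $c$.
\begin{verbatim}
# Brute-force baseline: Arf sequences summing to c, obtained by filtering
# the partitions of c (reversed so that the parts are increasing).
ArfSequencesWithSum := c -> Filtered(List(Partitions(c), Reversed),
  s -> not 1 in s and IsArfSequence(s));
\end{verbatim}
For instance, \texttt{ArfSequencesWithSum(5)} returns the two Arf sequences summing to $5$, namely $(2,3)$ and $(5)$, matching the entry $2$ for $F=4$ in Figure~\ref{fig-table-frob-100}.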
The main idea is based on the following result, which allows us to construct all Arf sequences of length $k+1$ from the set of Arf sequences of length $k$. Its proof follows directly from the definition of Arf sequence. \begin{proposition}\label{seq-recur} Let $\mathcal S$ be the set of all Arf sequences and let $k$ be a positive integer. \begin{enumerate}[(i)] \item If $(x_1,\ldots, x_{k+1})\in \mathcal S$, then $(x_1,\ldots, x_k)\in \mathcal S$. \item If $(x_1,\ldots, x_k)\in \mathcal S$, then $(x_1,\ldots, x_k,x_{k+1})\in \mathcal S$ for all $x_{k+1}\in \mathrm S(x_1,\ldots,x_k)^*$. \end{enumerate} \end{proposition} Let us denote by $\mathcal S_k$ the set of all Arf sequences of length $k$, and for a positive integer $n$, set \[\mathcal S_k(n)=\{(x_1,\ldots,x_k)\in \mathcal S_k \mid x_1+\cdots+x_k\le n\}.\] As a consequence of the last result and of the fact that $x_1\ge 2$, we obtain the following. We use $(x,y]$, with $x,y\in \mathbb N$, to denote the interval of real numbers $r$ such that $x<r\le y$. \begin{corollary}\label{seq-k-rec} Let $\{b_i\}_{i\in\mathbb N}\subseteq \mathbb N$ be such that $0\le b_{i+1}-b_i\le 2$ for all $i\in \mathbb N$. \begin{enumerate}[(i)] \item $\mathcal S_1(b_1)= \big\{ (x_1) \mid x_1\in \{2,\ldots, b_1\}\big\}$. \item For $k\in \mathbb N^*$, \[\mathcal S_{k+1}(b_{k+1})=\left\{ (x_1,\ldots, x_{k+1})~\middle\vert ~ \begin{matrix} (x_1,\ldots,x_k)\in \mathcal S_k(b_k)\\ x_{k+1}\in \mathrm S(x_1,\ldots,x_k)\cap (0,b_{k+1}-\sum_{i=1}^k x_i]\end{matrix}\right\}.\] \end{enumerate} \end{corollary} In light of Proposition \ref{genus-frob-seq} (i) and Corollary \ref{seq-k-rec}, for the calculation of the set of Arf numerical semigroups with conductor less than or equal to $c$ it is enough to calculate $\mathcal S_k(c)$ for $k\in\{1,\ldots,\lfloor c/2\rfloor\}$ (notice that the elements in an Arf sequence are greater than or equal to $2$). This is described in Algorithm \ref{alg:arf-frob-up-to}.
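The recursion of Corollary~\ref{seq-k-rec} (with $b_k=c$ for every $k$) translates almost verbatim into \texttt{GAP}. The sketch below is our own prototype, reusing \texttt{ArfSequenceToSmallElements} from above; the actual package implementation is more elaborate. It returns all Arf sequences whose entries sum to at most $c$.
\begin{verbatim}
ArfSequencesWithSumUpTo := function(c)
  local layers, k, new, s, smalls, bound, x;
  layers := [ List([2 .. c], x1 -> [x1]) ];      # the layer S_1(c)
  for k in [2 .. Int(c/2)] do                    # entries are at least 2
    new := [];
    for s in layers[k-1] do
      smalls := ArfSequenceToSmallElements(s);
      bound := c - Sum(s);                       # room left for x_k
      # x_k runs over the nonzero elements of S(s) not exceeding bound
      for x in Filtered([1 .. bound],
                        t -> t in smalls or t >= Maximum(smalls)) do
        Add(new, Concatenation(s, [x]));
      od;
    od;
    Add(layers, new);
  od;
  return Concatenation(layers);
end;
\end{verbatim}
For instance, \texttt{ArfSequencesWithSumUpTo(6)} returns exactly the ten sequences computed in the example at the end of this section.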
\begin{algorithm}[h]\caption{ArfNumericalSemigroupsWithConductorUpTo\label{alg:arf-frob-up-to}} \KwData{A positive integer $c$} \KwResult{The set of all Arf numerical semigroups with conductor less than or equal to $c$} \For{$k\in \{1,\ldots, \lfloor c/2\rfloor \}$}{ Compute $\mathcal S_k(c)$ \tcc*{use Corollary \ref{seq-k-rec}} } $L=\bigcup_{k=1}^{\lfloor c/2\rfloor} \mathcal S_k(c)$\; \Return $\{\mathbb N\}\cup\{ \mathrm S(x_1,\ldots, x_n) \mid (x_1,\ldots, x_n)\in L\}$ \end{algorithm} \begin{figure}[h] \begin{tabular}{|l|l||l|l||l|l||l|l|}\hline $F$ & na($F$) & $F$ & na($F$) & $F$ & na($F$) & $F$ & na($F$) \\ \hline \hline 1&1&26&111&51&1643&76&5494\\ 2&1&27&176&52&1196&77&9215\\ 3&2&28&138&53&2043&78&5707\\ 4&2&29&239&54&1289&79&10469\\ 5&4&30&150&55&2339&80&6709\\ 6&3&31&298&56&1563&81&10822\\ 7&7&32&211&57&2513&82&7698\\ 8&6&33&341&58&1854&83&12951\\ 9&10&34&268&59&3134&84&7705\\ 10&9&35&440&60&1852&85&14028\\ 11&17&36&279&61&3542&86&9399\\ 12&12&37&535&62&2414&87&15011\\ 13&25&38&389&63&3823&88&10395\\ 14&20&39&616&64&2726&89&17538\\ 15&32&40&448&65&4499&90&10381\\ 16&27&41&778&66&2809&91&19147\\ 17&49&42&490&67&5184&92&12425\\ 18&34&43&936&68&3501&93&20048\\ 19&68&44&642&69&5542&94&13988\\ 20&49&45&1001&70&3866&95&23263\\ 21&80&46&759&71&6645&96&13876\\ 22&66&47&1300&72&3936&97&25560\\ 23&118&48&808&73&7413&98&16839\\ 24&77&49&1496&74&4992&99&26734\\ 25&145&50&1028&75&7829&100&17903\\ \hline \end{tabular} \caption{Number of Arf numerical semigroups with Frobenius number up to 100} \label{fig-table-frob-100} \end{figure} Figure \ref{fig-table-frob-100} shows the number of Arf numerical semigroups with Frobenius number up to 100. The package \texttt{numericalsgps} already contains many functions computing families of numerical semigroups with a given Frobenius number, and thus in our implementation we decided to use the Frobenius number instead of the conductor. The calculation of the table took 36 seconds on a laptop. However, we still do not know how many numerical semigroups there are with Frobenius number 100; so the approach of considering them all and filtering those that are Arf was rejected from the very beginning. For instance, for $F=35$, there are 292081 numerical semigroups; among these, 8959 have maximal embedding dimension and only 440 have the Arf property. Figure \ref{fig-comp-frob-saturated} compares the number of numerical semigroups with given Frobenius number that are saturated (as calculated in \cite{sat}) with those that are Arf.
\begin{figure} \begin{tikzpicture}[scale=.75] \pgfplotsset{every axis legend/.append style={ at={(1.02,1)}, anchor=north west}} \begin{axis}[ width=12cm, xlabel=Frobenius number, ylabel=\# numerical semigroup ] \addplot[smooth,color=red,mark=x] plot coordinates { (1,1) (2,1) (3,2) (4,2) (5,4) (6,3) (7,7) (8,5) (9,9) (10,8) (11,16) (12,7) (13,21) (14,14) (15,25) (16,18) (17,39) (18,16) (19,50) (20,22) (21,52) (22,40) (23,84) (24,20) (25,92) (26,53) (27,103) (28,54) (29,144) (30,39) (31,175) (32,68) (33,166) (34,105) (35,240) (36,49) (37,280) (38,131) (39,285) (40,113) (41,378) (42,88) (43,439) (44,155) (45,389) (46,233) (47,597) (48,79) (49,624) (50,239) (51,628) (52,266) (53,828) (54,170) (55,909) (56,284) (57,865) (58,440) (59,1210) (60,95) (61,1267) (62,490) (63,1208) (64,443) (65,1522) (66,303) (67,1785) (68,528) (69,1612) (70,662) (71,2228) (72,197) (73,2291) (74,816) (75,2124) (76,779) (77,2783) (78,491) (79,3157) (80,728) (81,2775) (82,1200) (83,3765) (84,282) (85,3789) (86,1347) (87,3752) (88,1196) (89,4681) (90,506) (91,5039) (92,1336) (93,4574) (94,1878) (95,5973) (96,463) (97,6307) (98,1944) (99,5894) (100,1605) }; \addlegendentry{saturated} \addplot[smooth,mark=o,blue] plot coordinates { (1,1) (2,1) (3,2) (4,2) (5,4) (6,3) (7,7) (8,6) (9,10) (10,9) (11,17) (12,12) (13,25) (14,20) (15,32) (16,27) (17,49) (18,34) (19,68) (20,49) (21,80) (22,66) (23,118) (24,77) (25,145) (26,111) (27,176) (28,138) (29,239) (30,150) (31,298) (32,211) (33,341) (34,268) (35,440) (36,279) (37,535) (38,389) (39,616) (40,448) (41,778) (42,490) (43,936) (44,642) (45,1001) (46,759) (47,1300) (48,808) (49,1496) (50,1028) (51,1643) (52,1196) (53,2043) (54,1289) (55,2339) (56,1563) (57,2513) (58,1854) (59,3134) (60,1852) (61,3542) (62,2414) (63,3823) (64,2726) (65,4499) (66,2809) (67,5184) (68,3501) (69,5542) (70,3866) (71,6645) (72,3936) (73,7413) (74,4992) (75,7829) (76,5494) (77,9215) (78,5707) (79,10469) (80,6709) (81,10822) (82,7698) (83,12951) (84,7705) (85,14028) (86,9399) (87,15011) (88,10395) (89,17538) (90,10381) (91,19147) (92,12425) (93,20048) (94,13988) (95,23263) (96,13876) (97,25560) (98,16839) (99,26734) (100,17903) }; \addlegendentry{Arf} \end{axis} \end{tikzpicture} \caption{Comparison with the number of saturated numerical semigroups} \label{fig-comp-frob-saturated} \end{figure} \begin{example} Let us compute the set of numerical semigroups with conductor less than or equal to six and with the Arf property. By Figure \ref{fig-table-frob-100} we already know that we have ten of them (eleven counting $\mathbb N$: 1+1+1+2+2+4; we have to go up to Frobenius number 5). As we have pointed above we must calculate $\mathcal S_k(6)$ for $k\in\{1,2,3\}$. \begin{itemize} \item $\mathcal S_1(6)=\{ (2), (3), (4), (5), (6)\}$, \item $\mathcal S_2(6)=\{ (2,2), (3,3), (2,3), (2,4)\}$, \item $\mathcal S_3(6)=\{(2,2,2)\}$. \end{itemize} Now we have to translate these sequences to numerical semigroups via the map $\mathrm S$. For instance $\mathrm S(2,3)=\{0,3,5,\to\}=\langle 3,5,7\rangle$. We then obtain \begin{itemize} \item $\langle 2,3\rangle$, $\langle 3,4,5\rangle$, $\langle 4,5,6,7\rangle$, $\langle 5,6,7,8,9\rangle$, $\langle 6,7,8,9,10,11\rangle$, \item $\langle 2,5\rangle$, $\langle 3,7,8\rangle$, $\langle 3,5,7\rangle$, $\langle 4,6,7,9\rangle$, \item $\langle 2,7\rangle$. \end{itemize} Finally, we have to add $\mathbb N$. 
In a \texttt{GAP} session with the package \texttt{numericalsgps} we would proceed as follows: \begin{verbatim} gap> la5:=ArfNumericalSemigroupsWithFrobeniusNumberUpTo(5);; gap> List(la5,MinimalGeneratingSystem); [ [1], [ 2, 3 ], [ 3 .. 5 ], [ 4 .. 7 ], [ 5 .. 9 ], [ 6 .. 11 ], [ 2, 5 ], [ 3, 5, 7 ], [ 4, 6, 7, 9 ], [ 3, 7, 8 ], [ 2, 7 ] ] \end{verbatim} As we mentioned in the introduction, adjoining the Frobenius number to an Arf numerical semigroup yields another Arf numerical semigroup. Figure \ref{fig-hasse-frob-5} represents the Hasse diagram of all numerical semigroups with conductor less than or equal to 6 and with the Arf property. \end{example} \begin{figure} \includegraphics[width=.5\textwidth]{fr-5.pdf} \caption{Hasse diagram of Arf numerical semigroups with conductor up to six.} \label{fig-hasse-frob-5} \end{figure} \section{Arf numerical semigroups with given genus} As in the previous section, we are again interested in Arf sequences with particular characteristics. In this case, by Proposition \ref{genus-frob-seq} (ii), the length of the sequence is also relevant. If we fix the genus $g$, we need to calculate, for every suitable $k$, $\mathcal S_k(g+k)$, and then take the union of all of them. Also, in contrast to the conductor case, $k$ can range up to $g$, since $\mathrm g(\mathrm S(x_1,\ldots, x_n))= x_1+\cdots +x_n -n\le g$ together with $x_i\ge 2$ forces $2n-n\le g$, that is, $n\le g$. In order to use recursion, we must be able to construct $\mathcal S_{k+1}(g+k+1)$ from $\mathcal S_k (g+k)$. We can do this by using Corollary \ref{seq-k-rec} with $b_i=g+i$ for all $i\in\mathbb N$. Algorithm \ref{alg:arf-genus-up-to} gathers the procedure to calculate all Arf numerical semigroups with genus up to $g$. \begin{algorithm}\caption{ArfNumericalSemigroupsWithGenusUpTo\label{alg:arf-genus-up-to}} \KwData{A positive integer $g$} \KwResult{The set of all Arf numerical semigroups with genus less than or equal to $g$} \For{$k\in \{1,\ldots, g \}$}{ Compute $\mathcal S_k(g+k)$ \tcc*{use Corollary \ref{seq-k-rec}} } $L=\bigcup_{k=1}^{g} \mathcal S_k(g+k)$\; \Return $\{\mathbb N\}\cup\{ \mathrm S(x_1,\ldots, x_n) \mid (x_1,\ldots, x_n)\in L\}$ \end{algorithm} \begin{example}\label{ex-lag5} Let us apply the procedure in this section to calculate all Arf numerical semigroups with genus less than or equal to 5. We have to compute $\bigcup_{k=1}^g \mathcal S_k(g+k)$. \begin{itemize} \item $\mathcal S_1(5+1)= \{ ( 2 ), ( 3 ), ( 4 ), ( 5 ), ( 6 ) \}$, \item $\mathcal S_2(5+2)= \{ ( 2, 2 ), ( 2, 3 ), ( 2, 4 ), ( 2, 5 ), ( 3, 3 ), ( 3, 4 ) \}$, \item $\mathcal S_3(5+3)=\{ ( 2, 2, 2 ), ( 2, 2, 4 ), ( 2, 3, 3 ) \}$, \item $\mathcal S_4(5+4)= \{ ( 2, 2, 2, 2 ) \}$, \item $\mathcal S_5(5+5)= \{ ( 2, 2, 2, 2, 2 ) \}$. \end{itemize} Next, we have to compute the image of each of them under $\mathrm S$, and finally add $\mathbb N$. In \texttt{GAP}, we can do this with the package \texttt{numericalsgps} as follows. \begin{verbatim} gap> lag5:=ArfNumericalSemigroupsWithGenusUpTo(5);; gap> List(lag5,MinimalGeneratingSystem); [ [ 1 ], [ 2, 3 ], [ 2, 5 ], [ 2, 7 ], [ 2, 9 ], [ 2, 11 ], [ 4, 6, 9, 11 ], [ 3, 5, 7 ], [ 3, 8, 10 ], [ 4, 6, 7, 9 ], [ 5, 7, 8, 9, 11 ], [ 3 .. 5 ], [ 3, 7, 8 ], [ 4, 7, 9, 10 ], [ 4 .. 7 ], [ 5 .. 9 ], [ 6 ..
11 ] ] \end{verbatim} \end{example} \begin{figure} \begin{tabular}{|l|l||l|l||l|l||l|l|}\hline $g$ & na($g$) & $g$ & na($g$) & $g$ & na($g$) & $g$ & na($g$) \\ \hline \hline 1&1&26&251&51&2504&76&12275\\ 2&2&27&284&52&2694&77&12979\\ 3&3&28&317&53&2904&78&13701\\ 4&4&29&355&54&3131&79&14468\\ 5&6&30&393&55&3358&80&15295\\ 6&8&31&433&56&3605&81&16114\\ 7&10&32&487&57&3851&82&16959\\ 8&13&33&538&58&4112&83&17840\\ 9&17&34&594&59&4391&84&18765\\ 10&21&35&658&60&4699&85&19738\\ 11&26&36&721&61&5022&86&20781\\ 12&31&37&793&62&5365&87&21864\\ 13&36&38&866&63&5705&88&22993\\ 14&47&39&946&64&6074&89&24163\\ 15&55&40&1037&65&6472&90&25351\\ 16&62&41&1138&66&6881&91&26581\\ 17&74&42&1234&67&7307&92&27899\\ 18&87&43&1338&68&7767&93&29246\\ 19&101&44&1452&69&8240&94&30664\\ 20&116&45&1584&70&8740&95&32139\\ 21&133&46&1720&71&9265&96&33657\\ 22&152&47&1861&72&9813&97&35228\\ 23&174&48&2008&73&10386&98&36882\\ 24&196&49&2164&74&10999&99&38602\\ 25&222&50&2332&75&11620&100&40412\\ \hline \end{tabular} \caption{Number of Arf numerical semigroups with genus less than or equal to 100.} \label{table-genus} \end{figure} Notice that, in contrast to the sequence of the number of Arf numerical semigroups with given Frobenius number (Figure \ref{fig-table-frob-100}), we see in Figure \ref{table-genus} that in the case of counting with respect to the genus, the resulting sequence is increasing. From \cite[Section~2]{arf-num-sem}, we know that the tree of Arf numerical semigroups is a binary tree. This tree is constructed as follows. Let $A$ be a nonempty set of positive integers with greatest common divisor one. The intersection of all Arf numerical semigroups containing $A$ is an Arf semigroup (every numerical semigroup containing $A$ must also contain $\langle A\rangle$; whence there are only finitely many containing $A$). We denote this numerical semigroup by $\mathrm{Arf}(A)$. Given an Arf numerical semigroup $S$, we say that $A$ is an Arf system of generators of $S$ if $\mathrm{Arf}(A)=S$; and it is a minimal Arf system of generators if no proper subset of $A$ has this property. The elements of $A$ are called minimal Arf generators of $S$. The tree of Arf numerical semigroups is constructed recursively by removing, for each semigroup in the tree, its minimal Arf generators greater than its Frobenius number. Lemma 12 in \cite{arf-num-sem} states that at most two minimal Arf generators are greater than the Frobenius number of an Arf semigroup, and according to its proof these are the Frobenius number plus one and plus two. This is why the tree is binary. Also, a leaf in this tree is an Arf numerical semigroup with no minimal Arf generators above its Frobenius number. The absence of leaves would explain why the sequence in Figure \ref{table-genus} is increasing. However, this is not the case: there are plenty of leaves in this tree. For instance, $\mathrm F(\mathrm{Arf}(5,7))=8$, and consequently $\mathrm{Arf}(5,7)$ is a leaf in the binary tree of Arf numerical semigroups (this example has genus 6; all the semigroups appearing in Example \ref{ex-lag5} have descendants in the tree). Figure \ref{fig:bin-tree-6} depicts the binary tree of Arf numerical semigroups up to genus 6; the shaded node corresponds to the unique leaf in the tree of all Arf numerical semigroups. Each layer corresponds to a different genus.
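The leaf $\mathrm{Arf}(5,7)$ can also be checked computationally: its multiplicity sequence is $5,2,2$, so $\mathrm{Arf}(5,7)=\{0,5,7,9,\to\}$ and $\mathrm F(\mathrm{Arf}(5,7))=8$. Assuming the functions \texttt{ArfNumericalSemigroupClosure}, \texttt{SmallElementsOfNumericalSemigroup} and \texttt{FrobeniusNumberOfNumericalSemigroup} of \texttt{numericalsgps} (we use them as documented in the package manual; the output is shown as expected), a session would look as follows.
\begin{verbatim}
gap> s:=ArfNumericalSemigroupClosure(NumericalSemigroup(5,7));;
gap> SmallElementsOfNumericalSemigroup(s);
[ 0, 5, 7, 9 ]
gap> FrobeniusNumberOfNumericalSemigroup(s);
8
\end{verbatim}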
\begin{figure} \centering \includegraphics[width=\textwidth]{fr-g6.pdf} \caption{The binary tree of Arf numerical semigroups of genus up to 6.} \label{fig:bin-tree-6} \end{figure} \section{Fixing the genus and the conductor} In this section we are interested in calculating the set of all Arf numerical semigroups with fixed genus $g$ and conductor $c$. It is well known, and easy to prove, that if $S$ is a numerical semigroup, then $2\mathrm g(S)\ge \mathrm c(S)$ (see for instance \cite[Lemma 2.14]{ns}). From the definition of genus and Frobenius number it also follows easily that $\mathrm g(S)\le \mathrm F(S)$. The only numerical semigroup with genus equal to zero is $\mathbb N$; so we may assume that \[ 1\le g\le c-1<2g. \] As a consequence of Proposition \ref{genus-frob-seq}, we have the following restriction on the lengths of Arf sequences yielding semigroups with prescribed genus $g$ and conductor $c$. \begin{corollary} Let $(x_1,\ldots,x_n)$ be an Arf sequence and let $S=\mathrm S(x_1,\ldots,x_n)$. Then $n=\mathrm c(S)-\mathrm g(S)$. \end{corollary} \begin{algorithm}[h] \KwData{Positive integers $g$ and $c$ with $1\le g\le c-1<2g$} \KwResult{The set of all Arf numerical semigroups with genus $g$ and conductor $c$} $n=c-g$\; $L_1=\{ (x_1)\mid x_1\in \{2,\ldots, \lfloor c/n\rfloor\}\}$\; \For{$k\in \{2,\ldots, n \}$}{ $L_{k}=\left\{(x_1,\ldots,x_k)~\middle \vert~ \begin{matrix} (x_1,\ldots,x_{k-1})\in L_{k-1}, x_k\in\mathrm S(x_1,\ldots,x_{k-1})^* \\ x_1+\cdots+x_{k-1}+(n-(k-1))x_k\le c\end{matrix}\right\}$\; } $L=\{(x_1,\ldots,x_n)\in L_n\mid x_1+\cdots+x_n=c\}$\; \Return $\{ \mathrm S(x_1,\ldots, x_n) \mid (x_1,\ldots, x_n)\in L\}$ \caption{ArfNumericalSemigroupsWithGenusAndFrobeniusNumber\label{alg:arf-genus-frob}} \end{algorithm} The correctness of Algorithm \ref{alg:arf-genus-frob} follows from the next two observations. If $(x_1,\ldots,x_n)$ is an Arf sequence and $\mathrm c(\mathrm S(x_1,\ldots, x_n))=c$, then by Proposition \ref{genus-frob-seq}, $c=x_1+\cdots +x_n$. \begin{enumerate}[(i)] \item As $x_n\ge \cdots \ge x_1$, we deduce $nx_1\le c$. This implies $x_1\le \lfloor c/n\rfloor$. \item Also from $x_n\ge \cdots\ge x_k$, we deduce $x_1+\cdots +x_{k-1}+(n-(k-1))x_k\le c$. \end{enumerate} Figure \ref{fig:plot3d} depicts the number of Arf numerical semigroups with genus $g$ ranging from 1 to 20 and conductor from $g+1$ to $2g$. 
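Algorithm \ref{alg:arf-genus-frob} is short enough to sketch in \texttt{Python} as well. The sketch below is purely illustrative and rests on one assumption, consistent with all the examples above: the nonzero elements of $\mathrm S(x_1,\ldots,x_n)$ below its conductor $x_1+\cdots+x_n$ are the partial sums $x_n,\, x_n+x_{n-1},\ldots$, so membership in $\mathrm S(x_1,\ldots,x_{k-1})^*$ can be tested against these sums; all helper names are ours. \begin{verbatim}
def star_elements(prefix):
    # suffix partial sums x_{k-1}, x_{k-1}+x_{k-2}, ..., x_1+...+x_{k-1};
    # returns them together with the conductor of S(prefix)
    sums, acc = set(), 0
    for x in reversed(prefix):
        acc += x
        sums.add(acc)
    return sums, acc

def arf_with_genus_and_conductor(g, c):
    assert 1 <= g <= c - 1 < 2 * g
    n = c - g                           # length of the Arf sequence
    level = [(x,) for x in range(2, c // n + 1)]
    for k in range(2, n + 1):
        nxt = []
        for seq in level:
            elems, cond = star_elements(seq)
            for x in range(seq[-1], c + 1):
                # x must lie in S(seq)^* and leave room for the tail
                if (x in elems or x >= cond) and \
                        sum(seq) + (n - k + 1) * x <= c:
                    nxt.append(seq + (x,))
        level = nxt
    return [seq for seq in level if sum(seq) == c]

print(arf_with_genus_and_conductor(5, 8))   # [(2, 2, 4), (2, 3, 3)]
\end{verbatim} For $g=5$ and $c=8$ it returns the sequences $(2,2,4)$ and $(2,3,3)$, matching the two semigroups $\langle 4,6,9,11\rangle$ and $\langle 3,8,10\rangle$ of genus 5 and conductor 8 from Example \ref{ex-lag5}.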
\begin{figure}[h] \centering \begin{tabular*}{.4\linewidth}{@{\extracolsep{\fill}}|r|rrrrrrrrrrrrrrrrrrrr} $g$\\ \cline{1-1} 1& 1\\ 2& 1& 1\\ 3& 1& 1& 1\\ 4& 1& 2& 0& 1\\ 5& 1& 2& 2& 0& 1\\ 6& 1& 3& 2& 1& 0& 1\\ 7& 1& 3& 3& 1& 1& 0& 1\\ 8& 1& 4& 3& 3& 0& 1& 0& 1\\ 9& 1& 4& 6& 1& 3& 0& 1& 0& 1\\ 10& 1& 5& 5& 5& 1& 2& 0& 1& 0& 1\\ 11& 1& 5& 8& 4& 3& 1& 2& 0& 1& 0& 1\\ 12& 1& 6& 8& 6& 2& 4& 0& 2& 0& 1& 0& 1\\ 13& 1& 6& 11& 5& 5& 0& 4& 0& 2& 0& 1& 0& 1\\ 14& 1& 7& 11& 12& 3& 5& 1& 3& 0& 2& 0& 1& 0& 1\\ 15& 1& 7& 15& 8& 10& 2& 4& 1& 3& 0& 2& 0& 1& 0& 1\\ 16& 1& 8& 14& 16& 4& 6& 1& 5& 0& 3& 0& 2& 0& 1& 0& 1\\ 17& 1& 8& 19& 13& 10& 4& 7& 0& 5& 0& 3& 0& 2& 0& 1& 0& 1\\ 18& 1& 9& 19& 19& 8& 11& 1& 7& 1& 4& 0& 3& 0& 2& 0& 1& 0& 1\\ 19& 1& 9& 23& 18& 18& 3& 10& 1& 6& 1& 4& 0& 3& 0& 2& 0& 1& 0& 1\\ 20& 1& 10& 23& 29& 9& 13& 4& 8& 1& 7& 0& 4& 0& 3& 0& 2& 0& 1& 0& 1\\ \multicolumn{21}{c}{conductor from $g+1$ to $2g$} \end{tabular*} \phantom{lalalalaaaaaaaaaa} \begin{tikzpicture}[scale=.5] \begin{axis}[xlabel={$g$},ylabel={$c$},zlabel={\# Arf}, xmin=1,xmax=20, ymin=1,ymax=39, zmin=0,zmax=29, enlargelimits=upper, ytick={0,10,...,40} ] \addplot3[only marks, mark size=2] file {data-g-f.dat}; \end{axis} \end{tikzpicture} \caption{Number of Arf numerical semigroups with genus $g$ and conductor $c$} \label{fig:plot3d} \end{figure} \section{Kunz coordinates of Arf numerical semigroups} Let $S$ be a numerical semigroup and $s\in S^*$. Recall that the \emph{Ap\'ery set} of $s$ in $S$ is defined as \[ \mathrm{Ap}(S,s)=\{ n\in S\mid n-s\not\in S\}. \] It is well known (see for instance \cite[Chapter 1]{ns}) that \begin{equation}\label{eq:ap} \begin{matrix} \mathrm{Ap}(S,s)=\{w(0)=0,w(1),\ldots, w(s-1)\},\\ w(i)=\min\{n \in S\mid n\bmod s=i\},\ i\in\{1,\ldots, s-1\}. \end{matrix} \end{equation} Observe that for every $z\in \mathbb Z$, there exist $k\in \mathbb Z$ and $i\in\{0,\ldots,s-1\}$ such that $z=ks+w(i)$. Moreover, $z\in S$ if and only if $k\ge 0$ (see \cite[Chapter 1]{ns}). We will use this well-known fact implicitly in this section. If in addition $S$ is an Arf numerical semigroup and $m$ is its multiplicity, then we know that $S$ has maximal embedding dimension and thus $(\mathrm{Ap}(S,m)\setminus\{0\})\cup\{m\}=\{m,w(1),\ldots, w(m-1)\}$ is the minimal generating system of $S$. For every $k\in \{1,\ldots, m-1\}$, $w(k)=x_km+k$ for some positive integer $x_k$. We say that $(x_1,\ldots, x_{m-1})$ are the \emph{Kunz coordinates} of $S$ \cite{kunz}. Notice that in this definition we can take $k=0$ and obtain $x_0=0$. We are not including $x_0$ in the sequence of Kunz coordinates because it is always $0$. We will use this implicitly in what follows. We will fix the multiplicity, $m$, and for an integer $i$, we will write \[\overline{i} =i\bmod m\] (the remainder of the division of $i$ by $m$). Every Arf numerical semigroup has maximal embedding dimension, and thus its Kunz coordinates fulfill the following system of inequalities \cite{ns-coord}. \begin{equation}\label{eq:med} \begin{array}{cc} x_i\geq 1 & \hbox{for all } i\in \{1,\ldots,m-1\},\\ x_i+x_j-x_{i+j}\geq 1 & \hbox{for all } 1\leq i\leq j\leq m-1, i+j\leq m-1,\\ x_i+x_j-x_{\overline{i+j}}\geq 0 &\hbox{for all } 1\leq i\leq j\leq m-1, i+j> m. \end{array} \end{equation} Notice also that for every $i,j\in \{0,\ldots, m-1\}$, $w(i)+w(j)\in S$ and $w(i)+w(j)\equiv i+j \pmod m$. Hence $w(i)+w(j)=km+w(\overline{i+j})$ for some $k\in\mathbb N$. This implies that $(w(i)+w(j)-w(\overline{i+j}))/m\in \mathbb N$. 
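These observations are easy to check numerically. The following \texttt{Python} sketch (ours; it uses a naive truncated set representation of $S$) computes the Ap\'ery set and the Kunz coordinates of the Arf semigroup $\langle 4,6,9,11\rangle$ from Example \ref{ex-lag5}, and verifies the divisibility claim just made. \begin{verbatim}
B = 80
gens = (4, 6, 9, 11)
S = {0}
for _ in range(20):                # close under sums up to the bound B
    S |= {s + g for s in S for g in gens if s + g <= B}

m = min(s for s in S if s > 0)     # multiplicity, here 4

def w(i):                          # least element of S congruent to i mod m
    return min(n for n in S if n % m == i)

apery = [w(i) for i in range(m)]
kunz = [(w(i) - i) // m for i in range(1, m)]
print(apery, kunz)                 # [0, 9, 6, 11] [2, 1, 2]

# w(i) + w(j) - w((i+j) mod m) is a nonnegative multiple of m
assert all((w(i) + w(j) - w((i + j) % m)) % m == 0
           and w(i) + w(j) >= w((i + j) % m)
           for i in range(m) for j in range(m))
\end{verbatim}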
So we define the \emph{cocycle} of $S$ with respect to $m$ as \[ \mathrm h(i,j)=\frac{w(i)+w(j)-w(\overline{i+j})}{m}. \] Next we see how the Arf condition is written in terms of cocycles. Given a rational number $q$, denote by \[\lceil q\rceil = \min (\mathbb Z\cap [q,\infty)),\ \lfloor q\rfloor =\max (\mathbb Z\cap (-\infty,q]).\] \begin{lemma}\label{ArfCocycleToCocycle} Let \(S\) be a numerical semigroup with multiplicity $m$. Then $S$ is an Arf numerical semigroup if and only if for every $i,j\in\{0,\ldots, m-1\}$, \begin{enumerate}[(i)] \item if $\lceil (w(i)-w(j))/m\rceil \ge 0$, \begin{equation}\label{eq:ArfCondToCocycle-g} \mathrm h(j,j)-\mathrm h(\overline{2j-i} , i) + 2\left\lceil \frac{w(i)-w(j)}{m}\right\rceil \geq 0; \end{equation} \item if $\lceil (w(i)-w(j))/m\rceil < 0$, \begin{equation}\label{eq:ArfCondToCocycle-l} \mathrm h(j,j)-\mathrm h(\overline{2j-i} , i) + \left\lceil \frac{w(i)-w(j)}{m}\right\rceil \geq 0. \end{equation} \end{enumerate} \end{lemma} \begin{proof} Suppose \(S\) is an Arf numerical semigroup. For any \(i,j\in\{0, \ldots, m-1\}\), define \(t=\lceil \frac{w(i)-w(j)}{m}\rceil\). Then $w(j)+mt \geq w(i)$. If $w(i)\ge w(j)$, then $w(j)+mt, w(i)\in S$. By the Arf property \(2w(j)+2mt - w(i)\in S\). This element can be expressed as \[ 2w(j) - w(i) + 2tm=w(\overline{2j-i} ) + (\mathrm h(j,j) - \mathrm h(\overline{2j-i} ,i) + 2t)m, \] and so it belongs to \(S\) if and only if \(\mathrm h(j,j)-\mathrm h(\overline{2j-i} ,i) + 2t \geq 0\). Now if $w(i)<w(j)$, then $w(j)\ge w(i)-mt$, and $w(j),w(i)-mt\in S$. By the Arf property we deduce in this case that $2w(j)+tm-w(i)\in S$. Arguing as above we deduce that this occurs if and only if $\mathrm h(j,j) - \mathrm h(\overline{2j-i} ,i) + t \geq 0$. For the converse, let \(x\geq y \in S\). We can write \(x=w(j)+am, y=w(i)+bm\) for some $i,j\in \{0,\ldots, m-1\}$ and \(a,b\in \mathbb N\). Put again \(t=\lceil\frac{w(i)-w(j)}{m}\rceil\). Then \[ 2x-y = 2w(j)+2am - w(i)-bm= w(\overline{2j-i} ) + (\mathrm h(j,j)-\mathrm h(\overline{2j-i} , i) + 2a-b)m. \] By the condition \(x\geq y\), we have $w(j)+am\ge w(i)+bm$, and consequently $a-b\ge t$. If $t\ge 0$, as $2a-b\geq 2(a-b) \geq 2t$, we deduce $\mathrm h(j,j)-\mathrm h(\overline{2j-i} , i) + 2a-b\ge \mathrm h(j,j)-\mathrm h(\overline{2j-i} , i) + 2t$, which by \eqref{eq:ArfCondToCocycle-g} is nonnegative. This forces $2x-y\in S$. If $t<0$, then $2a-b=a+a-b\ge a-b\ge t$. Arguing as in the preceding case, \eqref{eq:ArfCondToCocycle-l} ensures that $2x-y\in S$. \end{proof} Let us now translate cocycles to the language of Kunz coordinates. \begin{lemma}\label{CocycleFromKunz} Let $S$ be a numerical semigroup with multiplicity $m$ and Kunz coordinates $(x_1,\ldots,x_{m-1})$. Then \[ \mathrm h(i,j) = x_i+x_j-x_{\overline{i+j}}+\left\lfloor \frac{i+j}m\right\rfloor. \] \end{lemma} \begin{proof} By definition $\mathrm h(i,j)=(w(i)+w(j)-w(\overline{i+j}))/m$. This can be expressed in terms of Kunz coordinates as $\mathrm h(i,j)=(x_im+i+x_jm+j-x_{\overline{i+j}}m-\overline{i+j})/m= x_i+x_j-x_{\overline{i+j}}+(i+j-\overline{i+j})/m$. The proof now follows from the equality $(i+j-\overline{i+j})/m=\lfloor (i+j)/m\rfloor$. \end{proof} Notice also that \begin{equation} \label{eq:diff-ap} \lceil (w(i)-w(j))/m\rceil=x_i-x_j+\lceil (i-j)/m\rceil. \end{equation} With this and Lemma~\ref{CocycleFromKunz} we can translate Lemma \ref{ArfCocycleToCocycle} to Kunz coordinates. \begin{proposition} Let $S$ be a numerical semigroup with multiplicity $m$ and Kunz coordinates $(x_1,\ldots, x_{m-1})$. 
Then $S$ has the Arf property if and only if for any \(i,j\in \{0,\ldots, m-1\}\), \begin{enumerate}[(i)] \item if $x_i+\lceil (i-j)/m\rceil\ge x_j$, \begin{equation}\label{eq:from-cocycle-cond-g} x_i - x_{\overline{2j-i}} + 2\left\lceil\frac{i -j}{m}\right\rceil + \left\lfloor \frac{2j}m\right\rfloor -\left\lfloor \frac{\overline{2j-i}+i}m\right\rfloor \geq 0; \end{equation} \item if $x_i+\lceil (i-j)/m\rceil< x_j$, \begin{equation}\label{eq:from-cocycle-cond-l} x_j - x_{\overline{2j-i}} + \left\lceil\frac{i -j}{m}\right\rceil + \left\lfloor \frac{2j}m\right\rfloor -\left\lfloor \frac{\overline{2j-i}+i}m\right\rfloor \geq 0. \end{equation} \end{enumerate} \end{proposition} \begin{proof} From Lemma \ref{CocycleFromKunz} and \eqref{eq:diff-ap} we obtain \begin{align*} \mathrm h(j,j)-\mathrm h(\overline{2j-i},i)+2\left\lceil \frac{w(i)-w(j)}m\right\rceil =\ & 2x_j-x_{\overline{2j}}+\lfloor 2j/m\rfloor\\ & - x_{\overline{2j-i}}-x_i+x_{\overline{2j}}-\lfloor (\overline{2j-i}+i)/m\rfloor \\ & + 2(x_i-x_j)+2\lceil (i-j)/m\rceil\\ =\ & x_i-x_{\overline{2j-i}}+ 2\lceil (i-j)/m\rceil \\ & + \lfloor 2j/m\rfloor -\lfloor (\overline{2j-i}+i)/m\rfloor, \end{align*} and \begin{align*} \mathrm h(j,j)-\mathrm h(\overline{2j-i},i)+\left\lceil \frac{w(i)-w(j)}m\right\rceil =\ & 2x_j-x_{\overline{2j}}+\lfloor 2j/m\rfloor\\ & - x_{\overline{2j-i}}-x_i+x_{\overline{2j}}-\lfloor (\overline{2j-i}+i)/m\rfloor \\ & + (x_i-x_j)+\lceil (i-j)/m\rceil\\ =\ & x_j-x_{\overline{2j-i}}+ \lceil (i-j)/m\rceil \\ & + \lfloor 2j/m\rfloor -\lfloor (\overline{2j-i}+i)/m\rfloor. \end{align*} Now we apply Lemma \ref{ArfCocycleToCocycle} and we are done. \end{proof} \section{Arf numerical semigroups with low multiplicity} In this section we focus on the parametrization of Arf numerical semigroups with multiplicity up to six and given conductor. To this end, we need some preliminary results. The following well known result will be used to provide upper bounds for the conductor of a numerical semigroup. \begin{lemma}[\cite{arf-num-sem}] {\label{2:2}} Let $S$ be an Arf numerical semigroup with conductor $c$, and let $s$ be any element of $S$. If $s+1 \in S$, then $s+k \in S$ for all $k \in {\mathbb N}$ and thus $c \leq s$. \end{lemma} By Selmer's formulas (see for instance \cite[Chapter 1]{ns}), we know that the Frobenius number of $S$ is $\max\mathrm{Ap}(S,m)-m$. \begin{lemma} {\label{2:3}} Let $S$ be an Arf numerical semigroup with multiplicity $m$ and conductor $c$. For each $j \in\{2,3, \ldots , m-1\}$, we have \begin{enumerate}[(i)] \item if $w(j-1) < w(j)$, then $c \leq w(j)-1$, \item if $w(j) < w(j-1)$, then $c \leq w(j-1)$. \end{enumerate} \end{lemma} \begin{proof} \emph{(i)} If $w(j-1) < w(j)$, then $w(j)-w(j-1)-1$ is a nonnegative multiple of $m$ and therefore it is an element of $S$. Thus $w(j-1)+(w(j)-w(j-1)-1)= w(j)-1$ and $w(j)$ are both elements of $S$. Lemma {\ref{2:2}} forces $c \leq w(j)-1$. \emph{(ii)} If $w(j) < w(j-1)$, then, as above, $w(j-1)-w(j)+1 \in S$. Thus $w(j) + (w(j-1)-w(j)+1) = w(j-1)+1$ and $w(j)$ are both elements of $S$. So, $c \leq w(j-1)$ by Lemma {\ref{2:2}}. \end{proof} This has the corresponding consequence on Kunz coordinates. \begin{corollary}\label{cor:change} Let $S$ be an Arf numerical semigroup with multiplicity $m$, conductor $c$ and Kunz coordinates $(x_1, \ldots , x_{m-1})$. For every $i\in\{2,\ldots, m-1\}$, \begin{enumerate}[(i)] \item if $x_{i-1} \leq x_i$, then $\frac{c-i+1}{m} \leq x_i$; \item if $x_i \leq x_{i-1}$, then $\frac{c-i+1}{m} \leq x_{i-1}$. 
\end{enumerate} \end{corollary} \begin{proof} If $x_{i-1} \leq x_i$, then $x_{i-1}m+i-1 \leq x_im+i-1$, whence $w(i-1) < w(i).$ Therefore, $c \leq w(i)-1$ by Lemma \ref{2:3}. Hence $c \leq x_im+i-1$ and $\frac{c-i+1}{m} \leq x_i$. The proof of \emph{(ii)} is similar. \end{proof} Let $S$ be a numerical semigroup with multiplicity $m$ and conductor $c$. As every nonnegative multiple of $m$ is in $S$ and $c-1\not\in S$, it follows that $c\not\equiv 1\!\pmod m$. The following lemma shows that $w(1)$ and $w(m-1)$ are completely determined by the conductor and $m$ in an Arf numerical semigroup. \begin{lemma} {\label{2:4}} Let $S$ be an Arf numerical semigroup with multiplicity $m$ and conductor $c$. \begin{enumerate}[(i)] \item $w(1)= \begin{cases} c+1 & {\mbox{if }} c \equiv 0\!\pmod m,\\ c-\overline{c}+m+1 & \hbox{otherwise}. \end{cases}$ \item $w(m-1) = c-\overline{c}+m-1.$ \end{enumerate} \end{lemma} \begin{proof} We know that $\overline{c} \in \{0, 2, \ldots , m-1\}$. Let us first consider the case $\overline{c}=0$. Since $ms \in S$ for all $s \in {\mathbb{N}}$, $ms + 1 \not \in S$ and $ms + m - 1 \not \in S$ for $s < \frac{c}{m}$ by Lemma {\ref{2:2}}. Hence $w(1) = m \cdot \frac{c}{m} + 1 = c+1$ and $w(m-1) = m \cdot \frac{c}{m} + m - 1=c+m-1$. This proves \emph{(i)} and \emph{(ii)} when $\overline{c}=0$. It remains to prove \emph{(i)} and \emph{(ii)} for the case $\overline{c} \neq 0$. Again since $ms \in S$ for all $s \in {\mathbb{N}}$, Lemma {\ref{2:2}} implies that $ms + 1 \not \in S$ for $s \leq \frac{c-\overline{c}}{m}$ and $ms + m - 1 \not \in S$ for $s < \frac{c-\overline{c}}{m}$. Therefore, $w(1) = m \cdot (\frac{c-\overline{c}}{m}+1) + 1 = c-\overline{c}+m+1$ and $w(m-1) = m \cdot \frac{c-\overline{c}}{m} + m - 1= c-\overline{c}+m - 1$ when $\overline{c} \neq 0$. This completes the proof. \end{proof} Let us translate Lemma \ref{2:4} to the language of Kunz coordinates. \begin{corollary}\label{cor:eq-c} Let $S$ be an Arf numerical semigroup with multiplicity $m$ and conductor $c$. Then, \begin{equation}\label{eq:eq-c} x_1=\left\lceil \frac{c}m\right\rceil,\ x_{m-1}=\left\lfloor \frac{c}m\right\rfloor. \end{equation} \end{corollary} \begin{proof} If $\overline{c}=0$, then we know that $w(1)=x_1m+1=c+1$, whence $x_1=c/m$. Also, $w(m-1)=x_{m-1}m+m-1=c+m-1$. Hence $x_{m-1}=c/m$. If $\overline{c}\neq 0$, then $w(1)=x_1m+1=c-\overline{c}+m+1$. Thus, $x_1=(c-\overline{c})/m+1$. In this setting, $w(m-1)=x_{m-1}m+m-1=c-\overline{c}+m-1$. Hence $x_{m-1}=(c-\overline{c})/m$. \end{proof} \begin{lemma} {\label{2:5}} Let $S$ be an Arf numerical semigroup with multiplicity $m >2$. For any integer $k$ with $0 < k < \frac{m}{2}$, we have \[w(2k) \leq w(k)+k \ {\mbox{and}} \ w(m-2k) \leq w(m-k) + m-k.\] \end{lemma} \begin{proof} Let $m>2$ and let $0 < k < \frac{m}{2}$. Note that $w(k)-k$ is a (nonnegative) multiple of $m$ and thus it is an element of $S$. Therefore $2w(k)-(w(k)-k)= w(k) + k \in S$ by the Arf property. This implies $w(2k) \leq w(k)+k$ since $w(k)+k \equiv 2k\!\pmod m$. Similarly, $2w(m-k)-(w(m-k)-(m-k))=w(m-k)+(m-k) \in S$ which implies $w(m-2k) \leq w(m-k)+(m-k)$. \end{proof} As a consequence of Lemma \ref{2:5}, in the Arf setting, we can add more inequalities to the above system of inequalities. \begin{corollary}\label{cor:more-eq} Let $S$ be an Arf numerical semigroup with multiplicity $m$ and Kunz coordinates $(x_1,\ldots, x_{m-1})$. Then for every integer $k$ with $0<k<\frac{m}2$, \begin{equation}\label{eq:more-eq} x_{2k}\le x_k \hbox{ and } x_{m-2k}\le x_{m-k}+1. 
\end{equation} \end{corollary} \begin{proof} Notice that $w(2k)=x_{2k}m+2k\le w(k)+k=(x_km+k)+k$, and so $x_{2k}\le x_k$. For the other inequality, observe that $w(m-2k)=x_{m-2k}m+m-2k\le w(m-k)+m-k=(x_{m-k}m+m-k)+m-k= (x_{m-k}+1)m+m-2k$, and consequently $x_{m-2k}\le x_{m-k}+1$. \end{proof} \subsection{Arf numerical semigroups with multiplicity one} The only numerical semigroup with multiplicity one is $\mathbb N$, which is trivially Arf. \subsection{Arf numerical semigroups with multiplicity two} Numerical semigroups with multiplicity $2$ are completely determined by their conductor. In fact, if $S$ is a numerical semigroup with multiplicity $2$ and conductor $c$, then $c$ is an even number and $S = \langle 2, c+1 \rangle$. It is easily seen by directly applying the Arf pattern that every numerical semigroup with multiplicity $2$ is an Arf numerical semigroup. \subsection{Arf numerical semigroups with multiplicity three} Numerical semigroups with multiplicity $3$ or more are not completely determined by their conductor alone; for multiplicity $3$, the genus is also needed to determine them completely~\cite{tres-cuatro}. In that paper, formulas for the number of numerical semigroups with multiplicity $3$ having a prescribed Frobenius number or genus are given. As we see next, if the Arf property is assumed, then the semigroup is fully determined by the multiplicity and the conductor. \begin{proposition} Let $c$ be an integer such that $c \geq 3$ and $c \not \equiv 1\!\pmod 3$. Then there is exactly one Arf numerical semigroup $S$ with multiplicity $3$ and conductor $c$ given by \begin{enumerate}[(i)] \item $S=\langle 3, c+1, c+2 \rangle$ if $c \equiv 0\!\pmod 3$, \item $S=\langle 3, c, c+2 \rangle$ if $c \equiv 2\!\pmod 3$. \end{enumerate} \end{proposition} \begin{proof} \emph{(i)} If $c \equiv 0\!\pmod 3$, then $w(1)=c + 1$ and $w(2)=c + 2$ by Lemma {\ref{2:4}}. Thus $S=\langle 3, c+1, c+2 \rangle$. \emph{(ii)} If $c \equiv 2\!\pmod 3$, then $w(1)=c+2$ and $w(2)=c$ by Lemma {\ref{2:4}}. Hence $S=\langle 3, c, c+2 \rangle$. \end{proof} \begin{example} The only Arf numerical semigroup with multiplicity $3$ and Frobenius number $10$ (conductor 11) is $\langle 3,11,13 \rangle=\{0,3,6,9,11, \rightarrow\}.$ The only Arf numerical semigroup with multiplicity $3$ and Frobenius number $11$ (conductor 12) is $\langle 3,13,14 \rangle=\{0,3,6,9,12, \rightarrow \}.$ \end{example} \subsection{Arf numerical semigroups with multiplicity four} In~\cite{tres-cuatro}, it is shown that numerical semigroups with multiplicity $4$ are completely determined by their genus, Frobenius number and ratio (the least minimal generator greater than the multiplicity). Formulas for the number of numerical semigroups with multiplicity $4$ and given genus and/or Frobenius number are also presented in that paper. Of course all those formulas can be expressed by using the conductor instead of the Frobenius number. Also in \cite{gen-fun} formulas for the number of numerical semigroups with multiplicity 4 and fixed Frobenius number are given; these are obtained by means of short generating functions (also when both the genus and the Frobenius number are fixed). Let $S$ be an Arf numerical semigroup with multiplicity $4$ and conductor $c$. Then $c \equiv 0, 2$ or $3 \!\pmod 4$. The following proposition describes all Arf numerical semigroups with multiplicity $4$ and conductor $c$. \begin{proposition} {\label{4:1}} Let $S$ be an Arf numerical semigroup with multiplicity $4$ and conductor $c$. 
\begin{enumerate}[(i)] \item If $c \equiv 0\!\pmod 4$, then $S=\langle 4, 4t+2, c+1, c+3 \rangle$ for some $t\in\{1, \ldots , \frac{c}{4}\}$. \item If $c \equiv 2\!\pmod 4$, then $S=\langle 4, 4t+2, c+1, c+3 \rangle$ for some $t\in\{1, \ldots , \frac{c-2}{4}\}$. \item If $c \equiv 3\!\pmod 4$, then $S=\langle 4, c, c+2, c+3 \rangle.$ \end{enumerate} \end{proposition} \begin{proof} We first note that all the semigroups given in the proposition are Arf numerical semigroups. Let $S$ be an Arf numerical semigroup with multiplicity $4$ and conductor $c$. As we have already noted, $c \equiv k\!\pmod 4$ where $k \in \{0,2,3\}$. We have $w(3)=c-k+3$ and \[ w(1)=\begin{cases} c+1 & {\mbox{if }} k = 0,\\ c-k+5 & {\mbox{if }} k \neq 0, \end{cases}\] by Lemma {\ref{2:4}}. \emph{(i)} If $c \equiv 0\!\pmod 4$, then $w(1)=c+1$ and $w(3)=c+3$, which by Selmer's formulas is the largest element of $\mathrm{Ap}(S,4)$. Since $w(2) < c+3$, we conclude that $w(2)=4t+2$ with $1 \leq t \leq \frac{c}{4}$. Thus we have $S=\langle 4, 4t+2, c+1, c+3 \rangle, 1 \leq t \leq \frac{c}{4}$. \emph{(ii)} If $c \equiv 2\!\pmod 4$, then $ w(3)=c+1$ and $w(1)=c+3$. In this setting, $w(1)=c+3$ is the largest element of $\mathrm{Ap}(S,4)$. Therefore $w(2) < c+3$, which implies that $w(2)=4t+2$ with $1 \leq t \leq \frac{c-2}{4}$. Thus we have $S=\langle 4, 4t+2, c+1, c+3 \rangle$, for some integer $t$ with $1 \leq t \leq \frac{c-2}{4}$. \emph{(iii)} If $c \equiv 3\!\pmod 4$, then $ w(3)=c$ and $w(1)=c + 2$. In this case $c + 3=w(2)$ is the largest element of $\mathrm{Ap}(S,4)$. Thus $S=\langle 4, c, c+2, c+3 \rangle.$ \end{proof} Proposition {\ref{4:1}} can be used to count Arf numerical semigroups with multiplicity $4$ and conductor $c$. Compare this result with the formula obtained for maximal embedding dimension numerical semigroups with fixed Frobenius number and genus, and multiplicity 4 given in \cite{gen-fun}. Let $\mathrm n_A(c,m)$ denote the number of Arf numerical semigroups with multiplicity $m$ and conductor $c$. \begin{corollary} Let $c$ be an integer such that $c \geq 4$ and $c \not \equiv 1\!\pmod 4$. Then \[\mathrm n_A(c,4)=\begin{cases} \frac{c}{4} & {\mbox{if }} c \equiv 0\!\pmod 4,\\ \frac{c-2}{4} & {\mbox{if }} c \equiv 2\!\pmod 4,\\ 1 & {\mbox{if }} c \equiv 3\!\pmod 4. 
\end{cases}\] \end{corollary} \begin{example} The five Arf numerical semigroups with multiplicity $4$ and conductor $20$ (Frobenius number 19) are $$\langle 4,6,21,23 \rangle=\{0,4,6,8,10,12,14,16,18,20, \rightarrow\},$$ $$\langle 4,10,21,23 \rangle=\{0,4,8,10,12,14,16,18,20, \rightarrow\},$$ $$\langle 4,14,21,23 \rangle=\{0,4,8,12,14,16,18,20, \rightarrow\},$$ $$\langle 4,18,21,23 \rangle=\{0,4,8,12,16,18,20, \rightarrow\},$$ $$\langle 4,21,22,23 \rangle=\{0,4,8,12,16,20, \rightarrow\}.$$ The five Arf numerical semigroups with multiplicity $4$ and conductor $22$ (Frobenius number 21) are $$\langle 4,6,23,25 \rangle=\{0,4,6,8,10,12,14,16,18,20,22, \rightarrow\},$$ $$\langle 4,10,23,25 \rangle=\{0,4,8,10,12,14,16,18,20,22, \rightarrow\},$$ $$\langle 4,14,23,25 \rangle=\{0,4,8,12,14,16,18,20,22, \rightarrow\},$$ $$\langle 4,18,23,25 \rangle=\{0,4,8,12,16,18,20,22, \rightarrow\},$$ $$\langle 4,22,23,25 \rangle=\{0,4,8,12,16,20,22, \rightarrow\}.$$ The only Arf numerical semigroup with multiplicity $4$ and conductor $23$ (Frobenius number 22) is $$\langle 4,23,25,26 \rangle=\{0,4,8,12,16,20,23, \rightarrow\}.$$ \end{example} \subsection{Arf numerical semigroups with multiplicity five} Let $S$ be an Arf numerical semigroup with multiplicity $5$ and conductor $c$. Then $c \equiv 0, 2, 3$ or $4 \!\pmod 5$, and the following proposition describes all Arf numerical semigroups with multiplicity $5$ and conductor $c$. \begin{proposition} {\label {5:1}} Let $S$ be an Arf numerical semigroup with multiplicity $5$ and conductor $c$. \begin{enumerate}[(i)] \item If $c \equiv 0\!\pmod 5$, then either $S=\langle 5, c-2, c+1, c+2, c+4 \rangle$ or $S=\langle 5, c+1, c+2, c+3, c+4 \rangle$. \item If $c \equiv 2\!\pmod 5$, then $S=\langle 5, c, c+1, c+2, c+4 \rangle$. \item If $c \equiv 3\!\pmod 5$, then $S=\langle 5, c, c+1, c+3, c+4 \rangle$. \item If $c \equiv 4\!\pmod 5$, then either $S=\langle 5, c-2, c, c+2, c+4 \rangle$ or $S=\langle 5, c, c+2, c+3, c+4 \rangle$. \end{enumerate} \end{proposition} \vspace*{-0.1 cm} \begin{proof} We first note that all the semigroups given in the proposition are Arf numerical semigroups. Let $S$ be an Arf numerical semigroup with multiplicity $5$ and conductor $c$. As we have already noted, $c-1$ cannot be a multiple of $5$. So, $c \equiv k\!\pmod 5$ for some $k \in \{0,2,3,4\}$. Lemma \ref{2:4} asserts that $w(4)=c-k+4$ and \[w(1)=\begin{cases} c+1 & {\mbox{if }} k = 0,\\ c-k+6 & {\mbox{if }} k \neq 0. \end{cases} \] Moreover, applying Lemma {\ref{2:5}} \begin{equation}\label{eq:5-2} w(4) \leq w(2)+2, \end{equation} and from $w(1)=w(5-4) \leq w(5-2)+(5-2)=w(3)+3$ we get \begin{equation}\label{eq:5-3} w(1) \leq w(3)+3. \end{equation} Let us also note that $w(i) \leq c+4$ for all $i\in\{1, 2, 3, 4\}$. \emph{(i)} If $c \equiv 0\!\pmod 5$, then $ w(1)=c+1$ and $w(4)=c+4$. In light of inequality \eqref{eq:5-2}, we get $c+4=w(4) \leq w(2)+2 \leq c+6$ which implies $w(2)=c + 2$. Similarly, using \eqref{eq:5-3}, we get $c+1=w(1)\leq w(3)+3 \leq c+7$, which yields $c-2 \leq w(3) \leq c+4$. This implies $w(3)=c-2$ or $w(3)=c+3$. It follows that \[S=\langle 5, c-2, c+1, c+2, c+4 \rangle \ {\mbox{or}} \ S=\langle 5, c+1, c+2, c+3, c+4 \rangle.\] \emph{(ii)} If $c \equiv 2\!\pmod 5$, then $w(1)=c+4$ and $w(4)=c+2$. Using inequality \eqref{eq:5-2}, we get $c+2 = w(4) \leq w(2) + 2 \leq c+6$, which gives $c \leq w(2)\leq c+4$. Consequently, $w(2)=c$. 
Analogously, from \eqref{eq:5-3}, we get $c+4 = w(1) \leq w(3)+3 \leq c+7$, which yields $ c+1 \leq w(3) \leq c+4$ and this implies $w(3)= c+1$. It follows that \[S=\langle 5, c, c+1, c+2, c+4 \rangle.\] \emph{(iii)} If $c \equiv 3\!\pmod 5$, then $w(1)=c+3$ and $w(4)=c+1$. In this case, $c+4=w(2)$ is the largest element of $\mathrm{Ap}(S,5)$. As before, \eqref{eq:5-3} yields $c+3=w(1) \leq w(3)+3 \leq c+7$, and thus $c \leq w(3) \leq c+4$. Hence $w(3)=c$, and \[S=\langle 5, c, c+1, c+3, c+4 \rangle.\] \emph{(iv)} If $c \equiv 4\!\pmod 5$, then $ w(1)=c+2$ and $w(4)=c$. In this case, $c+4=w(3)$ is the largest element of $\mathrm{Ap}(S,5)$. Applying \eqref{eq:5-2}, we get $c= w(4) \leq w(2)+2 \leq c+6$, whence $c-2 \leq w(2) \leq c+4$. Thus $w(2)=c - 2$ or $w(2)=c+3$. It follows that \[S=\langle 5, c-2, c, c+2, c+4 \rangle \ {\mbox{or}} \ S=\langle 5, c, c+2, c+3, c+4 \rangle.\qedhere \] \end{proof} As a consequence of Proposition {\ref{5:1}}, we can count the number of Arf numerical semigroups with multiplicity $5$ and conductor $c$, with $c$ an integer greater than or equal to five and $c \not \equiv 1\!\pmod 5$. \begin{corollary} Let $c$ be an integer such that $c \geq 5$ and $c \not \equiv 1\!\pmod 5$. Then \[\mathrm n_A(c,5)= \begin{cases} 2 & {\mbox{if }} c \equiv 0\!\pmod 5 \ {\mbox{or}} \ c \equiv 4\!\pmod 5,\\ 1 & {\mbox{if }} c \equiv 2\!\pmod 5 \ {\mbox{or}} \ c \equiv 3\!\pmod 5. \end{cases}\] \end{corollary} \begin{example} The two Arf numerical semigroups with multiplicity $5$ and conductor $30$ (Frobenius number 29) are $\langle 5,28,31,32,34 \rangle=\{0,5,10,15,20,25,28,30, \rightarrow\}$ and $ \langle 5,31,32,33,34 \rangle=\{0,5,10,15,20,25,30, \rightarrow\}.$ \begin{verbatim} gap> l:=NumericalSemigroupsWithFrobeniusNumber(29);; gap> l5:=Filtered(l,s->MultiplicityOfNumericalSemigroup(s)=5);; gap> Filtered(l5,IsArfNumericalSemigroup); [ <Numerical semigroup with 5 generators>, <Numerical semigroup with 5 generators> ] gap> List(last,MinimalGenerators); [ [ 5, 28, 31, 32, 34 ], [ 5, 31, 32, 33, 34 ] ] \end{verbatim} The only Arf numerical semigroup with multiplicity $5$ and conductor $32$ (Frobenius number 31) is $\langle 5,32,33,34,36 \rangle=\{0,5,10,15,20,25,30,32, \rightarrow\}.$ The only Arf numerical semigroup with multiplicity $5$ and conductor $33$ (Frobenius number 32) is $\langle 5,33,34,36,37 \rangle=\{0,5,10,15,20,25,30,33, \rightarrow\}.$ The two Arf numerical semigroups with multiplicity $5$ and conductor $34$ (Frobenius number $33$) are \[ \begin{array}{l} \langle 5,32,34,36,38 \rangle=\{0,5,10,15,20,25,30,32,34, \rightarrow\},\\ \langle 5,34,36,37,38 \rangle=\{0,5,10,15,20,25,30,34, \rightarrow\}. \end{array} \] \end{example} \subsection{Arf numerical semigroups with multiplicity six} To determine all Arf numerical semigroups with multiplicity $6$ and a given conductor $c$, we will make use of the ratio of a numerical semigroup. Recall that for a given numerical semigroup $S$, the \emph{ratio} is the smallest integer in $S$ that is not a multiple of its multiplicity, or in other words, the smallest minimal generator greater than the multiplicity \cite{tres-cuatro}. We will use $r$ to denote the ratio of $S$. Let $S$ be an Arf numerical semigroup with multiplicity $6$ and conductor $c$. Then $c \equiv 0, 2, 3, 4$ or $5 \!\pmod 6$. The five elements $w(1),\ldots,w(5)$ of $\mathrm{Ap}(S,6)\setminus\{0\}$ are bounded by $c+5$ and have pairwise distinct residues modulo $6$, so they cannot all lie in $\{c+2,\ldots,c+5\}$. As the ratio of $S$ is the least of them, it satisfies \begin{equation*} r \leq c +1. 
\end{equation*} The following proposition describes all Arf numerical semigroups with multiplicity $6$ and conductor $c$. \begin{proposition} {\label{6:1}} Let $S$ be an Arf numerical semigroup with multiplicity $6$ and conductor $c$. \begin{enumerate}[(i)] \item If $c \equiv 0\!\pmod 6$, then $S$ equals one of the following numerical semigroups \[\begin{array}{l} \langle 6, c+1, c+2, c+3, c+4, c+5 \rangle,\\ \langle 6, 6u+2, 6u+4, c+1, c+3, c+5 \rangle,\\ \langle 6, 6u+3, c+1, c+2, c+4, c+5 \rangle, \\ \langle 6, 6u+4, 6u+8, c+1, c+3, c+5 \rangle, \end{array}\] for some integer $u$ such that $1 \leq u \leq \frac{c}{6} - 1$. \item If $c \equiv 2\!\pmod 6$, then $S$ is of one of the following forms \[\begin{array} {l} \langle 6, 6t+2, 6t+4, c+1, c+3, c+5 \rangle,\\ \langle 6, 6u+3, c, c+2, c+3, c+5 \rangle ,\\ \langle 6, 6u+4, 6u+8, c+1, c+3, c+5 \rangle, \end{array}\] for some integers $t$ and $u$ with $1 \leq t \leq \frac{c - 2}{6} $ and $1 \leq u \leq \frac{c - 2}{6} - 1$. \item If $c \equiv 3\!\pmod 6$, then \[S=\langle 6, 6t+3, c+1, c+2, c+4, c+5 \rangle,\] for some integer $t$ such that $1 \leq t \leq \frac{c - 3}{6}$. \item If $c \equiv 4\!\pmod 6$, then $S$ is equal to one of the following numerical semigroups \[\begin{array} {l} \langle 6, 6t+2, 6t+4, c +1, c+3, c+5 \rangle,\\ \langle 6, 6t+4, 6t+8, c +1, c+3, c+5 \rangle, \end{array}\] for some integer $t$ with $1 \leq t \leq \frac{c - 4}{6}$. \item If $c \equiv 5\!\pmod 6$, then $S$ is of one of the following forms \[\begin{array} {l} \langle 6, c, c+2, c+3, c+4, c+5 \rangle,\\ \langle 6, 6t+3, c, c+2, c+3, c+5 \rangle, \end{array}\] for some integer $t$ with $1 \leq t \leq \frac{c - 5}{6}$. \end{enumerate} \end{proposition} \begin{proof} We first note that all the semigroups given in the proposition are Arf numerical semigroups. Let $S$ be an Arf numerical semigroup with multiplicity $6$ and conductor $c$. As we have already mentioned, $c \equiv k\!\pmod 6$ for some $k \in \{0,2,3,4,5\}$. By Lemma \ref{2:4}, we have $w(5)=c-k+5$ and \begin{equation}\label{eq:6-1} w(1)= \begin{cases} c+1 & {\mbox{if }} k = 0,\\ c-k+7 & {\mbox{if }} k \neq 0. \end{cases} \end{equation} Also, $ w(4) \leq w(2)+2$ and $w(2)=w(6-4) \leq w(6-2)+(6-2)=w(4)+4$ by Lemma {\ref{2:5}}. Combining these two inequalities we get \begin{equation}\label{eq:6-2} w(4)-2 \leq w(2) \leq w(4)+4. \end{equation} Let us also note that $w(i) \leq c+5$ for all $i\in\{1, 2, 3, 4, 5\}$. \emph{(i)} If $c \equiv 0\!\pmod 6$, then $ w(1)=c+1$ and $w(5)=c+5$ by \eqref{eq:6-1}. The ratio $r$ of $S$ is one of $w(1)=c+1$, $w(2)$, $w(3)$ or $w(4)$. \begin{enumerate}[--] \item If $r = c+1$, then $w(2)= c+2$, $w(3)= c+3$, $w(4)= c+4$, and consequently \[S=\langle 6, c+1, c+2, c+3, c+4, c+5 \rangle.\] \item If $r = w(2)$, then $w(2)<c$, $w(2) < w(3)$ and $w(2)<w(4)$ by the definition of ratio. Hence $w(3)=c+3$ by Lemma {\ref{2:3}} and $w(2) = w(4)-2$ or equivalently, $w(4)=w(2)+2$ by \eqref{eq:6-2}. Write $w(2)=6u+2$. Then $w(4)=6u+4$ and \[S=\langle 6, 6u+2, 6u+4, c+1, c+3, c+5 \rangle,\] with $1 \leq u \leq \frac{c}{6}-1$. \item If $r = w(3)$, then $w(3)<c$. Also $w(3) < w(2)$ and $w(3) < w(4)$ by the definition of ratio. Hence $w(2)= c+2$ and $w(4)= c+4$ by Lemma {\ref{2:3}}. By setting $w(3)=6u+3$, we get \[S=\langle 6, 6u+3, c+1, c+2, c+4, c+5 \rangle,\] with $1 \leq u \leq \frac{c}{6}-1.$ \item If $r = w(4)$, then the definition of ratio forces $w(4)<c$, $w(4) < w(3)$ and $w(4) < w(2)$. Hence $w(3)= c+3$ by Lemma {\ref{2:3}} and $w(2) = w(4)+4$ by \eqref{eq:6-2}. 
Put $w(4)=6u+4$. Then $w(2)=6u+8$ and we have \[S=\langle 6, 6u+4, 6u+8, c+1, c+3, c+5 \rangle,\] with $1 \leq u \leq \frac{c}{6}-1.$ \end{enumerate} \emph{(ii)} If $c \equiv 2\!\pmod 6$, then $ w(1)=c+5$ and $w(5)=c+3$ by \eqref{eq:6-1}. Also we have in this setting that $r\in\{ w(2), w(3), w(4)\}$. \begin{enumerate}[--] \item If $r = w(2)$, the definition of ratio yields $w(2) \leq c$, $w(2) < w(3)$ and $w(2) < w(4)$. Hence $w(3)= c+1$ by Lemma {\ref{2:3}}, and $w(2) = w(4)-2$, or equivalently, $w(4)=w(2)+2$ by \eqref{eq:6-2}. Write $w(2)=6t+2$. Then $w(4)=6t+4$ and we obtain \[S=\langle 6, 6t+2, 6t+4, c+1, c+3, c+5 \rangle,\] with $1 \leq t \leq \frac{c-2}{6}$. \item If $r = w(3)$, then $w(3)< w(2)$ and $w(3) < w(4)$ by the definition of the ratio. Hence $w(4)= c+2$ and $w(2)=c$ by Lemma {\ref{2:3}}. Note also that $w(3) \leq c-5$, since $w(3)<c$ and $w(3)\equiv c+1 \!\pmod 6$. By denoting $w(3)=6u+3$, we get \[S=\langle 6, 6u+3, c, c+2, c+3, c+5 \rangle,\] where $1 \leq u \leq \frac{c-2}{6}-1$. \item If $r = w(4)$, then $w(4)<c$, $w(4)<w(2)$ and $w(4)<w(3)$ by the definition of ratio. Hence $w(3)= c+1$ by Lemma {\ref{2:3}}, and $w(2) = w(4)+4$ by \eqref{eq:6-2}. Put $w(4)=6u+4$. Then $w(2) = 6u+8$, and we have \[S=\langle 6, 6u+4, 6u+8, c+1, c+3, c+5 \rangle,\] with $1 \leq u \leq \frac{c-2}{6}-1$. \end{enumerate} \emph{(iii)} If $c \equiv 3\!\pmod 6$, then $ w(1)=c+4$ and $w(5)=c+2$ by \eqref{eq:6-1}. In this case, $c+5=w(2)$ is the largest element in $\mathrm{Ap}(S,6)$. Using \eqref{eq:6-2}, $c+5 =w(2) \leq w(4)+4$ which implies $c+1 \leq w(4)$ and thus $w(4)=c+1$. Therefore, the only possibility for the ratio is $r = w(3)$ and if we express $w(3)=6t+3$, we get \[S=\langle 6, 6t+3, c+1, c+2, c+4, c+5 \rangle,\] where $1 \leq t \leq \frac{c-3}{6}$. \emph{(iv)} If $c \equiv 4\!\pmod 6$, then $ w(1)=c+3$ and $w(5)=c+1$ by \eqref{eq:6-1}. In this case, $c+5 =w(3)$ is the largest element of $\mathrm{Ap}(S,6)$. Since $w(4) \leq c$, the ratio $r$ of $S$ is either $w(2)$ or $w(4)$. \begin{enumerate}[--] \item If $r = w(2)$, then $w(2)< w(4)$ and thus $w(4)=w(2)+2$ by \eqref{eq:6-2}. Write $w(2)$ as $w(2)=6t+2$. Then $w(4)=6t+4$ and \[S=\langle 6, 6t+2, 6t+4, c+1, c+3, c+5 \rangle,\] with $1 \leq t \leq \frac{c-4}{6}$. \item If $r = w(4)$, then $w(4) < w(2)$ and thus $w(2)=w(4)+4$ by \eqref{eq:6-2}. Put $w(4)=6t+4$. Then $w(2)=6t+8$ and $$S=\langle 6, 6t+4, 6t+8, c+1, c+3, c+5 \rangle$$ \noindent where $1 \leq t \leq \frac{c-4}{6}.$ \end{enumerate} \emph{(v)} If $c \equiv 5\!\pmod 6$, then $ w(1)=c+2$ and $w(5)=c$ by \eqref{eq:6-1}. In this case, $c+5=w(4)$ is the largest element of $\mathrm{Ap}(S,6)$. Using \eqref{eq:6-2}, we obtain $c+5 =w(4) \leq w(2)+2$ and then $c+3 \leq w(2)$. This implies $w(2)=c+3$. Therefore, either $r = w(5) = c$ or $r = w(3)$. \begin{enumerate}[--] \item If $r = c$, then \[S=\langle 6, c, c+2, c+3, c+4, c+5 \rangle.\] \item If $r = w(3)$ and if we put $w(3)=6t+3$, then \[S=\langle 6, 6t+3, c, c+2, c+3, c+5 \rangle,\] where $1 \leq t \leq \frac{c-5}{6}$.\qedhere \end{enumerate} \end{proof} If $c$ is an integer such that $c \geq 6$ and $c \not \equiv 1\!\pmod 6$, then Proposition {\ref{6:1}} can be used to count Arf numerical semigroups with multiplicity $6$ and conductor $c$. \begin{corollary} Let $c$ be an integer such that $c \geq 6$ and $c \not \equiv 1\!\pmod 6$. 
Then \[\mathrm n_A(c,6)= \begin{cases} \frac{c}{2}-2 & {\mbox{if }} c \equiv 0\!\pmod 6, \\ \frac{c-2}{2}-2 & {\mbox{if }} c \equiv 2\!\pmod 6, \\ \frac{c-3}{6} & {\mbox{if }} c \equiv 3\!\pmod 6,\\ \frac{c-4}{3} & {\mbox{if }} c \equiv 4\!\pmod 6, \\ \frac{c+1}{6} & {\mbox{if }} c \equiv 5\!\pmod 6. \end{cases} \] \end{corollary} \begin{example} The 13 Arf numerical semigroups with multiplicity $6$ and conductor $30$ (Frobenius number 29) are $$ \langle 6,31,32,33,34,35 \rangle=\{0,6,12,18,24,30, \rightarrow\},$$ $$\langle 6,8,10,31,33,35 \rangle=\{0,6,8,10,12,14,16,18,20,22,24,26,28,30, \rightarrow\},$$ $$ \langle 6,14,16,31,33,35 \rangle=\{0,6,12,14,16,18,20,22,24,26,28,30, \rightarrow\},$$ $$ \langle 6,20,22,31,33,35 \rangle=\{0,6,12,18,20,22,24,26,28,30, \rightarrow\},$$ $$ \langle 6,26,28,31,33,35 \rangle=\{0,6,12,18,24,26,28,30, \rightarrow\},$$ $$\langle 6,9,31,32,34,35 \rangle=\{0,6,9,12,15,18,21,24,27,30, \rightarrow\},$$ $$\langle 6,15,31,32,34,35 \rangle=\{0,6,12,15,18,21,24,27,30, \rightarrow\},$$ $$\langle 6,21,31,32,34,35 \rangle=\{0,6,12,18,21,24,27,30, \rightarrow\},$$ $$\langle 6,27,31,32,34,35 \rangle=\{0,6,12,18,24,27,30, \rightarrow\},$$ $$\langle 6,10,14,31,33,35 \rangle=\{0,6,10,12,14,16,18,20,22,24,26,28,30, \rightarrow\},$$ $$\langle 6,16,20,31,33,35 \rangle=\{0,6,12,16,18,20,22,24,26,28,30, \rightarrow\},$$ $$\langle 6,22,26,31,33,35 \rangle=\{0,6,12,18,22,24,26,28,30, \rightarrow\},$$ $$\langle 6,28,32,31,33,35 \rangle=\{0,6,12,18,24,28,30, \rightarrow\}.$$ \end{example}
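The counting formulas obtained in this section are straightforward to implement. The following \texttt{Python} sketch (the function name is ours) collects $\mathrm n_A(c,m)$ for multiplicities $2\le m\le 6$, where $c\ge m$ and $c\not\equiv 1\pmod m$ (for $m=2$ this forces $c$ even, and there $\mathrm n_A(c,2)=1$ since $S=\langle 2,c+1\rangle$). \begin{verbatim}
def n_A(c, m):
    """Number of Arf numerical semigroups with multiplicity m
    and conductor c, for 2 <= m <= 6."""
    assert c >= m and c % m != 1
    k = c % m
    if m in (2, 3):
        return 1
    if m == 4:
        return {0: c // 4, 2: (c - 2) // 4, 3: 1}[k]
    if m == 5:
        return 2 if k in (0, 4) else 1
    if m == 6:
        return {0: c // 2 - 2, 2: (c - 2) // 2 - 2, 3: (c - 3) // 6,
                4: (c - 4) // 3, 5: (c + 1) // 6}[k]
    raise NotImplementedError("only multiplicities up to 6 are covered")

print(n_A(20, 4), n_A(30, 5), n_A(30, 6))   # 5 2 13
\end{verbatim} The printed values agree with the examples above: five semigroups for $m=4$ and $c=20$, two for $m=5$ and $c=30$, and thirteen for $m=6$ and $c=30$.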
\section{Introduction} In recent years, neural machine translation (NMT) \cite{cho2014learning,bahdanau2014neural,vaswani2017attention} has achieved rapid advancement in translation performance \cite{yang2020csp,lu-etal-2021-attention}. However, the NMT model is not always stable enough, as its performance can drop significantly when small perturbations are added to the input sentences \cite{belinkov2017synthetic,cheng-etal-2020-advaug}. Such perturbed inputs are often referred to as adversarial examples in the literature, and how to effectively generate and utilize adversarial examples for NMT is still an open question. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{pic/abstract.pdf} \caption{\label{fig:abs} An example of the source-target-source RTT process on a perturbed input $\mathbf x_\delta$ obtained by replacing ``\begin{CJK*}{UTF8}{gbsn}{巨大}\end{CJK*} (huge)'' with ``\begin{CJK*}{UTF8}{gbsn}{轻便}\end{CJK*} (light)''. } \end{figure} Conventional approaches \cite{ebrahimi2018hotflip,cheng2019robust} for generating NMT adversarial examples always follow the meaning-preserving assumption, i.e., an NMT adversarial example should preserve the meaning of the source sentence but destroy the translation performance drastically \cite{michel2019evaluation,niu-etal-2020-evaluating}. Under the meaning-preserving restriction, researchers try to make the perturbations added to the source inputs as small as possible to ensure that the meaning of the source sentence is unchanged, which severely limits the search space of adversarial examples. Additionally, it is problematic to craft a minor perturbation on discrete text data, since some random transformations (e.g., swap, deletion and replacement) may change, or even reverse, the semantics of the text data, breaking the aforementioned meaning-preserving assumption. To break this limitation, \citet{zhang-etal-2021-crafting} introduce a new criterion for NMT adversarial examples: \emph{an effective NMT adversarial example imposes minor shifting on the source, degrades the translation dramatically, and would naturally lead to a semantically destroyed round-trip translation result}. Take the case in Figure \ref{fig:abs} as an example: $\mathbf x_{\mathbf \delta}$ reverses the semantics of input $\mathbf x$ by replacing ``\begin{CJK*}{UTF8}{gbsn}{巨大}\end{CJK*} (huge)'' with ``\begin{CJK*}{UTF8}{gbsn}{轻便}\end{CJK*} (light)''. Since the semantics of $\mathbf x$ and $\mathbf x_{\mathbf \delta}$ are completely different, it is unreasonable to use the original target sentence of $\mathbf x$ to evaluate the attacks directly. Therefore, \citet{zhang-etal-2021-crafting} propose to evaluate the BLEU score between $\mathbf {x_\delta}$ and its reconstructed sentence $\mathbf {\hat{x}_\delta}$ from the source-target-source round-trip translation (RTT), as well as the BLEU score between the original sentence $\mathbf x$ and its reconstructed sentence $ \mathbf {\hat{x}}$. They take the decrease between the two BLEU scores mentioned above as the adversarial effect. Specifically, if the BLEU decrease exceeds a predefined threshold, $\mathbf{ x_\delta}$ is considered to be an adversarial example for the target NMT model. 
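In implementation terms, this RTT criterion only requires sentence-level BLEU and two black-box translation systems. The following \texttt{Python} sketch is purely illustrative (it is not the released implementation): \texttt{f} and \texttt{g} are assumed to be callables wrapping the target and auxiliary backward NMT models, and \texttt{sacrebleu} provides the similarity. \begin{verbatim}
from sacrebleu import sentence_bleu

def sim(hyp, ref):
    # sentence-level BLEU as the similarity function
    return sentence_bleu(hyp, [ref]).score

def rtt_bleu_decrease(x, x_delta, f, g):
    # percentage BLEU decrease through source-target-source RTT;
    # assumes the clean reconstruction has nonzero BLEU
    rec_clean = g(f(x))        # reconstruction of the original source
    rec_pert = g(f(x_delta))   # reconstruction of the perturbed source
    return (sim(rec_clean, x) - sim(rec_pert, x_delta)) / sim(rec_clean, x)

# x_delta is flagged as adversarial under this criterion when
# rtt_bleu_decrease(x, x_delta, f, g) exceeds a chosen threshold.
\end{verbatim}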
While it achieves promising results by breaking the meaning-preserving constraint, there are two potential pitfalls in the work of \citet{zhang-etal-2021-crafting}: (1) Since the source-target-source RTT involves two stages, i.e., the source-to-target translation (S2T) performed by the target NMT model and the target-to-source translation (T2S) performed by an auxiliary backward NMT model, we cannot decide whether the BLEU decrease is really caused by the target NMT model. As we can see from the example in Figure \ref{fig:abs}, the translation from $\mathbf {x_\delta}$ to $\mathbf {y'_\delta}$ is pretty good, but the translation from $\mathbf{ y'_\delta}$ to $\mathbf {\hat {x}_\delta}$ is really poor. We can conclude that the BLEU decrease is actually caused by the auxiliary backward model, and thus $\mathbf {x_\delta}$ is not an adversarial example for the target NMT model. Even though \citet{zhang-etal-2021-crafting} try to mitigate this problem by fine-tuning the auxiliary backward model on the test sets, we find that the problem still remains. (2) They only generate the monolingual adversarial examples on the source side to attack the NMT model, without proposing methods to defend against these adversaries and improve the robustness of the NMT model. To address the issues mentioned above, we first propose a new criterion for NMT adversarial examples based on Doubly Round-Trip Translation (DRTT), which ensures that the examples meeting our criterion are authentic adversarial examples for the target NMT model. Specifically, apart from the source-target-source RTT \cite{zhang-etal-2021-crafting}, we additionally consider a target-source-target RTT on the target side. The main intuition is that an effective adversarial example for the target NMT model should cause a large BLEU decrease on the source-target-source RTT while maintaining a small BLEU decrease on the target-source-target RTT. Based on this criterion, we craft the candidate adversarial examples with the source-target-source RTT as in \citet{zhang-etal-2021-crafting}, and then pick out the authentic adversaries with the target-source-target RTT. Furthermore, to solve the second problem, we introduce masked language models (MLMs) to construct the bilingual adversarial pairs by performing phrasal replacement on the generated monolingual adversarial examples and the original target sentences synchronously, which are then utilized to train the NMT model directly. Experiments on both clean and noisy test sets (including five types of artificial and natural noise) show that the proposed approach not only generates effective adversarial examples, but also improves the robustness of the NMT model against all kinds of noise. To conclude, our main contributions are summarized as follows: \begin{itemize}[leftmargin=*] \item{We propose a new criterion for NMT adversarial examples based on the doubly round-trip translation, which can pick out the authentic adversarial examples for the target NMT model. } \item{We introduce masked language models to construct the bilingual adversarial pairs, which are then utilized to improve the robustness of the NMT model.} \item{Extensive experiments show that the proposed approach not only improves the robustness of the NMT model on both artificial and natural noise, but also performs well on the clean test sets\footnote{The code is publicly available at: \url{https://github.com/lisasiyu/DRTT}}. 
} \end{itemize} \section{Related Work} \subsection{Adversarial Examples for NMT} The previous approaches for constructing NMT adversarial examples can be divided into two branches: white-box and black-box. The white-box approaches are based on the assumption that the architecture and parameters of the NMT model are accessible \cite{ebrahimi2018hotflip,cheng2019robust,chen2021manifold}. These methods usually achieve superior performance since they can construct and defend against adversaries tailored to the model. However, in real application scenarios, it is usually impossible to access the inner architecture of the model. On the contrary, the black-box approaches never access the inner architecture or parameters of the model. In this line, \citet{belinkov2017synthetic} rely on synthetic and naturally occurring language errors to generate adversarial examples, and \citet{michel2019evaluation} propose a meaning-preserving method that swaps word-internal characters. Recently, \citet{zhang-etal-2021-crafting} craft adversarial examples beyond the meaning-preserving restriction with round-trip translation, and our work builds on top of it. \subsection{Masked Language Model} Masked Language Model (MLM) \cite{bert,conneau2019cross} has achieved state-of-the-art results on many monolingual and cross-lingual language understanding tasks. MLM randomly masks some of the tokens in the input, and then predicts those masked tokens. Recently, some works adopt MLMs for word replacement as a data augmentation strategy. \citet{jiao2019tinybert} leverage an encoder-based MLM to predict word replacements for single-piece words. \citet{liu2021counterfactual} construct augmented sentence pairs by sampling new source phrases and corresponding target phrases with transformer-based MLMs. Following \citet{liu2021counterfactual}, we introduce transformer-based MLMs to construct the bilingual adversarial pairs. The main difference between our work and \citet{liu2021counterfactual} is that we choose to mask the adversarial phrases or words at each step, while \citet{liu2021counterfactual} mask words randomly. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{pic/model2.pdf} \caption{\label{fig:1} The overview of the bilingual adversarial pair generation under the criterion of DRTT. $(\mathbf x,\mathbf y)$ denote the source and target sentences. $(\mathbf {x_\delta}, \mathbf {y_\delta})$ denote the generated bilingual adversarial pair. } \end{figure*} \section{Method} In this section, we first describe our proposed criterion for NMT adversarial examples, and then present the way of constructing the bilingual adversarial pairs. \subsection{Adversarial Examples for NMT} For clarity, we first introduce the traditional criteria for NMT adversarial examples, i.e., the criteria based on meaning preservation \citep{michel2019evaluation,karpukhin2019training} and on RTT \citep{zhang-etal-2021-crafting}, and then elaborate on our new criterion based on DRTT. We will use the following notations: $\mathbf x$ and $\mathbf y$ denote the source and target sentence, respectively. $\mathbf {x_\delta}$ and $\mathbf {y_\delta}$ denote the perturbed versions of $\mathbf x$ and $\mathbf y$, respectively. $f(\cdot)$ is the forward translation process performed by the target NMT model and $g(\cdot)$ is the backward translation process performed by the auxiliary backward NMT model. 
${\rm sim}(\cdot,\cdot)$ is a function for evaluating the similarity of two sentences, and we use BLEU \citep{papineni2002bleu} as the similarity function. \paragraph{Criterion based on meaning-preserving.} Suppose $\mathbf {y'}=f(\mathbf x)$ and $\mathbf {y_\delta'}=f(\mathbf { x_\delta})$ are the forward translations of the input $\mathbf x$ and its perturbed version $\mathbf{x_\delta}$, respectively. $\mathbf x_\delta$ is an adversarial example when it meets: \begin{equation} \left\{ \begin{array}{lr} {\rm sim}(\mathbf x,\mathbf{x_\delta}) > \eta, & \\ {\rm sim}(\mathbf y,\mathbf{y'})-{\rm sim}(\mathbf y,\mathbf{y'_{\delta}}) > \alpha, \end{array} \right. \end{equation} where $\eta$ is a threshold to ensure a high similarity between $\mathbf {x_\delta}$ and $\mathbf x$, so that they can meet the meaning-preserving restriction. A larger $\alpha$ indicates a stricter criterion for NMT adversarial examples. \paragraph{Criterion based on RTT.} \citet{zhang-etal-2021-crafting} point out that the perturbation $\delta$ may change, or even reverse, the meaning of $\mathbf x$, so it is incorrect to use $\mathbf{y}$ as a target sentence to measure the semantic alteration on the target side. Therefore, they introduce a criterion based on RTT which gets rid of the meaning-preserving restriction. The percentage decrease of similarity between $\mathbf{x}$ and $\mathbf{x_\delta}$ through the source-target-source RTT is regarded as the adversarial effect ${\rm d_{src}}(\mathbf{x},\mathbf{x_\delta})$, which is calculated as: \begin{equation} {\rm d_{src}}(\mathbf{x},\mathbf{x_\delta})=\frac{{\rm sim}(\mathbf x,\mathbf{\hat{x}})-{\rm sim}(\mathbf{x_{\delta}},\mathbf{\hat{x}_\delta})}{{\rm sim}(\mathbf x,\mathbf{\hat{x}})},\label{func:2} \end{equation} where $\mathbf{\hat{x}}$ and $\mathbf{{\hat{x}}_\delta}$ are reconstructed sentences generated with source-target-source RTT: $\mathbf{\hat{x}}=g(f(\mathbf x))$, ${\mathbf{\hat{x}_\delta}}=g(f(\mathbf{x_\delta}))$. A large ${\rm d_{src}}(\mathbf{x},\mathbf{x_\delta})$ indicates that the perturbed sentence $\mathbf{x_\delta}$ cannot be well reconstructed by RTT when compared to the reconstruction quality of the original source sentence $\mathbf x$, so $\mathbf{x_\delta}$ is likely to be an adversarial example. \paragraph{Criterion based on DRTT.} \label{sec:2.3} In Eq.(\ref{func:2}), ${\rm sim}(\mathbf x,\mathbf{\hat{x}})$ is a constant value given the input $\mathbf x$ and the NMT models. Therefore, ${\rm d_{src}}(\mathbf{x},\mathbf{x_\delta})$ is actually determined by $-{\rm sim}(\mathbf{x_{\delta}},\mathbf{{\hat{x}_\delta}})$, which can be interpreted as the reconstruction error between $\mathbf{x_{\delta}}$ and $\mathbf{{\hat{x}_\delta}}$. As we mentioned above, the reconstruction error can be caused by two independent translation processes: the forward translation process $f(\cdot)$ performed by the target NMT model and the backward translation process $g(\cdot)$ performed by the auxiliary backward model. Consequently, there may be three occasions when we get a large ${\rm d_{src}}(\mathbf{x},\mathbf{x_\delta})$: 1) A large semantic alteration in $f(\mathbf{x_{\delta}})$ and a small semantic alteration in $g(\mathbf{y'_{\delta}})$; 2) A large semantic alteration in $f(\mathbf{x_{\delta}})$ and a large alteration in $g(\mathbf{y'_{\delta}})$; 3) A small semantic alteration in $f(\mathbf{x_{\delta}})$ and a large alteration in $g(\mathbf{y'_{\delta}})$. 
We can conclude that $\mathbf{x_{\delta}}$ is an adversarial example for the target NMT model in occasions 1 and 2, but not in occasion 3. Therefore, the examples selected by the criterion based on RTT may contain many fake adversarial examples. To address this problem, we add a target-source-target RTT starting from the target side. The percentage decrease of the similarity between $\mathbf{y}$ and $\mathbf{y'_\delta}$ through the target-source-target RTT, denoted as ${\rm {d_{tgt}}}(\mathbf{y},\mathbf{y'_\delta})$, is calculated as: \begin{equation} {\rm {d_{tgt}}}(\mathbf{y},\mathbf{y'_\delta})= \frac{{\rm sim}(\mathbf y,\mathbf{\hat{y}})-{\rm sim}(\mathbf{y'_{\delta}},\mathbf{\hat{y}'_{\delta}})}{{\rm sim}(\mathbf y,\mathbf{\hat{y}})}, \label{func:3} \end{equation} where $\mathbf{\hat{y}}=f(g(\mathbf y))$ and $\mathbf{\hat y'_{\delta}} = f(g(\mathbf{y'_\delta}))$ are reconstructed sentences generated with the target-source-target RTT. We take both ${\rm d_{src}}(\mathbf{x},\mathbf{x_\delta})$ and ${\rm {d_{tgt}}}(\mathbf{y},\mathbf{y'_\delta})$ into consideration and define $\mathbf x_\delta$ as an adversarial example when it meets: \begin{equation} \left\{ \begin{array}{lr} {\rm {d_{src}}}(\mathbf{x},\mathbf{x_\delta}) > \beta, & \\ {\rm {d_{tgt}}}(\mathbf{y},\mathbf{y'_\delta}) < \gamma, \label{func:4} \end{array} \right. \end{equation} where $\beta$ and $\gamma$ are thresholds ranging in $(-\infty,1]$ \footnote{It is possible that the reconstruction quality of the perturbed sentence is higher than that of the original one.}. The interpretation of this criterion is intuitive: if ${\rm {d_{tgt}}}(\mathbf{y},\mathbf{y'_\delta})$ is lower than $\gamma$, we can conclude that the reconstruction error between $\mathbf{y'_{\delta}}$ and $\mathbf{\hat y'_{\delta}}$ is very low. Namely, we can ensure a small semantic alteration in $g(\mathbf{y'_{\delta}})$. Therefore, if ${\rm {d_{src}}}(\mathbf{x},\mathbf{x_\delta})$ is larger than $\beta$, we can conclude that the BLEU decrease through the source-target-source RTT is caused by the target NMT model, so that we can conclude $\mathbf x_\delta$ is an authentic adversarial example. \subsection{Bilingual Adversarial Pair Generation} Since the proposed criterion breaks the meaning-preserving restriction, the adversarial examples may be semantically distant from the original source sentence. Thus, we cannot directly pair the adversarial examples with the original target sentences. In this section, we propose our approach for generating bilingual adversarial pairs, which performs the following three steps: 1) Training Masked Language Models: using monolingual and parallel data to train masked language models; 2) Phrasal Alignment: obtaining alignment between the source and target phrases; 3) Phrasal Replacement: generating bilingual adversarial pairs by performing phrasal replacement on the source and target sentences synchronously with the trained masked language models. The whole procedure is illustrated in Figure \ref{fig:1}. \paragraph{Training Masked Language Models.} We train two kinds of masked language models, namely the monolingual masked language model (M-MLM) \citep{bert} and the translation masked language model (T-MLM) \citep{conneau2019cross}, for phrasal replacement on the source and target sentence, respectively. The M-MLM randomly masks some of the input tokens with a special [MASK] token with a certain probability, and predicts the original masked words. 
Following \citet{liu2021counterfactual}, we train the M-MLM on monolingual datasets and use an encoder-decoder Transformer model \citep{vaswani2017attention} to tackle the undetermined number of tokens during generation. The T-MLM has the same model structure and a similar training process as the M-MLM. The main difference is that the T-MLM relies on the parallel corpus. The T-MLM concatenates parallel sentences with a special token [SEP] and only masks words on the target side. The objective is to predict the original masked words on the target side. \paragraph{Phrasal Alignment.} Phrasal alignment projects each phrase in the source sentence $\mathbf x$ to its aligned phrase in the target sentence $\mathbf y$. We first generate the alignment between $\mathbf x$ and $\mathbf y$ using FastAlign \citep{dyer2013simple}. Then we extract the phrase-to-phrase alignment by the phrase extraction algorithm of NLTK\footnote{\url{https://github.com/nltk/nltk/blob/develop/nltk/translate/phrase_based.py}}, obtaining a mapping function $p$. \paragraph{Phrasal Replacement.} Given the source sentence ${\mathbf x}=\{s_1,s_2,\dots,s_n\}$ and the target sentence ${\mathbf y}=\{t_1,t_2,\dots,t_m\}$, $s_i$ is the $i$-th phrase in $\mathbf x$, $t_{p(i)}$ is the $p(i)$-th phrase in $\mathbf y$ which is aligned to $s_i$ by the mapping function $p$. We construct the candidate bilingual adversarial pairs $(\mathbf x_\delta,\mathbf y_\delta)$ by performing the phrasal replacement on $(\mathbf x, \mathbf y)$ repeatedly until a percentage $c$ of the phrases in $\mathbf x$ has been replaced. At each step, we select the phrase that yields the most significant reconstruction quality degradation. Here, we take the replacing process for $s_i$ and $t_{p(i)}$ as an example. Considering a phrase $s_i$ in $\mathbf x$ that has not been attacked yet, we first build a candidate set $\mathcal{R}_i=\{r_i^1,r_i^2,\dots,r_i^k\}$ for $s_i$ with the prepared M-MLM. Specifically, we extract the $k$ candidate phrases with the top-$k$ highest predicted probabilities by feeding $\mathbf x^{\backslash i}$ into the M-MLM, where $\mathbf x^{\backslash i}$ is the masked version of $\mathbf x$ obtained by masking $s_i$. We select the best candidate $r_i^*$ for $s_i$ as: \begin{equation} r_i^* = \mathop{\arg\max}\limits_{j \in \{1,\cdots,k\}} {\rm d_{src}}(\mathbf x,\mathbf x^{\backslash i:j}), \end{equation} where $\mathbf x^{\backslash i:j}$ is the noised version of $\mathbf x$ obtained by replacing $s_i$ with $r_i^j$. With $s_i$ being replaced, we need to replace $t_{p(i)}$ to ensure they are still semantically aligned. To this end, we feed the concatenation of $\mathbf x^{\backslash i:*}$ and $\mathbf y^{\backslash p(i)}$ into the T-MLM, and choose the output phrase with the highest predicted probability as the substitute phrase for $t_{p(i)}$. Finally, to decide whether $(\mathbf x_\delta,\mathbf y_\delta)$ is an authentic bilingual adversarial pair for the target NMT model, we perform a target-source-target RTT starting from the target side and calculate ${\rm d_{tgt}}(\mathbf y,\mathbf {y'_\delta})$ between $\mathbf y'_\delta$ and its reconstructed sentence $\mathbf{\hat{y}'_\delta}$ according to Eq.(\ref{func:3}). We take $(\mathbf x_\delta,\mathbf y_\delta)$ as an authentic bilingual adversarial pair if ${\rm d_{src}}(\mathbf x,\mathbf{x_\delta})$ is greater than $\beta$ and ${\rm d_{tgt}}(\mathbf y,\mathbf{y'_\delta})$ is less than $\gamma$, as required by Eq.(\ref{func:4}). We formalize these steps in Algorithm \ref{algorithm:alg1} in Appendix \ref{sec:appendixa}. 
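For concreteness, the final DRTT filter of Eq.(\ref{func:4}) can be sketched in \texttt{Python} as follows. The sketch is again purely illustrative (not the released implementation): \texttt{f} and \texttt{g} are assumed callables wrapping the target and auxiliary backward models, the clean round-trip BLEU scores are assumed nonzero, and the default thresholds mirror the En$\rightarrow$De setting described below. \begin{verbatim}
from sacrebleu import sentence_bleu

def bleu(hyp, ref):
    return sentence_bleu(hyp, [ref]).score

def is_authentic_pair(x, x_delta, y, f, g, beta=0.5, gamma=0.5):
    y_p, y_p_delta = f(x), f(x_delta)   # forward translations
    # source-target-source RTT decrease (d_src)
    s_clean = bleu(g(y_p), x)
    s_pert = bleu(g(y_p_delta), x_delta)
    d_src = (s_clean - s_pert) / s_clean
    # target-source-target RTT decrease (d_tgt)
    t_clean = bleu(f(g(y)), y)
    t_pert = bleu(f(g(y_p_delta)), y_p_delta)
    d_tgt = (t_clean - t_pert) / t_clean
    # keep the pair only if the source-side drop is large
    # while the target-side drop stays small
    return d_src > beta and d_tgt < gamma
\end{verbatim}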
After generating the adversarial data through the above steps, we combine it with the original training data and use the combined data to train the NMT model directly. \section{Experimental Settings} We evaluate our model under artificial noise in the Zh$\rightarrow$En and En$\rightarrow$De translation tasks, and under natural noise in the En$\rightarrow$Fr translation task. The details of the experiments are elaborated in this section. \subsection{Dataset} For the Zh$\rightarrow$En task, we use the LDC corpus with 1.25M sentence pairs for training\footnote{It is extracted from LDC data, including LDC 2002E18, 2003E07, 2003E14, 2004T08 and 2005T06.}, NIST06 for validation, and NIST 02, 03, 04, 05, 08 for testing. For the En$\rightarrow$De task, we use the publicly available WMT'17 En-De dataset (5.85M) for training, and take \emph{newstest16} and \emph{newstest17} for validation and testing, respectively. For the En$\rightarrow$Fr task, we follow \citet{liu2021counterfactual} and combine the WMT'19 En$\rightarrow$Fr (36k) robustness dataset with the Europarl-v7 (2M) En-Fr pairs for training. We take the development set of MTNT \cite{michel2018mtnt} for validation and the released test set of the WMT'19 robustness task for testing. As for the MLMs, we use the Chinese sentences of the parallel corpus to train the Chinese M-MLM, and use the whole parallel corpus to train the Zh-En T-MLM. We train the English M-MLM on the News Commentary and News Crawl 2010 (7.26M in total) monolingual corpora following \citet{liu2021counterfactual}. The T-MLMs for En-De and En-Fr are trained with their original parallel corpora. \begin{table*} \centering \renewcommand\arraystretch{0.98} \scalebox{0.95}{ \resizebox{\linewidth}{!}{ \begin{tabular}{c|c|c c c | c|c c c| c} \toprule[1.2pt] \multirow{2}{*}{\textbf{Noise}} &\multirow{2}{*}{\textbf{Model}} & \multicolumn{4}{c|}{\textbf{Zh$\rightarrow$En}} & \multicolumn{4}{c}{\textbf{En$\rightarrow$De}} \\ \cmidrule(r){3-6}\cmidrule(r){7-10} \multicolumn{1}{c|}{}&\multicolumn{1}{c|}{} & 0.1 & 0.2 & 0.3 & \textbf{AVG} & 0.1 & 0.2 & 0.3 & \textbf{AVG} \\ \hline \multirow{5}{*}{\textbf{Deletion}} & \multicolumn{1}{l|}{baseline} & 32.98 & 26.59 & 20.54 & 26.70 & 19.82 & 13.71 & 9.33 & 14.29 \\ & \multicolumn{1}{l|}{+CharSwap} & 32.94 & 26.92 & 20.46 & 26.77 & \textbf{19.92} & 13.64 & 9.30 & 14.29\\ & \multicolumn{1}{l|}{+TCWR} & 34.47 & 27.76 & 21.38 & 27.87 & 19.61 & 13.77 & 9.08 & 14.15 \\ & \multicolumn{1}{l|}{+RTT} & 33.84 & 27.43 & 20.74 & 27.33 & 19.61 & 13.48 & 9.27 & 14.12 \\ & \multicolumn{1}{l|}{+DRTT(ours)} & ~~~~\textbf{35.10$**$} & ~~\textbf{28.12$*$} & ~~~~\textbf{22.07$**$} & \textbf{28.43} & 19.83 & \textbf{14.22} & \textbf{9.48} & \textbf{14.51} \\ \hline \multirow{5}{*}{\textbf{Swap}} & \multicolumn{1}{l|}{baseline} & 36.14 & 32.88 & 30.21 & 33.08 & 21.47 & 16.97 & \textbf{13.21} & 17.22 \\ & \multicolumn{1}{l|}{+CharSwap} & 36.71 & 33.38 & 30.58 & 33.55 & 20.49 & 16.31 & 11.93 & 16.24\\ & \multicolumn{1}{l|}{+TCWR} & 37.67 & 34.15 & 31.47 & 34.43 & 20.52 & 16.31 & 12.80 & 16.54 \\ & \multicolumn{1}{l|}{+RTT} & 37.14 & 34.34 & 31.42 & 34.30 & 20.23 & 15.47 & 11.52 & 15.74 \\ & \multicolumn{1}{l|}{+DRTT(ours)} & ~~\textbf{37.90$*$} & \textbf{34.65} & ~~\textbf{31.92$*$} & \textbf{34.82} & ~~~~\textbf{21.51}$**$ & ~~~~\textbf{17.36}$**$ & ~~~~12.91$**$ & \textbf{17.26}\\ \hline \multirow{5}{*}{\textbf{Insertion}} & \multicolumn{1}{l|}{baseline} & 39.96 & 39.10 & 38.41 & 39.16 & 26.86 & \textbf{26.54} & 25.48 & 25.96 \\ & \multicolumn{1}{l|}{+CharSwap} & 40.26 & 39.66 & 39.03 & 39.65 & 26.69 & 25.79 & 25.23 & 25.90\\ & \multicolumn{1}{l|}{+TCWR} & 41.32 & 40.07 & 39.60 & 40.33 & 26.27 & 25.55 & 24.33 & 25.38 \\ & \multicolumn{1}{l|}{+RTT} & 41.75 & 40.82 & 39.90 & 40.82 & 26.18 & 25.06 & 23.68 & 24.97 \\ & \multicolumn{1}{l|}{+DRTT(ours)} & \textbf{41.98} & \textbf{40.90} & ~~\textbf{40.34$*$} & \textbf{41.07} & ~~~~\textbf{27.32}$**$ & ~~~~26.40$**$ & ~~~~\textbf{25.71}$**$ & \textbf{26.48}\\ \hline \multirow{5}{*}{\textbf{Rep src}} & \multicolumn{1}{l|}{baseline} & 35.25 & 29.69 & 24.64 & 29.86 & \textbf{21.65} & 17.40 & 14.45 & 17.83 \\ & \multicolumn{1}{l|}{+CharSwap} & 35.01 & 30.25 & 25.27 & 30.18 & 21.56 & 17.67 & 14.60 & 17.94\\ & \multicolumn{1}{l|}{+TCWR} & 35.73 & \textbf{30.48} & 25.65 & \textbf{30.62} & 21.57 & \textbf{17.71} & \textbf{14.95} & \textbf{18.08} \\ & \multicolumn{1}{l|}{+RTT} & 35.63 & 30.17 & \textbf{25.86} & 30.55 & 21.06 & 17.01 & 14.36 & 17.48 \\ & \multicolumn{1}{l|}{+DRTT(ours)} & \textbf{35.81} & 30.18 & 25.70 & 30.56 & ~~21.51$*$ & 17.22 & 14.33 & 17.69 \\ \hline \multirow{5}{*}{\textbf{Rep both}} & \multicolumn{1}{l|}{baseline} & 22.33 & 18.77 & 15.98 & 19.03 & 25.52 & 22.68 & 20.07 & 22.76\\ & \multicolumn{1}{l|}{+CharSwap} & 21.99 & 18.08 & 15.77 & 18.61 & 25.18 & 22.39 & 19.98 & 22.52 \\ & \multicolumn{1}{l|}{+TCWR} & 22.98 & 19.69 & 17.14 & 19.94 & 25.44 & 22.64 & 20.43 & 22.84\\ & \multicolumn{1}{l|}{+RTT} & 22.92 & 19.56 & 16.76 & 19.75 & 25.30 & 22.76 & 20.66 & 22.91\\ & \multicolumn{1}{l|}{+DRTT(ours)} & ~~~~\textbf{23.37$**$} & ~~~~\textbf{20.23$**$} & ~~~~\textbf{17.37$**$} & \textbf{20.32} & ~~\textbf{26.19}$*$ & ~~~~\textbf{23.31}$**$ & \textbf{20.98} & \textbf{23.49}\\ \bottomrule \end{tabular} } } \caption{\label{tab:2} The BLEU scores (\%) for forward-translation on noisy test sets with noise ratios 0.1, 0.2 and 0.3; `AVG' denotes the average BLEU (\%) over all noise ratios. We re-implement all baselines to eliminate the discrepancy caused by the MLMs and the auxiliary backward model. `$*/**$': significantly better than RTT \citep{koehn2004statistical} with $p<0.05$ and $p<0.01$, respectively.} \end{table*} \subsection{Model Configuration and Pre-processing} The MLMs and NMT models in this paper take Transformer-base \cite{vaswani2017attention} as the backbone architecture. We implement all models based on the open-source toolkit Fairseq \citep{ott2019fairseq}. As for the hyper-parameters, $\beta$ is set to 0.01 and $\gamma$ is set to 0.5 for Zh$\rightarrow$En; for En$\rightarrow$De and En$\rightarrow$Fr, $\beta$ and $\gamma$ are both set to 0.5. The replacement ratio $c$ is set to 0.2 following \citet{liu2021counterfactual}, and the candidate number $k$ is set to 1. The details of the model configuration and the number of generated adversarial examples are given in Appendix \ref{sec:appendixb}. Following previous work, the Zh$\rightarrow$En performance is evaluated with the BLEU \citep{papineni2002bleu} score calculated by the \emph{multi-bleu.perl} script. For En$\rightarrow$De and En$\rightarrow$Fr, we use SacreBLEU \citep{post2018call} for evaluation\footnote{nrefs:1 | case:mixed | eff:no | tok:intl | smooth:exp | version:2.0.0}. \subsection{Comparison Methods} To test the effectiveness of our model, we take both meaning-preserving and meaning-changeable systems as comparison methods: \paragraph{Baseline:} The vanilla Transformer model for NMT \citep{vaswani2017attention}. In our work, we use the baseline model to perform the forward and backward translation in the round-trip translation.
\paragraph{CharSwap:} \citet{michel2019evaluation} craft minor perturbations on words by swapping internal characters. They claim that character swaps have been shown not to affect human readers greatly, hence making them likely to be meaning-preserving. \paragraph{TCWR:} \citet{liu2021counterfactual} propose the approach of translation-counterfactual word replacement, which creates augmented parallel corpora by randomly sampling new source and target phrases from the masked language models. \paragraph{RTT:} \citet{zhang-etal-2021-crafting} propose to generate adversarial examples with a single round-trip translation. However, they do not provide any approach for generating bilingual adversarial pairs. To make a fair comparison, we generate bilingual adversarial pairs from their adversarial examples in the same way as ours. \begin{table*}[htb] \centering \renewcommand\arraystretch{0.98} \scalebox{0.95}{ \resizebox{\linewidth}{!}{ \begin{tabular}{c|c|c c c| c|c c c |c} \toprule[1.2pt] \multirow{2}{*}{\textbf{Noise}} &\multirow{2}{*}{\textbf{Model}} & \multicolumn{4}{c|}{\textbf{Zh$\rightarrow$En}} & \multicolumn{4}{c}{\textbf{En$\rightarrow$De}} \\ \cmidrule(r){3-6}\cmidrule(r){7-10} \multicolumn{1}{c|}{}&\multicolumn{1}{c|}{} & 0.1 & 0.2 & 0.3 & \textbf{AVG} & 0.1 & 0.2 & 0.3 & \textbf{AVG} \\ \hline \multirow{5}{*}{\textbf{Deletion}} & \multicolumn{1}{l|}{baseline} & 35.31 & 31.53 & 28.22 & 31.69 & 21.42 & 19.90 & 17.42 & 19.58 \\ & \multicolumn{1}{l|}{+CharSwap} & 34.94 & 31.12 & 28.14 & 31.40 & 22.70 & 20.57 & 18.88 & 20.72\\ & \multicolumn{1}{l|}{+TCWR} & 35.02 & 31.74 & 28.45 & 31.74 & 22.45 & 20.48 & 18.66 & 20.53\\ & \multicolumn{1}{l|}{+RTT} & 35.23 & 32.12 & 28.03 & 31.79 & 23.34 & 22.30 & 20.36 & 22.00 \\ & \multicolumn{1}{l|}{+DRTT(ours)} & ~~\textbf{36.63$*$} & ~~\textbf{32.96$*$} & ~~~~\textbf{29.94$**$} & \textbf{33.18} & ~~~~\textbf{24.06}$**$ & ~~~~\textbf{23.02}$**$ & ~~~~\textbf{21.18}$**$ & \textbf{22.75} \\ \hline \multirow{5}{*}{\textbf{Swap}} & \multicolumn{1}{l|}{baseline} & 28.63 & 22.82 & 18.21 & 23.22 & 19.01 & 15.92 & 14.25 & 16.39 \\ & \multicolumn{1}{l|}{+CharSwap} & 29.55 & 24.46 & 20.97 & 24.99 & 19.80 & 16.51 & 14.54 & 16.95\\ & \multicolumn{1}{l|}{+TCWR} & 31.01 & 26.03 & 22.25 & 26.43 & 19.56 & 16.65 & 14.95 & 17.05 \\ & \multicolumn{1}{l|}{+RTT} & 31.07 & 26.06 & 22.08 & 26.40 & 20.51 & 17.63 & 16.17 & 18.10 \\ & \multicolumn{1}{l|}{+DRTT(ours)} & ~~\textbf{32.03$*$} & ~~~~\textbf{26.95}$**$ & ~~~~\textbf{23.71$**$} & \textbf{27.56} & ~~~~\textbf{21.40}$**$ & ~~~~\textbf{18.68}$**$ & ~~~~\textbf{17.53}$**$ & \textbf{19.20} \\ \hline \multirow{5}{*}{\textbf{Insertion}} & \multicolumn{1}{l|}{baseline} & 30.13 & 23.57 & 17.95 & 23.88 & 19.57 & 16.24 & 13.12 & 16.31 \\ & \multicolumn{1}{l|}{+CharSwap} & 29.03 & 22.17 & 17.01 & 22.73 & 20.47 & 16.86 & 13.71 & 17.01\\ & \multicolumn{1}{l|}{+TCWR} & 30.12 & 23.76 & 18.02 & 23.97 & 20.73 & 17.27 & \textbf{14.12} & 17.37 \\ & \multicolumn{1}{l|}{+RTT} & 29.72 & 22.75 & 17.87 & 23.45 & 20.79 & 16.81 & 13.80 & 17.13\\ & \multicolumn{1}{l|}{+DRTT(ours)} & ~~~~\textbf{31.84$**$} & ~~~~\textbf{24.42$**$} & ~~~~\textbf{19.43$**$} & \textbf{25.23} & ~~~~\textbf{21.24}$**$ & ~~~~\textbf{17.53}$**$ & ~~\textbf{14.12}$*$ & \textbf{17.63} \\ \hline \multirow{5}{*}{\textbf{Rep src}} & \multicolumn{1}{l|}{baseline} & 33.02 & 28.15 & 23.26 & 28.14 & 20.56 & 18.40 & 16.53 & 18.50\\ & \multicolumn{1}{l|}{+CharSwap} & 31.71 & 26.97 & 21.92 & 26.87 & 21.56 & 18.81 & 17.11 & 19.16\\ & \multicolumn{1}{l|}{+TCWR} & 32.83 & 28.11 & 23.38 & 28.11 & 21.43 & 19.22 & 17.10 & 19.25 \\ & \multicolumn{1}{l|}{+RTT} & 32.65 & 27.23 & 23.05 & 27.65 & 22.25 & 20.14 & 18.45 & 20.28 \\ & \multicolumn{1}{l|}{+DRTT(ours)} & ~~~~\textbf{34.76$**$} & ~~~~\textbf{29.04$**$} & ~~~~\textbf{25.06$**$} & \textbf{29.62} & ~~\textbf{22.74}$*$ & ~~\textbf{20.59}$*$ & ~~\textbf{18.87}$*$ & \textbf{20.73} \\ \hline \multirow{5}{*}{\textbf{Rep both}} & \multicolumn{1}{l|}{baseline} & 38.25 & 36.17 & 35.48 & 36.63 & 23.62 & 23.23 & 22.13 & 22.99 \\ & \multicolumn{1}{l|}{+CharSwap} & 36.23 & 34.90 & 33.81 & 34.98 & 25.23 & 24.37 & 23.33 & 24.31\\ & \multicolumn{1}{l|}{+TCWR} & 38.38 & 36.92 & 35.44 & 36.91 & 24.84 & 24.77 & 23.34 & 24.32\\ & \multicolumn{1}{l|}{+RTT} & 39.13 & 36.92 & 35.23 & 37.09 & 25.51 & 24.77 & 24.12 & 24.80 \\ & \multicolumn{1}{l|}{+DRTT(ours)} & ~~\textbf{40.07$*$} & ~~~~\textbf{38.34$**$} & ~~~~\textbf{37.22$**$} & \textbf{38.54} & ~~~~\textbf{26.28}$**$& ~~\textbf{25.26}$*$ & ~~~~\textbf{24.87}$**$ & \textbf{25.47} \\ \bottomrule \end{tabular} } } \caption{\label{tab:3} The RTT BLEU scores (\%) for round-trip translation on noisy test sets. `$*/**$': significantly better than RTT with $p<0.05$ and $p<0.01$, respectively.} \end{table*} \section{Results and Analysis} \subsection{Main Results} \paragraph{Artificial Noise.} To test robustness on noisy inputs, we follow \citet{cheng2018towards} and construct five types of synthetic perturbations with different noise ratios on the standard test sets\footnote{For each test set, we report three results with noise ratios 0.1, 0.2 and 0.3, respectively. Noise ratio 0.1 means 10 percent of the words in the source sentence are perturbed.}: 1) \emph{Deletion:} some words in the source sentence are randomly deleted; 2) \emph{Swap:} some words in the source sentence are randomly swapped with their right neighbors; 3) \emph{Insertion}: some words in the source sentence are randomly repeated; 4) \emph{Rep src:} short for `replacement on src'. Some words in the source sentence are replaced with related words according to the similarity of word embeddings\footnote{\url{https://github.com/Embedding/Chinese-Word-Vectors} \\ \url{https://nlp.stanford.edu/projects/glove/} }; 5) \emph{Rep both:} short for `replacement on both'. Some words in the source sentence and their aligned target words are replaced by masked language models\footnote{Each sentence has four references on the NIST test sets; we only choose sb0 for replacement.}. Table \ref{tab:2} shows the BLEU scores of the forward translation results on the Zh$\rightarrow$En and En$\rightarrow$De noisy test sets. For Zh$\rightarrow$En, our approach achieves the best performance on 4 out of 5 types of noisy test sets. Compared to RTT, DRTT achieves an average improvement of up to 1.1 BLEU points on \emph{deletion}. For En$\rightarrow$De, DRTT also achieves the best results on all types of noise except \emph{Rep src}. We suppose the reason is that \emph{Rep src} sometimes reverses the semantics of the original sentence, as we claimed above. Since the perturbations introduced above may change the semantics of the source sentence, it may be problematic to calculate the BLEU score against the original reference sentence in Table \ref{tab:2}. Therefore, following \citet{zhang-etal-2021-crafting}, we also report the BLEU score between the source sentence and its reconstructed version through the source-target-source RTT, which is named RTT BLEU.
The intuition behind it is that a robust NMT model translates noisy inputs well and thus exhibits only a minor shift through the round-trip translation, resulting in a high BLEU between the inputs and their round-trip translation results. Following \citet{zhang-etal-2021-crafting}, we fine-tune the backward model (a vanilla Transformer model) on its test set to minimize the impact of the T2S process. As shown in Table \ref{tab:3}, DRTT outperforms the meaning-preserving methods and all other methods on all types of noise on the Zh$\rightarrow$En and En$\rightarrow$De tasks. Considering the results of Table \ref{tab:2} and Table \ref{tab:3} together, DRTT significantly improves the robustness of NMT models under various artificial noises. \begin{table} \centering \renewcommand\arraystretch{0.98} \scalebox{0.95}{ \begin{tabular}{l|c|c} \toprule \textbf{Method} & \textbf{En$\rightarrow$Fr} & \textbf{BLEU}$\bm \Delta$\\ \midrule baseline & 35.02 & -- \\ +CharSwap & 35.59 & +0.57\\ +TCWR & 35.64 & +0.62\\ +RTT & 35.73 & +0.71\\ \midrule +DRTT(ours) & ~~\textbf{36.36$*$} & \textbf{+1.34}\\ \bottomrule \end{tabular} } \caption{\label{tab:4} The BLEU scores (\%) on the WMT'19 En$\rightarrow$Fr robustness task. `BLEU$\Delta$' denotes the gain in BLEU compared to the baseline. `$*/**$': significantly better than RTT with $p<0.05$ and $p<0.01$, respectively. } \end{table}% \begin{table*} \centering \renewcommand\arraystretch{1} \resizebox{1\linewidth}{!}{ \begin{tabular}{l|c| c c c c c |c|c c} \toprule[1.2pt] \multirow{2}{*}{\textbf{Model}} & \multicolumn{7}{c|}{\textbf{Zh$\rightarrow$En}} & \multicolumn{2}{c}{\textbf{En$\rightarrow$De}} \\ \cmidrule(r){2-8}\cmidrule(r){9-10} \multicolumn{1}{c|}{}& MT06 & MT02 & MT03 & MT04 & MT05 & MT08 & \textbf{AVG} & newstest16 & newstest17 \\ \midrule baseline & 44.59 & 44.38 & 43.65 & 45.37 & 44.42 & 35.80 & 42.72 & 29.11 & 27.94\\ +CharSwap & 43.28 & 44.80 & 44.24 & 45.52 & 43.82 & 34.29 & 42.53 & 28.48 & 27.54\\ +TCWR & 44.55 & \textbf{45.99} & 44.68 & 45.77 & 44.16 & 34.98 & 43.12 & 29.13 & 27.98\\ +RTT & 44.62 & 45.13 & 44.01 & 46.00 & \textbf{44.96} & 35.18 & 43.06 & 29.06 & 27.42\\ \midrule +DRTT(ours) & \textbf{44.76} & 45.01 & ~~~~\textbf{45.16$**$} & ~~~~\textbf{46.63$**$} & 44.78 & ~~\textbf{35.82$*$} & \textbf{43.48} & \textbf{29.30} & ~~~~\textbf{28.37$**$}\\ \bottomrule \end{tabular} } \caption{\label{tab:5} The BLEU scores (\%) on NIST Zh$\rightarrow$En and WMT'17 En$\rightarrow$De. `$*/**$': significantly better than RTT with $p<0.05$ and $p<0.01$, respectively.} \end{table*} \paragraph{Natural Noise.} In addition to the artificial noise, we also test the performance of our model on the WMT'19 En$\rightarrow$Fr robustness test set, which contains various kinds of noise in real-world text, e.g., typos, grammar errors, code-switching, etc. As shown in Table \ref{tab:4}, DRTT yields an improvement of 1.34 BLEU over the baseline, which proves that our approach also performs well in real-noise scenarios. Besides, DRTT achieves a 0.63 BLEU improvement over RTT by filtering out 10\% of the fake adversarial examples (according to Table \ref{tab:6}), which demonstrates that filtering out fake adversarial examples further improves the robustness of the model. \subsection{Effectiveness of Adversarial Examples} In this sub-section, we evaluate the effectiveness of the generated adversarial examples in attacking the victim NMT model (i.e., the target NMT model before it is trained on the generated adversarial pairs).
In our approach, $\gamma$ in Eq.(\ref{func:4}) is a hyper-parameter controlling the strictness of our criterion for generating adversarial examples. Thus, we evaluate the effectiveness of the adversarial examples by studying the translation performance of the victim NMT model on sets of adversarial pairs generated with different $\gamma$. That is to say, if a sample is a genuine adversary, it should degrade the translation performance drastically, resulting in a low BLEU score between the translation result and its paired target sentence. The average BLEU scores of the victim model on the different adversarial pair sets (generated with $\gamma$ from -10 to 1 on NIST 06) are shown in Figure \ref{fig:3}. Specifically, the average BLEU on the adversarial sets generated with $\gamma=-10$ is 8.0. When we remove the restriction of $\gamma$, i.e., DRTT degenerates into RTT, the average BLEU of the constructed adversarial examples rises to 11.2. This shows that the adversarial examples generated with lower $\gamma$ (a stricter restriction) attack the model more successfully. Therefore, we can select more effective adversarial examples than \citet{zhang-etal-2021-crafting} by lowering the threshold $\gamma$ to create a stricter criterion. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{pic/contour3.pdf} \caption{\label{fig:3} Black spots represent the distribution of adversarial samples. The darker color indicates more effective adversarial examples generated with lower $\gamma$. } \end{figure} \subsection{Clean Test Sets} Adding a large amount of noisy parallel data to clean training data may seriously harm the performance of the NMT model on clean test sets \citep{khayrallah2018impact}. In this sub-section, we test the performance of the proposed model on the clean test sets; the results are presented in Table \ref{tab:5}. The meaning-preserving method CharSwap has a negative effect on the clean test sets, while DRTT achieves the best translation performance on the Zh$\rightarrow$En and En$\rightarrow$De clean test sets. This demonstrates that our approach not only improves the robustness of the NMT model, but also maintains its good performance on clean test sets. \section{Case Study and Limitations} \label{sec:appendixc} In Table \ref{table:case study}, we present some cases from the Zh-En adversarial pairs generated by our approach. From case 1, we can see that ``\begin{CJK*}{UTF8}{gbsn}{拥护}\end{CJK*}'' in the source sentence is replaced by its antonym ``\begin{CJK*}{UTF8}{gbsn}{反对}\end{CJK*}'', which reverses the meaning of the original sentence, and DRTT makes a corresponding change in the target sentence by replacing ``support'' with ``oppose''. In the second case, DRTT replaces ``\begin{CJK*}{UTF8}{gbsn}{良好}\end{CJK*}'' with its synonym ``\begin{CJK*}{UTF8}{gbsn}{不错}\end{CJK*}''; thus, ``satisfactory'' in the target sentence remains unchanged. From these cases, we find that DRTT can reasonably substitute phrases in source sentences based on the context and correctly modify the corresponding target phrases synchronously. Although the proposed approach achieves promising results, it still has limitations. A small number of authentic adversarial examples may be filtered out when a large ${\rm {d_{tgt}}}(\mathbf{y},\mathbf{y'_\delta})$ is caused by $f(\hat{x}_\delta)$; we will ameliorate this problem in the future.
\begin{table} \renewcommand\arraystretch{1} \resizebox{\linewidth}{!}{ \begin{tabular}{l} \toprule x : \begin{CJK*}{UTF8}{gbsn}{\small 我们坚决拥护政府处理这一事件所采取的措施。}\end{CJK*}\\ y : we resolutely support measures taken by our \\ government in handling this incident.\\ \hline ${\rm x_\delta}$ : \begin{CJK*}{UTF8}{gbsn}{\small 我们坚决\textcolor{red}{反对}政府处理这一\textcolor{red}{案件}所采取的\textcolor{red}{举措}。}\end{CJK*}\\ ${\rm y_\delta}$ : we resolutely \textcolor{blue}{oppose} measures taken by our \\ government in handling this \textcolor{blue}{case}. \\ \toprule x : \begin{CJK*}{UTF8}{gbsn}{\small 中美双方认为 , 当前世界经济形势是良好的。通货膨胀}\end{CJK*} \\ \begin{CJK*}{UTF8}{gbsn}{\small继续保持低水平, 大多数新兴 市场经济体的经济增长强劲。}\end{CJK*}\\ y : china and the united states agreed that the present \\ economic situation in the world is satisfactory, with \\ inflation kept at a low level and most of the new market \\ economies growing strong.\\ \hline ${\rm x_\delta}$ : \begin{CJK*}{UTF8}{gbsn}{\small \textcolor{red}{俄}美双方认为, 当前世界\textcolor{red}{贸易势头}是\textcolor{red}{不错}的。 通货膨胀}\end{CJK*}\\ \begin{CJK*}{UTF8}{gbsn}{\small 继续保持低\textcolor{red}{速度}, 大多数新兴市场经济体的经济\textcolor{red}{发展}强劲。}\end{CJK*}\\ ${\rm y_\delta}$ : \textcolor{blue}{russia} and the united states agreed that the present \\ \textcolor{blue}{trade trend} in the world is satisfactory, with inflation \\ kept at a low \textcolor{blue}{rate} and most of the new market economies \\ \textcolor{blue}{developing} strong. \\ \toprule \end{tabular} } \caption{Case study for the proposed approach. The words in red and blue represent the augmented words on the source and target sides, respectively. } \label{table:case study} \end{table} \section{Conclusion and Future Work} We propose a new criterion for NMT adversarial examples based on the Doubly Round-Trip Translation, which ensures that the examples meeting our criterion are authentic adversarial examples. Additionally, based on this criterion, we introduce masked language models to generate bilingual adversarial pairs, which can be used to substantially improve the robustness of the NMT model. Extensive experiments on both the clean and noisy test sets show that our approach not only improves the robustness of the NMT model but also performs well on the clean test sets. In future work, we will address the limitations of this work and explore improving the robustness of the forward and backward models simultaneously. We hope our work will provide a new perspective for future research on adversarial examples. \section*{Acknowledgements} The research work described in this paper has been supported by the National Key R\&D Program of China (2020AAA0108001) and the National Natural Science Foundation of China (Nos. 61976016, 61976015, and 61876198). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper.
\section{Introduction} Mounting data from type Ia supernovae, cosmic microwave background (CMB) radiation, and so on \cite{1998snia,Spergel,Riess,Seljak} have provided strong evidence for a spatially flat and accelerating expanding universe at present, corresponding to $\ddot{a}>0$, which is dominated by dark sectors. Combined analysis of the above cosmological observations supports that the energy of our universe consists of about $73\%$ dark energy (DE), about $23\%$ dark matter, and only about $4\%$ usual baryonic matter, which can be described by the well-known particle theory. In the context of Friedmann-Robertson-Walker (FRW) cosmology, the evolution of the scale factor is governed by the temporal part of the Einstein equations, $3\frac{\ddot{a}}{a}=-4\pi G(\rho+3p)$; the acceleration may thus be attributed to an exotic form of negative pressure satisfying $p<-\frac{1}{3}\rho$, the so-called DE. So far, the nature of DE remains a mystery. To describe the property of this component, a significant parameter $w=\frac{p}{\rho}$, called the Equation of State (EoS), was introduced, and theoretically it needs to satisfy $w<-\frac{1}{3}$. Based on different evolutions of the EoS we obtain different candidates for DE. Currently, the candidate is widely taken to be a small cosmological constant $\Lambda$ (or vacuum energy) with EoS $w=-1$, or a dynamical component such as Quintessence with $-1<w<1$ \cite{Wetterich:1987fm, Ratra:1987rm}, Phantom with $w<-1$ \cite{Caldwell:1999ew}, or K-essence with both $w\geq-1$ and $w<-1$ but never crossing $-1$ \cite{ArmendarizPicon:2000ah,Chiba:1999ka}. Although the recent fits to the data combining WMAP \cite{Spergel:2006hy,Komatsu:2008hk}, the recently released 182 SNIa Gold sample \cite{Riess:2006fw} and also other cosmological observational data are remarkably consistent with the cosmological constant, it is worth noting that a class of dynamical models with the EoS crossing $-1$, dubbed {\it Quintom}, is mildly favored \cite{Feng:2004ad,Zhao:2006qg, Zhao:2006bt, Wang:2006ts}. In the literature there have been a lot of theoretical studies of Quintom-like models. In particular, a No-Go theorem has been proved to constrain the model building of Quintom \cite{Xia:2007km}, and in accordance with this No-Go theorem there are models which involve higher derivative terms for a single scalar field \cite{Li:2005fm}, models with vector fields \cite{ArmendarizPicon:2004pm}, models making use of an extended theory of gravity \cite{Cai:2005ie}, non-local string field theory \cite{Aref'eva:2005fu}, and others (see e.g. \cite{Guo:2004fq, Quintom_tf, Cai:2006dm, Quintom_1, Cai:2007qw, Cai:2007gs, Cai:2007zv, Quintom_others, Xiong:2007cn}). Similar work in scalar-tensor theory has also been studied in Ref. \cite{Elizalde:2004mq}. Besides the many works done in pursuit of establishing concrete models to understand the theoretical nature and origin of this special fluid, a number of papers are devoted to investigating the thermodynamic properties of the DE fluid. The thermodynamics of de Sitter space-time was first investigated by Gibbons and Hawking \cite{Gibbons:1977mu}, and Refs. \cite{Verlinde:2000wg, Cai:2009rd, Pollock:1989pn, Frolov:2002va} extended the study to quasi-de Sitter space-time. Based on the assumption that DE is a thermalized ensemble at a certain temperature with an associated thermodynamical entropy, Refs.
\cite{Brevik:2004sd, Lima:2004wf, GonzalezDiaz:2004eu,Izquierdo:2005ku, MohseniSadjadi:2005ps, Setare:2006vz, Setare:2006rf, Wang:2005pk, Santos:2006ce, Santos:2007jy, Bilic:2008zk} have discussed various aspects of the thermodynamics of DE. The papers \cite{Sheykhi:2008qr,Sheykhi:2008qs} studied the GSL in modified gravity. In Ref. \cite{Husain:2008qx}, the thermodynamics of quantum gravity was investigated. Ref. \cite{Li:2008tc} considered the apparent horizon of the Friedmann-Robertson-Walker universe as a thermodynamical system and investigated the thermodynamics of LQC in the semiclassical region. Previously, a Quintom dark energy model with non-regular spinor matter was considered \cite{Cai:2008gk}. Subsequently, to understand the possible combinations among different types of Quintom models in the spinor field, we studied the implications of cosmic duality for this class of models and realized additional Quintom models with the aid of these dual properties. In the meantime, we also performed the statefinder diagnostic for this Spinor Quintom model \cite{wang:2008cs}. In this paper, we discuss the thermodynamics of the Spinor Quintom model. From the thermodynamical point of view, our universe can be considered as a thermodynamical system filled with a DE perfect fluid, and we examine the GSL and the thermodynamic stability of this system. This letter is organized as follows. In section 2, we investigate the validity of the GSL in the spinor field with the Quintom DE model and indicate the conditions under which the GSL can be satisfied. In section 3, we explore the conditions for the thermodynamic stability of the combination of the Quintom model in the spinor field with the GCG perfect fluid. Some thermodynamic parameters, as functions of entropy and volume, are given in section 4, where we also discuss the relation with stability from the point of view of quantum perturbations. Section 5 contains our conclusions and prospects. \section{GSL in a System Filled with Spinor Quintom Matter} One of the distinguishing features of the driver of the current accelerating expansion, the alleged DE, lies in violating the strong energy condition, $\rho+3p>0$ \cite{Riess,Sper}. As a result of the dependence on theoretical models, the strength of this acceleration is still under debate. While most model-independent analyses suggest that it is below the de Sitter value \cite{Daly:2003iy}, it is certainly true that the body of observational data allows a wide parameter space compatible with an acceleration larger than de Sitter's \cite{Caldwell:1999ew,Hannestad:2004cb}. If this is eventually proven to be the case, this dark component would violate not only the strong energy condition $\rho+3p>0$ but also the dominant energy condition $\rho+p>0$. In the literature, a component with the above properties was dubbed Phantom \cite{Caldwell:1999ew,Caldwell:2003vq}, suffering from a long list of pathologies such as quantum instabilities \cite{Carroll:2003st,Frampton:2003xg}, and leading to a super-accelerating universe ending in a big rip or big crunch along the cosmic evolution. Attracting much attention, this interesting fluid has been widely discussed in recent years \cite{Dabrowski:2003jm,Meng:2003tc}, and Refs. \cite{GonzalezDiaz:2004eu,Myung:2008km} investigated the thermodynamics of a phantom-dark-energy-dominated universe.
The thermodynamics of DE with a constant EoS in the range $-1<w<-\frac{1}{3}$ was considered in \cite{Danielsson:2004xw}, and that of K-essence was also studied in Ref. \cite{Bilic:2008zk}. Following the relation between the event horizon and the thermodynamics of a black hole proposed by Bekenstein in 1973 \cite{Bekenstein:1973ur}, the event horizon of a black hole is a measure of its entropy. This idea has been generalized to the horizons of cosmological models, so that each horizon corresponds to an entropy. Correspondingly, the second law of thermodynamics was modified into a generalized form: the sum of the time derivatives of all horizon entropies plus the time derivative of the normal entropy must be positive, i.e., the sum of entropies must increase with time. Ref. \cite{Davies:1987ti} investigated the validity of the GSL for cosmological models which depart slightly from de Sitter space. Ref. \cite{Izquierdo:2005ku} explored the thermodynamics of DE taking into account the existence of the observer's event horizon in accelerated universes. The conditions for the validity of the generalized second law in a phantom-dominated era were studied in \cite{MohseniSadjadi:2005ps}. The validity of the GSL of thermodynamics for the Quintom DE model with two scalar fields without a coupling potential term was considered in \cite{Setare:2006rf}. In this section, we discuss the validity of the GSL of thermodynamics for a Quintom-dominated universe in the spinor field and clarify its relation with three cosmological entropy bounds: the Bekenstein bound \cite{Bekenstein:1980jp}, the holographic Bekenstein-Hawking bound, and the Hubble bound \cite{Verlinde:2000wg}. To begin the discussion, we deal with the homogeneous and isotropic Friedmann-Robertson-Walker (FRW) space-time, whose metric reads, \begin{equation} ds^{2}=dt^{2}-a^{2}(t)d\vec{x}^2. \end{equation} Assuming that the dynamics of gravity is governed by the Einstein-Hilbert action, for a spinor minimally coupled to general relativity \cite{ArmendarizPicon:2003qk,Vakili:2005ya,Ribas:2005vr}, we have, \begin{equation} S=S_\psi+S_m-\frac{1}{6}\int d^4 x\sqrt{-g}R, \end{equation} where $R$ is the scalar curvature, $S_\psi$ is given by the Dirac action, and $S_m$ describes additional matter fields, such as scalar fields and gauge fields.\footnote{Here, we postulate the symmetries of diffeomorphism and local Lorentz invariance.} We consider the spinor component, which is filled with the Quintom DE fluid, as the thermodynamical system under discussion. With the aid of the dynamics of a spinor field minimally coupled to Einstein's gravity \cite{Weinberg,BirrellDavies,GSW}, we can write down the following Dirac action in a curved space-time background \begin{eqnarray} S_\psi&=&\int d^4 x~e~[\frac{i}{2}(\bar\psi\Gamma^{\mu}D_{\mu} \psi-D_{\mu}\bar\psi\Gamma^{\mu}\psi)-\Phi]\nonumber\\ &=&\int d^4 x ~e~{\cal L}_{\psi}. \end{eqnarray} Here, $e$ is the determinant of the vierbein $e_{\mu}^{a}$ and $\Phi$ stands for any scalar function of $\psi$, $\bar\psi$ and possibly additional matter fields. We will assume that $\Phi$ only depends on the scalar bilinear $\bar\psi\psi$.
From the expression of the Dirac action, we obtain the energy density and the pressure of the spinor field: \begin{eqnarray} \rho_\psi&=&T_{0}^{0}=\Phi~,\\ p_\psi&=&-T_{i}^{i}=\Phi'\bar\psi\psi-\Phi~. \end{eqnarray} For a gauge-transformed, homogeneous and space-independent spinor field, the equations of motion of the spinor read \cite{Cai:2008gk} \begin{eqnarray} \dot{\psi}+\frac{3}{2}H\psi+i\gamma^{0} \Phi' \psi&=&0,\\ \dot{\bar\psi}+\frac{3}{2}H\bar\psi-i\gamma^{0}\Phi' \bar\psi&=&0, \end{eqnarray} where a dot denotes a time derivative, a prime denotes a derivative with respect to $\bar\psi\psi$, and $H$ is the Hubble parameter. In the framework of FRW cosmology, the Friedmann constraint equation is\footnote{Note that we use units $8\pi G=\hbar=c=1$ and all parameters are normalized by $M_p=1/\sqrt{8 \pi G}$ in this letter.} \begin{equation} H^2=\frac{1}{3}\rho_\psi~. \end{equation} From the equations of motion of the spinor and the Friedmann constraint equation, we can obtain the derivative of the Hubble parameter with respect to time, \begin{equation} \dot{H}=\frac{\dot{\rho_\psi}}{6H}=\frac{\Phi'\bar{\psi}\psi}{2}. \end{equation} So we have \begin{equation} \rho_\psi+p_\psi=-2\dot{H}. \end{equation} According to the Gibbs equation \begin{equation} Tds=dE+p_\psi dV=(p_\psi+\rho_\psi)dV+Vdp_\psi, \end{equation} combined with the above relations and the expression of the volume $V=\frac{4}{3}\pi R_H^3$ ($R_H$ is the radius of the event horizon), we may rewrite the first law of thermodynamics as, \begin{eqnarray} Tds&=&-2\dot{H}d(\frac{4}{3}\pi R_H^3)+\frac{4}{3}\pi R_H^3 d\rho_\psi\nonumber\\ &=&-8\pi R_H^2\dot{H}dR_H+8\pi HR_H^3 dH, \end{eqnarray} where $T$ is the temperature of the background spinor fluid. Therefore, the derivative of the normal entropy is given as follows: \begin{equation} \dot{s}=\frac{ds}{dt}=\frac{1}{T}8\pi\dot{H}R_H^2(HR_H-\dot{R_H}). \end{equation} Now we turn to the entropy corresponding to the event horizon. The definition of the event horizon in a de Sitter space-time is \begin{equation} R_H=a(t)\int_t^\infty\frac{dt'}{a(t')}. \end{equation} So the time derivative of the event horizon in a spinor field approaching de Sitter space satisfies the following equation: \begin{equation} \dot{R_H}=\dot{a}(t)\int_t^\infty\frac{dt'}{a(t')}+a(t)\frac{d}{dt}\int_t^\infty\frac{dt'}{a(t')}=HR_H-1. \end{equation} i. In the parameter range $HR_H\leq1$, the Bekenstein bound, which is supposed to hold for systems with limited self-gravity, is appropriate, and the EoS of the spinor is larger than $-1$, corresponding to a Quintessence-dominated universe \cite{Davies:1987ti}. ii. For $HR_H\geq1$, corresponding to a strongly self-gravitating universe, the Bekenstein bound has to be replaced by the holographic Bekenstein-Hawking bound, for which $S_B\geq S_{BH}$, and one gets a Phantom phase \cite{MohseniSadjadi:2005ps}. iii. If $HR_H=1$, the Bekenstein bound $S_B$ is equal to the holographic Bekenstein-Hawking bound $S_{BH}$. Then we can write the final form of the time derivative of the normal entropy, \begin{equation} \dot{s}=\frac{8\pi R_H^2\dot{H}}{T}. \end{equation} As is well known, the entropy is proportional to the area of the event horizon. If the horizon entropy corresponding to $R_H$ is defined as $s_H=\pi R_H^2$, the GSL can be stated as: \begin{equation} \dot{s}+\dot{s_H}=\frac{8\pi R_H^2\dot{H}}{T}+2\pi R_H\dot{R_H}\geq0. \end{equation}
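As a quick sanity check of the relation $\dot{R_H}=HR_H-1$, the following sympy sketch (a standalone verification, not part of the original derivation) evaluates the event horizon for an exact de Sitter scale factor $a(t)=e^{Ht}$, for which $R_H=1/H$ is constant and the relation indeed gives $\dot{R_H}=0$.

\begin{verbatim}
import sympy as sp

t, tp, H = sp.symbols('t tp H', positive=True)
a = sp.exp(H * t)                                        # de Sitter scale factor
R_H = a * sp.integrate(sp.exp(-H * tp), (tp, t, sp.oo))  # event horizon
print(sp.simplify(R_H))                # 1/H, constant in time
print(sp.simplify(sp.diff(R_H, t)))    # 0
print(sp.simplify(H * R_H - 1))        # 0, so R_H' = H R_H - 1 holds
\end{verbatim}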
In the following, we will take the Quintom-B model realized in Ref. \cite{Cai:2008gk} to discuss the validity of the GSL in the spinor field. The temperature of Spinor Quintom-B is assumed to be positive. (1). Phantom-dominated evolution: \\In this phase $\dot{R_H}\leq0$, so $\dot{s_H}\leq0$. From $V'<0$ one gets $\dot{H}>0$. So the condition for the validity of the GSL can be expressed as: \begin{equation} \dot{H}\geq\mid\frac{\dot{R_H}T}{4R_H}\mid. \end{equation} (2). Quintessence-dominated evolution:\\ In this period of the evolution $\dot{R_H}\geq0$; the time derivative of the Hubble parameter is then negative while that of the horizon entropy is non-negative. Thus the condition for the validity of the GSL is: \begin{equation} \mid\dot{H}\mid\leq\frac{\dot{R_H}T}{4R_H}. \end{equation} (3). Phase transition from Phantom to Quintessence:\\ At the transition point, we have $w=-1$ and $V'=0$, that is to say $\dot{H}=0$, so $\dot{s}=0$. Assuming that the event horizon $R_H$ varies continuously, one may expect that $\dot{R_H}=0$ at the transition time, so the horizon entropy is continuous and differentiable \cite{Setare:2006rf}. Therefore, to realize the transition, the total entropy of the universe needs to be continuous and differentiable at the transition time. (4). The final phase--an approximate de Sitter universe:\\ In such a state, the temperature is \cite{Davies:1987ti}, \begin{equation} T=\frac{bH}{2\pi}, \end{equation} where $b$ is a parameter. During this period, the universe lies in the Quintessence phase, so \begin{equation} b\geq\frac{8\pi\mid\dot{H}\mid R_H}{H\dot{R_H}}; \end{equation} in the de Sitter space-time case $R_H=\frac{1}{H}$, one gets $b\geq8\pi$, which should be satisfied if the GSL is valid. In conclusion, one finds that the conditions for the validity of the GSL of the Spinor Quintom model are similar to those of the Quintom DE model constructed with two scalar fields without a coupling potential term, which was considered in \cite{Setare:2006rf}. \section{Thermodynamic Stability of the Combination between Spinor Quintom and the GCG Perfect Fluid} Since the Chaplygin gas was generalized, many related studies \cite{Bento:2003dj,Chimento:2003ta} have been made to reconcile the standard model with observations. Ref. \cite{Santos:2006ce} discussed the behavior of the temperature and the thermodynamic stability of a generalized Chaplygin gas using only general thermodynamics: it derived the corresponding thermal equation of state for the GCG and analyzed its temperature behavior as well as its thermodynamic stability considering both the adiabatic and the thermal equations of state. In the literature \cite{Santos:2007jy}, the Chaplygin gas was modified again, and a scenario was set up to determine the corresponding thermal equation of state of the modified Chaplygin gas (MCG); it revealed that the MCG presents thermodynamic stability during any expansion process only if its thermal equation of state depends on temperature alone, $P=P(T)$. Moreover, the modified Chaplygin gas may cool down through any thermodynamic process without facing any critical point or phase transition. We have established a combination of the Chaplygin gas and Spinor Quintom in Ref. \cite{Cai:2008gk}; in this section we will investigate the thermodynamic stability of a universe filled with the fluid combining Quintom and the GCG in the spinor field.
In Ref. \cite{Cai:2008gk}, we took the potential of the form $\Phi=\sqrt[1+\beta]{\Phi_0(\bar{\psi}\psi)^{1+\beta}+c}$, and obtained the EoS of the GCG model \begin{equation} p=-\frac{c}{\rho^{\beta}}~, \end{equation} where the parameter $\beta$ is a positive constant, $\beta>0$, and $c$ is also a positive universal constant \cite{Santos:2006ce}. Here we consider a closed thermodynamic system full of DE fluid, in which the combination of Spinor Quintom with the GCG plays an important role. Assuming the internal energy $U$ and the pressure $p$ to be functions only of their natural variables, the entropy $s$ and the volume $V$, $U=U(s,V)$, $p=p(s,V)$, the energy density of the DE fluid is \begin{equation} \rho=\frac{U}{V}~. \end{equation} From general thermodynamics \cite{Kubo,Landau}, we know that \begin{equation} (\frac{\partial U}{\partial V})_s=-p~. \end{equation} Combining the above three equations, we get the following form, \begin{equation} (\frac{\partial U}{\partial V})_s=c\frac{V^\beta}{U^\beta}, \end{equation} and the expression of the internal energy of this system is given by its solution, \begin{equation} U=\sqrt[1+\beta]{cV^{1+\beta}+b}, \end{equation} where $b=b(s)$ is an integration parameter. It can be proven that even if $c=c(s)$ is not a universal constant, the above expression remains valid. The expression above can also be written as \cite{Santos:2006ce}: \begin{equation} U=V\sqrt[1+\beta]{c[1+(\frac{\sigma}{V})^{1+\beta}]}, \end{equation} where the parameter $\sigma$ satisfies $\sigma^{1+\beta}=\frac{b}{c}$. Then we may deduce the expressions of the energy density and the pressure in terms of this parameter, \begin{eqnarray} \label{density}\rho&=&\sqrt[1+\beta]{c[1+(\frac{\sigma}{V})^{1+\beta}]}~,\\ \label{pressure}p&=&-\sqrt[1+\beta]{\frac{c}{[1+(\frac{\sigma}{V})^{1+\beta}]^{\beta}}}~. \end{eqnarray} From these two equations, we can understand the behavior of both the past and the future of our universe. At early times, with a small scale factor and volume, the energy density and pressure behave as \begin{eqnarray} \rho&\approx&c^{\frac{1}{1+\beta}}\frac{\sigma}{V},\\ p&\approx&-c^{\frac{1}{1+\beta}}(\frac{V}{\sigma})^{\beta}\sim0, \end{eqnarray} corresponding to a high-energy-density, approximately pressureless matter-dominated phase. During this period the energy density decreases adiabatically with the volume at constant entropy. Along with the cosmological expansion, at some late times these two parameters approximate respectively to \begin{eqnarray} \rho&\approx&c^{\frac{1}{1+\beta}}+\frac{c^{\frac{1}{1+\beta}}}{1+\beta}(\frac{\sigma}{V})^{1+\beta},\\ p&\approx&-c^{\frac{1}{1+\beta}}. \end{eqnarray} During this period of the evolution, the total system can be seen as constituted of two components: one with a constant energy density and the other with an energy density varying with the volume. For a large value of the scale factor, the energy density becomes rather low and the EoS approaches $p=-\rho$ with $\rho=c^{\frac{1}{1+\beta}}$, which corresponds to a de Sitter space-time. Consequently, we realize a transition from a dust-like matter-dominated universe to a de Sitter phase from the thermodynamic point of view.
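As a quick consistency check (a standalone verification, not part of the original derivation), the following sympy sketch confirms that the internal energy $U=\sqrt[1+\beta]{cV^{1+\beta}+b}$ obtained above indeed satisfies $(\partial U/\partial V)_s=cV^{\beta}/U^{\beta}$.

\begin{verbatim}
import sympy as sp

V, c, b, beta = sp.symbols('V c b beta', positive=True)
U = (c * V**(1 + beta) + b)**(1 / (1 + beta))  # proposed internal energy
lhs = sp.diff(U, V)              # (dU/dV)_s, with b = b(s) held fixed
rhs = c * V**beta / U**beta      # right-hand side c V^beta / U^beta
print(sp.simplify(lhs / rhs))    # expected output: 1
\end{verbatim}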
In what follows, we will extensively examine the conditions for the thermodynamic stability of this combined system. (1). We determine how the pressure changes with volume during adiabatic expansion.\\ Using Eq.~(\ref{pressure}), one can get \begin{equation} (\frac{\partial p}{\partial V})_s=\beta\frac{p}{V}[1-\frac{1}{1+(\frac{\sigma}{V})^{1+\beta}}]. \end{equation} Obviously, we exclude the case $\beta=0$, for which the pressure is constant and the derivative vanishes. In the case $\beta>0$ the above derivative is always negative, since $p<0$. (2). To make the system stable, it is necessary that the heat capacity at constant volume be positive, $c_V>0$, and that the pressure decrease with volume at constant temperature.\\ For this purpose, we calculate the temperature $T$ and the entropy $s$ to determine how the temperature depends on the entropy and the volume. In thermodynamics and statistical physics, the temperature of a system is defined as: \begin{equation} T=(\frac{\partial U}{\partial s})_V; \end{equation} combined with the expression of the internal energy, the temperature can be written as follows \cite{Santos:2006ce}: \begin{equation} T=\frac{1}{1+\beta}(cV^{1+\beta}+b)^{-\frac{\beta}{1+\beta}}(V^{1+\beta}\frac{dc}{ds}+\frac{db}{ds}). \end{equation} Clearly, if we take both $c$ and $b$ to be universal constants, the temperature equals $0$ for any value of the pressure and volume. As a result, the isotherm $T=0$ is simultaneously an isentropic curve at $s=const$, which violates the third law of thermodynamics \cite{Santos:2006ce}. Taking this factor into account, we choose $c$ as a universal constant and $\frac{db}{ds}>0$. From dimensional analysis it can be understood that $b^{\frac{1}{1+\beta}}$ has the dimension of energy, $[b]=[U]^{1+\beta}$. In this case, we take it as \cite{Santos:2006ce} \begin{equation} b=(T_0s)^{1+\beta}, \end{equation} so \begin{equation} \frac{db}{ds}=(1+\beta)(T_0s)^{\beta}T_0. \end{equation} Then the formulae for the temperature and the entropy of this system can be written as: \begin{eqnarray} T&=&T_0^{1+\beta}s^{\beta}[cV^{1+\beta}+(T_0s)^{1+\beta}]^{-\frac{\beta}{1+\beta}},\\ s&=&\frac{c^{\frac{1}{1+\beta}}}{T_0}\frac{T^{\frac{1}{\beta}}}{(T_0^{\frac{1+\beta}{\beta}}-T^{\frac{1+\beta}{\beta}})^{\frac{1}{1+\beta}}}V. \end{eqnarray} A stable thermodynamic system requires a positive and finite entropy, which requires that the temperature satisfy \begin{equation} 0<T<T_0. \end{equation} By the definition of $c_V$ and the formulae for the temperature and the entropy, one can rewrite $c_V$ as, \begin{equation} c_V=\frac{1}{\beta T_0}\frac{c^{\frac{1}{\beta}}V}{[1-(\frac{T}{T_0})^{\frac{1+\beta}{\beta}}]^{\frac{2+\beta}{1+\beta}}}(\frac{T}{T_0})^{\frac{1}{\beta}}. \end{equation} Thus, when $\beta>0$ and $0<T<T_0$, one gets a positive $c_V$. Correspondingly, we can obtain the expression of the pressure, \begin{equation} p=-c^{\frac{1}{1+\beta}}[1-(\frac{T}{T_0})^{\frac{1+\beta}{\beta}}]^{\frac{\beta}{1+\beta}}. \end{equation} It can be seen that the pressure is a function of the temperature only, so $(\frac{\partial p}{\partial V})_T=0$ and the stability condition $(\frac{\partial p}{\partial V})_T\leq0$ is satisfied. In a word, in the case $\beta>0$ and $0<T<T_0$, the system we consider is thermodynamically stable.
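A quick numerical sanity check (with arbitrarily chosen positive test values, not taken from the original analysis) confirms that the heat capacity derived above is positive throughout $0<T<T_0$ when $\beta>0$.

\begin{verbatim}
import numpy as np

beta, c, V, T0 = 0.5, 1.0, 2.0, 1.0      # arbitrary positive test values
T = np.linspace(1e-3, T0 - 1e-3, 1000)   # temperatures in (0, T0)
cV = (c**(1 / beta) * V / (beta * T0)
      * (T / T0)**(1 / beta)
      / (1 - (T / T0)**((1 + beta) / beta))**((2 + beta) / (1 + beta)))
print(np.all(cV > 0))                    # True: c_V > 0, the system is stable
\end{verbatim}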
\section{Thermodynamic Parameters and Their Relation with Quantum Stability} In the first two sections, we have studied the stability of a system filled with the Spinor Quintom DE fluid from the classical thermodynamic point of view. In this part, we derive a class of thermal quantities as functions either of the entropy or of the volume, and then discuss their relation with quantum perturbations and which constraint is stronger. From the expressions of the energy density (Eq.~(\ref{density})) and pressure (Eq.~(\ref{pressure})), we get, \begin{eqnarray} &&\rho+P=\sqrt[1+\beta]{c\left[1+(\frac{\sigma}{V})^{1+\beta}\right]}-\sqrt[1+\beta]{\frac{c}{\left[1+(\frac{\sigma}{V})^{1+\beta}\right]^{\beta}}}\nonumber\\ &&=\sqrt[1+\beta]{c}\left(\sqrt[1+\beta]{1+(\frac{\sigma}{V})^{1+\beta}}-\frac{1}{\sqrt[1+\beta]{\left[1+(\frac{\sigma}{V})^{1+\beta}\right]^{\beta}}}\right). \end{eqnarray} Besides, from the definition of the entropy \begin{equation} S\equiv\frac{\rho+P}{T}V, \end{equation} we can derive a defining equation of the temperature for an adiabatic process, \begin{equation} \label{T}T\equiv\frac{\rho+P}{S}V. \end{equation} Then we have the temperature \begin{eqnarray} T_{(V)}&=&\frac{\sqrt[1+\beta]{c}}{S}\Big(\sqrt[1+\beta]{V^{1+\beta}+\sigma^{1+\beta}}\nonumber\\&&-\frac{V^{1+\beta}}{(V^{1+\beta}+\sigma^{1+\beta})^{\frac{\beta}{1+\beta}}}\Big). \end{eqnarray} In addition, the EoS $W_{(V)}$, the squared speed of sound $C^2_{s(V)}$ and the entropy $S_{(V)}$ read, respectively, \begin{eqnarray} W_{(V)}&=&\frac{P}{\rho}=-\frac{V^{1+\beta}}{V^{1+\beta}+\sigma^{1+\beta}},\\ C^2_{s(V)}&=&\frac{\partial P}{\partial\rho}=\frac{\beta V^{1+\beta}}{V^{1+\beta}+\sigma^{1+\beta}},\\ S_{(V)}&=&\frac{c^{\frac{1}{1+\beta}}}{T}\Big(\sqrt[1+\beta]{V^{1+\beta}+\sigma^{1+\beta}}\nonumber\\&&-\frac{V^{1+\beta}}{(V^{1+\beta}+\sigma^{1+\beta})^{\frac{\beta}{1+\beta}}}\Big). \end{eqnarray} Combining the integrability condition \begin{equation} \frac{\partial^2S}{\partial T\partial V}=\frac{\partial^2S}{\partial V\partial T}, \end{equation} the Maxwell relation \begin{equation} \frac{\partial T}{\partial V}=-\frac{\partial P}{\partial S}, \end{equation} and Eq.~(\ref{T}) leads to the relation, \begin{equation} dP=\frac{\rho+P}{S}dS. \end{equation} Setting $\beta=1$ in Eqs.~(\ref{density}) and (\ref{pressure}), one has \begin{eqnarray} \rho+P&=&\sqrt{c}\frac{\frac{\sigma^2}{V^2}}{\sqrt{1+(\frac{\sigma}{V})^2}} \nonumber\\ &=&P-\frac{c}{P}, \end{eqnarray} so \begin{equation} \frac{PdP}{P^2-c}=\frac{dS}{S}. \end{equation} Finally we can get the thermal quantities as functions of the entropy, \begin{eqnarray} P_{(S)}&=&-\sqrt{c}\sqrt{1-(\frac{S}{S_*})^2},\\ \rho_{(S)}&=&\frac{\sqrt{c}}{\sqrt{1-(\frac{S}{S_*})^2}},\\ W_{(S)}&=&(\frac{S}{S_*})^2-1,\\ C^2_{s(S)}&=&1-(\frac{S}{S_*})^2. \end{eqnarray}
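As a consistency check of these entropy-dependent quantities, the following sympy sketch (a standalone verification, not part of the original derivation) confirms that $W_{(S)}=P_{(S)}/\rho_{(S)}$ and $C^2_{s(S)}=\partial P/\partial\rho=c/\rho^2$, the latter following from the GCG EoS $p=-c/\rho$ at $\beta=1$, reproduce the stated expressions.

\begin{verbatim}
import sympy as sp

S, S_star, c = sp.symbols('S S_star c', positive=True)
u = (S / S_star)**2
P = -sp.sqrt(c) * sp.sqrt(1 - u)          # P_(S)
rho = sp.sqrt(c) / sp.sqrt(1 - u)         # rho_(S)
print(sp.simplify(P / rho - (u - 1)))     # 0: W_(S) = (S/S_*)^2 - 1
print(sp.simplify(c / rho**2 - (1 - u)))  # 0: C_s^2 = 1 - (S/S_*)^2
\end{verbatim}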
Based on the above expressions of these quantities, we may analyze the quantum stability in connection with perturbations, which is an important issue for a DE model. Usually, systems with negative kinetic modes from ghost fields suffer from quantum instabilities, which may induce supersonic phenomena. However, in our Spinor Quintom DE model we do not introduce any ghost field; does this mean that the model will not exhibit any quantum instability? To study this issue, we redefine the spinor as $\psi_N\equiv a^{\frac{3}{2}}\psi$. Then, perturbing the spinor field, one obtains the following perturbation equation \cite{Cai:2008gk}, \begin{eqnarray}\label{perteq} &&\frac{d^2}{d\tau^2}\delta\psi_N-\nabla^2\delta\psi_N+\nonumber\\ &&a^2\left[ V'^2+i\gamma^0 (HV'-3HV''\bar\psi\psi) \right]\delta\psi_N\nonumber\\ &&=-2a^2V'V''\delta(\bar\psi\psi)\psi_N\nonumber\\ &&-i\gamma^\mu\partial_\mu[a V''\delta(\bar\psi\psi)]\psi_N~, \end{eqnarray} where $\tau$ is the conformal time defined by $d\tau\equiv dt/a$. From the perturbation equation above, we can read off that the sound speed is equal to $1$, which eliminates the instability of the system at short wavelengths. To what degree the system is stable at both the quantum and classical levels, which constraint is stronger, and whether there are instabilities arising from unrenormalizable quantum effects, are issues we may discuss in detail in our future work. \section{Conclusion and Discussions} To summarize, we have investigated the thermodynamics of a thermodynamical system dominated by Quintom DE in the spinor field. Firstly, we show the conditions under which the total entropy does not decrease with time, not only in the Phantom and Quintessence phases but also at the transition time and in the final approximately de Sitter phase. We obtain conditions similar to those of a Quintom universe with two scalar fields without a coupling potential term. Secondly, using general thermodynamics, we explore the thermodynamic stability of a system full of the DE fluid combining Spinor Quintom with the GCG, and we conclude that in a certain range of temperature, i.e. $0<T<T_0$, this system remains thermodynamically stable without any limitation on the pressure. We also derive a class of thermal quantities as functions either of entropy or of volume, and discuss their relation with quantum perturbations. In our future work, we may clarify which constraint is stronger by detailed calculations. \section*{Acknowledgements} It is a pleasure to thank Xin-min Zhang for helpful discussions and advice. This work is supported in part by the Natural Sciences Foundation of China (No. 10975046), NSFC (No. 10773017) and the National Basic Research Program of China (2009CB824800). \vfill
\section*{Methods} \noindent{\bf Atom-light Hamiltonian} The Hamiltonian of equation (\ref{eqn:heff}) is obtained from the off-resonant coupling to an atomic dipole transition. The excited atomic states can be adiabatically eliminated from the dipole-interaction Hamiltonian under the conditions described in the main text. The dispersive effects arising from the Stark shift of the atomic levels are described via an effective Hamiltonian \begin{equation} \hat H=-\int_0^Ldz\rho A\left(a_0\hat\phi+a_1\hat{s}_z\hat{j}_z+a_2\left[\hat\phi\hat{j}_z^2 -\hat s_-\hat j_+^2-\hat s_+\hat{j}_-^2\right]\right). \end{equation} Here $L$ is the length of the atomic sample and $A$ its cross section overlapping with the probe light propagating in the $z$-direction, and $\rho\equiv\rho(z)$ is the atomic density. The light is described via Stokes operators $\hat s_{i}\equiv\hat s_{i}(z,t)$ ($\hat s_{\pm}=\hat s_1\pm i\hat s_2$) and $\hat\phi$ is the total photon density. The coefficients $a_i$ depend on the laser wavelength and the characteristics of the atomic transition. The first term, proportional to $a_0$, gives the (polarization-independent) ac Stark shift. In the limit where the detuning is large compared to the hyperfine splitting of the excited state, $a_2\approx0$, and only the linear QND coupling between the Stokes operator and the atomic spin remains. This is equivalently written in equation (\ref{eqn:heff}) as a coupling between the Stokes operator and the effective atomic spin component $\hat J_z^{\rm eff}$. To proceed, we assume that $\hat J_z^{\rm eff}$ is time independent. This is valid since: (i) it is conserved by the Hamiltonian of equation (\ref{eqn:heff}), and (ii) the measurement time for a pulsed probe is much shorter than the atomic spin diffusion time. The outgoing $\hat X$ quadrature of light is then given by Eq.~(\ref{eqn:xout}) provided that $a_1^2F^2N_{\rm at}^2/2\ll1$. The coupling constant reads $\kappa=a_1\sqrt{N_{\rm at}N_{\rm ph}F/2}$. \noindent{\bf 1D spin chains} The strongly correlated states of spin-1 atoms on a 1D lattice considered in this paper are examples of ground states (or idealizations thereof) of the generalized spin-1 Heisenberg Hamiltonian \cite{demler,yip,legaza} \begin{equation} H=\sum_{\ew{n,n'}} \cos\beta\,\hat{\vecb{j}}(z_n)\cdot\hat{\vecb{j}}(z_{n'})+\sin\beta\, \left[\hat{\vecb{j}}(z_n)\cdot\hat{\vecb{j}}(z_{n'})\right]^2. \end{equation} Due to the interplay between the bilinear and the biquadratic interactions (parametrized by $\beta$), this Hamiltonian presents three antiferromagnetic quantum phases: dimerized $\beta \in (-3\pi/4,-\pi/4)$, Haldane $\beta \in (-\pi/4,\pi/4)$, and critical $\beta \in [\pi/4,\pi/2)$. The representative point of the dimerized phase is found at $\beta=-\pi/2$, where the two-site ground state is a singlet ({\it dimer}, \cite{demler,yip}): $\ket{s}_{1,2}=(\ket{+1}_1\ket{-1}_2+\ket{-1}_1\ket{+1}_2-\ket{0}_1\ket{0}_2)/\sqrt3$. Here we denote by $\ket{m_z=\pm1,0}_n$ the eigenstates of $\hat j_z(z_n)$. This is the exact ground state only in the thermodynamic limit or for periodic boundary conditions. In a finite chain with an even number of sites, however, the ground state is close to a concatenation of spin-1 singlets (see Fig.~\ref{fig1}(b)). The representative state of the Haldane phase is found at $\beta=\arctan(1/3)$, where the exact ground state corresponds to the AKLT state (see Fig.~\ref{fig1}(d)). The critical phase is less well understood, though known to have period-three correlations.
At $\beta=\pi/4$ the three-site ground state is a singlet ({\it trimer}), and a caricature of the ground state for a larger number of sites is given by a concatenation of trimers (cf.~Fig.~\ref{fig1}(c)). These three isotropic ground states are eigenstates of any component of the total spin. Thus for $k_P=k$, where all atoms couple equally to the light, no additional noise is imprinted on the $\hat X$ quadrature. For $k_P\neq k$, these states are in general not eigenstates of $\hat{J}^{\rm\ eff}_z$ (although $\langle \hat{J}^{\rm\ eff}_z \rangle=0$), and as we demonstrate in Fig.~\ref{fig1}(c), the fluctuations permit us to discriminate them unambiguously.
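As an illustration, the dimer can be checked numerically to be a total-spin singlet. The following numpy sketch (a standalone check, not part of the original analysis) builds $\ket{s}_{1,2}$ and verifies that it is annihilated by the total spin squared of the two sites.

\begin{verbatim}
import numpy as np

# Spin-1 operators in the basis {|+1>, |0>, |-1>}
jz = np.diag([1.0, 0.0, -1.0])
jp = np.sqrt(2) * np.diag([1.0, 1.0], k=1)   # raising operator j_+
jm = jp.T                                    # lowering operator j_-
jx, jy = (jp + jm) / 2, (jp - jm) / 2j

eye = np.eye(3)
# Total spin components on the two sites
Jtot = [np.kron(j, eye) + np.kron(eye, j) for j in (jx, jy, jz)]
J2 = sum(J @ J for J in Jtot)

# Dimer (|+1,-1> + |-1,+1> - |0,0>)/sqrt(3); index 0:+1, 1:0, 2:-1
s = np.zeros(9, dtype=complex)
s[0 * 3 + 2] = s[2 * 3 + 0] = 1 / np.sqrt(3)
s[1 * 3 + 1] = -1 / np.sqrt(3)

print(np.allclose(J2 @ s, 0))   # True: the dimer is a total-spin singlet
\end{verbatim}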
\section{Introduction} Graph neural networks \citep{2016Semi,bruna2013spectral,defferrard2016convolutional,velivckovic2017graph,abu2018watch,zhang2018gaan,lee2018graph,klicpera2018predict} have achieved great success in processing graph data, which is rich in information about the relationships between objects, and have been successfully applied to chemistry \citep{2019Graph,kearnes2016molecular,de2018molgan,gilmer2017neural}, traffic prediction \citep{2019Traffic,2019Predicting,2019Predicting2}, knowledge graphs \citep{2019Estimating,2019Knowledge}, social networks \citep{2019Learning,qiu2018deepinf}, recommendation systems \citep{2018Graph} and other areas. As deep learning develops, explanations from the mathematical perspective are highly valued. This facilitates a deeper understanding of deep neural networks and promotes their development. In recent years, the rapid development of graph neural network research has been accompanied by problems that show a growing demand for mathematical explanations. Many scholars analyze GNNs using mathematical tools, including Weisfeiler-Lehman tests \citep{xu2018powerful,maron2019provably,azizian2020expressive}, spectral analysis \citep{nt2019revisiting,li2018deeper,wu2019simplifying} and dynamical systems \citep{oono2019graph}. However, as for other deep neural networks, theoretical analysis of GNNs is still scarce. Different perspectives are needed to expand scholars' understanding and promote the development of GNNs. The deepening of networks has brought about changes in neural networks and caused a boom in deep learning. Unlike for typical deep neural networks, however, there is an important problem in the training of graph neural networks that hinders further deepening of the models. Researchers have found that the performance of some graph neural networks decreases rather than increases as the depth of the model grows. Most scholars attribute this anomaly to \emph{over-smoothing} \citep{li2018deeper}, a phenomenon in which the feature vectors of different nodes tend to become consistent as the network deepens, leading to indistinguishable node representations. Many researchers have studied this problem and proposed improvement methods \citep{li2018deeper,oono2019graph,rong2019dropedge,chen2020simple,chen2020measuring,huang2020tackling, cai2020note,yang2020revisiting,chiang2019cluster,li2020deepergcn}. There are also articles explaining and analyzing the over-smoothing problem from the spectral analysis \citep{li2018deeper} and dynamical systems \citep{oono2019graph} perspectives. However, there is still no unified theoretical framework to prove the effectiveness of these methods. In addition, the articles on related theoretical analysis have focused on specific models like the \emph{graph convolution network} (GCN) or the \emph{graph attention network} (GAT), and lack a comprehensive analysis and understanding of general graph neural networks. \\ \noindent \textbf{Contribution.} \begin{itemize} \item Noting the Markov property of the forward propagation process of GNNs, in this paper we develop a mathematical model to explain and analyze GNNs. Viewing the node set as a state space and the features of the nodes as distributions over the state space, we model the forward message passing process of a GNN as a discrete-time finite-state Markov chain. We divide message passing neural networks into two categories, operator-consistent models and operator-inconsistent models, based on whether the message passing operators of all layers coincide.
Furthermore, we model the forward propagation process of the graph convolution model as a simple random walk on the graph and the forward propagation process of the graph attention model as a time-inhomogeneous Markov chain on the graph. In addition, we model the stochastic method DropEdge \citep{rong2019dropedge} as a random environment and use \emph{Markov Chains in Random Environments} (MCRE) to study GNN models that use the DropEdge method. \item We comprehensively study the over-smoothing problem of graph neural networks using the Markov chain model. We attribute the over-smoothing problem to the convergence of the probability distribution over the node set to the stationary distribution. Further, without being restricted to a specific GNN model, we study the over-smoothing problem of general GNNs. On the one hand, we show that the operator-consistent GNN can neither avoid over-smoothing nor avoid over-smoothing at the exponential rate. On the other hand, we show that the operator-inconsistent GNN model does not necessarily over-smooth, and we prove a sufficient condition for it to avoid over-smoothing. In addition, we interpret the methods for alleviating over-smoothing as different forms of lazy walk on the graph and prove the effectiveness of these methods. This part not only solves an important problem that limits the development of GNNs, but also is a successful case of using Markov chains to understand and analyze GNNs, demonstrating the potential of the Markov model for studying other problems in GNNs. \item We design experiments to verify our conclusions. Based on the sufficient condition, we propose a regularization term, which we call GNN-OI\footnote{'OI' is taken from \textbf{O}perator-\textbf{I}nconsistent. GNN-OI is the generic term for various GNN models which can be specified such as GAT-OI and GEN-OI.} and which can be plugged into existing GNN models by simply adding it to the original objective. GAT-OI improves performance and alleviates the over-smoothing problem. Also for GEN~\citep{li2020deepergcn}, we conduct experiments in Appendix B, and the results demonstrate the clear improvement of GEN-OI on related graph tasks. \end{itemize} \textbf{Outline.} In Section \ref{2}, we introduce graph neural networks, as well as over-smoothing, which is an important issue limiting the development of graph neural networks. In addition, some definitions and conclusions from Markov chain theory are introduced. In Section \ref{3}, we model the GNN with the Markov chain on the graph in detail and model DropEdge \citep{rong2019dropedge}, a common stochastic regularization method in GNNs, with the random environment. In Section \ref{4}, we study the over-smoothing problem based on the modeling in Section \ref{3}. Finally, in Section \ref{6}, we validate our conclusions through experiments on real datasets.\\ \noindent \textbf{Notation.} Let $ \mathcal{G}=(\mathcal{V},\mathcal{E}) $ be a connected non-bipartite graph, where $ \mathcal{V}:=\{1,2,\ldots,N\} $ is the node set, $ \mathcal{E} $ is the edge set, $ N=|\mathcal{V}| $ is the number of nodes and $ |\mathcal{A}| $ denotes the number of elements in the set $ \mathcal{A} $. If nodes $ u,v\in\mathcal{V} $ are connected by an edge, we write $ (u,v)\in\mathcal{E} $. $ \deg(u) $ denotes the degree of node $ u\in\mathcal{V} $ and $ \mathcal{N}(v) $ denotes the neighbors of node $ v $. The corresponding adjacency matrix is $ A $ and the degree matrix is $ D $.
$ \tilde{\mathcal{G}}=(\tilde{\mathcal{V}},\tilde{\mathcal{E}}) $ denotes the graph $ \mathcal{G} $ with self-loops added; the corresponding adjacency matrix is $ \tilde{A} $ and the degree matrix is $ \tilde{D} $. Let $ \mathbb{R} $ be the set of real numbers, $ \mathbb{N} $ be the set of natural numbers, $ \mathbb{Z}^{+} $ be the set of positive integers, and $ F $ be the feature dimension of the nodes. Let $ (\Omega,\mathcal{F},\mathbf{P}) $ be a probability space and $ \mathbf{E} $ be the expectation operator on it. $ \|\;\cdot\;\| $ denotes the $ L^{1} $ norm and $ \|\;\cdot\;\|_{TV} $ denotes the total variation distance. $ P^{\text{T}} $ denotes the transposed matrix of the square matrix $ P $. $ p(i,j) $ denotes the element in the $ i $th row and $ j $th column of the matrix. $ P(i,\;\cdot\;) $ denotes the row vector formed by the $ i $th row of the matrix $ P $ and $ P(\;\cdot\;,j) $ denotes the column vector formed by the $ j $th column of the matrix $ P $. Let $ P_{\text{rw}}:=D^{-1}A $ be the transition matrix of the simple random walk on the graph $ \mathcal{G} $, $ \tilde{P}_{\text{rw}}:=\tilde{D}^{-1}\tilde{A} $ be the transition matrix of the simple random walk on the graph $ \tilde{\mathcal{G}} $, and $ P_{\text{lazy}} := (1-\gamma)D^{-1}A+\gamma I $ be the transition matrix of the lazy walk on the graph $ \mathcal{G} $. \section{Preliminaries}\label{2} In this section we introduce the basic GNN model, the over-smoothing problem, and some definitions and conclusions from Markov chain theory. \subsection{Graph Neural Networks}\label{2.1} \textbf{Graph convolution networks} are currently the most important class of GNN models. Researchers have been working to transfer the success of convolutional neural networks to the learning of graph data. Given a graph $ \mathcal{G}=(\mathcal{V},\mathcal{E}) $ and the initial features of the nodes on it $ H^{(0)}\in\mathbb{R}^{N\times F} $, \citet{2016Semi} improved neural network models on graphs \citep{bruna2013spectral,defferrard2016convolutional} and proposed the most widely studied and applied vanilla GCN $$ H^{(l)}=\sigma_{W^{(l)}}\left(P_{\text{GCN}}H^{(l-1)}\right), $$ where $ H^{(l)}\in\mathbb{R}^{N\times F} $ is the node feature vector output by the $ l $th hidden layer, $ W^{(l)} $ is the parameter of the $ l $th hidden layer, and $ P_{\text{GCN}}:=\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}} $ is the \emph{graph convolution operator}. \begin{remark} Since the parameter $ W^{(l)} $ is fixed during forward propagation, we write the parameter $ W^{(l)} $ together with the activation function as $ \sigma_{W^{(l)}} $. This notation is also maintained for the other models later on. \end{remark} \noindent \textbf{Graph attention networks.} Inspired by the attention mechanism, many scholars have proposed attention-based graph neural network models \citep{velivckovic2017graph,abu2018watch,zhang2018gaan,lee2018graph}. Among them, GAT \citep{velivckovic2017graph} is the most representative model.
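Before turning to GAT, the vanilla GCN layer above can be sketched in a few lines of NumPy. This is a toy illustration that we add here, not the authors' implementation; the graph, features, and parameters are arbitrary, and bias terms and training logic are omitted.
\begin{verbatim}
import numpy as np

def gcn_layer(A, H, W):
    # one minimal GCN layer: ReLU(P_GCN @ H @ W); bias and multi-head details omitted
    N = A.shape[0]
    A_tilde = A + np.eye(N)                      # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    P_gcn = D_inv_sqrt @ A_tilde @ D_inv_sqrt    # graph convolution operator
    return np.maximum(P_gcn @ H @ W, 0.0)        # ReLU

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)        # toy 4-node graph
H = np.random.rand(4, 2)                         # initial features, F = 2
W = np.random.rand(2, 2)                         # layer parameters
print(gcn_layer(A, H, W))
\end{verbatim}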
GAT establishes attention functions between nodes $ u,v\in\mathcal{V} $ with connected edges $ (u,v)\in\mathcal{E} $ $$ \alpha_{u,v}^{(l)}=\frac{\exp(\phi^{(l)}(h_{u}^{(l-1)},h_{v}^{(l-1)}))}{\sum_{k\in\mathcal{N}(u)}\exp(\phi^{(l)}(h_{u}^{(l-1)},h_{k}^{(l-1)}))}, $$ where $ h_{u}^{(l)}\in\mathbb{R}^{F} $ is the embedding for node $ u $ at the $ l $-th layer and $$ \phi^{(l)}(h_{u}^{(l-1)},h_{v}^{(l-1)}):=\text{LeakyReLU}(\mathbf{a}^{\text{T}}[W^{(l)}h_{u}^{(l-1)}\|W^{(l)}h_{v}^{(l-1)}]), $$ where $ \mathbf{a}\in\mathbb{R}^{2F} $ and $ W^{(l)} $ is the weight matrix. Then the GAT layer is defined as $$ h_{u}^{(l)}:=\sigma_{W^{(l)}}\left(\sum_{v\in\mathcal{N}(u)}\alpha_{u,v}^{(l)}h_{v}^{(l-1)}\right). $$ Written in matrix form, $$ H^{(l)}=\sigma_{W^{(l)}}(P_{\text{att}}^{(l)}H^{(l-1)}), $$ where $ P_{\text{att}}^{(l)}\in\mathbb{R}^{N\times N}$ is the attention matrix satisfying $ P_{\text{att}}^{(l)}(u,v)=\alpha_{u,v}^{(l)} $ if $ v\in\mathcal{N}(u) $, otherwise $ P_{\text{att}}^{(l)}(u,v)=0 $, and $ \sum_{v=1}^{N}P_{\text{att}}^{(l)}(u,v)=1 $. ~\\ \noindent \textbf{Message Passing Neural Network} (MPNN) is a GNN model proposed by \citet{gilmer2017neural}. MPNN is a general framework for current GNN models, and most GNN models can be unified under this framework. It describes the GNN uniformly as a process in which the information of the neighbors of a node $ u\in\mathcal{V} $ in a graph is passed as messages along the edges and aggregated at the node $ u $, i.e. $$ h_{u}^{(l)}=\text{UP}^{(l)}\left(h_{u}^{(l-1)},\text{MSG}^{(l)}(h_{v}^{(l-1)},v\in\mathcal{N}(u))\right), \eqno{(1)}$$ where $ h_{u}^{(l)} $ denotes the features of node $ u $ output from the $ l $th hidden layer of the model, and $ \text{UP}^{(l)} $ and $ \text{MSG}^{(l)} $ denote the update and message passing functions of the $ l $th layer, respectively. GNNs such as GCN and GAT can be written in this form, differing only in the update and message passing functions. \subsection{Over-smoothing}\label{2.2} As neural networks became deeper and deeper, researchers observed that GCNs achieved good experimental results with shallow architectures but, counterintuitively, performed worse as the number of layers in the network increased. They found that this is due to the fact that, during GCN training, the hidden layer representations of the nodes tend to converge to the same value as the number of layers increases. This phenomenon is called over-smoothing. This problem affects the deepening of GNN layers and limits the further development of GNNs. The main current methods to alleviate over-smoothing are the residual connections method \citep{2016Semi,chiang2019cluster}, personalized propagation of neural predictions (PPNP) \citep{klicpera2018predict}, and the DropEdge method \citep{rong2019dropedge,huang2020tackling}. ~\\ \noindent \textbf{Residual connections method} was proposed based on an intuitive analysis of the over-smoothing problem. From a qualitative perspective, the over-smoothing problem means that as the network is stacked, the model forgets the initial input features and updates the features based only on the structure of the graph data. It is natural to think that the problem of the model forgetting the initial features can be alleviated by reminding the network of its previous features. Many methods have been proposed based on such intuitive analysis.
In the simplest approach, \citet{2016Semi} propose to add residual connections to graph convolutional networks $$ H^{(l)}=\sigma_{W^{(l)}}\left(P_{\text{GCN}}H^{(l-1)}\right)+H^{(l-1)}. \eqno{(2)}$$ The node features of the $ l $th hidden layer are directly added to the node features of the previous layer $ H^{(l-1)} $ to remind the network not to forget the previous features. However, \citet{chiang2019cluster} argue that residual connections ignore the structure of the graph, and that the influence of the weights of different neighboring nodes should be better reflected. So this work gives more weight to the features from the previous layer in the message passing of each GCN layer by improving the graph convolution operator $$ H^{(l)}=\sigma_{W^{(l)}}\left(\left(P_{\text{GCN}}+I\right)H^{(l-1)}\right). \eqno{(3)}$$ ~\\ \noindent \textbf{Personalized propagation of neural predictions.} The well-known graph neural network PPNP \citep{klicpera2018predict} has also been experimentally proven to alleviate over-smoothing. The innovation of PPNP is to decouple node information embedding and node feature propagation in GNN models. It simplifies the model by not learning parameters during propagation, and it has become one of the GNN models that are widely studied and applied \citep{bojchevski2020scaling}. PPNP's node information embedding is implemented by an MLP $$ H^{(0)}=f_{W}(X). $$ Inspired by personalized PageRank (PPR), the node feature propagation process is $$ H^{(l)}=\sigma\left((1-\alpha)\tilde{D}^{-1}\tilde{A}H^{(l-1)}+\alpha H^{(0)}\right), \eqno{(4)}$$ where $ \alpha\in(0,1) $ is the teleport (or restart) probability. ~\\ \noindent \textbf{DropEdge} is a method proposed by \citet{rong2019dropedge} to alleviate the over-smoothing problem. Its simple idea and powerful generalization across GNN models have made this method widely used, and it has also become a method for dealing with large-scale graph data. The idea of DropEdge is to randomly drop some edges of the original graph $ \tilde{\mathcal{G}}=(\tilde{\mathcal{V}},\tilde{\mathcal{E}}) $ at each layer. The specific operation is to randomly set some elements of the adjacency matrix $ A $ that equal $ 1 $ to $ 0 $: $$ A^{(l)}_{\text{drop}}=A-A'^{(l)}, $$ where $ A'^{(l)} $ is the adjacency matrix induced by a random subset $ \tilde{\mathcal{E}}' $ of $ \tilde{\mathcal{E}} $. Although these three methods have been experimentally verified to alleviate GCN over-smoothing, their nature lacks explanation. The residual connections method is supported only by intuitive arguments, and PPNP was found to alleviate over-smoothing only experimentally. DropEdge, as a widely used stochastic regularization method, needs more mathematical explanation. In addition, the effectiveness of these three methods needs to be proved theoretically. \subsection{Results in Markov Chains}\label{2.3} Since the graph has a finite node set and the forward propagation process of graph neural networks is time-discrete, we focus on results related to discrete-time Markov chains on finite state spaces. This section introduces some conclusions about Markov chains, about Markov chains in random environments \citep{cogburn1980markov,cogburn1990direct,nawrotzki1982finite}, a theory that has only been developed in the last three decades, and about mixing time \citep{levin2017markov}, an important tool for characterizing the transitions of Markov chains. The proofs of some conclusions in this section are given in Appendix A.
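As a numerical companion to these results, the Dobrushin contraction coefficient introduced in Lemma \ref{lem2} just below is straightforward to compute. The following NumPy sketch is our illustrative addition; the transition matrix is an arbitrary toy, and the script also checks the contraction inequality on random distributions.
\begin{verbatim}
import numpy as np

def dobrushin(P):
    # C(P) = (1/2) * max_{i,j} sum_k |p(i,k) - p(j,k)|
    N = P.shape[0]
    return 0.5 * max(np.abs(P[i] - P[j]).sum()
                     for i in range(N) for j in range(N))

rng = np.random.default_rng(0)
P = rng.random((4, 4))
P /= P.sum(axis=1, keepdims=True)       # toy transition matrix
mu = rng.random(4); mu /= mu.sum()      # two arbitrary distributions
nu = rng.random(4); nu /= nu.sum()

lhs = np.abs(mu @ P - nu @ P).sum()     # ||mu P - nu P||  (L1 norm)
rhs = dobrushin(P) * np.abs(mu - nu).sum()
assert lhs <= rhs + 1e-12               # Dobrushin's inequality
\end{verbatim}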
\begin{lemma}[Dobrushin's inequality]\label{lem2} Let $ \mu $ and $ \nu $ be probability distributions on a finite state space $ E $ and $ P $ be a transition matrix, then $$ \|\mu P-\nu P \|\;\leq \; C(P)\;\|\mu-\nu\|, $$ where $$ C(P):=\frac{1}{2}\;\sup_{i,j}\;\sum_{k\in E}\;|p(i,k)-p(j,k)| $$ is called the \textbf{Dobrushin contraction coefficient} of the transition matrix $ P $ and $ \|\;\cdot\;\| $ denotes the $ L^{1} $ norm. \end{lemma} \begin{theorem}\label{thm2.1.2} If one of the following two conditions is satisfied \begin{itemize} \item[(1)] $ P $ is irreducible and aperiodic. \item[(2)] $ C(P)<1. $ \end{itemize} Then there exist a stationary distribution $ \pi $ and constants $ \alpha\in(0,1) $ and $ C>0 $ such that $$ \max_{i\in E}\| P^{n}(i,\;\cdot\;)-\pi\|\leq C \;\alpha^{n}, $$ where $ P(i,\;\cdot\;) $ denotes the row vector consisting of the $ i $th row of the transition matrix $ P $. \end{theorem} Compared to the time-homogeneous case, it is much more difficult to describe how an arbitrary initial distribution evolves under a time-inhomogeneous chain. The limiting behavior of an arbitrary initial distribution evolving under a time-inhomogeneous chain was discussed by \citet{bowerman1977convergence,huang1976rate}. The following Dobrushin-Isaacson-Madsen theorem gives a sufficient condition for the existence of stationary distributions in the limiting sense for time-inhomogeneous Markov chains. \begin{theorem}[Dobrushin-Isaacson-Madsen theorem]\label{thm2.1.3} Let $ \vec{X}=\{X_{n},n\in T\} $ be a time-inhomogeneous Markov chain on a finite state space $ E $ with transition matrices $ P^{(n)} $. If the following $ (1) $, $ (2) $ and $ (3A) $ or $ (3B) $ are satisfied \begin{itemize} \item[(1)] There exists a stationary distribution $ \pi^{(n)} $ when $ P^{(n)} $ is treated as the transition matrix of a time-homogeneous chain; \item[(2)] $ \sum_{n}\|\pi^{(n)}-\pi^{(n+1)}\|<\infty; $ \item[(3A)] (Isaacson-Madsen condition) For any probability distributions $ \mu $, $ \nu $ on $ E $ and any positive integer $ k $ $$ \|(\mu-\nu)P^{(k)}\cdots P^{(n)}\|\rightarrow 0,\quad n\rightarrow \infty. $$ \item[(3B)] (Dobrushin condition) For any positive integer $ k $ $$ C(P^{(k)}\cdots P^{(n)})\rightarrow 0,\quad n\rightarrow \infty. $$ \end{itemize} Then there exists a probability measure $ \pi $ on $ E $ such that \begin{itemize} \item[(1)] $ \|\pi^{(n)}-\pi \|\rightarrow 0,\quad n\rightarrow \infty; $ \item[(2)] Let the initial distribution be $ \mu_{0} $ and the distribution of the chain $ \vec{X} $ at step $ n $ be $ \mu_{n}:=\mu_{n-1}P^{(n)} $; then for any initial distribution $ \mu_{0} $, we have $$ \|\mu_{n}-\pi \|\rightarrow 0,\quad n\rightarrow \infty, $$ \end{itemize} where $ \|\;\cdot\;\| $ denotes the $ L^{1} $ norm. \end{theorem} Although Markov chains are widely used in real-world engineering, some practical problems involve more complex environmental factors that affect the transitions of the original Markov chain, and a more advanced mathematical tool is needed to deal with them. If these complex environmental factors are described as a stochastic process that affects the transition function of the original chain, the original chain may lose the Markov property as a result, so \citet{cogburn1980markov,cogburn1990direct}, \citet{nawrotzki1982finite} and \citet{orey1991markov} developed a theory of \emph{Markov chains in random environments} (MCRE) to deal with these problems.
Let $ (\Omega,\mathcal{F},\mathbf{P}) $ be a probability space, $ (E,\mathcal{E}) $, $ (Y,\mathcal{Y}) $ be two state spaces, and $ T_{1}=\mathbb{N}=\{0,1,2,\ldots\} $, $ T_{2}=\mathbb{Z}^{+}=\{1,2,\ldots\} $ be two time index sets. In the following we first introduce the random Markov kernel that couples the original chain to the random environment. \begin{definition}[random Markov kernel] Let $ p(\;\cdot\;;\;\cdot\;,\;\cdot\;):Y\times E\times\mathcal{E}\mapsto[0,1] $. If \begin{itemize} \item[(1)] For every fixed $ \theta\in Y $, $ p(\theta;\;\cdot\;,\;\cdot\;)\in\text{MK}\;(E,\mathcal{E}) $, where $ \text{MK}\; (E,\mathcal{E}) $ is the set of all Markov kernels on $ (E,\mathcal{E}) $. \item[(2)] For every fixed $ A\in\mathcal{E} $, $ p(\;\cdot\;;\;\cdot\;,A) $ is measurable with respect to $ \mathcal{Y}\times\mathcal{E} $. \end{itemize} Then $ p $ is said to be a \textbf{random Markov kernel}. The set of all random Markov kernels on $ (E,\mathcal{E}) $ is denoted by $ \text{RMK}\;(E,\mathcal{E}) $. In particular, if $ E $ is a finite set, then any random Markov kernel $ p(\theta;i,A) $ is given by a \textbf{random transition matrix} $$ P(\theta):=(\; p (\theta;i,j),i,j\in E \;), $$ where $ p (\theta;i,j):=p (\theta;i,\{j\}) $, and $$ p (\theta;i,A)=\sum_{j\in A}\; p (\theta;i,j). $$ \end{definition} In the following we give the definition of a Markov chain in a positive time-dependent random environment with finite state space. \begin{definition}[Markov chains in random environments] Let $ \vec{X}=\{X_{n},n\in T_{1}\} $ and $ \vec{\xi}=\{\xi_{n},n\in T_{2}\} $ be two random sequences defined on the probability space $ (\Omega,\mathcal{F},\mathbf{P}) $ taking values in the finite sets $ E $ and $ Y $, respectively, and let $ p(\;\cdot\;;\;\cdot\;,\;\cdot\;) $ be a random Markov kernel. If for any $ n\ge 2 $ and any $ i_{0},\ldots,i_{n}\in E $, we have $$ \mathbf{P}(X_{n}=i_{n}\;|\; X_{0}=i_{0},\cdots,X_{n-1}=i_{n-1},\vec{\xi}\;)=p(\xi_{n-1};i_{n-1},i_{n}). $$ Then we say that $ (\vec{X},\vec{\xi}\;) $ is a Markov chain in a positive time-dependent random environment, which is abbreviated as a \textbf{Markov chain in a random environment} in this paper, and is denoted as MCRE. \end{definition} If $ \vec{\xi}=\{\xi_{n},n\in T_{2}\} $ is deterministic, the MCRE degenerates to a classical Markov chain. A MCRE is a Markov chain whose transition probabilities are influenced by a random environment. \begin{remark} In general, under the definition of \citet{cogburn1980markov,cogburn1990direct}, Markov chains in positive time-dependent random environments and Markov chains in random environments are two different concepts: a Markov chain in a positive time-dependent random environment is not necessarily a MCRE, and a MCRE is not necessarily a Markov chain in a positive time-dependent random environment. Where no confusion arises, we use the term MCRE for Markov chains in positive time-dependent random environments for convenience in this paper. \end{remark} We are also interested in the rate at which the initial distribution over the state space converges to the stationary distribution. In Markov chain theory, the mixing time denotes the time required for a probability distribution to converge to the stationary distribution. \begin{definition}[Mixing Time] Let $ \vec{X}=\{X_{n},n\in T\} $ be a time-homogeneous Markov chain on a finite state space $ E $ with transition matrix $ P $ and stationary distribution $ \pi $.
The \textbf{mixing time} is defined by $$ t_{mix}(\epsilon):=\min\{t:d(t)\leq\epsilon\}, $$ where $$ d(t):=\max_{i\in E}\|P^{t}(i,\;\cdot\;)-\pi\|_{TV}. $$ \end{definition} The following are properties of the \emph{total variation distance}. \begin{proposition}\label{prop2} Let $ \mu $ and $ \nu $ be two probability distributions on a finite set $ E $. Then $$ \|\mu-\nu\|_{TV}=\frac{1}{2}\|\mu-\nu\|=\frac{1}{2}\sum_{i\in E}|\mu(i)-\nu(i)|. $$ \end{proposition} \begin{proposition}\label{prop3} Let $ P $ be the transition matrix of a Markov chain with state space $ E $ and let $ \mu $ and $ \nu $ be any two distributions on $ E $. Then $$ \|\mu P-\nu P\|_{TV}\leq \|\mu-\nu\|_{TV}. $$ This in particular shows that $ \|\mu P^{t+1}-\pi\|_{TV}\leq \|\mu P^{t}-\pi\|_{TV} $, that is, advancing the chain can only move it closer to stationarity. \end{proposition} \section{Markov chain modeling for GNNs}\label{3} This section introduces the Markov chain model for GNNs. In Section \ref{3.1}, we describe the Message Passing Neural Network (MPNN) with a discrete-time finite Markov chain on the graph and divide MPNNs into two classes. More specifically, in Section \ref{3.2}, we describe GCN with the simple random walk on the graph. In Section \ref{3.3}, we describe the graph attention model with a time-inhomogeneous chain on the graph. In Section \ref{3.5}, the DropEdge+GCN model is described by a random walk on the graph in a random environment. \subsection{Message passing framework}\label{3.1} In this subsection we model the message forward propagation process of MPNN as a Markov chain on the graph and divide it into two categories of models. Recalling the message passing framework equation (1), since the node features of the $ l $th layer are obtained from the node features of the $ (l-1) $th layer only, independent of the earlier layers, the message passing process can be described as a Markov chain on the graph. We take the node set $ \mathcal{V} $ as the state space and construct the family of transition matrices $$ \left\{P^{(1)},P^{(2)},\ldots,P^{(l)},\ldots\right\} $$ according to $ \text{MSG}^{(l)} $ and $ \text{UP}^{(l)} $. Consider the features on the nodes $ H^{(l)} $ as the distribution on the node set $ \mathcal{V} $. Then the message passing process at the $ l $th layer of the MPNN is a one-step transition of the distribution $ H^{(l-1)} $ according to the one-step transition matrix $ P^{(l)} $. In this way, we model the forward process of the message passing model as a Markov chain $ \vec{V} $ with state space $ \mathcal{V} $ and initial distribution $ H^{(0)} $, transferring according to the family of transition matrices $ \left\{P^{(1)},P^{(2)},\ldots,P^{(l)},\ldots\right\} $. Then we can use Markov chain theory to study the message passing model. If the message passing operator is consistent at each layer, i.e., for all $ l\ge 1 $, $ \text{MSG}^{(l)} $ and $ \text{UP}^{(l)} $ are the same, then the transition matrices $ P^{(l)},\ \forall l\ge 1 $ are the same and we can model it with a time-homogeneous Markov chain. Otherwise, the message passing operator is inconsistent, and we model it with a time-inhomogeneous Markov chain. We classify GNNs into two categories, \emph{operator-consistent GNNs} and \emph{operator-inconsistent GNNs}, based on whether the message passing operators of each layer of the model are consistent. Specifically, we use GCN and GAT as representatives of operator-consistent GNNs and operator-inconsistent GNNs, respectively, which are discussed in Sections \ref{3.2} and \ref{3.3}.
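To make this modeling concrete, the following NumPy sketch (a toy illustration we add; the graph and the initial distribution are assumptions) treats one feature column as a distribution over the node set and pushes it through repeated applications of a fixed transition matrix, i.e., the operator-consistent case.
\begin{verbatim}
import numpy as np

# toy connected non-bipartite graph: a triangle with a pendant node
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
P = A / deg[:, None]                # operator-consistent case: fixed P = D^{-1} A

h = np.array([1.0, 0.0, 0.0, 0.0])  # one feature column as a distribution on V
for l in range(50):
    h = h @ P                       # one message passing step = one transition
print(h)                            # approaches pi(u) = deg(u)/(2|E|) = [.25 .25 .375 .125]
\end{verbatim}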
\subsection{Graph convolution model}\label{3.2} In this section, we discuss the equivalence between GCN and a simple random walk on the graph. Consider a simple random walk $ \vec{V} $ on $ \tilde{\mathcal{G}} $ with transition matrix $$ \tilde{P}=\tilde{D}^{-1}\tilde{A}. $$ The graph convolution operator can be written as $$ P_{\text{GCN}}=\tilde{D}^{-\frac{1}{2}}\tilde{P}^{\text{T}}\tilde{D}^{\frac{1}{2}}. $$ Thus the message passing of the $ l $th layer of GCN is $$ H^{(l)}=P_{\text{GCN}}H^{(l-1)}=\left(P_{\text{GCN}}\right)^{l}H^{(0)}=\tilde{D}^{-\frac{1}{2}}\left(\tilde{P}^{\text{T}}\right)^{l}\tilde{D}^{\frac{1}{2}}H^{(0)}. \eqno{(5)}$$ Multiplying both sides on the left by $ \tilde{D}^{\frac{1}{2}} $ gives $$ \tilde{D}^{\frac{1}{2}}H^{(l)}=\left(\tilde{P}^{\text{T}}\right)^{l}\tilde{D}^{\frac{1}{2}}H^{(0)}. $$ Let $ X^{(l)}=\left(\tilde{D}^{\frac{1}{2}}H^{(l)}\right)^{\text{T}}\in\mathbb{R}^{F\times N} $; then transposing both sides yields $$ X^{(l)}=X^{(l-1)}\tilde{P}. \eqno{(6)} $$ This is exactly the simple random walk $ \vec{V}_{\text{rw}} $ with initial distribution $ X^{(0)} $ on the graph $ \tilde{\mathcal{G}} $. \begin{remark}\label{rmk2} Similar to \citet{li2018deeper}, in this paper's discussion of graph neural networks, we omit the nonlinear activation function $ \sigma $ between layers in graph convolutional networks. In fact, according to the spectral analysis of \citet{wu2019simplifying} and the experimental results, the GNN with the nonlinear activation function omitted hardly differs in performance from the GNN with the activation function. In this paper, we call the expression in the form of equation (5) the message passing of the model. \end{remark} Combining the above discussion, we model vanilla GCN, the most basic GNN model, as a simple random walk on $ \tilde{\mathcal{G}} $, which is the simplest Markov chain on the graph. The graph convolution models \citep{2016Semi,atwood2016diffusion,simonovsky2017dynamic,pham2017column} are constructed by designing graph convolution operators and then stacking the graph convolution layers layer by layer. We can follow the above discussion to describe the graph convolution operator as a one-step transition matrix and construct a time-homogeneous Markov chain on the graph $ \mathcal{G} $ to complete the Markov chain modeling of the graph convolution model. Accordingly, we can study the graph convolution model with the help of the time-homogeneous Markov chain. \subsection{Graph attention model}\label{3.3} In this section, we discuss the equivalence between GAT and a time-inhomogeneous Markov chain on the graph. Since $ P_{\text{att}}^{(l)}(u,v)\ge0 $ and $ \sum_{v=1}^{N}P_{\text{att}}^{(l)}(u,v)=1 $, $ P_{\text{att}}^{(l)} $ is a transition matrix. We consider a weighted graph $ \mathcal{G}^{(l)}_{att}=(\mathcal{V}^{(l)}_{att},\mathcal{E}^{(l)}_{att}) $ in which each edge $ (u,v)\in \mathcal{E}^{(l)}_{att} $ has weight $$ w^{(l)}(u,v):=\exp\left(\text{LeakyReLU}(\mathbf{a}^{\text{T}}[W^{(l)}h_{u}^{(l-1)}\|W^{(l)}h_{v}^{(l-1)}])\right). $$ The weighted degree of node $ u\in\mathcal{V}^{(l)}_{att} $ is $$ \deg(u):=\sum_{k\in\mathcal{N}(u)}\exp\left(\text{LeakyReLU}(\mathbf{a}^{\text{T}}[W^{(l)}h_{u}^{(l-1)}\|W^{(l)}h_{k}^{(l-1)}])\right). $$ Based on this definition, $ P_{\text{att}}^{(l)} $ is the transition matrix of a simple random walk on the weighted graph $ \mathcal{G}^{(l)}_{att} $. Having analyzed the GAT operator $ P_{\text{att}}^{(l)} $ of each layer, we return to GAT.
Unlike the GCN model, the message transition matrices $ P_{\text{att}}^{(l)} $ of the layers of GAT are inconsistent. It is natural to use a time-inhomogeneous Markov chain $ \vec{V}_{\text{att}} $ with state space $ \mathcal{V} $ and the family of transition matrices $$ \left\{P_{\text{att}}^{(1)},P_{\text{att}}^{(2)},\ldots,P_{\text{att}}^{(l)},\ldots\right\} $$ to model GAT. In Section \ref{4.4}, we will discuss the properties of GAT message passing using the relevant conclusions about time-inhomogeneous Markov chains. \subsection{DropEdge method}\label{3.5} Recalling the DropEdge method introduced in Section \ref{2.2}, in this subsection we model it as a random environment. Specifically, we connect DropEdge+GCN\footnote{DropEdge+GCN denotes the GCN model using the DropEdge method} and a random walk in a random environment on the graph. Consider the stochastic process $$ \vec{\xi}=(\xi_{1}, \xi_{2},\ldots,\xi_{l},\ldots):=(\Theta^{(1)},\Theta^{(2)},\ldots,\Theta^{(l)},\ldots) $$ with time parameter set $ T_{2}=\mathbb{Z}^{+} $, taking values in $ \vec{\Xi}:=\Xi_{1}\times\Xi_{2}\times\cdots\times\Xi_{l}\times\cdots $, where $ \Xi_{i}\subset\{0,1\}^{N\times N},\forall i\ge 1 $. The $ \Theta^{(l)},\ l\ge 1 $, are independent and identically distributed random adjacency matrices. If $ (u,v)\in\tilde{\mathcal{E}} $, its entry $ \theta^{(l)}(u,v) $ is a Bernoulli random variable satisfying $$ \mathbf{P}(\theta^{(l)}(u,v)=x)=\begin{cases} 1-\dfrac{1}{|\tilde{\mathcal{E}}|}&x=1\\ \dfrac{1}{|\tilde{\mathcal{E}}|}&x=0 \end{cases} $$ That is, each edge $ (u,v) $ is dropped with a uniform probability $ \frac{1}{|\tilde{\mathcal{E}}|} $. We use the random environment $ \vec{\xi}=(\Theta^{(1)},\Theta^{(2)},\ldots,\Theta^{(l)},\ldots) $ to model $ \left(A^{(1)}_{\text{drop}},A^{(2)}_{\text{drop}},\ldots,A^{(l)}_{\text{drop}},\ldots\right) $. We next model DropEdge+GCN. Since $ H^{(l)} $ is only related to $ H^{(l-1)} $ and $ A_{\text{drop}}^{(l)} $, and we have described GCN as a simple random walk on the graph, according to the definition of MCRE in Section \ref{2.3} we describe DropEdge+GCN as a simple random walk on the graph in the random environment. The message passing of DropEdge+GCN at the $ l $th layer is $$ H^{(l)} = \left(D_{\text{drop}}^{(l)}\right)^{-1}A^{(l)}_{\text{drop}}H^{(l-1)}, $$ where the degree matrix after DropEdge is $ D_{\text{drop}}^{(l)}:=diag(\zeta^{(l)}_{1},\ldots,\zeta^{(l)}_{N}) $, and the degree of node $ u $ $$ \zeta^{(l)}_{u}:=\sum_{v\in\mathcal{N}(u)}\theta^{(l)}(u,v) \eqno{(7)}$$ is a random variable. Since $ \theta^{(l)}(u,v) $ is a Bernoulli random variable, the random variable $ \zeta^{(l)}_{u} $ follows a binomial distribution with parameters $ |\mathcal{N}(u)|=\deg(u) $ and $ 1-\frac{1}{|\mathcal{E}|} $, i.e. $$ \zeta^{(l)}_{u}\sim B(\deg(u),1-\frac{1}{|\mathcal{E}|}). $$ Let $ \vec{V}=(V_{0},V_{1},\ldots,V_{l},\ldots) $ be the original chain, where the $ V_{i},i=0,1,\ldots $ are random variables taking values in $ \mathcal{V} $. Consider the random transition matrix $$ P(\Theta^{(l)}):=\tilde{D}_{\Theta^{(l)}}^{-1} \Theta^{(l)}, $$ where $ \tilde{D}_{\Theta^{(l)}}=D_{\text{drop}}^{(l)} $. Since $ \forall l\ge 2 $ and $ \forall v_{0},v_{1},\ldots,v_{l}\in \mathcal{V} $, $$ \mathbf{P}(V_{l}=v_{l}|V_{0}=v_{0},\ldots,V_{l-1}=v_{l-1},\vec{\xi})=p(\Theta^{(l)};v_{l-1},v_{l})=\frac{\theta^{(l)}(v_{l-1},v_{l})}{\zeta^{(l)}_{v_{l-1}}}, $$ $ (\vec{V},\vec{\xi}) $ is a MCRE.
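A small simulation sketch of this construction may be helpful (our illustrative addition; the drop probability, the toy graph, and the convention for isolated nodes are assumptions): each layer samples a random adjacency matrix $ \Theta^{(l)} $ from the environment, and the walk transitions according to $ P(\Theta^{(l)}) $.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy graph
p_drop = 0.1                                # assumed drop probability (1/|E| in the text)

def sample_theta(A):
    # sample one random adjacency matrix Theta^(l): drop each edge independently
    Theta = A.copy()
    N = A.shape[0]
    for u in range(N):
        for v in range(u + 1, N):
            if A[u, v] == 1 and rng.random() < p_drop:
                Theta[u, v] = Theta[v, u] = 0.0
    return Theta

h = np.array([1.0, 0.0, 0.0, 0.0])          # a feature column as a distribution
for l in range(20):
    Theta = sample_theta(A)                 # the random environment at layer l
    deg = Theta.sum(axis=1)
    P_theta = Theta / np.maximum(deg, 1.0)[:, None]
    P_theta[deg == 0] = np.eye(len(deg))[deg == 0]  # isolated node stays put (toy convention)
    h = h @ P_theta                         # one MCRE transition
print(h)
\end{verbatim}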
To summarize the discussion, we describe the DropEdge method as a stochastic process $ \vec{\xi} $, and specifically we model DropEdge+GCN as a random walk in a random environment on the graph, $ (\vec{V},\vec{\xi}\;) $, with random transition matrices $ \{P(\Theta^{(l)})\}_{l\ge 1} $. DropEdge applied to different GNN models means MCREs with the same random environment but different original chains. Although the specific details are not the same, the modeling method is the same. Next, in Section \ref{4.2}, we discuss in depth the effectiveness of the DropEdge method in alleviating the over-smoothing problem. \section{Markov analysis of the over-smoothing problem}\label{4} In this section, we attribute the over-smoothing problem to the convergence of an arbitrary initial distribution to a stationary distribution. In Section \ref{4.2}, we demonstrate the effectiveness of previous methods to alleviate the over-smoothing problem by analyzing the lazy walk on the graph. Furthermore, we point out that these methods still cannot avoid over-smoothing, nor can they avoid over-smoothing at the exponential rate. In Section \ref{4.3}, we show that the over-smoothing phenomenon is inherent to the operator-consistent GNN model, and that in the Markovian sense it is neither possible to avoid over-smoothing nor to avoid over-smoothing at the exponential rate. In Section \ref{4.4}, we prove that operator-inconsistent GNN models can avoid over-smoothing under certain conditions, and we give a sufficient condition. \subsection{Cause of over-smoothing}\label{4.1} In Section \ref{3.1} we showed that a Markov chain $ \vec{V} $ can be used to describe the message passing framework. Meanwhile, the feature vectors of the nodes can be described as distributions over the node set, and the forward propagation of node features is the transfer process of the distribution on the node set $ \mathcal{V} $. Thus the over-smoothing problem is the process of the feature distribution converging to the stationary distribution. In the following, we take vanilla GCN as an example to analyze concretely why the node representations over-smooth. For a simple random walk on the graph $ \tilde{\mathcal{G}} $, for all $ v\in\tilde{\mathcal{V}} $ $$ \sum_{u\in\tilde{\mathcal{V}}}\deg(u)\tilde{p}_{\text{rw}}(u,v)=\sum_{(u,v)\in\tilde{\mathcal{E}}}\frac{\deg(u)}{\deg(u)}=\deg(v), $$ where $ \tilde{p}_{\text{rw}}(u,v)=\frac{1}{\deg(u)} $ if $ (u,v)\in\tilde{\mathcal{E}} $. To get a probability, we simply normalize by $ \sum_{v\in\tilde{\mathcal{V}}}\deg(v)=2|\tilde{\mathcal{E}}| $. Then the probability measure $$ \pi(u)=\frac{\deg(u)}{2|\tilde{\mathcal{E}}|}\quad\forall u\in\tilde{\mathcal{V}} $$ is always a stationary distribution for the walk. Written in matrix form, $$ \pi=\pi \tilde{P}_{\text{rw}}. $$ Recall that in Section \ref{3.2} we used a simple random walk on the graph to describe vanilla GCN. Noting $ X^{(l)}=\left(\tilde{D}^{\frac{1}{2}}H^{(l)}\right)^{\text{T}} $, the message passing of the $ l $th GCN hidden layer is $$ X^{(l)}=X^{(l-1)}\tilde{P}_{\text{rw}}.
$$ Since simple random walks on connected non-bipartite graphs are irreducible and aperiodic Markov chains, $$ X^{(l)}(k,\;\cdot\;)\rightarrow\pi,\quad l\rightarrow\infty, $$ where $ X^{(l)}(k,\;\cdot\;),\ k=1,2,\ldots,F $ is the distribution over the node set $ \tilde{\mathcal{V}} $ consisting of the $ k $th component of each node's features and $ \pi $ is the unique stationary distribution of $ \tilde{P}_{\text{rw}} $; thus for all $ u\in \tilde{\mathcal{V}} $ $$ x^{(l)}(1,u)=x^{(l)}(2,u)=\cdots=x^{(l)}(F,u)=\pi(u), $$ where $ \pi(u) $ is the $ u $th component of $ \pi $. From the above discussion, the over-smoothing problem is due to the fact that the features on the nodes tend to the stationary distribution as the Markov chain moves, resulting in the convergence of each node's feature vector. In the next subsection we analyze in detail the methods that alleviate the over-smoothing problem. \subsection{Lazy walk analysis of methods to alleviate over-smoothing}\label{4.2} In this subsection, we uniformly model the methods that alleviate the over-smoothing problem, including the residual connections method \citep{2016Semi,chiang2019cluster}, personalized propagation of neural predictions (PPNP) \citep{klicpera2018predict}, and the DropEdge method \citep{rong2019dropedge,huang2020tackling}, as lazy walks on the graph. Finally, we prove the effectiveness of these methods. ~\\ \noindent \textbf{Residual Connections Method.} Essentially, consider the simplification of the model in Remark \ref{rmk2}: if we omit the nonlinear activation function in the forward propagation process of the graph neural network and focus only on the node message passing process, then the models of equations (2) and (3) are the same. The operator $ P_{\text{GCN}}+I $ is normalized as $$ P_{\text{res}}:=\frac{1}{2}P_{\text{GCN}}+\frac{1}{2}I. $$ Then the message passing of the residual connections method is $$ H^{(l)}=\left(\frac{1}{2}P_{\text{GCN}}+\frac{1}{2}I\right)H^{(l-1)}=P_{\text{res}}H^{(l-1)}. $$ Following the analysis of \citet{wang2019improving}, we describe $ P_{\text{res}} $ as the transition matrix of a lazy walk with parameter $ \gamma=\frac{1}{2} $. Thus the residual connections method can be regarded as a lazy walk on the graph. ~\\ \noindent \textbf{PPNP.} We consider the lazy walk with parameter $ \alpha $ on $ \tilde{\mathcal{G}} $ with transition matrix $$ P_{\text{lazy}}=(1-\alpha)\tilde{D}^{-1}\tilde{A}+\alpha I. \eqno{(8)}$$ This is formally similar to the PPNP message passing (equation (4)). Intuitively, equation (8) indicates that the original random walk has a $ 1-\alpha $ probability of continuing to walk and an $ \alpha $ probability of staying in place, which is exactly the idea of personalized PageRank. Combining the above discussion, we can use a lazy walk on $ \tilde{\mathcal{G}} $ with parameter $ \alpha $ to describe the node feature propagation in PPNP. As can be seen from the message passing expression (8), PPNP is a more general method than the residual connections method; in particular, if $ \alpha= \frac{1}{2} $, PPNP formally becomes the residual connections method. ~\\ \noindent \textbf{DropEdge.} Unlike the residual connections method and PPNP, the DropEdge method explicitly has nothing to do with the form of a lazy walk. However, we will prove that the message passing of the DropEdge+GCN model is a lazy walk on the graph. In Section \ref{3.5} we modeled DropEdge+GCN as a MCRE $ (\vec{V},\vec{\xi}\;) $ on the graph.
The following theorem shows that the original chain $ \vec{V} $ is a lazy walk on the graph. \begin{theorem}\label{thm4.2} Let $ (\vec{V},\vec{\xi}\;) $ be the MCRE that describes the DropEdge+GCN model in Section \ref{3.5}, where $$ \vec{\xi}=(\Theta^{(1)},\Theta^{(2)},\ldots,\Theta^{(l)},\ldots) $$ is an independent and identically distributed random environment. Then the original chain $ \vec{V} $ is a time-homogeneous Markov chain with transition matrix $$ P_{\text{drop}}:=(I-\Gamma)\tilde{D}^{-1}\tilde{A}+\Gamma, \eqno{(9)}$$ where $$ \Gamma:=diag\left(\frac{1}{|\mathcal{E}|^{\deg(1)}},\frac{1}{|\mathcal{E}|^{\deg(2)}},\ldots,\frac{1}{|\mathcal{E}|^{\deg(N)}}\right) $$ is a diagonal matrix. \end{theorem} {\bf Proof}. Let the distribution of $ \xi_{1} $ be $ \mu:=\mathbf{P}\circ(\xi_{1})^{-1} $; then the distribution of $ \vec{\xi} $ is $ \nu=\mu^{\mathbb{Z}^{+}} $. Given any path $ (V_{0}=v_{0},V_{1}=v_{1},\ldots,V_{l}=v_{l}),\ v_{i}\in\mathcal{V},\ i=0,1,\ldots,l $ of $ (\vec{V},\vec{\xi}\;) $, $$ \begin{aligned} \mathbf{P}(V_{0}=v_{0},\ldots,V_{l}=v_{l})&=\int_{\vec{\Xi}}\delta_{v,v_{0}}\; p(\Theta^{(1)};v_{0},v_{1})\; p(\Theta^{(2)};v_{1},v_{2})\cdots p(\Theta^{(l)};v_{l-1},v_{l})\;\nu(d\vec{\Xi})\\ &=\delta_{v,v_{0}}\int_{\Xi}p(\Theta^{(1)};v_{0},v_{1})\;\mu(d\Theta^{(1)})\cdots \int_{\Xi}p(\Theta^{(l)};v_{l-1},v_{l})\;\mu(d\Theta^{(l)})\\ &=\delta_{v,v_{0}}\;\mathbf{E}\left[\frac{\theta^{(1)}(v_{0},v_{1})}{\zeta^{(1)}_{v_{0}}}\right]\mathbf{E}\left[\frac{\theta^{(2)}(v_{1},v_{2})}{\zeta^{(2)}_{v_{1}}}\right]\cdots\mathbf{E}\left[\frac{\theta^{(l)}(v_{l-1},v_{l})}{\zeta^{(l)}_{v_{l-1}}}\right],\\ \end{aligned} $$ where the notation is as defined in Section \ref{3.5}, and the random variables $ \zeta^{(l)}_{v_{l-1}} $ are defined in equation (7). Since $ \Theta^{(1)},\Theta^{(2)},\ldots,\Theta^{(l)},\ldots $ are independent and identically distributed, $ \vec{V} $ is a time-homogeneous Markov chain with the transition matrix $$ \mathbf{E}[P(\Theta^{(1)})]=\cdots=\mathbf{E}[P(\Theta^{(l)})]=\cdots. $$ The following computes the transition matrix $ \mathbf{E}[P(\Theta^{(l)})] $, i.e., the expectation of $ \frac{\theta^{(l)}(u,v)}{\zeta^{(l)}_{u}} $, for all $ (u,v)\in\mathcal{E} $. Since $$ \zeta^{(l)}_{u}\sim B(\deg(u),1-\frac{1}{|\mathcal{E}|}), $$ for $ k=1,2,\ldots,\deg(u) $, $$\begin{aligned} \mathbf{P}\left(\frac{\theta^{(l)}(u,v)}{\zeta^{(l)}_{u}}=\frac{1}{k}\right)&=\mathbf{P}(\theta^{(l)}(u,v)=1,\zeta^{(l)}_{u}=k)\\ &=\mathbf{P}(\theta^{(l)}(u,v)=1)\mathbf{P}(\zeta^{(l)}_{u}=k|\theta^{(l)}(u,v)=1)\\ &=\mathbf{P}(\theta^{(l)}(u,v)=1)\mathbf{P}(\zeta^{(l)}_{u}-\theta^{(l)}(u,v)=k-1|\theta^{(l)}(u,v)=1)\\ &=(1-\frac{1}{|\mathcal{E}|})C_{\deg(u)-1}^{k-1}\left(1-\frac{1}{|\mathcal{E}|}\right)^{k-1}\left(\frac{1}{|\mathcal{E}|}\right)^{(\deg(u)-1)-(k-1)}\\ &=C_{\deg(u)-1}^{k-1}\left(1-\frac{1}{|\mathcal{E}|}\right)^{k}\left(\frac{1}{|\mathcal{E}|}\right)^{\deg(u)-k};\\ \mathbf{P}\left(\frac{\theta^{(l)}(u,v)}{\zeta^{(l)}_{u}}=0\right)&=1-\sum_{k=1}^{\deg(u)}\mathbf{P}\left(\frac{\theta^{(l)}(u,v)}{\zeta^{(l)}_{u}}=\frac{1}{k}\right), \end{aligned} $$ where $ C_{n}^{m}:=\frac{n!}{m!(n-m)!} $ denotes the binomial coefficient, satisfying the formula $$ \frac{1}{n}\cdot C_{n}^{m}=\frac{1}{m}\cdot C_{n-1}^{m-1}.
$$ The probability distribution of $\frac{\theta^{(l)}(u,v)}{\zeta^{(l)}_{u}} $ is thus obtained, and its expectation is calculated below: $$ \begin{aligned} \mathbf{E}\left[\dfrac{\theta^{(l)}(u,v)}{\zeta^{(l)}_{u}}\right]&=\sum_{k=1}^{\deg(u)}C_{\deg(u)-1}^{k-1}\left(1-\frac{1}{|\mathcal{E}|}\right)^{k}\left(\frac{1}{|\mathcal{E}|}\right)^{\deg(u)-k}\cdot\frac{1}{k}\\ &=\sum_{k=1}^{\deg(u)}\left(1-\frac{1}{|\mathcal{E}|}\right)^{k}\left(\frac{1}{|\mathcal{E}|}\right)^{\deg(u)-k}\cdot\frac{1}{k}\cdot C_{\deg(u)-1}^{k-1}\\ &=\sum_{k=1}^{\deg(u)}\left(1-\frac{1}{|\mathcal{E}|}\right)^{k}\left(\frac{1}{|\mathcal{E}|}\right)^{\deg(u)-k}\cdot\frac{1}{\deg(u)}\cdot C_{\deg(u)}^{k}\\ &=\frac{1}{\deg(u)}\cdot \left(\sum_{k=1}^{\deg(u)}C_{\deg(u)}^{k}\left(1-\frac{1}{|\mathcal{E}|}\right)^{k}\left(\frac{1}{|\mathcal{E}|}\right)^{\deg(u)-k}\right)\\ &=\frac{1}{\deg(u)}\cdot\left(1-C_{\deg(u)}^{0}\left(\frac{1}{|\mathcal{E}|}\right)^{\deg(u)}\right)\\ &=\left(1-\frac{1}{|\mathcal{E}|^{\deg(u)}}\right)\tilde{p}_{\text{rw}}(u,v), \end{aligned} $$ where $ \tilde{p}_{\text{rw}}(u,v)=\frac{1}{\deg(u)} $ is the element in the $ u $th row and $ v $th column of $ \tilde{P}_{\text{rw}}:=\tilde{D}^{-1}\tilde{A} $. So the transition matrix of the original chain $ \vec{V} $ is $$ P_{\text{drop}}:=\mathbf{E}[P(\Theta^{(l)})]=(I-\Gamma)\tilde{D}^{-1}\tilde{A}+\Gamma, $$ where $$ \Gamma:=diag\left(\frac{1}{|\mathcal{E}|^{\deg(1)}},\frac{1}{|\mathcal{E}|^{\deg(2)}},\ldots,\frac{1}{|\mathcal{E}|^{\deg(N)}}\right) $$ is a diagonal matrix. The element $ \frac{1}{|\mathcal{E}|^{\deg(u)}} $ denotes the probability that all edges connected to the node $ u $ are dropped. \hfill\BlackBox\\ The conclusion of Theorem \ref{thm4.2} tells us that the DropEdge+GCN model is a lazy walk on the graph. Equation (9) intuitively means that the message of node $ u $ passes with probability $ 1-\frac{1}{|\mathcal{E}|^{\deg(u)}} $ and stays at node $ u $ with probability $ \frac{1}{|\mathcal{E}|^{\deg(u)}} $. In fact, equation (9) is a generalized form of lazy walk on the graph. In particular, if $ \mathcal{G} $ is a regular graph, i.e., a graph in which every node has the same degree, then the matrix $ \Gamma:=diag(\frac{1}{|\mathcal{E}|^{\deg(1)}},\frac{1}{|\mathcal{E}|^{\deg(2)}},\ldots,\frac{1}{|\mathcal{E}|^{\deg(N)}}) $ degenerates to the scalar $ \frac{1}{|\mathcal{E}|^{\deg(u)}} $, and the message passing of the DropEdge+GCN model is a lazy walk with parameter $ \frac{1}{|\mathcal{E}|^{\deg(u)}} $ on the graph. We have now modeled the residual connections method, PPNP and the DropEdge method uniformly as lazy walks on graphs; these three methods generalize the lazy walk step by step from special to general. Next we use mixing time theory as a mathematical tool to analyze the properties of the lazy walk on the graph, which illustrates the effectiveness of these methods in alleviating the over-smoothing problem. The following theorem shows that, starting from any initial distribution, the lazy walk on the graph moves to the stationary distribution more slowly than the simple random walk. \begin{theorem}\label{thm4.2.1} Let $$ P_{\text{rw}}:=D^{-1}A $$ be the transition matrix of a simple random walk $ \vec{V}_{\text{rw}} $ on the graph $ \mathcal{G}=(\mathcal{V},\mathcal{E}) $ and $$ P_{\text{lazy}}:=(1-\gamma)D^{-1}A+\gamma I $$ be the transition matrix of the lazy walk $ \vec{V}_{\text{lazy}} $ on the graph $ \mathcal{G}=(\mathcal{V},\mathcal{E}) $.
If $ \vec{V}_{\text{rw}} $ and $ \vec{V}_{\text{lazy}} $ move from any distribution on $ \mathcal{V} $, then the following conclusions hold. \begin{itemize} \item[(1)]$ \vec{V}_{\text{rw}} $ and $ \vec{V}_{\text{lazy}} $ have the same stationary distribution $ \pi $, where $$ \pi(u)=\frac{\deg(u)}{2|\mathcal{E}|},\quad\forall u\in\mathcal{V}. $$ \item[(2)]For all $ l>0 $, $$ \max_{u\in\mathcal{V}}\|P^{l}_{\text{lazy}}(u,\;\cdot\;)-\pi\|_{TV}\ge \max_{u\in\mathcal{V}}\|P_{\text{rw}}^{l}(u,\;\cdot\;)-\pi\|_{TV}, $$ where $ \|\;\cdot\;\|_{TV} $ is the total variation distance defined in Section \ref{2} and $ P(u,\;\cdot\;) $ is the row vector corresponding to the $ u $th row of the matrix $ P $. \end{itemize} \end{theorem} {\bf Proof}. \begin{itemize} \item[(1)]For $ \vec{V}_{\text{rw}} $ and all $ v\in\mathcal{V} $, since $$ \begin{aligned} \sum_{u\in\mathcal{V}}\pi(u)\; p_{\text{rw}}(u,v)&=\sum_{(u,v)\in\mathcal{E}}\frac{\deg(u)}{2|\mathcal{E}|}\frac{1}{\deg(u)}\\ &=\frac{\deg(v)}{2|\mathcal{E}|}\\ &=\pi(v), \end{aligned} $$ $ \pi $ is a stationary distribution of $ \vec{V}_{\text{rw}} $. For $ \vec{V}_{\text{lazy}} $ and all $ v\in\mathcal{V} $, since $$ \begin{aligned} \sum_{u\in\mathcal{V}}\pi(u)\; p_{\text{lazy}}(u,v)&=\sum_{(u,v)\in\mathcal{E}}\frac{\deg(u)}{2|\mathcal{E}|}\frac{1-\gamma}{\deg(u)}+\frac{\deg(v)}{2|\mathcal{E}|}\gamma\\ &=\frac{\deg(v)}{2|\mathcal{E}|}(1-\gamma)+\frac{\deg(v)}{2|\mathcal{E}|}\gamma\\ &=\frac{\deg(v)}{2|\mathcal{E}|}\\ &=\pi(v), \end{aligned} $$ $ \pi $ is also a stationary distribution of $ \vec{V}_{\text{lazy}} $. \item[(2)]Notice the binomial expansion of $ P^{l}_{\text{lazy}} $, $$ P^{l}_{\text{lazy}}=\sum_{i=0}^{l}C_{l}^{i}\gamma^{l-i}(1-\gamma)^{i}P_{\text{rw}}^{i}. $$ That is, in each step the transition matrix $ P_{\text{lazy}} $ acts as the identity matrix $ I $ with independent probability $ \gamma $. And for all $ i<l $, by Proposition \ref{prop3} we have $$ \max_{u\in\mathcal{V}}\|P_{\text{rw}}^{i}(u,\;\cdot\;)-\pi\|_{TV}\ge\max_{u\in\mathcal{V}}\|P_{\text{rw}}^{l}(u,\;\cdot\;)-\pi\|_{TV}. $$ Then $$ \begin{aligned} \max_{u\in\mathcal{V}}\|P^{l}_{\text{lazy}}(u,\;\cdot\;)-\pi\|_{TV}&=\max_{u\in\mathcal{V}}\|\sum_{i=0}^{l}C_{l}^{i}\gamma^{l-i}(1-\gamma)^{i}P_{\text{rw}}^{i}(u,\;\cdot\;)-\pi\|_{TV}\\ &=\max_{u\in\mathcal{V}}\|\sum_{i=0}^{l}C_{l}^{i}\gamma^{l-i}(1-\gamma)^{i}P_{\text{rw}}^{i}(u,\;\cdot\;)-\sum_{i=0}^{l}C_{l}^{i}\gamma^{l-i}(1-\gamma)^{i}\pi\|_{TV}\\ &=\max_{u\in\mathcal{V}}\|\sum_{i=0}^{l}C_{l}^{i}\gamma^{l-i}(1-\gamma)^{i}[P_{\text{rw}}^{i}(u,\;\cdot\;)-\pi]\|_{TV}\\ &\ge\sum_{i=0}^{l}C_{l}^{i}\gamma^{l-i}(1-\gamma)^{i}\max_{u\in\mathcal{V}}\|P_{\text{rw}}^{l}(u,\;\cdot\;)-\pi\|_{TV}\\ &=\max_{u\in\mathcal{V}}\|P_{\text{rw}}^{l}(u,\;\cdot\;)-\pi\|_{TV}. \end{aligned} $$ \end{itemize} \hfill\BlackBox\\ Recall the definition of mixing time in Section \ref{2}: $$ t_{mix}(\epsilon):=\min\{t:d(t)\leq\epsilon\},\ d(t):=\max_{u\in\mathcal{V}}\|P^{t}(u,\;\cdot\;)-\pi\|_{TV}. $$ Theorem \ref{thm4.2.1} shows that the mixing time for a lazy walk on the graph to reach the stationary distribution is longer than that of a simple random walk. This result is very intuitive: the time required for a lazy walk to converge to the stationary distribution is naturally larger than that of a simple random walk. Returning to the over-smoothing problem in GCN: in Section \ref{4.1}, we interpreted over-smoothing as the convergence of the probability distribution to the stationary distribution with the move of the Markov chain.
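A quick numerical illustration of Theorem \ref{thm4.2.1} may be useful (a toy check we add, with an arbitrary non-bipartite graph and $ \gamma=\frac{1}{2} $): at every step, the lazy walk's worst-case total variation distance to $ \pi $ dominates that of the simple random walk.
\begin{verbatim}
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # toy non-bipartite graph
deg = A.sum(axis=1)
P_rw = A / deg[:, None]
gamma = 0.5
P_lazy = (1 - gamma) * P_rw + gamma * np.eye(4)
pi = deg / deg.sum()                           # pi(u) = deg(u)/(2|E|)

def d(P, t):
    # d(t) = max_u || P^t(u,.) - pi ||_TV, with ||.||_TV = (1/2)||.||_1
    Pt = np.linalg.matrix_power(P, t)
    return 0.5 * np.abs(Pt - pi).sum(axis=1).max()

for t in range(1, 11):
    assert d(P_lazy, t) >= d(P_rw, t) - 1e-12  # Theorem 4.2.1 (2)
    print(t, d(P_rw, t), d(P_lazy, t))
\end{verbatim}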
Theorem \ref{thm4.2.1} shows that the residual connections method, PPNP and the DropEdge method, which can all be modeled as lazy walks on the graph, can indeed slow down the convergence of the GCN model to over-smoothing. We have thus shown that methods based on the lazy walk can slow down the rate at which the GCN model over-smooths. Another question is whether these methods can avoid over-smoothing altogether. The answer is no. The following Theorem \ref{thm4.2.2} shows that for a lazy walk on the graph, the probability distribution on the node set still converges to the stationary distribution, and the rate of convergence is exponential, as for a simple random walk on the graph. \begin{theorem}\label{thm4.2.2} Let $ L:=I-D^{-\frac{1}{2}}AD^{-\frac{1}{2}} $ be the normalized Laplacian matrix of the graph $ \mathcal{G} $, $ \lambda_{1}\leq \lambda_{2}\leq\cdots\leq\lambda_{N} $ be the eigenvalues of $ L $, and $ \phi_{1},\phi_{2},\cdots,\phi_{N} $ be the eigenvectors corresponding to these eigenvalues. Then for any initial distribution $ \mu:\mathcal{V}\rightarrow\mathbb{R} $ and $ l>0 $, $$ \mu P_{\text{rw}}^{l}=\pi+\sum_{k=2}^{N}(1-\lambda_{k})^{l}a_{k}\phi_{k}D^{\frac{1}{2}}, $$ $$ \mu P_{\text{lazy}}^{l}=\pi+\sum_{k=2}^{N}(1-(1-\gamma)\lambda_{k})^{l}a_{k}\phi_{k}D^{\frac{1}{2}}, $$ where $ a_{k},k=1,2,\ldots,N $ are the coordinates of the vector $ \mu D^{-\frac{1}{2}} $ in the basis $ (\phi_{1},\phi_{2},\cdots,\phi_{N}) $, i.e. $ \mu D^{-\frac{1}{2}}=\sum_{k=1}^{N}a_{k}\phi_{k} $. Further, by the Frobenius-Perron theorem and the non-bipartiteness of $ \mathcal{G} $, we have $ 0=\lambda_{1}\leq \lambda_{2}\leq\cdots\leq\lambda_{N}< 2 $, so $ \mu P_{\text{rw}}^{l} $ and $ \mu P^{l}_{\text{lazy}} $ both converge to $ \pi $ exponentially in $ l $. \end{theorem} {\bf Proof}. $$ \begin{aligned} P_{\text{rw}}^{l}&=(D^{-1}A)^{l}\\ &=D^{-\frac{1}{2}}(D^{-\frac{1}{2}}AD^{-\frac{1}{2}})^{l}D^{\frac{1}{2}}\\ &=D^{-\frac{1}{2}}(I-L)^{l}D^{\frac{1}{2}}. \end{aligned} $$ Then $$ \begin{aligned} \mu P_{\text{rw}}^{l}&=\mu D^{-\frac{1}{2}}(I-L)^{l}D^{\frac{1}{2}}\\ &=\sum_{k=1}^{N}(1-\lambda_{k})^{l}a_{k}\phi_{k}D^{\frac{1}{2}}\\ &=\pi+\sum_{k=2}^{N}(1-\lambda_{k})^{l}a_{k}\phi_{k}D^{\frac{1}{2}}, \end{aligned} $$ where $ \pi=a_{1}\phi_{1}D^{\frac{1}{2}} $ follows from \citet{chung1997spectral}. On the other hand, the normalized Laplacian matrix corresponding to $ P_{\text{lazy}} $ is $$ \begin{aligned} L_{\text{lazy}}&=I-D_{\text{lazy}}^{-\frac{1}{2}}A_{\text{lazy}}D_{\text{lazy}}^{-\frac{1}{2}}\\ &=I-D^{-\frac{1}{2}}((1-\gamma)A+\gamma D)D^{-\frac{1}{2}}\\ &=I-(1-\gamma)D^{-\frac{1}{2}}AD^{-\frac{1}{2}}-\gamma I\\ &=(1-\gamma)L. \end{aligned} $$ Therefore, the eigenvalues of $ L_{\text{lazy}} $ are $ 0=(1-\gamma)\lambda_{1}\leq (1-\gamma)\lambda_{2}\leq\cdots\leq(1-\gamma)\lambda_{N}< 2 $, and the eigenvectors remain $ \phi_{1},\phi_{2},\cdots,\phi_{N} $. In the same way as for $ \mu P_{\text{rw}}^{l} $, we have $$ \mu P_{\text{lazy}}^{l}=\pi+\sum_{k=2}^{N}(1-(1-\gamma)\lambda_{k})^{l}a_{k}\phi_{k}D^{\frac{1}{2}}. $$ \hfill\BlackBox\\ In this subsection, we used Theorem \ref{thm4.2.1} to show the effectiveness of the residual connections method, PPNP and the DropEdge method in alleviating the over-smoothing problem. Theorem \ref{thm4.2.2} further shows that these methods can neither make the GCN model avoid over-smoothing nor make it avoid over-smoothing at the exponential rate. Two questions then arise: can the operator-consistent GNN model avoid over-smoothing? And if not, can it avoid over-smoothing at the exponential rate?
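Before addressing these questions, Theorem \ref{thm4.2.2} can be checked numerically. The following sketch (our toy addition, on an assumed small non-bipartite graph) compares the $ L^{1} $ distance of $ \mu P_{\text{rw}}^{l} $ from $ \pi $ with the predicted decay factor $ \max_{k\ge2}|1-\lambda_{k}|^{l} $.
\begin{verbatim}
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)        # toy non-bipartite graph
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt      # normalized Laplacian
lam = np.sort(np.linalg.eigvalsh(L))
rho = max(abs(1 - lam[k]) for k in range(1, 4))  # decay factor max_{k>=2} |1 - lambda_k|

P_rw = A / deg[:, None]
pi = deg / deg.sum()
mu = np.array([1.0, 0.0, 0.0, 0.0])
for l in range(1, 16):
    mu = mu @ P_rw
    print(l, np.abs(mu - pi).sum(), rho ** l)    # observed L1 error vs. predicted rate
\end{verbatim}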
We discuss these two questions in detail in the next subsection. \subsection{Conclusion of the operator-consistent GNN model}\label{4.3} In Section \ref{4.2}, we studied several major methods to alleviate over-smoothing, modeled them uniformly as lazy walks on the graph, and rigorously demonstrated the effectiveness of these methods in alleviating the over-smoothing problem using the mixing time theory of Markov chains. However, we also pointed out that these methods cannot prevent the GCN model from converging to over-smoothing; they only reduce the rate at which the model converges to over-smoothing, and this rate remains exponential. In this section, we discuss the following two core questions. \begin{itemize} \item Can operator-consistent GNN models avoid over-smoothing? \item If over-smoothing cannot be avoided, can operator-consistent GNN models avoid over-smoothing at the exponential rate? \end{itemize} The answer to both questions is no. The following Theorem \ref{thm4.3.1} answers the first question by showing that a stationary distribution must exist for the transition matrix of a time-homogeneous random walk on the graph. See Appendix A for the proof. \begin{theorem}\label{thm4.3.1} Let $ P $ be a transition matrix of a random walk on the connected, non-bipartite graph $ \mathcal{G} $. Then there exists a unique probability distribution $ \pi $ over $ \mathcal{V} $ that satisfies $$ \pi=\pi P. $$ \end{theorem} \begin{corollary}\label{cor4.3} Under the conditions of Theorem \ref{thm4.3.1}, the transition matrix $ P=(p(u,v),u,v\in\mathcal{V}) $ of the random walk on the graph can be expressed formally as $$ p(u,v)=\frac{r(u,v)}{\sum_{z\in\mathcal{N}(u)}r(u,z)}, $$ where $ r(u,v)>0 $ are symmetric edge weights, i.e., $ r(u,v)=r(v,u) $. Then its unique stationary distribution is $$ \pi(u)=\frac{\deg(u)}{\sum_{k\in\mathcal{V}}\deg(k)}, $$ where $ \deg(u)=\sum_{z\in\mathcal{N}(u)}r(u,z) $. \end{corollary} {\bf Proof}. For all $ v\in\mathcal{V} $, since, using the symmetry $ r(u,v)=r(v,u) $, $$ \begin{aligned} \sum_{u\in\mathcal{V}}\pi(u)p(u,v)&=\sum_{(u,v)\in\mathcal{E}}\frac{\deg(u)}{\sum_{k\in\mathcal{V}}\deg(k)}\frac{r(u,v)}{\sum_{z\in\mathcal{N}(u)}r(u,z)}\\ &=\sum_{(u,v)\in\mathcal{E}}\frac{r(v,u)}{\sum_{k\in\mathcal{V}}\deg(k)}\\ &=\frac{\deg(v)}{\sum_{k\in\mathcal{V}}\deg(k)}\\ &=\pi(v), \end{aligned} $$ $ \pi $ is a stationary distribution of $ P $. \hfill\BlackBox\\ Theorem \ref{thm4.3.1} shows that a stationary distribution must exist for the message passing operator on the graph. Then the distribution on $ \mathcal{V} $, starting from any initial distribution, converges to the stationary distribution $ \pi $. According to the discussion of the stationary distribution and the over-smoothing problem in Section \ref{4.1}, for an operator-consistent GNN, regardless of the initial input, the features of the nodes over-smooth as the GNN propagates forward. Combining the above discussion, the operator-consistent GNN cannot avoid over-smoothing. The following Theorem \ref{thm4.3.2} answers the second question by showing that the time-homogeneous random walk on the graph converges at an exponential rate to its stationary distribution $ \pi $. See Appendix A for the proof. \begin{theorem}\label{thm4.3.2} Under the conditions of Theorem \ref{thm4.3.1}, there exist constants $ \alpha\in(0,1) $ and $ C>0 $ such that $$ \max_{u\in\mathcal{V}}\|P^{l}(u,\;\cdot\;)-\pi\|_{TV}\leq C\alpha^{l}, $$ where $ P(u,\;\cdot\;) $ denotes the row vector consisting of the $ u $th row of the transition matrix $ P $.
\end{theorem} Recall the definition of mixing time in Section \ref{2}: $$ t_{mix}(\epsilon):=\min\{t:d(t)\leq\epsilon\},\ d(t):=\max_{u\in\mathcal{V}}\|P^{t}(u,\;\cdot\;)-\pi\|_{TV}. $$ Returning to the GNN model, Theorem \ref{thm4.3.2} shows that, in terms of mixing time theory, the rate at which the operator-consistent GNN model converges to over-smoothing is exponential. So as long as the message passing operators of each layer of the GNN model are consistent, the GNN model cannot avoid over-smoothing at the exponential rate. In the Markovian sense, we thus obtain the general conclusion on the over-smoothing problem of the operator-consistent GNN model: it can neither avoid over-smoothing nor avoid over-smoothing at the exponential rate. For operator-consistent GNN models, we can only alleviate, not avoid, over-smoothing. However, is there a similar conclusion for operator-inconsistent GNN models? We discuss this in detail in Section \ref{4.4}. \subsection{Conclusion of the operator-inconsistent GNN models}\label{4.4} In this subsection, we give a sufficient condition for operator-inconsistent GNN models to avoid over-smoothing. In addition, experimental verification is performed in Section \ref{6}. Previously, \citet{wang2019improving} discussed the over-smoothing problem of GAT and concluded that GAT will over-smooth. Similar to our modeling process, they view the GAT operator $ P_{\text{att}}^{(l)} $ at each layer as a transition matrix of a random walk on the graph. However, they ignore the fact that the complete forward propagation process of GAT is a time-inhomogeneous random walk on the graph. Their proof that GAT will over-smooth relies on the stationary distribution $ \pi^{(l)} $ of the GAT operator $ P_{\text{att}}^{(l)} $ being consistent for each layer, i.e. $$ \pi^{(1)}=\pi^{(2)}=\cdots=\pi^{(l)}=\cdots. $$ However, by Corollary \ref{cor4.3}, for the GAT operator $ P_{\text{att}}^{(l)} $ $$ \pi^{(l)}(u)=\frac{\deg^{(l)}(u)}{\sum_{v\in\mathcal{V}}\deg^{(l)}(v)}\quad\forall u\in\mathcal{V}, $$ where $$ \deg^{(l)}(u):=\sum_{k\in\mathcal{N}(u)}\exp(\phi^{(l)}(h_{u}^{(l-1)},h_{k}^{(l-1)})) $$ is the weighted degree of $ u $, and $$ \phi^{(l)}(h_{u}^{(l-1)},h_{k}^{(l-1)}):=\text{LeakyReLU}(\mathbf{a}^{\text{T}}[W^{(l)}h_{u}^{(l-1)}\|W^{(l)}h_{k}^{(l-1)}]). $$ Since the weighted degree of $ u $ differs between layers, $$ \pi^{(1)}\neq\pi^{(2)}\neq\cdots\neq\pi^{(l)}\neq\cdots. $$ The proof of \citet{wang2019improving} is therefore flawed, and thus the conclusion that GAT will over-smooth in the Markovian sense is also incorrect. We have modeled the graph attention model in Section \ref{3.3} as a time-inhomogeneous Markov chain $ \vec{V}_{\text{att}}$ with finite states on the graph. Then, by the Dobrushin-Isaacson-Madsen theorem (Theorem \ref{thm2.1.3}), the GAT operators $ \{P_{\text{att}}^{(l)}\} $, viewed as a family of transition matrices, can only be guaranteed to satisfy condition $ (1) $; neither condition $ (2) $ nor the Isaacson-Madsen condition or the Dobrushin condition is guaranteed. So the time-inhomogeneous Markov chain $ \vec{V}_{\text{att}} $ does not necessarily have a stationary distribution. In Section \ref{4.1}, we interpreted the over-smoothing problem of the GNN model as the convergence of the node feature distribution to a stationary distribution with the move of the Markov chain on the graph.
In Section \ref{4.1}, we interpreted the over-smoothing problem of the GNN model as the convergence of the node feature distribution to a stationary distribution as the Markov chain moves on the graph. Since the time-inhomogeneous Markov chain $ \vec{V}_{\text{att}} $ does not necessarily have a stationary distribution in the limit sense, this suggests that GAT does not necessarily over-smooth. In Section \ref{4.3} we concluded that the operator-consistent GNN model cannot avoid converging to over-smoothing at an exponential rate; GAT serves here as an example showing that operator-inconsistent GNNs do not necessarily over-smooth. To further refine the discussion of this problem, we propose a necessary condition for the existence of the stationary distribution (in the limiting sense) of a time-inhomogeneous Markov chain, and develop from it a sufficient condition ensuring that GAT avoids over-smoothing. \begin{theorem}\label{thm5.3} Let $ \vec{X}=\{X_{n},n\in T\} $ be a time-inhomogeneous Markov chain on a finite state space $ E $, and write its $ n $th-step transition matrix as $ P^{(n)} $, such that each $ P^{(n)} $ is irreducible and aperiodic, admits, viewed as a time-homogeneous transition matrix, a unique stationary distribution $ \pi^{(n)} $, and satisfies $ C(P^{(n)})<1 $. Let the initial distribution be $ \mu_{0} $ and the distribution of the chain $ \vec{X} $ at step $ n $ be $ \mu_{n}:= \mu_{n-1}P^{(n)} $. Then a necessary condition for the existence of a probability measure $ \pi $ on $ E $ such that $ \|\mu_{n}-\pi\|\rightarrow 0 $ as $ n\rightarrow \infty $ is $$ \|\pi^{(n)}-\pi\|\rightarrow 0,\quad n\rightarrow \infty. $$ \end{theorem} {\bf Proof}. Suppose there exists a probability measure $ \pi $ on $ E $ such that $ \|\mu_{n}-\pi\|\rightarrow 0, n\rightarrow \infty. $ We first prove by contradiction that $$ \|\pi^{(n)}-\mu_{n-1}\|\rightarrow 0,\quad n\rightarrow \infty. $$ Suppose instead that there exists $ \delta>0 $ such that for any $ N\in\mathbb{N}^{+} $ and all $ n>N $, $$ \|\pi^{(n)}-\mu_{n-1}\|>\delta. $$ Then by the triangle inequality and the Dobrushin inequality (Lemma \ref{lem2}) $$ \begin{aligned} \|\mu_{n}-\mu_{n-1}\|&=\|(\pi^{(n)}-\mu_{n-1})-(\pi^{(n)}-\mu_{n})\| \\ &\ge \|\pi^{(n)}-\mu_{n-1}\|-\|\pi^{(n)}-\mu_{n}\|\\ &= \|\pi^{(n)}-\mu_{n-1}\|-\|\pi^{(n)}P^{(n)}-\mu_{n-1}P^{(n)}\|\\ &\ge \|\pi^{(n)}-\mu_{n-1}\|-C(P^{(n)})\|\pi^{(n)}-\mu_{n-1}\|\\ &=(1-C(P^{(n)}))\|\pi^{(n)}-\mu_{n-1}\|\\ &>(1-C(P^{(n)}))\delta. \end{aligned} $$ Since $ C(P^{(n)})<1 $, we have $ (1-C(P^{(n)}))\delta>0 $, so for any $ N\in\mathbb{N}^{+} $ and all $ n > N $, $$ \|\mu_{n}-\mu_{n-1}\|>(1-C(P^{(n)}))\delta. $$ By Cauchy's convergence test, this contradicts $$ \|\mu_{n}-\pi\|\rightarrow 0,\quad n\rightarrow \infty. $$ Thus for any $ \epsilon>0 $, there exists $ N_{1}\in\mathbb{N}^{+} $ such that, when $ n>N_{1} $, $$ \|\pi^{(n)}-\mu_{n-1}\|<\frac{\epsilon}{2}. $$ Since $ \|\mu_{n}-\pi\|\rightarrow 0,\ n\rightarrow \infty $, there exists $ N_{2}\in\mathbb{N}^{+} $ such that for all $ n>N_{2} $, $$ \|\mu_{n-1}-\pi\|<\frac{\epsilon}{2}. $$ Taking $ N=\max\{N_{1},N_{2}\} $, when $ n>N $, we have $$ \begin{aligned} \|\pi^{(n)}-\pi\|&=\|(\pi^{(n)}-\mu_{n-1})+(\mu_{n-1}-\pi)\| \\ &\leq \|\pi^{(n)}-\mu_{n-1}\|+\|\mu_{n-1}-\pi\|\\ &<\epsilon. \end{aligned} $$ Then $$ \|\pi^{(n)}-\pi\|\rightarrow 0,\quad n\rightarrow \infty. $$ \hfill\BlackBox\\ This necessary condition is very intuitive. In the limit sense, the transition of $ \mu_{n-1} $ satisfies $$ \lim\limits_{n\rightarrow \infty}\mu_{n-1}P^{(n)}=\lim\limits_{n\rightarrow \infty}\mu_{n}=\lim\limits_{n\rightarrow \infty}\mu_{n-1}=\pi. $$ On the other hand, for any $ n > 0 $, $ \pi^{(n)} $ is the only solution of the equation $$ \mu=\mu P^{(n)}.
$$ Naturally, $$ \lim\limits_{n\rightarrow \infty}\pi^{(n)}=\lim\limits_{n\rightarrow \infty}\mu_{n-1}=\pi. $$ According to Theorem \ref{thm5.3}, we propose the following corollary, which gives a sufficient condition ensuring that GAT avoids over-smoothing in the Markovian sense. \begin{corollary}[Sufficient condition for GAT to avoid over-smoothing]\label{cor5.1} Let $ h_{u}^{(l)} $ be the $ l\; $th hidden layer feature of node $ u\in\mathcal{V} $ in GAT. Then a sufficient condition for GAT to avoid over-smoothing is that there exists a hyperparameter $ \delta>0 $ such that for any $ l\ge2 $, $$ \|h_{u}^{(l-1)}-h_{u}^{(l)}\|>\delta. \eqno{(10)}$$ \end{corollary} {\bf Proof}. By Corollary \ref{cor4.3}, for the GAT operator $ P_{\text{att}}^{(l)}, $ $$ \pi^{(l)}(u)=\frac{\deg^{(l)}(u)}{\sum_{v\in\mathcal{V}}\deg^{(l)}(v)}\quad\forall u\in\mathcal{V}, $$ where $ \deg^{(l)}(u):=\sum_{k\in\mathcal{N}(u)}\exp(\phi^{(l)}(h_{u}^{(l-1)},h_{k}^{(l-1)})) $ is the weighted degree of $ u $, with $$ \phi^{(l)}(h_{u}^{(l-1)},h_{k}^{(l-1)}):=\text{LeakyReLU}(\mathbf{a}^{\text{T}}[W^{(l)}h_{u}^{(l-1)}\|W^{(l)}h_{k}^{(l-1)}]). $$ Since $ \mathcal{G} $ is a connected, non-bipartite graph, $$ C(P_{\text{att}}^{(l)})<1. $$ By Theorem \ref{thm5.3}, a sufficient condition for there to be no probability measure $ \pi $ on $ E $ such that $$ \|\mu_{n}-\pi\|\rightarrow 0,\quad n\rightarrow \infty $$ is $$ \|\pi^{(n)}-\pi\|\nrightarrow 0,\quad n\rightarrow \infty. $$ By Cauchy's convergence test, this is equivalent to the existence of $ \delta_{\pi}>0 $ such that for any $ l\ge1 $, $$ \|\pi^{(l)}-\pi^{(l+1)}\|>\delta_{\pi}. $$ Let $ D^{(l)}:=\sum_{u\in\mathcal{V}}\deg^{(l)}(u) $ and $ D_{\text{min}}=\min\{D^{(l)},D^{(l+1)}\} $. Then $$ \begin{aligned} \|\pi^{(l)}-\pi^{(l+1)}\|&=\sum_{u\in\mathcal{V}}\left\vert\frac{\deg^{(l)}(u)}{D^{(l)}}-\frac{\deg^{(l+1)}(u)}{D^{(l+1)}}\right\vert\\ &\ge\left\vert\frac{\deg^{(l)}(u)}{D^{(l)}}-\frac{\deg^{(l+1)}(u)}{D^{(l+1)}}\right\vert\\ &>\frac{1}{D_{\text{min}}}\left\vert\deg^{(l)}(u)-\deg^{(l+1)}(u)\right\vert \end{aligned} $$ and $$ \deg^{(l)}(u):=\sum_{k\in\mathcal{N}(u)}\exp(\text{LeakyReLU}(\mathbf{a}^{\text{T}}[W^{(l)}h_{u}^{(l-1)}\|W^{(l)}h_{k}^{(l-1)}])). $$ Then if there exists $ \delta>0 $ such that for any $ l\ge2 $, $$ \|h_{u}^{(l-1)}-h_{u}^{(l)}\|>\delta, $$ there must exist $ \delta_{\pi}>0 $ such that for any $ l\ge1 $, $$ \|\pi^{(l)}-\pi^{(l+1)}\|>\delta_{\pi}. $$ \hfill\BlackBox\\ This sufficient condition is intuitive. The essence of over-smoothing is that the node features converge with the propagation of the network. By Cauchy's convergence test, condition (10) exactly prevents the features $ h_{u}^{(l)} $ of the node $ u $ from converging as the network deepens. \section{Experiments}\label{6} In this section, we experimentally verify the correctness of the theoretical analysis. Since we do not aim to refresh the state of the art (SOTA), we verify that the sufficient condition in Section \ref{4.4} can indeed avoid over-smoothing and improve the performance of GAT, while keeping the other hyperparameters (network structure, learning rate, dropout, epochs, etc.) the same (note that these are not necessarily the optimal hyperparameters). We also conduct experiments on GEN-SoftMax~\citep{li2020deepergcn} and we leave this part to Appendix B. \subsection{Setup} In this section we briefly introduce the settings of our experiments; see Appendix C for more specific settings.
\noindent \textbf{Variant of sufficient condition.}\quad Notice that the sufficient condition in Corollary \ref{cor5.1} is in the form of an inequality, which is not conducive to conducting experiments. In the concrete implementation, let $ h_{u}^{(l)} $ be the $ l \; $th hidden layer feature on node $ u\in\mathcal{V} $. We normalize the distance of the node features between two adjacent layers and then constrain it to equal a given hyperparameter threshold $ T\in(0,1) $, i.e., for the GNN model with $n$ layers: $$ \left(\frac{1}{n}\sum_{l=0}^{n-1}(\| \; \text{Sigmoid}(h_{u}^{(l)})-\text{Sigmoid}(h_{u}^{(l+1)}) \; \|)-T \right)^2=0. \eqno{(11)}$$ Since there must exist $ \delta>0 $ satisfying $ T>\delta $, equation (10) can be satisfied. For a detailed discussion of the threshold, we refer to Appendix C. \noindent \textbf{Datasets.}\quad In terms of datasets, we follow the original work of GAT \citep{velivckovic2017graph}. We use three standard benchmark datasets, Cora, Citeseer, and Pubmed \citep{sen2008collective}, covering the basic transductive learning tasks. \noindent \textbf{Implementation Details.}\quad For the specific implementation, we refer to the open-source code of vanilla GAT, and models with different numbers of layers share the same settings: we use the Adam SGD optimizer \citep{kingma2014adam} with learning rate 0.01, the hidden dimension is 64, each GAT layer has 8 heads, and the number of training epochs is 500. All experiments are conducted on a single Nvidia Tesla V100.
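To make the implementation of the variant concrete, the following PyTorch-style sketch (our own minimal illustration; the function name, the penalty weight \texttt{lam} and the assumption that all hidden layers share the same feature dimension are ours, not part of the released code) shows one way to impose equation (11) as a soft penalty added to the training loss:
\begin{verbatim}
import torch

def oversmoothing_penalty(layer_features, T):
    """Soft version of Eq. (11): drive the mean sigmoid-normalized
    distance between node features of adjacent layers towards T.

    layer_features: list of n+1 tensors h^(0), ..., h^(n),
                    each of shape [num_nodes, d] (equal d assumed)
    T: threshold hyperparameter in (0, 1)
    """
    dists = []
    for h_prev, h_next in zip(layer_features[:-1], layer_features[1:]):
        d = torch.norm(torch.sigmoid(h_prev) - torch.sigmoid(h_next), dim=1)
        dists.append(d.mean())     # averaged over nodes for simplicity
    mean_dist = torch.stack(dists).mean()
    return (mean_dist - T) ** 2

# hypothetical usage inside a training step:
# loss = task_loss + lam * oversmoothing_penalty(feats, T=0.5)
\end{verbatim}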
\subsection{Results of GAT} To keep statistical confidence, we repeat all experiments 5 times and record the mean value and standard deviation. Results shown in Table~\ref{table: result of gat} demonstrate that for almost every dataset and number of layers, GAT-OI gains an improvement in performance. Specifically, on Cora and Citeseer, GAT's performance begins to decrease drastically when the number of layers exceeds 6 and 5, respectively, but GAT-OI relieves this trend to some extent. On Pubmed, vanilla GAT's performance declines gradually, whereas GAT-OI remains competitive for all numbers of layers. \begin{table} \centering \caption{Results of GAT} \label{table: result of gat} ~\\ \resizebox{\linewidth}{!}{ \begin{tabular}{l|c|cccccc} \hline \multirow{2}{*}{datasets}&\multirow{2}{*}{model}&\multicolumn{6}{c}{\#layers}\\ \cline{3-8} &&3&4&5&6&7&8 \\ \hline \multirow{2}{*}{Cora}&GAT&0.7773($ \pm $0.0054) &0.7602($ \pm $0.0166) &0.4821($ \pm $0.3021)&0.2774($ \pm $0.2542) &0.1672($ \pm $0.0780) &0.0958($ \pm $0.0059)\\ &GAT-OI&0.7884($ \pm $0.0157) &0.7872($ \pm $0.0127) &0.7648($ \pm $0.0077)&0.6454($ \pm $0.2508) &0.3244($ \pm $0.2465) &0.1678($ \pm $0.0756)\\ \multirow{2}{*}{Citeseer}&GAT&0.6643($ \pm $0.0063) &0.6541($ \pm $0.0076) &0.3472($ \pm $0.2582)&0.2474($ \pm $0.1947) &0.1768($ \pm $0.0064) &0.1902($ \pm $0.0598)\\ &GAT-OI&0.6678($ \pm $0.0157) &0.6692($ \pm $0.0072) &0.6208($ \pm $0.0380)&0.2706($ \pm $0.1884) &0.1915($ \pm $0.0200) &0.1864($ \pm $0.0229)\\ \multirow{2}{*}{Pubmed}&GAT&0.7616($ \pm $0.0115) &0.7534($ \pm $0.0114) &0.7653($ \pm $0.0072)&0.7468($ \pm $0.0084) &0.7468($ \pm $0.0045) &0.7076($ \pm $0.0112)\\ &GAT-OI&0.7673($ \pm $0.0064) &0.7659($ \pm $0.0123) &0.7684($ \pm $0.0063)&0.7664($ \pm $0.0092) &0.7596($ \pm $0.0107) &0.7618($ \pm $0.0114)\\ \hline \end{tabular}} \end{table} \subsection{Verification of avoiding over-smoothing} In this subsection, we further show that the sufficient condition in Section \ref{4.4} not only improves the performance of the model but also indeed avoids the over-smoothing problem. Since the neural network is a black-box model, we cannot explicitly compute the stationary distribution of the graph neural network when it is over-smoothed. Therefore we measure the degree of over-smoothing by calculating the standard deviation of each node's representation at each layer. A lower value implies more severe over-smoothing. Results shown in Fig.~\ref{Fig:distance on Cora}-\ref{Fig:distance on Pubmed} demonstrate that the node representations obtained from GAT-OI are more diverse than those from GAT, which indicates a mitigation of over-smoothing. There is also an interesting correspondence between performance and over-smoothing: for example, on the Cora dataset the performance decreases sharply when the number of layers is larger than 5, and Fig.~\ref{Fig:distance on Cora} shows that the over-smoothing phenomenon is severe at the same time. Also, on the Pubmed dataset the performance is relatively stable, and the corresponding Fig.~\ref{Fig:distance on Pubmed} shows that the model trained on this dataset suffers only lightly from over-smoothing. These results suggest that over-smoothing may be influenced by various factors, e.g. the properties of the dataset, and GAT-OI can relieve this negative effect to some extent. \begin{figure}[hbpt!] \centering \subfigcapskip=-5pt \subfigure[3-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Cora/3-1.0.pdf}} \subfigure[4-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Cora/4-0.5.pdf}} \subfigure[5-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Cora/5-0.7.pdf}} \subfigure[6-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Cora/6-0.9.pdf}} \subfigure[7-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Cora/7-1.0.pdf}} \subfigure[8-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Cora/8-0.8.pdf}} \vspace{-3mm} \caption{Measurement of over-smoothing of GAT on Cora. The orange curve indicates the results of GAT-OI and the blue curve indicates the results of vanilla GAT.} \label{Fig:distance on Cora} \end{figure} \begin{figure}[hbpt!]
\centering \vspace{-8mm} \subfigure[3-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Citeseer/3-0.3.pdf}} \subfigure[4-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Citeseer/4-0.3.pdf}} \subfigure[5-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Citeseer/5-1.0.pdf}} \subfigure[6-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Citeseer/6-0.5.pdf}} \subfigure[7-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Citeseer/7-0.2.pdf}} \subfigure[8-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Citeseer/8-0.1.pdf}} \vspace{-3mm} \caption{Measurement of over-smoothing of GAT on Citeseer. The orange curve indicates the results of GAT-OI and the blue curve indicates the results of vanilla GAT.} \label{Fig:distance on Citeseer} \end{figure} \begin{figure}[hbpt!] \vspace{-8mm} \centering \subfigure[3-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Pubmed/3-0.3.pdf}} \subfigure[4-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Pubmed/4-1.0.pdf}} \subfigure[5-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Pubmed/5-1.0.pdf}} \subfigure[6-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Pubmed/6-0.8.pdf}} \subfigure[7-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Pubmed/7-0.5.pdf}} \subfigure[8-layer]{ \includegraphics[width=0.15\linewidth]{Fig/GAT/Pubmed/8-0.7.pdf}} \vspace{-3mm} \caption{Measurement of over-smoothing of GAT on Pubmed. The orange curve indicates the results of GAT-OI and the blue curve indicates the results of vanilla GAT.} \label{Fig:distance on Pubmed} \end{figure} \section{Conclusion} This article provides a theoretical tool for explaining and analyzing GNNs by modeling the forward propagation process of GNNs as Markov chains on graphs. We model GCN as a simple random walk on the graph, GAT as a time-inhomogeneous Markov chain on the graph, and the GNN using the DropEdge method as a Markov chain in a random environment. We connect Markov chains with GNNs in the hope of solving some problems in the field of GNNs through the study of Markov chains, of inspiring more scholars to analyze GNNs from the Markov perspective, and of guiding the design of high-performance GNN models by Markov chain theory. We study over-smoothing, an important problem limiting the development of GNNs. We attribute the over-smoothing problem to the convergence of the probability distribution over the node set to the stationary distribution. Using results from the study of Markov chains, we prove a series of important conclusions, including the effectiveness of the methods to alleviate the over-smoothing problem, the inability of operator-consistent GNNs to avoid over-smoothing at an exponential rate, and a sufficient condition for operator-inconsistent GNNs to avoid over-smoothing in the Markovian sense. Finally, according to the experiments we designed, the proposed sufficient condition can indeed improve the performance of operator-inconsistent GNNs by addressing the over-smoothing problem. \acks{This work is supported by the National Key Research and Development Program of China (2021YFA1000403), the National Natural Science Foundation of China (Nos. 11991022, U19B2040), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA27000000), and the Fundamental Research Funds for the Central Universities.} \newpage
\section{Introduction} A clear understanding of nuclear structure beyond the valley of $\beta$-stability requires detailed spectroscopic investigations. Direct reactions, such as single-nucleon transfer reactions, are established probes of single-particle nuclear shell structure and have provided considerable insight into the properties of stable nuclei in the past. With the on-going increase in radioactive nuclear beam intensities, such as those achieved at the SPIRAL facility, this kind of reaction is now feasible. The inverse kinematics of such reactions leads, however, to significant constraints on the experimental apparatus \cite{Win97,Lenske98}. One of the main obstacles to overcome is to reach good energy resolution in the kinematically reconstructed excitation energy, given that the energy spread of the secondary beam may be relatively large, that the target-like residue can be emitted over a large angular range and that thick targets are often required to compensate for the relatively low intensities of the beams \cite{Win97}. Already, pioneering detectors such as MUST \cite{Blum99} and the active target MAYA \cite{Demon07} have been built to tackle some of these obstacles, and the TIARA detector described here offers a new alternative to these apparatuses. The TIARA array is designed and built specifically to study direct reactions with radioactive beams and addresses the challenge of the excitation energy resolution by employing the technique of $\gamma$-ray tagging. This has the advantage of providing, in principle, a final excitation energy resolution limited only by Doppler broadening. The TIARA array was commissioned at the GANIL laboratory through a study of the d($^{14}$N,p)$^{15}$N reaction \cite{Phil69,Krets80} with coincident $\gamma$-ray detection. The results are reported here together with a full description of the array. \begin{figure}[ht] \begin{center} \includegraphics[width=10cm,height=7cm,angle=0]{fig1_TiaraCoverage.eps} \end{center} \caption{Position of the silicon TIARA array and the target changing mechanism in the reaction chamber. The beam goes from the right to the left. The angular range covered by each component of the array for the commissioning experiment (see text) is shown.} \label{fig:TiaraCoverage} \end{figure} \begin{figure*}[ht] \begin{center} \includegraphics[width=10cm,height=6cm,angle=0]{fig2_Barrel+Hyball.eps} \end{center} \caption{The SiHyBall annular detector (left) and the octagonal barrel (right).} \label{fig:photoHybBar} \end{figure*} \section{Detector Description} The TIARA array \cite{Cat02b,Cat04} has been designed with the ultimate goal of performing nucleon transfer and other direct reaction studies in inverse kinematics using radioactive ion beams \cite{Win97,Cat02}. The array is used to identify the binary reaction channels and to determine the excitation energies of the populated states. This task is achieved by providing position and deposited energy measurements of the light charged target-like residue, which can be emitted over a wide angular range. TIARA consists of a set of single-layer silicon detectors manufactured by Micron Semiconductor \cite{Micron} which covers 85$\%$ of 4$\pi$ (Fig.~\ref{fig:TiaraCoverage}). The set includes a large annular double sided silicon strip detector (SiHyBall), eight resistive charge division silicon detectors forming a ``barrel'' around the target, and two smaller ``CD-type'' silicon strip detectors (S1 and S2).
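The quoted coverage can be cross-checked from the polar-angle ranges given in the following subsections for the commissioning geometry: with full azimuthal acceptance, the fraction of $4\pi$ subtended by $\theta\in[\theta_{1},\theta_{2}]$ is $(\cos\theta_{1}-\cos\theta_{2})/2$. The short Python sketch below (our illustration only) gives an ideal union of $\sim$95$\%$ of $4\pi$; inter-detector gaps, the target frame and inactive areas reduce the realised active coverage to the quoted 85$\%$:
\begin{verbatim}
import numpy as np

# Polar-angle ranges (degrees) quoted in the text for the
# commissioning geometry; azimuthal coverage assumed complete.
ranges_deg = {"S2": (3.8, 13.1), "S1": (12.6, 27.5),
              "barrel": (35.5, 143.5), "SiHyBall": (137.0, 169.4)}

def solid_angle_fraction(t1, t2):
    t1, t2 = np.radians([t1, t2])
    return 0.5 * (np.cos(t1) - np.cos(t2))

for name, (t1, t2) in ranges_deg.items():
    print(f"{name:8s}: {100 * solid_angle_fraction(t1, t2):5.1f}% of 4pi")

# Adjacent ranges overlap slightly, so merge them before summing.
merged = []
for t1, t2 in sorted(ranges_deg.values()):
    if merged and t1 <= merged[-1][1]:
        merged[-1][1] = max(merged[-1][1], t2)
    else:
        merged.append([t1, t2])
union = sum(solid_angle_fraction(t1, t2) for t1, t2 in merged)
print(f"ideal union: {100 * union:5.1f}% of 4pi")
\end{verbatim}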
\subsection{Resistive Charge Division Detectors} Eight resistive charge division detectors based on 6-inch silicon wafer technology form an octagonal barrel around the beam axis, surrounding the target. Each of the detectors presents an active area 94.6 mm long and 22.5 mm wide with a thickness of 400 $\mu$m. The junction side facing the target is divided into 4 longitudinal resistive strips obtained by p$^{+}$ implantation on n-type silicon. Each 4k$\Omega$-resistive strip has a 5.65 mm pitch while the inter-strip gap is 100 $\mu$m. The strips provide for measurement and pixellation of the azimuthal angle in 32 bins of approximately 9.5$^{o}$. The PCB around the silicon has been minimised and bevelled so that the dead area between the detectors, as well as between the barrel and annular detectors, is minimised. At one end of the Ohmic side of the detector, the PCB is extended by $\sim$15 mm in order to gather all the output signal tracks: the 8 position signals (2 signals per strip) and the connection of the Ohmic side to ground. Miniature Junkosha coaxial cables of 1 mm diameter and 30 cm length were chosen for their favourable vacuum properties to transmit the signals from the detector to the vessel feed-throughs. Once assembled (Fig.~\ref{fig:photoHybBar} right), the barrel presents an octagonal cross section of 27.6 mm side length and 33.3 mm inner radius. From the centre, the angular range spans 36$^{o}$ to 144$^{o}$. For the commissioning measurements described later, the centre of the barrel was mounted 1 mm forward of the target position leading to an angular coverage of 35.5$^{o}$ to 143.5$^{o}$. The measurement of the position along the strip is achieved by resistive charge division and, with alpha particles of 5.5 MeV, the position resolution along the longitudinal axis is determined to better than 0.5 mm (FWHM). The resulting polar angle is thus deduced with a precision better than 1$^{o}$. The energy of the particle is obtained simply by summing the signals from the two strip ends. Figure~\ref{fig:3Alphas} was obtained using a mixed source of $^{239}$Pu, $^{241}$Am and $^{244}$Cm with alpha energies of 5156, 5484 and 5805 keV respectively. It illustrates the correlation between the signals from the ends of the strips (Fig.~\ref{fig:3Alphas}(a)). With a shaping time of 1$\mu$s the barrel suffers slightly from ballistic deficit, which results in a non-linear dependence of the energy sum, measured at each end of a strip, as a function of the position. Nevertheless, this dependence is easily described with a second-order polynomial function and a corresponding corrective factor can be applied to the energy sum. The resolution for one strip is $\sim$70 keV (FWHM) for 5.5 MeV alphas (Fig.~\ref{fig:3Alphas}(b)). \begin{figure}[ht] \begin{center} \includegraphics[width=12cm,height=7cm,angle=0]{fig3_BarVsHybStrip-3A.eps} \end{center} \caption{Typical response of the barrel and SiHyball strips with a 3-alpha source. (a): signals collected at both ends of a barrel strip. (b): total energy collected in a single strip of a barrel detector (thick dashed histogram) and in a single strip of the DSS SiHyBall detector (thin line histogram), obtained with a 1$\mu$s shaping time. (c): same as (b) for the 4 strips of a barrel detector.} \label{fig:3Alphas} \end{figure} \subsection{Annular Silicon Detectors} As noted earlier, in order to enhance the angular coverage of the TIARA array, double-sided DC annular silicon-strip detectors are mounted at both ends of the barrel.
For these detectors the annular rings on the entrance face (junction side) were fabricated by p$^{+}$ implantation on n-type silicon. The forward angles are covered by two 500 $\mu$m thick annular detectors based on 4-inch wafer technology. The smallest of the two (S2-design) was positioned 150 mm downstream of the target position covering the polar angular range [3.8$^{o}$,13.1$^{o}$]. The active area is delimited by a disk of 11 mm inner radius and 35 mm outer radius. The detector is divided into 48 rings of 0.5 mm pitch at the front (target side) and 16 azimuthal sectors at the back. However, for the present measurements, the number of channels to instrument was reduced by linking the rings in threes giving effectively 16 rings of 1.5 mm pitch. The second forward annular detector (S1 design) was mounted 92 mm downstream of the target position to cover the polar angular range [12.6 $^{o}$,27.5$^{o}$]. Its active area is divided into 4 quadrants of 20.5 mm inner and 48 mm outer radii. Although each quadrant has 16 front rings (1.65 mm pitch) and 4 azimuthal back sectors, for the experiment reported here the four quadrants were combined to form two semi-circles to reduce the total number of rings from 16$\times$4 to 16$\times$2. The backward angles from 137.0$^{o}$ to 169.4$^{o}$ are covered by a 400 $\mu$m thick double-sided silicon-strip detector (DSSSD) based on 6-inch wafer technology and positioned 150 mm upstream of the target position. This detector is composed of six individual wedges (Fig.~\ref{fig:photoHybBar}) originally developed at Oak Ridge for the SiHyBall forward array \cite{HyBall}. Each wedge is divided into 16 strips facing the target and 8 azimuthal back sectors. The active area of a wedge is delimited by inner and outer radii of 28.11 mm and 140 mm, respectively, and spans approximately 55$^{o}$ of the total azimuthal angle. The pitch of the rings is 5.3 mm and the polar angular range is close to 2 degrees per strip. The energy resolution, illustrated in Fig.~\ref{fig:3Alphas}b, is typically $\sim$70 keV (FWHM) for 5.5 MeV alpha particles. \begin{figure}[ht] \begin{center} \includegraphics[width=6cm,height=6cm,angle=0]{fig4_target.eps} \end{center} \caption{The target mechanism on a test bench. The rod has just picked up a target frame from the storage wheel.} \label{fig:target} \end{figure} \subsection{Target Changing Mechanism} One of the critical features of the TIARA array is the target changing mechanism (Fig.~\ref{fig:target}). The design of this mechanism has been chosen to maximise the solid angle coverage of the array. Positioned upstream, just behind the SiHyBall, it offers the possibility to use four different targets during a run without breaking vacuum. The mechanism consists of a target storage wheel with 4 positions and a rod parallel and slightly offset to the beam axis. A set of clamps, four on the storage wheel and one at the extremity of the rod (Fig.~\ref{fig:target}) are used to hold the target frames. The rod is driven along the beam axis via a ball screw. It first picks up a target from the wheel and continues its motion along the beam axis through the inner hole of the SiHyBall detector until the target position in the barrel is reached. The target frame is 3$\times$3 cm$^2$ in area with a central hole of 20 mm diameter. It can only be positioned perpendicular to the beam axis, introducing some shadowing at 90$^{o}$ in the barrel detector (Section 3.3). 
The whole mechanism is controlled remotely and the position of both the wheel and the rod is monitored by optical readouts. Four feed-throughs on the vacuum vessel are used for the target control system. \begin{figure*}[ht] \begin{center} \includegraphics[width=14cm,height=10cm,angle=0]{fig5_TIARAVesselClosed+VAMOS.eps} \end{center} \caption{Left: Picture of TIARA in situ. The support structure holding 4 EXOGAM Ge clover detectors has been opened up, showing the TIARA reaction chamber at the entrance of the VAMOS spectrometer. Right: The TIARA array and chamber as defined in the GEANT4 simulation.} \label{fig:photoSetup} \end{figure*} \subsection{The Vacuum Vessel} The reaction chamber of TIARA is made of aluminium and is some 56 cm long (excluding the target mechanism). Figure~\ref{fig:photoSetup} shows the vessel in position in front of the VAMOS spectrometer and in the middle of the EXOGAM support structure. The vessel presents a longitudinal diabolo shape with a central cylindrical section of 85 mm outer diameter housing the barrel and two 500 mm diameter cylindrical sections at each end housing the annular detectors. Two aluminium end plates accommodating Fischer DBPE 105-series feed-throughs (27 pins each) and supporting kinematics plates for detector alignment complete the chamber. While one of the end-plates can accommodate up to 17 feed-throughs, the other one, which also includes two pipes for additional pumping, can accommodate up to 15 of them. Given that 4 feed-throughs are already used for the target mechanism, a total of 28 feed-throughs can be used for the transmission of the detector signals. The TIARA reaction chamber has been designed to allow a gamma-ray array such as EXOGAM to be placed as close as possible to the target. As such, the thickness of the walls of the central section has been limited to 2 mm in order to reduce the $\gamma$-ray attenuation to a minimum. For a photon energy of 1 MeV, the linear attenuation coefficient in aluminium is 0.166 cm$^{-1}$. This leads to an attenuation of 3.3$\%$ in a 0.2 cm layer compared to 8$\%$ in a 0.5 cm layer. \subsection{Electronics and Data Acquisition} There are 8$\times$2$\times$4 channels to be instrumented for the octagonal barrel, (16 rings + 8 sectors)$\times$6 channels for the SiHyBall detector, (16 rings + 8 sectors)$\times$2 channels for the S1 detector and (16 rings + 16 sectors) channels for the S2 detector, resulting in a total of 288 channels. \begin{figure*}[ht] \begin{center} \includegraphics[width=10cm,height=10cm,angle=-90]{fig6_Tiara-electronics.eps} \end{center} \caption{Diagram of the TIARA electronics for one end of a barrel strip, one ring and one sector of the annular detectors (SiHyball, S1, S2).} \label{fig:TiaELEC} \end{figure*} A schematic diagram of the TIARA electronics is shown in Fig.~\ref{fig:TiaELEC}. Eighteen 16-channel charge-sensitive preamplifier modules manufactured at the University of the West of Scotland\footnote{Previously University of Paisley.}, eighteen CAEN N568B 16-channel spectroscopy amplifiers controlled remotely via a CAENET V288 controller module, eighteen CAEN V814 16-channel low threshold discriminators and nine 32-channel ADC modules are employed to record the energy signals from the array. A CAEN SY2527 universal multi-channel power supply system equipped with an A1737N 12-channel High Voltage (HV) board provided the $-50$ V necessary for the full depletion of all the silicon detectors.
Leakage currents of around 0.2, 0.3, 4.0 and 1.1 $\mu$A are typically drawn by each element of the barrel, each wedge of the SiHyBall, the S1 and the S2 detectors, respectively. The 16-channel charge-sensitive preamplifier modules are mounted in double-width NIM modules and are designed specifically for use with room temperature silicon-strip detectors and resistive-sheet detectors with capacitances in the range of 0 to 1000 pF. Each unit houses two 8-channel motherboards with easily dismountable preamplifier chips. Both the motherboards and preamplifier chips are housed in a rugged, well-shielded metal housing. With a quiescent DC output approaching zero, this unit is well adapted for use with the CAEN N568B spectroscopy amplifier, which has 50 $\Omega$ input impedance. For each of the six wedges of the SiHyBall detector, one complete module was used for the 16 front rings (with all 16 HV inputs combined) while half of another module was used for the eight back sectors. In this way, three preamplifier modules instrumented two wedges of the SiHyBall. Similarly, three and two modules instrumented the S1 and S2 detectors, respectively. For the four resistive strips of each of the eight barrel detectors, only half a module was required, with all the corresponding eight preamplifier HV inputs combined and connected. Among the eighteen discriminators, five had to be adapted by the manufacturer to run with positive polarity inputs in order to instrument the eighty back sectors of the SiHyBall, S1 and S2 DSSSD annular detectors. Both the CAEN amplifiers and discriminators are remotely programmable and, for TIARA, the control of this hardware is ensured via the Multi Instance Data Acquisition System (MIDAS) \cite{Puck} application developed at the STFC Daresbury Laboratory. Also used for the present work, and programmable via MIDAS, are the eight 32-channel GANIL XDC3214 ADCs \cite{GANILADC} operating in common dead-time mode and an additional 32-channel Silena S9418 ADC. EXOGAM, VAMOS and the TIARA array have their own stand-alone electronics and data acquisition systems (DAQs). For the TIARA commissioning measurements discussed below, the 3 DAQs were merged together using 3 hardware VXI CENTRUM modules, which provided time stamping of the events, and the MERGER software for building the events \cite{Witt05}. The principal trigger of this commissioning experiment was defined by a hit in any element of TIARA. Due to space constraints around the TIARA vacuum chamber, a $\sim$3 m cable length was necessary to connect the TIARA detectors to the preamplifiers. As a direct consequence, the energy thresholds had to be set relatively high: they were $\sim$1 MeV for the double-sided annular detectors and $\sim$1.5 MeV for the resistive charge division detectors. \begin{figure*}[ht] \begin{center} \includegraphics[width=14cm,height=8cm,angle=0]{fig7_EXOGAMeffTIARA.eps} \end{center} \caption{ (a) Image of the TIARA and EXOGAM arrays reconstructed from the first interaction point of a simulated 1 MeV $\gamma$-ray. This is a projection in the plane perpendicular to the beam axis ($\it{z}$) and conditioned by -4 cm $\le$ $\it{z}$ $\le$ 4 cm. (b) Simulation of EXOGAM efficiency as a function of the $\gamma$-ray energy. The 4 clovers, denuded of their BGO Compton shields, are only $\sim$5 cm away from the target/source position. The triangles and the stars represent the photopeak and total efficiencies, respectively.
The dots and diamonds represent the photopeak efficiencies after incorporating the ``addback'' procedure with and without the Lorentz boost.} \label{fig:TiaEXOSim} \end{figure*} \section{Commissioning} \subsection{Experimental Details} For the commissioning of the array, and in order to validate the technique of heavy-ion---particle---$\gamma$ coincidence measurements, the d($^{14}$N,p$\gamma$)$^{15}$N reaction was investigated at an energy representative of SPIRAL radioactive beams. The 10.6 MeV/nucleon $^{14}$N beam was delivered by the first cyclotron of the GANIL facility. The target was a 1 mg/cm$^{2}$ deuterated polythene (CD$_{2}$)$_{n}$ self-supporting foil. The choice of the reaction was dictated by a number of considerations: (a) the ground state of $^{15}$N is isolated and easily resolved from the excited states; (b) the excited states populated in the reaction cannot be easily resolved in inverse kinematics; and (c) the reaction has been studied before in direct kinematics with light-particle detection at a similar centre-of-mass energy. As shown in Fig.~\ref{fig:photoSetup}, the TIARA reaction chamber allows four clovers of the EXOGAM array \cite{Sim00} to be mounted in a cube-like configuration. In this configuration the segmented Ge clover detectors are all positioned at 90$^{o}$ relative to the beam axis, with a distance of approximately 5 cm between the target and the front face of each detector. The photopeak efficiency in this configuration (Figure~\ref{fig:TiaEXOSim}) is 13.5$\%$ at 1.332 MeV when the 4 central contact signals of the 4 crystals in each clover detector are added together (``addback''). The TIARA and EXOGAM arrays were mounted at the entrance of the VAMOS spectrometer (Fig.~\ref{fig:photoSetup}) operating in momentum-dispersive mode \cite{Savaj99}. The forward focused beam-like fragments are then identified in mass and charge from measurements of the time-of-flight, energy loss, residual energy and position in VAMOS. A plastic finger was placed in front of the VAMOS focal plane detection system to intercept the intense non-interacting direct beam, which could damage the focal plane detectors. The data presented here were recorded over a total of approximately 4 hours of beam time with an average beam intensity of 2$\cdot$10$^{6}$ pps. \subsection{Simulations and Data Analysis} Knowing the efficiency of the experimental setup is essential if reaction cross sections and, hence, spectroscopic factors are to be extracted. In this context a complete and realistic simulation of the setup can be extremely useful. A Monte-Carlo simulation based on the GEANT4 code \cite{Agos02} has been developed to mimic the response of the TIARA and EXOGAM arrays. The geometry defined in this simulation includes, in particular, the entire active area of the TIARA array and the 2 mm thick aluminium walls of the reaction chamber, the four EXOGAM Ge clover detectors, and the target \cite{Labi05}. Figure~\ref{fig:TiaEXOSim}(a) illustrates a reconstructed image of the response of the setup to a 1 MeV $\gamma$-ray. For the simulation of nucleon-transfer reactions, the event generator takes into account the kinematics of the 2-body reactions and the differential angular cross section is set to be isotropic. The position of the proton source (or interaction) in the target is chosen randomly according to the beam spot size and the target thickness.
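To indicate what such an event generator involves, the following standalone, non-relativistic Python sketch (our own simplification, not the GEANT4 code itself; the masses and the ground-state $Q$-value of $\sim$8.61 MeV are inputs taken from standard mass tables) computes the laboratory energy and angle of the proton from d($^{14}$N,p)$^{15}$N for a few centre-of-mass emission angles:
\begin{verbatim}
import numpy as np

u = 931.494                               # MeV/c^2 per mass unit
m_beam, m_targ, m_p, m_res = 14*u, 2*u, 1*u, 15*u
T_beam = 14 * 10.6                        # 14N kinetic energy (MeV)
Q, E_x = 8.61, 0.0                        # ground-state transfer

v_beam = np.sqrt(2 * T_beam / m_beam)     # beam velocity (units of c)
v_cm = m_beam * v_beam / (m_beam + m_targ)
T_cm = T_beam * m_targ / (m_beam + m_targ)  # KE available in the CM
T_out = T_cm + Q - E_x                      # KE shared in the exit channel

mu_out = m_p * m_res / (m_p + m_res)        # exit-channel reduced mass
v_p_cm = np.sqrt(2 * mu_out * T_out) / m_p  # proton speed in the CM

for th_cm in np.radians([10, 60, 120, 170]):
    vz = v_cm + v_p_cm * np.cos(th_cm)      # boost back to the lab frame
    vx = v_p_cm * np.sin(th_cm)
    T_lab = 0.5 * m_p * (vz**2 + vx**2)
    th_lab = np.degrees(np.arctan2(vx, vz))
    print(f"theta_cm={np.degrees(th_cm):5.1f} deg  "
          f"theta_lab={th_lab:6.1f} deg  T_p={T_lab:5.2f} MeV")
\end{verbatim}
Large centre-of-mass angles map onto backward laboratory angles with proton energies of a few MeV, consistent with the kinematic loci discussed in Section 3.3.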
The $\gamma$-rays are simulated assuming isotropic emission in the rest frame of the beam-like reaction product and then boosted by the Lorentz effect. The intrinsic resolutions of all the detectors are also included. Taking into account the inactive regions of the Si detectors, the simulated overall efficiency of the TIARA array for protons with energies of a few MeV emitted isotropically was found to be 84$\%$. The efficiency of the various components of the array as a function of the polar angle is illustrated in Figure~\ref{fig:EffAng}. An isotropic $\gamma$-source at the target position and with variable energy was also simulated to estimate the EXOGAM photopeak efficiency and the result is shown in Figure~\ref{fig:TiaEXOSim}(b). The photopeak attenuation induced by the presence of the TIARA detectors and the reaction chamber is about 5$\%$ at 1.332 MeV, with the silicon layer accounting for about 1$\%$. The output of the simulation is recorded in a ROOT tree which includes as many leaves as channels for the two arrays. The simulated data and the real calibrated data can then be analysed identically using the same analysis code performing the ``addback'' and Doppler corrections. An ``addback'' correction between different clover detectors was not considered here; as noted above, the correction was only applied to the 4 crystals within each clover. When more than one crystal was hit in a clover, the energies collected by the central contacts were summed together. The crystal with the highest deposited energy was taken to be, for the Doppler correction, the first crystal hit. Simulations (Figure~\ref{fig:EXOGSim}) show that this assumption is a valid approximation as long as the energy of the $\gamma$-rays is higher than 500 keV. Indeed, below 500 keV, when two crystals are hit the energies deposited in each crystal tend to be similar (see bottom-left panel of Fig.~\ref{fig:EXOGSim}) and, as a consequence, the identification of the first crystal to be hit becomes uncertain. For events of crystal multiplicity $M_{crys}=1$, the average angles chosen for the Doppler correction are 78${^o}$ for downstream crystals and 102${^o}$ for upstream crystals. For events of higher crystal multiplicity, the angles become 84${^o}$ and 96${^o}$ respectively, since the closer a Compton interaction occurs to an adjacent crystal, the higher the probability that $M_{crys}>1$. These angles have been determined empirically by matching the photopeaks in downstream and upstream crystals and are consistent with the angles returned by the simulations. \begin{figure}[ht] \begin{center} \includegraphics[width=10cm,height=8cm,angle=0]{fig8_TiaraEffAngular.eps} \end{center} \caption{Efficiency of the TIARA array as a function of the laboratory angle according to a GEANT4 simulation in which 2-5 MeV protons were generated and emitted isotropically.} \label{fig:EffAng} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=8cm,height=12cm,angle=0]{fig9_crystalAB1st.eps} \end{center} \caption{ Top panels: Simulation of the energy deposited in crystal A versus the energy deposited in crystal B for an incident $\gamma$-ray of E$_{\gamma}$=1332 keV, with no condition (left) and with the condition that crystal A is hit first (right). Middle panels: E$_{\gamma}$=700 keV. Bottom panels: E$_{\gamma}$=500 keV.
} \label{fig:EXOGSim} \end{figure} \subsection{Results} The energy deposited in the TIARA array by the charged particles resulting from the reaction of the $^{14}$N beam with the (CD$_{2}$)$_{n}$ target is displayed in Figure~\ref{fig:dE_Pos} as a function of the laboratory polar angle. The shadowing introduced by the presence of the target frame at 90$^{o}$ is noticeable. At backward laboratory angles, where the emission of protons is expected, two clear kinematic loci are observed. These loci become even more pronounced when the $^{15}$N residue is identified in coincidence in the focal plane of the VAMOS spectrometer (Fig.~\ref{fig:dE_Pos}b). In the barrel detector, data associated with a low discriminator threshold have been removed, resulting in a noticeable inverted V-shaped cut at low deposited energy in Figs.~\ref{fig:dE_Pos}(a) and (b). The additional requirement of the detection of any $\gamma$-ray in coincidence, shown in Figure~\ref{fig:dE_Pos}c, leads to the disappearance of the protons in region R1 and, consequently, allows one to definitively associate this locus with the d($^{14}$N,p)$^{15}$N$_{gs}$ reaction. Indeed, when the kinematics of this reaction channel are used as input to the simulation, a perfect match between the simulation and the data (Fig.~\ref{fig:dE_Pos}d) is obtained. Both the kinematics and the expected proton punch-through energies are well reproduced. Note that the results of the simulations extend to very forward angles because the differential cross sections d$\sigma$/d$\Omega$ have, as noted earlier, been assumed to be isotropic. Since the reaction cross-section decreases relatively sharply with decreasing proton laboratory angle, the protons punching through the detectors are not so apparent in the data. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm,height=9cm,angle=0]{fig10_TiaraEvsP.eps} \end{center} \caption{ (a) Proton energy-angle spectrum for events detected in TIARA in coincidence with VAMOS. (b) Same as (a) with a gate on $^{15}$N identified in VAMOS. (c) Same as (b) with a coincidence in EXOGAM. (d) Same as (b) with Monte-Carlo simulations superimposed.} \label{fig:dE_Pos} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=8cm,height=9cm,angle=0]{fig11_ProtonSpect.eps} \end{center} \caption{ Excitation energy spectra obtained from the charged particles detected in TIARA: (a) With $^{15}$N detected in coincidence in VAMOS; (b) As for (a) with a $\gamma$-ray in coincidence in EXOGAM; (c) As for (a) with a 1885 keV (solid) and 2296 keV (dash) $\gamma$-ray in coincidence.} \label{fig:Ex} \end{figure} Similarly, the simulation indicates that the protons observed in region R2 can be associated with the population of the 5/2$^{+}$ (7.16 MeV), 3/2$^{+}$ (7.30 MeV) or 7/2$^{+}$ (7.57 MeV) states of $^{15}$N. The energy resolution of the excitation energy spectrum (Fig.~\ref{fig:Ex}) reconstructed from the proton energy and position measured in the barrel detector is $\sim$1 MeV (FWHM) and is clearly insufficient to resolve the three states that lie above 7.1 MeV within 500 keV of each other. Apart from a broad structure centred at 7.5 MeV, Figs.~\ref{fig:Ex}(a) and (b) only reveal a small structure above 5 MeV. This can easily be interpreted as the direct population of the known 5/2$^{+}$ (5.27 MeV) and 1/2$^{+}$ (5.3 MeV) states.
In addition, with a spectroscopic factor only 6 times smaller than the spectroscopic factors of the 7.16 and 7.57 MeV states, and 15 times higher than that of the 5.3 MeV state \cite{Krets80}, the 5/2$^{+}$ state at 5.27 MeV is most probably the main contribution. \begin{figure}[ht] \begin{center} \includegraphics[width=9.5cm,height=12cm,angle=0]{fig12_EXOG_NRJ15N.eps} \end{center} \caption{ The $\gamma$-ray energy spectra in coincidence with the protons in region R2 of Fig.~\ref{fig:dE_Pos}. The data and the simulations (histograms) of the $\gamma$-cascades from the 5/2$^{+}$ and 7/2$^{+}$ levels to the ground state are shown before (a) and after (b) Doppler correction. The contributions of the cascades from the 5/2$^{+}$ and 7/2$^{+}$ levels at 7.16 and 7.57 MeV excitation energy are shown separately in (b). The distributions are normalized to the data by the integral of the number of events between 1 and 6 MeV. (c) Displays the data collected by a single clover after Doppler correction based on the 4 central contact signals (black histogram) and on the 16 outer contact signals (grey histogram). (d) Low energy $^{15}$N level scheme including the strongest transitions.} \label{fig:ExogamData} \end{figure} The spectrum shown in Figure~\ref{fig:ExogamData} illustrates the crucial role played by the $\gamma$-ray array. All the spectra of Figure~\ref{fig:ExogamData} are conditioned by a clover multiplicity equal to one. Panel (b) shows the energy distribution of the $\gamma$-rays measured in coincidence with the protons of region R2, after ``addback'' and Doppler correction. Two narrow peaks at 1885 and 2296 keV and a broader structure at 5270 keV, which result from the de-excitation of the 5/2$^{+}$ and 7/2$^{+}$ states of $^{15}$N at 7.16 and 7.57 MeV, are observed (Fig.~\ref{fig:ExogamData}). As discussed earlier, the de-excitation of the 5/2$^{+}$ and 1/2$^{+}$ states at 5.27 and 5.30 MeV, populated directly in the reaction (Fig.~\ref{fig:Ex}), also contributes, to a small extent, to the broad structure at 5270 keV. It should be noted that the 3/2$^{+}$ level at 7.30 MeV, also observed by \cite{Phil69}, decays directly to the ground state (Fig.~\ref{fig:ExogamData}d) and will, therefore, not be seen in coincidence with $\gamma$-rays in EXOGAM (owing to the very low detection efficiency at such high energies). The Compton edges of the 1885 and 5270 keV $\gamma$-rays are also evident at around 1650 and 5000 keV respectively. A simulation of the two decay cascades, taking into account the Lorentz boost and assuming that the two states were equally populated, has been carried out. The result is displayed by the histogram in Figure~\ref{fig:ExogamData}(a), which has been normalized to the data such that the integrals between 1 and 6 MeV are the same. Although the intensities of the photopeaks at 1885 and 2296 keV seem slightly over-estimated, there is a good overall agreement between the simulation and the data. The discrepancy below 1 MeV is believed to arise from $\gamma$-rays scattering in material surrounding the detectors that has not been included in the simulation. Indeed, in both the data and the simulation, only the events for which a single clover detector is hit have been taken into account. When all multiplicities are included, the number of counts at low energy increases in both the data and the simulated spectrum. Figure~\ref{fig:ExogamData}(b) also displays the contribution of each cascade and, in particular, the contribution of the Compton background in the region of the photopeaks.
According to the simulation, these Compton events represent 26$\%$ of the peak intensity at 1885 keV and $\sim$3.5$\%$ at 5270 keV. Given that the $\gamma$-decay of the two states (assumed here to be equally populated) proceeds via the 5.27 MeV state, half of the Compton contribution from the line at 5270 keV is actually in coincidence with the unobserved 1885 keV $\gamma$-ray. Therefore the total background contribution to the 1885 keV peak is 27.8$\%$. The Compton background from the 5270 keV line to the 2296 keV peak has similarly been estimated to be 2.5$\%$. While, with the proton detection only, the final resolution on the excitation energy is restricted to $\sim$1 MeV, the gamma tagging technique dramatically improves the resolution to $\sim$100 keV, allowing the two closely spaced $^{15}$N excited states to be resolved. During the experiment, the segmentation information of the Ge crystals was only available for one of the four clover detectors. A comparison of the $\gamma$-ray energy spectra recorded in this clover with and without segmentation information is shown in Figure~\ref{fig:ExogamData}(c). At 2.3 MeV, the resolution (FWHM) is 80 and 120 keV with and without the segmentation information, respectively. The proton angular distributions displayed in Figure~\ref{fig:DataDWBA} were extracted by selecting events in region R1 of Figure~\ref{fig:dE_Pos}(b), corresponding to the $^{15}$N ground state, and in coincidence with the 1885 and 2296 keV $\gamma$-ray lines (Fig.~\ref{fig:ExogamData}b), corresponding to the $^{15}$N levels at 7.16 and 7.57 MeV. \begin{figure*}[ht] \begin{center} \includegraphics[width=13.9cm,height=6.cm,angle=0]{fig13_DCrossS_14N.eps} \end{center} \caption{ Proton differential cross sections from the population of the ground state (a) and the excited states at 7155 (b) and 7565 keV (c) of $^{15}$N in the reaction d($^{14}$N,p). The filled circles show the DWBA calculations normalized by previously measured spectroscopic factors. The filled squares represent the data normalized to fit the DWBA calculations. } \label{fig:DataDWBA} \end{figure*} The DWBA calculations displayed in Figure~\ref{fig:DataDWBA} were performed using the TWOFNR code \cite{Toyama} with the optical model parameters calculated according to the Johnson-Soper prescription \cite{Johnson70}. Each of the theoretical distributions has been multiplied by the corresponding spectroscopic factor derived from direct kinematics measurements in order to obtain the absolute differential cross section \cite{Krets80}: C$^{2}$S(g.s.)=1.33, C$^{2}$S(7.16 MeV)=0.90 and C$^{2}$S(7.57 MeV)=0.88. Transferred angular momenta of {\it l}=1 for the ground state and {\it l}=2 for the two excited states were considered in the DWBA calculations, in agreement with the results obtained in direct kinematics \cite{Phil69,Krets80}. The experimental proton angular distributions for the ground and excited states were normalized to the theoretical distributions. The shape of the experimental distribution for the ground state (Fig.~\ref{fig:DataDWBA}a) is in good agreement with the theoretical distribution. Similarly, the shapes of the experimental and theoretical distributions for the two excited states (Figs.~\ref{fig:DataDWBA}(b) and (c)) are also in good agreement. Unfortunately, in the present measurements, the absolute cross sections, and hence spectroscopic factors, could not be derived from the data. As noted earlier, part of the focal plane of VAMOS was protected from the transmitted beam by a ``finger''.
As a result, no direct measurement of the beam dose or of the elastic scattering could be made. Using, however, as a global normalisation the spectroscopic factor of 1.33 for the ground state \cite{Krets80}, the relative spectroscopic factors for the two excited states may be estimated. Values of approximately 0.7 times those previously measured \cite{Krets80} were deduced, which is within the statistical uncertainties of the measured differential cross sections. Future measurements with radioactive beams will employ an active ``finger'' together with beam detectors also capable of counting the beam particles. \subsection{Discussion} In the present study, knowing the level scheme of $^{15}$N facilitates the identification of the populated states. For a nucleus with an unknown level scheme, provided there are sufficient statistics, a $\gamma$-$\gamma$ coincidence analysis can be carried out, in addition to simulations, to restore or establish a consistent level scheme. Therefore, when coupled to a high efficiency $\gamma$-ray detector array, the TIARA array offers a new alternative to other existing detectors for direct reactions with unstable beams. Other detectors like MUST offer a much higher dynamic range, better particle identification and better intrinsic resolution than the TIARA array, but they have a limited solid-angle coverage, which can make (d,p) reaction studies difficult. On the other hand, active target detectors like MAYA compete with the TIARA array in terms of solid-angle coverage and are known to have significantly lower energy thresholds than silicon detectors. However, the large volume occupied by active target detectors prohibits a coupling with a $\gamma$-ray array and, consequently, the $\gamma$-tagging technique can hardly be used. \section{Summary} A new compact, large solid-angle segmented silicon detector array, TIARA, designed for the study of direct reactions in inverse kinematics with radioactive beams, has been described. Coupled with a high efficiency $\gamma$-ray detector array, such as EXOGAM, TIARA employs the technique of light (target-like) particle-$\gamma$ coincidences to obtain the necessary resolution in the excitation energy of the residual beam-like recoil. Identification of the latter, if required, may be performed using a magnetic spectrometer such as VAMOS. These techniques have been validated in a commissioning experiment in which the d($^{14}$N,p$\gamma$)$^{15}$N reaction was measured. In the near future, it is planned to increase the dynamic range of the silicon detectors by the installation of a second 700 $\mu$m thick Si layer around the existing barrel detectors together with 15 mm thick CsI(Tl) segmented detectors backing the forward angle annular detectors. \section{Acknowledgements} The collaboration wishes to thank the GANIL cyclotron operations crew for delivering the $^{14}$N beam. Partial support from the European Union under contract N$^{o}$506065 and from the Spanish MEC Grant FPA2005-03993 is also gratefully acknowledged. The development and construction of TIARA were financed by an EPSRC(UK) grant.
\section{Introduction} For an interval $J \subset [0, 1]$ and $g: [0, 1] \rightarrow \C$, we define \begin{align*} (\E_{J}g)(x) := \int_{J}g(\xi)e(\xi x_1 + \xi^2 x_2)\, d\xi \end{align*} where $e(a) := e^{2\pi i a}$. For an interval $I$, let $P_{\ell}(I)$ be the partition of $I$ into intervals of length $\ell$. By writing $P_{\ell}(I)$, we are assuming that $|I|/\ell \in \N$. We will also similarly define $P_{\ell}(B)$ for squares $B$ in $\R^2$. Next if $B = B(c, R)$ is a square in $\R^2$ centered at $c$ of side length $R$, let $$w_{B}(x) := (1 + \frac{|x - c|}{R})^{-100}.$$ We will always assume that our squares have sides parallel to the $x$- and $y$-axes. We observe that $1_B \leq 2^{100}w_B$. For a function $w$, we define $$\nms{f}_{L^{p}(w)} := (\int_{\R^2}|f(x)|^{p}w(x)\, dx)^{1/p}.$$ For $\delta \in \N^{-1}$, let $D(\delta)$ be the best constant such that \begin{align}\label{decdef} \nms{\E_{[0, 1]}g}_{L^{6}(B)} \leq D(\delta)(\sum_{J \in P_{\delta}([0, 1])}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})^{1/2} \end{align} for all $g: [0, 1] \rightarrow \C$ and all squares $B$ in $\R^2$ of side length $\delta^{-2}$. Let $D_{p}(\delta)$ be the decoupling constant where the $L^6$ in \eqref{decdef} is replaced with $L^{p}$. Since $1_B \lsm w_B$, the triangle inequality combined with Cauchy-Schwarz shows that $D_{p}(\delta) \lsm_{p} \delta^{-1/2}$. The $l^2$ decoupling theorem for the paraboloid proven by Bourgain and Demeter in \cite{bd} implies that for the parabola we have $D_{p}(\delta) \lsm_{\vep} \delta^{-\vep}$ for $2 \leq p \leq 6$ and this range of $p$ is sharp. Decoupling-type inequalities were first studied by Wolff in \cite{wolff}. Following the proof of $l^2$ decoupling for the paraboloid by Bourgain and Demeter in \cite{bd}, decoupling inequalities for various curves and surfaces have found many applications to analytic number theory (see for example \cite{zeta, bourgainmvt, bdweyl, bdguo, bdg, meansquare, prend, guozhang, guozorin, heathbrown}). Most notable is the proof of Vinogradov's mean value theorem by Bourgain-Demeter-Guth using decoupling for the moment curve $t \mapsto (t, t^2, \ldots, t^n)$ in \cite{bdg}. Wooley in \cite{nested} was also able to prove Vinogradov's mean value theorem using his nested efficient congruencing method. This paper probes the connections between efficient congruencing and $l^2$ decoupling in the simplest case of the parabola. For a slightly different interpretation of the relation between efficient congruencing and decoupling for the cubic moment curve, inspired by \cite{hbwooley}, see \cite{guoliyung}. Our proof of $l^2$ decoupling for the parabola is inspired by the treatment of efficient congruencing in Pierce's Bourbaki seminar exposition \cite[Section 4]{pierce}. This proof will give the following result. \begin{thm}\label{ef2d_main} For $\delta \in \N^{-1}$ such that $0 < \delta < e^{-200^{200}}$, we have \begin{align*} D(\delta) \leq \exp(30\frac{\log\frac{1}{\delta}}{\log\log\frac{1}{\delta}}). \end{align*} \end{thm} In the context of discrete Fourier restriction, Theorem \ref{ef2d_main} implies that for all $N$ sufficiently large and an arbitrary sequence $\{a_n\} \subset l^2$, we have \begin{align*} \nms{\sum_{|n| \leq N}a_n e^{2\pi i (nx + n^2 t)}}_{L^{6}(\mathbb{T}^2)} \lsm \exp(O(\frac{\log N}{\log\log N}))(\sum_{|n| \leq N}|a_n|^{2})^{1/2} \end{align*} which rederives (up to constants) the upper bound obtained by Bourgain in \cite[Proposition 2.36]{bourgain} but without resorting to a divisor bound.
It is an open problem whether the $\exp(O(\frac{\log N}{\log\log N}))$ can be improved. \subsection{More notation and weight functions} We define \begin{align*} \nms{f}_{L^{p}_{\#}(B)} := (\frac{1}{|B|}\int_{B}|f(x)|^{p}\, dx)^{1/p}, \quad \nms{f}_{L^{p}_{\#}(w_B)} := (\frac{1}{|B|}\int|f|^{p}w_B)^{1/p}, \end{align*} and given a collection $\mc{C}$ of squares, we let $$\avg{\Delta \in \mc{C}}\,f(\Delta) := \frac{1}{|\mc{C}|}\sum_{\Delta \in \mc{C}}f(\Delta).$$ Finally we will let $\eta$ be a Schwartz function such that $\eta \geq 1_{B(0, 1)}$ and $\supp(\wh{\eta}) \subset B(0, 1)$. For $B = B(c, R)$ we also define $\eta_{B}(x) := \eta(\frac{x - c}{R})$. In Section \ref{fc} we care about explicit constants and so we will use the explicit $\eta$ constructed in \cite[Corollary 2.2.9]{thesis}. In particular, for this $\eta$, $\eta_{B} \leq 10^{2400}w_B$. For the remaining sections in this paper, we will ignore this constant. The most important facts about $w_B$ we will need are that $$w_{B(0, R)} \ast w_{B(0, R)} \lsm R^{2}w_{B(0, R)}$$ and $$1_{B(0, R)} \ast w_{B(0, R)} \gtrsim R^{2}w_{B(0, R)}$$ (\cite[Lemma 2.1]{thesis}) from which we can derive all the other properties about weights we will use, such as: given a partition $\{\Delta\}$ of $B$, $\sum_{\Delta}w_{\Delta} \lsm w_B$ (\cite[Proposition 2.14]{thesis}) and $$\nms{f}_{L^{p}(w_{B(0, R)})}^{p}\lsm \int_{\R^2}\nms{f}_{L^{p}_{\#}(B(y, R))}^{p}w_{B(0, R)}(y)\, dy$$ (see \cite[Corollary 2.4]{thesis}). We refer the reader to \cite[Section 4]{sg} and \cite[Section 2.2]{thesis} for some useful details and properties of the weights $w_B$ and $\eta_B$. \subsection{Outline of proof of Theorem \ref{ef2d_main}} Our argument is inspired by the discussion of efficient congruencing in \cite[Section 4]{pierce} which in turn is based on Heath-Brown's simplification \cite{hbwooley} of Wooley's proof of the cubic case of Vinogradov's mean value theorem \cite{wooleycubic}. Our first step, much like the first step in both efficient congruencing and decoupling for the parabola, is to bilinearize the problem. Throughout we will assume $\delta^{-1} \in \N$ and $\nu \in \N^{-1} \cap (0, 1/100)$. Fix arbitrary integers $a, b \geq 1$. Suppose $\delta$ and $\nu$ were such that $\nu^{a}\delta^{-1}, \nu^{b}\delta^{-1} \in \N$. This implies that $\delta \leq \min(\nu^a, \nu^b)$ and the requirement that $\nu^{\max(a, b)}\delta^{-1}\in \N$ is equivalent to having $\nu^{a}\delta^{-1}, \nu^{b}\delta^{-1} \in \N$. For this $\delta$ and $\nu$, let $M_{a, b}(\delta, \nu)$ be the best constant such that \begin{align}\label{mabdef} \int_{B}|\E_{I}g|^{2}|\E_{I'}g|^{4} \leq M_{a, b}(\delta, \nu)^{6}(\sum_{J \in P_{\delta}(I)}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})(\sum_{J' \in P_{\delta}(I')}\nms{\E_{J'}g}_{L^{6}(w_B)}^{2})^{2} \end{align} for all squares $B$ of side length $\delta^{-2}$, $g: [0, 1] \rightarrow \C$, and all intervals $I \in P_{\nu^{a}}([0, 1])$, $I' \in P_{\nu^{b}}([0, 1])$ with $d(I, I') \geq 3\nu$. We will say that such $I$ and $I'$ are $3\nu$-separated. Applying H\"{o}lder followed by the triangle inequality and Cauchy-Schwarz shows that $M_{a, b}(\delta, \nu)$ is finite. This is not the only bilinear decoupling constant we can use (see \eqref{bik_const} and \eqref{mb_bds} in Sections \ref{bik} and \ref{bds}, respectively), but in this outline we will use \eqref{mabdef} because it is closest to the one used in \cite{pierce} and the one we will use in Section \ref{fc}. Our proof of Theorem \ref{ef2d_main} is broken into the following four lemmas.
We state them below ignoring explicit constants for now. \begin{lemma}[Parabolic rescaling]\label{parab_outline} Let $0 < \delta < \sigma < 1$ be such that $\sigma, \delta, \delta/\sigma \in \N^{-1}$. Let $I$ be an arbitrary interval in $[0, 1]$ of length $\sigma$. Then \begin{align*} \nms{\E_{I}g}_{L^{6}(B)} \lsm D(\frac{\delta}{\sigma})(\sum_{J \in P_{\delta}(I)}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})^{1/2} \end{align*} for every $g: [0, 1] \rightarrow \C$ and every square $B$ of side length $\delta^{-2}$. \end{lemma} \begin{lemma}[Bilinear reduction]\label{bi_outline} Suppose $\delta$ and $\nu$ were such that $\nu\delta^{-1} \in \N$. Then $$D(\delta) \lsm D(\frac{\delta}{\nu}) + \nu^{-1}M_{1, 1}(\delta, \nu).$$ \end{lemma} \begin{lemma}\label{freq_outline} Let $a$ and $b$ be integers such that $1 \leq a \leq 2b$. Suppose $\delta$ and $\nu$ were such that $\nu^{2b}\delta^{-1} \in \N$. Then $$M_{a, b}(\delta, \nu) \lsm \nu^{-1/6}M_{2b, b}(\delta, \nu).$$ \end{lemma} \begin{lemma}\label{switch_outline} Suppose $b$ is an integer and $\delta$ and $\nu$ were such that $\nu^{2b}\delta^{-1} \in \N$. Then $$M_{2b, b}(\delta, \nu) \lsm M_{b, 2b}(\delta, \nu)^{1/2}D(\frac{\delta}{\nu^b})^{1/2}.$$ \end{lemma} Applying Lemma \ref{freq_outline}, we can move from $M_{1, 1}$ to $M_{2, 1}$ and then Lemma \ref{switch_outline} allows us to move from $M_{2, 1}$ to $M_{1, 2}$ at the cost of a square root of $D(\delta/\nu)$. Applying Lemma \ref{freq_outline} again moves us to $M_{4, 2}$, and another application of Lemma \ref{switch_outline} brings us to $M_{2, 4}$. Repeating this we can eventually reach $M_{2^{N-1}, 2^{N}}$ paying some $O(1)$ power of $\nu^{-1}$ and the value of the linear decoupling constants at various scales. This combined with Lemma \ref{bi_outline} and the choice of $\nu = \delta^{1/2^N}$ leads to the following result. \begin{lemma} Let $N \in \N$ and suppose $\delta$ was such that $\delta^{-1/2^N} \in \N$ and $0 < \delta < 100^{-2^N}$. Then \begin{align*} D(\delta) \lsm D(\delta^{1 - \frac{1}{2^{N}}}) + \delta^{-\frac{4}{3\cdot 2^{N}}}D(\delta^{1/2})^{\frac{1}{3\cdot 2^{N}}}\prod_{j = 0}^{N-1}D(\delta^{1 - \frac{1}{2^{N - j}}})^{\frac{1}{2^{j + 1}}}. \end{align*} \end{lemma} This then gives a recursion which shows that $D(\delta) \lsm_{\vep} \delta^{-\vep}$ (see Section \ref{fc_iter} for more details). The proof of Lemma \ref{parab_outline} is essentially a change of variables and applying the definition of the linear decoupling constant (some small technical issues arise because of the weight $w_B$, see \cite[Section 2.4]{thesis}). The idea is that a cap on the parabola can be stretched to the whole parabola without changing any geometric properties. The bilinear reduction Lemma \ref{bi_outline} follows from H\"{o}lder's inequality. The argument we use is from Tao's exposition on the Bourgain-Demeter-Guth proof of Vinogradov's mean value theorem \cite{tao2d}. In general dimension, the multilinear reduction follows from a Bourgain-Guth argument (see \cite{bg} and \cite[Section 8]{sg}). We note that if $a$ and $b$ are so large that $\nu^{a}, \nu^{b} \approx \delta$ then $M_{a, b} \approx O(1)$ and so the goal of the iteration is to efficiently move from small $a$ and $b$ to very large $a$ and $b$. Lemma \ref{freq_outline} is the most technical of the four lemmas and is where we use a Fefferman-Cordoba argument in Section \ref{fc}. We can still close the iteration with Lemma \ref{freq_outline} replaced by $M_{a, b} \lsm M_{b, b}$ for $1 \leq a < b$ and $M_{b, b} \lsm \nu^{-1/6}M_{2b, b}$.
Both these estimates come from the same proof of Lemma \ref{freq_outline} and this is how we approach the iteration in Sections \ref{unc} and \ref{bik} (see Lemmas \ref{mglem2} and \ref{mglem1} and their rigorous counterparts Lemmas \ref{mglem2_rig} and \ref{mglem1_rig}). The proof of these lemmas is a consequence of $l^2 L^2$ decoupling and ball inflation. Finally, Lemma \ref{switch_outline} is an application of H\"{o}lder and parabolic rescaling. \subsection{Comparison with efficient congruencing as in \cite[Section 4]{pierce}} The main object of iteration in \cite[Section 4]{pierce} is the following bilinear object \begin{align*} &I_{1}(X; a, b)\\ & = \max_{\xi \neq \eta\Mod{p}}\int_{(0, 1]^2}|\sum_{\st{1 \leq x \leq X\\x \equiv \xi \Mod{p^a}}}e(\alpha_1 x + \alpha_2 x^2)|^{2}|\sum_{\st{1 \leq y \leq X\\y \equiv \eta \Mod{p^b}}}e(\alpha_1 y + \alpha_2 y^2)|^{4}\, d\alpha. \end{align*} Lemmas \ref{parab_outline}-\ref{switch_outline} correspond directly to Lemmas 4.2-4.5 of \cite[Section 4]{pierce}. That Lemmas 4.2 and 4.3 of \cite{pierce} correspond to parabolic rescaling and bilinear reduction, respectively, was already observed by Pierce in \cite[Section 8]{pierce}. We think of $p$ as $\nu^{-1}$, $J(X)/X^{3}$ as $D(\delta)$, and $p^{a + 2b}I_{1}(X; a, b)/X^{3}$ as $M_{a, b}(\delta, \nu)^{6}$. We have the expressions $J(X)/X^3$ and $p^{a + 2b}I_{1}(X; a, b)/X^3$ because heuristically assuming square root cancellation (ignoring $X^{\vep}$ powers) we expect $J(X) \approx X^3$ and $I_{1}(X; a, b) \approx X^{3}/p^{a + 2b}$. This heuristic explains why $$I_{1}(X; a, b) \leq p^{2b - a}I_{1}(X; 2b, b)$$ from \cite[Lemma 4.4]{pierce} becomes (essentially, after ignoring the $\nu^{-1} \approx \delta^{-\vep}$) $$M_{a, b}(\delta, \nu)^{6} \lsm M_{2b, b}(\delta, \nu)^{6}.$$ In the definition of $I_1$, the $\max_{\xi \neq \eta\Mod{p}}$ condition can be thought of as corresponding to the transversality condition that $I_1$ and $I_2$ are $\nu$-separated intervals of length $\nu$. The integral over $(0, 1]^2$ corresponds to an integral over $B$. Finally the expression $$|\sum_{\st{1 \leq x \leq X\\x \equiv \xi \Mod{p^a}}}e(\alpha_1 x + \alpha_2 x^2)|$$ can be thought of as corresponding to $|\E_{I}g|$ for $I$ an interval of length $\nu^a$ and so the whole of $I_{1}(X; a, b)$ can be thought of as $\int_{B}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4}$ where $\ell(I_1) = \nu^a$ and $\ell(I_2) = \nu^b$ with $I_1$ and $I_2$ being $O(\nu)$-separated. This will be our interpretation in Section \ref{fc}. Interpreting the proof of Lemma \ref{freq_outline} using the uncertainty principle, we reinterpret $I_{1}(X; a, b)$ as (ignoring weight functions) \begin{align}\label{iab_interpret} \avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I}g}_{L^{2}_{\#}(\Delta)}^{2}\nms{\E_{I'}g}_{L^{4}_{\#}(\Delta)}^{4} \end{align} where $I$ and $I'$ have length $\nu^{a}$ and $\nu^{b}$, respectively, and are $\nu$-separated. The uncertainty principle says that \eqref{iab_interpret} is essentially equal to $\frac{1}{|B|}\int_{B}|\E_{I}g|^{2}|\E_{I'}g|^{4}$. Finally in Section \ref{bds} we replace \eqref{iab_interpret} with \begin{align*} \avg{\Delta \in P_{\nu^{-b}}(B)}(\sum_{J \in P_{\nu^{b}}(I)}\nms{\E_{J}g}_{L^{2}_{\#}(\Delta)}^{2})(\sum_{J' \in P_{\nu^{b}}(I')}\nms{\E_{J'}g}_{L^{2}_{\#}(\Delta)}^{2})^{2} \end{align*} where $I$ and $I'$ have length $\nu$ and are $\nu$-separated. Note that when $b = 1$ this is then exactly equal to $\frac{1}{|B|}\int_{B}|\E_{I}g|^{2}|\E_{I'}g|^{4}$.
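\begin{rem} To keep the heuristic dictionary in one place, we tabulate the correspondences just described (all are heuristic only and none are used in the proofs):
\begin{center}
\begin{tabular}{ll}
efficient congruencing & $l^2$ decoupling \\
\hline
$p$ & $\nu^{-1}$ \\
$J(X)/X^{3}$ & $D(\delta)$ \\
$p^{a + 2b}I_{1}(X; a, b)/X^{3}$ & $M_{a, b}(\delta, \nu)^{6}$ \\
congruence class $\Mod{p^a}$ & interval of length $\nu^{a}$ \\
$\max_{\xi \neq \eta \Mod{p}}$ & $O(\nu)$-separation of $I_1$ and $I_2$ \\
integration over $(0, 1]^{2}$ & integration over $B$
\end{tabular}
\end{center}
\end{rem}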
The interpretation given above is now similar to the $A_p$ object studied by Bourgain-Demeter in \cite{sg}. \subsection{Overview} Theorem \ref{ef2d_main} will be proved in Section \ref{fc} via a Fefferman-Cordoba argument. This argument does not generalize to proving that $D_{p}(\delta) \lsm_{\vep} \delta^{-\vep}$ except for $p = 4, 6$. However in Section \ref{unc}, by the uncertainty principle we reinterpret a key lemma from Section \ref{fc} (Lemma \ref{abup}) which allows us to generalize the argument in Section \ref{fc} so that it can work for all $2 \leq p \leq 6$. We make this completely rigorous in Section \ref{bik} by defining a slightly different (but morally equivalent) bilinear decoupling constant. A basic version of the ball inflation inequality similar to that used in \cite[Theorem 9.2]{sg} and \cite[Theorem 6.6]{bdg} makes an appearance. Finally in Section \ref{bds}, we reinterpret the argument made in Section \ref{bik} and write an argument that is more like that given in \cite{sg}. We create a 1-parameter family of bilinear constants which in some sense ``interpolate'' between the Bourgain-Demeter argument and our argument here. The three arguments in Sections \ref{fc}-\ref{bds} are similar but will use slightly different bilinear decoupling constants. We will only mention explicit constants in Section \ref{fc}. In Sections \ref{bik} and \ref{bds}, for simplicity, we will only prove that $D(\delta) \lsm_{\vep} \delta^{-\vep}$. Because the structure of the iteration in Sections \ref{bik} and \ref{bds} is the same as that in Section \ref{fc}, we obtain essentially the same quantitative bounds as in Theorem \ref{ef2d_main} when making explicit the bounds in Sections \ref{bik} and \ref{bds}. \subsubsection*{Acknowledgements} The author would like to thank Ciprian Demeter, Larry Guth, and his advisor Terence Tao for encouragement and many discussions on decoupling. The author would also like to thank Kevin Hughes and Trevor Wooley for a fruitful discussion on efficient congruencing at the \emph{Harmonic Analysis and Related Areas} conference held by the Clay Math Institute at the University of Oxford in September 2017. The author is partially supported by NSF grants DGE-1144087 and DMS-1266164. \section{Proof of Theorem \ref{ef2d_main}}\label{fc} We recall the definition of the bilinear decoupling constant $M_{a, b}$ as in \eqref{mabdef}. The arguments in this section will rely strongly on the fact that the exponents in the definition of $M_{a, b}$ are 2 and 4, though we will only essentially use this in Lemma \ref{abup}. Given two expressions $x_1$ and $x_2$, let $$\geom_{2, 4} x_i := x_{1}^{2/6}x_{2}^{4/6}.$$ H\"{o}lder gives $\nms{\geom_{2, 4} x_i}_{p} \leq \geom_{2, 4}\nms{x_i}_{p}$. \subsection{Parabolic rescaling and consequences} The linear decoupling constant $D(\delta)$ obeys the following important property. \begin{lemma}[Parabolic rescaling] Let $0 < \delta < \sigma < 1$ be such that $\sigma, \delta, \delta/\sigma \in \N^{-1}$. Let $I$ be an arbitrary interval in $[0, 1]$ of length $\sigma$. Then \begin{align*} \nms{\E_{I}g}_{L^{6}(B)} \leq 10^{20000} D(\frac{\delta}{\sigma})(\sum_{J \in P_{\delta}(I)}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})^{1/2} \end{align*} for every $g: [0, 1] \rightarrow \C$ and every square $B$ of side length $\delta^{-2}$. \end{lemma} \begin{proof} See \cite[Proposition 7.1]{sg} for the proof without explicit constants and \cite[Section 2.4]{thesis} with $E = 100$ for a proof with explicit constants (and a clarification of parabolic rescaling with weight $w_B$).
\end{proof} As an immediate application of parabolic rescaling we have almost multiplicativity of the decoupling constant. \begin{lemma}[Almost multiplicativity] Let $0 < \delta < \sigma < 1$ be such that $\sigma, \delta, \delta/\sigma \in \N^{-1}$. Then $$D(\delta) \leq 10^{20000}D(\sigma)D(\delta/\sigma).$$ \end{lemma} \begin{proof} See \cite[Proposition 2.4.1]{thesis} with $E = 100$. \end{proof} The trivial bound of $O(\nu^{(a + 2b)/6}\delta^{-1/2})$ for $M_{a, b}(\delta, \nu)$ is too weak for applications. We instead give another trivial bound that follows from parabolic rescaling. \begin{lemma}\label{mabtriv} If $\delta$ and $\nu$ were such that $\nu^{a}\delta^{-1}, \nu^{b}\delta^{-1} \in \N$, then $$M_{a, b}(\delta, \nu) \leq 10^{20000}D(\frac{\delta}{\nu^a})^{1/3}D(\frac{\delta}{\nu^b})^{2/3}.$$ \end{lemma} \begin{proof} Fix arbitrary $I_1 \in P_{\nu^{a}}([0, 1])$ and $I_2 \in P_{\nu^{b}}([0, 1])$ which are $3\nu$-separated. H\"{o}lder's inequality gives that \begin{align*} \nms{\geom_{2, 4}|\E_{I_i}g|}_{L^{6}(B)}^{6} \leq \nms{\E_{I_1}g}_{L^{6}(B)}^{2}\nms{\E_{I_2}g}_{L^{6}(B)}^{4}. \end{align*} Parabolic rescaling bounds this by \begin{align*} 10^{120000}D(\frac{\delta}{\nu^a})^{2}D(\frac{\delta}{\nu^b})^{4}(\sum_{J \in P_{\delta}(I_1)}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})(\sum_{J' \in P_{\delta}(I_2)}\nms{\E_{J'}g}_{L^{6}(w_B)}^{2})^{2}. \end{align*} Taking sixth roots then completes the proof of Lemma \ref{mabtriv}. \end{proof} H\"{o}lder and parabolic rescaling allow us to interchange the $a$ and $b$ in $M_{a, b}$. \begin{lemma}\label{switch} Suppose $b \geq 1$ and $\delta$ and $\nu$ were such that $\nu^{2b}\delta^{-1} \in \N$. Then \begin{align*} M_{2b, b}(\delta, \nu) \leq 10^{10000}M_{b, 2b}(\delta, \nu)^{1/2}D(\delta/\nu^b)^{1/2}. \end{align*} \end{lemma} \begin{proof} Fix arbitrary intervals $I_1$ and $I_2$ of length $\nu^{2b}$ and $\nu^{b}$, respectively, which are $3\nu$-separated. H\"{o}lder's inequality then gives \begin{align*} \nms{|\E_{I_1}g|^{1/3}|\E_{I_2}g|^{2/3}}_{L^{6}(B)}^{6} \leq (\int_{B}|\E_{I_1}g|^{4}|\E_{I_2}g|^{2})^{1/2}(\int_{B}|\E_{I_2}g|^{6})^{1/2}. \end{align*} Applying the definition of $M_{b, 2b}$ and parabolic rescaling bounds the above by \begin{align*} (10^{20000})^{3}M_{b, 2b}(\delta, \nu)^{3}D(\frac{\delta}{\nu^b})^{3}(\sum_{J \in P_{\delta}(I_1)}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})(\sum_{J' \in P_{\delta}(I_2)}\nms{\E_{J'}g}_{L^{6}(w_B)}^{2})^{2} \end{align*} which completes the proof of Lemma \ref{switch}. \end{proof} \begin{lemma}[Bilinear reduction]\label{biv1} Suppose $\delta$ and $\nu$ were such that $\nu\delta^{-1} \in \N$. Then $$D(\delta) \leq 10^{30000}(D(\frac{\delta}{\nu}) + \nu^{-1}M_{1, 1}(\delta, \nu)).$$ \end{lemma} \begin{proof} Let $\{I_i\}_{i = 1}^{\nu^{-1}} = P_{\nu}([0, 1])$. We have \begin{align}\label{ef2d_bieq1} \nms{\E_{[0, 1]} g}_{L^{6}(B)} &= \nms{\sum_{1 \leq i \leq \nu^{-1}}\E_{I_i}g}_{L^{6}(B)} \leq \nms{\sum_{1 \leq i, j \leq \nu^{-1}}|\E_{I_i}g||\E_{I_j}g|}_{L^{3}(B)}^{1/2}\nonumber\\ &\leq \sqrt{2}\bigg( \nms{\sum_{\st{1 \leq i, j \leq \nu^{-1}\\|i - j| \leq 3}}|\E_{I_i}g||\E_{I_j}g|}_{L^{3}(B)}^{1/2} + \nms{\sum_{\st{1 \leq i, j \leq \nu^{-1}\\|i - j| > 3}}|\E_{I_i}g||\E_{I_j}g|}_{L^{3}(B)}^{1/2}\bigg). \end{align} We first consider the diagonal terms.
The triangle inequality followed by Cauchy-Schwarz gives that \begin{align*} \nms{\sum_{\st{1 \leq i, j \leq \nu^{-1}\\|i - j| \leq 3}}|\E_{I_i}g||\E_{I_j}g|}_{L^{3}(B)} \leq \sum_{\st{1 \leq i, j \leq \nu^{-1}\\|i - j| \leq 3}}\nms{\E_{I_i}g}_{L^{6}(B)}\nms{\E_{I_j}g}_{L^{6}(B)}. \end{align*} Parabolic rescaling and Cauchy-Schwarz bound this by \begin{align*} 10^{40000}&D(\frac{\delta}{\nu})^{2}\sum_{\st{1 \leq i, j \leq \nu^{-1}\\|i - j| \leq 3}}(\sum_{J \in P_{\delta}(I_i)}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})^{1/2}(\sum_{J \in P_{\delta}(I_j)}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})^{1/2}\\ &\leq 10^{40010} D(\frac{\delta}{\nu})^{2}\sum_{J \in P_{\delta}([0, 1])}\nms{\E_{J}g}_{L^{6}(w_B)}^{2}. \end{align*} Therefore the first term in \eqref{ef2d_bieq1} is bounded above by \begin{align}\label{diagbd} 10^{30000}D(\frac{\delta}{\nu})(\sum_{J \in P_{\delta}([0, 1])}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})^{1/2}. \end{align} Next we consider the off-diagonal terms. We have \begin{align*} \nms{\sum_{\st{1 \leq i, j \leq \nu^{-1}\\|i - j| > 3}}|\E_{I_i}g||\E_{I_j}g|}_{L^{3}(B)}^{1/2} \leq \nu^{-1}\max_{\st{1 \leq i, j \leq \nu^{-1}\\|i - j| > 3}}\nms{|\E_{I_i}g||\E_{I_j}g|}_{L^{3}(B)}^{1/2}. \end{align*} H\"{o}lder's inequality gives that \begin{align}\label{holder_bi} \nms{|\E_{I_i}g||\E_{I_j}g|}_{L^{3}(B)}^{1/2} \leq \nms{|\E_{I_i}g|^{1/3}|\E_{I_j}g|^{2/3}}_{L^{6}(B)}^{1/2}\nms{|\E_{I_i}g|^{2/3}|\E_{I_j}g|^{1/3}}_{L^{6}(B)}^{1/2} \end{align} and therefore from \eqref{mabdef} (and using that $\nu\delta^{-1} \in \N$), the second term in \eqref{ef2d_bieq1} is bounded by $$\sqrt{2}\nu^{-1}M_{1, 1}(\delta, \nu)(\sum_{J \in P_{\delta}([0, 1])}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})^{1/2}.$$ Combining this with \eqref{diagbd} and applying the definition of $D(\delta)$ then completes the proof of Lemma \ref{biv1}. \end{proof} \subsection{A Fefferman-Cordoba argument} In the proof of Lemma \ref{abup} we need a version of $M_{a, b}$ in which the left hand side is also weighted by $w_B$. The following lemma shows that these two constants are equivalent. \begin{lemma}\label{wuw} Suppose $\delta$ and $\nu$ were such that $\nu^{a}\delta^{-1}$, $\nu^{b}\delta^{-1} \in \N$. Let $M_{a, b}'(\delta, \nu)$ be the best constant such that \begin{align*} \int |\E_{I}g|^{2}|\E_{I'}g|^{4}w_{B} \leq M_{a, b}'(\delta, \nu)^{6}(\sum_{J \in P_{\delta}(I)}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})(\sum_{J' \in P_{\delta}(I')}\nms{\E_{J'}g}_{L^{6}(w_B)}^{2})^{2} \end{align*} for all squares $B$ of side length $\delta^{-2}$, $g: [0, 1] \rightarrow \C$, and all $3\nu$-separated intervals $I \in P_{\nu^{a}}([0, 1])$ and $I' \in P_{\nu^{b}}([0, 1])$. Then \begin{align*} M_{a, b}'(\delta, \nu) \leq 12^{100/6}M_{a, b}(\delta, \nu). \end{align*} \end{lemma} \begin{rem} Since $1_B \lsm w_B$, $M_{a, b}(\delta, \nu) \lsm M_{a, b}'(\delta, \nu)$ and hence Lemma \ref{wuw} implies $M_{a, b} \sim M_{a, b}'$. \end{rem} \begin{proof} Fix arbitrary $3\nu$-separated intervals $I_1 \in P_{\nu^{a}}([0, 1])$ and $I_2 \in P_{\nu^{b}}([0, 1])$. It suffices to assume that $B$ is centered at the origin. Corollary 2.2.4 of \cite{thesis} gives \begin{align*} \nms{\geom_{2, 4}|\E_{I_i}g|}_{L^{6}(w_{B})}^{6} \leq 3^{100}\int_{\R^2}\nms{\geom_{2, 4}|\E_{I_i}g|}_{L^{6}_{\#}(B(y, \delta^{-2}))}^{6}w_{B}(y)\, dy.
\end{align*} Applying the definition of $M_{a, b}$ gives that the above is \begin{align*} &\leq 3^{100}\delta^{4}M_{a, b}(\delta, \nu)^{6}\int_{\R^2}\geom_{2, 4}(\sum_{J \in P_{\delta}(I_i)}\nms{\E_{J}g}_{L^{6}(w_{B(y, \delta^{-2})})}^{2})^{3}w_{B}(y)\, dy\\ &\leq 3^{100}\delta^{4}M_{a, b}(\delta, \nu)^{6}\geom_{2, 4}\int_{\R^2}(\sum_{J \in P_{\delta}(I_i)}\nms{\E_{J}g}_{L^{6}(w_{B(y, \delta^{-2})})}^{2})^{\frac{1}{2}\cdot 6}w_{B}(y)\, dy\\ &\leq 3^{100}\delta^{4}M_{a, b}(\delta, \nu)^{6}\geom_{2, 4}(\sum_{J \in P_{\delta}(I_i)}(\int_{\R^2}\nms{\E_{J}g}_{L^{6}(w_{B(y, \delta^{-2})})}^{6}w_{B}(y)\, dy)^{1/3})^{3} \end{align*} where the second inequality is by H\"{o}lder and the third inequality is by Minkowski. Since $B$ is centered at the origin, $w_B \ast w_B \leq 4^{100}\delta^{-4}w_B$ \cite[Lemma 2.2.1]{thesis} and hence \begin{align*} \delta^{4}\int_{\R^2}\nms{\E_{J}g}_{L^{6}(w_{B(y, \delta^{-2})})}^{6}w_{B}(y)\, dy \leq 4^{100} \nms{\E_{J}g}_{L^{6}(w_{B})}^{6}. \end{align*} This then immediately implies that $M_{a, b}'(\delta, \nu) \leq 12^{100/6} M_{a, b}(\delta, \nu)$ which completes the proof of Lemma \ref{wuw}. \end{proof} We have the following key technical lemma of this paper. We encourage the reader to compare the argument with that of \cite[Lemma 4.4]{pierce}. \begin{lemma}\label{abup} Let $a$ and $b$ be integers such that $1 \leq a \leq 2b$. Suppose $\delta$ and $\nu$ were such that $\nu^{2b}\delta^{-1} \in \N$. Then $$M_{a, b}(\delta, \nu) \leq 10^{1000} \nu^{-1/6}M_{2b, b}(\delta, \nu).$$ \end{lemma} \begin{proof} It suffices to assume that $B$ is centered at the origin with side length $\delta^{-2}$. The integrality conditions on $\delta$ and $\nu$ imply that $\delta \leq \nu^{2b}$ and $\nu^{a}\delta^{-1}, \nu^{b}\delta^{-1} \in \N$. Fix arbitrary intervals $I_1 = [\alpha, \alpha + \nu^a] \in P_{\nu^{a}}([0, 1])$ and $I_2 = [\beta, \beta + \nu^b] \in P_{\nu^{b}}([0, 1])$ which are $3\nu$-separated. Let $g_{\beta}(x) := g(x + \beta)$, $T_{\beta} = (\begin{smallmatrix} 1 & 2\beta\\0 & 1\end{smallmatrix})$, and $d := \alpha - \beta$. Shifting $I_2$ to $[0, \nu^b]$ gives that \begin{align}\label{cov} \int_{B}|(\E_{I_1}g)(x)|^{2}|(\E_{I_2}g)(x)|^{4}\, dx &= \int_{B}|(\E_{[d, d + \nu^a]}g_{\beta})(T_{\beta}x)|^{2}|(\E_{[0, \nu^b]}g_{\beta})(T_{\beta}x)|^{4}\, dx\nonumber\\ & = \int_{T_{\beta}(B)}|(\E_{[d, d + \nu^a]}g_{\beta})(x)|^{2}|(\E_{[0, \nu^b]}g_{\beta})(x)|^{4}\, dx. \end{align} Note that $d$ can be negative; however, since $g: [0, 1] \rightarrow \C$ and $d = \alpha - \beta$, $\E_{[d, d + \nu^a]}g_{\beta}$ is defined. Since $|\beta| \leq 1$, $T_{\beta}(B) \subset 100B$. Combining this with $1_{100B} \leq \eta_{100B}$ gives that \eqref{cov} is \begin{align}\label{target0} &\leq \int_{\R^2} |(\E_{[d, d + \nu^a]}g_{\beta})(x)|^{2}|(\E_{[0, \nu^b]}g_{\beta})(x)|^{4}\eta_{100B}(x)\, dx\nonumber\\ & = \sum_{J_1, J_2 \in P_{\nu^{2b}}([d, d + \nu^a])}\int_{\R^2}(\E_{J_1}g_{\beta})(x)\ov{(\E_{J_2}g_{\beta})(x)}|(\E_{[0, \nu^b]}g_{\beta})(x)|^{4}\eta_{100B}(x)\, dx. \end{align} We claim that if $d(J_1, J_2) > 10\nu^{2b - 1}$, the integral in \eqref{target0} is equal to 0. Suppose $J_1, J_2 \in P_{\nu^{2b}}([d, d + \nu^a])$ are such that $d(J_1, J_2) > 10\nu^{2b - 1}$.
Expanding the integral in \eqref{target0} for this pair of $J_1, J_2$ gives that it is equal to \begin{equation}\label{target1} \int_{\R^2}\bigg(\int_{J_1 \times [0, \nu^b]^2 \times J_2 \times [0, \nu^b]^2}\prod_{i = 1}^{3}g_{\beta}(\xi_i)\ov{g_{\beta}(\xi_{i + 3})}e(\cdots)\, \prod_{i = 1}^{6}d\xi_i\bigg)\eta_{100B}(x)\, dx \end{equation} where the expression inside the $e(\cdots)$ is $$((\xi_1 - \xi_4)x_1 + (\xi_{1}^{2} - \xi_{4}^{2})x_2) + ((\xi_2 + \xi_3 - \xi_5 - \xi_6)x_1 + (\xi_{2}^{2} + \xi_{3}^{2} -\xi_{5}^{2} - \xi_{6}^{2})x_2).$$ Interchanging the integrals in $\xi$ and $x$ shows that the integral in $x$ is equal to the Fourier inverse of $\eta_{100B}$ evaluated at \begin{align*} (\sum_{i = 1}^{3}(\xi_{i} - \xi_{i + 3}), \sum_{i = 1}^{3}(\xi_{i}^{2} - \xi_{i + 3}^{2})). \end{align*} Since the Fourier inverse of $\eta_{100B}$ is supported in $B(0, \delta^{2}/100)$, \eqref{target1} is equal to 0 unless \begin{align}\label{f2} |\sum_{i = 1}^{3}(\xi_{i} - \xi_{i + 3})| &\leq \delta^{2}/200\nonumber\\ |\sum_{i = 1}^{3}(\xi_{i}^{2} - \xi_{i + 3}^{2})| & \leq \delta^{2}/200. \end{align} Since $\delta \leq \nu^{2b}$ and $\xi_{i} \in [0, \nu^b]$ for $i = 2, 3, 5, 6$, \eqref{f2} implies \begin{align}\label{f3} |\xi_1 - \xi_4||\xi_1 + \xi_4| = |\xi_{1}^{2} - \xi_{4}^{2}| \leq 5\nu^{2b}. \end{align} Since $I_1, I_2$ are $3\nu$-separated, $|d| \geq 3\nu$. Recall that $\xi_1 \in J_1$, $\xi_4 \in J_2$ and $J_1, J_2$ are subsets of $[d, d + \nu^a]$. Write $\xi_1 = d + r$ and $\xi_4 = d + s$ with $r, s \in [0, \nu^a]$. Then \begin{align}\label{f4} |\xi_1 + \xi_4| = |2d + (r + s)| \geq 6\nu - |r + s| \geq 6\nu - 2\nu^{a} \geq 4\nu. \end{align} Since $d(J_1, J_2) > 10\nu^{2b - 1}$, $|\xi_1 - \xi_4| > 10\nu^{2b - 1}$. Therefore the left hand side of \eqref{f3} is $> 40\nu^{2b}$, a contradiction. Thus the integral in \eqref{target0} is equal to 0 when $d(J_1, J_2) > 10\nu^{2b - 1}$. The above analysis implies that \eqref{target0} is \begin{align*} \leq \sum_{\st{J_1, J_2 \in P_{\nu^{2b}}([d, d + \nu^a])\\d(J_1, J_2) \leq 10\nu^{2b - 1}}}\int_{\R^2}|(\E_{J_1}g_{\beta})(x)||(\E_{J_2}g_{\beta})(x)||(\E_{[0, \nu^b]}g_{\beta})(x)|^{4}\eta_{100B}(x)\, dx. \end{align*} Undoing the change of variables as in \eqref{cov} gives that the above is equal to \begin{align}\label{pent1} \sum_{\st{J_1, J_2 \in P_{\nu^{2b}}(I_1)\\d(J_1, J_2) \leq 10\nu^{2b - 1}}}\int_{\R^2}|(\E_{J_1}g)(x)||(\E_{J_2}g)(x)||(\E_{I_2}g)(x)|^{4}\eta_{100B}(T_{\beta}x)\, dx. \end{align} Observe that \begin{align*} \eta_{100B}(T_{\beta}x) \leq 10^{2400}w_{100B}(T_{\beta}x) \leq 10^{2600} w_{100B}(x) \leq 10^{2800} w_{B}(x) \end{align*} where the second inequality is an application of \cite[Lemma 2.2.16]{thesis} and the last inequality is because $w_{B}(x)^{-1}w_{100B}(x) \leq 10^{200}$. An application of Cauchy-Schwarz shows that \eqref{pent1} is \begin{align*} \leq 10^{2800}\sum_{\st{J_1, J_2 \in P_{\nu^{2b}}(I_1)\\d(J_1, J_2) \leq 10\nu^{2b - 1}}}(\int_{\R^2}|\E_{J_1}g|^{2}|\E_{I_2}g|^{4}w_B)^{1/2}(\int_{\R^2}|\E_{J_2}g|^{2}|\E_{I_2}g|^{4}w_{B})^{1/2}. \end{align*} Note that for each $J_1\in P_{\nu^{2b}}(I_1)$, there are $\leq 10000\nu^{-1}$ intervals $J_2 \in P_{\nu^{2b}}(I_1)$ such that $d(J_1, J_2) \leq 10\nu^{2b - 1}$. 
Thus two applications of Cauchy-Schwarz bound the above by \begin{align*} 10^{2802}\nu^{-1/2}&(\sum_{J_1 \in P_{\nu^{2b}}(I_1)}\int_{\R^2}|\E_{J_1}g|^{2}|\E_{I_2}g|^{4}w_B)^{1/2}\times\\ &\hspace{1in}(\sum_{J_1 \in P_{\nu^{2b}}(I_1)}\sum_{\st{J_2 \in P_{\nu^{2b}}(I_1)\\d(J_1, J_2) \leq 10\nu^{2b - 1}}}\int_{\R^2}|\E_{J_2}g|^{2}|\E_{I_2}g|^{4}w_B)^{1/2}. \end{align*} Since there are $\leq 10000\nu^{-1}$ relevant $J_2$ for each $J_1$, the above is \begin{align*} &\leq 10^{3000} \nu^{-1}\sum_{J \in P_{\nu^{2b}}(I_1)}\int_{\R^2}|\E_{J}g|^{2}|\E_{I_2}g|^{4}w_B\\ &\leq 10^{3000}12^{100}M_{2b, b}(\delta, \nu)^{6}(\sum_{J \in P_{\delta}(I_1)}\nms{\E_{J}g}_{L^{6}(w_B)}^{2})(\sum_{J' \in P_{\delta}(I_2)}\nms{\E_{J'}g}_{L^{6}(w_B)}^{2})^{2} \end{align*} where the last inequality is an application of Lemma \ref{wuw}. This completes the proof of Lemma \ref{abup}. \end{proof} Iterating Lemmas \ref{switch} and \ref{abup} repeatedly gives the following estimate. \begin{lemma}\label{m11iter} Let $N \in \N$ and suppose $\delta$ and $\nu$ were such that $\nu^{2^{N}}\delta^{-1} \in \N$. Then \begin{align*} M_{1, 1}(\delta, \nu) \leq 10^{60000}\nu^{-1/3}D(\frac{\delta}{\nu^{2^{N-1}}})^{\frac{1}{3\cdot 2^{N}}}D(\frac{\delta}{\nu^{2^{N}}})^{\frac{2}{3\cdot 2^{N}}}\prod_{j = 0}^{N-1}D(\frac{\delta}{\nu^{2^j}})^{1/2^{j + 1}}. \end{align*} \end{lemma} \begin{proof} Lemmas \ref{switch} and \ref{abup} imply that if $1 \leq a \leq 2b$ and $\delta$ and $\nu$ were such that $\nu^{2b}\delta^{-1} \in \N$, then \begin{align}\label{brev} M_{a, b}(\delta, \nu) \leq 10^{20000}\nu^{-1/6}M_{b, 2b}(\delta, \nu)^{1/2}D(\frac{\delta}{\nu^b})^{1/2}. \end{align} Since $\nu^{2^{N}}\delta^{-1} \in \N$, $\nu^{i}\delta^{-1} \in \N$ for $i = 0, 1, 2, \ldots, 2^{N}$. Applying \eqref{brev} repeatedly gives \begin{align*} M_{1, 1}(\delta, \nu) \leq 10^{40000}\nu^{-1/3}M_{2^{N-1}, 2^{N}}(\delta, \nu)^{\frac{1}{2^N}}\prod_{j = 0}^{N-1}D(\frac{\delta}{\nu^{2^j}})^{1/2^{j + 1}}. \end{align*} Bounding $M_{2^{N-1}, 2^{N}}$ using Lemma \ref{mabtriv} then completes the proof of Lemma \ref{m11iter}. \end{proof} \begin{rem} A similar analysis as in \eqref{f2}-\eqref{f4} shows that if $1 \leq a < b$ and $\delta$ and $\nu$ were such that $\nu^{b}\delta^{-1} \in \N$, then $M_{a, b}(\delta, \nu) \lsm M_{b, b}(\delta, \nu)$. Though we do not iterate this way in this section, it is enough to close the iteration with $M_{a, b} \lsm M_{b, b}$ for $1 \leq a < b$, and $M_{b, b} \lsm \nu^{-1/6}M_{2b, b}$, and Lemma \ref{switch}. We interpret the iteration and in particular Lemma \ref{abup} this way in Sections \ref{unc}-\ref{bds}. \end{rem} \subsection{The $O_{\vep}(\delta^{-\vep})$ bound}\label{fc_iter} Combining Lemma \ref{m11iter} with Lemma \ref{biv1} gives the following. \begin{cor}\label{decrec} Let $N \in \N$ and suppose $\delta$ and $\nu$ were such that $\nu^{2^{N}}\delta^{-1} \in \N$. Then \begin{align*} D(\delta) \leq 10^{10^{5}}\bigg(D(\frac{\delta}{\nu}) + \nu^{-4/3}D(\frac{\delta}{\nu^{2^{N-1}}})^{\frac{1}{3\cdot 2^{N}}}D(\frac{\delta}{\nu^{2^{N}}})^{\frac{2}{3\cdot 2^{N}}}\prod_{j = 0}^{N-1}D(\frac{\delta}{\nu^{2^j}})^{1/2^{j + 1}}\bigg). \end{align*} \end{cor} Choosing $\nu = \delta^{1/2^{N}}$ in Corollary \ref{decrec} and requiring that $\nu=\delta^{1/2^{N}} \in \N^{-1} \cap (0, 1/100)$ gives the following result. \begin{cor}\label{core} Let $N \in \N$ and suppose $\delta$ was such that $\delta^{-1/2^{N}} \in \N$ and $\delta < 100^{-2^{N}}$.
Then \begin{align*} D(\delta) \leq 10^{10^{5}}\bigg(D(\delta^{1 - \frac{1}{2^{N}}}) + \delta^{-\frac{4}{3\cdot 2^{N}}}D(\delta^{1/2})^{\frac{1}{3\cdot 2^{N}}}\prod_{j = 0}^{N-1}D(\delta^{1 - \frac{1}{2^{N - j}}})^{\frac{1}{2^{j + 1}}}\bigg). \end{align*} \end{cor} Corollary \ref{core} allows us to conclude that $D(\delta) \lsm_{\vep} \delta^{-\vep}$. To see this, the trivial bounds for $D(\delta)$ are $1 \lsm D(\delta) \lsm \delta^{-1/2}$ for all $\delta \in \N^{-1}$. Let $\ld$ be the smallest real number such that $D(\delta) \lsm_{\vep} \delta^{-\ld - \vep}$ for all $\delta \in \N^{-1}$. From the trivial bounds, $\ld \in [0, 1/2]$. We claim that $\ld = 0$. Suppose $\ld > 0$. Choose $N$ to be an integer such that \begin{align}\label{ef2d_mchoice} \frac{5}{6} + \frac{N}{2} - \frac{4}{3\ld}\geq 1. \end{align} Then by Corollary \ref{core}, for $\delta^{-1/2^{N}} \in \N$ with $\delta < 100^{-2^{N}}$, \begin{align*} D(\delta) &\lsm_{\vep} \delta^{-\ld(1 - \frac{1}{2^{N}}) - \vep} + \delta^{-\frac{4}{3\cdot 2^{N}} - \frac{\ld}{6\cdot 2^{N}} - \sum_{j = 0}^{N-1}(1 - \frac{1}{2^{N - j}})\frac{\ld}{2^{j + 1}}- \vep}\\ &\lsm_{\vep} \delta^{-\ld(1 - \frac{1}{2^{N}}) - \vep} + \delta^{-\ld(1 - (\frac{5}{6} + \frac{N}{2} - \frac{4}{3\ld})\frac{1}{2^{N}}) - \vep} \lsm_{\vep} \delta^{-\ld(1 - \frac{1}{2^{N}}) - \vep} \end{align*} where in the last inequality we have used \eqref{ef2d_mchoice}. Applying almost multiplicativity of the linear decoupling constant (similar to \cite[Section 2.10]{thesis} or the proof of Lemma \ref{expstep2} later) then shows that for all $\delta \in \N^{-1}$, \begin{align*} D(\delta) \lsm_{N, \vep} \delta^{-\ld(1 - \frac{1}{2^{N}}) - \vep}. \end{align*} This then contradicts minimality of $\ld$. Therefore $\ld = 0$ and thus we have shown that $D(\delta) \lsm_{\vep}\delta^{-\vep}$ for all $\delta \in \N^{-1}$. \subsection{An explicit bound} Having shown that $D(\delta) \lsm_{\vep} \delta^{-\vep}$, we now make this dependence on $\vep$ explicit. Fix arbitrary $0 < \vep < 1/100$. Then $D(\delta) \leq C_{\vep}\delta^{-\vep}$ for all $\delta \in \N^{-1}$. \begin{lemma}\label{expstep1} Fix arbitrary $0 < \vep < 1/100$ and suppose $D(\delta) \leq C_{\vep}\delta^{-\vep}$ for all $\delta\in \N^{-1}$. Let integer $N \geq 1$ be such that $$\frac{5}{6} + \frac{N}{2} - \frac{4}{3\vep} > 0.$$ Then for $\delta$ such that $\delta^{-1/2^{N}} \in \N$ and $\delta < 100^{-2^{N}}$, we have $$D(\delta) \leq 2\cdot 10^{10^{5}}C_{\vep}^{1 - \frac{\vep}{2^{N}}}\delta^{-\vep}.$$ \end{lemma} \begin{proof} Inserting $D(\delta) \leq C_{\vep}\delta^{-\vep}$ into Corollary \ref{core} gives that for all integers $N \geq 1$ and $\delta$ such that $\delta^{-1/2^{N}} \in \N$, $\delta < 100^{-2^{N}}$, we have \begin{align*} D(\delta) \leq 10^{10^{5}}(C_{\vep}\delta^{\frac{\vep}{2^{N}}} + C_{\vep}^{1 - \frac{2}{3 \cdot 2^N}}\delta^{\frac{\vep}{2^{N}}(\frac{5}{6} + \frac{N}{2} - \frac{4}{3\vep})})\delta^{-\vep}. \end{align*} Thus by our choice of $N$, \begin{align}\label{exp1} D(\delta) \leq 10^{10^{5}}(C_{\vep}\delta^{\frac{\vep}{2^{N}}} + C_{\vep}^{1 - \frac{2}{3 \cdot 2^N}})\delta^{-\vep}. \end{align} There are two possibilities. If $\delta < C_{\vep}^{-1}$, then since $0 < \vep < 1/100$, \eqref{exp1} becomes \begin{align}\label{exp2} D(\delta) \leq 10^{10^{5}}(C_{\vep}^{1 - \frac{\vep}{2^{N}}} + C_{\vep}^{1 - \frac{2}{3\cdot 2^{N}}})\delta^{-\vep} \leq 2\cdot 10^{10^{5}}C_{\vep}^{1 - \frac{\vep}{2^{N}}}\delta^{-\vep}. 
\end{align} On the other hand if $\delta \geq C_{\vep}^{-1}$, the trivial bound gives \begin{align*} D(\delta) \leq 2^{100/6}\delta^{-1/2} \leq 2^{100/6}C_{\vep}^{1/2} \end{align*} which is bounded above by the right hand side of \eqref{exp2}. This completes the proof of Lemma \ref{expstep1}. \end{proof} Note that Lemma \ref{expstep1} is only true for $\delta$ satisfying $\delta^{-1/2^{N}} \in \N$ and $\delta < 100^{-2^{N}}$. We now use almost multiplicativity to upgrade the result of Lemma \ref{expstep1} to all $\delta \in \N^{-1}$. \begin{lemma}\label{expstep2} Fix arbitrary $0 < \vep < 1/100$ and suppose $D(\delta) \leq C_{\vep}\delta^{-\vep}$ for all $\delta \in \N^{-1}$. Then \begin{align*} D(\delta) \leq 10^{10^6}2^{4\cdot 8^{1/\vep}}C_{\vep}^{1 - \frac{\vep}{8^{1/\vep}}}\delta^{-\vep} \end{align*} for all $\delta \in \N^{-1}$. \end{lemma} \begin{proof} Choose \begin{align}\label{mchoice3} N := \lceil \frac{8}{3\vep} - \frac{5}{3}\rceil \end{align} and $\delta \in \{2^{-2^{N}n}\}_{n = 7}^{\infty} = \{\delta_{n}\}_{n = 7}^{\infty}$. Then for these $\delta$, $\delta^{-1/2^{N}} \in \N$ and $\delta < 100^{-2^{N}}$. If $\delta \in (\delta_{7}, 1] \cap \N^{-1}$, then \begin{align*} D(\delta) \leq 2^{100/6}\delta^{-1/2} \leq 2^{100/6}2^{2^{N-1} \cdot 7}. \end{align*} If $\delta \in (\delta_{n + 1}, \delta_{n}]$ for some $n \geq 7$, then almost multiplicativity and Lemma \ref{expstep1} give that \begin{align*} D(\delta) &\leq 10^{20000}D(\delta_n)D(\frac{\delta}{\delta_n})\\ & \leq 10^{20000}(2\cdot 10^{10^{5}}C_{\vep}^{1 - \frac{\vep}{2^{N}}}\delta_{n}^{-\vep})(2^{100/6}(\frac{\delta_n}{\delta})^{1/2})\\ &\leq 10^{10^{6}}2^{2^{N-1}}C_{\vep}^{1 - \frac{\vep}{2^{N}}}\delta^{-\vep} \end{align*} where $N$ is as in \eqref{mchoice3} and in the second inequality we have used the trivial bound for $D(\delta/\delta_n)$. Combining both cases above then shows that if $N$ is chosen as in \eqref{mchoice3}, then \begin{align*} D(\delta) \leq 10^{10^{6}}2^{7 \cdot 2^{N-1}}C_{\vep}^{1 - \frac{\vep}{2^{N}}}\delta^{-\vep} \end{align*} for all $\delta \in \N^{-1}$. Since we are no longer constrained by having $N\in \N$, we can increase $N$ to be $3/\vep$ and so we have that \begin{align*} D(\delta) \leq 10^{10^6}2^{4\cdot 8^{1/\vep}}C_{\vep}^{1 - \frac{\vep}{8^{1/\vep}}}\delta^{-\vep} \end{align*} for all $\delta \in \N^{-1}$. This completes the proof of Lemma \ref{expstep2}. \end{proof} \begin{lemma}\label{expstep3} For all $0 < \vep < 1/100$ and all $\delta \in \N^{-1}$, we have \begin{align*} D(\delta) \leq 2^{200^{1/\vep}}\delta^{-\vep}. \end{align*} \end{lemma} \begin{proof} Let $P(C, \ld)$ be the statement that $D(\delta) \leq C\delta^{-\ld}$ for all $\delta \in \N^{-1}$. Lemma \ref{expstep2} implies that for $\vep \in (0, 1/100)$, \begin{align*} P(C_{\vep}, \vep) \implies P(10^{10^6}2^{4\cdot 8^{1/\vep}}C_{\vep}^{1 - \frac{\vep}{8^{1/\vep}}}, \vep). \end{align*} Iterating this $M$ times gives that \begin{align*} P(C_{\vep}, \vep) \implies P([10^{10^6}2^{4\cdot 8^{1/\vep}}]^{\sum_{j = 0}^{M-1}(1 - \frac{\vep}{8^{1/\vep}})^{j}}C_{\vep}^{(1 - \frac{\vep}{8^{1/\vep}})^M}, \vep). \end{align*} Letting $M \rightarrow \infty$ thus gives that for all $0 < \vep < 1/100$, \begin{align*} D(\delta) \leq (10^{10^6}2^{4\cdot 8^{1/\vep}})^{8^{1/\vep}/\vep}\delta^{-\vep} \leq 2^{100^{1/\vep}/\vep}\delta^{-\vep} \leq 2^{200^{1/\vep}}\delta^{-\vep} \end{align*} for all $\delta \in \N^{-1}$. This completes the proof of Lemma \ref{expstep3}. \end{proof} Optimizing in $\vep$ then gives the proof of our main result.
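\begin{rem} Before carrying out the formal optimization, we record a small numerical sanity check (purely illustrative and playing no role in the proof). Since the theorem's threshold $e^{-200^{200}}$ underflows double precision, the values of $\log\frac{1}{\delta}$ below are far smaller than what the theorem requires, so the resulting $\vep$ need not satisfy $\vep < 1/100$; nevertheless the comparison between the bound of Lemma \ref{expstep3} and the bound of Theorem \ref{ef2d_main} already goes the right way.
\begin{verbatim}
import math

# Illustrative check: with A = log2(200)*log(1/delta) and
# eps = log(200)/(log A - log log A) as in the proof below, compare
# log(2^(200^(1/eps)) * delta^(-eps))             [Lemma expstep3 bound]
# with log(exp(30*log(1/delta)/loglog(1/delta)))  [Theorem bound].
for L in (1e90, 1e120, 1e200):       # L plays the role of log(1/delta)
    A = math.log2(200.0) * L
    eps = math.log(200.0) / (math.log(A) - math.log(math.log(A)))
    lhs = 200.0 ** (1.0 / eps) * math.log(2.0) + eps * L
    rhs = 30.0 * L / math.log(L)
    print("log(1/delta) = %.0e:  lhs = %.3e <= rhs = %.3e"
          % (L, lhs, rhs))
\end{verbatim}
\end{rem}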
\begin{proof}[Proof of Theorem \ref{ef2d_main}] Choose $A = (\log_{2}200)(\log\frac{1}{\delta})$, $\eta = \log A - \log\log A$, and $\vep = \frac{1}{\eta}\log 200$. Note that if $\eta = \log A - \log\log A$, then $\eta\exp(\eta) = A(1 - \frac{\log\log A}{\log A}) \leq A$. Then from our choice of $\eta, A, \vep$, $$200^{1/\vep}\log 2 \leq \vep\log\frac{1}{\delta}$$ and hence \begin{align}\label{optimize_eq1} 2^{200^{1/\vep}}\delta^{-\vep} \leq \exp(2\vep\log\frac{1}{\delta}). \end{align} Since $\eta = \log A - \log\log A$, we need to ensure that our choice of $\vep$ is such that $0 < \vep < 1/100$. Thus we need \begin{align*} \vep = \frac{\log 200}{\log((\log_{2}200)(\log\frac{1}{\delta})) - \log\log((\log_{2}200)(\log\frac{1}{\delta}))} < \frac{1}{100}. \end{align*} Note that for all $x > 1$, $\log\log x < (\log x)^{1/2}$ and hence for all $0 < \delta < e^{-\frac{e^4}{\log_{2}200}}$, \begin{align} \log((\log_{2}200)(\log\frac{1}{\delta})) &- \log\log((\log_{2}200)(\log\frac{1}{\delta}))\nonumber\\ & \geq \log((\log_{2}200)(\log\frac{1}{\delta})) - [\log((\log_{2}200)(\log\frac{1}{\delta}))]^{1/2}\nonumber\\ & \geq \frac{1}{2}\log((\log_{2}200)(\log\frac{1}{\delta})) \geq \frac{1}{2}\log\log\frac{1}{\delta}.\label{optimize_eq2} \end{align} Thus we need $0 < \delta < e^{-\frac{e^4}{\log_{2}200}}$ to also be such that \begin{align*} \frac{2\log 200}{\log\log\frac{1}{\delta}} < \frac{1}{100} \end{align*} and hence $\delta < e^{-200^{200}}$. Therefore using \eqref{optimize_eq1} and \eqref{optimize_eq2}, we have that for $\delta \in (0, e^{-200^{200}}) \cap \N^{-1}$, \begin{align*} D(\delta) \leq \exp(30\frac{\log\frac{1}{\delta}}{\log\log\frac{1}{\delta}}). \end{align*} This completes the proof of Theorem \ref{ef2d_main}. \end{proof} \section{An uncertainty principle interpretation of Lemma \ref{abup}}\label{unc} We now give a different interpretation of Lemma \ref{abup}, making use of the uncertainty principle. We will pretend all weight functions $w_B$ are indicator functions $1_B$ in this section and will make the argument rigorous in the next section. The main point of Lemma \ref{abup} was to show that if $1 \leq a \leq 2b$ and $\delta$ and $\nu$ are such that $\nu^{2b}\delta^{-1} \in \N$, then \begin{align}\label{unctar} \int_{B}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4} \lsm \nu^{-1}\sum_{J \in P_{\nu^{2b}}(I_1)}\int_{B}|\E_{J}g|^{2}|\E_{I_2}g|^{4} \end{align} for arbitrary $I_1 \in P_{\nu^{a}}([0, 1])$ and $I_2 \in P_{\nu^{b}}([0, 1])$ such that $d(I_1, I_2) \gtrsim \nu$. From Lemma \ref{m11iter}, we only need \eqref{unctar} to be true for $1 \leq a \leq b$. Our goal in this section is to prove (heuristically under the uncertainty principle) the following two statements: \begin{enumerate}[(I)] \item For $1 \leq a < b$, $M_{a, b}(\delta, \nu) \lsm M_{b, b}(\delta, \nu)$; in other words \begin{align}\label{ineq1_unc} \int_{B}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4} \lsm \sum_{J \in P_{\nu^b}(I_1)}\int_{B}|\E_{J}g|^{2}|\E_{I_2}g|^{4} \end{align} for arbitrary $I_1 \in P_{\nu^a}([0, 1])$ and $I_2 \in P_{\nu^b}([0, 1])$ such that $d(I_1, I_2) \gtrsim \nu$. \item $M_{b, b}(\delta, \nu) \lsm \nu^{-1/6}M_{2b, b}(\delta, \nu)$; in other words \begin{align}\label{ineq2_unc} \int_{B}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4} \lsm \nu^{-1}\sum_{J \in P_{\nu^{2b}}(I_1)}\int_{B}|\E_{J}g|^{2}|\E_{I_2}g|^{4} \end{align} for arbitrary $I_1, I_2 \in P_{\nu^b}([0, 1])$ such that $d(I_1, I_2) \gtrsim \nu$. \end{enumerate} Replacing 4 with $p - 2$ then allows us to generalize to $2 \leq p < 6$.
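\begin{rem} Statements (I) and (II), combined with Lemma \ref{switch}, are enough to rerun the iteration behind Lemma \ref{m11iter}: one round applies (II), then Lemma \ref{switch}, then (I), sending $M_{b, b}$ to $M_{2b, 2b}^{1/2}$ at the cost of $\nu^{-1/6}$ and $D(\delta/\nu^{b})^{1/2}$. The following minimal Python sketch records only this exponent bookkeeping (no analysis is performed) and reproduces the $\nu^{-1/3}$ and the product of $D$ factors appearing in Lemma \ref{m11iter}.
\begin{verbatim}
from fractions import Fraction

# Exponent bookkeeping only: one round is (II), then the switch,
# then (I), sending M_{b,b}^w to M_{2b,2b}^{w/2} at the cost of
# nu^{-w/6} and D(delta/nu^b)^{w/2}.
N = 5
b, weight = 1, Fraction(1)  # current index and power of M_{b,b}
nu_exp = Fraction(0)        # accumulated power of nu^{-1}
D_exps = {}                 # scale s -> power of D(delta/nu^s)
for _ in range(N):
    nu_exp += weight / 6                                 # cost of (II)
    D_exps[b] = D_exps.get(b, Fraction(0)) + weight / 2  # cost of the switch
    b, weight = 2 * b, weight / 2                        # (I): now at M_{2b,2b}
print("end at M_{%d,%d}^(%s)" % (b, b, weight))  # M_{2^N,2^N}^(1/2^N)
print("nu^{-1} exponent:", nu_exp)               # (1/3)(1 - 2^{-N})
print("D exponents:", {s: str(e) for s, e in D_exps.items()})
\end{verbatim}
\end{rem}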
The particular instance of the uncertainty principle we will use is the following. Let $I$ be an interval of length $1/R$ with center $c$. Fix an arbitrary $R \times R^2$ rectangle $T$ oriented in the direction $(-2c, 1)$. Heuristically for $x \in T$, $(\E_{I}g)(x)$ behaves like $a_{T, I}e^{2\pi i \om_{T, I}\cdot x}$. Here the amplitude $a_{T, I}$ depends on $g$, $T$, and $I$ and the phase $\om_{T, I}$ depends on $T$ and $I$. In particular, $|(\E_{I}g)(x)|$ is essentially constant on every $R \times R^2$ rectangle oriented in the direction $(-2c, 1)$. This also implies that if $\Delta$ is a square of side length $R$, then $|(\E_{I}g)(x)|$ is essentially constant on $\Delta$ (with constant depending on $\Delta, I, g$) and $\nms{\E_{I}g}_{L^{p}_{\#}(\Delta)}$ is essentially equal to that same constant, independently of $p$. We introduce two standard tools from \cite{sg, bdg}. \begin{lemma}[Bernstein's inequality]\label{bern_uw} Let $I$ be an interval of length $1/R$ and $\Delta$ a square of side length $R$. If $1 \leq p \leq q < \infty$, then \begin{align*} \nms{\E_{I}g}_{L^{q}_{\#}(\Delta)} \lsm \nms{\E_{I}g}_{L^{p}_{\#}(\Delta)}. \end{align*} We also have $$\nms{\E_{I}g}_{L^{\infty}(\Delta)} \lsm \nms{\E_{I}g}_{L^{p}_{\#}(\Delta)}.$$ \end{lemma} \begin{proof} See \cite[Corollary 4.3]{sg} or \cite[Lemma 2.2.20]{thesis} for a rigorous proof. \end{proof} The reverse inequality in the above lemma is just an application of H\"{o}lder. \begin{lemma}[$l^2 L^2$ decoupling]\label{l2l2_uw} Let $I$ be an interval of length $\geq 1/R$ such that $R|I| \in \N$ and $\Delta$ a square of side length $R$. Then \begin{align*} \nms{\E_{I}g}_{L^{2}(\Delta)} \lsm (\sum_{J \in P_{1/R}(I)}\nms{\E_{J}g}_{L^{2}(\Delta)}^{2})^{1/2}. \end{align*} \end{lemma} \begin{proof} See \cite[Proposition 6.1]{sg} or \cite[Lemma 2.2.21]{thesis} for a rigorous proof. \end{proof} The first inequality \eqref{ineq1_unc} is an immediate application of the uncertainty principle and $l^2 L^2$ decoupling. \begin{lemma}\label{mglem2} Suppose $1 \leq a < b$ and $\delta$ and $\nu$ were such that $\nu^{b}\delta^{-1} \in \N$. Then \begin{align*} \int_{B}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4} \lsm \sum_{J \in P_{\nu^{b}}(I_1)}\int_{B}|\E_{J}g|^{2}|\E_{I_2}g|^{4} \end{align*} for arbitrary $I_1 \in P_{\nu^{a}}([0, 1])$ and $I_2 \in P_{\nu^{b}}([0, 1])$ such that $d(I_1, I_2) \gtrsim \nu$. In other words, $M_{a, b}(\delta, \nu) \lsm M_{b, b}(\delta, \nu).$ \end{lemma} \begin{proof} It suffices to show that for each $\Delta' \in P_{\nu^{-b}}(B)$, we have \begin{align*} \int_{\Delta'}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4} \lsm \sum_{J \in P_{\nu^{b}}(I_1)}\int_{\Delta'}|\E_{J}g|^{2}|\E_{I_2}g|^{4}. \end{align*} Since $I_2$ is an interval of length $\nu^b$, $|\E_{I_2}g|$ is essentially constant on $\Delta'$. Therefore the above reduces to showing \begin{align*} \int_{\Delta'}|\E_{I_1}g|^{2} \lsm \sum_{J \in P_{\nu^{b}}(I_1)}\int_{\Delta'}|\E_{J}g|^{2} \end{align*} which since $a < b$ and $I_1$ is of length $\nu^a$ is just an application of $l^2 L^2$ decoupling. This completes the proof of Lemma \ref{mglem2}. \end{proof} Inequality \eqref{ineq2_unc} is a consequence of the following ball inflation lemma which is reminiscent of the ball inflation in the Bourgain-Demeter-Guth proof of Vinogradov's mean value theorem. The main point of this lemma is to increase the spatial scale so we can apply $l^2 L^2$ decoupling while keeping the frequency scales constant. \begin{lemma}[Ball inflation]\label{ef2d_ball} Let $b \geq 1$ be a positive integer.
Suppose $I_1$ and $I_2$ are intervals of length $\nu^b$ with $d(I_1, I_2) \gtrsim \nu$. Then for any square $\Delta'$ of side length $\nu^{-2b}$, we have \begin{align*} \avg{\Delta \in P_{\nu^{-b}}(\Delta')}\nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta)}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta)}^{4} \lsm \nu^{-1}\nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta')}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta')}^{4}. \end{align*} \end{lemma} \begin{proof} The uncertainty principle implies that $|\E_{I_1}g|$ and $|\E_{I_2}g|$ are essentially constant on $\Delta$. Therefore we essentially have \begin{align*} \avg{\Delta \in P_{\nu^{-b}}(\Delta')}\nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta)}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta)}^{4} &\sim \frac{1}{|P_{\nu^{-b}}(\Delta')|}\sum_{\Delta \in P_{\nu^{-b}}(\Delta')}\frac{1}{|\Delta|}\int_{\Delta}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4}\\ &= \frac{1}{|\Delta'|}\int_{\Delta'}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4}. \end{align*} Cover $\Delta'$ by disjoint rectangles $\{T_1\}$ of size $\nu^{-b} \times \nu^{-2b}$ pointing in the direction $(-2c_{I_1}, 1)$ where $c_{I_1}$ is the center of $I_1$. Similarly form the collection of $\nu^{-b} \times \nu^{-2b}$ rectangles $\{T_2\}$ corresponding to $I_2$. From the uncertainty principle, $|\E_{I_1}g| \sim \sum_{T_1}|a_{T_1}|1_{T_1}$ and $|\E_{I_2}g| \sim \sum_{T_2}|a_{T_2}|1_{T_2}$ for some constants $|a_{T_i}|$ depending on $T_i, g$, and $\Delta'$. Since $I_1$ and $I_2$ are $O(\nu)$-separated, for any two tubes $T_1, T_2$ corresponding to $I_1, I_2$, we have $|T_1 \cap T_2| \lsm \nu^{-1 - 2b}$. Therefore \begin{align*} \frac{1}{|\Delta'|}\int_{\Delta'}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4} \lsm \nu^{-1}\frac{\nu^{-2b}}{|\Delta'|}\sum_{T_1, T_2} |a_{T_1}|^{2}|a_{T_2}|^{4}. \end{align*} Since \begin{align*} \nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta')}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta')}^{4} \sim \frac{\nu^{-6b}}{|\Delta'|^2}\sum_{T_1, T_2}|a_{T_1}|^{2}|a_{T_2}|^{4} \end{align*} and $|\Delta'| = \nu^{-4b}$, this completes the proof of Lemma \ref{ef2d_ball}. \end{proof} We now prove inequality \eqref{ineq2_unc}. \begin{lemma}\label{mglem1} Suppose $b \geq 1$ and $\delta$ and $\nu$ were such that $\nu^{2b}\delta^{-1} \in \N$. Then \begin{align*} \int_{B}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4} \lsm \nu^{-1}\sum_{J \in P_{\nu^{2b}}(I_1)}\int_{B}|\E_{J}g|^{2}|\E_{I_2}g|^{4} \end{align*} for arbitrary $I_1 \in P_{\nu^{b}}([0, 1])$ and $I_2 \in P_{\nu^{b}}([0, 1])$ such that $d(I_1, I_2) \gtrsim \nu$. In other words, $M_{b, b}(\delta, \nu) \lsm \nu^{-1/6}M_{2b, b}(\delta, \nu).$ \end{lemma} \begin{proof} This is an application of ball inflation, $l^2 L^2$ decoupling, Bernstein, and the uncertainty principle. Since $\nu^{2b}\delta^{-1} \in \N$, $\nu^{b}\delta^{-1} \in \N$ and $\delta \leq \nu^{2b}$. Fix arbitrary $I_1, I_2 \in P_{\nu^{b}}([0, 1])$ with $d(I_1, I_2) \gtrsim \nu$. We have \begin{align}\label{mgeq2} \frac{1}{|B|}\int_{B}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4} &= \frac{1}{|B|}\sum_{\Delta \in P_{\nu^{-b}}(B)}\int_{\Delta}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4}\nonumber\\ &\leq \frac{1}{|B|}\sum_{\Delta \in P_{\nu^{-b}}(B)}(\int_{\Delta}|\E_{I_1}g|^{2})\nms{\E_{I_2}g}_{L^{\infty}(\Delta)}^{4}\nonumber\\ &\lsm \frac{1}{|P_{\nu^{-b}}(B)|}\sum_{\Delta \in P_{\nu^{-b}}(B)}(\frac{1}{|\Delta|}\int_{\Delta}|\E_{I_1}g|^{2})\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta)}^{4}\nonumber\\ &= \avg{\Delta \in P_{\nu^{-b}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta)}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta)}^{4} \end{align} where the second inequality is because of Bernstein.
From ball inflation we know that for each $\Delta' \in P_{\nu^{-2b}}(B)$, \begin{align*} \avg{\Delta \in P_{\nu^{-b}}(\Delta')}\nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta)}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta)}^{4} \lsm \nu^{-1}\nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta')}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta')}^{4}. \end{align*} Averaging the above over all $\Delta' \in P_{\nu^{-2b}}(B)$ shows that \eqref{mgeq2} is \begin{align*} \lsm \nu^{-1}\avg{\Delta' \in P_{\nu^{-2b}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta')}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta')}^{4}. \end{align*} Since $I_1$ is of length $\nu^{b}$, $l^2 L^2$ decoupling gives that the above is \begin{align*} &\lsm \nu^{-1}\sum_{J \in P_{\nu^{2b}}(I_1)}\avg{\Delta' \in P_{\nu^{-2b}}(B)}\nms{\E_{J}g}_{L^{2}_{\#}(\Delta')}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta')}^{4}\\ &= \nu^{-1}\frac{1}{|B|}\sum_{J \in P_{\nu^{2b}}(I_1)}\sum_{\Delta' \in P_{\nu^{-2b}}(B)}\nms{\E_{I_2}g}_{L^{4}(\Delta')}^{4}\nms{\E_{J}g}_{L^{2}_{\#}(\Delta')}^{2}\\ &= \nu^{-1}\frac{1}{|B|}\sum_{J \in P_{\nu^{2b}}(I_1)}\sum_{\Delta' \in P_{\nu^{-2b}}(B)}(\int_{\Delta'}|\E_{I_2}g|^{4})\nms{\E_{J}g}_{L^{2}_{\#}(\Delta')}^{2}. \end{align*} Since $|\E_{J}g|$ is essentially constant on $\Delta'$, the uncertainty principle gives that essentially we have $$(\int_{\Delta'}|\E_{I_2}g|^{4})\nms{\E_{J}g}_{L^{2}_{\#}(\Delta')}^{2} \sim \int_{\Delta'}|\E_{J}g|^{2}|\E_{I_2}g|^{4}.$$ Combining the above two centered equations then completes the proof of Lemma \ref{mglem1}. \end{proof} \begin{rem} The proof of Lemma \ref{mglem1} is reminiscent of our proof of Lemma \ref{abup}. The $\nms{\E_{I_2}g}_{L^{\infty}(\Delta)}$ can be thought of as using the trivial bound for $\xi_i$, $i = 2, 3, 5, 6$ to obtain \eqref{f3}. Then we exploit the separation, much like in ball inflation here, to get a large amount of cancellation. \end{rem} \section{An alternate proof of $D(\delta) \lsm_{\vep} \delta^{-\vep}$}\label{bik} The ball inflation lemma and our proof of Lemma \ref{mglem1} inspire us to define a new bilinear decoupling constant that can make our uncertainty principle heuristics from the previous section rigorous. The left hand side of the definition of $D(\delta)$ in \eqref{decdef} is unweighted; however, recall that \cite[Proposition 2.2.11]{thesis} implies that \begin{align}\label{wlhs} \nms{\E_{[0, 1]}g}_{L^{6}(w_{B})} \lsm D(\delta)(\sum_{J \in P_{\delta}([0, 1])}\nms{\E_{J}g}_{L^{6}(w_{B})}^{2})^{1/2} \end{align} for all $g: [0, 1] \rightarrow \C$ and squares $B$ of side length $\delta^{-2}$. We will assume that $\delta^{-1} \in \N$ and $\nu \in \N^{-1} \cap (0, 1/100)$. Let $\mc{M}_{a, b}(\delta, \nu)$ be the best constant such that \begin{align}\label{bik_const} \begin{aligned} \avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}&\nms{\E_{I}g}_{L^{2}_{\#}(w_{\Delta})}^{2}\nms{\E_{I'}g}_{L^{4}_{\#}(w_{\Delta})}^{4}\\ &\hspace{-0.1in} \leq \mc{M}_{a, b}(\delta, \nu)^{6}(\sum_{J \in P_{\delta}(I)}\nms{\E_{J}g}_{L^{6}_{\#}(w_B)}^{2})(\sum_{J' \in P_{\delta}(I')}\nms{\E_{J'}g}_{L^{6}_{\#}(w_B)}^{2})^{2} \end{aligned} \end{align} for all squares $B$ of side length $\delta^{-2}$, $g: [0, 1] \rightarrow \C$ and all intervals $I \in P_{\nu^{a}}([0, 1])$, $I' \in P_{\nu^{b}}([0, 1])$ with $d(I, I') \geq \nu$. Suppose $a > b$ (the argument when $a \leq b$ is similar).
The uncertainty principle implies that \begin{align*} \avg{\Delta \in P_{\nu^{-a}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta)}^{2}&\nms{\E_{I_2}g}_{L^{4}_{\#}(\Delta)}^{4}\\ & = \frac{1}{|P_{\nu^{-a}}(B)|}\sum_{\Delta \in P_{\nu^{-a}}(B)}(\frac{1}{|\Delta|}\int_{\Delta}|\E_{I_2}g|^{4})\nms{\E_{I_1}g}_{L^{2}_{\#}(\Delta)}^{2}\\ & \sim \frac{1}{|B|}\int_{B}|\E_{I_1}g|^{2}|\E_{I_2}g|^{4} \end{align*} where the last $\sim$ is because $|\E_{I_1}g|$ is essentially constant on $\Delta$. Therefore our bilinear constant $\mc{M}_{a, b}$ is essentially the same as the bilinear constant $M_{a, b}$ we defined in \eqref{mabdef}. \subsection{Some basic properties} We now have the weighted rigorous versions of Lemmas \ref{bern_uw} and \ref{l2l2_uw}. Note that we will only need the $L^{\infty}$ version of Lemma \ref{bern_uw}. \begin{lemma}[Bernstein]\label{loc_bern} Let $I$ be an interval of length $1/R$ and $\Delta$ a square of side length $R$. Then $$\nms{\E_{I}g}_{L^{\infty}(\Delta)} \lsm \nms{\E_{I}g}_{L^{p}_{\#}(w_{\Delta})}.$$ \end{lemma} \begin{lemma}[$l^2 L^2$ decoupling] Let $I$ be an interval of length $\geq 1/R$ such that $R|I| \in \N$ and $\Delta$ a square of side length $R$. Then \begin{align*} \nms{\E_{I}g}_{L^{2}(w_{\Delta})} \lsm(\sum_{J \in P_{1/R}(I)}\nms{\E_{J}g}_{L^{2}(w_{\Delta})}^{2})^{1/2}. \end{align*} \end{lemma} We now run through the substitutes of Lemmas \ref{mabtriv}-\ref{biv1}. \begin{lemma}\label{triv} Suppose $\delta$ and $\nu$ were such that $\nu^{a}\delta^{-1}$, $\nu^{b}\delta^{-1} \in \N$. Then \begin{align*} \mc{M}_{a, b}(\delta, \nu) \lsm D(\frac{\delta}{\nu^{a}})^{1/3}D(\frac{\delta}{\nu^b})^{2/3}. \end{align*} \end{lemma} \begin{proof} Let $I_1 \in P_{\nu^a}([0, 1])$ and $I_2 \in P_{\nu^b}([0, 1])$. H\"{o}lder's inequality gives that \begin{align*} &\avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta})}^{4}\\ &\quad\quad\leq \avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I_1}g}_{L^{6}_{\#}(w_{\Delta})}^{2}\nms{\E_{I_2}g}_{L^{6}_{\#}(w_{\Delta})}^{4}\\ &\quad\quad\leq (\avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I_1}g}_{L^{6}_{\#}(w_{\Delta})}^{6})^{1/3}(\avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I_2}g}_{L^{6}_{\#}(w_{\Delta})}^{6})^{2/3}\\ &\quad\quad \lsm\nms{\E_{I_1}g}_{L^{6}_{\#}(w_{B})}^{2}\nms{\E_{I_2}g}_{L^{6}_{\#}(w_{B})}^{4} \end{align*} where in the last inequality we have used that $\sum_{\Delta} w_{\Delta} \lsm w_{B}$ (see for example \cite[Proposition 2.2.14]{thesis}). Finally applying \eqref{wlhs} with parabolic rescaling then completes the proof of Lemma \ref{triv}. \end{proof} \begin{lemma}\label{interchange} Suppose $\nu^{a}\delta^{-1}, \nu^{b}\delta^{-1} \in \N$. Then \begin{align*} \mc{M}_{a, b}(\delta, \nu) \lsm \mc{M}_{b, a}(\delta, \nu)^{1/2}D(\frac{\delta}{\nu^b})^{1/2}. \end{align*} \end{lemma} \begin{proof} Let $I_1 \in P_{\nu^a}([0, 1])$ and $I_2 \in P_{\nu^b}([0, 1])$.
We have \begin{align*} &\avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta})}^{4}\\ &\,\,\leq \avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})}^{2}\nms{\E_{I_2}g}_{L^{2}_{\#}(w_{\Delta})}\nms{\E_{I_2}g}_{L^{6}_{\#}(w_{\Delta})}^{3}\\ &\,\,\leq (\avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})}^{4}\nms{\E_{I_2}g}_{L^{2}_{\#}(w_{\Delta})}^{2})^{1/2}(\avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I_2}g}_{L^{6}_{\#}(w_{\Delta})}^{6})^{1/2}\\ &\,\,\lsm (\avg{\Delta \in P_{\nu^{-\max(a, b)}}(B)}\nms{\E_{I_1}g}_{L^{4}_{\#}(w_{\Delta})}^{4}\nms{\E_{I_2}g}_{L^{2}_{\#}(w_{\Delta})}^{2})^{1/2}\nms{\E_{I_2}g}_{L^{6}_{\#}(w_B)}^{3} \end{align*} where the first and second inequalities are because of H\"{o}lder and the third inequality is an application of H\"{o}lder and the estimate $\sum_{\Delta}w_{\Delta} \lsm w_{B}$. Applying parabolic rescaling and the definition of $\mc{M}_{b, a}$ then completes the proof of Lemma \ref{interchange}. \end{proof} \begin{lemma}[Bilinear reduction]\label{bi_red} Suppose $\delta$ and $\nu$ were such that $\nu\delta^{-1} \in \N$. Then $$D(\delta) \lsm D(\frac{\delta}{\nu}) + \nu^{-1}\mc{M}_{1, 1}(\delta, \nu).$$ \end{lemma} \begin{proof} The proof is essentially the same as that of Lemma \ref{biv1} except when analyzing \eqref{holder_bi} in the off-diagonal terms we use \begin{align*} \nms{|\E_{I_i}g|^{1/3}|\E_{I_j}g|^{2/3}}_{L^{6}_{\#}(B)}^{6} &= \avg{\Delta \in P_{\nu^{-1}}(B)}\frac{1}{|\Delta|}\int_{\Delta}|\E_{I_i}g|^{2}|\E_{I_j}g|^{4}\\ &\leq \avg{\Delta \in P_{\nu^{-1}}(B)}\nms{\E_{I_i}g}_{L^{2}_{\#}(\Delta)}^{2}\nms{\E_{I_j}g}_{L^{\infty}(\Delta)}^{4}\\ &\lsm \avg{\Delta \in P_{\nu^{-1}}(B)}\nms{\E_{I_i}g}_{L^{2}_{\#}(w_{\Delta})}^{2}\nms{\E_{I_j}g}_{L^{4}_{\#}(w_{\Delta})}^{4} \end{align*} where in the second inequality we have used Bernstein. \end{proof} \subsection{Ball inflation} We now prove rigorously the ball inflation lemma we mentioned in the previous section. \begin{lemma}[Ball inflation]\label{ball_bik} Let $b \geq 1$ be a positive integer. Suppose $I_1$ and $I_2$ are $\nu$-separated intervals of length $\nu^b$. Then for any square $\Delta'$ of side length $\nu^{-2b}$, we have \begin{align}\label{bmain} \avg{\Delta \in P_{\nu^{-b}}(\Delta')}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta})}^{4} \lsm \nu^{-1}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta'})}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta'})}^{4}. \end{align} \end{lemma} \begin{proof} Without loss of generality we may assume that $\Delta'$ is centered at the origin. Fix intervals $I_1$ and $I_2$ of length $\nu^b$ which are $\nu$-separated with centers $c_1$ and $c_2$, respectively. Cover $\Delta'$ by a set $\mc{T}_1$ of mutually parallel nonoverlapping rectangles $T_1$ of dimensions $\nu^{-b} \times \nu^{-2b}$ with longer side pointing in the direction of $(-2c_1, 1)$ (the normal direction of the piece of parabola above $I_1$). Note that any such $\nu^{-b} \times \nu^{-2b}$ rectangle that intersects $\Delta'$ must be contained in $4\Delta'$. Thus we may assume that all rectangles in $\mc{T}_1$ are contained in $4\Delta'$. Finally let $T_{1}(x)$ be the rectangle in $\mc{T}_1$ containing $x$. Similarly define $\mc{T}_2$ except this time we use $I_2$.
For $x \in 4\Delta'$, define \begin{align*} F_{1}(x) := \begin{cases} \sup_{y \in 2T_{1}(x)}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{B(y, \nu^{-b})})} & \text{ if } x \in \bigcup_{T_{1} \in \mc{T}_{1}}T_{1}\\ 0 & \text{ if } x \in 4\Delta'\bs\bigcup_{T_{1} \in \mc{T}_{1}}T_{1} \end{cases} \end{align*} and \begin{align*} F_{2}(x) := \begin{cases} \sup_{y \in 2T_{2}(x)}\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{B(y, \nu^{-b})})} & \text{ if } x \in \bigcup_{T_{2} \in \mc{T}_{2}}T_{2}\\ 0 & \text{ if } x \in 4\Delta'\bs\bigcup_{T_{2} \in \mc{T}_{2}}T_{2}. \end{cases} \end{align*} Given a $\Delta \in P_{\nu^{-b}}(\Delta')$, if $x \in \Delta$, then $\Delta \subset 2T_{i}(x)$. This implies that the center $c_{\Delta}$ of $\Delta$ lies in $2T_{i}(x)$ for each $x \in \Delta$ and hence for all $x \in \Delta$, $$\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})} \leq F_{1}(x)$$ and $$\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta})} \leq F_{2}(x).$$ Therefore \begin{align}\label{bins} \nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta})}^{4} \leq \frac{1}{|\Delta|}\int_{\Delta}F_{1}(x)^{2}F_{2}(x)^{4}\, dx. \end{align} By how $F_i$ is defined, $F_i$ is constant on each $T_i \in \mc{T}_i$. That is, for each $x \in \bigcup_{T_i \in \mc{T}_i}T_{i}$, $$F_{i}(x) = \sum_{T_{i} \in \mc{T}_i}a_{T_{i}}1_{T_i}(x)$$ for some constants $a_{T_i} \geq 0$. Thus using \eqref{bins} and that the $T_i$ are disjoint, the left hand side of \eqref{bmain} is bounded above by \begin{align}\label{bstep2} \frac{1}{|\Delta'|}\int_{\Delta'}F_{1}(x)^{2}F_{2}(x)^{4}\, dx &= \frac{1}{|\Delta'|}\sum_{T_1, T_2}a_{T_1}^{2}a_{T_2}^{4}|T_1 \cap T_2|\lsm \nu^{-1}\frac{\nu^{-2b}}{|\Delta'|}\sum_{T_1, T_2}a_{T_{1}}^{2}a_{T_2}^{4} \end{align} where in the last inequality we have used that, since $I_1$ and $I_2$ are $\nu$-separated, the sine of the angle between the long directions of $T_1$ and $T_2$ is $\gtrsim |c_1 - c_2| \geq \nu$, and hence $|T_1 \cap T_2| \lsm \nu^{-1 - 2b}$ (the intersection of two rectangles of width $\nu^{-b}$ whose long sides make an angle $\theta$ has area $\lsm \nu^{-2b}/\sin\theta$). Note that \begin{align*} \nms{F_{1}}_{L^{2}_{\#}(4\Delta')}^{2} &= \frac{\nu^{-3b}}{|4\Delta'|}\sum_{T_1}a_{T_1}^{2} \end{align*} and \begin{align*} \nms{F_{2}}_{L^{4}_{\#}(4\Delta')}^{4} &= \frac{\nu^{-3b}}{|4\Delta'|}\sum_{T_2}a_{T_2}^{4}. \end{align*} Therefore \eqref{bstep2} is \begin{align*} \lsm \nu^{-1}\nms{F_{1}}_{L^{2}_{\#}(4\Delta')}^{2}\nms{F_{2}}_{L^{4}_{\#}(4\Delta')}^{4}. \end{align*} Thus we are done if we can prove that $$\nms{F_1}_{L^{2}_{\#}(4\Delta')}^{2} \lsm \nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta'})}^{2}$$ and $$\nms{F_2}_{L^{4}_{\#}(4\Delta')}^{4} \lsm \nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta'})}^{4}$$ but this is exactly what was shown in \cite[Eq. (29)]{sg} (and \cite[Lemma 2.6.3]{thesis} for the same inequality but with explicit constants). \end{proof} Our choice of bilinear constant \eqref{bik_const} makes the rigorous proofs of Lemmas \ref{mglem2} and \ref{mglem1} immediate consequences of ball inflation and $l^2 L^2$ decoupling. \begin{lemma}\label{mglem2_rig} Suppose $1 \leq a < b$ and $\delta$ and $\nu$ were such that $\nu^{b}\delta^{-1} \in \N$. Then $$\mc{M}_{a, b}(\delta, \nu) \lsm \mc{M}_{b, b}(\delta, \nu).$$ \end{lemma} \begin{proof} For arbitrary $I_1 \in P_{\nu^{a}}([0, 1])$ and $I_2 \in P_{\nu^{b}}([0, 1])$ which are $\nu$-separated, it suffices to show that \begin{align*} \avg{\Delta \in P_{\nu^{-b}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})}^{2}&\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta})}^{4}\\ & \lsm\sum_{J \in P_{\nu^{b}}(I_1)}\avg{\Delta \in P_{\nu^{-b}}(B)}\nms{\E_{J}g}_{L^{2}_{\#}(w_{\Delta})}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta})}^{4}.
\end{align*} But this is immediate from $l^2 L^2$ decoupling, which completes the proof of Lemma \ref{mglem2_rig}. \end{proof} \begin{lemma}\label{mglem1_rig} Let $b \geq 1$ and suppose $\delta$ and $\nu$ were such that $\nu^{2b}\delta^{-1} \in \N$. Then $$\mc{M}_{b, b}(\delta, \nu) \lsm \nu^{-1/6}\mc{M}_{2b, b}(\delta, \nu).$$ \end{lemma} \begin{proof} For arbitrary $I_1 \in P_{\nu^b}([0, 1])$ and $I_2 \in P_{\nu^b}([0, 1])$ which are $\nu$-separated, it suffices to prove that \begin{align*} \avg{\Delta \in P_{\nu^{-b}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})}^{2}&\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta})}^{4}\\ & \lsm \nu^{-1}\sum_{J \in P_{\nu^{2b}}(I_1)}\avg{\Delta' \in P_{\nu^{-2b}}(B)}\nms{\E_{J}g}_{L^{2}_{\#}(w_{\Delta'})}^{2}\nms{\E_{I_2}g}_{L^{4}_{\#}(w_{\Delta'})}^{4}. \end{align*} But this is immediate from ball inflation followed by $l^2 L^2$ decoupling, which completes the proof of Lemma \ref{mglem1_rig}. \end{proof} Combining Lemmas \ref{interchange}, \ref{mglem2_rig}, and \ref{mglem1_rig} gives the following corollary. \begin{cor}\label{upgrade} Suppose $\delta$ and $\nu$ were such that $\nu^{2b}\delta^{-1} \in \N$. Then \begin{align*} \mc{M}_{b, b}(\delta, \nu) \lsm \nu^{-1/6}\mc{M}_{2b, 2b}(\delta, \nu)^{1/2}D(\frac{\delta}{\nu^{b}})^{1/2}. \end{align*} \end{cor} This corollary should be compared to the trivial estimate obtained from Lemma \ref{triv} which implies $\mc{M}_{b, b}(\delta, \nu) \lsm D(\delta/\nu^b)$. \subsection{The $O_{\vep}(\delta^{-\vep})$ bound} We now prove that $D(\delta) \lsm_{\vep} \delta^{-\vep}$. The structure of the argument is essentially the same as that in Section \ref{fc_iter}. Repeatedly iterating Corollary \ref{upgrade} gives the following result. \begin{lemma}\label{iter1} Let $N$ be an integer chosen sufficiently large later and let $\delta$ be such that $\delta^{-1/2^{N}} \in \N$ and $0 < \delta < 100^{-2^{N}}$. Then \begin{align*} D(\delta) \lsm D(\delta^{1 - \frac{1}{2^N}}) + \delta^{-\frac{4}{3 \cdot 2^N}}\prod_{j = 0}^{N - 1}D(\delta^{1 - \frac{1}{2^{N - j}}})^{\frac{1}{2^{j + 1}}}. \end{align*} \end{lemma} \begin{proof} Iterating Corollary \ref{upgrade} $N$ times gives that if $\delta$ and $\nu$ were such that $\nu^{2^{N}}\delta^{-1} \in \N$, then \begin{align*} \mc{M}_{1, 1}(\delta, \nu) \lsm \nu^{-1/3}\mc{M}_{2^{N}, 2^{N}}(\delta, \nu)^{1/2^{N}}\prod_{j = 0}^{N - 1}D(\frac{\delta}{\nu^{2^{j}}})^{\frac{1}{2^{j + 1}}}. \end{align*} Applying the trivial bound for the bilinear constant gives that the above is \begin{align*} \lsm \nu^{-1/3}D(\frac{\delta}{\nu^{2^N}})^{1/2^N}\prod_{j = 0}^{N - 1}D(\frac{\delta}{\nu^{2^j}})^{\frac{1}{2^{j + 1}}}. \end{align*} Choosing $\nu = \delta^{1/2^N}$ shows that if $\delta^{-1/2^{N}} \in \N$ and $0 < \delta < 100^{-2^{N}}$, then \begin{align*} \mc{M}_{1, 1}(\delta, \delta^{1/2^{N}}) \lsm \delta^{-\frac{1}{3 \cdot 2^N}}\prod_{j = 0}^{N - 1}D(\delta^{1 - \frac{1}{2^{N - j}}})^{\frac{1}{2^{j + 1}}}. \end{align*} By the bilinear reduction, if $\delta$ was such that $\delta^{-1/2^{N}} \in \N$ and $0 < \delta < 100^{-2^N}$, then \begin{align*} D(\delta) \lsm D(\delta^{1 - \frac{1}{2^N}}) + \delta^{-\frac{4}{3 \cdot 2^N}}\prod_{j = 0}^{N - 1}D(\delta^{1 - \frac{1}{2^{N - j}}})^{\frac{1}{2^{j + 1}}}. \end{align*} This completes the proof of Lemma \ref{iter1}. \end{proof} Trivial bounds for $D(\delta)$ show that $1 \lsm D(\delta) \lsm \delta^{-1/2}$ for all $\delta \in \N^{-1}$. Let $\ld$ be the smallest real number such that $D(\delta) \lsm_{\vep} \delta^{-\ld - \vep}$ for all $\delta \in \N^{-1}$.
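As a consistency check on the exponents in Lemma \ref{iter1} (an elaboration on our part, not an extra step in the argument), suppose $D(\sigma) \lsm_{\vep} \sigma^{-\ld - \vep}$ for all $\sigma \in \N^{-1}$. The elementary identities \begin{align*} \sum_{j = 0}^{N - 1}\frac{1}{2^{j + 1}} = 1 - \frac{1}{2^{N}} \quad \text{and} \quad \sum_{j = 0}^{N - 1}\frac{1}{2^{j + 1}} \cdot \frac{1}{2^{N - j}} = \frac{N}{2^{N + 1}} \end{align*} then give \begin{align*} \prod_{j = 0}^{N - 1}D(\delta^{1 - \frac{1}{2^{N - j}}})^{\frac{1}{2^{j + 1}}} \lsm_{\vep} \delta^{-\ld(1 - \frac{1}{2^{N}} - \frac{N}{2^{N + 1}}) - \vep}, \end{align*} and combining this with the prefactor $\delta^{-\frac{4}{3 \cdot 2^N}}$ yields the exponent $-\ld(1 - \frac{1}{2^N}(1 + \frac{N}{2} - \frac{4}{3\ld}))$ that appears in the display below.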
From the trivial bounds, $\ld \in [0, 1/2]$. We claim $\ld = 0$. Suppose $\ld > 0$. Let $N$ be a sufficiently large integer $\geq \frac{8}{3\ld}$. This implies $$1 + \frac{N}{2} - \frac{4}{3\ld} \geq 1.$$ Lemma \ref{iter1} then implies that for $\delta$ such that $\delta^{-1/2^{N}} \in \N$ and $0 < \delta < 100^{-2^{N}}$, we have \begin{align*} D(\delta) \lsm_{\vep} \delta^{-\ld(1 - \frac{1}{2^N}) - \vep} + \delta^{-\ld(1 - \frac{1}{2^N}(1 + \frac{N}{2} - \frac{4}{3\ld})) - \vep} \lsm_{\vep} \delta^{-\ld(1 - \frac{1}{2^N}) - \vep} \end{align*} where in the last inequality we have applied our choice of $N$. By almost multiplicativity we then have the same estimate for all $\delta \in \N^{-1}$ (with a potentially larger constant depending on $N$). But this then contradicts the minimality of $\ld$. Therefore $\ld = 0$. \section{Unifying two styles of proof}\label{bds} We now attempt to unify the Bourgain-Demeter style of decoupling and the style of decoupling mentioned in the previous section. In view of Corollary \ref{upgrade}, instead of having two integer parameters $a$ and $b$ we just have one integer parameter. Let $b$ be an integer $\geq 1$ and let $s \in [2, 3]$ be any real number. Suppose $\delta \in \N^{-1}$ and $\nu \in \N^{-1} \cap (0, 1/100)$ were such that $\nu^{b}\delta^{-1} \in \N$. Let $\mb{M}_{b}^{(s)}(\delta, \nu)$ be the best constant such that \begin{align}\label{mb_bds} \begin{aligned} \avg{\Delta \in P_{\nu^{-b}}(B)}&(\sum_{J \in P_{\nu^{b}}(I)}\nms{\E_{J}g}_{L^{2}_{\#}(w_\Delta)}^{2})^{\frac{s}{2}}(\sum_{J' \in P_{\nu^{b}}(I')}\nms{\E_{J'}g}_{L^{2}_{\#}(w_\Delta)}^{2})^{\frac{6 - s}{2}}\\ & \leq \mb{M}_{b}^{(s)}(\delta, \nu)^{6}(\sum_{J \in P_{\delta}(I)}\nms{\E_{J}g}_{L^{2}_{\#}(w_B)}^{2})^{\frac{s}{2}}(\sum_{J' \in P_{\delta}(I')}\nms{\E_{J'}g}_{L^{2}_{\#}(w_B)}^{2})^{\frac{6 - s}{2}} \end{aligned} \end{align} for all squares $B$ of side length $\delta^{-2}$, $g: [0, 1] \rightarrow \C$, and all intervals $I, I' \in P_{\nu}([0, 1])$ which are $\nu$-separated. Note that the left hand side of the definition of $\mb{M}_{b}^{(3)}(\delta, \nu)$ is the same as $A_{6}(q, B^r, q)^{6}$ defined in \cite{sg} and from the uncertainty principle, $\mb{M}_{1}^{(2)}(\delta, \nu)$ is morally the same as $M_{1, 1}(\delta, \nu)$ defined in \eqref{mabdef} and $\mc{M}_{1, 1}(\delta, \nu)$ defined in \eqref{bik_const}. The $l^2$ piece in the definition of $\mb{M}_{b}^{(s)}(\delta, \nu)$ is so that we can make the most out of applying $l^2 L^2$ decoupling. We will use $\mb{M}_{b}^{(s)}$ as our bilinear constant in this section to show that $D(\delta) \lsm_{\vep} \delta^{-\vep}$. The bilinear constant $\mb{M}_{b}^{(s)}$ obeys much the same lemmas as in the previous sections. \begin{lemma}[cf. Lemmas \ref{mabtriv} and \ref{triv}]\label{mbtriv} If $\delta$ and $\nu$ were such that $\nu^{b}\delta^{-1} \in \N$, then \begin{align*} \mb{M}_{b}^{(s)}(\delta, \nu) \lsm D(\frac{\delta}{\nu^b}). \end{align*} \end{lemma} \begin{proof} Fix arbitrary $I_1, I_2 \in P_{\nu}([0, 1])$ which are $\nu$-separated. Moving up from $L^{2}_{\#}$ to $L^{6}_{\#}$ followed by H\"{o}lder in the average over $\Delta$ bounds the left hand side of \eqref{mb_bds} by \begin{align*} (\avg{\Delta \in P_{\nu^{-b}}(B)}(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{6}_{\#}(w_{\Delta})}^{2})^{\frac{6}{2}})^{\frac{s}{6}}(\avg{\Delta \in P_{\nu^{-b}}(B)}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{6}_{\#}(w_{\Delta})}^{2})^{\frac{6}{2}})^{\frac{6 - s}{6}}.
\end{align*} Using Minkowski to interchange the $l^2$ and $l^6$ sums, followed by $\sum_{\Delta}w_{\Delta} \lsm w_{B}$, shows that this is \begin{align*} \lsm (\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{6}_{\#}(w_B)}^{2})^{\frac{s}{2}}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{6}_{\#}(w_B)}^{2})^{\frac{6 - s}{2}}. \end{align*} Parabolic rescaling then completes the proof of Lemma \ref{mbtriv}. \end{proof} \begin{lemma}[Bilinear reduction, cf. Lemmas \ref{biv1} and \ref{bi_red}]\label{bds_bds} Suppose $\delta$ and $\nu$ were such that $\nu\delta^{-1} \in \N$. Then \begin{align*} D(\delta) \lsm D(\frac{\delta}{\nu}) + \nu^{-1}\mb{M}_{1}^{(s)}(\delta, \nu). \end{align*} \end{lemma} \begin{proof} Note that the left hand side of the definition of $\mb{M}_{1}^{(s)}(\delta, \nu)$ is \begin{align*} \avg{\Delta \in P_{\nu^{-1}}(B)}\nms{\E_{I_1}g}_{L^{2}_{\#}(w_{\Delta})}^{s}\nms{\E_{I_2}g}_{L^{2}_{\#}(w_{\Delta})}^{6 - s}. \end{align*} Proceeding as in the proof of Lemmas \ref{biv1} and \ref{bi_red}, for $I_i, I_j \in P_{\nu}([0, 1])$ which are $\nu$-separated, we have \begin{align}\label{bdseq1} \nms{|\E_{I_i}g||\E_{I_j}g|}_{L^{3}_{\#}(B)}^{1/2} \leq \nms{|\E_{I_i}g|^{\frac{s}{6}}|\E_{I_j}g|^{1 - \frac{s}{6}}}_{L^{6}_{\#}(B)}^{1/2}\nms{|\E_{I_i}g|^{1-\frac{s}{6}}|\E_{I_j}g|^{\frac{s}{6}}}_{L^{6}_{\#}(B)}^{1/2}. \end{align} We have \begin{align*} \nms{|\E_{I_i}g|^{\frac{s}{6}}|\E_{I_j}g|^{1 - \frac{s}{6}}}_{L^{6}_{\#}(B)}^{6} &= \avg{\Delta \in P_{\nu^{-1}}(B)}\frac{1}{|\Delta|}\int_{\Delta}|\E_{I_i}g|^{s}|\E_{I_j}g|^{6 - s}\\ &\leq \avg{\Delta \in P_{\nu^{-1}}(B)}\nms{\E_{I_i}g}_{L^{s}_{\#}(\Delta)}^{s}\nms{\E_{I_j}g}_{L^{\infty}(\Delta)}^{6 - s}\\ &\lsm \avg{\Delta \in P_{\nu^{-1}}(B)}\nms{\E_{I_i}g}_{L^{2}_{\#}(w_\Delta)}^{s}\nms{\E_{I_j}g}_{L^{2}_{\#}(w_\Delta)}^{6 - s} \end{align*} where in the last inequality we have used Bernstein on both factors. Inserting this into \eqref{bdseq1} and applying the definition of $\mb{M}_{1}^{(s)}(\delta, \nu)$ then completes the proof of Lemma \ref{bds_bds}. \end{proof} \begin{lemma}[Ball inflation, cf. Lemma \ref{ball_bik}]\label{ball_bds} Let $b \geq 1$ be a positive integer. Suppose $I_1$ and $I_2$ are $\nu$-separated intervals of length $\nu$. Then for any square $\Delta'$ of side length $\nu^{-2b}$ and any $\vep > 0$, we have \begin{align*} \avg{\Delta \in P_{\nu^{-b}}(\Delta')}&(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta})}^{2})^{\frac{s}{2}}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{6 - s}_{\#}(w_{\Delta})}^{2})^{\frac{6 - s}{2}}\\ &\lsm_{\vep} \nu^{-1 - b\vep}(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta'})}^{2})^{\frac{s}{2}}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{6 - s}_{\#}(w_{\Delta'})}^{2})^{\frac{6 - s}{2}}. \end{align*} \end{lemma} \begin{proof} The proof proceeds as in the proof of ball inflation in \cite[Theorem 9.2]{sg} (see also \cite[Section 2.6]{thesis} for more details and explicit constants in the specific case of the parabola). From dyadic pigeonholing, since we can afford to lose a $\nu^{-b\vep}$ (the pigeonholing costs only a power of $\log(1/\nu)$, which is $\lsm_{\vep} \nu^{-b\vep}$), it suffices to restrict the sum over $J$ and $J'$ to families $\mc{F}_1$ and $\mc{F}_2$ such that for all $J \in \mc{F}_1$, $\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta'})}$ are comparable up to a factor of 2 and similarly for all $J' \in \mc{F}_2$.
H\"{o}lder gives \begin{align*} &\avg{\Delta \in P_{\nu^{-b}}(\Delta')}(\sum_{J \in\mc{F}_1}\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta})}^{2})^{\frac{s}{2}}(\sum_{J' \in \mc{F}_2}\nms{\E_{J'}g}_{L^{6 - s}_{\#}(w_{\Delta})}^{2})^{\frac{6 - s}{2}}\\ &\quad\leq (\# \mc{F}_1)^{\frac{s}{2} - 1}(\# \mc{F}_2)^{\frac{6 - s}{2} - 1}\avg{\Delta \in P_{\nu^{-b}}(\Delta')}(\sum_{J \in\mc{F}_1}\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta})}^{s})(\sum_{J' \in \mc{F}_2}\nms{\E_{J'}g}_{L^{6 - s}_{\#}(w_{\Delta})}^{6-s}). \end{align*} The proof of Lemma \ref{ball_bik} shows that this is \begin{align*} \lsm \nu^{-1}(\# \mc{F}_1)^{\frac{s}{2} - 1}(\# \mc{F}_2)^{\frac{6 - s}{2} - 1}(\sum_{J \in\mc{F}_1}\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta'})}^{s})(\sum_{J' \in \mc{F}_2}\nms{\E_{J'}g}_{L^{6 - s}_{\#}(w_{\Delta'})}^{6-s}). \end{align*} Since for $J \in \mc{F}_1$ the values of $\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta'})}$ are comparable and similarly for $J' \in \mc{F}_2$, the above is \begin{align*} \lsm \nu^{-1}(\sum_{J \in\mc{F}_1}\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta'})}^{2})^{\frac{s}{2}}(\sum_{J' \in \mc{F}_2}\nms{\E_{J'}g}_{L^{6 - s}_{\#}(w_{\Delta'})}^{2})^{\frac{6 - s}{2}}. \end{align*} This completes the proof of Lemma \ref{ball_bds}. \end{proof} \begin{lemma}[cf. Corollary \ref{upgrade}]\label{mbup_bds} Suppose $\delta$ and $\nu$ were such that $\nu^{2b}\delta^{-1} \in \N$. Then for every $\vep > 0$, \begin{align*} \mb{M}_{b}^{(s)}(\delta, \nu) \lsm_{\vep} \nu^{-\frac{1}{6}(1 + b\vep)}\mb{M}_{2b}^{(s)}(\delta, \nu)^{1/2}D(\frac{\delta}{\nu^b})^{1/2}. \end{align*} \end{lemma} \begin{proof} Let $\ta$ and $\vp$ be such that $\frac{\ta}{2} + \frac{1 - \ta}{6} = \frac{1}{s}$ and $\frac{\vp}{2} + \frac{1 - \vp}{6} = \frac{1}{6 - s}$. Then H\"{o}lder gives $\nms{f}_{L^{s}} \leq \nms{f}_{L^{2}}^{\ta}\nms{f}_{L^{6}}^{1 - \ta}$ and $\nms{f}_{L^{6 - s}} \leq \nms{f}_{L^{2}}^{\vp}\nms{f}_{L^{6}}^{1 - \vp}$. Fix arbitrary $I_1, I_2 \in P_{\nu}([0, 1])$ which are $\nu$-separated. We have \begin{align*} &\avg{\Delta \in P_{\nu^{-b}}(B)}(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{2}_{\#}(w_{\Delta})}^{2})^{\frac{s}{2}}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{2}_{\#}(w_{\Delta})}^{2})^{\frac{6 - s}{2}}\\ &\leq \avg{\Delta' \in P_{\nu^{-2b}}(B)}\avg{\Delta \in P_{\nu^{-b}}(\Delta')}(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta})}^{2})^{\frac{s}{2}}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{6-s}_{\#}(w_{\Delta})}^{2})^{\frac{6 - s}{2}}\\ &\lsm_{\vep} \nu^{-1 - b\vep}\avg{\Delta' \in P_{\nu^{-2b}}(B)}(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{s}_{\#}(w_{\Delta'})}^{2})^{\frac{s}{2}}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{6-s}_{\#}(w_{\Delta'})}^{2})^{\frac{6 - s}{2}} \end{align*} where the first inequality is from H\"{o}lder and the second inequality is from ball inflation. We now use how $\ta$ and $\vp$ are defined to return to a piece which we control by $l^2 L^2$ decoupling and a piece which we can control by parabolic rescaling. H\"{o}lder (as in the definition of $\ta$ and $\vp$) gives that the average above is bounded by \begin{align*} \avg{\Delta' \in P_{\nu^{-2b}}(B)}(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{2}_{\#}(w_{\Delta'})}^{2\ta}&\nms{\E_{J}g}_{L^{6}_{\#}(w_{\Delta'})}^{2(1 - \ta)})^{\frac{s}{2}}\times\\ &(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{2}_{\#}(w_{\Delta'})}^{2\vp}\nms{\E_{J'}g}_{L^{6}_{\#}(w_{\Delta'})}^{2(1 - \vp)})^{\frac{6 - s}{2}}. 
\end{align*} H\"{o}lder in the sum over $J$ and $J'$ shows that this is \begin{align*} \leq \avg{\Delta' \in P_{\nu^{-2b}}(B)}\bigg(&(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{2}_{\#}(w_{\Delta'})}^{2})^{\ta}(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{6}_{\#}(w_{\Delta'})}^{2})^{1 - \ta}\bigg)^{\frac{s}{2}}\times\\ &\bigg((\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{2}_{\#}(w_{\Delta'})}^{2})^{\vp}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{6}_{\#}(w_{\Delta'})}^{2})^{1 - \vp}\bigg)^{\frac{6 - s}{2}}. \end{align*} Since $\ta s = 3 - \frac{s}{2}$ and $\vp(6 - s) = \frac{s}{2}$, rearranging the above gives \begin{align*} \avg{\Delta' \in P_{\nu^{-2b}}(B)}\bigg(&(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{2}_{\#}(w_{\Delta'})}^{2})^{\frac{1}{2}(3 - \frac{s}{2})}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{2}_{\#}(w_{\Delta'})}^{2})^{\frac{1}{2} \cdot \frac{s}{2}}\bigg)\times\\ &\bigg((\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{6}_{\#}(w_{\Delta'})}^{2})^{\frac{1}{2}\cdot 3(\frac{s}{2} - 1)}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{6}_{\#}(w_{\Delta'})}^{2})^{\frac{1}{2} \cdot 3(2 - \frac{s}{2})}\bigg). \end{align*} Cauchy-Schwarz in the average over $\Delta'$ then bounds the above by \begin{align}\label{two_term} \begin{aligned} &\bigg(\avg{\Delta' \in P_{\nu^{-2b}}(B)}(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{2}_{\#}(w_{\Delta'})}^{2})^{\frac{6-s}{2}}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{2}_{\#}(w_{\Delta'})}^{2})^{\frac{s}{2}}\bigg)^{\frac{1}{2}}\times\\ &\bigg(\avg{\Delta' \in P_{\nu^{-2b}}(B)}(\sum_{J \in P_{\nu^{b}}(I_1)}\nms{\E_{J}g}_{L^{6}_{\#}(w_{\Delta'})}^{2})^{\frac{3(s - 2)}{2}}(\sum_{J' \in P_{\nu^{b}}(I_2)}\nms{\E_{J'}g}_{L^{6}_{\#}(w_{\Delta'})}^{2})^{\frac{3(4 - s)}{2}}\bigg)^{\frac{1}{2}}. \end{aligned} \end{align} After $l^2 L^2$ decoupling, the first term in \eqref{two_term} is \begin{align}\label{m2b} \lsm \mb{M}_{2b}^{(s)}(\delta, \nu)^{3}(\sum_{J \in P_{\delta}(I_1)}\nms{\E_{J}g}_{L^{2}_{\#}(w_B)}^{2})^{\frac{1}{2}\cdot \frac{6-s}{2}}(\sum_{J' \in P_{\delta}(I_2)}\nms{\E_{J'}g}_{L^{2}_{\#}(w_B)}^{2})^{\frac{1}{2}\cdot \frac{s}{2}}. \end{align} H\"{o}lder in the average over $\Delta'$ bounds the second term in \eqref{two_term} by \begin{align*} (\avg{\Delta' \in P_{\nu^{-2b}}(B)}(\sum_{J \in P_{\nu^b}(I_1)}\nms{\E_{J}g}_{L^{6}_{\#}(w_{\Delta'})}^{2})^{\frac{6}{2}})^{\frac{s-2}{4}}(\avg{\Delta' \in P_{\nu^{-2b}}(B)}(\sum_{J \in P_{\nu^b}(I_1)}\nms{\E_{J}g}_{L^{6}_{\#}(w_{\Delta'})}^{2})^{\frac{6}{2}})^{\frac{4-s}{4}}. \end{align*} Applying Minkowski to interchange the $l^2$ and $l^6$ norms shows that this is \begin{align*} \lsm (\sum_{J \in P_{\nu^b}(I_1)}\nms{\E_{J}g}_{L^{6}_{\#}(w_B)}^{2})^{\frac{3(s - 2)}{4}}(\sum_{J' \in P_{\nu^b}(I_2)}\nms{\E_{J'}g}_{L^{6}_{\#}(w_B)}^{2})^{\frac{3(4-s)}{4}}. \end{align*} Parabolic rescaling bounds this by \begin{align}\label{2nd} D(\frac{\delta}{\nu^b})^{3}(\sum_{J \in P_{\delta}(I_1)}\nms{\E_{J}g}_{L^{6}_{\#}(w_B)}^{2})^{\frac{1}{2} \cdot\frac{3(s - 2)}{2}}(\sum_{J' \in P_{\delta}(I_2)}\nms{\E_{J'}g}_{L^{6}_{\#}(w_B)}^{2})^{\frac{1}{2} \cdot \frac{3(4 - s)}{2}}. \end{align} Combining \eqref{m2b} and \eqref{2nd} then completes the proof of Lemma \ref{mbup_bds}. \end{proof} With Lemma \ref{mbup_bds}, the same proof as Lemma \ref{iter1} gives the following. \begin{lemma}[cf. Corollary \ref{core} and Lemma \ref{iter1}]\label{bds_iterate_est} Let $N$ be an integer chosen sufficient large later and let $\delta$ be such that $\delta^{-1/2^N} \in \N$ and $0 < \delta < 100^{-2^N}$. 
Then \begin{align*} D(\delta) \lsm_{\vep} D(\delta^{1 - \frac{1}{2^N}}) + \delta^{-\frac{4}{3\cdot 2^N} - \frac{N\vep}{6\cdot 2^N}}\prod_{j = 0}^{N - 1}D(\delta^{1 - \frac{1}{2^{N - j}}})^{\frac{1}{2^{j + 1}}}. \end{align*} \end{lemma} \begin{proof} This follows from the proof of Lemma \ref{iter1} and the observation that \begin{align*} \mb{M}_{1}^{(s)}(\delta, \nu) \lsm_{\vep} \nu^{-\frac{1}{3} - \frac{1}{6}N\vep}\mb{M}_{2^N}^{(s)}(\delta, \nu)^{\frac{1}{2^N}}\prod_{j = 0}^{N - 1}D(\frac{\delta}{\nu^{2^{j}}})^{\frac{1}{2^{j + 1}}}, \end{align*} along with Lemmas \ref{mbtriv} and \ref{bds_bds}. \end{proof} To finish, we proceed as at the end of the previous section. Let $\ld \in [0, 1/2]$ be the smallest real number such that $D(\delta) \lsm_{\vep} \delta^{-\ld - \vep}$. Suppose $\ld > 0$. Choose $N$ such that $$1 + \frac{N}{2} - \frac{4}{3\ld} \geq 1.$$ Then for $\delta$ such that $\delta^{-1/2^{N}} \in \N$ and $0 < \delta < 100^{-2^{N}}$, Lemma \ref{bds_iterate_est} gives \begin{align*} D(\delta) \lsm_{\vep} \delta^{-\ld(1 - \frac{1}{2^N}) - \vep} + \delta^{-\ld(1 - \frac{1}{2^N}(1 + \frac{N}{2} - \frac{4}{3\ld})) - \vep(1 - \frac{1}{2^N}) + \frac{N\vep}{2\cdot 2^N} -\frac{N\vep}{6 \cdot 2^N}} \lsm_{\vep} \delta^{-\ld(1 - \frac{1}{2^N}) - \vep}. \end{align*} Almost multiplicativity gives that $D(\delta) \lsm_{N, \vep} \delta^{-\ld(1 - \frac{1}{2^N}) - \vep}$ for all $\delta \in \N^{-1}$, contradicting the minimality of $\ld$. Therefore $\ld = 0$; that is, $D(\delta) \lsm_{\vep} \delta^{-\vep}$ for all $\delta \in \N^{-1}$. \bibliographystyle{amsplain}
\section{Introduction} \label{sec_mimo_df_relay_bf_jamming_finitealphabet_1} The physically degraded discrete memoryless wiretap channel model considered by Wyner in \cite{ir1} opened the path for reliable and secure information transmission using physical layer techniques. Subsequent extensions to the discrete memoryless broadcast channel and the Gaussian channel were made in \cite{ir2} and \cite{ir3}, respectively. A wireless network can be easily eavesdropped upon due to the broadcast nature of wireless transmission. However, using physical layer techniques (e.g., wiretap codes, beamforming using multiple antennas, artificial noise injection), a wireless network can be secured against eavesdropping. Achievable secrecy rates and capacities of single and multiple antenna wiretap channels have been reported by many authors, e.g., \cite{ir4, ir5, ir6, ir7, ir8, ir9}. A relay, operating in decode-and-forward (DF) or amplify-and-forward (AF) mode, can act as an intermediate node and help improve the secrecy rate \cite{ir10}. DF and AF relay beamforming techniques for secrecy under perfect/imperfect channel state information (CSI) have been well studied in the literature, e.g., \cite{ir11, ir12, ir13, ir14, ir15, ir16}. In these works, the transmit codeword symbols belong to an infinite (Gaussian) constellation. However, in a practical communication system, the codeword symbols will belong to a finite alphabet, e.g., an $M$-ary constellation. The effect of finite constellations on the secrecy rate has been reported in \cite{ir17, ir18, ir19, ir20, irx, ir21, ir22}. In \cite{ir21}, DF relay beamforming for secrecy with finite alphabet was considered. There, it was shown that the source power and relay beamforming vector obtained for the Gaussian alphabet, when used with a finite alphabet, could lead to zero secrecy rate. A power control algorithm was suggested to alleviate the loss in secrecy rate. Motivated by the above works, in this paper, we consider the secrecy rate in DF relay beamforming with finite-alphabet input using a MIMO relay. The considered system consists of a source node, a destination node, and multiple non-colluding eavesdroppers. A DF MIMO relay aids the communication between the source and destination. It is known that the secrecy rate can be improved through the use of artificial noise (AN) injection \cite{ir8}, \cite{ir11}, \cite{irz}, \cite{ir26}, \cite{ir27}, \cite{ir16}. In this work, we allow the MIMO relay to inject AN in addition to relaying the information symbol from the source. Consequently, we solve for the optimum source power, the signal beamforming weights, and the AN covariance matrix at the MIMO relay. Since the CSI will not be perfect in practice, we consider a norm-bounded CSI error model and investigate the effect of imperfect CSI on the secrecy rate. We use the fact that a wiretap code consists of two parts: $i)$ a common (non-secret) message, and $ii)$ a secret message. Accordingly, the source transmits two independent messages: the common message and the secret message. The common message is transmitted at a fixed rate $R_{0}$, and it is intended for the destination node. The secret message is also intended for the destination node, but it should be kept secret from the $J$ eavesdroppers. The source and the MIMO DF relay operate under individual power constraints.
In order to maximize the worst case secrecy rate, we maximize the worst case link information rate to the destination subject to: $i)$ the individual power constraints on the source and the MIMO DF relay, and $ii)$ the constraint that the best case link information rates to the $J$ eavesdroppers be less than or equal to $R_{0}$, in order to support the fixed common message rate $R_{0}$. Numerical results showing the effect of perfect/imperfect CSI and the presence/absence of AN on the secrecy rate with finite-alphabet input are presented. $\bf{Notations:}$ $\boldsymbol{A} \in \mathbb{C}^{N_{1} \times N_{2}}$ implies that $\boldsymbol{A}$ is a complex matrix of dimension $N_{1} \times N_{2}$. $\boldsymbol{A} \succeq \boldsymbol{0}$ and $\boldsymbol{A} \succ \boldsymbol{0}$ imply that $\boldsymbol{A}$ is a positive semidefinite matrix and a positive definite matrix, respectively. The identity matrix is denoted by $\boldsymbol{I}$. Transpose and complex conjugate transpose operations are denoted by $[.]^{T}$ and $[.]^{\ast}$, respectively. $\mathbb{E}[.]$ denotes the expectation operator. $\parallel\hspace{-1mm}.\hspace{-1mm}\parallel$ denotes the 2-norm operator. The trace of a matrix $\boldsymbol{A} \in \mathbb{C}^{N \times N}$ is denoted by $\Tr(\boldsymbol{A})$. $\boldsymbol{\psi} \in \mathbb{C}^{N \times 1} \sim \mathcal{CN}(\boldsymbol{0}, \boldsymbol{\Psi})$ implies that $\boldsymbol{\psi}$ is a circularly symmetric complex Gaussian random vector with mean vector $\boldsymbol{0}$ and covariance matrix $\boldsymbol{\Psi}$. \section{System model} \label{sec_mimo_df_relay_bf_jamming_finitealphabet_2} Consider a DF cooperative relaying scheme which consists of a source node $S$ having a single transmit antenna, a MIMO DF relay node $R$ having $N$ receive/transmit antennas, a destination node $D$ having a single receive antenna, and $J$ non-colluding eavesdropper nodes $E_{1},E_{2},\cdots,E_{J}$ having a single receive antenna each. The system model is shown in Fig. \ref{fig_mimo_df_relay_bf_jamming_finitealphabet_1}. In addition to the links from the relay to the destination node and from the relay to the eavesdropper nodes, we assume direct links from the source to the destination node and from the source to the eavesdropper nodes. The complex channel gain vector between the source and the relay is denoted by $\boldsymbol{g}=[g_{1},g_{2},\cdots,g_{N}]^{T} \in {\mathbb{C}}^{N\times 1}$. Likewise, the channel gain vector between the relay and the destination node $D$ is denoted by $\boldsymbol{h}=[h_{1},h_{2},\cdots,h_{N}] \in {\mathbb{C}}^{1\times N}$, and the channel gain vector between the relay and the $j$th eavesdropper node $E_{j}$, $1 \leq j \leq J$, is denoted by $\boldsymbol{z}_{j}=[z_{1j},z_{2j},\cdots,z_{Nj}]\in {\mathbb{C}}^{1\times N}$. The channel gains on the direct links from the source to $D$ and from the source to $E_{j}$ are denoted by $h_{0}$ and $z_{0j}$, respectively. \begin{figure} \center \includegraphics[totalheight=6.5cm,width=6.5cm]{fig_mimo_df_relay_bf_jamming_finitealphabet_1.ps} \caption{System model for MIMO DF relaying.} \label{fig_mimo_df_relay_bf_jamming_finitealphabet_1} \end{figure} The MIMO relay operates in half duplex mode, and the communication happens in two hops. Each hop is divided into $n$ channel uses. Recall that a wiretap code consists of two parts: $i)$ a common (non-secret) message, and $ii)$ a secret message. In the first hop of transmission, the source $S$ transmits two independent messages $W_{0}$ and $W_{1}$ which are equiprobable over $\{1,2,\cdots,2^{2nR_{0}}\}$ and $\{1,2,\cdots,2^{2nR_{s}(R_{0})}\}$, respectively.
$W_{0}$ is the common message which is transmitted at a fixed rate $R_{0}$, and it is intended for the destination $D$. $W_{1}$ is a secret message which is transmitted at some rate $R_{s}(R_{0})$; it is also intended only for $D$, and it should be kept secret from all $E_{j}$s. For each $W_{0}$ and $W_{1}$ drawn independently and equiprobably from the sets $\{1,2,\cdots,2^{2nR_{0}}\}$ and $\{1,2,\cdots,2^{2nR_{s}(R_{0})}\}$, respectively, the source $S$ maps $W_{0}$ and $W_{1}$ to a codeword $\{ x_{m} \}^{n}_{m = 1}$ of length $n$. Each symbol $x_{m}$ in the codeword is independent and equiprobable over a complex finite-alphabet set $\mathbb{A} = \{ a_{1}, a_{2}, \cdots, a_{M} \} $ of size $M$ with ${\mathbb{E}} [ {x_{m}} ] = 0$ and ${\mathbb{E}} [ {|x_{m}|}^{2} ] = 1$. The source is constrained by the available power $P_{S}$, and it transmits the weighted symbol $\sqrt{P_{s}}x_{m}$ in the $m$th channel use, where $1 \leq m \leq n$, and $0 \leq P_{s} \leq P_{S}$. Hereafter, we will denote the symbol $x_{m}$ of the codeword $\{ x_{m} \}^{n}_{m = 1}$ by $x$, and we will consider only one channel use. Let $\boldsymbol{y}_{R}$, ${y}_{D_{1}}$, and ${y}_{E_{1j}}$ denote the received signals at the MIMO relay $R$, destination $D$, and $j$th eavesdropper $E_{j}$, respectively, in the first hop. We have \begin{eqnarray} \boldsymbol{y}_{R} \ &=& \ \sqrt{P_{s}}\boldsymbol{g} x \ + \ \boldsymbol{\eta}_{R}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_1} \\ y_{D_{1}} \ &=& \ \sqrt{P_{s}}h_{0} x \ + \ \eta_{D_{1}}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_2} \\ y_{E_{1j}} \ &=& \ \sqrt{P_{s}}z_{0j} x \ + \ \eta_{E_{1j}}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_3} \end{eqnarray} where $\boldsymbol{\eta}_{R} ( \sim {\mathcal{CN}}(\boldsymbol{0}, N_{0}\boldsymbol{I}))$, ${\eta}_{D_{1}} ( \sim {\mathcal{CN}}(0, N_{0}) )$, and ${\eta}_{E_{1j}} ( \sim {\mathcal{CN}}(0, N_{0}) )$ are receiver noise components and are assumed to be independent. In the second hop of transmission, the MIMO relay applies the complex weight vector $\boldsymbol{\phi} = [\phi_{1},\phi_{2},\cdots,\phi_{N}]^{T} \in {\mathbb{C}}^{N \times 1}$ on the successfully decoded symbol $x$ and retransmits it. In order to improve the secrecy rate, the MIMO relay also injects the artificial noise $\boldsymbol{\psi} \in {\mathbb{C}}^{N \times 1}(\sim {\mathcal{CN}}(\boldsymbol{0}, \boldsymbol{\Psi}))$. The symbol transmitted by the MIMO relay on the $i$th, $1 \leq i \leq N$, antenna is $\phi_{i} x + \psi_{i}$. Let ${y}_{D_{2}}$ and ${y}_{E_{2j}}$ denote the received signals at the destination $D$ and $j$th eavesdropper $E_{j}$, respectively, in the second hop. We have \begin{eqnarray} y_{D_{2}} \ &=& \ \boldsymbol{h} \boldsymbol{\phi}x \ + \ \boldsymbol{h} \boldsymbol{\psi} \ + \ \eta_{D_{2}}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_4} \\ y_{E_{2j}} \ &=& \ \boldsymbol{z}_{j} \boldsymbol{\phi}x \ + \ \boldsymbol{z}_{j} \boldsymbol{\psi} \ + \ \eta_{E_{2j}}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_5} \end{eqnarray} where ${\eta}_{D_{2}} ( \sim {\mathcal{CN}}(0, N_{0}) )$ and ${\eta}_{E_{2j}} ( \sim {\mathcal{CN}}(0, N_{0}) )$ are receiver noise components and are assumed to be independent.
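Before rewriting the two hops in vector form, we note that the per-hop model (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_1})-(\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_5}) is straightforward to simulate. The sketch below is ours and is only illustrative: the channel realizations, the weight vector $\boldsymbol{\phi}$, and the AN covariance matrix $\boldsymbol{\Psi}$ are arbitrary placeholders, not values used elsewhere in this paper. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, N0, Ps = 2, 1.0, 1.0            # relay antennas, noise power, source power

def cn(size=None):                 # draws of CN(0, 1)
    return (rng.standard_normal(size)
            + 1j * rng.standard_normal(size)) / np.sqrt(2)

g, h, h0 = cn(N), cn(N), cn()      # placeholder S-R, R-D, S-D channel gains
x = rng.choice([1.0, -1.0])        # one BPSK codeword symbol, E[|x|^2] = 1

# First hop, eqs. (1)-(2): observations at the relay R and destination D.
y_R  = np.sqrt(Ps) * g * x + np.sqrt(N0) * cn(N)
y_D1 = np.sqrt(Ps) * h0 * x + np.sqrt(N0) * cn()

# Second hop, eq. (4): the relay re-transmits phi*x and injects AN psi.
phi = np.ones(N, dtype=complex)    # placeholder beamforming weight vector
Psi = 0.1 * np.eye(N)              # placeholder AN covariance (PSD)
psi = np.linalg.cholesky(Psi) @ cn(N)   # psi ~ CN(0, Psi)
y_D2 = h @ (phi * x + psi) + np.sqrt(N0) * cn()

# Relay transmit power ||phi||^2 + Tr(Psi); cf. the constraint stated next.
print(np.linalg.norm(phi)**2 + np.trace(Psi).real)
\end{verbatim}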
Using (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_2}), (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_4}), and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_3}), (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_5}), we rewrite the received signals at $D$ and $E_{j}$ in the following vector forms, respectively: \begin{eqnarray} \boldsymbol{y}_{D} \ &=& \ [y_{D_{1}}, \ y_{D_{2}}]^{T} \nonumber \\ &=& \ {[ \sqrt{P_{s}}h_{0}, \ \boldsymbol{h}\boldsymbol{\phi}]}^{T}x \ + \ {[\eta_{D_{1}}, \ \boldsymbol{h}\boldsymbol{\psi} + \eta_{D_{2}}]}^{T}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_6} \\ \boldsymbol{y}_{E_{j}} \ &=& \ [y_{E_{1j}}, \ y_{E_{2j}}]^{T} \nonumber \\ &=& \ {[ \sqrt{P_{s}}z_{0j}, \ \boldsymbol{z}_{j}\boldsymbol{\phi}]}^{T}x \ + \ {[\eta_{E_{1j}}, \ \boldsymbol{z}_{j}\boldsymbol{\psi} + \eta_{E_{2j}}]}^{T}. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_7} \end{eqnarray} We assume that the MIMO relay's transmit power, denoted by $P_{r}$, is constrained by the available power $P_{R}$. This implies that \begin{eqnarray} P_{r} \ &=& \ {\mathbb{E}}\{{\parallel \hspace{-1mm} (\boldsymbol{\phi}x + \boldsymbol{\psi}) \hspace{-1mm} \parallel}^{2} \} \nonumber \\ \ &=& \ {\parallel \hspace{-1mm} \boldsymbol{\phi} \hspace{-1mm} \parallel}^{2} \ + \ \Tr(\boldsymbol{\Psi}) \ \leq \ P_{R}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_8} \end{eqnarray} where the cross terms vanish since $x$ is zero mean and independent of $\boldsymbol{\psi}$, and ${\mathbb{E}}\{ {\parallel \hspace{-1mm} \boldsymbol{\psi} \hspace{-1mm} \parallel}^{2} \} = \Tr(\boldsymbol{\Psi})$. We also assume that the channel remains static over the entire codeword transmit duration. Further, denoting the secret message decoded at the MIMO relay $R$ and destination $D$ by $\widehat{W}^{\footnotesize{R}}_{1}$ and $\widehat{W}^{\footnotesize{D}}_{1}$, respectively, the reliability constraints at $R$ and $D$ and the perfect secrecy constraints at $E_{j}$s are as follows: \begin{eqnarray} \text{Pr}(\widehat{W}^{\footnotesize{R}}_{1} \neq W_{1}) &\leq& \epsilon_{n}, \quad \text{Pr}(\widehat{W}^{\footnotesize{D}}_{1} \neq W_{1}) \ \ \leq \ \ \epsilon_{n}, \nonumber \\ \frac{1}{2n}I(W_{1}; \boldsymbol{y}^{2n}_{E_{j}} ) &\leq& \epsilon_{n}, \ \ \forall j \ = \ 1,2,\cdots,J, \nonumber \end{eqnarray} where $\boldsymbol{y}^{2n}_{E_{j}}$ is the received signal vector at $E_{j}$ in $2n$ channel uses, and $\epsilon_{n} \rightarrow 0$ as $n \rightarrow \infty$. We also note that the reliability constraints at the MIMO relay $R$ and destination $D$ for the secret message also ensure the reliability of the common message. \section{DF relay beamforming - perfect CSI} \label{sec_mimo_df_relay_bf_jamming_finitealphabet_3} In this section, we assume that the CSI on all the links is known perfectly.
Using (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_1}), (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_6}), and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_7}), we get the $S$-$R$, $S$-$D$, and $S$-$E_{j}$ link information rates, respectively, as follows: \begin{eqnarray} \frac{1}{2}I(x; \boldsymbol{y}_{R}) &=& \frac{1}{2}I \bigg( \frac{ P_{s} {\parallel \hspace{-1mm} \boldsymbol{g} \hspace{-1mm} \parallel}^{2} }{ N_{0} }\bigg), \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_9} \\ \frac{1}{2}I(x; \boldsymbol{y}_{D} ) &=& \frac{1}{2} I \bigg( \frac{ P_{s} { | h_{0} | }^{2} }{ N_{0} } + \frac{ \boldsymbol{h} \boldsymbol{\phi} \boldsymbol{\phi}^{\ast} \boldsymbol{h}^{\ast} }{ N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} } \bigg), \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_10} \\ \frac{1}{2}I(x; \boldsymbol{y}_{E_{j}} ) &=& \frac{1}{2} I \bigg( \frac{ P_{s} { | z_{0j} | }^{2} }{ N_{0} } + \frac{ \boldsymbol{z}_{j} \boldsymbol{\phi} \boldsymbol{\phi}^{\ast} \boldsymbol{z}^{\ast}_{j} }{ N_{0} + \boldsymbol{z}_{j} \boldsymbol{\Psi} \boldsymbol{z}^{\ast}_{j} } \bigg), \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_11} \end{eqnarray} where \begin{eqnarray} I(\rho) \ \Define \ \frac{1}{M} \sum\limits^{M}_{l = 1} \int p_{n}\big(y_{_{}} - \sqrt{\rho} a_{l}\big) \nonumber \\ \log_{2} \frac{p_{n}(y_{_{}} - \sqrt{\rho} a_{l})}{\frac{1}{M} \sum\limits^{M}_{m = 1}p_{n}(y_{_{}} - \sqrt{\rho} a_{m})} d y_{_{}}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_12} \end{eqnarray} and $p_n(\theta) = \frac{1}{\pi} e^{{{-\mid \theta \mid}^{2}}}$. The factor $1/2$ in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_9}), (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_10}), and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_11}) is due to two hops. Further, the MIMO relay $R$ will be able to decode the symbol $x$ if the following condition holds true \cite{ir11, ir15, ir16, ir21}: \begin{eqnarray} \frac{1}{2}I(x;\boldsymbol{y}_{R}) \ \geq \ \frac{1}{2}I(x;\boldsymbol{y}_{D}). \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_13} \end{eqnarray} In order to find the maximum achievable secrecy rate $R_{s}(R_{0})$ which also supports the fixed common message rate $R_{0}$, we maximize the $S-D$ link information rate subject to $i)$ $S-E_{j}$, $1\leq j \leq J$, link information rates be less than or equal to $R_{0}$, $ii)$ the information rate constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_13}), and $iii)$ the power constraints. The optimization problem is as follows: \begin{eqnarray} R_{D}(R_{0}) \ = \ \max_{ P_{s}, \ \boldsymbol{\phi}, \ \boldsymbol{\Psi} } \ \frac{1}{2}I(x; \boldsymbol{y}_{D}) \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_14} \\ \text{s.t.} \quad \quad \frac{1}{2}I(x; \boldsymbol{y}_{E_j}) \ \leq \ R_{0}, \quad \forall j=1,2,\cdots,J, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_15} \\ \frac{1}{2}I(x;\boldsymbol{y}_{R} ) \ \geq \ \frac{1}{2}I(x;\boldsymbol{y}_{D}), \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_16} \\ 0 \leq P_{s} \leq P_{S}, \quad \boldsymbol{\Psi} \succeq \boldsymbol{0}, \quad {\parallel \hspace{-1mm} \boldsymbol{\phi} \hspace{-1mm} \parallel}^{2} \ + \ \Tr(\boldsymbol{\Psi}) \ \leq \ P_{R}. 
\label{eqn_mimo_df_relay_bf_jamming_finitealphabet_17} \end{eqnarray} Having obtained $R_{D}(R_{0})$ from (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_14}), the maximum achievable secrecy rate $R_{s}(R_{0})$ for a given common message rate $R_{0}$ is \cite{ir9} \begin{eqnarray} R_{s}(R_{0}) \ = \ { \{ R_{D}(R_{0}) - R_{0} \} }^{+}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_18} \end{eqnarray} where $ { \{ \alpha \} }^{+} = \max(0, \alpha)$. From the constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_16}), it is obvious that the upper bound on the $S$-$D$ link information rate, denoted by $R_{D}$, can be obtained by evaluating (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_9}) at $P_{s} = P_{S}$. For the values of $R_{0}$ over the interval $[0, R_{D}]$, the maximum achievable secrecy rate, denoted by $R_{s}$, is obtained as follows: \begin{eqnarray} R_{s} \ &=& \ \max_{0 \ \leq \ R_{0} \ \leq \ R_{D}} \ { \{ R_{D}(R_{0}) - R_{0} \} }^{+} \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_19} \\ \ &=& \ \max_{0 \ \leq \ l \ \leq \ L} \ { \{ R_{D}(l\Delta_{1}) - l\Delta_{1} \} }^{+}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_20} \end{eqnarray} where $L$ is a large positive integer, $\Delta_{1} = R_{D}/L$, $l$ is an integer, and $R_{0} = l \Delta_{1}$. We solve the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_14}) for a fixed $P_{s} = k \Delta_{2}$, where $\Delta_{2} = P_{S}/K$, $K$ is a large positive integer, and $1 \leq k \leq K$. Hereafter, we will assume that $P_{s}$ is known. Further, it is shown in \cite{ir23, ir24} that for various $M$-ary alphabets, the mutual information expression in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_12}) is a strictly increasing, concave function of the SNR. With this fact, we rewrite the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_14}) into the following equivalent form: \begin{eqnarray} \max_{\boldsymbol{\Phi}, \ \boldsymbol{\Psi}} \ \bigg( a + \frac{ \boldsymbol{h} \boldsymbol{\Phi} \boldsymbol{h}^{\ast} }{ N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} } \bigg) \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_21} \end{eqnarray} \begin{eqnarray} \text{s.t.} \quad \quad \quad \forall j=1,2,\cdots,J, \nonumber \\ \bigg( b_{j} + \frac{ \boldsymbol{z}_{j} \boldsymbol{\Phi} \boldsymbol{z}^{\ast}_{j} }{ N_{0} + \boldsymbol{z}_{j} \boldsymbol{\Psi} \boldsymbol{z}^{\ast}_{j} } \bigg) \ \leq \ I^{-1}(2R_{0}), \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_22} \\ c \ \geq \ \bigg( a + \frac{ \boldsymbol{h} \boldsymbol{\Phi} \boldsymbol{h}^{\ast} }{ N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} } \bigg) \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_23} \\ \boldsymbol{\Phi} \succeq \boldsymbol{0}, \ rank(\boldsymbol{\Phi}) = 1, \ \boldsymbol{\Psi} \succeq \boldsymbol{0}, \ \Tr( \boldsymbol{\Phi} + \boldsymbol{\Psi}) \ \leq \ P_{R}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_24} \end{eqnarray} where $\boldsymbol{\Phi} = \boldsymbol{\phi} \boldsymbol{\phi}^{\ast}$, $a = \big( \frac{P_{s} {|h_{0}|}^{2}}{N_{0}} \big)$, $b_{j} = \big( \frac{P_{s} {|z_{0j}|}^{2}}{N_{0}} \big)$, and $c = \big( \frac{ P_{s} {\parallel \boldsymbol{g} \parallel}^{2}}{N_{0}}\big)$.
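Evaluating the constraint (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_22}) requires the value $I^{-1}(2R_{0})$, and the rate expressions require evaluating $I(\cdot)$ in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_12}). A minimal numerical sketch (ours, not taken from \cite{ir23, ir24}) is given below: it evaluates (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_12}) by Monte Carlo averaging over the noise and inverts it by a scalar bisection, which is justified by the monotonicity of $I(\cdot)$ in the SNR noted above. The sample size and the upper end of the search interval are arbitrary choices. \begin{verbatim}
import numpy as np

def mi_awgn(rho, alphabet, n_mc=100_000, seed=0):
    """Monte Carlo estimate of I(rho) in (12) for a unit-energy alphabet
    over complex AWGN with noise density p_n(t) = exp(-|t|^2)/pi."""
    rng = np.random.default_rng(seed)   # common random numbers keep the
    a = np.asarray(alphabet, complex)   # bisection below stable
    n = (rng.standard_normal(n_mc)
         + 1j * rng.standard_normal(n_mc)) / np.sqrt(2)   # n ~ CN(0, 1)
    val = 0.0
    for al in a:
        # For y = sqrt(rho)*a_l + n, the columns of d are y - sqrt(rho)*a_m.
        d = np.sqrt(rho) * (al - a)[None, :] + n[:, None]
        log_num = -np.abs(n) ** 2                  # log p_n(n), up to log(pi)
        log_den = np.log(np.mean(np.exp(-np.abs(d) ** 2), axis=1))
        val += np.mean(log_num - log_den) / np.log(2)
    return val / a.size

def mi_inverse(target, alphabet, hi=1e4, iters=40):
    """Bisection for I^{-1}(target); valid since I(.) increases with SNR."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mi_awgn(mid, alphabet) < target else (lo, mid)
    return 0.5 * (lo + hi)

bpsk = [1.0, -1.0]
print(mi_awgn(10.0, bpsk))         # tends to log2(M) = 1 at high SNR
print(mi_inverse(2 * 0.25, bpsk))  # I^{-1}(2 R_0), e.g., for R_0 = 0.25
\end{verbatim}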
Further, relaxing the $ rank(\boldsymbol{\Phi}) = 1$ constraint, we rewrite the above optimization problem into the following form: \begin{eqnarray} \max_{t, \ \boldsymbol{\Phi}, \ \boldsymbol{\Psi} } \quad t \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_25} \\ \text{s.t.} \quad \quad (t - a) \big( N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} \big) - (\boldsymbol{h} \boldsymbol{\Phi} \boldsymbol{h}^{\ast}) \ \leq \ 0, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_59} \\ \forall j \ = \ 1,2,\cdots,J, \quad \nonumber \\ \big( \boldsymbol{z}_{j} \boldsymbol{\Phi} \boldsymbol{z}^{\ast}_{j} \big) - \Big( I^{-1}(2R_{0}) - b_{j} \Big) ( N_{0} + \boldsymbol{z}_{j} \boldsymbol{\Psi} \boldsymbol{z}^{\ast}_{j} ) \ \leq \ 0, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_60} \\ (\boldsymbol{h} \boldsymbol{\Phi} \boldsymbol{h}^{\ast}) - (c - a) \big( N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} \big) \ \leq \ 0, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_61} \\ \boldsymbol{\Phi} \succeq \boldsymbol{0}, \quad \boldsymbol{\Psi} \succeq \boldsymbol{0}, \quad \Tr( \boldsymbol{\Phi} + \boldsymbol{\Psi}) \ \leq \ P_{R}. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_26} \end{eqnarray} The above problem can be easily solved using the bisection method \cite{ir25}: for a fixed $t$, the constraints (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_59})-(\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_61}) and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_26}) are linear in $(\boldsymbol{\Phi}, \boldsymbol{\Psi})$, and hence each bisection step reduces to a semidefinite feasibility problem. The initial search interval in the bisection method can be taken as $[0, \ c]$. In the appendix, we show that the solution $\boldsymbol{\Phi}$ of the above problem has rank 1. Further, denoting the maximum value of $t$ by $t_{max}$, the secrecy rate is obtained as follows: \begin{eqnarray} R_{s}(R_{0}) \ = \ { \bigg \{ \frac{1}{2} I(t_{max}) - R_{0} \bigg \} }^{+}. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_27} \end{eqnarray} \section{DF relay beamforming - imperfect CSI} \label{sec_mimo_df_relay_bf_jamming_finitealphabet_4} In this section, we assume that each receiver has perfect knowledge of its CSI. We also assume that the control unit which computes the source power, signal beamforming vector, and AN covariance matrix has imperfect CSI on all links. The imperfection in CSI is modeled as follows \cite{ir15, ir26, ir27}: \begin{eqnarray} \boldsymbol{g}=\widehat{\boldsymbol{g}}+\boldsymbol{e}_{\boldsymbol{g}}, \quad h_{0} = \widehat{h}_{0} + e_{ h_{0} }, \quad \boldsymbol{h} = \widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} }, \nonumber \\ \forall j = 1,2,\cdots,J, \quad z_{0j} = \widehat{z}_{0j} + e_{ z_{0j} }, \quad \boldsymbol{z}_{j} = \widehat{ \boldsymbol{z} }_{j} + \boldsymbol{e}_{ \boldsymbol{z}_{j} }, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_28} \end{eqnarray} where $\widehat{ \boldsymbol{g} }$, $\widehat{h}_{0}$, $\widehat{ \boldsymbol{h} }$, $\widehat{z}_{0j}$, $\widehat{ \boldsymbol{z} }_{j}$ are the available CSI estimates, and $\boldsymbol{e}_{ \boldsymbol{g} }$, $e_{ h_{0} }$, $\boldsymbol{e}_{ \boldsymbol{h} }$, $e_{ z_{0j} }$, $\boldsymbol{e}_{ \boldsymbol{z}_{j} }$ are the corresponding CSI errors.
We assume that the CSI errors are bounded, i.e., \begin{eqnarray} \parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{g} } \hspace{-1mm} \parallel \leq \epsilon_{ \boldsymbol{g} }, \quad | e_{ h_{0} } | \leq \epsilon_{ h_{0} }, \quad \parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{h} } \hspace{-1mm} \parallel \leq \epsilon_{ \boldsymbol{h} }, \nonumber \\ \forall j=1,2,\cdots,J, \quad | e_{ z_{0j} } | \leq \epsilon_{ z_{0j} }, \quad \parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{z}_{j} } \hspace{-1mm} \parallel \leq \epsilon_{ \boldsymbol{z}_{j} }. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_29} \end{eqnarray} With the above CSI error model, we write the rank relaxed optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_21}) as follows: \begin{eqnarray} \max_{ \boldsymbol{\Phi}, \ \boldsymbol{\Psi} } \ \min_{ \boldsymbol{e}_{ \boldsymbol{h} } } \ \bigg( \frac{ aN_{0} + (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} }) \big( a \boldsymbol{\Psi} + \boldsymbol{\Phi} \big) (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} })^{\ast} }{ N_{0} + (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} }) \boldsymbol{\Psi} (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} })^{\ast} } \bigg) \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_30} \end{eqnarray} {\small \begin{eqnarray} \text{s.t.} \quad \quad {\parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{h} } \hspace{-1mm} \parallel}^{2} \leq \epsilon^{2}_{ \boldsymbol{h} }, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_31} \\ \left \{ \begin{array}{cc} \forall j=1,2,\cdots,J, \\ \mathop{\max } \limits_{ \boldsymbol{e}_{ \boldsymbol{z}_{j} } } \ \bigg( \frac{ b_{j}N_{0} + (\widehat{ \boldsymbol{z} }_{j} + \boldsymbol{e}_{ \boldsymbol{z}_{j} }) \big( b_{j} \boldsymbol{\Psi} + \boldsymbol{\Phi} \big) (\widehat{ \boldsymbol{z} }_{j} + \boldsymbol{e}_{ \boldsymbol{z}_{j} })^{\ast} }{ N_{0} + (\widehat{ \boldsymbol{z} }_{j} + \boldsymbol{e}_{ \boldsymbol{z}_{j} }) \boldsymbol{\Psi} (\widehat{ \boldsymbol{z} }_{j} + \boldsymbol{e}_{ \boldsymbol{z}_{j} })^{\ast} } \bigg) \\ \leq \ I^{-1}(2R_{0}), \\ \text{s.t.} \quad \quad {\parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{z}_{j} } \hspace{-1mm} \parallel}^{2} \leq \epsilon^{2}_{ \boldsymbol{z}_{j} }, \end{array} \right \} \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_32} \\ \left \{ \begin{array}{cc} c \ \geq \ \mathop{\max } \limits_{ \boldsymbol{e}_{ \boldsymbol{h} } } \Big(a_{max} + \frac{ (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} }) \boldsymbol{\Phi} (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} })^{\ast} }{N_{0} + (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} }) \boldsymbol{\Psi} (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} })^{\ast}} \Big) \\ \text{s.t.} \quad \quad {\parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{h} } \hspace{-1mm} \parallel}^{2} \leq \epsilon^{2}_{ \boldsymbol{h} }, \end{array} \right \} \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_33} \\ \boldsymbol{\Phi} \succeq \boldsymbol{0}, \quad \boldsymbol{\Psi} \succeq \boldsymbol{0}, \quad \Tr(\boldsymbol{\Phi} + \boldsymbol{\Psi}) \ \leq \ P_R, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_34} \end{eqnarray} } \vspace{-4mm} where \begin{eqnarray} a_{} = \Big( \frac{ P_{s} {| |\widehat{h}_{0}| - \epsilon_{h_{0}} |}^{2} }{ N_{0} } \Big) \quad \text{if} \quad ( | \widehat{h}_{0} | > \epsilon_{h_{0}} ), \quad 0 \quad \text{else}, 
\label{eqn_mimo_df_relay_bf_jamming_finitealphabet_35} \\ b_{j} = \Big( \frac{ P_{s} {| |\widehat{z}_{0j}| + \epsilon_{z_{0j}} |}^{2} }{ N_{0} } \Big) , \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_36} \\ c = \Big( \frac{ P_{s} {| \parallel \hspace{-1mm} \widehat{ \boldsymbol{g} } \hspace{-1mm} \parallel - \epsilon_{ \boldsymbol{g} } |}^{2} } { N_{0} } \Big) \quad \text{if} \quad ( \parallel \hspace{-1mm} \widehat{ \boldsymbol{g} } \hspace{-1mm} \parallel > \epsilon_{ \boldsymbol{g} } ), \quad 0 \quad \text{else}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_37} \\ a_{max} = \Big( \frac{ P_{s} { | |\widehat{h}_{0}| + \epsilon_{ h_{0} } | }^{2} }{ N_{0} } \Big). \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_38} \end{eqnarray} The worst case and best case quantities in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_35})-(\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_38}) follow from $\min_{ |e| \leq \epsilon }|\widehat{h} + e| = { \{ |\widehat{h}| - \epsilon \} }^{+}$ and $\max_{ |e| \leq \epsilon }|\widehat{h} + e| = |\widehat{h}| + \epsilon$ (and similarly for the norm of $\widehat{ \boldsymbol{g} }$). The objective function in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_30}) corresponds to the worst case $S$-$D$ link information rate over the region of CSI error uncertainty. The constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_32}) corresponds to the best case $S$-$E_{j}$ link information rate over the region of CSI error uncertainty. The constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_33}) is associated with the information rate constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_23}), i.e., the worst case information rate to the MIMO relay $R$ over the region of CSI error uncertainty should be greater than or equal to the best case information rate to the destination $D$. Solving the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_30}) is hard due to the presence of $ \boldsymbol{e}_{ \boldsymbol{h}}$ in both the numerator and denominator of the objective function in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_30}) and the constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_33}). Similarly, $ \boldsymbol{e}_{ \boldsymbol{z}_{j} } $ appears in both the numerator and denominator of the constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_32}).
So, by independently constraining the various quadratic terms appearing in the objective function in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_30}) and the constraints in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_32}), (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_33}), we get the following lower bound for the above optimization problem: \begin{eqnarray} \max_{ \boldsymbol{\Phi}, \ \boldsymbol{\Psi}, \atop{r_{1}, \ r_{2}, \ r_{3}, \ r_{4}, \atop{s_{1j}, \ s_{2j}, \ j=1,2,\cdots,J} } } \ \ \frac{ r_{1} }{ r_{2} } \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_39} \end{eqnarray} \begin{eqnarray} \text{s.t.} \quad \quad \boldsymbol{\Phi} \succeq \boldsymbol{0}, \quad \boldsymbol{\Psi} \succeq \boldsymbol{0}, \quad \Tr(\boldsymbol{\Phi} + \boldsymbol{\Psi}) \ \leq \ P_R, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_40} \\ \forall \boldsymbol{e}_{ \boldsymbol{h} }\quad \text{s.t.} \quad {\parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{h} } \hspace{-1mm} \parallel}^{2} \leq {\epsilon}^{2}_{ \boldsymbol{h} } \ \Longrightarrow \ \nonumber \\ 0 \ \leq \ r_{1} \ \leq \ aN_{0} + (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} }) \big( a \boldsymbol{\Psi} + \boldsymbol{\Phi} \big) (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} })^{\ast}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_41} \\ \forall \boldsymbol{e}_{ \boldsymbol{h} }\quad \text{s.t.} \quad {\parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{h} } \hspace{-1mm} \parallel}^{2} \leq {\epsilon}^{2}_{ \boldsymbol{h} } \ \Longrightarrow \ \nonumber \\ N_{0} + (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} }) \boldsymbol{\Psi} (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} })^{\ast} \ \leq \ r_{2}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_42} \\ \frac{s_{1j}} { s_{2j} } \ \leq \ I^{-1} (2R_{0}), \quad \forall j=1,2,\cdots,J \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_43} \\ \forall \boldsymbol{e}_{ \boldsymbol{z}_{j} }\quad \text{s.t.} \quad {\parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{z}_{j} } \hspace{-1mm} \parallel}^{2} \leq {\epsilon}^{2}_{ \boldsymbol{z}_{j} } \ \Longrightarrow \ \nonumber \\ b_{j}N_{0} + (\widehat{ \boldsymbol{z}_{j} } + \boldsymbol{e}_{ \boldsymbol{z}_{j} }) \big( b_{j} \boldsymbol{\Psi} + \boldsymbol{\Phi} \big) (\widehat{ \boldsymbol{z}_{j} } + \boldsymbol{e}_{ \boldsymbol{z}_{j} })^{\ast} \ \leq \ s_{1j}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_44} \\ \forall \boldsymbol{e}_{ \boldsymbol{z}_{j} }\quad \text{s.t.} \quad {\parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{z}_{j} } \hspace{-1mm} \parallel}^{2} \leq {\epsilon}^{2}_{ \boldsymbol{z}_{j} } \ \Longrightarrow \ \nonumber \\ 0 \ \leq \ s_{2j} \ \leq \ N_{0} + (\widehat{ \boldsymbol{z} }_{j} + \boldsymbol{e}_{ \boldsymbol{z}_{j} }) \boldsymbol{\Psi} (\widehat{ \boldsymbol{z} }_{j} + \boldsymbol{e}_{ \boldsymbol{z}_{j} })^{\ast}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_45} \end{eqnarray} \begin{eqnarray} c \ \geq \ \Big( a_{max} + \frac{r_{3}}{r_{4}} \Big), \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_46} \\ \forall \boldsymbol{e}_{ \boldsymbol{h} }\quad \text{s.t.} \quad {\parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{h} } \hspace{-1mm} \parallel}^{2} \leq {\epsilon}^{2}_{ \boldsymbol{h} } \ \Longrightarrow \ \nonumber \\ (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} }) \boldsymbol{\Phi} (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} })^{\ast} \ \leq \ r_{3}, 
\label{eqn_mimo_df_relay_bf_jamming_finitealphabet_47} \\ \forall \boldsymbol{e}_{ \boldsymbol{h} }\quad \text{s.t.} \quad {\parallel \hspace{-1mm} \boldsymbol{e}_{ \boldsymbol{h} } \hspace{-1mm} \parallel}^{2} \leq {\epsilon}^{2}_{ \boldsymbol{h} } \ \Longrightarrow \ \nonumber \\ 0 \ \leq \ r_{4} \ \leq \ N_{0} + (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} }) \boldsymbol{\Psi} (\widehat{ \boldsymbol{h} } + \boldsymbol{e}_{ \boldsymbol{h} })^{\ast}. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_48} \end{eqnarray} The quadratic inequality constraints in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_41}) and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_42}) are associated with the objective function in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_30}). The constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_43}), and the quadratic inequality constraints in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_44}) and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_45}) are associated with the constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_32}). Similarly, the constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_46}), and the quadratic inequality constraints in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_47}) and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_48}) are associated with the constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_33}). Further, using $S$-procedure \cite{ir25}, we transform the quadratic inequality constraints in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_41}), (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_42}), (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_44}), (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_45}), (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_47}), and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_48}), into the following linear matrix inequality (LMI) forms, respectively: {\scriptsize \begin{eqnarray} &\hspace{-100mm} & \hspace{-45mm} r_{1} \geq 0, \ \ \lambda_{1} \geq 0, \boldsymbol{A}_{1} \Define \nonumber \\ \left [ \begin{array}{cc} \big( a \boldsymbol{\Psi} + \boldsymbol{\Phi} \big) + \lambda_{1} \boldsymbol{I} & \big( a \boldsymbol{\Psi} + \boldsymbol{\Phi} \big) \widehat{ \boldsymbol{h} }^{\ast} \\ \widehat{ \boldsymbol{h} }^{} \big( a \boldsymbol{\Psi} + \boldsymbol{\Phi} \big)^{\ast} & aN_{0} + \widehat{ \boldsymbol{h} }^{} \big( a \boldsymbol{\Psi} + \boldsymbol{\Phi} \big)\widehat{ \boldsymbol{h} }^{\ast} -r_{1} - \lambda_{1}\epsilon^{2}_{ \boldsymbol{h} } \end{array} \right ], \nonumber \\ &\hspace{-100mm} &\hspace{-35mm} \lambda_{2} \geq 0, \ \ \boldsymbol{A}_{2} \Define \nonumber \\ \left [ \begin{array}{cc} -\boldsymbol{\Psi} + \lambda_{2} \boldsymbol{I} & -\boldsymbol{\Psi} \widehat{ \boldsymbol{h} }^{\ast} \\ -\widehat{ \boldsymbol{h} }^{} \boldsymbol{\Psi}^{\ast} & -N_{0} - \widehat{ \boldsymbol{h} }^{} \boldsymbol{\Psi}\widehat{ \boldsymbol{h} }^{\ast} + r_{2} - \lambda_{2}\epsilon^{2}_{ \boldsymbol{h} } \end{array} \right ], \nonumber \\ &\hspace{-100mm} &\hspace{-38mm} \mu_{1j} \geq 0, \ \ \boldsymbol{B}_{1j} \Define \nonumber \\ \left [ \begin{array}{cc} -\big( b_{j} \boldsymbol{\Psi} + \boldsymbol{\Phi} \big) + \mu_{1j} \boldsymbol{I} & -\big( b_{j} \boldsymbol{\Psi} + \boldsymbol{\Phi} \big) \widehat{ \boldsymbol{z} }^{\ast}_{j} \\ -\widehat{ \boldsymbol{z} }^{}_{j} \big( b_{j} \boldsymbol{\Psi} + \boldsymbol{\Phi} \big)^{\ast} & -b_{j}N_{0} - \widehat{ \boldsymbol{z} }^{}_{j} \big( b_{j} 
\boldsymbol{\Psi} + \boldsymbol{\Phi} \big)\widehat{ \boldsymbol{z} }^{\ast}_{j} + s_{1j} - \mu_{1j}\epsilon^{2}_{ \boldsymbol{z}_{j} } \end{array} \right ], \nonumber \\ &\hspace{-100mm} &\hspace{-50mm} s_{2j} \geq 0, \ \ \mu_{2j} \geq 0, \ \ \boldsymbol{B}_{2j} \Define \nonumber \\ \left [ \begin{array}{cc} \boldsymbol{\Psi} + \mu_{2j} \boldsymbol{I} & \boldsymbol{\Psi} \widehat{ \boldsymbol{z} }^{\ast}_{j} \\ \widehat{ \boldsymbol{z} }^{}_{j} \boldsymbol{\Psi}^{\ast} & N_{0} + \widehat{ \boldsymbol{z} }^{}_{j} \boldsymbol{\Psi}\widehat{ \boldsymbol{z} }^{\ast}_{j} - s_{2j} - \mu_{2j}\epsilon^{2}_{ \boldsymbol{z}_{j} } \end{array} \right ], \nonumber \\ &\hspace{-100mm} &\hspace{-35mm} \lambda_{3} \geq 0, \quad \boldsymbol{A}_{3} \Define \nonumber \\ \left [ \begin{array}{cc} -\boldsymbol{\Phi} + \lambda_{3} \boldsymbol{I} & -\boldsymbol{\Phi} \widehat{ \boldsymbol{h} }^{\ast} \\ -\widehat{ \boldsymbol{h} }^{} \boldsymbol{\Phi}^{\ast} & - \widehat{ \boldsymbol{h} }^{} \boldsymbol{\Phi}\widehat{ \boldsymbol{h} }^{\ast} + r_{3} - \lambda_{3}\epsilon^{2}_{ \boldsymbol{h} } \end{array} \right ], \nonumber \\ &\hspace{-100mm} &\hspace{-45mm} r_{4} \geq 0, \quad \lambda_{4} \geq 0, \quad \boldsymbol{A}_{4} \Define \nonumber \\ \left[ \begin{array}{cc} \boldsymbol{\Psi} + \lambda_{4} \boldsymbol{I} & \boldsymbol{\Psi} \widehat{ \boldsymbol{h} }^{\ast} \\ \widehat{ \boldsymbol{h} }^{} \boldsymbol{\Psi}^{\ast} & N_{0} + \widehat{ \boldsymbol{h} }^{} \boldsymbol{\Psi}\widehat{ \boldsymbol{h} }^{\ast} - r_{4} - \lambda_{4}\epsilon^{2}_{ \boldsymbol{h} } \end{array} \right], \nonumber \end{eqnarray} } \vspace{-4mm} \hspace{-4.5mm} where $\boldsymbol{A}_{1} \succeq \boldsymbol{0}$, $\boldsymbol{A}_{2} \succeq \boldsymbol{0}$, $\boldsymbol{A}_{3} \succeq \boldsymbol{0}$, $\boldsymbol{A}_{4} \succeq \boldsymbol{0}$, $\boldsymbol{B}_{1j} \succeq \boldsymbol{0}$, $\boldsymbol{B}_{2j} \succeq \boldsymbol{0}$. Substituting the above LMI constraints into the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_39}), we obtain the following equivalent form: \vspace{-1mm} {\small \begin{eqnarray} \max_{ \boldsymbol{\Phi}, \ \boldsymbol{\Psi}, \atop{r_{1}, \cdots r_{4}, \ \lambda_{1},\cdots,\lambda_{4}, \atop{s_{1j}, \ s_{2j}, \ \mu_{1j}, \ \mu_{2j}, \ j=1,2,\cdots,J, \atop{r} } } } \ \ r \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_55} \\ \text{s.t.} \quad \quad \boldsymbol{\Phi} \succeq \boldsymbol{0}, \quad \boldsymbol{\Psi} \succeq \boldsymbol{0}, \quad \Tr(\boldsymbol{\Phi} + \boldsymbol{\Psi}) \ \leq \ P_R, & & \nonumber \\ r r_{2} - r_{1} \leq 0, \quad \forall j=1,2,\cdots,J, \quad s_{1j} - s_{2j} I^{-1} (2R_{0}) \leq 0, & & \nonumber \\ r_{1} \geq 0, \ r_{4} \geq 0, \ \lambda_{1} \geq 0, \ \lambda_{2} \geq 0, \ \lambda_{3} \geq 0, \ \lambda_{4} \geq 0, & & \nonumber \\ \boldsymbol{A}_{1} \succeq \boldsymbol{0}, \ \boldsymbol{A}_{2} \succeq \boldsymbol{0}, \ \boldsymbol{A}_{3} \succeq \boldsymbol{0}, \ \boldsymbol{A}_{4} \succeq \boldsymbol{0}, & & \nonumber \\ s_{2j} \geq 0, \ \mu_{1j} \geq 0, \ \mu_{2j} \geq 0, \ \boldsymbol{B}_{1j} \succeq \boldsymbol{0},\ \boldsymbol{B}_{2j} \succeq \boldsymbol{0}, & & \nonumber \\ r_{3} - (c - a_{max}) r_{4} \leq 0. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_56} \end{eqnarray} } \vspace{-1mm} \hspace{-4.5mm} The above problem can be solved using the bisection method as discussed in Section \ref{sec_mimo_df_relay_bf_jamming_finitealphabet_3}.
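To make the procedure concrete, the following Python sketch (our own illustration, not part of the original development) applies the same bisection mechanism to a toy linear-fractional program: for each fixed $r$, the constraint $r r_{2} - r_{1} \leq 0$ together with the remaining constraints is convex, so every iteration reduces to a convex feasibility check, here posed with the open-source \texttt{cvxpy} package. All data values are hypothetical placeholders; in the actual problem the feasibility check would contain the LMIs $\boldsymbol{A}_{i} \succeq \boldsymbol{0}$, $\boldsymbol{B}_{1j} \succeq \boldsymbol{0}$, $\boldsymbol{B}_{2j} \succeq \boldsymbol{0}$ above.

\begin{verbatim}
import cvxpy as cp
import numpy as np

a = np.array([2.0, 1.0])  # hypothetical data for the toy numerator
d = np.array([1.0, 3.0])  # hypothetical data for the toy denominator

def feasible(r):
    # For fixed r, feasibility of r*r2 - r1 <= 0 over a convex set
    # is itself a convex problem (here a simple LP).
    x = cp.Variable(2, nonneg=True)
    r1 = a @ x + 1.0
    r2 = d @ x + 1.0
    prob = cp.Problem(cp.Minimize(0),
                      [cp.sum(x) <= 1.0, r * r2 - r1 <= 0])
    prob.solve()
    return prob.status == cp.OPTIMAL

lo, hi = 0.0, 10.0  # bisection interval; the actual problem uses [0, c]
for _ in range(40):  # the interval halves at every iteration
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print("approximate r_max:", lo)
\end{verbatim}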
The initial search interval in the bisection method can be taken as $[0, \ c]$, where $c$ is as defined in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_37}). Further, denoting the maximum value of $r$ by $r_{max}$, the lower bound on the secrecy rate is obtained as follows: \begin{eqnarray} R_{s}(R_{0}) \ \geq \ { \bigg \{ \frac{1}{2} I(r_{max}) - R_{0} \bigg \} }^{+}. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_57} \end{eqnarray} \section{Results and discussions} \label{sec_mimo_df_relay_bf_jamming_finitealphabet_5} In this section, we present numerical results on the secrecy rate for the BPSK alphabet (i.e., $M =2$), with/without AN, and under perfect/imperfect CSI conditions. We assume that $N = 2$, $J = 1,2,3$, $N_{0} = 1$, $P_{s} = 0$ dB, and $P_{R} = 9$ dB. {\em Perfect CSI case of Section \ref{sec_mimo_df_relay_bf_jamming_finitealphabet_3} :} We have used the following channel gains in the simulations: \begin{eqnarray} \boldsymbol{g} &=& [-0.5839 + 2.2907i, -0.7158 + 0.1144i]^{T}, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_66} \\ h_0 &=& -0.3822 - 0.3976i, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_67} \\ z_{01} &=& 0.0123 + 0.0137i, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_68} \\ z_{02} &=& 0.0231 - 0.0178i, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_69} \\ z_{03} &=& -0.0045 - 0.0042i, \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_70} \\ \boldsymbol{h} &=& [0.2174 - 0.6913i, \ -0.4047 - 0.3159i], \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_71} \\ \boldsymbol{z}_{1} &=& [0.3826 + 0.0811i, \ 0.8389 - 0.0943i], \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_72} \\ \boldsymbol{z}_{2} &=& [0.2977 + 0.7902i, \ -0.2069 + 0.4696i], \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_73} \\ \boldsymbol{z}_{3} &=& [-0.6076 + 0.6637i, \ -0.3316 + 0.1921i]. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_74} \end{eqnarray} In Fig. \ref{sim_fig_mimo_df_relay_bf_jamming_finitealphabet_1}, we plot the secrecy rate versus $R_{0}$ for the BPSK alphabet (i.e., $M =2$), with/without AN, $J = 1,2,3$ eavesdroppers, $P_{s} = 0$ dB, and $P_{R} = 9$ dB. We observe that the secrecy rate initially increases with $R_{0}$ and then drops to zero for large values of $R_{0}$. We also observe that the injection of AN improves the secrecy rate when $J= 2,3$ eavesdroppers are present. However, when only one eavesdropper is present, the secrecy rate plots with/without AN overlap. This is due to null-signal beamforming by the MIMO relay toward the eavesdropper, which is possible only when the number of eavesdroppers is strictly less than the number of antennas at the MIMO relay, as is the case here with $N=2$ and $J=1$. For $J=1$, the maximum secrecy rate occurs at $R_{0}=0.001445$. Further, for $J=2,3$ without AN, the maximum secrecy rate occurs at $R_{0}=0.145797$, and with AN it occurs at $R_{0}=0.080959$ and $R_{0}=0.099059$, respectively. It is seen that the secrecy rate falls approximately linearly for large $R_{0}$. This near-linear fall for large values of $R_{0}$ is due to the saturation of the $S-D$ link information rate at $\frac{1}{2}\log_{2} 2 = 0.5$ for $M = 2$. We have also numerically observed that the rank of $\boldsymbol{\Phi}$ is 1. \begin{figure} \center \includegraphics[totalheight=8.5cm,width=8.5cm]{sim_fig_mimo_df_relay_finitealphabet_Rs_R0.eps} \caption{Secrecy rate vs $R_{0}$ in MIMO DF relay beamforming for BPSK alphabet, with/without AN signal.
$N=2$, $N_0=1$, $J=1,2,3$, $M=2$, fixed $P_s = 0$ dB, and $P_R = 9$ dB.} \label{sim_fig_mimo_df_relay_bf_jamming_finitealphabet_1} \end{figure} {\em Imperfect CSI case of Section \ref{sec_mimo_df_relay_bf_jamming_finitealphabet_4} :} Here, we assume that the channel gains in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_66})-(\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_74}) are the available CSI estimates. We also assume that the magnitudes of the CSI errors in all the links are equal, i.e., $\epsilon_{ \boldsymbol{g} } = \epsilon_{ h_{0} } = \epsilon_{ z_{0j} } = \epsilon_{ \boldsymbol{h} } = \epsilon_{ \boldsymbol{z}_{j} } = \epsilon$. We solve the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_55}) for the BPSK alphabet (i.e., $M =2$), with AN, fixed $R_{0} = 0.0810$, $P_{s} = 0$ dB, and $P_{R} = 9$ dB. In Fig. \ref{sim_fig_mimo_df_relay_bf_jamming_finitealphabet_2}, we plot $R_{s}$ vs $\epsilon$ with AN for $J = 1,2,3$. We observe that the secrecy rate decreases as the CSI error and the number of eavesdroppers increase. We have also numerically observed that the rank of $\boldsymbol{\Phi}$ is 1. \begin{figure}[t] \center \includegraphics[totalheight=8.5cm,width=8.5cm]{sim_fig_mimo_df_relay_finitealphabet_Rs_CSIError.eps} \caption{ $R_{s}$ vs $\epsilon$ in MIMO DF relay beamforming for BPSK alphabet and with AN signal. $N=2$, $N_0=1$, $J=1,2,3$, $M=2$, fixed $R_{0} = 0.0810$, $P_{s} = 0$ dB, and $P_{R} = 9$ dB.} \label{sim_fig_mimo_df_relay_bf_jamming_finitealphabet_2} \end{figure} \section{Conclusions} \label{sec_mimo_df_relay_bf_jamming_finitealphabet_6} We considered MIMO DF relay beamforming with imperfect CSI, cooperative artificial noise injection, and finite-alphabet input in the presence of a user and multiple non-colluding eavesdroppers. The source transmits common and secret messages which are intended for the user. The common message is transmitted at a fixed rate $R_{0}$. In order to maximize the worst case secrecy rate, we maximized the worst case link information rate to the user subject to: $i)$ the individual power constraints on the source and the MIMO DF relay, and $ii)$ the constraint that the best case link information rates to the $J$ eavesdroppers be less than or equal to $R_{0}$, in order to support the fixed common message rate $R_{0}$. Numerical results showing the effect of perfect/imperfect CSI and the presence/absence of AN with finite-alphabet input on the secrecy rate were presented. We would like to remark that the work presented in this paper can be extended to the amplify-and-forward relay channel. \section*{Appendix} In this appendix, we analyze the rank of the optimal solution $\boldsymbol{\Phi}$ obtained from the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_25}).
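Before proceeding with the formal argument, we remark that the numerical rank observations reported above can be checked directly from a solver's output; the following short sketch (our illustration, assuming the optimal $\boldsymbol{\Phi}$ is available as a complex NumPy array) counts the eigenvalues above a relative tolerance.

\begin{verbatim}
import numpy as np

def numerical_rank(Phi, tol=1e-6):
    # Phi is Hermitian positive semidefinite, so eigvalsh returns
    # real, nonnegative eigenvalues; count those above a tolerance
    # relative to the largest eigenvalue.
    eig = np.linalg.eigvalsh(Phi)
    return int(np.sum(eig > tol * eig.max()))
\end{verbatim}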
We take the Lagrangian of the objective function $-t$ with constraints in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_59})-(\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_26}) as follows \cite{ir25}: \begin{eqnarray} \ell(t, \ \boldsymbol{\Phi}, \ \boldsymbol{\Psi}, \ \lambda, \ \boldsymbol{\Lambda}_{1}, \ \boldsymbol{\Lambda}_{2}, \ \mu, \ \nu_{j}, \ \xi) \ = \ - t - \Tr(\boldsymbol{\Lambda}_{1}\boldsymbol{\Phi}) \nonumber \\ - \Tr(\boldsymbol{\Lambda}_{2}\boldsymbol{\Psi}) + \lambda\Big( \Tr(\boldsymbol{\Phi}) + \Tr(\boldsymbol{\Psi}) - P_{R} \Big) \nonumber \\ + \mu \Big( (t - a) \big( N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} \big) - (\boldsymbol{h} \boldsymbol{\Phi} \boldsymbol{h}^{\ast}) \Big) \nonumber \\ + \sum^{J}_{j = 1} \nu_{j}\Big( \big( \boldsymbol{z}_{j} \boldsymbol{\Phi} \boldsymbol{z}^{\ast}_{j} \big) - \Big( I^{-1}(2R_{0}) - b_{j} \Big) ( N_{0} + \boldsymbol{z}_{j} \boldsymbol{\Psi} \boldsymbol{z}^{\ast}_{j} ) \Big) \nonumber \\ + \xi \Big( (\boldsymbol{h} \boldsymbol{\Phi} \boldsymbol{h}^{\ast}) - (c - a) \big( N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} \big) \Big) \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_58} \end{eqnarray} where $\lambda \geq 0, \ \boldsymbol{\Lambda}_{1} \succeq \boldsymbol{0}, \ \boldsymbol{\Lambda}_{2} \succeq \boldsymbol{0}, \ \mu \geq 0, \ \nu_{j} \geq 0, \ \xi \geq 0$ are Lagrangian multipliers. The KKT conditions for (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_58}) are as follows: \begin{itemize} \vspace{2mm} \item[$(a1)$] all constraints in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_59})-(\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_26}), \vspace{2mm} \item[$(a2)$] $\Tr(\boldsymbol{\Lambda}_{1}\boldsymbol{\Phi}) = 0$. Since $\boldsymbol{\Lambda}_{1} \succeq \boldsymbol{0}$ and $\boldsymbol{\Phi} \succeq \boldsymbol{0}$ $\implies$ $\boldsymbol{\Lambda}_{1}\boldsymbol{\Phi} = \boldsymbol{0}$, \vspace{2mm} \item[$(a3)$] $\Tr(\boldsymbol{\Lambda}_{2}\boldsymbol{\Psi}) = 0$. Since $\boldsymbol{\Lambda}_{2} \succeq \boldsymbol{0}$ and $\boldsymbol{\Psi} \succeq \boldsymbol{0}$ $\implies$ $\boldsymbol{\Lambda}_{2}\boldsymbol{\Psi} = \boldsymbol{0}$, \vspace{2mm} \item[$(a4)$] $\lambda \Big( \Tr(\boldsymbol{\Phi}) + \Tr(\boldsymbol{\Psi}) - P_{R} \Big) \ = \ 0$, \vspace{2mm} \item[$(a5)$] $ \mu \Big( (t - a) \big( N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} \big) - (\boldsymbol{h} \boldsymbol{\Phi} \boldsymbol{h}^{\ast}) \Big) \ = \ 0$, \vspace{2mm} \item[$(a6)$] $ \forall j =1,2,\cdots,J, \ \ \nu_{j}\Big( \big( \boldsymbol{z}_{j} \boldsymbol{\Phi} \boldsymbol{z}^{\ast}_{j} \big) - \Big( I^{-1}(2R_{0}) - b_{j} \Big) ( N_{0} + \boldsymbol{z}_{j} \boldsymbol{\Psi} \boldsymbol{z}^{\ast}_{j} ) \Big) \ = \ 0$, \vspace{2mm} \item[$(a7)$] $ \xi \Big( (\boldsymbol{h} \boldsymbol{\Phi} \boldsymbol{h}^{\ast}) - (c - a) \big( N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} \big) \Big) \ = \ 0$, \vspace{2mm} \item[$(a8)$] $\frac{\partial \ell}{\partial t} = 0$ $\implies$ $\mu \big( N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} \big) = 1$. 
This further implies that $\mu > 0$, \vspace{2mm} \item[$(a9)$] $\frac{\partial \ell}{\partial \boldsymbol{\Phi}} = \boldsymbol{0}$ $\implies$ $\boldsymbol{\Lambda}_{1} = \lambda \boldsymbol{I} - \mu ( \boldsymbol{h}^{\ast} \boldsymbol{h} ) + \sum^{J}_{j=1} \nu_{j} ( \boldsymbol{z}^{\ast}_{j} \boldsymbol{z}_{j} ) + \xi ( \boldsymbol{h}^{\ast} \boldsymbol{h} )$, \vspace{2mm} \item[$(a10)$] $\frac{\partial \ell}{\partial \boldsymbol{\Psi}} = \boldsymbol{0}$ $\implies$ $\boldsymbol{\Lambda}_{2} = \lambda \boldsymbol{I} + \mu (t - a)( \boldsymbol{h}^{\ast} \boldsymbol{h} ) - \sum^{J}_{j=1} \nu_{j}\Big( I^{-1}(2R_{0}) - b_{j} \Big)( \boldsymbol{z}^{\ast}_{j} \boldsymbol{z}_{j} ) - \xi (c-a)( \boldsymbol{h}^{\ast} \boldsymbol{h} ) $. \end{itemize} The KKT conditions $(a8)$ and $(a5)$ imply that the constraint (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_59}) will be satisfied with equality. Assuming $\boldsymbol{\Phi} \neq \boldsymbol{0}$, this further implies that $t > a$. The constraints (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_60}) and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_61}) imply that $I^{-1}(2R_{0}) \ge b_{j}$ and $c > a$. The KKT conditions $(a9)$, $(a10)$, $(a2)$, $(a3)$, $(a4)$, $(a5)$, $(a6)$, and $(a7)$ imply that \begin{eqnarray} \lambda P_{R} -\mu (t-a)N_{0} + \sum^{J}_{j = 1} \nu_{j} \Big( I^{-1}(2R_{0}) - b_{j} \Big)N_{0} \nonumber \\ + \xi (c-a)N_{0} = 0. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_62} \end{eqnarray} Let $P_{R}$ be small enough such that the constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_61}) is satisfied with strict inequality. This implies that the KKT condition $(a7)$ will be satisfied only when $\xi = 0$. With $\xi = 0$, we consider the scenario when the expression (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_62}) is satisfied for $\lambda > 0$. With $\lambda > 0$, the KKT condition $(a4)$ implies that $\Tr(\boldsymbol{\Phi}) + \Tr(\boldsymbol{\Psi}) = P_{R}$, i.e., the entire relay power, $P_{R}$, will be used for transmission. Further, we rewrite the KKT condition $(a9)$ as follows: \begin{eqnarray} \boldsymbol{\Lambda}_{1} + \mu ( \boldsymbol{h}^{\ast} \boldsymbol{h} ) \ = \ \lambda \boldsymbol{I} + \sum^{J}_{j=1} \nu_{j} ( \boldsymbol{z}^{\ast}_{j} \boldsymbol{z}_{j} ) \succ \boldsymbol{0}. \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_63} \end{eqnarray} The above expression implies that $rank\big( \boldsymbol{\Lambda}_{1} ) \geq N - rank \big(\mu ( \boldsymbol{h}^{\ast} \boldsymbol{h} ) \big) = N-1$. The KKT condition $(a2)$ further implies that $rank\big( \boldsymbol{\Lambda}_{1})=N-1$ and $rank\big( \boldsymbol{\Phi})=1$. We now show that the solution $\boldsymbol{\Phi}$ of the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_25}) has rank-1 even for large values of $P_{R}$. Let $\boldsymbol{\Phi} \neq \boldsymbol{0} \ (\succeq \boldsymbol{0})$ and $\boldsymbol{\Psi} \neq \boldsymbol{0} \ (\succeq \boldsymbol{0})$ be the optimal solutions of (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_25}) with \begin{eqnarray} \Tr(\boldsymbol{\Phi})+\Tr(\boldsymbol{\Psi}) \ = \ P \ \leq \ P_{R}, \nonumber \end{eqnarray} and the objective function value $t > 0$. Define \begin{eqnarray} \boldsymbol{\Phi}_{0} \ &=& \ \frac{\boldsymbol{\Phi}}{\Tr(\boldsymbol{\Phi}) + \Tr(\boldsymbol{\Psi})} \ = \ \frac{\boldsymbol{\Phi}}{P}, \nonumber \\ \boldsymbol{\Psi}_{0} \ &=& \ \frac{\boldsymbol{\Psi}}{\Tr(\boldsymbol{\Phi}) + \Tr(\boldsymbol{\Psi})} \ = \ \frac{\boldsymbol{\Psi}}{P}. 
\nonumber \end{eqnarray} It is obvious that the objective function value, $t$, in the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_25}) is a non-decreasing function of $P_{R}$. As discussed previously, for small values of $P_{R}$ the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_25}) attains its maximum value when the entire power is used, i.e., $(\boldsymbol{\Phi}_{}, \ \boldsymbol{\Psi}_{}) = (P_{}\boldsymbol{\Phi}_{0}, \ P_{}\boldsymbol{\Psi}_{0}) = (P_{R}\boldsymbol{\Phi}_{0}, \ P_{R}\boldsymbol{\Psi}_{0})$. This implies that the objective function value, $t$, in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_25}) is a strictly increasing function of $P_{R}$ for small values of $P_{R}$. We now fix the directional matrices $(\boldsymbol{\Phi}_{0}, \ \boldsymbol{\Psi}_{0})$ which are obtained for small values of $P_{R}$ such that the constraint in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_61}) is satisfied with strict inequality. We rewrite the constraints in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_60}) and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_61}) in the following forms, respectively: \vspace{-2mm} {\small \begin{eqnarray} \forall j=1,2,\cdots,J, \quad I^{-1}(2R_{0}) \ \geq \ \bigg( b_{j} + \frac{ \boldsymbol{z}_{j} \boldsymbol{\Phi} \boldsymbol{z}^{\ast}_{j} }{ N_{0} + \boldsymbol{z}_{j} \boldsymbol{\Psi} \boldsymbol{z}^{\ast}_{j} } \bigg) , \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_64} \\ c \ \geq \ \bigg( a + \frac{ \boldsymbol{h} \boldsymbol{\Phi} \boldsymbol{h}^{\ast} }{ N_{0} + \boldsymbol{h} \boldsymbol{\Psi} \boldsymbol{h}^{\ast} } \bigg) \label{eqn_mimo_df_relay_bf_jamming_finitealphabet_65}. \end{eqnarray} } \vspace{-2mm} \hspace{-3.5mm} In the above inequalities, the derivatives of the functions $(\frac{ \boldsymbol{z}_{j} \boldsymbol{\Phi} \boldsymbol{z}^{\ast}_{j} }{ N_{0} + \boldsymbol{z}_{j} \boldsymbol{\Psi} \boldsymbol{z}^{\ast}_{j} } )$ and $( \frac{\boldsymbol{h}^{}\boldsymbol{\Phi}\boldsymbol{h}^{\ast}}{N_{0} + \boldsymbol{h}^{} \boldsymbol{\Psi} \boldsymbol{h}^{\ast}} )$ w.r.t. $P$, when evaluated at $(P\boldsymbol{\Phi}_{0}, \ P\boldsymbol{\Psi}_{0})$, are $\geq 0$ and $>0$, respectively. This implies that the right hand sides of the inequalities in (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_64}) and (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_65}) are non-decreasing and strictly increasing functions of $P$, respectively, at $(P\boldsymbol{\Phi}_{0}, \ P\boldsymbol{\Psi}_{0})$. This further implies that if the above inequalities are satisfied at $(P_{R}\boldsymbol{\Phi}_{0}, \ P_{R}\boldsymbol{\Psi}_{0})$, the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_25}) will attain its maximum value at $(P_{R}\boldsymbol{\Phi}_{0}, \ P_{R}\boldsymbol{\Psi}_{0})$, i.e., when the entire available relay power, $P_{R}$, is used. When $P_{R}$ is so large that the above inequalities fail to hold at $(P_{R}\boldsymbol{\Phi}_{0}, \ P_{R}\boldsymbol{\Psi}_{0})$, the optimization problem (\ref{eqn_mimo_df_relay_bf_jamming_finitealphabet_25}) will attain its maximum value at $(P\boldsymbol{\Phi}_{0}, \ P\boldsymbol{\Psi}_{0})$, where $P \ (< P_{R})$ is the maximum power at which the above inequalities are satisfied. The excess power $(P_{R} - P)$ will remain unused. This implies that the ranks of $\boldsymbol{\Phi}$ and $\boldsymbol{\Psi}$ remain constant for large values of $P_{R}$.
\section{Introduction}\label{introduction} Dense artificial neural networks are a commonly used machine-learning technique in deep learning with a wide range of applications. Yet, they have multiple limitations, several of which are potentially addressable by sparse artificial neural networks. Most existing research on this topic focuses on reducing storage and prediction time, for example to be able to use neural networks in embedded devices \cite{han2015,srinivas2017}. We, however, are interested in algorithms that reduce training time as well. By reducing memory requirements and training time, usage of neural networks is made more accessible. It could for example facilitate training neural networks in those embedded devices. Furthermore, it may allow for deploying very large networks, such that they can be used to tackle datasets with a large number of features directly. This objective makes approaches such as \cite{han2015,srinivas2017} unsuitable for our purposes, as they still use the full dense network during training. There are also approaches that do not use the dense network, but instead depend on defining the network's topology before the training phase \cite{dey2017,mocanu2016}. This pre-defined sparsity may however not be optimal for all datasets. For this reason, we consider an alternative approach that does not use the full network and that does not rely on a pre-defined network topology: Sparse Evolutionary Training (SET) \cite{mocanu2018}. SET starts out with a randomly generated sparse network and updates its topology after each training epoch based on the values of its weights. In SET's experiments, training a network using SET gave better results than its nonevolutionary (i.e. using pre-defined sparsity) counterpart, as the final topology is better suited to the training data. Additionally, its results were most of the time even better than those of its densely connected counterpart. In the original paper, it was already suggested that the algorithm may still be improved by using alternative techniques to evolve the network, such as preferential attachment. In this paper, we take a novel direction and propose to evolve the network's topology by using domain knowledge. This is also our main contribution: to determine the importance of a connection based on the cosine similarity between the activations of the two neurons of that connection. The reason is that a cosine similarity close to zero is an indication that there is no meaningful relation between these neurons. We then systematically propose five new algorithms that use this technique to replace the original procedures for adding and removing connections in SET. On top of that, we analyze the additional computational complexity of our method and argue that it should not cause noticeable overhead, while suggesting methods to further reduce complexity for extreme cases. Next, each algorithm is tested on 8 different datasets in order to demonstrate the improvement of our approach over SET. Our results show that our algorithms usually outperform SET and dense state-of-the-art neural network techniques, while having many times fewer parameters than the latter. They also reveal that using cosine similarity to evolve a network may reduce overfitting.
Finally, we show that the evolved connectivity patterns of the input neurons (or input features) reflect very well their impact on the classification task and may contribute to further understanding the behavior of neural networks with adaptive sparse connectivity. \section{Background}\label{background} There are many types of neural networks, such as MultiLayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) \cite{lecun2015}. In this paper we will focus on the most vanilla type of neural networks, MLPs, as they represent 61\% of a typical Google TPU (Tensor Processing Unit) workload for production neural network applications, while convolutional neural networks represent just 5\% \cite{jouppi2017datacenter}. A dense MLP consists of a number of fully connected bipartite layers. In contrast to these commonly used dense networks, there is also research into sparsely connected networks. Dense neural networks have been shown to have a large number of redundant parameters; in some cases more than 95\% of the parameters can be predicted from the remaining ones without accuracy loss \cite{denil2013}. In early work on sparsification, Optimal Brain Damage \cite{lecun1990} and Optimal Brain Surgeon \cite{hassibi1993} use gradient methods in order to sparsify networks during training. They noted that a sparse network has several advantages over its dense counterpart, such as better generalization, reduced memory footprint and improved prediction time. More recently, \cite{han2015} proposed a magnitude-based method for obtaining a sparse network. After pruning the dense network during training, the network is retrained in order to improve accuracy. Their motivation for employing magnitude-based pruning is that alternatives such as using second order derivatives are computationally intensive. In \cite{srinivas2017}, gate variables are introduced that represent whether a connection is present. These gate variables are parameters that are optimized during training, and as such introduce additional overhead. Note that all of these approaches do use the full dense network during training. In \cite{dey2017}, a method for obtaining a sparse neural network was introduced, which does not require training the dense network. Based on the user-specified number of connections per neuron, the topology is determined by an interleaver algorithm ensuring good spatial spread of connections. Although this pre-determined sparsity may allow for larger networks, it is not flexible for handling a wide range of datasets with various characteristics. In \cite{decphdthesis, mocanu2018} SET is introduced, while variants of it are discussed for federated learning in \cite{setfederatedlearning} and image classification in \cite{mostafa2019parameter}. SET is an algorithm for training sparse neural networks with adaptive connectivity. Like the previously described approaches, SET starts out with a sparsely connected network. The topology of this network, however, is not static but instead evolves during training. After each training epoch, when the weights have been trained to a reasonable level to suit the provided data, the connections having weights closest to zero are removed (weight-magnitude-based removal). New connections replacing the removed ones are randomly selected and added to the network. As the evolution of the network is specific to the data, this approach is more flexible.
This was also revealed in their results, in which SET outperforms both dense networks and static (i.e. nonevolutionary) sparse networks. The original SET algorithm uses a straightforward randomized method for evolving the topology of a neural network, while encouraging research into more sophisticated methods. The interested reader is referred to \cite{mocanu2018} for more details on SET. Further on, we provide background on cosine similarity \cite{tan2006}, a similarity measure that is used in our method. Cosine similarity is defined as the cosine of the angle between two vectors. It has various applications, such as text classification, document clustering and face verification. An important reason for adopting cosine similarity is the fact that it is efficient to evaluate. Another important property is that the length of each vector that is being compared is normalized. \section{Proposed methods}\label{method} This section details our proposed approach. First, it presents how cosine similarity can be used to determine the importance of neural network connections. Second, it introduces 5 new algorithms that integrate cosine similarity-based connection importance into the SET procedure. Finally, it discusses the computational complexity of our approach and its relation to Hebbian learning. \subsection{Cosine similarity to detect connection importance} The basic idea is that the sparse network topology can be evolved based on the behavior of its neurons. The importance of a connection is determined by the similarity of the activations of these neurons, which were obtained during the feedforward phase of an epoch. The similarity measure that we employ is cosine similarity. For activation vectors $\textbf{a}$ and $\textbf{b}$, this is defined as: \begin{equation} \frac{\textbf{a} \cdot \textbf{b}}{\norm{\textbf{a}}\norm{\textbf{b}}} \label{cosine_similarity} \end{equation} Intuitively, if two neurons exhibit similar behavior, this indicates that the value of one neuron can help predict the value of the other neuron and can therefore aid in propagating patterns present in the data. For this reason, this connection can help to establish the behavior of the receiving neuron in the simplest way possible. Thus, a connection is more likely to be meaningful if there is a consistent relation between the behavior of the two neurons. We argue that cosine similarity is a suitable way of determining whether such a relation exists, since it can consider two vectors to be similar when the signs of the activations frequently agree, but the magnitudes do not. This is desirable as a consistent difference in magnitude can be mitigated by the weight of a connection. Below, we systematically introduce five new algorithms to evolve sparse neural networks using cosine similarity. For consistency, the notation introduced in \cite{mocanu2018} is used. Let $n^k$ denote the number of neurons of layer $k$ in the neural network, with $s$ available training samples. $\varepsilon$ is a constant controlling the sparsity of the network. Each bipartite layer $n^{k-1}\times n^k$ has weight matrix $\textbf{W}^k \in \textbf{R}^{n^{k-1}\times n^k}$. \subsection{Proposed algorithms}\label{methods} \subsubsection{Cosine Similarity-based Connection Addition (CoDASET)}\label{sim_add} The neural network is initialized in the same way as the network of SET: each connection in a bipartite layer with $n^{k-1}\times n^k$ neurons exists with probability $\frac{\varepsilon(n^{k-1}+n^k)}{n^{k-1}n^k}$, resulting in a sparse network.
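For concreteness, a minimal NumPy sketch of this initialization (our own rendering of the scheme; \texttt{n\_prev}, \texttt{n} and \texttt{eps} stand for $n^{k-1}$, $n^{k}$ and $\varepsilon$) is given below.

\begin{verbatim}
import numpy as np

def init_sparse_layer(n_prev, n, eps, seed=0):
    rng = np.random.default_rng(seed)
    # Each connection exists independently with probability
    # eps*(n_prev + n)/(n_prev*n), i.e. roughly eps*(n_prev + n)
    # connections per bipartite layer.
    p = eps * (n_prev + n) / (n_prev * n)
    mask = rng.random((n_prev, n)) < p
    weights = rng.standard_normal((n_prev, n)) * mask
    return weights, mask
\end{verbatim}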
In the training phase, the weights are updated by stochastic gradient descent. However, we also fill out an activation matrix $\textbf{A}^k \in \textbf{R}^{n^k\times s}$ for each layer $k$ during the feedforward phase. After each training epoch, the cosine similarity matrix $\textbf{C}^k \in \textbf{R}^{n^{k-1}\times n^k}$ can then be calculated as follows: \begin{equation} \textbf{C}_{pq}^k = \left|\frac{\textbf{A}^{k-1}_p \cdot \textbf{A}^k_q}{\norm{\textbf{A}^{k-1}_p}\norm{\textbf{A}^k_q}}\right| \label{add_sim} \end{equation} where $p$ and $q$ are neurons in layers $k-1$ and $k$, respectively, and $\textbf{A}^k_p$ is the activation vector for neuron $p$ of length $s$. This is followed by the rewiring step, in which the connections having weights closest to zero are removed. Since $\textbf{C}^k$ contains absolute values, we can retrieve the set of connections with highest similarity from this matrix and add these connections to the network. Pseudocode for this approach can be found in Algorithm \ref{algorithm:CoDASET}. \begin{algorithm}[h] \tiny \caption{CoDASET pseudocode} \label{algorithm:CoDASET} \begin{algorithmic} \STATE initialize SET model\; \FOR {each training epoch $e$} \STATE perform standard feedforward phase, storing activations in activation matrix $\textbf{A}$\; \STATE backpropagate and perform weights update\; \FOR {each bipartite SC layer $k$} \STATE remove a fraction $\zeta$ of the smallest positive weights\; \STATE remove a fraction $\zeta$ of the highest negative weights\; \STATE calculate cosine similarity matrix $\textbf{C}$ according to equation \ref{add_sim}\; \STATE add connections with largest value in $\textbf{C}$ in the same amount as previously removed\; \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \subsubsection{Cosine Similarity-based Probabilistic Connection Addition (CoPASET)}\label{randomly_add} Here, we propose a probabilistic variant of CoDASET. New connections are not chosen by using cosine similarity directly, but instead by drawing from a probability distribution based on the cosine similarities of the connections. Each connection has a probability of being added to the network that corresponds to its normalized cosine similarity: \begin{equation} P(\textbf{W}^k_{pq})=\frac{\textbf{C}^k_{pq}}{\sum_{i=1}^{n^{k-1}}\sum_{j=1}^{n^k}\textbf{C}_{ij}^{k}} \label{equation:CoPASET} \end{equation} where $P(\textbf{W}^k_{pq})$ is the probability of adding a connection between neurons $p$ and $q$ in bipartite layer $k$. This method reintroduces randomness into the connection selection procedure and may therefore lead to a better exploration of possible topologies, as CoDASET can potentially get stuck in a local minimum (adding and removing the same connections after each epoch). On the other hand, we do expect to select more interesting connections than SET by using a probability distribution proportional to cosine similarity. Pseudocode is presented in Algorithm \ref{algorithm:CoPASET}. \begin{algorithm}[b!]
\tiny \caption{CoPASET pseudocode} \label{algorithm:CoPASET} \begin{algorithmic} \STATE initialize SET model\; \FOR {each training epoch $e$} \STATE perform standard feedforward phase, storing activations in activation matrix $\textbf{A}$\; \STATE backpropagate and perform weights update\; \FOR{each bipartite SC layer $k$} \STATE remove a fraction $\zeta$ of the smallest positive weights\; \STATE remove a fraction $\zeta$ of the highest negative weights\; \STATE calculate cosine similarity matrix $\textbf{C}$ according to equation \ref{add_sim}\; \STATE create probability distribution following equation \ref{equation:CoPASET}\; \STATE add connections by drawing samples in the same amount as previously removed\; \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \subsubsection{Cosine Similarity-based Connection Removal (CoRSET)}\label{weighted_remove} SET removes connections based on their weight. For such magnitude-based methods it has been shown that they often remove the wrong connections, e.g. by Optimal Brain Surgeon \cite{hassibi1993}. Unfortunately, their proposed alternative is computationally expensive. Therefore, we also apply cosine similarity to connection elimination. Instead of eliminating the connections that have smallest weight, the connections for which the product of their weight and cosine similarity is smallest are eliminated: \begin{equation} \textbf{M}_{pq}^k = \textbf{W}^k_{pq}\textbf{C}^k_{pq} \end{equation} where $pq$ is an existing connection in layer $k$. Note that this method does not require computing $\textbf{C}^k$ fully, since only the values for existing connections are used. We hypothesize that using cosine similarity as an additional indicator for the importance of a connection can result in improved connection removal. Algorithm \ref{algorithm:CoRSET} shows pseudocode for this method. \begin{algorithm}[b!] \tiny \caption{CoRSET pseudocode} \label{algorithm:CoRSET} \begin{algorithmic} \STATE initialize SET model\; \FOR {each training epoch $e$} \STATE perform standard feedforward phase, storing activations in activation matrix $\textbf{A}$\; \STATE backpropagate and perform weights update\; \FOR {each bipartite SC layer $k$} \FOR{each existing connection $pq$ in $k$} \STATE $\textbf{C}_{pq}^k \leftarrow \left|\frac{\textbf{A}^{k-1}_p \cdot \textbf{A}^k_q}{\norm{\textbf{A}^{k-1}_p}\norm{\textbf{A}^k_q}}\right|$\; \ENDFOR \STATE \textit{\%calculate metric for removal}\; \FOR{each existing connection $pq$ in $k$} \STATE $\textbf{M}_{pq}\leftarrow \textbf{W}^k_{pq}\textbf{C}^k_{pq}$\; \ENDFOR \STATE remove fraction $\zeta$ of connections from the network with smallest metric value in $\textbf{M}$\; \STATE randomly add connections in the same amount as previously removed\; \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \subsubsection{CoDACoRSET} This method combines CoDASET with CoRSET, i.e. it uses CoRSET to remove the unimportant connections, and CoDASET to add new connections. \subsubsection{CoPACoRSET} This method combines CoPASET with CoRSET, i.e. it uses CoRSET to remove the unimportant connections, and CoPASET to add new connections. \subsection{Computational Complexity} An important property of SET is the potential to reduce both training and prediction time by capitalizing on the sparsity of the network. For this reason, we analyze the additional overhead which our proposed approach brings, such that a trade-off can be made between computational complexity and any possible accuracy improvements. 
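To ground the discussion that follows, the NumPy sketch below (our illustration; \texttt{A\_prev} and \texttt{A} hold the stored activations $\textbf{A}^{k-1}$ and $\textbf{A}^{k}$, one row per neuron) computes the similarity matrix of equation \ref{add_sim}, the CoPASET sampling distribution of equation \ref{equation:CoPASET}, and the CoRSET removal metric.

\begin{verbatim}
import numpy as np

def cosine_matrix(A_prev, A, tiny=1e-12):
    # C[p, q] = |A_prev[p] . A[q]| / (||A_prev[p]|| * ||A[q]||)
    norm_p = np.linalg.norm(A_prev, axis=1, keepdims=True)
    norm_q = np.linalg.norm(A, axis=1, keepdims=True)
    return np.abs(A_prev @ A.T) / (norm_p * norm_q.T + tiny)

def copaset_sample(C, k, seed=0):
    # Draw k distinct connections with probability proportional to C
    # (the full algorithm would first zero out the probabilities of
    # already-existing connections).
    rng = np.random.default_rng(seed)
    p = (C / C.sum()).ravel()
    flat = rng.choice(C.size, size=k, replace=False, p=p)
    return np.unravel_index(flat, C.shape)  # (row indices, col indices)

def corset_metric(W, C, mask):
    # CoRSET metric M = W*C on existing connections only; the fraction
    # zeta of connections with the smallest metric is then removed.
    return np.where(mask, W * C, np.inf)
\end{verbatim}

The single matrix product in \texttt{cosine\_matrix} accounts for the dominant $sn^{k-1}n^{k}$ multiplications, while CoRSET evaluates the metric only on the existing (sparse) connections; this is the cost difference quantified next.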
The extra overhead mainly resides in calculating the cosine similarity matrix $\textbf{C}$. For each entry in this matrix, 3 dot products between vectors of size $s$ are performed. CoRSET, however, does not require calculating the full matrix, since only the cosine similarities of existing connections are needed. Therefore, it requires about $3s\varepsilon(n^{k-1}+n^k)$ computations per bipartite layer $n^{k-1}\times n^k$, i.e. $3s$ computations for each of the (expected) $\varepsilon(n^{k-1}+n^k)$ existing connections. CoDASET and CoPASET on the other hand do use the full matrix, thus they need $3sn^{k-1}n^k$ computations per bipartite layer $n^{k-1}\times n^k$. This is comparable to a single feedforward phase of a dense network, in which such a bipartite layer calculates $sn^{k-1}n^k$ products. Since the backpropagation phase dominates the feedforward phase, we argue that our method should not cause noticeable overhead in an efficient implementation. \subsection{Relation to Hebbian Learning} Hebbian learning \cite{hebb1949} is an alternative learning algorithm for neural networks. Instead of updating weights using global information from a loss function, weights are updated solely based on the values of the activations of the pre- and post-synaptic neuron of that connection, i.e. in a local manner. Pre-synaptic neurons refer to neurons sending their activations over a connection in a bipartite layer, with post-synaptic neurons being the receivers. Using this terminology, the principle that Hebbian learning is based on can be stated as the theory that a post-synaptic neuron is more responsive to activations of pre-synaptic neurons that frequently take part in firing this neuron. In other words, a connection is strengthened if the activations of its neurons agree; this is often summarized as ``fire together, wire together''. Following this principle, weights are updated by Hebb's rule: \begin{equation} \dot{w}_{ij} = \eta x_i y_j \label{classic_hebb_rule} \end{equation} where $\dot{w}_{ij}$ is the change in weight magnitude, $\eta$ is the learning rate, and $x_i$ and $y_j$ are the activations of neurons $i$ and $j$, respectively. As \cite{wadhwa2016} pointed out, Hebbian learning is largely ignored for machine learning tasks. Yet, it has some interesting properties, such as being biologically plausible. For this reason, we discuss the similarities: the cosine similarity of a connection in our method is also determined by multiplying the activations of its neurons, albeit normalized. So if the activations of two connected neurons agree, both Hebbian learning and our methods would increase the importance of this connection, respectively resulting in an increased weight and a better chance of adding or preserving this connection. Thus, both methods reward connections between neurons that exhibit similar behavior. There is, however, a difference in usage: Hebb's rule is used to optimize the values of the network's parameters, whereas cosine similarity here is employed in order to evolve the network's topology. \section{Experiments}\label{experiments} \subsection{Setup} Experiments were performed on several datasets retrieved from the UCI Machine Learning Repository\footnote{\url{http://archive.ics.uci.edu/ml/}}: MicroMass \cite{micromass}, CNAE-9 \cite{cnae}, Epilepsy \cite{epilepsy}, Human Activity Recognition (HAR) \cite{har}, Madelon \cite{madelon} and ISOLET \cite{ISOLET}. Additionally, two image datasets were used: COIL-100 \cite{coil100} and Fashion-MNIST \cite{fashion}. An overview of the properties of these datasets can be found in Table \ref{table:datasets}.
We chose this diverse set of datasets covering various domains in order to demonstrate the general applicability of our method. On top of that, these datasets are difficult enough such that there is room for improvement. \begin{table*}[t!] \caption{Dataset statistics} \label{table:datasets} \vskip 0.15in \begin{center} \begin{scriptsize} \begin{tabular}{l|l|l|l|l|l|l} Dataset & Domain & Data type & \# classes & \# features & \# train samples & \# test samples \\ \hline ISOLET & Speech & Continuous & 26 & 617 & 6238 & 1559 \\ HAR & Phone sensor & Continuous & 6 & 561 & 7352 & 2947\\ Madelon & Artificial & Discrete & 2 & 500 & 2000 & 600\\ Epilepsy & EEG & Discrete & 2 & 178 & 9244 & 2256\\ CNAE-9 & Text & Discrete & 9 & 856 & 858 & 222\\ MicroMass & Mass-spectrometry & Discrete & 20 & 1300 & 454 & 117\\ Fashion-MNIST & Image & Discrete & 10 & 784 & 60000 & 10000\\ COIL-100 & Image & Discrete & 100 & 1024 & 5764 & 1436\\ \end{tabular} \end{scriptsize} \end{center} \vskip -0.15in \end{table*} The MicroMass dataset consists of mass-spectrometry data obtained from bacterial strains. The objective is to discriminate between bacterial species based on spectra, which has been shown to be hard for several species. Note that only pure spectra data was considered. CNAE-9 is a text classification task, in which 1080 documents represented by their word frequencies are classified into 9 categories. This is a sparse dataset: 99.22\% of the data consists of zeros. Epilepsy is a time-series dataset from EEG recordings of 500 individuals, in which the objective is to detect epileptic seizures. HAR consists of various smartphone sensor statistics, from which the activity of the person carrying the phone must be deduced. Madelon is an artificial dataset that has 5 informative features and 15 linear combinations of those features. The other 480 features are probes that provide no information about the class label. ISOLET is a speech dataset, from which it must be recognized which letter of the alphabet was spoken by a subject. COIL-100 and Fashion-MNIST are both small grayscale image datasets. In order to perform experiments on the previously described datasets, they must be split into a training set and a test set. For ISOLET, HAR, Madelon and Fashion-MNIST, such a split was already provided. For Epilepsy, CNAE-9, MicroMass and COIL-100 on the other hand, a custom split had to be made. We opted to randomly sample 20\% of the data as test data, the remainder being the training data. The aim of our experiments is to study the effect of our proposed training algorithms on the accuracy of sparse MLPs, compared to the original approach, i.e. SET-MLP (sparse MLP trained with SET). The sparse MLPs trained with our algorithms are dubbed further: CoDASET-MLP, CoPASET-MLP, CoRSET-MLP, CoDACoRSET-MLP and CoPACoRSET-MLP. Since we are not trying to optimize hyperparameters, we mostly used the same configuration of parameters as SET, which is also our baseline. For all experiments, the multilayer perceptron that was used consisted of an input layer, three hidden layers of 1000 neurons each and an output layer. The only exception to this are the experiments involving Fashion-MNIST, in which hidden layers of 200 neurons were used instead. In addition, we used activation function SReLU \cite{jin2016}, sparsity level $\varepsilon=20$, rewire rate $\zeta=0.3$ and a dropout rate of 0.3, all of which were also used in SET. Finally, the learning rate $\eta$ was chosen by empirically experimenting with different values. 
MicroMass and Madelon were found to give best results for $\eta=0.1$; the other datasets use $\eta=0.01$. Please note that these hyperparameter values yield the same number of connections for all models studied, i.e. SET-MLP, CoDASET-MLP, CoPASET-MLP, CoRSET-MLP, CoDACoRSET-MLP and CoPACoRSET-MLP, on the same dataset. \subsection{Implementation Details} Our method is implemented on top of Keras, using a weight mask that sets selected weights to zero in order to create a sparse network (a minimal sketch of this mechanism is given below). The weight rewiring step itself is implemented in pure Python. Proof-of-concept code is available at \href{https://github.com/joostPieterse/CosineSET}{https://github.com/joostPieterse/CosineSET}. \subsection{Results} The resulting accuracy plots of our experiments are shown in Fig. \ref{fig:sim}. Note that the experiments on MicroMass were run multiple times for each method; the plot shown in Fig. \ref{fig:sim} is from the run with median maximum accuracy for that method. The reason for running MicroMass in particular multiple times is twofold. First, when performing experiments on MicroMass, we observed a relatively large variance in accuracy over time. Second, the difference in maximum accuracy between the results of each of the different methods was quite small. Consequently, multiple runs were needed in order to obtain statistically significant results from this dataset. \begin{figure*}[t!] \begin{center} \subfigure{{\includegraphics[width=.245\linewidth]{addsim-isolet}}} \subfigure{{\includegraphics[width=.245\linewidth]{removesim-isolet}}} \subfigure{{\includegraphics[width=.245\linewidth]{addsim-har}}} \subfigure{{\includegraphics[width=.245\linewidth]{removesim-har}}}\\ \vspace{-1em} \subfigure{{\includegraphics[width=.245\linewidth]{addsim-madelon}}} \subfigure{{\includegraphics[width=.245\linewidth]{removesim-madelon}}} \subfigure{{\includegraphics[width=.245\linewidth]{addsim-epilepsy}}} \subfigure{{\includegraphics[width=.245\linewidth]{removesim-epilepsy}}}\\ \vspace{-1em} \subfigure{{\includegraphics[width=.245\linewidth]{addsim-cnae}}} \subfigure{{\includegraphics[width=.245\linewidth]{removesim-cnae}}} \subfigure{{\includegraphics[width=.245\linewidth]{addsim-micromass}}} \subfigure{{\includegraphics[width=.245\linewidth]{removesim-micromass}}}\\ \vspace{-1em} \subfigure{{\includegraphics[width=.245\linewidth]{addsim-fashion}}} \subfigure{{\includegraphics[width=.245\linewidth]{removesim-fashion}}} \subfigure{{\includegraphics[width=.245\linewidth]{addsim-coil100}}} \subfigure{{\includegraphics[width=.245\linewidth]{removesim-coil100}}} \caption{Evaluation of the proposed cosine similarity-based sparse MLP models against the baseline SET-MLP on 8 different datasets.} \label{fig:sim} \end{center} \vskip -0.2in \end{figure*} Table \ref{table:sim} lists the maximum accuracy for each method/dataset combination. Note that the maximum accuracies for MicroMass are averaged over all runs on that dataset/method. Experiments on other datasets are reported after one run, as we did not observe a significant difference across multiple runs. For context, we will first provide an overview of the results of previous research on neural networks on these datasets, before analyzing our own methods. We emphasize, however, that the purpose of these experiments is not necessarily to improve accuracy for a specific dataset by e.g. tuning hyperparameters, but instead to identify the effect on accuracy of integrating our approach into the SET procedure.
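As a concrete rendering of the implementation detail mentioned above, the weight mask can be re-applied after every parameter update so that pruned weights stay exactly zero; the following sketch assumes TensorFlow 2.x Keras and is illustrative only, not the authors' exact code.

\begin{verbatim}
import tensorflow as tf

class WeightMaskCallback(tf.keras.callbacks.Callback):
    # Re-apply a fixed 0/1 mask after each batch so that pruned
    # weights stay exactly zero during training.
    def __init__(self, masks):  # masks: {layer_name: 0/1 array}
        super().__init__()
        self.masks = masks

    def on_train_batch_end(self, batch, logs=None):
        for name, mask in self.masks.items():
            layer = self.model.get_layer(name)
            w, b = layer.get_weights()
            layer.set_weights([w * mask, b])
\end{verbatim}

In the evolutionary setting, the masks themselves are additionally rewired between epochs according to the chosen variant (SET, CoDASET, CoPASET, etc.).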
To the best of our knowledge, the best neural network approach for ISOLET using no further signal processing has an accuracy of 96.02\% \cite{kochetov2016}, by employing an MLP whose hyperparameters were optimized by a software library. We improve upon this result with an accuracy of 96.54\% for our best method, while using only 0.14M parameters compared to their 1.1M parameter MLP. Among the neural network-based approaches to HAR, an accuracy of 95.75\% was obtained using a CNN \cite{ronao2015}. Our best result improves this by about 2\%, even though the incorporated MLP replacing the softmax layer of their CNN already has twice as many parameters as our MLP. Madelon is not commonly tackled by neural networks. The difficulty of using an MLP for this dataset for example becomes clear in \cite{santos2010}. They obtained an accuracy of 56.0\% for this two-class classification problem when using all features in an MLP, whereas our best method has an accuracy of 77.50\%. In \cite{orhan2011}, the accuracy of recognizing epileptic seizures is improved from 93.2\% using an MLP to 99.6\% using both an MLP and K-means. Their result is better than our best accuracy of 97.96\%, although this may be because a combination of two techniques is used. For CNAE-9, an accuracy of 97.2\% was obtained with an MLP \cite{ko2017}, which is more than our best result of 95.95\% while using a similar number of parameters. An accuracy of 91.06\% can be obtained on Fashion-MNIST \cite{gouk2018}, approximately 2\% more than our best result. However, they use a much larger MLP that has 7.0M parameters, while our MLP for Fashion-MNIST only has 0.038M parameters. In \cite{vicente2003}, an accuracy of 96\% was obtained for COIL-100 by employing both Principal Component Analysis and an MLP, which we improve by over 2\%. So, to summarize, the state-of-the-art is improved upon by at least one of the methods listed in Table \ref{table:sim} for 4 out of the 7 previously mentioned datasets, despite the fact that hyperparameters were not optimized for these specific datasets. For the unmentioned dataset, MicroMass, we are not aware of any MLP results reported in the literature. However, in \cite{vervier2015} an accuracy of 89.4\% was obtained on MicroMass using an SVM-based strategy. \begin{table*}[t!] \caption{Cosine similarity accuracies for our newly proposed algorithms and some combinations thereof. Each entry denotes \textit{accuracy (relative accuracy compared to SET-MLP)}; the entry with the highest accuracy for its dataset is in bold. Sparsity level represents the percentage of missing (zeroed-out) connections in the sparse MLPs out of the total number of connections in their corresponding fully-connected MLPs.
Per dataset, all sparse MLP models have the same amount of parameters (connections).} \label{table:sim} \vspace{-1em} \begin{center} \begin{scriptsize} \begin{sc} \begin{tabular}{l|l|l|l|l|l|l|r|r} Dataset & SET- & CoDASET- & CoPASET- & CoRSET- & CoDACoRSET- & CoPACoRSET- &Sparsity& Number of\\ & MLP (\%) & MLP (\%) & MLP (\%) & MLP (\%) & MLP (\%) & MLP (\%) &level (\%)&parameters\\ \hline ISOLET&95.45&\textbf{96.54 (+1.09)}&95.70 (+0.25)&95.13 (-0.32)&96.09 (+0.64)&95.96 (+0.51)&94.76&138\textbf{k}\\ HAR&96.67&97.12 (+0.45)&97.01 (+0.34)&\textbf{97.69 (+1.02)}&97.46 (+0.79)&97.12 (+0.45)&95.43&117\textbf{k}\\ Madelon&64.33&70.67 (+6.34)&70.00 (+5.67)&70.67 (+6.34)&\textbf{77.50 (+13.17)}&72.50 (+8.17)&95.52&112\textbf{k}\\ Epilepsy&97.47&97.74 (+0.27)&97.78 (+0.31)&97.61 (+0.14)&\textbf{97.96 (+0.49)}&97.65 (+0.18)&95.15&105\textbf{k}\\ CNAE-9&94.59&95.05 (+0.46)&94.59 (0.00)&95.50 (+0.91)&94.59 (0.00)&\textbf{95.95 (+1.36)}&95.59&126\textbf{k}\\ MicroMass&85.47&85.47 (0.00)&87.46 (+1.99)&85.47 (0.00)&\textbf{88.03 (+2.56)}&86.97 (+1.50)&95.60&146\textbf{k}\\ Fashion-MNIST&88.73&87.97 (-0.76)&\textbf{89.01 (+0.28)}&88.32 (-0.41)&87.62 (-1.11)&88.05 (-0.68)&84.22&37\textbf{k}\\ COIL-100&\textbf{98.68}&97.77 (-0.91)&98.47 (-0.21)&98.40 (-0.28)&97.35 (-1.33)&98.40 (-0.28)&92.24&220\textbf{k}\\ \end{tabular} \end{sc} \end{scriptsize} \end{center} \vskip -0.3in \end{table*} \subsubsection{CoDASET-MLP} The accuracy plots of CoDASET-MLP show that its training speed is similar to SET-MLP's. A notable exception is ISOLET, in which CoDASET-MLP continues to improve accuracy for much longer than SET-MLP. So besides ISOLET, CoDASET-MLP needs a similar number of training epochs to reach maximum accuracy. CoDASET-MLP improves upon the SET-MLP baseline in terms of maximum accuracy in nearly all non-image datasets. On image datasets, however, its performance is relatively poor. An important characteristic of image datasets is that the location of features can shift, as objects appear at different points in the image, which also inspired CNNs. We hypothesize that using cosine similarity makes an MLP less robust to feature shift. However, MLPs are not frequently used for image classification in practice except for benchmarking, as more suitable alternatives such as CNNs exist. \subsubsection{CoPASET-MLP} For almost all datasets, CoPASET-MLP outperforms SET-MLP. The only dataset for which SET-MLP performs better is COIL-100, though it actually achieves highest accuracy on the other image dataset, Fashion-MNIST. The results are also in line with our expectation that its performance would be more consistent compared to CoDASET-MLP across different datasets, since the reintroduced randomness leads to better exploration of possible topologies. Because of these performance improvements, we conclude that this method outperforms SET-MLP in a wide range of applications and gives more consistent results than CoDASET-MLP. \subsubsection{CoRSET-MLP} A small improvement can be observed for most of the non-image datasets compared to SET-MLP, even obtaining best results on HAR over all methods. On the other hand, the accuracies of CoRSET-MLP are in general slightly lower compared to the accuracies of CoDASET-MLP and CoPASET-MLP. So CoRSET-MLP's results are a small improvement over SET's, but overall CoDASET-MLP and CoPASET-MLP obtained better results. \subsubsection{CoDACoRSET-MLP} In the experiments on CoDACoRSET-MLP, we observed large improvements on all but one of the non-image datasets. 
Especially noteworthy are the results on Madelon, Epilepsy and MicroMass, for which the highest accuracy over all methods was obtained, showing a clear improvement over SET. The large improvement for Madelon in particular stands out. This is an extremely noisy dataset, so the improved performance may indicate that cosine similarity reduces overfitting on this noise. Furthermore, we can see that the difference in results on image- and non-image data is even more pronounced in the results of this method. \subsubsection{CoPACoRSET-MLP} Here a similar observation can be made: replacing SET's method of removing connections with CoRSET results in lower performance for image datasets, but generally better results for the other datasets. However, CoDACoRSET-MLP has better results than this method on all but one of the non-image datasets and is therefore a more promising method. Thus we conclude that CoPASET-MLP is more suitable for use on image datasets and CoDACoRSET-MLP is more suitable for non-image datasets. \section{Discussion: Understanding Evolutionary Pattern Behavior} \label{sec:undevpaper} In the results of CoDACoRSET-MLP on Madelon, we noted a large improvement in accuracy over SET-MLP and hypothesized that this indicates that our methods reduce overfitting. In this section, we conduct further analysis of this hypothesis by analyzing the models' topologies obtained after training. Herein, we focus just on the Madelon dataset, while an extensive analysis on all datasets can be found in Appendix \ref{app:undev}. In particular, we are interested in the question of whether this difference is caused by the prioritization of features as the network evolves during training. If the right features are prioritized as the network evolves, their degrees would follow the feature importance distribution. Madelon has an interesting feature importance distribution. It contains 5 informative features and 15 combinations thereof, for a total of 20 (redundant) informative features. All other 480 features are noninformative noise. \begin{figure}[t!] \begin{center} \subfigure{{\includegraphics[width=.47\linewidth]{madelon-set-degrees}}} \subfigure{{\includegraphics[width=.47\linewidth]{madelon-CoDACoRSET-degrees}}} \vskip -0.1in \caption{Distribution of the number of connections per input neuron in SET-MLP and CoDACoRSET-MLP after training on Madelon.} \label{madelon-topology-hist} \end{center} \vskip -0.2in \end{figure} \begin{figure}[t!] \begin{center} \subfigure{{\includegraphics[width=.48\linewidth]{madelon-least-most}}} \subfigure{{\includegraphics[width=.48\linewidth]{madelon-most-least}}} \vskip -0.1in \caption{Influence of feature removal on accuracy for the Madelon dataset. The method of selecting features to remove is listed per figure. The vertical line in the left figure marks 480 input neurons removed.} \label{madelon-topology-plot} \end{center} \vskip -0.2in \end{figure} The resulting distribution of the degrees is shown in Fig.~\ref{madelon-topology-hist}. For SET-MLP, we do observe some input neurons which have more than 60 connections; these are slightly more connected than the other neurons. However, for CoDACoRSET-MLP we obtained exactly 20 neurons that are clear outliers (i.e. the input neurons with degree larger than 100). We cannot know for sure whether these 20 input neurons correspond to the 20 informative features, as this information is not provided in the dataset. Yet, this is certainly suggested by the vastly improved results obtained by CoDACoRSET-MLP compared to SET-MLP on this dataset.
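The degree-based analysis used in this section can be extracted mechanically from the trained connectivity mask of the first bipartite layer; a minimal NumPy sketch (our illustration; \texttt{mask} is the boolean input-layer connectivity matrix) is:

\begin{verbatim}
import numpy as np

def rank_features_by_degree(mask):
    # Degree of each input neuron = number of outgoing connections.
    degrees = mask.sum(axis=1)
    # Order features from least to most connected; removing them in
    # this order produces the curves discussed next.
    return np.argsort(degrees), degrees
\end{verbatim}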
Furthermore, we show in Fig.~\ref{madelon-topology-plot} the influence on accuracy of removing input neurons based on their degree. The left-hand plot shows that accuracy is maintained, and even improved, while removing the 480 input neurons of the CoDACoRSET-MLP model that have the fewest connections. When we continue to remove input neurons beyond the first 480, a steep drop in accuracy can be observed, indicating that the 20 outliers in the degree distribution indeed correspond exactly to the informative features in Madelon. The plot for SET-MLP, on the other hand, shows that its accuracy is also maintained for some time, but then degrades more gradually than that of CoDACoRSET-MLP. We therefore conclude that CoDACoRSET-MLP is a valid approach for understanding the impact of input features on the classification task, and may further be used for supervised feature selection and for probing the underlying mechanisms of neural networks; SET-MLP exhibits these characteristics as well, but to a much lesser extent. \section{Conclusion}\label{conclusion} We introduced a new approach that uses cosine similarity to evolve sparse neural networks. It improves the search for the optimal topology by exploiting domain knowledge. Based on this approach, five algorithms were proposed to train sparse neural networks: CoDASET, CoPASET, CoRSET, CoDACoRSET, and CoPACoRSET. All algorithms were tested on 8 different datasets. CoPASET had the most consistent results, outperforming the baseline SET on all but one of the datasets. CoDACoRSET, on the other hand, performs best in general on non-image data; in contrast, it obtained the worst results on image data. A possible explanation is the feature shift that can occur in image data. We therefore conclude that, of the tested methods, CoDACoRSET is the best method in terms of accuracy for non-image data. It should be noted that for most of the datasets, at least one of the algorithms proposed in this paper outperformed the state-of-the-art on MLPs for that dataset, while frequently having a few orders of magnitude fewer connections. Additionally, our experimental results indicate that using cosine similarity for the evolution of sparse networks can reduce overfitting. Finally, we showed that the evolved connectivity patterns of the input neurons can help in understanding the impact of the input features on classification. There are several directions for future work. First, further analysis could be conducted on the additional computational complexity introduced by cosine similarity for connection selection. Second, more efficient implementations of all algorithms can be researched, e.g.\ GPU implementations of cosine similarity \cite{li2010} and sparse data structures for all neural network models. Third, extensive studies can be performed on the effect of sampling the activation vectors before calculating cosine similarity, which would further reduce its computational complexity and therefore improve scalability. Fourth, a better understanding of the evolved connectivity patterns may lead to more interpretable neural network models.
\section{Introduction} The investigation of gauge gravitation theory in Riemann-Cartan spacetime (GTRC), which is a necessary generalization of metric gravitation theory within the gauge approach, obtained by including the Lorentz group in the gauge group corresponding to the gravitational interaction, shows that GTRC makes it possible to solve some principal problems of general relativity (GR), because under certain physical conditions the gravitational interaction in GTRC differs from that of GR (see for example \cite{b1,b2,b3}). This change of the gravitational interaction is caused by the more complicated structure of physical spacetime, namely by spacetime torsion. In the framework of GTRC the gravitational interaction can be repulsive even for usual gravitating matter with positive energy density and pressure. The effect of gravitational repulsion appears under extreme conditions, when the energy density and pressure are extremely high, and also when the energy density is very small and the vacuum effect of gravitational repulsion becomes essential. This makes it possible to solve the problem of the cosmological singularity and to explain the accelerating cosmological expansion at the present epoch without invoking the notion of dark energy. These results were obtained by studying isotropic cosmology built in the framework of GTRC on the basis of a general expression for the gravitational Lagrangian, including both the scalar curvature and invariants quadratic in the curvature and torsion with indefinite parameters, subject to certain restrictions on these parameters. The physical cause of the change of the gravitational interaction in GTRC is connected with the fact that, according to the gravitational equations, the torsion is a function of the energy density and pressure, and together with the energy-momentum tensor it affects the spacetime metric. The torsion plays the principal role under extreme conditions, where it leads to the formation of a limiting energy density for gravitating matter \cite{b4}, and it also leads to the formation of an effective cosmological constant at the asymptotics of cosmological models, by virtue of its influence on physical spacetime in the vacuum, which has the structure of a Riemann-Cartan continuum with de Sitter metric (but not Minkowski spacetime) \cite{b5}. The following question then arises: what is the possible role of the torsion in astrophysics in the case of usual gravitating systems, for which the energy density is much smaller than the limiting energy density\footnote{We do not discuss here massive stars collapsing in GR with the formation of singular black holes. In the framework of GTRC such objects are impossible if a limiting energy density exists in nature.} but greater than the average energy density in the Universe at the present epoch (or the effective cosmological constant)? It should be noted that at the asymptotics of cosmological models one torsion function has a structure (see below) which can be quantitatively essential in the Newtonian approximation, even though the evolution of cosmological models at the asymptotics practically coincides with that of Friedmann cosmological models with a cosmological constant. The study of this question is a particular case of the investigation of the relationship between GTRC and GR. This paper is devoted to the investigation of the relationship between GR and the simplest GTRC (minimum GTRC) which allows one to build a theory of a regular accelerating Universe. \section{Isotropic cosmology and minimum gauge gravitation theory in Riemann-Cartan spacetime} We begin by introducing the basic definitions and relations used in this paper.
In the framework of GTRC the role of the gravitational field variables is played by the orthonormalized tetrad $h^i{}_\mu$ and the Lorentz connection $A^{ik}{}_\mu$; the corresponding field strengths are the torsion tensor $S^i{}_{\mu\nu}$ and the curvature tensor $F^{ik}{}_{\mu\nu}$ defined as \[ S^i{}_{\mu \,\nu } = \partial _{[\nu } \,h^i{}_{\mu ]} - h_{k[\mu } A^{ik}{}_{\nu ]}\,, \] \[ F^{ik}{}_{\mu\nu } = 2\partial _{[\mu } A^{ik}{}_{\nu ]} + 2A^{il}{}_{[\mu } A^k{}_{|l\,|\nu ]}\,, \] where holonomic and anholonomic spacetime coordinates are denoted by Greek and Latin indices respectively\footnote{As in our previous papers, we use notation corresponding to the following relation between the holonomic connection $\Gamma^{\lambda}{}_{\mu\nu }$ and $A^{ik}{}_\mu$: $\Gamma^{\lambda}{}_{\mu\nu } = h_i{}^{\lambda } (\partial _{\nu } \,h^i{}_{\mu } - h_{k\mu } A^{ik}{}_{\nu })$. Then the tensor $F^{\rho}{}_{\sigma\mu\nu }=h_i{}^{\rho} h_{k \sigma } F^{ik}{}_{\mu\nu }= 2\partial _{[\nu }\Gamma^{\rho}{}_{|\sigma\,|\mu] }+ 2 \Gamma^{\rho}{}_{\lambda [\nu } \Gamma^{\lambda}{}_{|\sigma\,| \mu] }$ has the opposite sign compared with the more frequently used definition of the curvature tensor, and $S^{\lambda}{}_{\mu \,\nu } =\Gamma^{\lambda}{}_{[\mu\nu] }$ (cf.\ \cite{b7,b8}). The signature of the spacetime metric is $(-2)$.}. Isotropic cosmology in Riemann-Cartan spacetime, investigated in a number of papers (see for example \cite{b1,b2,b3,b4,b5,b6}), was built by using the gravitational Lagrangian given in the following sufficiently general form \begin{eqnarray}\label{1
\section{Introduction}\label{sec:intro} In this paper we present a self-contained study of Hilbert--Sobolev spaces defined on arbitrary open and closed sets of $\mathbb{R}^n$, aimed at applied and numerical analysts interested in linear elliptic problems on rough domains, in particular in boundary integral equation (BIE) reformulations. Our focus is on the Sobolev spaces $H^s(\Omega)$, $H^s_0(\Omega)$, $\widetilde{H}^s(\Omega)$, $\mathringbig{H}{}^s(\Omega)$, and $H^s_F$, all described below, where $\Omega$ (respectively $F$) is an arbitrary open (respectively closed) subset of $\mathbb{R}^n$. Our goal is to investigate properties of these spaces (in particular, to provide natural unitary realisations for their dual spaces), and to clarify the nature of the relationships between them. Our motivation for writing this paper is recent and current work by two of the authors \cite{CoercScreen,CoercScreen2,Ch:13,ScreenPaper} on problems of acoustic scattering by planar screens with rough (e.g.\ fractal) boundaries. The practical importance of such scattering problems has been highlighted by the recent emergence of ``fractal antennas'' in electrical engineering applications, which have attracted attention due to their miniaturisation and multi-band properties; see the reviews \cite{GiRS:02,WeGa:03} and \cite[\S18.4]{Fal}. The acoustic case considered in \cite{CoercScreen,CoercScreen2,Ch:13,ScreenPaper} and the results of the current paper may be viewed as first steps towards developing a mathematical analysis of problems for such structures. In the course of our investigations of BIEs on more general sets it appeared to us that the literature on the relevant classical Sobolev spaces, while undeniably vast, is not as complete or as clear as desirable in the case when the domain of the functions is an arbitrary open or closed subset of Euclidean space, as opposed to the very well-studied case of a Lipschitz open set. By ``classical Sobolev spaces'' we mean the simplest of Sobolev spaces, Hilbert spaces based on the $L^2$ norm, which are sufficient for a very large part of the study of linear elliptic BVPs and BIEs, and are for this reason the focus of attention for example in the classic monographs \cite{LiMaI} and \cite{ChPi} and in the more recent book by McLean \cite{McLean} that has become the standard reference for the theory of BIE formulations of BVPs for strongly elliptic systems. However, even in this restricted setting there are many different ways to define Sobolev spaces on subsets of $\mathbb{R}^n$ (via e.g.\ weak derivatives, Fourier transforms and Bessel potentials, completions of spaces of smooth functions, duality, interpolation, traces, quotients, restriction of functions defined on a larger subset,~\ldots). On Lipschitz open sets (defined e.g.\ as in \cite[1.2.1.1]{Gri}), many of these different definitions lead to the same Sobolev spaces and to equivalent norms. But, as we shall see, the situation is more complicated for spaces defined on more general subsets of $\mathbb{R}^n$. Of course there already exists a substantial literature relating to function spaces on rough subsets of $\mathbb{R}^n$ (see e.g.~\cite{JoWa84,Triebel97FracSpec,Triebel83ThFS,Maz'ya,AdHe,MaPo97,Ca:00,St:03}). However, many of the results presented here, despite being relatively elementary, do appear to be new and of interest and relevance for applications.
That we are able to achieve some novelty may be due in part to the fact that we restrict our attention to the Hilbert--Sobolev framework, which means that many of the results we are interested in can be proved using Hilbert space techniques and geometrical properties of the domains, without the need for more general and intricate theories such as those of Besov and Triebel--Lizorkin spaces and atomic decompositions \cite{Triebel83ThFS,Maz'ya,AdHe} which are usually employed to describe function spaces on rough sets. This paper is by no means an exhaustive study, but we hope that the results we provide, along with the open questions that we pose, will stimulate further research in this area. Many of our results involve the question of whether or not a given subset of Euclidean space can support a Sobolev distribution of a given regularity (the question of ``$s$-nullity'', see \S\ref{subsec:Polarity} below). A number of results pertaining to this question have been derived recently in \cite{HewMoi:15} using standard results from potential theory in \cite{AdHe,Maz'ya}, and those we shall make use of are summarised in \S\ref{subsec:Polarity}. We will also make reference to a number of the concrete examples and counterexamples provided in \cite{HewMoi:15}, in order to demonstrate the sharpness (or otherwise) of our theoretical results. Since our motivation for this work relates to the question of determining the correct function space setting in which to analyse integral equations posed on rough domains, we include towards the end of the paper an application to BIEs on fractal screens; further applications in this direction can be found in \cite{CoercScreen,Ch:13,ScreenPaper}. We point out that one standard way of defining Sobolev spaces not considered in detail in this paper is interpolation (e.g.\ defining spaces of fractional order by interpolation between spaces of integer order, as for the famous Lions--Magenes space $H^{1/2}_{00}(\Omega)$). In our separate paper \cite{InterpolationCWHM} we prove that while the spaces $H^s(\Omega)$ and $\widetilde{H}^s(\Omega)$ form interpolation scales for Lipschitz $\Omega$, if this regularity assumption is dropped the interpolation property does not hold in general (this finding contradicts an incorrect claim to the contrary in \cite{McLean}). This makes interpolation a somewhat unstable operation on non-Lipschitz open sets, and for this reason we do not pursue interpolation in the current paper as a means of defining Sobolev spaces on such sets. However, for completeness we collect in Remark~\ref{rem:LionsMagenes} some basic facts concerning the space $H^{s}_{00}(\Omega)$ on Lipschitz open sets, derived from the results presented in the current paper and in \cite{InterpolationCWHM}. \subsection{Notation and basic definitions} In light of the considerable variation in notation within the Sobolev space literature, we begin by clarifying the notation and the basic definitions we use. For any subset $E\subset\mathbb{R}^n$ we denote the complement of $E$ by $E^c:=\mathbb{R}^n\setminus E$, the closure of $E$ by $\overline{E}$, and the interior of $E$ by ${\rm int}(E)$. We denote by ${\rm dim_H}(E)$ the Hausdorff dimension of $E$ (cf.\ e.g.\ \cite[\S5.1]{AdHe}), and by $m(E)$ the $n$-dimensional Lebesgue measure of $E$ (for measurable $E$). For $\mathbf{x} \in \mathbb{R}^n$ and $r>0$ we write $B_r(\mathbf{x}) := \{\mathbf{y}\in \mathbb{R}^n: |\mathbf{x}-\mathbf{y}|< r\}$ and $B_r := \{\mathbf{x}\in \mathbb{R}^n: |\mathbf{x}|<r\}$. 
Throughout the paper, $\Omega$ will denote a non-empty open subset of $\mathbb{R}^n$, and $F$ a non-empty closed subset of $\mathbb{R}^n$. We say that $\Omega$ is $C^0$ (respectively $C^{0,\alpha}$, $0<\alpha<1$, respectively Lipschitz) if its boundary $\partial\Omega$ can be locally represented as the graph (suitably rotated) of a $C^0$ (respectively $C^{0,\alpha}$, respectively Lipschitz) function from $\mathbb{R}^{n-1}$ to $\mathbb{R}$, with $\Omega$ lying only on one side of $\partial\Omega$. For a more detailed definition see, e.g., \cite[Definition 1.2.1.1]{Gri}. We note that for $n=1$ there is no distinction between these definitions: we interpret them all to mean that $\Omega$ is a countable union of open intervals whose closures are disjoint. Note that in the literature several alternative definitions of Lipschitz open sets can be found (see e.g.\ the discussion in \cite{Fr:79}). The following definitions are stronger than that given above: Stein's ``minimally smooth domains'' in \cite[{\S}VI.3.3]{Stein}, which require all the local parametrisations of the boundary to have the same Lipschitz constant and satisfy a certain finite overlap condition; Adams' ``strong local Lipschitz property'' in \cite[4.5]{Adams}; Ne\v{c}as' Lipschitz boundaries \cite[\S1.1.3]{NEC67}; and Definition~3.28 in \cite{McLean}, which is the most restrictive of this list as it considers only sets with bounded boundaries for which sets it is equivalent to the ``uniform cone condition'' \cite[Theorem~1.2.2.2]{Gri}. On the other hand, Definition~1.2.1.2 in \cite{Gri} (``Lipschitz manifold with boundary'') is weaker than ours; see \cite[Theorem~1.2.1.5]{Gri}. In this paper we study function spaces defined on \emph{arbitrary} open sets. Since some readers may be unfamiliar with open sets that fail to be $C^0$, we give a flavour of the possibilities we have in mind. We first point the reader to the examples illustrated in Figure \ref{fig:TSExamples} below (unions of polygons meeting at vertices, double bricks, curved cusps, spirals, and ``rooms and passages'' domains), all of which fail to be $C^0$ at one or more points on their boundaries. But these examples are still rather tame. A more exotic example is the Koch snowflake \cite[Figure~0.2]{Fal}, which fails to be $C^0$ at any point on its (fractal) boundary. Another class of examples we will use to illustrate many of our results (e.g.\ in \S\ref{subsec:3spaces}) is found by taking $\Omega=\Omega_0 \setminus F$, where $\Omega_0$ is a regular ($C^0$, or even Lipschitz) open set (e.g.\ a ball or a cube) and $F$ is an arbitrary non-empty closed subset of $\Omega_0$. The set $F$ may have empty interior, in which case $\Omega\neq {\rm int}(\overline\Omega)$. Of particular interest to us will be the case where $F$ is a fractal set. A concrete example (used in the proof of Theorem \ref{thm:notequalbig} and cf.\ Remark \ref{rem:BIE} below) is where $\Omega_0$ is a ball and $F$ is a Cantor set (an uncountable closed set with zero Lebesgue measure---see Figure \ref{fig:CantorDust} for an illustration). As we will see, a key role in determining properties of Sobolev spaces defined on the open set $\Omega=\Omega_0\setminus F$ is played by the maximal Sobolev regularity of distributions that are supported inside $F$, which itself is closely related to the Hausdorff dimension of $F$. 
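For readers who wish to experiment with such sets, the prefractal approximations to the middle-third Cantor set mentioned above are easy to generate. The following Python sketch is our own illustration (the function name and parameters are ours, not part of the paper); the value printed at the end is the standard Hausdorff dimension $\log 2/\log 3\approx 0.63$ of this set.
\begin{verbatim}
import numpy as np

# Level-j prefractal of the middle-third Cantor set: start from [0,1]
# and keep the two outer thirds of every interval at each step.
def cantor_prefractal(j):
    intervals = [(0.0, 1.0)]
    for _ in range(j):
        intervals = [piece
                     for (a, b) in intervals
                     for piece in ((a, a + (b - a) / 3),
                                   (b - (b - a) / 3, b))]
    return intervals

E4 = cantor_prefractal(4)
print(len(E4), sum(b - a for a, b in E4))  # 16 intervals, total length (2/3)^4
print(np.log(2) / np.log(3))               # Hausdorff dimension ~0.6309
\end{verbatim}
The limiting set $F=\bigcap_{j}E_j$ is an uncountable closed set with zero Lebesgue measure and Hausdorff dimension strictly between $0$ and $1$, which is precisely what makes $\Omega=\Omega_0\setminus F$ a useful test case in what follows.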
\subsubsection{Slobodeckij--Gagliardo vs Bessel--Fourier} For $s\in\mathbb{R}$, the fundamental Hilbert--Sobolev spaces on an open set $\Omega\subset \mathbb{R}^n$ are usually defined either \begin{enumerate} \item[\emph{(i)}] intrinsically, using volume integrals over $\Omega$ of squared weak (distributional) derivatives for $s\in\mathbb{N}_0$, Slobodeckij--Gagliardo integral norms for $0<s\notin\mathbb{N}$, and by duality for $s<0$ (cf.\ \cite[pp.~73--75]{McLean}); or \item[\emph{(ii)}] extrinsically, as the set of restrictions to $\Omega$ (in the sense of distributions) of elements of the global space $H^s(\mathbb{R}^n)$, which is defined for all $s\in\mathbb{R}$ using the Fourier transform and Bessel potentials (cf.\ \cite[pp.~75--77]{McLean}). \end{enumerate} Following McLean \cite{McLean}, we denote by $W^s_2(\Omega)$ the former class of spaces and by $H^s(\Omega)$ the latter. Clearly $H^s(\Omega)\subset W^s_2(\Omega)$ for $s\geq 0$; in fact the two classes of spaces coincide and their norms are equivalent whenever there exists a continuous extension operator $W^s_2{(\Omega)}\to H^s{(\R^n)}$ \cite[Theorem~3.18]{McLean}; this exists (at least for $s\geq 0$) for Lipschitz $\Omega$ with bounded boundary \cite[Theorem~A.4]{McLean}, and more generally for ``minimally smooth domains'' \cite[{\S}VI, Theorem~5]{Stein} and ``$(\varepsilon,\delta)$ locally uniform domains'' \cite[Definition~5 and Theorem~8]{Rogers}. But it is easy to find examples where the two spaces are different: if $\Omega$ is Lipschitz and bounded, and $\Omega':=\Omega\setminus\Pi$, where $\Pi$ is a hyperplane that divides $\Omega$ into two components, then $H^s(\Omega')=H^s{(\Omega)}$ for $n/2<s\in\mathbb{N}$ as their elements require a continuous extension to $\mathbb{R}^n$, while the elements of $W^s_2(\Omega')$ can jump across $\Pi$, so $H^s(\Omega')\subsetneqq W^s_2(\Omega')$. In the present paper we will only investigate the spaces $H^s(\Omega)$ and certain closed subspaces of $H^s(\mathbb{R}^n)$ related to $\Omega$, i.e.\ we choose option \emph{(ii)} above. We cite two main reasons motivating this choice (see also \cite[\S3.1]{Triebel83ThFS}). Firstly, while the intrinsic spaces $W^s_2(\Omega)$ described in option \emph{(i)} are the standard setting for BVPs posed in an open set $\Omega$ and their finite element-type discretisations, the extrinsic spaces $H^s(\Omega)$ and certain closed subspaces of $H^s(\mathbb{R}^n)$ arise naturally in BIE formulations. An example (for details see \S\ref{sec:BIE} and \cite{CoercScreen,ScreenPaper}) is the scattering of an acoustic wave propagating in $\mathbb{R}^{n+1}$ ($n=1$ or $2$) by a thin screen, assumed to occupy a bounded relatively open subset of the hyperplane $\{\mathbf{x}\in\mathbb{R}^{n+1},\, x_{n+1}=0\}$. Identifying this hyperplane with $\mathbb{R}^n$ and the screen with an open subset $\Gamma\subset\mathbb{R}^n$ in the obvious way, one can impose either Dirichlet or Neumann boundary conditions on the screen by first taking a (trivial) Dirichlet or Neumann trace onto the hyperplane $\mathbb{R}^n$, then prescribing the value of the restriction of this trace to $\Gamma$, as an element of $H^{1/2}(\Gamma)$ or $H^{-1/2}(\Gamma)$ respectively.
The solution to the associated BIE is respectively either the jump in the normal derivative of the acoustic field or the jump in the field itself across the hyperplane, these jumps naturally lying in the closed subspaces $H^{-1/2}_{\overline{\Gamma}}\subset H^{-1/2}(\mathbb{R}^n)$ and $H^{1/2}_{\overline{\Gamma}}\subset H^{1/2}(\mathbb{R}^n)$ respectively (see below for definitions). Secondly, on non-Lipschitz open sets $\Omega$ the intrinsic spaces $W^s_2(\Omega)$ have a number of undesirable properties. For example, for $0<s<1$ the embedding $W^1_2(\Omega)\subset W^s_2(\Omega)$ may fail and the embedding $W^s_2(\Omega)\subset W^0_2(\Omega)=L^2(\Omega)$ may be non-compact (see \cite[\S~9]{DiPaVa:12}). Other pathological behaviours are described in \S1.1.4 of \cite{Maz'ya}: for $2\le \ell\in\mathbb{N}$, the three spaces defined by the (squared) norms $\|u\|_{L^\ell_2(\Omega)}^2:=\int_\Omega\sum_{\boldsymbol{\alpha}\in\mathbb{N}^n, |\boldsymbol\alpha|=\ell}|D^{\boldsymbol \alpha} u|^2\mathrm{d}\mathbf{x}$, $\|u\|_{L^0_2(\Omega)}^2+\|u\|_{L^\ell_2(\Omega)}^2$ and $\sum_{j=0}^\ell\|u\|_{L^j_2(\Omega)}^2$ may be all different from each other. \subsubsection{``Zero trace'' spaces} \label{sec:ZeroTrace} In PDE applications, one often wants to work with Sobolev spaces on an open set $\Omega$ which have ``zero trace'' on the boundary of $\Omega$. There are many different ways to define such spaces; in this paper we consider the following definitions, which are equivalent only under certain conditions on $\Omega$ and $s$ (as will be discussed in \S\ref{subsec:3spaces}): \begin{itemize} \item $H^s_0(\Omega)$, the closure in $H^s(\Omega)$ of the space of smooth, compactly supported functions on $\Omega$. \item $\widetilde{H}^s(\Omega)$, the closure in $H^s(\mathbb{R}^n)$ of the space of smooth, compactly supported functions on $\Omega$. \item $H^s_{\overline\Omega}$, the set of those distributions in $H^s(\mathbb{R}^n)$ whose support lies in the closure $\overline\Omega$. \item $\mathringbig{H}{}^s(\Omega)$, defined for $s\ge0$ as the set of those distributions in $H^s(\mathbb{R}^n)$ that are equal to zero almost everywhere in the complement of $\Omega$. \end{itemize} $H^s_0(\Omega)$, being a closed subspace of $H^s(\Omega)$, is a space of distributions on $\Omega$, while $\widetilde{H}^s(\Omega)$, $H^s_{\overline\Omega}$ and $\mathringbig{H}{}^s(\Omega)$, all being closed subspaces of $H^s{(\R^n)}$, are spaces of distributions on $\mathbb{R}^n$ (which can sometimes be embedded in $H^s(\Omega)$ or $H^s_0(\Omega)$, as we will see). All the notation above is borrowed from \cite{McLean} (see also \cite{HsWe08,Steinbach,ChPi}), except the notation $\mathringbig{H}{}^s(\Omega)$ which we introduce here (essentially the same space is denoted $\tilde W^s_2(\Omega)$ in \cite{Gri}). We remark that for Lipschitz or smoother open sets $\Omega$, the above spaces are classically characterised as kernels of suitable trace operators (e.g.\ \cite[Theorem~3.40]{McLean}, \cite[Theorem~1.5.1.5]{Gri}, \cite[Chapter 1, Theorem~11.5]{LiMaI}). 
Trace spaces on closed sets $F\subset\mathbb{R}^n$ with empty interior (e.g.\ finite unions of submanifolds of $\mathbb{R}^n$, or fractals such as Cantor sets) are sometimes defined as quotient spaces, e.g.\ \cite[Definition~6.1]{ClHi:13} considers the space $H^{1/2}([F])$, defined as $H^{1/2}([F]):= W^1_2(\mathbb{R}^n)/\overline{\mathscr{D}(\mathbb{R}^n\setminus F)}^{W^1_2(\mathbb{R}^n\setminus F)}$; other similar trace spaces are $H^s{(\R^n)}/\widetilde{H}^s(\mathbb{R}^n\setminus F)$ and $H^s(\mathbb{R}^n\setminus F)/H^s_0(\mathbb{R}^n\setminus F)$. While we do not discuss such trace operators or trace spaces in this paper, we point out that our results in \S\ref{subsec:DiffDoms} and \S\ref{subsec:Hs0vsHs}, respectively, describe precisely when the latter two trace spaces are or are not trivial. \subsection{Overview of main results} We now outline the structure of the paper and summarise our main results. \paragraph{Preliminary Hilbert space results.} In \S\ref{sec:hs} we recall some basic facts regarding (complex) Hilbert spaces that we use later to construct unitary isomorphisms between Sobolev spaces and their duals. The key result in \S\ref{sec:DualSpaceRealisations} (stated as Lemma~\ref{lem:hs_orth}) is that given a unitary realisation $\mathcal{H}$ of the dual of a Hilbert space $H$ and a closed subspace $V\subset H$, the dual of $V$ can be realised unitarily in a natural way as the orthogonal complement of the annihilator of $V$ in $\mathcal{H}$. In \S\ref{subsec:ApproxVar} we consider sequences of continuous and coercive variational equations posed in nested (either increasing or decreasing) Hilbert spaces, and prove the convergence of their solutions under suitable assumptions, using arguments based on C\'ea's lemma. These results are used in \S\ref{sec:BIE} to study the limiting behaviour of solutions of BIEs on sequences of Lipschitz open sets $\Gamma_j$, including cases where $\Gamma_j$ converges as $j\to \infty$ to a closed fractal set, or to an open set with a fractal boundary. \paragraph{Sobolev space definitions.} In \S\ref{subsec:SobolevDef} we recall the precise definitions and basic properties of the function spaces $H^s(\mathbb{R}^n)$, $H^s(\Omega)$, $H^s_0(\Omega)$, $\widetilde{H}^s(\Omega)$, $\mathringbig{H}{}^s(\Omega)$, and $H^s_F\subset H^s(\mathbb{R}^n)$ introduced above. Our presentation closely follows that of \cite[Chapter~3]{McLean}. \paragraph{Duality.} In \S\ref{subsec:DualAnnih} we describe natural unitary realisations of the duals of the Sobolev spaces introduced in \S\ref{subsec:SobolevDef}. By ``natural'' we mean that the duality pairing extends the $L^2$ inner product, and/or the action of a distribution on a test function. For example, the dual space of $H^{s}(\Omega)$ can be naturally and unitarily identified with the space $\widetilde{H}^{-s}(\Omega)$, and vice versa. This is very well known for $\Omega$ sufficiently regular (e.g.\ Lipschitz with bounded boundary, see e.g.\ \cite[Theorem 3.30]{McLean}) but our proof based on the abstract Hilbert space results in \S\ref{sec:hs} makes clear that the geometry of $\Omega$ is quite irrelevant; the result holds for any $\Omega$ (see Theorem \ref{thm:DualityTheorem}). We also provide what appear to be new realisations of the dual spaces of $H^s_F$ and $H^s_0(\Omega)$. \paragraph{$s$-nullity.} In \S\ref{subsec:Polarity} we introduce the concept of $s$-nullity, a measure of the negligibility of a set in terms of Sobolev regularity.
This concept will play a prominent role throughout the paper, and many of our key results relating different Sobolev spaces will be stated in terms of the $s$-nullity (or otherwise) of the set on which a Sobolev space is defined, of its boundary, or of the symmetric difference between two sets. For $s\in\mathbb{R}$ we say a set $E\subset\mathbb{R}^n$ is $s$-null if there are no non-zero elements of $H^s(\mathbb{R}^n)$ supported in $E$. (Some other authors \cite{HoLi:56,Li:67a,Li:67b,Maz'ya} refer to such sets as ``$(-s,2)$-polar sets'', or \cite{AdHe,Maz'ya} as sets of uniqueness for $H^s(\mathbb{R}^n)$; for a more detailed discussion of terminology see Remark \ref{rem:polarity}.) In Lemma \ref{lem:polarity} we collect a number of results concerning $s$-nullity and its relationship to analytical and geometrical properties of sets (for example Hausdorff dimension) that have recently been derived in \cite{HewMoi:15} using potential theoretic results on set capacities taken from \cite{Maz'ya,AdHe}. \paragraph{Spaces defined on different subsets of $\mathbb{R}^n$.} Given two different Lipschitz open sets $\Omega_1,\Omega_2\subset\mathbb{R}^n$, the symmetric difference $(\Omega_1\cup\Omega_2)\setminus(\Omega_1\cap\Omega_2)$ has non-empty interior, and hence the Sobolev spaces related to $\Omega_1$ and $\Omega_2$ are different, in particular $\widetilde{H}^s(\Omega_1)\neq\widetilde{H}^s(\Omega_2)$. If the Lipschitz assumption is lifted the situation is different: for example, from a Lipschitz open set $\Omega$ one can subtract any closed set with empty interior (e.g.\ a point, a convergent sequence of points together with its limit, a closed line segment, curve or other higher dimensional manifold, or a more exotic fractal set) and what is left will be again an open set $\Omega'$. In which cases is $\widetilde{H}^s{(\Omega)}=\widetilde{H}^s(\Omega')$? When is $H^s_{\Omega^c}=H^s_{{\Omega^{'}}^c}$? And how is $H^s{(\Omega)}$ related to $H^s(\Omega')$? In \S\ref{subsec:DiffDoms} we answer these questions precisely in terms of $s$-nullity. \paragraph{Comparison between the ``zero-trace'' subspaces of $H^s(\mathbb{R}^n)$.} The three spaces $\widetilde{H}^s(\Omega)$, $H^s_{\overline\Omega}$ and $\mathringbig{H}{}^s(\Omega)$ are all closed subspaces of $H^s{(\R^n)}$. For arbitrary $\Omega$ they satisfy the inclusions $$ \widetilde{H}^s{(\Omega)}\subset\mathringbig{H}{}^s{(\Omega)}\subset H^s_{\overline\Omega} $$ (with $\mathringbig{H}{}^s{(\Omega)}$ present only for $s\ge0$). In \S\ref{subsec:3spaces} we describe conditions under which the above inclusions are or are not equalities. For example, it is well known (e.g.\ \cite[Theorem 3.29]{McLean}) that when $\Omega$ is $C^0$ the three spaces coincide. A main novelty in this section is the construction of explicit counterexamples which demonstrate that this is not the case for general $\Omega$. A second is the proof, relevant to the diversity of configurations illustrated in Figure \ref{fig:TSExamples}, that $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$ for $|s| \leq 1/2$ ($|s|\leq 1$ for $n\geq 2$) for a class of open sets whose boundaries, roughly speaking, fail to be $C^0$ at a countable number of points. \paragraph{When is $H^s_0(\Omega)=H^s(\Omega)$?} In \S\ref{subsec:Hs0vsHs} we investigate the question of when $H^s_0(\Omega)$ is or is not equal to $H^s(\Omega)$.
One classical result (see \cite[Theorem 1.4.2.4]{Gri} or \cite[Theorem 3.40]{McLean}) is that if $\Omega$ is Lipschitz and bounded then $H^s_0(\Omega)=H^s(\Omega)$ for $0\leq s\leq 1/2$. Using the dual space realisations derived in \S\ref{subsec:DualAnnih} we show that, for arbitrary $\Omega$, equality of $H^s_0(\Omega)$ and $H^s(\Omega)$ is equivalent to a certain subspace of $H^{-s}(\mathbb{R}^n)$ being trivial. From this we deduce a number of necessary and sufficient conditions for equality, many of which appear to be new; in particular our results linking the equality of $H^s_0(\Omega)$ and $H^s(\Omega)$ to the fractal dimension of $\partial\Omega$ improve related results presented in \cite{Ca:00}. \paragraph{The restriction operator.} One feature of this paper is that we take care to distinguish between spaces of distributions defined on $\mathbb{R}^n$ (including $H^s(\mathbb{R}^n)$, $\widetilde{H}^s(\Omega)$, $\mathringbig{H}{}^s(\Omega)$, $H^s_{\overline\Omega}$) and spaces of distributions defined on $\Omega$ (including $H^s_0(\Omega)$, $H^s(\Omega)$). The link between the two is provided by the restriction operator $|_\Omega:H^s(\mathbb{R}^n)\to H^s(\Omega)$. In \S\ref{subsec:restriction} we collect results from \cite{Hs0paper} on its mapping properties (injectivity, surjectivity, unitarity). In Remark~\ref{rem:LionsMagenes} we briefly mention the relationship of $\widetilde{H}^s{(\Omega)}$ and $H^s_0{(\Omega)}$ with the classical Lions--Magenes space $H^s_{00}{(\Omega)}$ (defined by interpolation), using results recently derived in \cite{InterpolationCWHM}. \paragraph{Sequences of subsets.} Many of the best-known fractals (for example Cantor sets, Cantor dusts, the Koch snowflake, the Sierpinski carpet, and the Menger sponge) are defined by taking the union or intersection of an infinite sequence of simpler, nested ``prefractal'' sets. In \S\ref{subsec:Seqs+Eqs} we determine which of the Sobolev spaces defined on the limiting set naturally emerges as the limit of the spaces defined on the approximating sets. This question is relevant when the different spaces on the limit set do not coincide, e.g.\ when $\widetilde{H}^s{(\Omega)}\subsetneqq H^s_{\overline\Omega}$. In this case the correct function space setting depends on whether the limiting set is to be approximated from ``inside'' (as a union of nested open sets), or from the ``outside'' (as an intersection of nested closed sets). \paragraph{Boundary integral equations on fractal screens.} \S\ref{sec:BIE} contains the major application of the paper, namely the BIE formulation of acoustic (scalar) wave scattering by fractal screens. We show how the Sobolev spaces $H^s(\Omega)$, $\widetilde{H}^s(\Omega)$, $H^s_{F}$ all arise naturally in such problems, pulling together many of the diverse results proved in the other sections of the paper. In particular, we study the limiting behaviour as $j\to\infty$ of the solution in the fractional Sobolev space $\widetilde{H}^{\pm 1/2}(\Gamma_j)$ of the BIE on the sequence of regular screens $\Gamma_j$, focussing particularly on cases where $\Gamma_j$ is a sequence of prefractal approximations to a limiting screen $\Gamma$ that is fractal or has fractal boundary. \section{Preliminary Hilbert space results} \label{sec:hs} In this section we summarise the elementary Hilbert space theory which underpins our later discussions.
We say that a mapping $\iota: H_1\to H_2$ between topological vector spaces $H_1$ and $H_2$ is an \emph{embedding} if it is linear, continuous, and injective, and indicate this by writing $H_1 \hookrightarrow_\iota H_2$, abbreviated as $H_1 \hookrightarrow H_2$ when the embedding $\iota$ is clear from the context. We say that a mapping $\iota: H_1\to H_2$ is an \emph{isomorphism} if $\iota$ is linear and a homeomorphism. If $H_1$ and $H_2$ are Banach spaces and, additionally, the mapping is isometric (preserves the norm) then we say that $\iota$ is an \emph{isometric isomorphism}. If $H_1$ and $H_2$ are Hilbert spaces and, furthermore, $\iota$ preserves the inner product, then we say that $\iota$ is a \emph{unitary isomorphism} (the terms $H$-isomorphism and Hilbert space isomorphism are also commonly used), and we write $H_1 \cong_{\iota} H_2$. We recall that an isomorphism between Hilbert spaces is unitary if and only if it is isometric \cite[Proposition 5.2]{Conway}. From now on let $H$ denote a complex Hilbert space with inner product $(\cdot,\cdot)_H$, and $H^*$ its dual space (all our results hold for real spaces as well, with the obvious adjustments). Following, e.g., Kato \cite{Ka:95} we take $H^*$ to be the space of {\em anti-linear} continuous functionals on $H$ (sometimes called the {\em anti-dual}), this choice simplifying some of the notation and statement of results. The space $H^*$ is itself a Banach space with the usual induced operator norm. Further, it is an elementary result that the so-called {\em Riesz isomorphism}, the mapping $R:H\to H^*$ which maps $\phi\in H$ to the anti-linear functional $\ell_\phi\in H^*$, given by $\ell_\phi(\psi)= (\phi,\psi)_H$, for $\psi\in H$, is an isometric isomorphism. This provides a natural identification of the Banach space $H^*$ with $H$ itself. Moreover, this mapping allows us to define an inner product $(\cdot,\cdot)_{H^*}$ on $H^*$, by the requirement that $(\phi,\psi)_H = (\ell_\phi,\ell_\psi)_{H^*}$, $\phi,\psi\in H$, and this inner product is compatible with the norm on $H^*$. With this canonical inner product $H^*$ is itself a Hilbert space and the Riesz isomorphism is a unitary isomorphism\footnote{As for Kato \cite{Ka:95}, a large part of our preference for our dual space convention (that our functionals are anti-linear rather than linear) is that the Riesz mapping is an isomorphism. If one prefers to work with linear functionals one can construct an isomorphism between the spaces of continuous linear and anti-linear functionals; indeed, in many important cases there is a canonical choice for this isomorphism. Precisely, if $\psi\mapsto \psi^*$ is any anti-linear isometric involution on $H$ (sometimes called a conjugate map, and easily constructed using an orthogonal basis for $H$, e.g., \cite[Conclusion 2.1.18]{sauter-schwab11}) the map $\phi^*\mapsto\phi$, from the Hilbert space of continuous anti-linear functionals to the space of continuous linear functionals, defined by $\phi(\psi) = \phi^*(\psi^*)$, $\psi \in H$, is a unitary isomorphism. In general there is no natural choice for this conjugate map, but when, as in \S\ref{sec:ss} onwards, $H$ is a space of complex-valued functions the canonical choice is $\psi^*=\overline{\psi}$. When $H$ is real all this is moot; linear and anti-linear coincide.}.
\subsection{Realisations of dual spaces}\label{sec:DualSpaceRealisations} It is frequently convenient, e.g.\ when working with Sobolev spaces, to identify the dual space $H^*$ not with $H$ itself but with another Hilbert space $\mathcal{H}$. If $\mathcal{I}:\mathcal{H}\to H^*$ is a unitary isomorphism then we say that $(\mathcal{H},\mathcal{I})$ is a {\em unitary realisation} of $H^*$, and \begin{equation} \label{dp} \langle\psi, \phi\rangle := \mathcal{I}\psi(\phi), \quad \phi\in H, \psi\in \mathcal{H}, \end{equation} defines a bounded sesquilinear form on $\mathcal{H}\times {H}$, called the {\em duality pairing}. The following lemma shows that, given a unitary realisation $(\mathcal{H}, \mathcal{I})$ of $H^*$, there is a natural unitary isomorphism ${\mathcal{I}^*}:H\to \mathcal{H}^*$, so that $(H, {\mathcal{I}^*})$ is a realisation of $\mathcal{H}^*$. The operator ${\mathcal{I}^*}$ is the adjoint operator of $\mathcal{I}$ after the canonical identification of $H$ with its bidual $H^{**}$. \begin{lem} \label{dual_lem} If $H$ and $\mathcal{H}$ are Hilbert spaces and $\mathcal{I}:\mathcal{H}\to H^*$ is a unitary isomorphism, then ${\mathcal{I}^*}:H\to \mathcal{H}^*$, given by ${\mathcal{I}^*}\phi(\psi) = \overline{\mathcal{I}\psi(\phi)}$, for $\phi\in H$ and $\psi \in \mathcal{H}$, is a unitary isomorphism, and the corresponding duality pairing $\langle \cdot,\cdot\rangle$ on $H\times \mathcal{H}$ is \begin{equation*} \langle\phi, \psi\rangle :={\mathcal{I}^*}\phi(\psi) = \overline{\langle \psi, \phi\rangle}, \quad \phi\in H, \psi\in \mathcal{H}, \end{equation*} where the duality pairing on the right hand side is that on $\mathcal{H}\times {H}$, as defined in \eqref{dp}. \end{lem} \begin{proof} For $\phi\in H$ and $\psi\in \mathcal{H}$, where ${R}:{H}\to {H}^*$ and ${\mathcal R}:\mathcal{H}\to \mathcal{H}^*$ are the Riesz isomorphisms, \begin{align*} {\mathcal{I}^*}\phi(\psi) = \overline{\mathcal{I}\psi(\phi)} = \overline{(R^{-1}\mathcal{I}\psi,\phi)_H} =(\phi,R^{-1}\mathcal{I}\psi)_H&= (\mathcal{I}^{-1}R\phi, \psi)_{\mathcal{H}} \\&= {\mathcal R}\mathcal{I}^{-1}R\phi(\psi), \end{align*} so that ${\mathcal{I}^*} = {\mathcal R}\mathcal{I}^{-1}R$ is a composition of unitary isomorphisms, and hence a unitary isomorphism. \end{proof} Similarly, there is associated to $(\mathcal{H},\mathcal{I})$ a natural unitary isomorphism $j:H\to \mathcal{H}$ defined by $j= \mathcal{I}^{-1}R$, where $R:H\to H^*$ is the Riesz isomorphism. For a subset $V\subset H$, we denote by $V^\perp$ the subset of $H$ orthogonal to $V$, a closed linear subspace of $H$. When $V$ is itself a closed linear subspace, in which case $V^\perp$ is termed the orthogonal complement of $V$, we can define $P:H\to V$ ({\em orthogonal projection onto $V$}) by $P\phi=\psi$, where $\psi$ is the best approximation to $\phi$ from $V$. This mapping is linear and bounded with $\|P\|=1$ and $P=P^2=P^*$, where $P^*:H\to H$ is the Hilbert-space adjoint operator of $P$. $P$ has range $P(H)=V$ and kernel $\ker(P)= V^\perp$; moreover $H=V\oplus V^\perp$, and $V^{\perp\perp} = V$. Furthermore, if $(\mathcal{H},\mathcal{I})$ is a unitary realisation of $H^*$ and $\langle \cdot, \cdot\rangle$ is the associated duality pairing (as in \eqref{dp}), we define, for any subset $V\subset H$, \begin{align}\label{eq:AnnihilatorDef} V^{a,\mathcal{H}} := \{\psi\in \mathcal{H}:\langle \psi,\phi\rangle = 0, \mbox{ for all }\phi\in V\}\subset\mathcal{H}, \end{align} this being the {\em annihilator of $V$ in $\mathcal{H}$}.
For $\phi,\psi\in H$, $\langle j\psi,\phi \rangle =R\psi(\phi) =(\psi,\phi)_H$, so that $V^{a,\mathcal{H}} = j(V^\perp)$. When $V$ is a closed linear subspace of $H$, since $j$ preserves orthogonality and $V^{\perp\perp}=V$, we have \begin{equation} \label{eq:AnnihilatorResult} (V^\perp)^{a,\mathcal{H}}=j(V)= \left(V^{a,\mathcal{H}}\right)^\perp, \quad \textrm{ and } \quad \left(V^{a,\mathcal{H}}\right)^{a,H} = j^{-1}\big((V^{a,\mathcal{H}})^\perp\big) = V. \end{equation} Given a linear subspace $V\subset H$ we can form the {\em quotient space} $H/V$ as $\{\phi+V:\phi\in H\}$. If $V$ is closed then $H/V$ is a Banach space, with norm \begin{equation} \label{qsn} \|\phi+V\|_{H/V} := \inf_{\psi\in V} \|\phi+\psi\|_H = \|Q \phi\|_H, \end{equation} where $Q:H\to V^\perp$ is orthogonal projection. The mapping $Q_/:H/V\to V^\perp$, defined by $Q_/(\phi+V) = Q\phi$, is clearly surjective and so an isometric isomorphism. Defining an inner product compatible with the norm on $H/V$ by $(\phi+V,\psi+V)_{H/V} = (Q\phi,Q\psi)_H$, $H/V$ becomes a Hilbert space and $Q_/$ a unitary isomorphism, i.e. \begin{equation*} H/V \cong_{Q_/} V^\perp. \end{equation*} A situation which arises frequently in Sobolev space theory is where we have identified a particular unitary realisation $(\mathcal{H},\mathcal{I})$ of a dual space $H^*$ and we seek a unitary realisation of $V^*$, where $V$ is a closed linear subspace of $H$. The following result shows that an associated natural unitary realisation of $V^*$ is $({\mathcal V}, \mathcal{I}_{\mathcal V})$, where ${\mathcal V}=\left(V^{a,\mathcal{H}}\right)^\perp\subset \mathcal{H}$ and $\mathcal{I}_{\mathcal V}$ is the restriction of $\mathcal{I}$ to ${\mathcal V}$. This is actually a special case of a more general Banach space result, e.g.\ \cite[Theorem 4.9]{Ru91}, but since it plays such a key role in later results, for ease of reference we restate it here restricted to our Hilbert space context, and provide the short proof. \begin{lem} \label{lem:hs_orth} Suppose that $H$ and $\mathcal{H}$ are Hilbert spaces, $\mathcal{I}:\mathcal{H}\to H^*$ is a unitary isomorphism, and $V\subset H$ is a closed linear subspace. Set ${\mathcal V} := \left(V^{a,\mathcal{H}}\right)^\perp\subset \mathcal{H}$, and define $\mathcal{I}_{\mathcal V}: {\mathcal V}\to V^*$ by $\mathcal{I}_{\mathcal V}\psi(\phi)=\mathcal{I}\psi(\phi)$, for $\phi\in V, \psi\in {\mathcal V}$. Then $({\mathcal V},\mathcal{I}_{\mathcal V})$ is a unitary realisation of $V^*$, with duality pairing \begin{equation*} \langle\psi,\phi\rangle_V:= \mathcal{I}_{\mathcal V}\psi(\phi) =\langle\psi,\phi\rangle, \quad \phi\in V, \psi\in {\mathcal V}, \end{equation*} where $\langle\cdot,\cdot\rangle$ is the duality pairing on $\mathcal{H}\times H$ given by \eqref{dp}. \end{lem} \begin{proof} As above, let $R:H\to H^*$ be the Riesz isomorphism and $j:= \mathcal{I}^{-1}R:H\to \mathcal{H}$, both unitary isomorphisms. $(V,R_V)$ is a unitary realisation of $V^*$, where $R_V:V\to V^*$ is the Riesz isomorphism. Thus, since ${\mathcal V} = j(V)$ by \eqref{eq:AnnihilatorResult}, another unitary realisation is $({\mathcal V},R_V j^{-1}|_{\mathcal V})$. Further, for $\phi\in V$, $\psi\in {\mathcal V}$, \begin{align*} R_Vj^{-1}\psi(\phi) = (j^{-1}\psi, \phi)_V = (j^{-1}\psi, \phi)_H = Rj^{-1}\psi(\phi) = \mathcal{I}\psi(\phi) &=\langle\psi,\phi\rangle \\ &= \mathcal{I}_{\mathcal V}\psi(\phi), \end{align*} so that $\mathcal{I}_{\mathcal V} = R_Vj^{-1}|_{\mathcal V}$.
\end{proof} \begin{rem} \label{rem:orth} Lemma \ref{lem:hs_orth} gives a natural unitary realisation of the dual space of a closed subspace $V$ of a Hilbert space $H$. This lemma applies in particular to the closed subspace $V^\perp$. In view of \eqref{eq:AnnihilatorResult} and Lemma \ref{lem:hs_orth} we have that $({\mathcal V}^\perp,\mathcal{I}_{{\mathcal V}^\perp})$ is a unitary realisation of $(V^\perp)^*$, with ${\mathcal V}^\perp =V^{a,\mathcal{H}}$ and $\mathcal{I}_{{\mathcal V}^\perp}\psi(\phi) =\langle\psi,\phi\rangle$, $\phi\in V^\perp, \psi\in {\mathcal V}^\perp$. \end{rem} Figure \ref{fig:Hilbert} illustrates as connected commutative diagrams the spaces in this section and key elements of the proofs of the above lemmas. \begin{figure}[htb!] \begin{center} \begin{tikzpicture} \hspace{-15mm} \matrix[matrix of math nodes, column sep={13pt}, row sep={40pt,between origins}, text height=1.5ex, text depth=0.25ex] (s) { |[name=Vp]|V^\perp & |[name=plus]|\oplus & |[name=V]|V & |[name=eq]|=& |[name=H]|H & |[name=cHs]|\mathcal H^*\\ |[name=Vps]|(V^\perp)^* & & |[name=Vs]|V^* & & |[name=Hs]|H^* & |[name=cH]|\mathcal H & |[name=ceq]|=& |[name=cV]|\big(\mathcal{V}=(V^{a,\mathcal H})^\perp\big) & |[name=cplus]|\oplus & |[name=cVp]|\big(\mathcal V^\perp=V^{a,\mathcal H}\big) \\ }; \draw[->,>=angle 60] (H) edge node[auto] {\({\mathcal{I}^*}\)} (cHs) (cH) edge node[auto] {\(\mathcal I\)} (Hs) (H) edge node[auto] {\(\!\!\!j\)} (cH) (H) edge node[auto,swap] {\(R\)} (Hs) (cH) edge node[auto,swap] {\(\mathcal R\)} (cHs) (V) edge node[auto] {\(R_V\)} (Vs) (Vp) edge node[auto] {\(R_{V^\perp}\)} (Vps) (cV) edge[bend left=15] node[auto] {\(\mathcal I_{\mathcal V}\)} (Vs) (cVp) edge[bend left=20] node[auto,swap,pos=0.4] {\(\mathcal I_{\mathcal V^\perp}\)} (Vps) (V) edge[bend left=35] node[auto] {\(j_V\)} (cV) (Vp) edge[bend left=25] node[auto] {\(j_{V^\perp}\)} (cVp); \draw[->,>=angle 60] (H) edge[bend left=20] node[auto]{\(P\)} (V) (cH) edge[bend left=20] node[auto]{\(\mathcal{P}\)} (cV); \end{tikzpicture} \end{center} \caption{A representation, as two connected commutative diagrams, of the Hilbert spaces and the mappings defined in \S\ref{sec:hs}; here $j_V$ and $j_{V^\perp}$ are the restrictions of $j$ to $V$ and $V^\perp$, respectively. Every arrow represents a unitary isomorphism, except for the two orthogonal projections $P:H\to V$ and $\mathcal{P}:\mathcal{H}\to{\mathcal V}$. \label{fig:Hilbert}} \end{figure} \subsection{Approximation of variational equations in nested subspaces} \label{subsec:ApproxVar} Let $H$ be a Hilbert space, with its dual $H^*$ realised unitarily as some Hilbert space $\mathcal{H}$ and associated duality pairing $\langle\cdot,\cdot\rangle$, as in \S\ref{sec:DualSpaceRealisations}. Fix $f\in \mathcal{H}$, and suppose that $a(\cdot,\cdot):H\times H\to \mathbb{C}$ is a sesquilinear form that is continuous and coercive, i.e., $\exists C,c>0$ such that \begin{equation} \label{eq:defcoer} |a(u,v)|\le C\|u\|_H \|v\|_H, \qquad |a(v,v)|\ge c\|v\|^2_H \qquad \forall u,v\in H. \end{equation} For any closed subspace $V\subset H$ the restriction of $a(\cdot,\cdot)$ to $V\times V$ is also continuous and coercive. Thus by the Lax--Milgram lemma there exists a unique solution $u_V\in V$ to the variational equation \begin{equation}\label{eq:VarEq} a(u_V,v) = \langle f,v\rangle \qquad \forall v\in V, \end{equation} and the solution is bounded independently of the choice of $V$, by $\|u_V\|_H\le c^{-1}\|f\|_\mathcal{H}$.
Furthermore, given closed, nested subspaces $V_1\subset V_2\subset H$, C\'ea's lemma gives the following standard bound: \begin{align} \|u_{V_1}-u_{V_2}\|_H \le\frac{C} c \inf_{v_1\in V_1} \|v_1-u_{V_2}\|_H. \label{eq:Cea} \end{align} Consider increasing and decreasing sequences of closed, nested subspaces indexed by $j\in\mathbb{N}$, $$V_1\!\subset\!\cdots\!\subset\! V_j\!\subset \!V_{j+1}\!\subset\!\cdots\!\subset \!H \!\quad\text{ and }\quad H\!\supset \!W_1\!\supset\!\cdots\!\supset\! W_j\!\supset \!W_{j+1}\!\supset\!\cdots, $$ and define the limit spaces $V:=\overline{\bigcup_{j\in\mathbb{N}} V_j}$ and $W:=\bigcap_{j\in\mathbb{N}} W_j$. C\'ea's lemma \eqref{eq:Cea} immediately gives convergence of the corresponding solutions of \eqref{eq:VarEq} in the increasing case: \begin{equation}\label{eq:V-conver} \|u_{V_j}-u_V\|_H \le \frac{C} c \inf_{v_j\in V_j} \|v_j-u_V\|_H \xrightarrow{j\to\infty}0. \end{equation} In the decreasing case the following analogous result applies. \begin{lem} \label{lem:dec} Define $\{W_j\}_{j=1}^\infty$ and $W$ as above. Then \mbox{$\|u_{W_j}-u_W\|_H\to 0$} as $j\to\infty$. \end{lem} \begin{proof} The Lax--Milgram lemma gives that $\|u_{W_j}\|_H \leq c^{-1} \|f\|_{\mathcal H}$, so that $(u_{W_j})_{j=1}^\infty$ is bounded and has a weakly convergent subsequence, converging to a limit $u_*$. Further, for all $w\in W$, \eqref{eq:VarEq} gives $$ a(u_W,w)=\langle f, w\rangle = a(u_{W_j},w) \to a(u_*,w), $$ as $j\to\infty$ through that subsequence, so that $u_*=u_W$. By the same argument every subsequence of $(u_{W_j})_{j=1}^\infty$ has a subsequence converging weakly to $u_W$, so that $(u_{W_j})_{j=1}^\infty$ converges weakly to $u_W$. Finally, we see that \begin{align*} c\|u_{W_j}-u_W\|^2_H &\leq |a(u_{W_j}-u_W,u_{W_j}-u_W)| \\ &= |\langle f,u_{W_j}\rangle - a(u_{W_j},u_W)-a(u_W,u_{W_j}-u_W)|, \end{align*} which tends to 0 as $j\to \infty$, by the weak convergence of $(u_{W_j})_{j=1}^\infty$ and \eqref{eq:VarEq}. \end{proof} \section{Sobolev spaces} \label{sec:ss} \subsection{Main definitions}\label{subsec:SobolevDef} We now define the Sobolev spaces studied in this paper. Our presentation broadly follows that of \cite{McLean}. \subsubsection{Distributions, Fourier transform and Bessel potential} Given $n\in \mathbb{N}$, let $\mathscr{D}(\mathbb{R}^n)$ denote the space of compactly supported smooth test functions on $\mathbb{R}^n$, and for any open set $\Omega\subset \mathbb{R}^n$ let $\mathscr{D}(\Omega):=\{u\in\mathscr{D}(\mathbb{R}^n):\supp{u}\subset\Omega\}$. For $\Omega\subset \mathbb{R}^n$ let $\mathscr{D}^*(\Omega)$ denote the space of distributions on $\Omega$ (anti-linear continuous functionals on $\mathscr{D}(\Omega)$). With $L^1_{\rm loc}(\Omega)$ denoting the space of locally integrable functions on $\Omega$, the standard embedding $L^1_{\rm loc}(\Omega)\hookrightarrow \mathscr{D}^*(\Omega)$ is given by $u(v):=\int_\Omega u \overline{v}$ for $u\in L^1_{\rm loc}(\Omega)$ and $v\in \mathscr{D}(\Omega)$. Let $\mathscr{S}(\mathbb{R}^n)$ denote the Schwartz space of rapidly decaying smooth test functions on $\mathbb{R}^n$, and $\mathscr{S}^*(\mathbb{R}^n)$ the dual space of tempered distributions (anti-linear continuous functionals on $\mathscr{S}(\mathbb{R}^n)$). Since the inclusion $\mathscr{D}(\mathbb{R}^n)\subset \mathscr{S}(\mathbb{R}^n)$ is continuous with dense image, we have $\mathscr{S}^*(\mathbb{R}^n)\hookrightarrow \mathscr{D}^*(\mathbb{R}^n)$.
For $u\in \mathscr{S}(\mathbb{R}^n)$ we define the Fourier transform $\hat{u}={\mathcal F} u\in \mathscr{S}(\mathbb{R}^n)$ and its inverse $\check{u}={\mathcal F}^{-1} u\in \mathscr{S}(\mathbb{R}^n)$ by \begin{align*} \hat{u}(\boldsymbol{\xi})&:= \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}{\mathrm{e}}^{-{\mathrm{i}} \boldsymbol{\xi}\cdot \mathbf{x}}u(\mathbf{x})\,\mathrm{d} \mathbf{x} , \;\; \boldsymbol{\xi}\in\mathbb{R}^n, \\ \check{u}(\mathbf{x}) &:= \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}{\mathrm{e}}^{{\mathrm{i}} \boldsymbol{\xi}\cdot \mathbf{x}}u(\boldsymbol{\xi})\,\mathrm{d} \boldsymbol{\xi} , \;\;\mathbf{x}\in\mathbb{R}^n. \end{align*} We define the Bessel potential operator $\mathcal{J}_s$ on $\mathscr{S}(\mathbb{R}^n)$, for $s\in\mathbb{R}$, by $\mathcal{J}_s := {\mathcal F}^{-1}\mathcal{M}_s{\mathcal F}$, where $\mathcal{M}_s$ is multiplication by $(1+|\boldsymbol{\xi}|^2)^{s/2}$. We extend these definitions to $\mathscr{S}^*(\mathbb{R}^n)$ in the usual way: for $u\in \mathscr{S}^*(\mathbb{R}^n)$ and $v\in \mathscr{S}(\mathbb{R}^n)$ let \begin{align} \label{FTDistDef} \hat{u}(v) := u(\check{v}),\quad \check{u}(v) := u(\hat{v}),\quad \mathcal{M}_su(v) := u(\mathcal{M}_s v), \quad (\mathcal{J}_s u)(v) := u(\mathcal{J}_s v). \end{align} Note that for $u\in \mathscr{S}^*(\mathbb{R}^n)$ it holds that $\widehat{\mathcal{J}_s u} = \mathcal{M}_s\hat{u}$. \subsubsection{Sobolev spaces on \texorpdfstring{$\mathbb{R}^n$}{Rn}} \label{subsec:SobSpacesOnRn} We define the Sobolev space $H^s(\mathbb{R}^n)\subset \mathscr{S}^*(\mathbb{R}^n)$ by \begin{align*} \mmbox{H^s(\mathbb{R}^n):=\mathcal{J}_{-s}\big(L^2(\mathbb{R}^n)\big) = \big\{u\in \mathscr{S}^*(\mathbb{R}^n): \mathcal{J}_s u \in L^2(\mathbb{R}^n)\big\},} \end{align*} equipped with the inner product $\left(u,v\right)_{H^{s}(\mathbb{R}^n)}:=\left(\mathcal{J}_s u,\mathcal{J}_s v\right)_{L^2(\mathbb{R}^n)}$, which makes $H^s(\mathbb{R}^n)$ a Hilbert space and $\mathcal{J}_{-s}:L^2(\mathbb{R}^n)\to H^s(\mathbb{R}^n)$ a unitary isomorphism. Furthermore, for any $s,t\in \mathbb{R}$, the map $\mathcal{J}_{t}:H^{s}(\mathbb{R}^n)\to H^{s-t}(\mathbb{R}^n)$ is a unitary isomorphism with inverse $\mathcal{J}_{-t}$. If $u\in H^s(\mathbb{R}^n)$ then the Fourier transform $\hat u\in \mathscr{S}^*(\mathbb{R}^n)$ lies in $L^1_{\rm loc}(\mathbb{R}^n)$; that is, $\hat u$ can be identified with a locally integrable function. Hence we can write \begin{align} \label{eq:HsProdNorm} \mmbox{ \begin{aligned} \left(u,v\right)_{H^{s}(\mathbb{R}^n)} &= \int_{\mathbb{R}^n}(1+|\boldsymbol{\xi}|^2)^{s}\,\hat{u}(\boldsymbol{\xi})\overline{\hat{v}(\boldsymbol{\xi})}\,\mathrm{d} \boldsymbol{\xi}, \\ \norm{u}{H^{s}(\mathbb{R}^n)}^2 &= \norm{\mathcal{J}_s u}{L^2(\mathbb{R}^n)}^2 = \int_{\mathbb{R}^n}(1+|\boldsymbol{\xi}|^2)^{s}|\hat{u}(\boldsymbol{\xi})|^2\,\mathrm{d} \boldsymbol{\xi}, \end{aligned} } \qquad u,v\in H^s(\mathbb{R}^n). \end{align} For every $s\in \mathbb{R}$, $\mathscr{D}(\mathbb{R}^n)$ is a dense subset of $H^s(\mathbb{R}^n)$. Indeed \cite[Lemma 3.24]{McLean}, for all $u\in H^s(\mathbb{R}^n)$ and $\epsilon>0$ there exists $v\in \mathscr{D}(\mathbb{R}^n)$ such that \begin{equation} \label{eq:approx} \|u-v\|_{H^s(\mathbb{R}^n)} < \epsilon \quad \mbox{and} \quad \supp{v} \subset \{\mathbf{x}\in \mathbb{R}^n:|\mathbf{x}-\mathbf{y}|<\epsilon \mbox{ and } \mathbf{y}\in \supp{u}\}, \end{equation} where $\supp{v}$ denotes the support of the distribution $v$, understood in the standard sense (e.g.~\cite[p.\ 66]{McLean}).
A related standard result (this follows, e.g., from \cite[Exercise 3.14]{McLean}) is that, for all $u\in H^s(\mathbb{R}^n)$ and $\epsilon>0$, there exists a compactly supported $v\in H^s(\mathbb{R}^n)$ such that \begin{equation} \label{eq:approx2} \|u-v\|_{H^s(\mathbb{R}^n)} < \epsilon \quad \mbox{and} \quad \supp{v} \subset \supp{u}. \end{equation} For any $-\infty<s<t<\infty$, $H^t(\mathbb{R}^n)$ is continuously embedded in $H^s(\mathbb{R}^n)$ with dense image and $\|u\|_{H^s(\mathbb{R}^n)}<\|u\|_{H^t(\mathbb{R}^n)}$ for all $0\ne u\in H^t(\mathbb{R}^n)$. When $s>n/2$, elements of $H^{s}(\mathbb{R}^n)$ can be identified with continuous functions (by the Sobolev embedding theorem \cite[Theorem 3.26]{McLean}). At the other extreme, for any $\mathbf{x}_0\in\mathbb{R}^n$ the Dirac delta function\footnote{To fit our convention that $H^s(\mathbb{R}^n)\subset \mathscr{S}^*(\mathbb{R}^n)$ is a space of {\em anti}-linear functionals on $\mathscr{S}(\mathbb{R}^n)$, we understand the action of $\delta_{\mathbf{x}_0}$ by $\delta_{\mathbf{x}_0}(\phi)=\overline{\phi(\mathbf{x}_0)}$, $\phi\in \mathscr{D}(\mathbb{R}^n)$.} satisfies \begin{equation}\label{eq:delta} \delta_{\mathbf{x}_0}\in H^{s}(\mathbb{R}^n)\qquad \text{if and only if}\qquad s<-n/2. \end{equation} Recall that for a multi-index $\boldsymbol\alpha\in\mathbb{N}_0^n$ we have ${\mathcal F}(\partial^{\boldsymbol\alpha}u/\partial \mathbf{x}^{\boldsymbol\alpha})(\boldsymbol{\xi})=({\mathrm{i}}\boldsymbol{\xi})^{\boldsymbol\alpha}\hat u(\boldsymbol{\xi})$. Then by Plancherel's theorem and \eqref{eq:HsProdNorm} it holds that $$\|u\|^2_{H^{s+1}(\mathbb{R}^n)}=\|u\|^2_{H^s(\mathbb{R}^n)}+\sum_{j=1}^n\Big\|\frac{\partial u}{\partial x_j}\Big\|^2_{H^s(\mathbb{R}^n)} \qquad \forall u\in H^{s+1}(\mathbb{R}^n), \; s\in\mathbb{R}.$$ In particular, if $m\in\mathbb{N}_0$ then, where $|\boldsymbol\alpha|:=\sum_{j=1}^n\alpha_j$ for $\boldsymbol\alpha\in\mathbb{N}_0^n$, \begin{align*} \|u\|^2_{H^{m}(\mathbb{R}^n)}&=\sum_{\substack{\boldsymbol\alpha\in\mathbb{N}_0^n,\\ |\boldsymbol\alpha|\le m}} \binom{m}{|\boldsymbol\alpha|}\binom{|\boldsymbol\alpha|}{\boldsymbol\alpha} \Big\|\frac{\partial^{|\boldsymbol\alpha|} u}{\partial \mathbf x^{\boldsymbol\alpha}}\Big\|_{L^2{(\R^n)}}^2 \\ &=\sum_{\substack{\boldsymbol\alpha\in\mathbb{N}_0^n,\\ |\boldsymbol\alpha|\le m}} \frac{m!}{(m-|\boldsymbol\alpha|)!\alpha_1!\cdots\alpha_n!} \Big\|\frac{\partial^{|\boldsymbol\alpha|} u}{\partial \mathbf x^{\boldsymbol\alpha}}\Big\|_{L^2{(\R^n)}}^2. \end{align*} Similar manipulations show that functions with disjoint support are orthogonal in $H^m{(\R^n)}$ for $m\in\mathbb{N}_0$. But we emphasize that this is not in general true in $H^s {(\R^n)}$ for $s\in \mathbb{R}\setminus\mathbb{N}_0$.
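As an aside, the membership criterion \eqref{eq:delta} follows from a short computation with \eqref{eq:HsProdNorm}: since $|\hat{\delta}_{\mathbf{x}_0}(\boldsymbol{\xi})|=(2\pi)^{-n/2}$ for all $\boldsymbol{\xi}$, $$\|\delta_{\mathbf{x}_0}\|^2_{H^{s}(\mathbb{R}^n)} = (2\pi)^{-n}\int_{\mathbb{R}^n}(1+|\boldsymbol{\xi}|^2)^{s}\,\mathrm{d}\boldsymbol{\xi},$$ which (passing to polar coordinates, the integrand behaving like $|\boldsymbol{\xi}|^{2s+n-1}$ at infinity) is finite if and only if $2s<-n$.
The norm \eqref{eq:HsProdNorm} is also easy to approximate numerically, which can serve as a sanity check when experimenting with the spaces discussed in this paper. The following Python sketch is our own illustration (not part of the paper; the grid parameters are arbitrary choices): it approximates $\|u\|_{H^s(\mathbb{R})}$ for a sampled function via the FFT, and checks it for the Gaussian $u(x)={\mathrm{e}}^{-x^2/2}$, for which $\|u\|^2_{L^2(\mathbb{R})}=\sqrt{\pi}$.
\begin{verbatim}
import numpy as np

# Approximate ||u||_{H^s(R)} from samples u(x_j) on a uniform grid with
# spacing h, using uhat(xi) = (2 pi)^(-1/2) int e^{-i xi x} u(x) dx.
def hs_norm(u, h, s):
    n = len(u)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=h)   # angular frequency grid
    uhat_abs = h * np.abs(np.fft.fft(u)) / np.sqrt(2 * np.pi)
    dxi = 2 * np.pi / (n * h)
    return np.sqrt(np.sum((1 + xi**2)**s * uhat_abs**2) * dxi)

x = np.linspace(-20, 20, 4096, endpoint=False)
u = np.exp(-x**2 / 2)
for s in (0.0, 0.5, 1.0):
    print(s, hs_norm(u, x[1] - x[0], s))
# s = 0 returns ||u||_{L^2} = pi^(1/4) ~ 1.3313; the norm grows with s.
\end{verbatim}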
\subsubsection{The duality relation between \texorpdfstring{$H^s(\mathbb{R}^n)$ and $H^{-s}(\mathbb{R}^n)$}{Hs(Rn) and H-s(Rn)}} \label{subsec:dual1} Where $R_s$ is the Riesz isomorphism $R_s:H^s(\mathbb{R}^n)\to (H^s(\mathbb{R}^n))^*$, the map $\mathcal{I}^s:=R_s \mathcal{J}_{-2s}$, from $H^{-s}(\mathbb{R}^n)$ to $(H^s(\mathbb{R}^n))^*$, is a unitary isomorphism, so $(H^{-s}(\mathbb{R}^n),\mathcal{I}^s)$ is a unitary realisation of $(H^s(\mathbb{R}^n))^*$, with the duality pairing given by \begin{align}\label{DualDef} \left\langle u, v \right\rangle_{s}:=\mathcal{I}^s u(v) =(\mathcal{J}_{-2s}u,v)_{H^s(\mathbb{R}^n)}=\left(\mathcal{J}_{-s} u,\mathcal{J}_s v\right)_{L^2(\mathbb{R}^n)} = \int_{\mathbb{R}^n}\hat{u}(\boldsymbol{\xi}) &\overline{\hat{v}(\boldsymbol{\xi})}\,\mathrm{d} \boldsymbol{\xi}, \end{align} for $u\in H^{-s}(\mathbb{R}^n)$ and $v\in H^s(\mathbb{R}^n)$. This unitary realisation of $(H^s(\mathbb{R}^n))^*$ is attractive because the duality pairing \rf{DualDef} is simply the $L^2(\mathbb{R}^n)$ inner product when $u,v\in \mathscr{S}(\mathbb{R}^n)$, and a continuous extension of that inner product for $u\in H^{-s}(\mathbb{R}^n)$, $v\in H^{s}(\mathbb{R}^n)$. Moreover, if $u\in H^{-s}(\mathbb{R}^n)$ and $v\in \mathscr{S}(\mathbb{R}^n)\subset H^s(\mathbb{R}^n)$, then $\langle u, v\rangle_s$ coincides with the action of the tempered distribution $u$ on $v\in \mathscr{S}(\mathbb{R}^n)$, since (recalling \eqref{FTDistDef}) % for $u\in H^{-s}(\mathbb{R}^n)$ and $v\in \mathscr{S}(\mathbb{R}^n)$ \begin{equation} \label{dualequiv} \langle u, v\rangle_s = (\mathcal{J}_{-s}u, \mathcal{J}_sv)_{L^2(\mathbb{R}^n)}= \mathcal{J}_{-s}u(\mathcal{J}_{s}v) = u(v). % \end{equation} \subsubsection{Sobolev spaces on closed and open subsets of \texorpdfstring{$\mathbb{R}^n$}{Rn}} \label{subsec:SobSpacesClosedOpen} Given $s\in \mathbb{R}$ and a closed set $F\subset \mathbb{R}^n$, we define % \begin{equation} \label{HsSubFDef} \mmbox{H_F^s :=\big\{u\in H^s(\mathbb{R}^n): \supp(u) \subset F\big\},} \end{equation} i.e.\ $H_F^s=\{u\in H^s{(\R^n)}: u(\varphi)=0\; \forall \varphi\in\mathscr{D}(F^c)\}$. Then $H_F^s$ is a closed subspace of $H^s(\mathbb{R}^n)$, so is a Hilbert space with respect to the inner product inherited from $H^s(\mathbb{R}^n)$. There are many different ways to define Sobolev spaces on a non-empty open subset $\Omega\subset \mathbb{R}^n$. We begin by considering three closed subspaces of $H^s(\mathbb{R}^n)$, which are all Hilbert spaces with respect to the inner product inherited from $H^s(\mathbb{R}^n)$. First, we have the space $H^s_{\overline{\Omega}}$, defined as in \rf{HsSubFDef}, i.e. \begin{equation*} % \mmbox{H_{\overline{\Omega}}^s := \big\{u\in H^s(\mathbb{R}^n): \supp(u) \subset \overline{\Omega}\big\}.} \end{equation*} Second, we consider % \begin{equation*} \mmbox{ \widetilde{H}^s(\Omega):=\overline{\mathscr{D}(\Omega)}^{H^s(\mathbb{R}^n)}.} \end{equation*} Third, for $s\geq 0$ another natural space to consider is (see also Remark \ref{rem:ZeroExtension})% \begin{align*} \mmbox{ \begin{aligned} \mathringbig{H}{}^s(\Omega) &:= \big\{u\in H^s(\mathbb{R}^n): u= 0 \mbox{ a.e. in } \Omega^c\big\}\\ &\,\,= \big\{u\in H^s(\mathbb{R}^n): m\big(\Omega^c\cap\supp{u}\big) = 0\big\}. \end{aligned} } \end{align*} These three closed subspaces of $H^s(\mathbb{R}^n)$ satisfy the inclusions \begin{align} \label{eqn:inclusions} \widetilde{H}^s(\Omega)\subset \mathringbig{H}{}^s(\Omega)\subset H^s_{\overline\Omega} \end{align} (with $\mathringbig{H}{}^s(\Omega)$ present only for $s\geq0$). 
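Before proceeding, a simple one-dimensional example may help to fix ideas (it is standard, and is a special case of Proposition \ref{prop:TildeSubscript}\rf{ts1} below). Take $n=1$, $\Omega=(-1,1)\setminus\{0\}$, and any $u\in \mathscr{D}\big((-1,1)\big)$ with $u(0)\neq0$. Then $\supp{u}\subset[-1,1]=\overline{\Omega}$ and $m\big(\Omega^c\cap\supp{u}\big)\leq m(\{0\})=0$, so $u\in \mathringbig{H}{}^s(\Omega)$ for every $s\geq0$. On the other hand, every $v\in \mathscr{D}(\Omega)$ satisfies $v(0)=0$, and since, by the Sobolev embedding theorem, convergence in $H^1(\mathbb{R})$ implies pointwise convergence of the continuous representatives, every element of $\widetilde{H}^1(\Omega)$ also vanishes at $0$; hence $u\notin\widetilde{H}^1(\Omega)$ and \begin{equation*} \widetilde{H}^1(\Omega)\subsetneqq \mathringbig{H}{}^1(\Omega). \end{equation*}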
If $\Omega$ is sufficiently smooth (e.g.\ $C^0$) then the three sets coincide (note that the $\Omega$ in the example just given is not $C^0$, since it lies on both sides of the boundary point $0$), but in general all three can be different (this issue will be investigated in \S\ref{subsec:3spaces}). Another way to define Sobolev spaces on $\Omega$ is by restriction from $H^s(\mathbb{R}^n)$. For $s\in\mathbb{R}$ let $$ \mmbox{H^s(\Omega):=\big\{u\in \mathscr{D}^*(\Omega): u=U|_\Omega \textrm{ for some }U\in H^s(\mathbb{R}^n)\big\},} $$ where $U|_\Omega$ denotes the restriction of the distribution $U$ to $\Omega$ in the standard sense \cite[p.~66]{McLean}. We can identify $H^s(\Omega)$ with the quotient space $H^s(\mathbb{R}^n)/H^s_{\Omega^c}$ through the bijection \begin{equation*} % q_s:H^s(\mathbb{R}^n)/H^s_{\Omega^c}\to H^s(\Omega) \quad\text{given by}\quad q_s(U+H^s_{\Omega^c}) = U|_\Omega, \quad U\in H^s(\mathbb{R}^n). \end{equation*} Recalling the discussion of quotient spaces in and below \eqref{qsn}, this allows us to endow $H^s(\Omega)$ with a Hilbert space structure (making $q_s$ a unitary isomorphism), with the inner product given by \begin{align*} (u,v)_{H^s(\Omega)} := (q_s^{-1}u,q_s^{-1}v)_{H^s(\mathbb{R}^n)/H^s_{\Omega^c}} & = (U+H^s_{\Omega^c},V+H^s_{\Omega^c})_{H^s(\mathbb{R}^n)/H^s_{\Omega^c}} \\ &= (Q_sU,Q_sV)_{H^s(\mathbb{R}^n)}, \end{align*} for $u,v\in H^s(\Omega)$, where $U,V\in H^s(\mathbb{R}^n)$ are such that $U|_\Omega = u$, $V|_\Omega = v$, and $Q_s$ is orthogonal projection from $H^s(\mathbb{R}^n)$ onto $(H^s_{{\Omega^c}})^\perp$, and the resulting norm given by% \begin{align} \mmbox{\|u\|_{H^{s}(\Omega)}=\|Q_sU\|_{H^s(\mathbb{R}^n)} = \min_{\substack{W\in H^s(\mathbb{R}^n)\\ W|_{\Omega}=u}}\normt{W}{H^{s}(\mathbb{R}^n)}.} \label{eq:InfNorm} \end{align} We can also identify $H^s(\Omega)$ with $(H^s_{\Omega^c})^\perp$, by the unitary isomorphism $q_s {Q_s}_/^{-1}:(H^s_{\Omega^c})^\perp\to H^s(\Omega)$, where ${Q_s}_/:H^s(\mathbb{R}^n)/H^s_{\Omega^c} \to (H^s_{\Omega^c})^\perp$ is the quotient map defined from $Q_s$, as in \S\ref{sec:hs}. In fact, it is easy to check that $q_s {Q_s}_/^{-1}$ is nothing but the restriction operator $|_\Omega$, so \begin{equation}\label{eq:RestrIsUnitary} |_\Omega :(H^s_{\Omega^c})^\perp\to H^s(\Omega) \qquad \text{is a unitary isomorphism} \end{equation} and the diagram in Figure \ref{fig:qsQs} commutes. This means we can study the spaces $H^s(\Omega)$ (which, a priori, consist of distributions on $\Omega$) by studying subspaces of $H^s{(\R^n)}$; this is convenient, e.g., when trying to compare $H^s(\Omega_1)$ and $H^s(\Omega_2)$ for two different open sets $\Omega_1,\Omega_2$; see \S\ref{subsec:DiffDoms}. \begin{figure}[tb!] \begin{center} \begin{tikzpicture} \matrix[matrix of math nodes, column sep={80pt}] (s) {&&|[name=quot]| H^s(\mathbb{R}^n)/H^s_{\Omega^c}\\ |[name=HR]| H^s(\mathbb{R}^n)&|[name=orth]| (H^s_{\Omega^c})^\perp&\\ &&|[name=HO]| H^s(\Omega) \\}; \draw[->,>=angle 60] (HR) edge node[auto] {\(Q_s\)} (orth); \draw[->,>=angle 60] (quot) edge node[auto,swap,pos=0.3] {\(Q_{s/}\)} (orth) (quot) edge node[auto] {\(q_s\)} (HO) (orth) edge node[auto,swap,pos=0.7] {% \(|_\Omega\)} (HO); \end{tikzpicture} \end{center} \caption{The maps between $H^s(\mathbb{R}^n)$ and $H^s(\Omega)$, for $s\in\mathbb{R}$ and an open $\Omega\subset\mathbb{R}^n$, as described in \S\ref{subsec:SobSpacesClosedOpen}.
All the maps depicted are unitary isomorphisms except $Q_s$, which is an orthogonal projection, and this diagram commutes.\label{fig:qsQs}} \end{figure} Clearly% \[\mathscr{D}(\overline\Omega):=\big\{u\in C^\infty(\Omega):u=U|_\Omega \textrm{ for some }U\in\mathscr{D}(\mathbb{R}^n)\big\}\] is a dense subspace of $H^s(\Omega)$, since $\mathscr{D}(\mathbb{R}^n)$ is dense in $H^s(\mathbb{R}^n)$. The final space we introduce in this section is the closed subspace of $H^s(\Omega)$ defined by \begin{equation}\label{eq:Hs0} \mmbox{H^s_0(\Omega):=\overline{\mathscr{D}(\Omega)\big|_\Omega}^{H^s(\Omega)}.} \end{equation} $\widetilde{H}^s(\Omega)$ and $H^s_0(\Omega)$ are defined as closures in certain norms of $\mathscr{D}(\Omega)$ and $\mathscr{D}(\Omega)|_\Omega$, respectively, so that the former is a subspace of $H^s(\mathbb{R}^n)\subset \mathscr{S}^*(\mathbb{R}^n)$ and the latter of $H^s(\Omega)\subset\mathscr{S}^*(\mathbb{R}^n)|_\Omega \subset\mathscr{D}^*(\Omega)$. For $s>1/2$ and sufficiently uniformly smooth $\Omega$, both $\widetilde{H}^s{(\Omega)}$ and $H^s_0{(\Omega)}$ consist of functions with ``zero trace'' (see \cite[Theorem~3.40]{McLean} for the case when $\partial \Omega$ is bounded), but this intuition fails for negative $s$: if $\mathbf{x}_0\in\partial\Omega$, then the delta function $\delta_{\mathbf{x}_0}$ lies in $\widetilde{H}^s{(\Omega)}$ for $s<-n/2$, irrespective of the regularity of $\partial\Omega$; see the proof of Corollary~\ref{cor:Hs0HsEqual}(iv) below. \begin{rem} \label{rem:ZeroExtension} We note that for $s\geq 0$ the restriction of $\mathringbig{H}{}^s(\Omega)$ to $\Omega$ is precisely the subspace (not necessarily closed) \begin{align*}% \Hze^s(\Omega):= \big\{u\in H^s(\Omega): \uze\in H^s(\mathbb{R}^n)\big\}\subset H^s(\Omega), \end{align*} where $\uze$ is the extension of $u$ from $\Omega$ to $\mathbb{R}^n$ by zero. The restriction operator $|_\Omega:\mathringbig{H}{}^s(\Omega) \to \Hze^s(\Omega)$ is clearly a bijection for all $s\geq 0$, with inverse given by the map $u\mapsto \uze$, and if $\Hze^s(\Omega)$ is equipped with the norm $\|u\|_{\Hze^s(\Omega)}:=\|\uze\|_{H^s(\mathbb{R}^n)}$ (as in e.g.\ \cite[Equation~(1.3.2.7)]{Gri}, where $\Hze^s(\Omega)$ is denoted $\tilde W_2^s(\Omega)$) then $|_\Omega:\mathringbig{H}{}^s(\Omega) \to \Hze^s(\Omega)$ is trivially a unitary isomorphism for all $s\geq 0$. \end{rem} For clarity, we repeat a fundamental fact: the natural norm on $H^s_F$, $\widetilde{H}^s(\Omega)$, $\mathringbig{H}{}^s{(\Omega)}$ and $H^s_{\overline{\Omega}}$ is the $H^s{(\R^n)}$-norm (defined in \eqref{eq:HsProdNorm}), while the norm on $H^s{(\Omega)}$ and $H^s_0{(\Omega)}$ is the minimal $H^s{(\R^n)}$-norm among the extensions of $u\in H^s{(\Omega)}$ to $\mathbb{R}^n$ (defined in \eqref{eq:InfNorm}). \subsection{Dual spaces} \label{subsec:DualAnnih} In this section we construct concrete unitary realisations (as Sobolev spaces) of the duals of the Sobolev spaces defined in \S\ref{subsec:SobolevDef}. Our constructions are based on the abstract Hilbert space result of Lemma \ref{lem:hs_orth}, and are valid for any non-empty open set $\Omega\subset\mathbb{R}^n$, irrespective of its regularity. 
We first note the following lemma, which characterises the annihilators (as defined in \rf{eq:AnnihilatorDef}) of the subsets $\widetilde{H}^s(\Omega)$ and $H^s_{\Omega^c}$ of $H^s(\mathbb{R}^n)$, with $(H^s(\mathbb{R}^n))^*$ realised as $H^{-s}(\mathbb{R}^n)$ through the unitary isomorphism $\mathcal{I}^s=R_s\mathcal{J}_{-2s}$ (see \S\ref{subsec:dual1}) with associated duality pairing \rf{DualDef}. \begin{lem} \label{lem:orth_lem} Let $\Omega$ be any non-empty open subset of $\mathbb{R}^n$, and $s\in\mathbb{R}$. % Then \begin{align} \label{eqn:anni} H^{-s}_{\Omega^c} = \left(\widetilde{H}^{s}(\Omega)\right)^{a,H^{-s}(\mathbb{R}^n)} \qquad \textrm{ and } \qquad \widetilde{H}^{-s}(\Omega) = \left(H^s_{\Omega^c}\right)^{a,H^{-s}(\mathbb{R}^n)}. \end{align} Furthermore, the Bessel potential operator is a unitary isomorphism between the following pairs of subspaces: \[\mathcal{J}_{2s}:\widetilde{H}^s(\Omega)\to(H^{-s}_{\Omega^c})^{\perp} \qquad \textrm{and} \qquad \mathcal{J}_{2s}: H^s_{\Omega^c} \to (\widetilde{H}^{-s}{(\Omega)})^{\perp}.\] \end{lem} \begin{proof} From the definition of the support of a distribution, \eqref{dualequiv}, the definition of $\widetilde{H}^{s}(\Omega)$, and the continuity of the sesquilinear form $\langle\cdot,\cdot\rangle_s$, it follows that, for $s\in \mathbb{R}$, \begin{align*} H^{-s}_{\Omega^c} &= \{u\in H^{-s}(\mathbb{R}^n): \supp(u)\subset {\Omega^c}\} \\ &= \{u\in H^{-s}(\mathbb{R}^n): u(v) = 0 \mbox{ for all } v\in \mathscr{D}(\Omega)\}\\ & = \{u\in H^{-s}(\mathbb{R}^n): \langle u,v\rangle_s = 0 \mbox{ for all } v\in \mathscr{D}(\Omega)\} = \left(\widetilde{H}^{s}(\Omega)\right)^{a,H^{-s}(\mathbb{R}^n)}, \end{align*} which proves the first statement in \rf{eqn:anni}. The second statement in \rf{eqn:anni} follows immediately from the first, after replacing $s$ by $-s$, by \rf{eq:AnnihilatorResult}. The final statement of the lemma also follows by \eqref{eq:AnnihilatorResult}, noting that $j$ in \eqref{eq:AnnihilatorResult} is given explicitly as $j=(\mathcal{I}^s)^{-1}R_s = \mathcal{J}_{2s}$. \end{proof} Combining Lemma~\ref{lem:orth_lem} with Lemmas \ref{dual_lem} and \ref{lem:hs_orth} gives unitary realisations for $(\widetilde{H}^s(\Omega))^*$ and $(H^{-s}(\Omega))^*$, expressed in Theorem~\ref{thm:DualityTheorem} below. These unitary realisations, precisely the result that the operators $\mathcal{I}_s$ and $\mathcal{I}_s^*$ in \rf{def_embed1App} are unitary isomorphisms, are well known when $\Omega$ is sufficiently regular. For example, in \cite[Theorem~3.30]{McLean} and in \cite[Theorem~2.15]{Steinbach} the result is claimed for $\Omega$ Lipschitz with bounded boundary. (In fact, \cite[Theorems~3.14 and 3.29(ii)]{McLean} together imply the result when $\Omega$ is $C^0$ with bounded boundary, but this is not highlighted in \cite{McLean}.) However, it is not widely appreciated, at least in the numerical PDEs community, that this result holds without any constraint on the geometry of $\Omega$. \begin{thm}\label{thm:DualityTheorem} Let $\Omega$ be any non-empty open subset of $\mathbb{R}^n$, and $s\in\mathbb{R}$. 
Then % \begin{align} \label{isdual} H^{-s}(\Omega)\cong_{\mathcal{I}_s} \big(\widetilde{H}^s(\Omega)\big)^* \; \mbox{ and }\; \widetilde{H}^{s}(\Omega)\cong_{\mathcal{I}_s^*}\big(H^{-s}(\Omega)\big)^*, \end{align} where $\mathcal{I}_s:H^{-s}(\Omega)\to(\widetilde{H}^s(\Omega))^*$ and $\mathcal{I}_s^*:\widetilde{H}^{s}(\Omega)\to(H^{-s}(\Omega))^*$, defined by \begin{align} \label{def_embed1App} \mathcal{I}_s u (v)= \langle U,v \rangle_{s} \;\; \mbox{ and } \;\; \mathcal{I}_s^*v(u) = \langle v,U \rangle_{-s}, % \quad \mbox{ for } u\in H^{-s}(\Omega), \,v\in\widetilde{H}^s(\Omega), \end{align} where $U\in H^{-s}(\mathbb{R}^n)$ denotes \textit{any} extension of $u$ with $U|_\Omega=u$, % are unitary isomorphisms. Furthermore, the associated duality pairings % \begin{equation*} % \langle u,v \rangle_{H^{-s}(\Omega)\times \widetilde{H}^{s}(\Omega)} := \mathcal{I}_s u(v) \qquad \mbox{ and } \qquad \langle v,u \rangle_{\widetilde{H}^{s}(\Omega)\times {H}^{-s}(\Omega)}:= \mathcal{I}_s^*v(u), \end{equation*} satisfy $$ \langle v,u \rangle_{\widetilde{H}^{s}(\Omega)\times H^{-s}(\Omega)} = \overline{\langle u,v \rangle}_{H^{-s}(\Omega)\times \widetilde{H}^s(\Omega)}, \quad v\in \widetilde{H}^{s}(\Omega), \; u\in H^{-s}(\Omega). $$ \end{thm} \begin{proof} By Lemma~\ref{lem:orth_lem}, it follows from Lemma \ref{lem:hs_orth}, applied with $H=H^s(\mathbb{R}^n)$, $\mathcal{H} = H^{-s}(\mathbb{R}^n)$ and $V=\widetilde{H}^s(\Omega)$, that $\hat{\mathcal{I}}_s:(H^{-s}_{\Omega^c})^\perp \to (\widetilde{H}^s(\Omega))^*$, defined by $\hat{\mathcal{I}}_s u(v) = \langle u,v\rangle_s$, is a unitary isomorphism. By Lemma \ref{dual_lem}, $\hat{\mathcal{I}}_s^*:\widetilde{H}^s(\Omega) \to ((H^{-s}_{\Omega^c})^\perp)^*$, defined by $\hat{\mathcal{I}}_s^* v(u) = \langle v,u\rangle_{-s} =\overline{\hat{\mathcal{I}}_s u(v)}$ is also a unitary isomorphism. Thus the dual space of $\widetilde{H}^s(\Omega)$ can be realised in a canonical way % by $(H^{-s}_{\Omega^c})^\perp$, and vice versa. But we can say more. Since (cf.\ \eqref{eq:RestrIsUnitary}) the restriction operator $|_\Omega$ is a unitary isomorphism from $(H^{-s}_{\Omega^c})^\perp$ onto $H^{-s}(\Omega)$, the composition $\mathcal{I}_s:=\hat{\mathcal{I}}_s(|_\Omega)^{-1}:H^{-s}(\Omega) \to (\widetilde{H}^s(\Omega))^*$ is a unitary isomorphism. And, again by Lemma \ref{dual_lem}, $\mathcal{I}_s^*:\widetilde{H}^s(\Omega) \to (H^{-s}(\Omega))^*$, defined by $\mathcal{I}_s^* v(u) := \overline{\mathcal{I}_s u(v)}$ is also a unitary isomorphism. Hence we can realise the dual space of $\widetilde{H}^s(\Omega)$ by $H^{-s}(\Omega)$, and vice versa. Moreover, it is easy to check that $\mathcal{I}_s$ and $\mathcal{I}_s^*$ can be evaluated as in \rf{def_embed1App}. Thus $\mathcal{I}_s$ and $\mathcal{I}_s^*$ coincide with the natural embeddings of $H^{-s}(\Omega)$ and $\widetilde{H}^{s}(\Omega)$ into $(\widetilde{H}^s(\Omega))^*$ and $(H^{-s}(\Omega))^*$, respectively (as in e.g.\ \cite[Theorem 3.14]{McLean}). \end{proof} \begin{cor}\label{cor:DualityTheorem2} Let $F$ be any closed subset of $\mathbb{R}^n$ (excepting $\mathbb{R}^n$ itself), and $s\in\mathbb{R}$. 
Then \begin{align*}% \big(\widetilde{H}^{-s}(F^c)\big)^\perp\cong_{\tilde{\mathcal{I}}_s}(H^s_{F})^* \; \mbox{ and }\; H^s_{F} \cong_{\tilde{\mathcal{I}}_s^*}\Big( \big(\widetilde{H}^{-s}(F^c)\big)^\perp\Big)^*, \end{align*} where $\tilde{\mathcal{I}}_s: (\widetilde{H}^{-s}(F^c))^\perp \to (H^s_{F})^*$ and $\tilde{\mathcal{I}}_s^*:H^s_{F} \to ( (\widetilde{H}^{-s}(F^c))^\perp)^*$, defined by \begin{align*}% \tilde{\mathcal{I}}_s u(v):=\langle u,v\rangle_s, \; \mbox{ and }\; \tilde{\mathcal{I}}_s^* v(u) = \langle v,u\rangle_{-s} =\overline{\tilde{\mathcal{I}}_s u(v)}, \end{align*} for $u\in \big(\widetilde{H}^{-s}(F^c)\big)^\perp$ and $v\in H^s_{F}$, are unitary isomorphisms. \end{cor} \begin{proof} Setting $\Omega:=F^c$, the result follows from Theorem \ref{thm:DualityTheorem} and its proof and Remark \ref{rem:orth}. \end{proof} \begin{rem} It is also possible to realise $(\widetilde{H}^s(\Omega))^*$ and $(H^s_{F})^*$ using quotient spaces, by composition of $\hat{\mathcal{I}}_s$ and $\tilde{\mathcal{I}}_s$ with the appropriate quotient maps. For example, $(\widetilde{H}^s(\Omega))^*$ can be realised as $(H^{-s}(\mathbb{R}^n)/H^{-s}_{\Omega^c},\check{\mathcal{I}}_s)$, where $\check{\mathcal{I}}_s=\hat{\mathcal{I}}_s {Q_{-s}}_/ = \mathcal{I}_s q_{-s}$, and $q_s$ and ${Q_s}_/$ are defined as in \S\ref{subsec:SobSpacesClosedOpen}. \end{rem} \begin{rem} \label{rem:DualOfCircle} Corollary \ref{cor:DualityTheorem2}, coupled with Remark \ref{rem:orth} or with the results in the proof of Theorem \ref{thm:DualityTheorem}, implies that, for a non-empty open set $\Omega$, $(\widetilde{H}^s(\Omega))^*$ and $(H^s_{\overline{\Omega}})^*$ can be canonically realised as subspaces of $H^{-s}(\mathbb{R}^n)$, namely as $(H^{-s}_{\Omega^c})^\perp$ and $(\widetilde{H}^{-s}(\overline{\Omega}^c))^\perp$ respectively. For $s\geq 0$, we know that $(\mathringbig{H}{}^s{(\Omega)})^*$ can similarly be realised as the subspace $(X^{-s}(\Omega))^\perp\subset H^{-s}(\mathbb{R}^n)$, where $\widetilde{H}^{-s}(\overline{\Omega}^c)\subset X^{-s}(\Omega):=(\mathringbig{H}{}^s{(\Omega)})^{a,H^{-s}(\mathbb{R}^n)}\subset H^{-s}_{\Omega^c}$. But, as far as we know, providing an explicit description of the space $X^{-s}(\Omega)\subset H^{-s}(\mathbb{R}^n)$ is an open problem. \end{rem} The following lemma realises the dual space of $H^s_0(\Omega)\subset H^s(\Omega)$ as a subspace of $\widetilde{H}^{-s}(\Omega)$. \begin{lem} \label{lem:Hs0Dual} Let $\Omega$ be any non-empty open subset of $\mathbb{R}^n$ and $s\in\mathbb{R}$. Then the dual space of $H^s_0(\Omega)$ can be unitarily realised as $(\widetilde{H}^{-s}(\Omega) \cap H^{-s}_{\partial\Omega})^{\perp, \widetilde{H}^{-s}(\Omega)}$, with the duality pairing inherited from ${\widetilde{H}}^{-s}(\Omega) \times H^s(\Omega)$. \end{lem} \begin{proof} Since $H^s_0(\Omega)$ is a closed subspace of $H^s(\Omega)$, by Lemma \ref{lem:hs_orth} $(H^s_0(\Omega))^*$ can be unitarily realised as a closed subspace of $(H^s(\Omega))^*$, which we identify with $\widetilde{H}^{-s}(\Omega)$ using the operator $\mathcal{I}^*_{-s}$ of Theorem~\ref{thm:DualityTheorem}. 
Explicitly, $(H^s_0(\Omega))^*$ is identified with the orthogonal complement of the annihilator of $H^s_0(\Omega)$ in $\widetilde{H}^{-s}(\Omega)$, which annihilator satisfies \begin{align*} H^s_0(\Omega)^{a,\widetilde{H}^{-s}(\Omega)} &=\big(\mathscr{D}(\Omega)|_\Omega\big)^{a,\widetilde{H}^{-s}(\Omega)} =\widetilde{H}^{-s}(\Omega) \cap\big(\mathscr{D}(\Omega)\big)^{a,H^{-s}(\mathbb{R}^n)}\\ &=\widetilde{H}^{-s}(\Omega)\cap H^{-s}_{\Omega^c} =\widetilde{H}^{-s}(\Omega) \cap H^{-s}_{\partial\Omega}. \end{align*} \end{proof} \begin{table}[tb!] \begin{center}\begin{tabular}{c|c|c} The dual of & is isomorphic to & via the isomorphism\\ \hline &&\\[-3mm] $H^s(\mathbb{R}^n)$ & $H^{-s}(\mathbb{R}^n)$ & $\mathcal{I}^s$\\ $\widetilde{H}^s(\Omega)$ & $(H^{-s}_{\Omega^c})^\perp$ & $\hat{\mathcal{I}}_s$\\ % & $H^{-s}(\Omega)$ & $\mathcal{I}_s% $\\ &$H^{-s}(\mathbb{R}^n)/(H^{-s}_{\Omega^c})% $ & $\check{\mathcal{I}}_s% $\\ $H^s(\Omega)$ & $\widetilde{H}^{-s}(\Omega)$ & $\mathcal{I}_{-s}^*$\\ $H^{s}_{\Omega^c}$ & $(\widetilde{H}^{-s}(\Omega))^\perp$ &$\tilde{\mathcal{I}}_s$\\ $(H^{s}_{\Omega^c})^\perp$ & $\widetilde{H}^{-s}(\Omega)$ &$\hat{\mathcal{I}}_{-s}^*$\\ $\big(\widetilde{H}^{s}(\Omega)\big)^\perp$ & $H^{-s}_{\Omega^c}$ & $\tilde{\mathcal{I}}_{-s}^*$\\ $H^s_0(\Omega)$ & $(\widetilde{H}^{-s}(\Omega) \cap H^{-s}_{\partial\Omega})^{\perp, \widetilde{H}^{-s}(\Omega)}$ \end{tabular}\end{center} \caption{A summary of the duality relations proved in \S\ref{subsec:dual1} and \S\ref{subsec:DualAnnih}.\label{tab:duals}} \end{table} \begin{figure}[htb!] \begin{center} \begin{tikzpicture} \hspace{-15mm} \matrix[matrix of math nodes, column sep={12pt}, row sep={40pt,between origins}, % text height=1.5ex, text depth=0.25ex] (s) { |[name=DO]| \mathscr{D}(\Omega)& |[name=DoO]| &% |[name=DR]| \mathscr{D}(\mathbb{R}^n)& |[name=SR]| \mathscr{S}(\mathbb{R}^n)&& |[name=L2R]| L^2(\mathbb{R}^n) \\ |[name=tH]| \widetilde{H}^s(\Omega) & |[name=cH]| \mathringbig{H}{}^s(\Omega) & |[name=sH]| H^s_{\overline{\Omega}} & |[name=HR]| \hspace{3mm}H^s(\mathbb{R}^n)\hspace{2mm} =\hspace{-5mm} & |[name=Pr]| (H^s_{\Omega^c})^\perp \; \oplus \; H^s_{\Omega^c} & |[name=SsR]| \mathscr{S}^*(\mathbb{R}^n) \\ & |[name=H0O]| H^s_0(\Omega) && |[name=HO]| H^s(\Omega) & |[name=DsO]| \mathscr{D}^*(\Omega) & \\ |[name=HOs]| \big(H^{-s}(\Omega)\big)^* & |[name=HmR]| H^{-s}(\mathbb{R}^n)& |[name=HRs]| \big(H^{-s}(\mathbb{R}^n)\big)^* & |[name=tHOs]| \big(\widetilde{H}^{-s}(\Omega)\big)^*&& |[name=QOs]| % \hspace{-2mm}\big((\widetilde{H}^{-s}(\Omega))^\perp\big)^* \\ }; \draw[right hook->] (DO) edge (DR) % % (DR) edge (SR) (SR) edge (L2R) (tH) edge (cH) (cH) edge (sH) (sH) edge (HR) (Pr) edge (SsR) % (HO) edge (DsO) (H0O) edge (HO) (DO) edge node[auto] {\(\iota\)} (tH) (SR) edge node[auto] {\(\iota\)} (HR) (L2R) edge node[auto] {\(\iota\)} (SsR) (tH) edge node[swap] {\(\hs{-8}|_\Omega\)} (H0O) % ; \draw[->>] (HR) edge node[auto] {\(\hs{-0.7}|_\Omega\)} (HO) ; \draw[right hook->>] % (tH) edge node[auto] {\(\mathcal{I}_{-s}^*\)} (HOs) (HO) edge node[auto,swap] {\(\mathcal{I}_{-s}\)} (tHOs) % % % (Pr.south east) to node[auto,pos=0.5] {\(\tilde{\mathcal{I}}_s^*\hs{-2}\)} (QOs) % % (HmR) edge node[auto,swap] {\(R_{-s}\)} (HRs) ; \draw[left hook->>] % (Pr) edge node[pos=0.5] {\(\hs{-10}|_\Omega\)} (HO) % % (HR) edge node[pos=0.77] {\(\hs{9}\mathcal{I}^{-s}\)}(HRs) (HR) edge node[pos=0.77] {\(\hs{11}\mathcal{J}_{2s}\)} (HmR) (Pr) edge node[pos=0.77] {\(\hs{12}\hat{\mathcal{I}}_{-s}\)} (tHOs) % ; \end{tikzpicture} \end{center} \caption{A representation, 
as a commutative diagram, of the relationships between the Sobolev spaces and the isomorphisms between them described in \S\ref{subsec:SobolevDef} and \S\ref{subsec:DualAnnih}. Here $s\in\mathbb{R}$, $\Omega\subset\mathbb{R}^n$ is open, ${\Omega^c}:=\mathbb{R}^n\setminus\Omega$, $\hookrightarrow$ denotes an embedding, $\twoheadrightarrow$ a surjective mapping, % $\hookdoubleheadrightarrow$ a unitary isomorphism, and $\iota$ denotes the standard identification of Lebesgue functions with distributions, namely $\iota:L^2{(\R^n)}\to\mathscr{S}^*{(\R^n)}$, with $\iota u(v):=(u,v)_{L^2{(\R^n)}}$, for $u\in L^2{(\R^n)}$, $v\in\mathscr{S}{(\R^n)}$. Note that $\mathringbig{H}{}^s(\Omega)$ is defined only when $s\ge0$, see \S\ref{subsec:3spaces}. In this diagram the first row contains spaces of functions, the second distributions on $\mathbb{R}^n$, and the third distributions on $\Omega$. \label{fig:Sobolev}} \end{figure} \subsection{\texorpdfstring{$s$}{s}-nullity} \label{subsec:Polarity} In order to compare Sobolev spaces defined on different open sets (which we do in \S\ref{subsec:DiffDoms}), and to study the relationship between the different spaces (e.g.\ $\widetilde{H}^s(\Omega)$, $\mathringbig{H}{}^s(\Omega)$ and $H^s_{\overline{\Omega}}$) on a given open set $\Omega$ (which we do in \S\ref{subsec:3spaces}), we require the concept of $s$-nullity of subsets of $\mathbb{R}^n$. % \begin{defn} For $s\in\mathbb{R}$ we say that a set $E\subset\mathbb{R}^n$ is $s$-null if there are no non-zero elements of $H^{s}(\mathbb{R}^n)$ supported entirely in $E$ (equivalently, if $H^{s}_{F}=\{0\}$ for every closed set $F\subset E$). \end{defn} \noindent We make the trivial remark that if $F$ is closed then $F$ is $s$-null if and only if $H^s_F=\{0\}$. \begin{rem}\label{rem:polarity} While the terminology ``$s$-null'' is our own, the concept it describes has been studied previously, apparently first by H\"{o}rmander and Lions in relation to properties of Sobolev spaces normed by Dirichlet integrals \cite{HoLi:56}, and then subsequently by other authors in relation to the removability of singularities for elliptic partial differential operators \cite{Li:67a,Maz'ya}, and to the approximation of functions by solutions of the associated elliptic PDEs \cite{Po:72}. For integer $s<0$, $s$-nullity is referred to as $(-s)$-polarity in \cite[Definition 2]{HoLi:56}, ``$2$-$(-s)$ polarity'' in \cite{Li:67a} and ``$(2,-s)$-polarity'' in \cite[\S 13.2]{Maz'ya}. For $s>0$ and $E$ closed, $s$-nullity coincides with the concept of ``sets of uniqueness'' for $H^s(\mathbb{R}^n)$, as considered in \cite[\S11.3]{AdHe} and \cite[p.~692]{Maz'ya}. For $s>0$ and $E$ with empty interior, $s$-nullity coincides with the concept of $(s,2)$-stability, discussed in \cite[\S11.5]{AdHe}. For a more detailed comparison with the literature see \cite[\S2.2]{HewMoi:15}. \end{rem} To help us throughout the paper interpret characterisations in terms of $s$-nullity, the following lemma collects useful results relating $s$-nullity to topological and geometrical properties of a set. The results in Lemma \ref{lem:polarity} are a special case of those recently presented in \cite{HewMoi:15} (where $s$-nullity is called $(s,2)$-nullity) in the more general setting of the Bessel potential spaces $H^{s,p}(\mathbb{R}^n)$, $s\in\mathbb{R}$, $1<p<\infty$. 
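As an illustration of how results of this kind are proved (a standard argument, recorded here for orientation), consider the case of a single point, which underlies parts \rf{jj} and \rf{ii} of Lemma \ref{lem:polarity} below. If $s<-n/2$ and $\mathbf{x}_0\in E$ then $0\neq\delta_{\mathbf{x}_0}\in H^s_{\{\mathbf{x}_0\}}$ by \eqref{eq:delta}, so no non-empty set is $s$-null for $s<-n/2$. Conversely, by a classical structure theorem every distribution supported in $\{\mathbf{x}_0\}$ is a finite linear combination $u=\sum_{|\boldsymbol\alpha|\leq m}c_{\boldsymbol\alpha}\,\partial^{\boldsymbol\alpha}\delta_{\mathbf{x}_0}/\partial \mathbf{x}^{\boldsymbol\alpha}$, for which $|\hat{u}(\boldsymbol{\xi})|=(2\pi)^{-n/2}|P(\boldsymbol{\xi})|$ with $P(\boldsymbol{\xi}):=\sum_{|\boldsymbol\alpha|\leq m}c_{\boldsymbol\alpha}({\mathrm{i}}\boldsymbol{\xi})^{\boldsymbol\alpha}$, and \begin{equation*} \int_{\mathbb{R}^n}(1+|\boldsymbol{\xi}|^2)^{s}|P(\boldsymbol{\xi})|^2\,\mathrm{d} \boldsymbol{\xi}=\infty \qquad \text{whenever } P\neq0 \text{ and } s\geq-n/2 \end{equation*} (as one sees using polar coordinates and the top-order homogeneous part of $P$). Hence $H^s_{\{\mathbf{x}_0\}}=\{0\}$, i.e.\ $\{\mathbf{x}_0\}$ is $s$-null, for $s\geq-n/2$.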
Many results in \cite{HewMoi:15} are derived using the equivalence between $s$-nullity and the vanishing of certain set capacities from classical potential theory, drawing heavily on results in \cite{AdHe} and \cite{Maz'ya}. \cite{HewMoi:15} also contains a number of concrete examples and counterexamples illustrating the general results. Regarding point \rf{gg0} of the lemma, following \cite[\S3]{Triebel97FracSpec}, given $0\leq d\leq n$ we call a closed set $F\subset \mathbb{R}^n$ with ${\rm dim_H}(F)=d$ a $d$-set if there exist constants $c_1,c_2>0$ such that \begin{align} \label{eq:dSet} 0<c_1 r^d \leq \mathcal H^d(B_r(\mathbf{x})\cap F) \leq c_2 r^d<\infty, \qquad \textrm{for all } \mathbf{x}\in F, \; 0<r<1, \end{align} where $\mathcal H^d$ is the $d$-dimensional Hausdorff measure on $\mathbb{R}^n$. Condition \eqref{eq:dSet} may be understood as saying that $d$-sets are everywhere locally $d$-dimensional. Note that the definition of $d$-set includes as a special case all Lipschitz $d$-dimensional manifolds, $d\in\{0,1,\ldots, n\}$. \begin{lem}[\cite{HewMoi:15}] \label{lem:polarity} Let $E,E'\subset \mathbb{R}^n$ be arbitrary, $\Omega\subset \mathbb{R}^n$ be non-empty and open, and $s\in\mathbb{R}$. \begin{enumerate}[(i)] \item \label{aa}If $E$ is $s$-null and $E'\subset E$ then $E'$ is $s$-null. \item \label{bb}If $E$ is $s$-null and $t>s$ then $E$ is $t$-null. \item \label{cc}If $E$ is $s$-null then ${\rm int}(E)=\emptyset$. \item \label{dd}If $s>n/2$ then $E$ is $s$-null if and only if ${\rm int}(E)=\emptyset$. \item \label{ll} Let $E$ be $s$-null and let $F\subset\mathbb{R}^n$ be closed and $s$-null. Then $E\cup F$ is $s$-null. \item \label{ll0} If $s\leq 0$ then a countable union of Borel $s$-null sets is $s$-null. \item \label{ee}If $s\geq0$ and $E$ is Lebesgue-measurable with $m(E)=0$, then $E$ is $s$-null. \item \label{ff}If $E$ is Lebesgue-measurable then $E$ is $0$-null if and only if $m(E)=0$. \item \label{mm} There exists a compact set $K\subset\mathbb{R}^n$ with ${\rm int}(K)=\emptyset$ and $m(K)>0$, which is not $s$-null for any $s\leq n/2$. \item \label{jj}If $s<-n/2$ there are no non-empty $s$-null sets. \item \label{ii} A non-empty countable set is $s$-null if and only if $s\ge-n/2$. \item \label{hh} If $-n/2<s\leq 0$ and ${\rm dim_H}(E)< n+2s$, then $E$ is $s$-null. \item \label{gg} If $-n/2\leq s<0$ and $E$ is Borel and $s$-null, then ${\rm dim_H}(E)\leq n+2s$. \item \label{gg1} For each $0\leq d\leq n$ there exist compact sets $K_1,K_2\subset \mathbb{R}^n$ with ${\rm dim_H}(K_1)$ $={\rm dim_H}(K_2)=d$, such that $K_1$ is $(d-n)/2$-null and $K_2$ is not $(d-n)/2$-null. \item \label{gg0} If $0<d<n$ and $F\subset \mathbb{R}^n$ is a compact $d$-set, or a $d$-dimensional hyperplane (in which case $d$ is assumed to be an integer) then $F$ is $(d-n)/2$-null. \item \label{kk1} If ${\rm int}(\Omega^c)\neq \emptyset$, then ${\partial\Omega}$ is not $s$-null for $s<-1/2$. (In particular this holds if $\Omega\neq \mathbb{R}^n$ is $C^0$.) \item \label{kk0} If $\Omega$ is $C^0$ and $s\geq 0$, then $\partial\Omega$ is $s$-null. Furthermore, for $n\geq2$ there exists a bounded $C^0$ open set whose boundary is not $s$-null for any $s<0$. \item \label{kk2} If $\Omega$ is $C^{0,\alpha}$ for some $0<\alpha<1$ and $s> -\alpha/2$, then $\partial\Omega$ is $s$-null. Furthermore, for $n\geq2$ there exists a bounded $C^{0,\alpha}$ open set whose boundary is not $s$-null for any $s<-\alpha/2$. 
\item \label{kk} If $\Omega$ is Lipschitz then $\partial\Omega$ is $s$-null if and only if $s\geq -1/2$. \end{enumerate} \end{lem} \subsection{Equality of spaces defined on different subsets of \texorpdfstring{$\mathbb{R}^n$}{Rn}}\label{subsec:DiffDoms} The concept of $s$-nullity defined in \S\ref{subsec:Polarity} provides a characterisation of when Sobolev spaces defined on different open or closed sets are or are not equal. For two subsets $E_1$ and $E_2$ of $\mathbb{R}^n$ we use the notation $E_1\ominus E_2$ to denote the symmetric difference between $E_1$ and $E_2$, i.e. \begin{align*}% E_1\ominus E_2:=(E_1\setminus E_2)\cup( E_2\setminus E_1)=(E_1\cup E_2)\setminus (E_1\cap E_2). \end{align*} The following elementary result is a special case of \cite[Proposition 2.11]{HewMoi:15}. \begin{thm}[{\cite[Proposition 2.11]{HewMoi:15}}] \label{thm:Hs_equality_closed} Let $F_1,F_2$ be closed subsets of $\mathbb{R}^n$, and let $s\in\mathbb{R}$. Then the following statements are equivalent: \begin{enumerate}[(i)] \item \label{a}$F_1\ominus F_2$ is $s$-null. \item \label{b} $F_1\setminus F_2$ and $ F_2\setminus F_1$ are both $s$-null. \item \label{c} $H^s_{F_1}=H^s_{F_2}$. \end{enumerate} \end{thm} By combining Theorem \ref{thm:Hs_equality_closed} with the duality result of Theorem \ref{thm:DualityTheorem} one can deduce a corresponding result about spaces defined on open subsets. The following theorem generalises \cite[Theorem~13.2.1]{Maz'ya}, which concerned the case $\Omega_1\subset\Omega_2=\mathbb{R}^n$. The special case where $\mathbb{R}^n\setminus\Omega_1$ is a $d$-set was considered in \cite{Tri:08}. (That result was used in \cite{HewMoi:15} to prove item~\rf{gg0} in Lemma~\ref{lem:polarity} above.) \begin{thm} \label{thm:Hs_equality_open} Let $\Omega_1,\Omega_2$ be non-empty, open subsets of $\mathbb{R}^n$, and let $s\in\mathbb{R}$. Then the following statements are equivalent: \begin{enumerate}[(i)] \item \label{a0}$\Omega_1\ominus\Omega_2$ is $s$-null. \item \label{b0}$\Omega_1\setminus\Omega_2$ and $\Omega_2\setminus\Omega_1$ are both $s$-null. \item \label{c0} $H^{s}(\Omega_1)=H^{s}(\Omega_2)$, in the sense that $\!\big( H^s_{\Omega_1^c}\big)^\perp \! =\!\big( H^s_{\Omega_2^c}\big)^\perp \!$ (recall from \eqref{eq:RestrIsUnitary} that $(H^s_{\Omega^c})^\perp\cong H^s(\Omega)$ for any non-empty open $\Omega\subset \mathbb{R}^n$). \item \label{d0} $\widetilde{H}^{-s}(\Omega_1)=\widetilde{H}^{-s}(\Omega_2)$. \end{enumerate} \end{thm} \begin{proof} The result follows from Theorem \ref{thm:DualityTheorem} and Theorem \ref{thm:Hs_equality_closed} with $F_j:=(\Omega_j)^c$, $j=1,2$. \end{proof} \begin{rem} For non-empty open $\Omega_1,\Omega_2\subset \mathbb{R}^n$, the set $\Omega_1\ominus\Omega_2$ has empty interior if and only if $\overline{\Omega_1} =\overline{\Omega_2}$. Hence, by Lemma \ref{lem:polarity}\rf{cc},\rf{dd}, $\overline{\Omega_1} =\overline{\Omega_2}$ % is a necessary condition for the statements \rf{a0}--\rf{d0} of Theorem \ref{thm:Hs_equality_open} to hold, and a sufficient condition when $s>n/2$. But sufficiency does not extend to $s\leq n/2$: a counterexample is provided by $\Omega_1=\mathbb{R}^n$ and $\Omega_2=K^c$, where $K$ is any compact non-$(n/2)$-null set (cf.\ Lemma \ref{lem:polarity}\rf{mm}). \end{rem} For the $\mathringbig{H}{}^s(\Omega)$ spaces, $s\geq0$, the following sufficient (but not necessary) condition for equality is trivial.
\begin{lem} \label{lem:CircEquality} If $\Omega_1,\Omega_2\subset \mathbb{R}^n$ are non-empty and open, with $m(\Omega_2\ominus \Omega_1)=0$, then $\mathringbig{H}{}^s(\Omega_1)=\mathringbig{H}{}^s(\Omega_2)$ for all $s\geq 0$. \end{lem} \subsection{Comparison of the ``zero trace'' subspaces of \texorpdfstring{$H^s(\mathbb{R}^n)$}{Hs(Rn)}} \label{subsec:3spaces} In \S\ref{subsec:SobSpacesClosedOpen} we defined three closed subspaces of $H^s(\mathbb{R}^n)$ associated with a non-empty open set $\Omega\subset\mathbb{R}^n$, namely $H^s_{\overline{\Omega}}$ and $\widetilde{H}^s(\Omega)$ (both defined for all $s\in \mathbb{R}$) and $\mathringbig{H}{}^s(\Omega)$ (defined for $s\geq0$), which can all be viewed in some sense as ``zero trace'' spaces. % We already noted (cf.\ \rf{eqn:inclusions}) the inclusions \begin{align}\label{eqn:inclusionsRepeat} \widetilde{H}^s(\Omega)\subset \mathringbig{H}{}^s(\Omega) \subset H^s_{\overline{\Omega}}, \end{align} for all $s\in\mathbb{R}$ (with $\mathringbig{H}{}^s(\Omega)$ present only for $s\geq 0$). In this section we investigate conditions on $\Omega$ and $s$ under which the inclusions in \rf{eqn:inclusionsRepeat} are or are not equalities, and construct explicit counterexamples demonstrating that equality does not hold in general. When $\Omega$ is a $C^0$ open set, both inclusions in \rf{eqn:inclusionsRepeat} are equalities. The following result is proved in \cite[Theorem~3.29]{McLean} for $C^0$ sets with bounded boundary\footnote{We note, however, that the partition of unity argument in the proof of \cite[Theorem~3.29]{McLean} appears not to be quite accurate. For an alternative method of handling this part of the argument see the proof of Theorem \ref{thm:new2} below.}; the extension to general $C^0$ sets (as defined in \cite[Definition~1.2.1.1]{Gri}) follows from \eqref{eq:approx2} (cf.\ the proof of Theorem \ref{thm:new2} below). We note that a proof of the equality $\widetilde{H}^s(\Omega) = \mathringbig{H}{}^s(\Omega)$ for $s>0$ and $\Omega$ a $C^0$ open set can also be found in \cite[Theorem 1.4.2.2]{Gri}. \begin{lem}[{\cite[Theorems 3.29, 3.21]{McLean}}] \label{lem:sob_equiv} Let $\Omega\subset \mathbb{R}^n$ be $C^0$ and let $s\in\mathbb{R}$. Then $\widetilde{H}^s(\Omega)= \mathringbig{H}{}^s(\Omega) =H^s_{\overline\Omega}$ (with $\mathringbig{H}{}^s(\Omega)$ present only for $s\geq 0$). \end{lem} When $\Omega$ is not $C^0$ the situation is more complicated. We first note the following elementary results concerning the case $s\geq 0$, part \rf{c1} of which makes it clear that Lemma \ref{lem:sob_equiv} does not extend to general open $\Omega$. \begin{lem}\label{lem:CircleSpace} Let $\Omega\subset\mathbb{R}^n$ be non-empty and open. Then \begin{enumerate}[(i)] \item \label{c1} $\widetilde{H}^0(\Omega)=\mathringbig{H}{}^0(\Omega)$; while $\mathringbig{H}{}^0(\Omega) = H^0_{\overline{\Omega}}$ if and only if $m(\partial \Omega)=0$. \item \label{d1} For $s\geq 0$, if $m(\partial \Omega)=0$ then $\mathringbig{H}{}^s(\Omega)= H^s_{\overline{\Omega}}$.
\item \label{g1} For $t>s\geq0$, if $\mathringbig{H}{}^s(\Omega) = H^{s}_{\overline{\Omega}}$ then $\mathringbig{H}{}^t(\Omega) = H^{t}_{\overline{\Omega}}$.% \end{enumerate} \end{lem} \begin{proof} % \rf{c1} The equality $\widetilde{H}^0(\Omega)=\mathringbig{H}{}^0(\Omega)$ holds because the restriction operator is a unitary isomorphism from $\mathringbig{H}{}^0(\Omega)$ onto $H^0(\Omega) = L^2(\Omega)$, in particular $\|u\|_{L^2(\mathbb{R}^n)}=\|u|_\Omega\|_{L^2(\Omega)}$ for $u\in \mathringbig{H}{}^0(\Omega)$, and because $\mathscr{D}(\Omega)$ is dense in $L^2(\Omega)$ \cite[Theorem 2.19]{Adams}. The second statement in \rf{c1}, and \rf{d1}, follow directly from the definitions. If the hypothesis of part \rf{g1} is satisfied, then every $u\in H^{t}_{\overline{\Omega}}\subset H^{s}_{\overline{\Omega}}\cap H^{t}(\mathbb{R}^n)=\mathringbig{H}{}^s(\Omega)\cap H^{t}(\mathbb{R}^n)$ is equal to zero a.e.\ in $\Omega^c$, % and hence belongs to $\mathringbig{H}{}^t(\Omega)$. \end{proof} Open sets for which $\Omega\subsetneqq \mathrm{int}(\overline{\Omega})$ are a source of counterexamples to equality in \rf{eqn:inclusionsRepeat}. The following lemma relates properties of the inclusions \rf{eqn:inclusionsRepeat} to properties of the set $\mathrm{int}(\overline{\Omega})\setminus\Omega$. \begin{lem} \label{lem:equalityNullity} Let $\Omega\subset \mathbb{R}^n$ be non-empty and open, and let $s\in \mathbb{R}$.% \begin{enumerate}[(i)] \item \label{a7} For $s\geq 0$, if $m(\mathrm{int}(\overline{\Omega})\setminus \Omega)>0$ then $\mathringbig{H}{}^s(\Omega) \subsetneqq H^s_{\overline{\Omega}}$. \item \label{b7} For $s>n/2$, $\mathringbig{H}{}^s(\Omega) = H^s_{\overline{\Omega}}$ if and only if $m(\mathrm{int}(\overline{\Omega})\setminus \Omega)=0$. \item \label{aaa} If $\mathrm{int}(\overline{\Omega})\setminus\Omega$ is not $(-s)$-null then $\widetilde{H}^s(\Omega)\subsetneqq H^s_{\overline\Omega}$. \item \label{bbb} If $\mathrm{int}(\overline{\Omega})\setminus\Omega$ is not $(-s)$-null, $s>0$, and $m(\mathrm{int}(\overline{\Omega})\setminus\Omega)=0$, then $\widetilde{H}^s{(\Omega)}\subsetneqq\mathringbig{H}{}^s{(\Omega)}$. \item \label{ddd} If $\widetilde{H}^s(\mathrm{int}(\overline{\Omega})) = H^s_{\overline{\Omega}}$ (e.g.\ if $\mathrm{int}(\overline{\Omega})$ is $C^0$), then $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$ if and only if $\mathrm{int}(\overline{\Omega})\setminus\Omega$ is $(-s)$-null. \end{enumerate} \end{lem} \begin{proof} \rf{a7} If $m(\mathrm{int}(\overline{\Omega})\setminus \Omega)>0$ then there exists an open ball $B\subset \mathrm{int}(\overline{\Omega})$ such that $m(B\setminus \Omega)=\epsilon>0$. (To see this first write $\mathrm{int}(\overline{\Omega})$ as the union of balls. Then use the fact that $\mathbb{R}^n$ is a separable metric space, so second countable, so that, by Lindel\"of's theorem (see e.g.\ \cite[p.~100]{Simmons}), $\mathrm{int}(\overline{\Omega})$ can be written as the union of a countable set of balls, i.e., as $\mathrm{int}(\overline{\Omega})=\bigcup_{n=1}^\infty B_n$. Then $0<m(\mathrm{int}(\overline{\Omega})\setminus \Omega)\leq \sum_{n=1}^\infty m(B_n\setminus \Omega)$, so that $m(B_n\setminus \Omega)>0$ for some $n$.) Choose $\chi\in \mathscr{D}(B)$ such that $0\leq \chi\leq 1$ and $\int \chi \,\mathrm{d}\mathbf{x} > m(B)-\epsilon$. Then $\chi \in \widetilde{H}^s(\mathrm{int}(\overline{\Omega}))\subset H^s_{\overline{\Omega}}$, but $\chi\not\in \mathringbig{H}{}^s(\Omega)$, for if $\chi\in \mathringbig{H}{}^s(\Omega)$ then $\chi=0$ a.e.
in $\Omega^c$, so that $\int \chi \,\mathrm{d}\mathbf{x} \leq m(B\cap \Omega)\leq m(B)-\epsilon$. \rf{b7} If $u\in H^s_{\overline{\Omega}}$ then $u=0$ a.e.\ in $\overline\Omega^c$. Since $s>n/2$, the Sobolev embedding theorem says that $u\in C^0{(\R^n)}$, so $u=0$ a.e.\ in $\overline{\overline{\Omega}^c}$. But $\Omega^c\setminus\overline{\overline{\Omega}^c}=\mathrm{int}(\overline{\Omega})\setminus \Omega$, which has zero measure by assumption. Thus $u=0$ a.e.\ in $\Omega^c$, so $u\in\mathringbig{H}{}^s{(\Omega)}$. The ``only if'' part of the statement is provided by \rf{a7}. \rf{aaa} If $\mathrm{int}(\overline{\Omega})\setminus\Omega$ is not $(-s)$-null then, by Theorem \ref{thm:Hs_equality_open}, $\widetilde{H}^s(\Omega)\subsetneqq \widetilde{H}^s(\mathrm{int}(\overline{\Omega})) \subset H^s_{\overline\Omega}$. Part \rf{bbb} follows similarly, by noting that $\widetilde{H}^s{(\Omega)}\subsetneqq \widetilde{H}^s(\mathrm{int}(\overline{\Omega}))\subset \mathringbig{H}{}^s(\mathrm{int}(\overline{\Omega}))=\mathringbig{H}{}^s{(\Omega)}$, the latter equality following from Lemma \ref{lem:CircEquality}. % \rf{ddd} Lemma \ref{lem:sob_equiv} (applied to $\mathrm{int}(\overline{\Omega})$) implies that $\widetilde{H}^s{(\Omega)}\subset\widetilde{H}^s(\mathrm{int}(\overline{\Omega})) =H^s_{\overline{\mathrm{int}(\overline{\Omega})}} =H^s_{\overline\Omega}$, and the assertion then follows by Theorem~\ref{thm:Hs_equality_open} (with $\Omega_1=\Omega$ and $\Omega_2=\mathrm{int}(\overline{\Omega})$). \end{proof} In particular, Lemma~\ref{lem:equalityNullity}\rf{ddd}, combined with Lemmas \ref{lem:sob_equiv} and \ref{lem:polarity}, provides results about the case where $\Omega$ is a $C^0$ open set from which a closed, nowhere dense set has been removed. A selection of such results is given in the following proposition. \begin{prop} \label{prop:TildeSubscript} Suppose that $\Omega\subsetneqq \mathrm{int}(\overline{\Omega})$ and that $\mathrm{int}(\overline{\Omega})$ is $C^0$. Then: \begin{enumerate}[(i)] \item \label{ts0} $\widetilde{H}^s(\Omega)= H^s_{\overline\Omega}$ for all $s<-n/2$. \item \label{ts1} If $\mathrm{int}(\overline{\Omega})\setminus\Omega$ is a subset of the boundary of a Lipschitz open set $\Upsilon$, with $\mathrm{int}(\overline{\Omega})\setminus\Omega$ having non-empty relative interior in $\partial\Upsilon$, then $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$ if and only if $s\leq 1/2$. (A concrete example in one dimension is where $\Omega$ is an open interval with an interior point removed (cf.\ the example in \S\ref{subsec:SobSpacesClosedOpen}). An example in two dimensions is where $\Omega$ is an open disc with a slit cut out. Three-dimensional examples relevant for computational electromagnetism are the ``pseudo-Lipschitz domains'' of \cite[Definition~3.1]{ABD98}.) \item If $0<d:={\rm dim_H}(\mathrm{int}(\overline{\Omega})\setminus\Omega)<n$ then $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$ for all $s<(n-d)/2$ and $\widetilde{H}^s(\Omega) \subsetneqq H^s_{\overline{\Omega}}$ for all $s>(n-d)/2$. % \item If $\mathrm{int}(\overline{\Omega})\setminus\Omega$ is countable then $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$ if and only if $s\leq n/2$. \item If $\widetilde{H}^t(\Omega)=H^t_{\overline\Omega}$ for some $t\in\mathbb{R}$ then $\widetilde{H}^s(\Omega)=H^s_{\overline\Omega}$ for all $s<t$. % (Whether the assumption that $\mathrm{int}(\overline\Omega)$ is $C^0$ is necessary here appears to be an open question.
Lemma \ref{lem:CircleSpace}\rf{g1} shows that if $\widetilde{H}$ is replaced by $\mathring{H}$ the opposite result holds (without assumptions on $\mathrm{int}(\overline\Omega)$)). \end{enumerate} \end{prop} Parts \rf{aaa} and \rf{bbb} of Lemma \ref{lem:equalityNullity}, combined with Lemma \ref{lem:CircleSpace}, provide a way of constructing bounded open sets for which all the spaces considered in this section are different from each other for $s\geq-n/2$. (Note that the statement of Lemma \ref{lem:equalityNullity}\rf{aaa} is empty if $s<-n/2$ as $\mathrm{int}(\overline{\Omega})\setminus\Omega$ is necessarily $(-s)$-null in this case (cf.\ Lemma \ref{lem:polarity}\rf{dd}).) One might speculate that if $s<-n/2$ then $\widetilde{H}^s(\Omega)= H^s_{\overline\Omega}$ for every open $\Omega\subset\mathbb{R}^n$, not just when $\mathrm{int}(\overline\Omega)$ is~$C^0$ (see Proposition \ref{prop:TildeSubscript}\rf{ts0} above). But proving this in the general case is an open problem. \begin{thm}\label{thm:notequalbig} For every $n\in\mathbb{N}$, there exists a bounded open set $\Omega\subset \mathbb{R}^n$ such that, for every $s>0$, $\widetilde{H}^s(\Omega) \subsetneqq \mathringbig{H}{}^s(\Omega) \subsetneqq H^s_{\overline{\Omega}}$, and for every $s\geq-n/2$, $\widetilde{H}^s(\Omega) \subsetneqq H^s_{\overline{\Omega}}$. \end{thm} \begin{proof} Let $\Omega_1$ be any bounded open set for which $\mathrm{int}(\overline\Omega_1)\setminus\Omega_1$ has positive measure and is not $n/2$-null, for example an open ball minus a compact set of the type considered in Lemma \ref{lem:polarity}\rf{mm}. Let $\Omega_2$ be any bounded open set for which $\mathrm{int}(\overline\Omega_2)\setminus\Omega_2$ has zero measure and is not $s$-null for any $s<0$, for example an open ball minus the Cantor set $F^{(n)}_{n,\infty}$ from \cite[Theorem 4.5]{HewMoi:15}. Then, by Lemmas~\ref{lem:CircleSpace} and \ref{lem:equalityNullity}, \begin{align*} \widetilde{H}^s(\Omega_1)&\subsetneqq H^s_{\overline{\Omega_1}}, && \mbox{ for all }s\geq-n/2,\nonumber\\ \mathringbig{H}{}^s(\Omega_1)&\subsetneqq H^s_{\overline{\Omega_1}}, && \mbox{ for all }s\ge0, \\ \widetilde{H}^s(\Omega_2) &\subsetneqq \mathringbig{H}{}^s(\Omega_2) , && \mbox{ for all }s>0.\nonumber \end{align*} Provided $\Omega_1$ and $\Omega_2$ have disjoint closures (this can always be achieved by applying a suitable translation if necessary), the open set $\Omega:=\Omega_1\cup\Omega_2$ has the properties claimed in the assertion. \end{proof} For bounded open sets with $\Omega=\mathrm{int}(\overline{\Omega})$, the equality $\widetilde{H}^s(\Omega)=H^s_{\overline\Omega}$ is equivalent to $\overline\Omega$ being ``$(s,2)$-stable'', in the sense of \cite[Definition 11.5.2]{AdHe} and \cite[Definition 3.1]{BaCa:01}. (We note that the space $L^{s,2}_0(E)$ appearing in \cite[Definition 11.5.2]{AdHe} is equal to $\widetilde{H}^s(E)$ when $E$ is open (see \cite[Equation (11.5.2)]{AdHe}), and equal to $H^s_E$ when $E$ is compact (see \cite[\S10.1]{AdHe}).) Then, results in \cite[\S11]{AdHe} -- specifically, the remark after Theorem~11.5.3, Theorem~11.5.5 (noting that the compact set $K$ constructed therein satisfies $K=\overline{\mathrm{int}(K)}$) and Theorem~11.5.6 -- provide the following results, which show that, at least for $m\in\mathbb{N}$, $\Omega=\mathrm{int}(\overline{\Omega})$ is not a sufficient condition for $\widetilde{H}^m(\Omega)=H^m_{\overline\Omega}$ unless $n=1$. Part \rf{AdHe1} of Lemma \ref{lem:stability} also appears in \cite[Theorem 7.1]{BaCa:01}.
We point out that references \cite{AdHe} and \cite{BaCa:01} also collect a number of technical results from the literature, not repeated here, relating $(s,2)$-stability to certain ``polynomial'' set capacities (e.g.\ \cite[Theorem 11.5.10]{AdHe} and \cite[Theorem 7.6]{BaCa:01}) % and spectral properties of partial differential operators (e.g.\ \cite[Theorem 6.6]{BaCa:01}). \begin{lem}[{\cite[$\S$11]{AdHe}}] \label{lem:stability} \begin{enumerate}[(i)] \item \label{AdHe1} If $n=1$ and $\Omega\subset \mathbb{R}$ is open, bounded and satisfies $\Omega=\mathrm{int}(\overline{\Omega})$, then $\widetilde{H}^m(\Omega)=H^m_{\overline\Omega}$ for all $m\in\mathbb{N}$. \item \label{AdHe2} If $n\geq 2$ and $m\in\mathbb{N}$, there exists a bounded open set $\Omega\subset \mathbb{R}^n$ for which $\Omega=\mathrm{int}(\overline{\Omega})$ but $\widetilde{H}^m(\Omega)\neq H^m_{\overline\Omega}$. \item \label{AdHe3} If $n\geq 3$ then the set $\Omega$ in point \rf{AdHe2} can be chosen so that $\overline{\Omega}^c$ is connected. \end{enumerate} \end{lem} We now consider the following question: if $\Omega$ is the disjoint union of finitely many open sets $\{\Omega_\ell\}_{\ell=1}^L$, each of which satisfies $\widetilde{H}^s(\Omega_\ell) = H^s_{\overline{\Omega_\ell}}$, then is $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$? Certainly this will be the case when the closures of the constituent sets are mutually disjoint. But what about the general case when the closures intersect nontrivially? A first answer, valid for a narrow range of regularity exponents, is given by the following lemma, which is a simple consequence of a standard result on pointwise Sobolev multipliers. \begin{lem}\label{lem:TildeSubscriptHalf} Let $\Omega\subset\mathbb{R}^n$ be the disjoint union of finitely many bounded Lipschitz open sets $\Omega_1,\ldots,\Omega_L$. Then $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$ for $0\leq s<1/2$. \end{lem} \begin{proof} Let $0\leq s<1/2$ and $u\in H^s_{\overline{\Omega}}$. By \cite[Proposition~5.3]{Tri:02} and Lemma~\ref{lem:sob_equiv}, where $\chi_{\Omega_\ell}$ is the characteristic function of $\Omega_\ell$, $u\chi_{\Omega_\ell}\in H^s_{\overline{\Omega_\ell}}=\widetilde{H}^s(\Omega_\ell)\subset \widetilde{H}^s(\Omega)$. Thus $\sum_{\ell=1}^L u\chi_{\Omega_\ell}\in\widetilde{H}^s(\Omega)$, and $\sum_{\ell=1}^L u\chi_{\Omega_\ell}=u$ since $m(\partial \Omega)\leq \sum_{\ell=1}^L m(\partial \Omega_\ell) = 0$. \end{proof} Lemma~\ref{lem:TildeSubscriptHalf} can be extended to disjoint unions of some classes of non-Lipschitz open sets using \cite[Definition 4.2, Theorem~4.4]{Sickel99a}, leading to the equality $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$ for $0\leq s<t/2$ for some $0<t<1$ related to the boundary regularity (cf.\ also \cite[Theorem 6]{Sickel99b} and \cite[Theorem 3, p.\ 216]{RunstSickel}). However, % the technique of Lemma~\ref{lem:TildeSubscriptHalf}, namely using characteristic functions as pointwise multipliers, cannot be extended to $s\geq 1/2$, no matter how regular the constituent sets are; indeed, \cite[Lemma~3.2]{Sickel99a} states that $\chi_\Omega\notin H^{1/2}{(\R^n)}$ for any non-empty open set $\Omega\subset\mathbb{R}^n$. We now state and prove a general result, which allows us to prove $\widetilde{H}^s{(\Omega)}=H^s_{\overline\Omega}$, for $|s|\leq 1$ if $n\geq 2$, $|s|\leq 1/2$ if $n=1$, for a class of open sets which are in a certain sense ``regular except at a countable number of points''. 
This result depends on the following lemma, which is inspired by results in \cite[\S17]{Tartar} and whose proof we defer to the end of this section. \begin{lem} \label{lem:vj} Suppose that $n\geq 2$, that $N\in \mathbb{N}$ and $\mathbf{x}_1,...,\mathbf{x}_N\in \mathbb{R}^n$ are distinct, and that \begin{equation} \label{eq:Rbound} 0<R<\min_{i,j\in \{1,...,N\}}\frac{|\mathbf{x}_i-\mathbf{x}_j|}{6}. \end{equation} Then there exists a family $(v_j)_{j\in\mathbb{N}}\subset C^\infty(\mathbb{R}^n)$ and a constant $C>0$ such that, for all $j\in \mathbb{N}$: (i) $0\leq v_j(\mathbf{x})\leq 1$, for $\mathbf{x}\in \mathbb{R}^n$; (ii) $v_j(\mathbf{x}) = 0$, if $|\mathbf{x}-\mathbf{x}_i|< R/(2j)$ for some $i\in \{1,...,N\}$; (iii) $v_j(\mathbf{x})=1$, if $|\mathbf{x}-\mathbf{x}_i|> 5R/2$ for all $i\in \{1,...,N\}$; (iv) $\|v_j\phi\|_{H^s(\mathbb{R}^n)} \leq C\|\phi\|_{H^s(\mathbb{R}^n)}$, for all $\phi\in H^s(\mathbb{R}^n)$ with $|s|\leq 1$; (v) $\|v_j\phi-\phi\|_{H^s(\mathbb{R}^n)} \to 0$ as $j\to\infty$, for all $\phi\in H^s(\mathbb{R}^n)$ with $|s|\leq 1$. For $n=1$ the same result holds, but with $s$ restricted to $|s|\leq 1/2$. \end{lem} \begin{thm} \label{thm:new} Suppose that $|s|\leq 1$ if $n\geq 2$, $|s|\leq 1/2$ if $n=1$, that $\Omega\subset \mathbb{R}^n$ is open, and that: (i) $P\subset \partial \Omega$ is closed and countable with at most finitely many limit points in every bounded subset of $\partial\Omega$; (ii) $\Omega$ has the property that, if $u\in H_{\overline{\Omega}}^s$ is compactly supported with $\supp(u)\cap P=\emptyset$, then $u\in \widetilde{H}^s(\Omega)$. Then $\widetilde{H}^s(\Omega)=H_{\overline{\Omega}}^s$. \end{thm} \begin{proof} Suppose that $|s|\leq 1$ if $n\geq 2$, $|s|\leq 1/2$ if $n=1$. Since the set of compactly supported $v\in H^s_{\overline{\Omega}}$ is dense in $H^s_{\overline{\Omega}}$ by \eqref{eq:approx2} and $\widetilde{H}^s(\Omega)$ is closed, it is enough to show that $v\in \widetilde{H}^s(\Omega)$ for every compactly supported $v\in H^s_{\overline{\Omega}}$. So suppose that $v\in H^s_{\overline \Omega}$ is compactly supported, and let $Q$ be the (finite) set of limit points of $P$ that lie in the support of $v$. Let $(v_j)_{j\in \mathbb{N}}\subset C^\infty(\mathbb{R}^n)$ be a family constructed as in Lemma \ref{lem:vj}, such that $v_jv\to v$ as $j\to\infty$ in $H^s(\mathbb{R}^n)$, and each $v_j=0$ in a neighbourhood of $Q$. For each $j\in \mathbb{N}$, $P_j := P\cap \supp(v_jv)$ is finite. For each $j\in \mathbb{N}$, let $(v_{j,\ell})_{\ell\in \mathbb{N}}\subset C^\infty(\mathbb{R}^n)$ be a family constructed as in Lemma \ref{lem:vj}, such that $v_{j,\ell}v_jv\to v_jv$ as $\ell\to\infty$ in $H^s(\mathbb{R}^n)$, and each $v_{j,\ell}=0$ in a neighbourhood of $P_j$. Then $w_{j,\ell}:= v_{j,\ell}v_jv\in \widetilde{H}^s(\Omega)$, for all $j,\ell\in \mathbb{N}$, by hypothesis. Since $\widetilde{H}^s(\Omega)$ is closed it follows (letting first $\ell\to\infty$ and then $j\to\infty$) that $v\in \widetilde{H}^s(\Omega)$. \end{proof} In the next theorem, when we say that the open set $\Omega\subset \mathbb{R}^n$ is $C^0$ except at the points $P\subset \partial \Omega$, we mean that its boundary $\partial\Omega$ can, in a neighbourhood of each point in $\partial \Omega\setminus P$, be locally represented as the graph (suitably rotated) of a $C^0$ function from $\mathbb{R}^{n-1}$ to $\mathbb{R}$, with $\Omega$ lying only on one side of $\partial\Omega$.
(In more detail we mean that $\Omega$ satisfies the conditions of \cite[Definition 1.2.1.1]{Gri}, but for every $\mathbf{x} \in \partial \Omega \setminus P$ rather than for every $\mathbf{x} \in \partial \Omega$.) \begin{thm} \label{thm:new2} Suppose that $|s|\leq 1$ if $n\geq 2$, $|s|\leq 1/2$ if $n=1$. Suppose further that $\Omega\subset\mathbb{R}^n$ is open, and that $\Omega$ is $C^0$ except at a set of points $P$ satisfying condition (i) of Theorem \ref{thm:new}. Then $\widetilde{H}^s(\Omega)=H_{\overline{\Omega}}^s$. In particular, $\widetilde{H}^s(\Omega)=H_{\overline{\Omega}}^s$ if $\Omega$ is the union of disjoint $C^0$ open sets, whose closures intersect only at a set of points $P$ that satisfies condition (i) of Theorem \ref{thm:new}. \end{thm} \begin{proof} The first two sentences of this result will follow from Theorem \ref{thm:new} if we can show that $\Omega$ satisfies condition (ii) of Theorem \ref{thm:new}. We will show that this is true (for all $s\in \mathbb{R}$) by a partition of unity argument, adapting the argument used to prove Lemma \ref{lem:sob_equiv} in \cite[Theorem 3.29]{McLean}. Suppose that $u\in H_{\overline{\Omega}}^s$ is compactly supported with $\supp(u)\cap P=\emptyset$. For each $\mathbf{x} \in \supp(u)$, let $\epsilon(\mathbf{x})>0$ be such that $\partial \Omega$ is the rotated graph of a $C^0$ function and $\Omega$ the rotated hypograph of that function in $B_{3\epsilon(\mathbf{x})}(\mathbf{x})$ if $\mathbf{x}\in \partial \Omega$, and such that $B_{3\epsilon(\mathbf{x})}(\mathbf{x})\subset \Omega$ if $\mathbf{x} \in \Omega$. Then $\{B_{\epsilon(\mathbf{x})}(\mathbf{x}):\mathbf{x} \in \supp(u)\}$ is an open cover for $\supp(u)$. Since $\supp(u)$ is compact we can choose a finite sub-cover $\mathcal{W}=\{B_{\epsilon(\mathbf{x}_i)}(\mathbf{x}_i):i \in \{1,...,N\}\}$. Choose a partition of unity $(\chi_i)_{i=1}^N$ for $\supp(u)$ subordinate to $\mathcal{W}$, with $\supp(\chi_i)\subset B_{\epsilon(\mathbf{x}_i)}(\mathbf{x}_i)$ for $1\leq i\leq N$, this being possible by \cite[Theorem 2.17]{grubb}. Given $\eta>0$, for $i=1,...,N$ choose $\phi_i\in \mathscr{D}(\Omega)$ such that $\|\chi_iu-\phi_i\|_{H^s(\mathbb{R}^n)}\leq \eta/N$. This is possible by \eqref{eq:approx} if $\mathbf{x}_i\in \Omega$. To see that this is possible if $\mathbf{x}_i\in \partial \Omega \cap \supp(u)$ we argue as in the proof of Lemma \ref{lem:sob_equiv} given in \cite[Theorem 3.29]{McLean}, first making a small shift of $\chi_iu$ to move its support into $\Omega$, and then approximating by \eqref{eq:approx}. Then $\phi = \sum_{i=1}^N \phi_i\in \mathscr{D}(\Omega)$ and $\|u-\phi\|_{H^s(\mathbb{R}^n)}\leq\eta$. Since $\eta>0$ is arbitrary, this implies that $u\in \widetilde{H}^s(\Omega)$. The last sentence of the theorem is an immediate corollary. \end{proof} The above theorem applies, in particular, whenever $\Omega$ is $C^0$ except at a finite number of points. The following remark notes applications of this type. \begin{rem} Theorem \ref{thm:new2} implies that $\widetilde{H}^s{(\Omega)}=H^s_{\overline\Omega}$, for $|s|\leq 1$, for a number of well-known examples of non-$C^0$ open sets.
In particular we note the following examples, illustrated in Figure \ref{fig:TSExamples}, all of which are $C^0$ except at a finite number of points: \begin{enumerate} \item any finite union of polygons (in $\mathbb{R}^2$) or $C^0$ polyhedra (in $\mathbb{R}^3$) where the closures of the constituent polygons/polyhedra intersect only at a finite number of points, for example the standard prefractal approximations to the Sierpinski triangle (see Figure \ref{subfig:Sierpinski}); \item the double brick domain of \cite[p.~91]{McLean} (see Figure \ref{subfig:DoubleBrick}); \item sets with ``curved cusps'', either interior or exterior, e.g.\ $\{(x,y)\in \mathbb{R}^2 : x^2+y^2<1 \textrm{ and } x^2+(y+1/2)^2>1/4\}$ or its complement (see Figure \ref{subfig:Cusps}); \item spiral domains, e.g.\ $\{(r\cos{\theta},r\sin\theta)\in \mathbb{R}^2:2^{\theta/(2\pi)}<r<\frac{3}{2}2^{\theta/(2\pi)}, \, \theta\in\mathbb{R}\}$ (see Figure \ref{subfig:Spiral}); \item the ``rooms and passages'' domain of \cite[\S2.1]{Fr:79} (see Figure \ref{subfig:RoomsAndPassages}). \end{enumerate} \end{rem} \begin{figure}[!t] \begin{center} \subfigure[\label{subfig:Sierpinski} The first four prefractal approximations to the Sierpinski triangle]{ \hspace{-7.5mm} \includegraphics[height=25mm]{Sierpinski1}\hs{2} \includegraphics[height=25mm]{Sierpinski2}\hs{2} \includegraphics[height=25mm]{Sierpinski3}\hs{2} \includegraphics[height=25mm]{Sierpinski4} } \subfigure[\label{subfig:DoubleBrick} Double brick]{ \includegraphics[height=25mm]{DoubleBrick} } \hs{3} \subfigure[\label{subfig:Cusps} Curved cusps]{ \includegraphics[height=30mm]{Cusps} } \hs{3} \subfigure[\label{subfig:Spiral} Spiral]{ \includegraphics[height=30mm]{SpiralFig} } \subfigure[\label{subfig:RoomsAndPassages} ``Rooms and passages'']{ \includegraphics[height=30mm]{RoomsAndPassages} } \end{center} \caption{\label{fig:TSExamples} Examples of non-$C^0$ open sets to which Theorem \ref{thm:new2} applies.} \end{figure} \begin{proof}[Proof of Lemma \ref{lem:vj}.] Choose $R>0$ to satisfy \eqref{eq:Rbound}. The case $n=2$ is the hardest so we start with that. For $j\in \mathbb{N}$, define $\Phi_j\in C(\mathbb{R})$ by \begin{align*}% \Phi_j(r):= \begin{cases} 0, & r\leq R/j,\\ 1-\dfrac{\log(r/(2R))}{\log(1/(2j))}, & R/j<r\leq 2R,\\ 1, & r>2R, \end{cases} \end{align*} and note that $\Phi_j^\prime(r) = (r\log(2j))^{-1}$, for $R/j<r<2R$. We define by mollification a smoothed version $\Psi_j$ of $\Phi_j$. Choose $\chi\in \mathscr{D}(\mathbb{R})$ with $0\leq \chi(t)\leq 1$ for $t\in\mathbb{R}$, $\chi(t)=0$ if $|t|\geq 1$, and $\int_{-\infty}^\infty \chi(t) \mathrm{d} t = 1$. Define $\chi_j(t) := (2j/R)\chi(2jt/R)$, $t\in \mathbb{R}$, and $$ \Psi_j(r) := \int_{-\infty}^\infty \chi_j(r-t)\Phi_j(t)\, \mathrm{d} t = \int_{-\infty}^\infty \chi_j(t)\Phi_j(r-t)\, \mathrm{d} t, \quad r\in \mathbb{R}. $$ Then $\Psi_j\in C^\infty(\mathbb{R})$, $0\leq \Psi_j(r)\leq 1$ for $r\in \mathbb{R}$, $\Psi_j(r)=0$ if $r\leq R/(2j)$, $\Psi_j(r)=1$ if $r\geq 2R + R/(2j)$, and \begin{equation} \label{eq:Psibound} 0\leq\Psi^\prime_j(r)\leq \max_{|t-r|\leq R/(2j)}\, \Phi^\prime_j(t) \leq \frac{3}{2r\log(2j)}, \quad \mbox{for }\frac{R}{2j}< r\leq 2R+ \frac{R}{2j}. \end{equation} For $n=2$ we define the sequence $(v_j)_{j\in \mathbb{N}}$ by \begin{equation} \label{eq:vjdef} v_j(\mathbf{x}) := \prod_{i=1}^N \Psi_j(|\mathbf{x}-\mathbf{x}_i|), \quad \mathbf{x}\in \mathbb{R}^2. \end{equation} Clearly $v_j\in C^\infty(\mathbb{R}^2)$ and satisfies conditions (i)-(iii).
Noting that \begin{equation} \label{eq:vjbound} |\nabla v_j(\mathbf{x})| = \left\{\begin{array}{cc} \Psi_j^\prime(|\mathbf{x}-\mathbf{x}_i|), & \mbox{for } |\mathbf{x}-\mathbf{x}_i|\leq 5R/2, \;\; i\in \{1,...,N\}, \\ 0, & \mbox{otherwise}, \end{array}\right. \end{equation} it follows from \eqref{eq:Psibound} that $\|\nabla v_j\|_{L^2(\mathbb{R}^2)}\to 0$ as $j\to\infty$, and hence by the dominated convergence theorem that (v) holds for all $\phi\in \mathscr{D}(\mathbb{R}^2)$ and $s=1$ (and so also for $s<1$). Thus, if (iv) holds, (v) follows by density arguments. We will prove (iv) first for $s=1$, then for $s=-1$ by a duality argument, then for $s\in[-1,1]$ by interpolation. Choose $\varphi\in \mathscr{D}(\mathbb{R}^2)$ with support in $\cup_{i=1}^N B_R(\mathbf{x}_i)$ and such that $\varphi=1$ in a neighbourhood of $\{\mathbf{x}_1,...,\mathbf{x}_N\}$. It is clear from \eqref{eq:vjbound} and \eqref{eq:Psibound} that the operation of multiplication by $(1-\varphi)v_j$ is bounded on $H^1(\mathbb{R}^2)$, uniformly in $j$. It follows from the same bounds and the fact (for $n=2$) that (cf.\ \cite[Lemma 17.4]{Tartar}) \begin{align} \label{eqn:Tartar17p4} \int_{B_{R}}\frac{|u|^2}{|\mathbf{x}|^2\log^2(|\mathbf{x}|/R)}\, \mathrm{d}\mathbf{x}\leq 4\int_{B_{R}}|\nabla u|^2\, \mathrm{d}\mathbf{x}, \qquad u\in \widetilde{H}^1(B_{R}), \end{align} that the operation of multiplication by $\varphi v_j$ is bounded on $H^1(\mathbb{R}^2)$, uniformly in $j$. Thus (iv) holds for some constant $C>0$ for $s=1$. Abbreviating $H^{\pm 1}(\mathbb{R}^2)$ by $H^{\pm 1}$ and $\langle \cdot,\cdot\rangle_{H^{-1}\times H^{1}}$ by $\langle \cdot,\cdot\rangle$, since $H^{-1}$ is a unitary realisation of $(H^{1})^*$ it holds for $\phi\in H^{-1}$ that \begin{align*} \|v_j\phi\|_{H^{-1}} = \sup_{\substack{v\in H^1\\ \|v\|_{H^1}=1}}\left|\langle v_j\phi,v\rangle\right| = \sup_{\substack{v\in H^1\\ \|v\|_{H^1}=1}}\left|\langle \phi,v_jv\rangle\right| \leq C\|\phi\|_{H^{-1}}, \end{align*} i.e.\ (iv) holds also for $s=-1$ with the same constant $C$, and hence also for $s\in [-1,1]$ by interpolation (e.g.\ \cite[(1) and Theorem 4.1]{InterpolationCWHM}). If $n\ge3$ we argue as above and define $v_j$ in the same way, but with the simpler choice $\Psi_j(r):=\psi(jr/R)$, where $\psi$ is any function in $C^\infty(\mathbb{R})$ with $\psi(r)=0$ for $r<1$, $\psi(r)=1$ for $r>2$, and $0\leq \psi(r)\leq 1$ for $r\in\mathbb{R}$. To prove (iv) for $s=1$ one uses instead of \rf{eqn:Tartar17p4} the bound (cf.\ \cite[Lemma 17.1]{Tartar}) \begin{align*} \int_{\mathbb{R}^n}\frac{|u|^2}{|\mathbf{x}|^2} \, \mathrm{d}\mathbf{x}\leq \left(\frac{2}{n-2}\right)^2\int_{\mathbb{R}^n}|\nabla u|^2\,\mathrm{d}\mathbf{x}, \qquad u\in H^1(\mathbb{R}^n). \end{align*} If $n=1$ then the result follows by embedding $\mathbb{R}$ in $\mathbb{R}^2$, trace theorems, interpolation and duality. In more detail, if $\mathbf{x}_1,...,\mathbf{x}_N\in \mathbb{R}\subset \mathbb{R}^2$ are distinct, and $(v_j)_{j\in \mathbb{N}}\subset C^\infty(\mathbb{R}^2)$, satisfying (i)-(v) for $n=2$ and $|s|\leq 1$, is defined by \eqref{eq:vjdef}, then $(v_j|_\mathbb{R})_{j\in\mathbb{N}}$ satisfies (i)-(iii) for $n=1$. To see that (iv) holds, note that $(v_j|_\mathbb{R})$ is uniformly bounded in $L^\infty(\mathbb{R})$. Moreover, let $c$ denote the norm of the trace operator $\gamma:H^1(\mathbb{R}^2)\to H^{1/2}(\mathbb{R})$, defined by $\gamma v = v|_\mathbb{R}$ for $v\in \mathscr{D}(\mathbb{R}^2)$, and $c^\prime$ the norm of a right inverse $E:H^{1/2}(\mathbb{R})\to H^1(\mathbb{R}^2)$ of $\gamma$.
Then \begin{equation} \label{eq:ineq} \|v_j|_\mathbb{R}\phi\|_{H^{1/2}(\mathbb{R})} \leq c \|v_jE\phi\|_{H^1(\mathbb{R}^2)} \leq cC \|E\phi\|_{H^1(\mathbb{R}^2)}\leq c^\prime cC \|\phi\|_{H^{1/2}(\mathbb{R})}, \end{equation} for $\phi\in H^{1/2}(\mathbb{R})$. Thus $(v_j|_\mathbb{R})$ satisfies (iv) for $n=1$ for $s=0$ and $s=1/2$, and hence for $0\leq s\leq 1/2$ by interpolation, and then for $-1/2\leq s<0$ by duality arguments as above. Finally, (v) follows by density, as in the case $n=2$, if we can show that (v) holds for $s=1/2$ and all $\phi\in \mathscr{D}(\mathbb{R})$. But, arguing as in \eqref{eq:ineq}, this follows from (v) for $n=2$. \end{proof} We end this section with a result linking the inclusions in \rf{eqn:inclusionsRepeat} to taking complements. This result generalises \cite[Theorem~1.1]{Po:72}, where the same result is proved for the special case where $s\in \mathbb{N}$ and $\Omega$ is the interior of a compact set. \begin{lem} \label{lem:TSComplements} Let $\Omega\subset \mathbb{R}^n$ be open and non-empty, and let $s\in \mathbb{R}$. Then $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$ if and only if $\widetilde{H}^{-s}(\overline{\Omega}^c) = H^{-s}_{\Omega^c}$. \end{lem} \begin{proof} Applying Lemma~\ref{lem:orth_lem} twice, and using $V_2^{a,H^{s}{(\R^n)}}\subset V_1^{a,H^{s}{(\R^n)}}$ for all closed subspaces $V_1\subset V_2\subset H^{-s}{(\R^n)}$, we have $\widetilde{H}^s{(\Omega)}=(H^{-s}_{\Omega^c})^{a,H^{s}{(\R^n)}} \subset (\widetilde{H}^{-s}(\overline\Omega^c))^{a,H^{s}{(\R^n)}}=H^s_{\overline\Omega}$. The assertion follows on noting that $V_1^{a,H^{s}{(\R^n)}}= V_2^{a,H^{s}{(\R^n)}}$ if and only if $V_1= V_2$. \end{proof} \begin{rem} If $\mathrm{int}(\overline{\Omega})\setminus\Omega$ is $(-s)$-null (for example if $\Omega={\mathrm{int}}(\overline\Omega)$) then $H^{-s}_{\Omega^c} = H^{-s}_{\overline{\overline{\Omega}^c}}$, by Theorem~\ref{thm:Hs_equality_closed}, and the fact that ${\mathrm{int}}(\overline\Omega)\setminus \Omega=\Omega^c\setminus \overline{\overline\Omega^c}$. In this case, Lemma \ref{lem:TSComplements} says that $\widetilde{H}^s(\Omega) = H^s_{\overline{\Omega}}$ if and only if $\widetilde{H}^{-s}(U) = H^{-s}_{\overline{U}}$, where $U=\overline{\Omega}^c$. \end{rem} \subsection{When is \texorpdfstring{$H^s_0(\Omega)=H^s(\Omega)$}{Hs0(Omega)=Hs(Omega)}?} \label{subsec:Hs0vsHs} The space $H^s_0(\Omega)$ was defined in \eqref{eq:Hs0} as a closed subspace of $H^s(\Omega)$. In this section we investigate the question of when these two spaces coincide, or, equivalently, when $\mathscr{D}(\Omega)|_\Omega$ is dense in $H^s(\Omega)$. One classical result (see \cite[Theorem 1.4.2.4]{Gri} or \cite[Theorem 3.40]{McLean}) is that if $\Omega$ is Lipschitz and bounded, then $H^s_0(\Omega)=H^s(\Omega)$ for $0\leq s\leq 1/2$. In Corollary \ref{cor:Hs0HsEqual} we extend this slightly, by showing that equality in fact extends to $s<0$ (in fact this holds for any open set $\Omega$, see parts \rf{f3} and \rf{e3} below), as well as presenting results for non-Lipschitz $\Omega$. The proofs of the results in Corollary \ref{cor:Hs0HsEqual} are based on the following lemma, which states that the condition $H^s_0(\Omega)=H^s(\Omega)$ is equivalent to a certain subspace of $H^{-s}(\mathbb{R}^n)$ being trivial. This seemingly new characterisation follows directly from the dual space realisations derived in \S\ref{subsec:DualAnnih}. \begin{lem} \label{lem:Hs0HsEquiv} Let $\Omega\subset \mathbb{R}^n$ be non-empty and open, and let $s\in\mathbb{R}$.
Then $H^s_0(\Omega)=H^s(\Omega)$ if and only if $\widetilde{H}^{-s}(\Omega) \cap H^{-s}_{\partial\Omega} =\{0\}$. \end{lem} \begin{proof} This follows from Theorem \ref{thm:DualityTheorem} and Lemma \ref{lem:Hs0Dual}, which together imply that, by duality, $H^s_0(\Omega)=H^s(\Omega)$ if and only if $(\widetilde{H}^{-s}(\Omega) \cap H^{-s}_{\partial\Omega})^{\perp, \widetilde{H}^{-s}(\Omega)}=\widetilde{H}^{-s}(\Omega)$, which holds if and only if $\widetilde{H}^{-s}(\Omega) \cap H^{-s}_{\partial\Omega} =\{0\}$. \end{proof} \begin{cor} \label{cor:Hs0HsEqual} Let $\Omega\subset \mathbb{R}^n$ be non-empty, open and different from $\mathbb{R}^n$ itself, and let $s\in\mathbb{R}$. \begin{enumerate}[(i)] \item \label{lem:Hs0Hs_monot} If $H^s_0(\Omega)=H^s(\Omega)$ then $H_0^{t}(\Omega)=H^{t}(\Omega)$ for all $t<s$. \item \label{f3} If $s\leq 0$ then $H^s_0(\Omega)=H^s(\Omega)$. \item \label{a3} If $\partial\Omega$ is $(-s)$-null then $H^s_0(\Omega)=H^s(\Omega)$. \item \label{g3} If $s>n/2$ then $H^s_0(\Omega)\subsetneqq H^s(\Omega)$. \item \label{b3} For $0<s<n/2$, if ${\rm dim_H}{\partial\Omega}<n-2s$ then $H^s_0(\Omega)=H^s(\Omega)$. \item \label{c3} If $\widetilde{H}^{-s}(\Omega)=H^{-s}_{\overline\Omega}$ (e.g. if $\Omega$ is $C^0$) then $H^s_0(\Omega)=H^s(\Omega)$ if and only if $\partial\Omega$ is $(-s)$-null. \item \label{d3} If $\Omega$ is $C^0$ then $H^s_0(\Omega)\subsetneqq H^s(\Omega)$ for $s>1/2$. \item \label{h3} If $\Omega$ is $C^{0,\alpha}$ for some $0<\alpha<1$ then $H^s_0(\Omega)=H^s(\Omega)$ for $s<\alpha/2$. \item \label{e3} If $\Omega$ is Lipschitz then $H^s_0(\Omega)=H^s(\Omega)$ if and only if $s\leq 1/2$. \end{enumerate} \end{cor} \begin{proof} Our proofs all use the characterisation provided by Lemma \ref{lem:Hs0HsEquiv}. \rf{lem:Hs0Hs_monot} holds because, for $t<s$, $\widetilde{H}^{-t}(\Omega)\subset \widetilde{H}^{-s}(\Omega)$ and $H_{\partial\Omega}^{-t}\subset H_{\partial\Omega}^{-s}$. \rf{f3} holds because, for $s\leq 0$, $\widetilde{H}^{-s}(\Omega)\cap H^{-s}_{\partial \Omega} \subset \mathring{H}{}^{-s}(\Omega)\cap H^{-s}_{\partial \Omega}= \{0\}$. \rf{a3} is immediate from Lemma \ref{lem:Hs0HsEquiv}. To prove \rf{g3}, we first note that, for any $\mathbf{x}_0\in{\partial\Omega}$, there exists a sequence of points $\{\mathbf{y}_j\}_{j\in\mathbb{N}}\subset\Omega$ such that $\lim_{j\to\infty}\mathbf{y}_j=\mathbf{x}_0$, and the corresponding Dirac delta functions satisfy $\delta_{\mathbf{x}_0}\in H^{-s}_{\partial\Omega}$ and $\delta_{\mathbf{y}_j}\in H^{-s}_{\{\mathbf{y}_j\}}\subset\widetilde{H}^{-s}(\Omega)$, by \eqref{eq:delta} and \eqref{eq:approx}. Then, since $\widetilde{H}^{-s}(\Omega)\subset H^{-s}(\mathbb{R}^n)$ is closed and convex, to show that $\widetilde{H}^{-s}(\Omega)\cap H^{-s}_{\partial\Omega}\neq \{0\}$ it suffices to prove that $\{\delta_{\mathbf{y}_j}\}_{j\in\mathbb{N}}$ converges weakly to $\delta_{\mathbf{x}_0}$ in $H^{-s}(\mathbb{R}^n)$. Recall that the dual space of $H^{-s}(\mathbb{R}^n)$ is realised as $H^s(\mathbb{R}^n)$, which (since $s>n/2$) is a subspace of $C^0(\mathbb{R}^n)$, the space of continuous functions (see, e.g.\ \cite[Theorem 3.26]{McLean}). Hence the duality pairing \eqref{dualequiv} gives $\langle \delta_{\mathbf{x}_0}-\delta_{\mathbf{y}_j}, \phi\rangle_s=\overline{\phi(\mathbf{x}_0)}-\overline{\phi(\mathbf{y}_j)}\xrightarrow{j\to\infty}0$ for all $\phi\in H^s(\mathbb{R}^n)\subset C^0(\mathbb{R}^n)$, i.e.\ $\{\delta_{\mathbf{y}_j}\}_{j\in\mathbb{N}}$ converges to $\delta_{\mathbf{x}_0}$ weakly in $H^{-s}(\mathbb{R}^n)$.
But by \cite[Theorem 3.7]{Brezis}, $\widetilde{H}^{-s}(\Omega)$ is weakly closed, so $\delta_{\mathbf{x}_0}\in \widetilde{H}^{-s}(\Omega)$ as required. \rf{b3} follows from \rf{a3} and Lemma \ref{lem:polarity}\rf{hh}. For \rf{c3}, note that if $\widetilde{H}^{-s}(\Omega)=H^{-s}_{\overline\Omega}$ then $\widetilde{H}^{-s}(\Omega)\cap H^{-s}_{\partial \Omega} = H^{-s}_{\partial \Omega}$. \rf{d3}--\rf{e3} follow from \rf{c3}, Lemma~\ref{lem:sob_equiv}, and Lemma~\ref{lem:polarity}\rf{kk1}--\rf{kk}. \end{proof} \begin{rem} \label{rem:HsHs0} Parts \rf{lem:Hs0Hs_monot}, \rf{f3} and \rf{g3} of Corollary \ref{cor:Hs0HsEqual} imply that for any non-empty open $\Omega\subsetneqq\mathbb{R}^n$, there exists $0\leq s_0(\Omega)\leq n/2$ such that $$ H_0^{s_-}(\Omega)= H^{s_-}(\Omega)\quad\text{and}\quad H_0^{s_+}(\Omega)\subsetneqq H^{s_+}(\Omega) \qquad \text{for all }\; s_-<s_0(\Omega)<s_+. $$ We can summarise most of the remaining results in Corollary \ref{cor:Hs0HsEqual} as follows: \begin{itemize} \item $s_0(\Omega)\ge \sup\{s: {\partial\Omega} $ is $(-s)$-null$\}$, with equality if $\Omega$ is $C^0$. \item If $\Omega$ is $C^0$, then $0\le s_0(\Omega)\le 1/2$. \item If $\Omega$ is $C^{0,\alpha}$ for some $0<\alpha<1$, then $\alpha/2\le s_0(\Omega)\le 1/2$. \item If $\Omega$ is Lipschitz, then $s_0(\Omega)=1/2$. \end{itemize} Moreover, the above bounds on $s_0(\Omega)$ can all be achieved, by Corollary \ref{cor:Hs0HsEqual}(vi) for the first two cases below, and by (iii) and (iv) for the third: \begin{itemize} \item For $2\le n\in\mathbb{N}$ the bounded $C^0$ open set of \cite[Lemma 4.1(vi)]{HewMoi:15} satisfies $s_0(\Omega)=0$. \item For $2\le n\in\mathbb{N}$ and $0<\alpha<1$, the bounded $C^{0,\alpha}$ open set of \cite[Lemma 4.1(v)]{HewMoi:15} satisfies $s_0(\Omega)=\alpha/2$. \item If $\Omega=\mathbb{R}^n\setminus\{\mathbf0\}$, then $s_0(\Omega)=n/2$. \end{itemize} \end{rem} To put the results of this section in context we give a brief comparison with the results presented by Caetano in \cite{Ca:00}, where the question of when $H^s_0(\Omega)=H^s(\Omega)$ is considered within the more general context of Besov--Triebel--Lizorkin spaces. Caetano's main positive result \cite[Proposition 2.2]{Ca:00} is that if $0<s<n/2$, $\Omega$ is bounded, and $\overline{{\rm dim_B}}\partial\Omega<n-2s$, then $H^s_0(\Omega)=H^s(\Omega)$ (here $\overline{{\rm dim_B}}$ denotes the upper box dimension, cf.\ \cite[\S3]{Fal}). Our Corollary \ref{cor:Hs0HsEqual}\rf{b3} sharpens this result, replacing $\overline{{\rm dim_B}}$ with ${\rm dim_H}$ (note that ${\rm dim_H}(E)\leq \overline{{\rm dim_B}}(E)$ for all bounded $E\subset\mathbb{R}^n$, cf.\ \cite[Proposition 3.4]{Fal}) and removing the boundedness assumption. Caetano's main negative result \cite[Proposition 3.7]{Ca:00} says that if $0<s<n/2$, $\Omega$ is ``interior regular'', and $\partial\Omega$ is a $d$-set (see \eqref{eq:dSet}) for some $d>n-2s$, then $H^s_0(\Omega)\subsetneqq H^s(\Omega)$. Here ``interior regular'' is a smoothness assumption that, in particular, excludes outward cusps in $\partial\Omega$. Precisely, it means \cite[Definition 3.2]{Ca:00} that there exists $C>0$ such that for all $\mathbf{x}\in\partial\Omega$ and all cubes $Q$ centred at $\mathbf{x}$ with side length $\leq 1$, $m(\Omega\cap Q)\geq C m(Q)$.
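To illustrate these two hypotheses we sketch a pair of elementary examples (our own, not taken from \cite{Ca:00}). Regarding the sharpening of $\overline{{\rm dim_B}}$ to ${\rm dim_H}$: take $n=1$ and $\Omega=(-1,2)\setminus E$, where $E=\{0\}\cup\{1/j:j\in\mathbb{N}\}$, so that $\partial\Omega=\{-1,2\}\cup E$. Then ${\rm dim_H}{\partial\Omega}=0$, while $\overline{{\rm dim_B}}\,{\partial\Omega}=1/2$ (cf.\ \cite[\S3]{Fal}), so Corollary \ref{cor:Hs0HsEqual}\rf{b3} gives $H^s_0(\Omega)=H^s(\Omega)$ for all $0<s<1/2$, whereas \cite[Proposition 2.2]{Ca:00} gives this only for $0<s<1/4$. Regarding interior regularity: the outward-cusp domain $\Omega=\{(x,y)\in\mathbb{R}^2:0<x<1,\, |y|<x^2\}$ fails the above condition at the origin, since for the square $Q_h$ of side length $2h\leq 1$ centred at $\mathbf{0}$ we have $$ m(\Omega\cap Q_h)=\int_0^h 2x^2\,\mathrm{d} x=\frac{2h^3}{3}, \qquad\mbox{so that}\qquad \frac{m(\Omega\cap Q_h)}{m(Q_h)}=\frac{h}{6}\to 0 \quad\mbox{as } h\to 0. $$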
This result of Caetano's is similar to our Corollary \ref{cor:Hs0HsEqual}\rf{c3}, which, when combined with our Lemma \ref{lem:polarity}\rf{gg}, implies that if $0<s<n/2$, $\widetilde{H}^{-s}(\Omega)=H^{-s}_{\overline\Omega}$ (e.g.\ if $\Omega$ is $C^0$), and ${\rm dim_H}{\partial\Omega}>n-2s$, then $H^s_0(\Omega)\subsetneqq H^s(\Omega)$. In some respects our result is more general than \cite[Proposition 3.7]{Ca:00} because we allow cusp domains and we do not require a uniform Hausdorff dimension. However, it is difficult to make a definitive comparison because we do not know of a characterisation of when $\widetilde{H}^{-s}(\Omega)=H^{-s}_{\overline\Omega}$ for interior regular $\Omega$. Certainly, not every interior regular set whose boundary is a $d$-set belongs to the class of sets for which we can prove $\widetilde{H}^{-s}(\Omega)=H^{-s}_{\overline\Omega}$; a concrete example is the Koch snowflake \cite[Figure~0.2]{Fal}. \subsection{Some properties of the restriction operator \texorpdfstring{$|_\Omega: H^s(\mathbb{R}^n)\to H^s(\Omega)$}{}} \label{subsec:restriction} In \S\ref{subsec:3spaces} we have studied the relationship between the spaces $\widetilde{H}^s(\Omega)$, $\mathringbig{H}{}^s(\Omega)$, and $H^s_{\overline\Omega}\subset H^s(\mathbb{R}^n)$, whose elements are distributions on $\mathbb{R}^n$, and in \S\ref{subsec:Hs0vsHs} the relationship between $H^s(\Omega)$ and $H^s_0(\Omega)$, whose elements are distributions on $\Omega$. To complete the picture we explore in this section the connections between these two types of spaces, which amounts to studying mapping properties of the restriction operator $|_\Omega:H^s(\mathbb{R}^n)\to H^s(\Omega)$. These properties, contained in the following lemma, are rather straightforward consequences of the results obtained earlier in the paper and classical results such as \cite[Theorem 3.33]{McLean}, but for the sake of brevity we relegate the proofs to \cite{Hs0paper}. \begin{lem} \label{lem:RestrictionCollection} Let $\Omega\subset\mathbb{R}^n$ be non-empty and open, and $s\in\mathbb{R}$.
\begin{enumerate}[(i)] \item $|_\Omega:H^s(\mathbb{R}^n)\to H^s(\Omega)$ is continuous with norm one; \item $|_\Omega:(H^s_{\Omega^c})^\perp\to H^s(\Omega)$ is a unitary isomorphism; \item If $\Omega$ is a finite union of disjoint Lipschitz open sets, $\partial \Omega$ is bounded, and $s> -1/2$, $s\not\in \{1/2,3/2,\ldots\}$, then $|_\Omega:\widetilde{H}^s(\Omega) \to H^s_0(\Omega)$ is an isomorphism; \item \label{b4} $|_\Omega:H^s_{\overline \Omega}\to H^s(\Omega)$ is injective if and only if $\partial\Omega$ is $s$-null; in particular, \begin{itemize} \item $|_\Omega:H^s_{\overline \Omega}\to H^s(\Omega)$ is always injective for $s>n/2$ and never injective for $s<-n/2$; \item if $\Omega$ is Lipschitz then $|_\Omega:\widetilde{H}^s(\Omega)=H^s_{\overline \Omega}\to H^s(\Omega)$ is injective if and only if $s\ge -1/2$; \item for every $-1/2\leq s_*\leq 0$ there exists a $C^0$ open set $\Omega$ for which $|_\Omega:\widetilde{H}^s(\Omega)=H^s_{\overline \Omega}\to H^s(\Omega)$ is injective for all $s>s_*$ and not injective for all $s<s_*$; \end{itemize} \item \label{e4} For $s\geq 0$, $|_\Omega:\mathringbig{H}{}^s(\Omega) \to H^s(\Omega)$ is injective; if $s\in\mathbb{N}_0$ then it is a unitary isomorphism onto its image in $H^s(\Omega)$; \item \label{f4} For $s\geq 0$, $|_\Omega:\widetilde{H}^s(\Omega)\to H^s_0(\Omega)$ is injective and has dense image; if $s\in\mathbb{N}_0$ then it is a unitary isomorphism; \item \label{a5} $|_\Omega:\widetilde{H}^s(\Omega)\to H^s(\Omega)$ is bijective if and only if $|_\Omega:\widetilde{H}^{-s}(\Omega)\to H^{-s}(\Omega)$ is bijective; \item \label{b5} $|_\Omega:\widetilde{H}^{-s}(\Omega)\to H^{-s}(\Omega)$ is injective if and only if $|_\Omega:\widetilde{H}^{s}(\Omega)\to H^{s}(\Omega)$ has dense image; i.e.\ if and only if $H^s_0(\Omega)=H^s(\Omega)$; \item \label{bbb5} The following are equivalent: \begin{itemize} \item $|_\Omega:\widetilde{H}^s(\Omega)\to H^s_0(\Omega)$ is a unitary isomorphism; \item $\big\|\phi|_\Omega\big\|_{H^s(\Omega)} = \|\phi\|_{H^s(\mathbb{R}^n)}$ for all $\phi\in \mathscr{D}(\Omega)$; \item $\mathscr{D}(\Omega) \subset (H^s_{\Omega^c})^\perp$; \end{itemize} \item If $\Omega$ is bounded, or $\Omega^c$ is bounded with non-empty interior, then the three equivalent statements in \rf{bbb5} hold if and only if $s\in\mathbb{N}_0$; \item If the complement of $\Omega$ is $s$-null, then $|_\Omega:\widetilde{H}^s(\Omega)\to H^s_0(\Omega)$ is a unitary isomorphism. \end{enumerate} \end{lem} \begin{rem}\label{rem:LionsMagenes} A space often used in applications is the Lions--Magenes space $H^s_{00}{(\Omega)}$, defined as the interpolation space between $H^m_0{(\Omega)}$ and $H^{m+1}_0{(\Omega)}$, where $m\in\mathbb{N}_0$ and $m\le s<m+1$, see e.g.\ \cite[Chapter~1, Theorem~11.7]{LiMaI} (the choice of interpolation method, e.g.\ the $K$-, the $J$- or the complex method, does not affect the result, as long as it delivers a Hilbert space, see \cite[\S3.3]{InterpolationCWHM}). Since $|_\Omega:\widetilde{H}^m(\Omega)\to H^m_0(\Omega)$ is an isomorphism for all $m\in\mathbb{N}_0$ by Lemma \ref{lem:RestrictionCollection}\rf{f4} above, $H^s_{00}{(\Omega)}$ is the image under the restriction operator of the space obtained from the interpolation of $\widetilde{H}^m{(\Omega)}$ and $\widetilde{H}^{m+1}{(\Omega)}$. Thus by \cite[Corollary~4.9]{InterpolationCWHM}, $H^s_{00}{(\Omega)}$ is a subspace (not necessarily closed) of $H^s_0{(\Omega)}$, for all $s\ge0$ and all open $\Omega$.
Furthermore, if $\Omega$ is Lipschitz and ${\partial\Omega}$ is bounded, \cite[Corollary~4.10]{InterpolationCWHM} ensures that $\{\widetilde{H}^s{(\Omega)}: s\in\mathbb{R}\}$ is an interpolation scale, hence in this case we can characterise the Lions--Magenes space as $H^s_{00}{(\Omega)}=\widetilde{H}^s{(\Omega)}|_\Omega$. In particular, by \cite[Theorem 3.33]{McLean}, this implies that $H^s_{00}{(\Omega)}=H^s_0{(\Omega)}$ if $s\notin\{1/2,3/2,\ldots\}$. This observation extends \cite[Chapter~1, Theorem~11.7]{LiMaI}, which was stated for $C^\infty$ bounded~$\Omega$. That $H^{m+1/2}_{00}{(\Omega)}\subsetneqq H^{m+1/2}_0{(\Omega)}$ for $m\in\mathbb{N}_0$ was proved for all $C^\infty$ bounded $\Omega$ in \cite[Chapter~1, Theorem~11.7]{LiMaI}. For general Lipschitz bounded $\Omega$, $H^{1/2}_{00}{(\Omega)}\subsetneqq H^{1/2}_0{(\Omega)}$ because the constant function $1$ belongs to the difference between the two spaces, as shown in \cite[p.~5]{NOS13}. \end{rem} \subsection{Sobolev spaces on sequences of subsets of \texorpdfstring{$\mathbb{R}^n$}{Rn}} \label{subsec:Seqs+Eqs} We showed in \S\ref{subsec:3spaces} that the Sobolev spaces $\widetilde{H}^s(\Omega)$, $\mathringbig{H}{}^s(\Omega)$ (for $s\geq0$) and $H^s_{\overline\Omega}$ are in general distinct. These spaces arise naturally in the study of Fredholm integral equations and elliptic PDEs on rough (non-Lipschitz) open sets (a concrete example is the study of BIEs on screens, see \S\ref{sec:BIE} and \cite{CoercScreen}). When formulating such problems using a variational formulation, one must take care to choose the correct Sobolev space setting to ensure the physically correct solution. Any arbitrarily ``rough'' open set $\Omega$ can be represented as a nested union of countably many ``smoother'' (e.g.\ Lipschitz) open sets $\{\Omega_j\}_{j=1}^\infty$ \cite[p.317]{Kellogg}. One can also consider closed sets $F$ that are nested intersections of a collection of closed sets $\{F_j\}_{j=1}^\infty$. Significantly, many well-known fractal sets and sets with fractal boundary are constructed in this manner as a limit of prefractals. We will apply the following propositions that consider such constructions to BIEs on sequences of prefractal sets in \S\ref{sec:BIE} below. Precisely, we will use these results together with those from \S\ref{subsec:ApproxVar} to deduce the correct fractal limit of the sequence of solutions to the prefractal problems, and the correct variational formulation and Sobolev space setting for the limiting solution. \begin{prop} \label{prop:nestedopen} Suppose that $\Omega=\bigcup_{j=1}^\infty \Omega_j$, where $\{\Omega_j\}_{j=1}^\infty$ is a nested sequence of non-empty open subsets of $\mathbb{R}^n$ satisfying $\Omega_j\subset\Omega_{j+1}$ for $j=1,2,\ldots$. Then $\Omega$ is open and \begin{align} \label{eq:HtildeUnion} \widetilde{H}^s(\Omega) =\overline{\bigcup_{j=1}^\infty \widetilde{H}^s(\Omega_j)}. \end{align} \end{prop} \begin{proof} We will show below that \begin{align} \label{eq:DUnion} \mathscr{D}(\Omega) = \bigcup_{j=1}^\infty \mathscr{D}(\Omega_j). \end{align} Then \rf{eq:HtildeUnion} follows easily from \rf{eq:DUnion} because \begin{align*}% \widetilde{H}^s(\Omega) = \overline{\mathscr{D}(\Omega)} = \overline{\bigcup_{j=1}^\infty \mathscr{D}(\Omega_j)} = \overline{\bigcup_{j=1}^\infty \overline{\mathscr{D}(\Omega_j)}} =\overline{\bigcup_{j=1}^\infty \widetilde{H}^s(\Omega_j)}. 
\end{align*} To prove \rf{eq:DUnion}, we first note that the inclusion $\bigcup_{j=1}^\infty \mathscr{D}(\Omega_j)\subset \mathscr{D}(\Omega)$ is obvious. To show the reverse inclusion, let $\phi\in\mathscr{D}(\Omega)$. We have to prove that $\phi\in\mathscr{D}(\Omega_j)$ for some $j\in \mathbb{N}$. Denote by $K$ the support of $\phi$; then $K$ is a compact subset of $\Omega$, so $\{\Omega_j\}_{j=1}^\infty$ is an open cover of $K$. As $K$ is compact there exists a finite subcover $\{\Omega_j\}_{j=j_1,\ldots,j_\ell}$ with $j_1<\cdots<j_\ell$; since the sequence is nested, $K\subset\Omega_{j_\ell}$ and hence $\phi\in \mathscr{D}(\Omega_{j_\ell})$. \end{proof} It is easy to see that the analogous result, with $\widetilde{H}^s(\Omega)$ replaced by $\mathringbig{H}{}^s(\Omega)$ (with $s\geq 0$), or with $\widetilde{H}^s(\Omega)$ replaced by $H^s_{\overline{\Omega}}$, does not hold in general. Indeed, as a counterexample we can take any $\Omega$ which is a union of nested $C^0$ open sets, but for which $\widetilde{H}^s(\Omega)\neq \mathringbig{H}{}^s(\Omega)$. Then the above result and \eqref{eqn:inclusions} give \begin{align*} \overline{\bigcup_{j=1}^\infty \mathringbig{H}{}^s(\Omega_j)} =\overline{\bigcup_{j=1}^\infty \widetilde{H}^s(\Omega_j)} =\widetilde{H}^s(\Omega) \subsetneqq \mathringbig{H}{}^s(\Omega) \subset H^s_{\overline{\Omega}}. \end{align*} A concrete example is $\Omega=(-1,0)\cup(0,1)\subset\mathbb{R}$ and $\Omega_j=(-1,-1/j)\cup(1/j,1)$, with $s>1/2$, for which $\widetilde{H}^s(\Omega)\neq \mathringbig{H}{}^s(\Omega)=H^s_{\overline\Omega}$ by Lemma \ref{lem:CircleSpace}\rf{d1}, Lemma \ref{lem:equalityNullity}\rf{aaa} and Lemma \ref{lem:polarity}\rf{jj}. The following is a related and obvious result. \begin{prop} \label{prop:nestedclosed} Suppose that $F=\bigcap_{j\in\mathscr{J}} F_j$, where $\mathscr{J}$ is an index set and $\{F_j\}_{j\in\mathscr{J}}$ is a collection of closed subsets of $\mathbb{R}^n$. Then $F$ is closed and \begin{align*} H^s_F=\bigcap_{j\in\mathscr{J}} H^s_{F_j}. \end{align*} \end{prop} We will apply both the above results in \S\ref{sec:BIE} on BIEs. The following remark makes clear that Proposition \ref{prop:nestedopen} applies also to the FEM approximation of elliptic PDEs on domains with fractal boundaries. \begin{rem}\label{rem:FEM} Combining the abstract theory developed in \S\ref{subsec:ApproxVar} with Proposition~\ref{prop:nestedopen} allows us to prove the convergence of Galerkin methods on open sets with fractal boundaries. In particular, we can easily identify which limit a sequence of Galerkin approximations converges to. Precisely, let $\Omega=\bigcup_{j=1}^\infty \Omega_j$, where $(\Omega_j)_{j=1}^\infty$ is a sequence of non-empty open subsets of $\mathbb{R}^n$ satisfying $\Omega_j\subset\Omega_{j+1}$ for $j\in\mathbb{N}$. Fix $s\in\mathbb{R}$. For each $j\in\mathbb{N}$, define a sequence of nested closed spaces $V_{j,k}\subset V_{j,k+1}\subset \widetilde{H}^s(\Omega_j)$, $k\in\mathbb{N}$, such that $\widetilde{H}^s(\Omega_j)=\overline{\bigcup_{k=1}^\infty V_{j,k}}$, and such that the sequences are a refinement of each other, i.e.\ $V_{j,k}\subset V_{j+1,k}$. Suppose that $a(\cdot,\cdot)$ is a continuous and coercive sesquilinear form on some space $H$ satisfying $\widetilde{H}^s{(\Omega)}\subset H \subset H^s{(\R^n)}$.
Then, for all $f\in H^{-s}{(\R^n)}$ the discrete and continuous variational problems: find $u_{V_{j,k}}\in V_{j,k}$ and $u_{\widetilde{H}^s{(\Omega)}}\in \widetilde{H}^s{(\Omega)}$ such that \begin{align} \label{eq:vps} a(u_{V_{j,k}},v)=\langle f,v\rangle_s\quad\forall v\in V_{j,k},\qquad a(u_{\widetilde{H}^s{(\Omega)}},v')=\langle f,v'\rangle_s\quad\forall v'\in \widetilde{H}^s{(\Omega)}, \end{align} have exactly one solution, and moreover the sequence $(u_{V_{j,j}})_{j=1}^\infty$ converges to $u_{\widetilde{H}^s{(\Omega)}}$ in the $H^s{(\R^n)}$ norm, because $\bigcup_{j=1}^\infty V_{j,j}$ is dense in $\widetilde{H}^s{(\Omega)}$. (Here we use Proposition \ref{prop:nestedopen} and \eqref{eq:V-conver}.) As a concrete example, take $\Omega\subset\mathbb{R}^2$ to be the Koch snowflake \cite[Figure~0.2]{Fal}, $\Omega_j$ the prefractal set of level $j$ (which is a Lipschitz polygon with $3\cdot 4^{j-1}$ sides), $s=1$ and $a(u,v)=\int_{B_R}\nabla u\cdot \nabla \overline v\,\mathrm{d} \mathbf{x}$ the sesquilinear form for the Laplace equation, which is continuous and coercive on $\widetilde{H}^1(B_R)$, where $B_R$ is any open ball containing $\overline\Omega$. The $V_{j,k}$ spaces can be taken as nested sequences of standard finite element spaces defined on the polygonal prefractals. Then the solutions $u_{V_{j,j}}\in V_{j,j}$ of the discrete variational problems, which are easily computable with a finite element code, converge in the $H^1(\mathbb{R}^2)$ norm to $u_{\widetilde{H}^1{(\Omega)}}$, the solution to the variational problem on the right hand side in \eqref{eq:vps}. \end{rem} \section{Boundary integral equations on fractal screens} \label{sec:BIE} This section contains the paper's major application, which has motivated much of the earlier theoretical analysis. The problem we consider is itself motivated by the widespread use in telecommunications of electromagnetic antennas that are designed as good approximations to fractal sets. The idea of this form of antenna design, realised in many applications, is that the self-similar, multi-scale fractal structure leads naturally to good and uniform performance over a wide range of wavelengths, so that the antenna has effective wide-band performance \cite[\S18.4]{Fal}. Many of the designs proposed take the form of thin planar devices that are approximations to bounded fractal subsets of the plane, for example the Sierpinski triangle \cite{PBaRomPouCar:98} and sets built using Cantor-set-type constructions \cite{SriRanKri:11}. These and many other fractal sets $F$ are constructed by an iterative procedure: a sequence of ``regular'' closed sets $F_1\supset F_2 \supset \ldots$ (which we refer to as ``prefractals'') is constructed recursively, with the fractal set $F$ defined as the limit $F=\cap_{j=1}^\infty F_j$. Of course, practical engineered antennae are not true fractals but rather prefractal approximations $F_j$ from the recursive sequence. So a mathematical question of potential practical interest is: how does the radiated field from a prefractal antenna $F_j$ behave in the limit as $j\to\infty$ and $F_j\to F$? We will not address this electromagnetic problem in this paper; it could be studied, at a particular radiating frequency, via boundary value problems for the time harmonic Maxwell system in the exterior of the antenna, using for example the BIE formulation of \cite{BuCh:03}. Rather, we shall consider analogous time harmonic acoustic problems, modelled by boundary value problems for the Helmholtz equation.
These problems can be considered as models of many of the issues and potential behaviours, and we will discuss, applying the results of \S\ref{subsec:ApproxVar} and other sections above, the limiting behaviour of sequences of solutions to BIEs, considering as illustrative examples two of several possible set-ups. For the Dirichlet screen problem we will consider the limit $\Gamma_j\to F$ where the closed set $F=\cap_{j=1}^\infty \Gamma_j$ may be fractal and each $\Gamma_j$ is a regular Lipschitz screen. For the Neumann screen problem we will consider the limit $\Gamma_j\to \Gamma$ where the open set $\Gamma = \cup_{j=1}^\infty \Gamma_j$, and $\overline{\Gamma}\setminus \Gamma$ may be fractal. In the Dirichlet case we will see that the limiting solution may be non-zero even when $m(F)=0$ ($m$ here denoting 2D Lebesgue measure), provided the fractal dimension of $F$ is $>1$. In the Neumann case we will see that in cases where $\Gamma^* := {\mathrm{int}}(\overline{\Gamma})$ is a regular Lipschitz screen the limiting solution can differ from the solution for $\Gamma^*$ if the fractal dimension of $\partial \Gamma$ is $>1$. The set-up is as follows. For $\mathbf{x}=(x_1,x_2,x_3)\in \mathbb{R}^3$ let $\tilde \mathbf{x}=(x_1,x_2)$ and let $\Gamma_\infty = \{(\tilde \mathbf{x},0):\tilde \mathbf{x}\in \mathbb{R}^2\}\subset \mathbb{R}^3$, which we identify with $\mathbb{R}^2$ in the obvious way. Let $\Gamma$ be a bounded open Lipschitz subset of $\Gamma_\infty$, choose $k\in \mathbb{C}$ (the {\em wavenumber}), with $k\neq 0$ and\footnote{Our assumption here that $k$ has a positive imaginary part corresponds physically to an assumption of some energy absorption in the medium of propagation. While making no essential difference to the issues we consider, a positive imaginary part for $k$ simplifies the mathematical formulation of our screen problems slightly.} $0< \arg(k)\leq \pi/2$, and consider the following Dirichlet and Neumann screen problems for the Helmholtz equation (our notation $W^1_2(\mathbb{R}^3)$ here is as defined in \S\ref{sec:intro}): \begin{equation*} \begin{split} \mbox{Find }u\in C^2(\mathbb{R}^3\setminus\overline{\Gamma})\cap W^1_2(\mathbb{R}^3\setminus\overline{\Gamma}) \mbox{ such that }\Delta u + k^2u=0 \mbox{ in }\mathbb{R}^3\setminus\overline{\Gamma} \mbox{ and}\\ u = f\in H^{1/2}(\Gamma) \mbox{ on }\Gamma \mbox{ (Dirichlet) or}\\ \frac{\partial u}{\partial \mathbf{n}} = g\in H^{-1/2}(\Gamma) \mbox{ on }\Gamma \mbox{ (Neumann)}. \end{split} \end{equation*} Where $U_+ := \{\mathbf{x}\in \mathbb{R}^3:x_3>0\}$ and $U_-:= \mathbb{R}^3\setminus \overline{U_+}$ are the upper and lower half-spaces, by $u=f$ on $\Gamma$ we mean precisely that $\gamma_\pm u|_\Gamma=f$, where $\gamma_\pm$ are the standard trace operators $\gamma_\pm:H^1(U_\pm)=W^1_2(U_\pm)\to H^{1/2}(\Gamma_\infty)$. Similarly, by $\partial u/\partial \mathbf{n}=g$ on $\Gamma$ we mean precisely that $\partial_\mathbf{n}^\pm u|_\Gamma=g$, where $\partial_\mathbf{n}^\pm$ are the standard normal derivative operators $\partial_\mathbf{n}^\pm: W^1_2(U_\pm;\Delta) \to H^{-1/2}(\Gamma_\infty)$; here $W^1_2(U_\pm;\Delta)= \{u\in W^1_2(U_\pm):\Delta u\in L^2(U_\pm)\}$, and for definiteness we take the normal in the $x_3$-direction, so that $\partial u/\partial \mathbf{n} = \partial u/\partial x_3$. These screen problems are uniquely solvable: one standard proof of this is via BIE methods \cite{sauter-schwab11}.
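(As a quick check of the footnote's claim: the condition $0<\arg(k)\leq \pi/2$ gives $\Im(k)=|k|\sin(\arg(k))>0$, so that the fundamental solution $\Phi$ defined below satisfies $|\Phi(\mathbf{x},\mathbf{y})| = \mathrm{e}^{-\Im(k)|\mathbf{x}-\mathbf{y}|}/(4\pi |\mathbf{x}-\mathbf{y}|)$; it is this exponential decay at infinity that allows the solutions and the layer potentials below to lie in $W^1_2$ globally, rather than merely locally as in the case of real $k$.)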
The following theorem, reformulating these screen problems as BIEs, is standard (e.g.~\cite{sauter-schwab11}), dating back to \cite{stephan87} in the case when $\Gamma$ is $C^\infty$ (the result in \cite{stephan87} is for real $k>0$, but the argument is almost identical and slightly simpler for the case $\Im(k)>0$). The notation in this theorem is that $[u]:= \gamma_+u-\gamma_-u \in H^{1/2}_{\overline{\Gamma}}\subset H^{1/2}(\Gamma_\infty)$ and $[\partial_\mathbf{n} u]:= \partial_\mathbf{n}^+u-\partial_\mathbf{n}^-u \in H^{-1/2}_{\overline{\Gamma}}\subset H^{-1/2}(\Gamma_\infty)$ (and recall that $H^{s}_{\overline{\Gamma}}=\widetilde{H}^s(\Gamma)$, $s\in \mathbb{R}$, since $\Gamma$ is Lipschitz; see \cite[Theorem 3.29]{McLean} or Lemma \ref{lem:sob_equiv} above). Further, for every compactly supported $\phi\in H^{-1/2}(\Gamma_\infty)$, $\mathcal{S}\phi \in H^1(\mathbb{R}^3)=W^1_2(\mathbb{R}^3)$ denotes the standard acoustic single-layer potential (e.g.~\cite{McLean,ChGrLaSp:11}), defined explicitly in the case that $\phi$ is continuous by \begin{equation*} \mathcal{S}\phi(\mathbf{x}) = \int_{\Gamma_\infty} \Phi(\mathbf{x},\mathbf{y}) \phi(\mathbf{y}) \,\mathrm{d} s(\mathbf{y}), \quad \mathbf{x}\in \mathbb{R}^3, \end{equation*} where $\Phi(\mathbf{x},\mathbf{y}) := \exp({\mathrm{i}} k |\mathbf{x}-\mathbf{y}|)/(4\pi |\mathbf{x}-\mathbf{y}|)$ is the fundamental solution for the Helmholtz equation. Similarly \cite{McLean,ChGrLaSp:11}, for compactly supported $\psi\in H^{1/2}(\Gamma_\infty)$, $\mathcal{D}\psi \in W^1_2(\mathbb{R}^3\setminus \supp{\psi})$ is the standard acoustic double-layer potential, defined by \begin{equation*} \mathcal{D}\psi(\mathbf{x}) = \int_{\Gamma_\infty} \frac{\partial \Phi(\mathbf{x},\mathbf{y})}{\partial \mathbf{n}(\mathbf{y})} \psi(\mathbf{y}) \,\mathrm{d} s(\mathbf{y}), \quad \mathbf{x}\in \mathbb{R}^3\setminus \supp{\psi}. \end{equation*} \begin{thm} [E.g., \cite{stephan87,sauter-schwab11}.] \label{thm:bie} If $u$ satisfies the Dirichlet screen problem then $$ u(\mathbf{x}) = -\mathcal{S}[\partial_\mathbf{n} u ](\mathbf{x}), \quad \mathbf{x}\in \mathbb{R}^3\setminus \overline{\Gamma}, $$ and $[\partial_\mathbf{n} u]\in \widetilde{H}^{-1/2}(\Gamma)$ is the unique solution of \begin{equation} \label{eq:single} S_\Gamma [\partial_\mathbf{n} u] = f, \end{equation} where the isomorphism $S_\Gamma:\widetilde{H}^{-1/2}(\Gamma)\to H^{1/2}(\Gamma)$ is the standard acoustic single-layer boundary integral operator, defined by $$ S_\Gamma \phi:= \gamma_\pm \mathcal{S} \phi\big|_\Gamma, \quad \phi \in \widetilde{H}^{-1/2}(\Gamma). $$ Similarly, if $u$ satisfies the Neumann screen problem then $$ u(\mathbf{x}) = \mathcal{D}[u](\mathbf{x}), \quad \mathbf{x}\in \mathbb{R}^3\setminus \overline{\Gamma}, $$ and $[u]\in \widetilde{H}^{1/2}(\Gamma)$ is the unique solution of \begin{equation} \label{eq:hyp} T_\Gamma [u] = -g, \end{equation} where the isomorphism $T_\Gamma:\widetilde{H}^{1/2}(\Gamma)\to H^{-1/2}(\Gamma)$ is the standard acoustic hypersingular integral operator, defined by $$ T_\Gamma \phi:= \partial_\mathbf{n}^\pm \mathcal{D} \phi\big|_\Gamma, \quad \phi \in \widetilde{H}^{1/2}(\Gamma). $$ \end{thm} The standard analysis of the above BIEs, in particular the proof that $S_\Gamma$ and $T_\Gamma$ are isomorphisms, progresses via a variational formulation.
Recalling from Theorem \ref{thm:DualityTheorem} that $H^{-s}(\Gamma)$ is (a natural unitary realisation of) the dual space of $\widetilde{H}^s(\Gamma)$, we define sesquilinear forms $a_{\rm D}$ on $\widetilde{H}^{-1/2}(\Gamma)$ and $a_{\rm N}$ on $\widetilde{H}^{1/2}(\Gamma)$ by \begin{align*} a_{\rm D}(\phi,\psi) &= \langle S_\Gamma \phi,\psi \rangle, \quad \phi,\psi\in \widetilde{H}^{-1/2}(\Gamma),\\ a_{\rm N}(\phi,\psi) &= \langle T_\Gamma \phi,\psi \rangle, \quad \phi,\psi\in \widetilde{H}^{1/2}(\Gamma), \end{align*} where in each equation $\langle \cdot,\cdot\rangle$ is the appropriate duality pairing. Equation \eqref{eq:single} is equivalent to the variational formulation: find $[\partial_\mathbf{n} u]\in \widetilde{H}^{-1/2}(\Gamma)$ such that \begin{equation} \label{eq:wfD} a_{\rm D}([\partial_\mathbf{n} u],\phi) = \langle f,\phi\rangle, \quad \phi \in \widetilde{H}^{-1/2}(\Gamma). \end{equation} Similarly \eqref{eq:hyp} is equivalent to: find $[u]\in \widetilde{H}^{1/2}(\Gamma)$ such that \begin{equation} \label{eq:wfN} a_{\rm N}([u],\psi) = -\langle g,\psi\rangle, \quad \psi \in \widetilde{H}^{1/2}(\Gamma). \end{equation} These sesquilinear forms (see \cite{stephan87,Ha-Du:92,Co:04}) are continuous and coercive in the sense of \eqref{eq:defcoer}. It follows from the Lax--Milgram theorem that \eqref{eq:wfD} and \eqref{eq:wfN} (and so also \eqref{eq:single} and \eqref{eq:hyp}) are uniquely solvable. \begin{rem} \label{rem:finite_union} It is not difficult to show (see \cite{CoercScreen,ScreenPaper} for details) that Theorem \ref{thm:bie} holds, and the Dirichlet and Neumann screen problems are uniquely solvable, for a rather larger class of open sets than the open Lipschitz sets. Precisely, the Dirichlet problem is uniquely solvable, and Theorem \ref{thm:bie} holds for the Dirichlet problem, if and only if $\partial\Gamma$ is $1/2$-null (as defined in \S\ref{subsec:Polarity}) and $\widetilde{H}^{-1/2}(\Gamma)=H^{-1/2}_{\overline \Gamma}$. In particular, by Lemma \ref{lem:polarity}(xvii), (v) and Theorem \ref{thm:new2}, and relevant to our discussion of prefractals below, these conditions hold in the case that $\Gamma= \Gamma_1 \cup \ldots \cup \Gamma_M$ is a finite union of bounded $C^0$ open sets, $\Gamma_1$, \ldots, $\Gamma_M$, with $\overline{\Gamma_i}\cap \overline{\Gamma_j}$ a finite set for $1\leq i,j\leq M$ with $i\neq j$. Similarly, the Neumann problem is uniquely solvable, and Theorem \ref{thm:bie} holds for the Neumann problem, if and only if $\partial\Gamma$ is $(-1/2)$-null and $\widetilde{H}^{1/2}(\Gamma)=H^{1/2}_{\overline \Gamma}$; in particular, by Lemma \ref{lem:polarity}(xix), (v) and Theorem \ref{thm:new2}, these conditions hold in the case that $\Gamma= \Gamma_1 \cup \ldots \cup \Gamma_M$ is a finite union of bounded Lipschitz open sets, $\Gamma_1$, \ldots, $\Gamma_M$, with $\overline{\Gamma_i}\cap \overline{\Gamma_j}$ finite for $1\leq i,j\leq M$ with $i\neq j$. \end{rem} Domain-based variational formulations of screen problems are also standard.
In particular, an equivalent formulation of the Dirichlet problem is to find $u\in H^1(\mathbb{R}^3)= W_2^1(\mathbb{R}^3)$ such that $\gamma_\pm u = f$ on $\Gamma$ and such that \begin{equation} \label{eq:dom_weak} a_{\mathrm{dom}}(u,v) := \int_{\mathbb{R}^3} (\nabla u \cdot \nabla \bar v - k^2 u \bar v) \,\mathrm{d} \mathbf{x} = 0, \quad \forall v\in H^1_0(\mathbb{R}^3\setminus \overline{\Gamma}), \end{equation} with $a_{\mathrm{dom}}(\cdot,\cdot)$ continuous and coercive on $H_0^1(\mathbb{R}^3\setminus\overline{\Gamma})$, so that this formulation is also uniquely solvable by the Lax--Milgram lemma. In the case that $\Re(k)=0$, so that $k^2<0$, $a_{\mathrm{dom}}(\cdot,\cdot)$ is also Hermitian, and the solution to this variational problem is also the unique solution to the minimisation problem: find $u\in H^1(\mathbb{R}^3)$ that minimises $a_{\mathrm{dom}}(u,u)$ subject to the constraint $\gamma_\pm u = f$. This leads to a connection to certain set capacities from potential theory. For an open set $\Omega\subset \mathbb{R}^n$ and $s>0$ we define the capacity \begin{align*} \mathrm{cap}_{s,\mathbb{R}^n}(\Omega):=\sup_{\substack{K\subset \Omega\\ K \textrm{ compact}}} \inf \big\{\|u \|^2_{H^{s}(\mathbb{R}^n)} \big\}, \end{align*} where the infimum is over all $u\in \mathscr{D}(\mathbb{R}^n)$ such that $u\geq 1$ in a neighbourhood of $K$. Then, in the special case when $k={\mathrm{i}}$ (so that $a_\mathrm{dom}(u,u) = \|u\|^2_{H^1(\mathbb{R}^3)}$ for $u\in H^1(\mathbb{R}^3)$) and $f=1$, the solution $u$ of the above minimisation problem satisfies (viewing $\Gamma$ as a subset of $\mathbb{R}^3$) \begin{equation} \label{eq:capbie} \mathrm{cap}_{1,\mathbb{R}^3}(\Gamma) = a_{\mathrm{dom}}(u,u) = a_{\rm D}([\partial_\mathbf{n} u],[\partial_\mathbf{n} u])= \langle 1, [\partial_\mathbf{n} u]\rangle, \end{equation} where $[\partial_\mathbf{n} u]\in \widetilde{H}^{-1/2}(\Gamma)$ is the unique solution of \eqref{eq:wfD} and $u= -\mathcal{S}[\partial_\mathbf{n} u]$ is the unique solution of \eqref{eq:dom_weak}. Note that in \rf{eq:capbie} the first equality follows from standard results on capacities (see, e.g., \cite[Proposition 3.4, Remark 3.14]{HewMoi:15}), the third from \eqref{eq:wfD}, and the second equality follows because $a_{\rm D}(\phi,\phi) = a_{\mathrm{dom}}(\mathcal{S}\phi,\mathcal{S}\phi)$, for all $\phi\in \widetilde{H}^{-1/2}(\Gamma)$ (cf.\ the proof of \cite[Theorem 2]{Costabel88}). We are interested in sequences of screen problems, with a sequence of screens $\Gamma_1, \Gamma_2, \ldots$ converging in some sense to a limiting screen. We assume that there exists $R>0$ such that the open set $\Gamma_j\subset \Gamma^R := \{\mathbf{x}\in \Gamma_\infty: |\mathbf{x}|<R\}$ for every $j\in \mathbb{N}$. Let $a_\mathrm{D}^R$ and $a_\mathrm{N}^R$ denote the sesquilinear forms $a_\mathrm{D}$ and $a_\mathrm{N}$ when $\Gamma=\Gamma^R$. We note that for any $R>0$ and open $\Gamma\subset \Gamma^R$ it holds that \begin{equation*} S_\Gamma \phi = \left.\left(S_{\Gamma^R} \phi\right)\right|_\Gamma \quad \mbox{ and } \quad T_\Gamma \psi = \left.\left(T_{\Gamma^R} \psi\right)\right|_\Gamma, \end{equation*} for $\phi\in \widetilde{H}^{-1/2}(\Gamma)$ and $\psi\in \widetilde{H}^{1/2}(\Gamma)$. Hence \begin{equation} \label{eq:rest2} a_\mathrm{D}(\phi,\psi) = a_\mathrm{D}^R(\phi,\psi), \quad \phi,\psi\in \widetilde{H}^{-1/2}(\Gamma)\subset \widetilde{H}^{-1/2}(\Gamma^R), \end{equation} i.e. $a_\mathrm{D}$ is the restriction of the sesquilinear form $a_\mathrm{D}^R$ from $\widetilde{H}^{-1/2}(\Gamma^R)$ to its closed subspace $\widetilde{H}^{-1/2}(\Gamma)$.
Similarly, $a_\mathrm{N}$ is the restriction of $a_\mathrm{N}^R$ to $\widetilde{H}^{1/2}(\Gamma)$. Focussing first on the Dirichlet problem, consider a sequence of Lipschitz screens $\Gamma_1, \Gamma_2, \ldots$ with $\Gamma_1 \supset \Gamma_2 \supset \ldots$ (or equivalently $\overline{\Gamma_1} \supset \overline{\Gamma_2} \supset \ldots$). Suppose that $f_j\in H^{1/2}(\Gamma_j)$ and let $\phi_j$ denote the solution $[\partial_\mathbf{n} u]$ to \eqref{eq:wfD} (equivalently to \eqref{eq:single}) when $\Gamma=\Gamma_j$ and $f=f_j$. The question we address is what can be said about $\phi_j$ in the limit as $j\to\infty$. For this question to be meaningful, we need some control over the sequence $f_j$: a natural assumption, relevant to many applications, is that \begin{equation} \label{eq:assum} \mbox{there exists }f_\infty\in H^{1/2}(\Gamma_\infty) \;\mbox{ such that } f_j = f_\infty|_{\Gamma_j}, \; \mbox{for } j\in \mathbb{N}. \end{equation} We shall study the limiting behaviour under this assumption using the general theory of \S\ref{subsec:ApproxVar}. To this end choose $R>0$ so that $\Gamma_1\subset \Gamma^R$, let $H = \widetilde{H}^{-1/2}(\Gamma^R)$, $W_j = \widetilde{H}^{-1/2}(\Gamma_j)$, so that $H\supset W_1\supset W_2 \supset \ldots$, and set $$ W = \bigcap_{j=1}^\infty W_j = \bigcap_{j=1}^\infty H^{-1/2}_{\overline{\Gamma_j}} = \bigcap_{j=1}^\infty \widetilde{H}^{-1/2}(\Gamma_j). $$ Then, by Proposition \ref{prop:nestedclosed}, $W = H^{-1/2}_F$, where $F = \cap_{j=1}^\infty \overline{\Gamma_j}$. Further, by \eqref{eq:rest2}, and where $f= f_\infty|_{\Gamma^R}$, we see that $\phi_j$ is the solution of $$ a_\mathrm{D}^R(\phi_j,\psi) = \langle f,\psi\rangle, \quad \psi\in W_j. $$ Applying Lemma \ref{lem:dec} we obtain immediately the first part of the following result. The remainder of the theorem follows from Lemma \ref{lem:polarity}(\ref{hh}) and (\ref{gg}). \begin{thm} \label{thm:dec} Assuming \eqref{eq:assum}, $\|\phi_j-\phi\|_{H^{-1/2}(\Gamma_\infty)}=\|\phi_j-\phi\|_{\widetilde{H}^{-1/2}(\Gamma^R)}\to 0$ as $j\to\infty$, where $\phi\in H^{-1/2}_F$ is the unique solution of $$ a_\mathrm{D}^R(\phi,\psi) = \langle f,\psi\rangle, \quad \psi\in H^{-1/2}_F. $$ Further, if $F$ is $(-1/2)$-null (which holds in particular if ${\rm dim_H}(F) < 1$) then $\phi=0$. If $F$ is not $(-1/2)$-null (which holds in particular if ${\rm dim_H}(F) > 1$), then there exists $f_\infty\in H^{1/2}(\Gamma_\infty)$ such that $\langle f,\psi\rangle\neq 0$, for some $\psi\in H^{-1/2}_F$, in which case $\phi\neq 0$. \end{thm} \begin{example} Theorem \ref{thm:dec} applies in particular to cases in which $F$ is a fractal set. One such example is where \begin{equation*}% \overline{\Gamma_j} = \big\{(\tilde \mathbf{x}, 0):\tilde \mathbf{x}\in E_{j-1}^2\big\}, \end{equation*} and $\Gamma_j = \mathrm{int}(\overline{\Gamma_j})$, with (cf.\ \cite[Example 4.5]{Fal}) $E_0\supset E_1\supset\ldots$ the standard recursive sequence generating the one-dimensional ``middle-$\lambda$'' Cantor set, $0<\lambda<1$, so that $E_j^2\subset \mathbb{R}^2$ is the closure of a Lipschitz open set that is the union of $4^j$ squares of side-length $l_j=\alpha^j$, where $\alpha=(1-\lambda)/2\in (0,1/2)$. (Figure \ref{fig:CantorDust} visualises $E_0^2,\ldots, E_4^2$ in the classical ``middle third'' case $\alpha =\lambda = 1/3$.) 
In this case the limit set is \begin{equation*} F = \big\{(\tilde \mathbf{x}, 0):\tilde \mathbf{x}\in E^2\big\}, \end{equation*} where $E=\cap_{j=0}^\infty E_j$ is the middle-$\lambda$ Cantor set and $E^2$ is the associated two-dimensional Cantor set (or ``Cantor dust''), which has Hausdorff dimension ${\rm dim_H}(E^2) = 2\log 2/\log(1/\alpha) \in (0,2)$. It is known that $E^2$ is $s$-null if and only if $s\geq ({\rm dim_H} (E^2)-n)/2$ (see \cite[Theorem 4.5]{HewMoi:15}, where $E^2$ is denoted $F^{(2)}_{2\log 2/\log(1/\alpha),\infty}$). Theorem \ref{thm:dec} applied to this example shows that if $1/4<\alpha < 1/2$ then there exists $f_\infty\in H^{1/2}(\Gamma_\infty)$ such that the limiting solution $\phi\in H^{-1/2}_F$ to the sequence of screen problems is non-zero. On the other hand, if $0<\alpha\leq 1/4$ then the theorem tells us that the limiting solution $\phi=0$. \end{example} \begin{figure} {\hspace{-12mm}\includegraphics[scale=0.75]{cantor_dust}} \caption{The first five terms in the recursive sequence of prefractals converging to the standard two-dimensional middle-third Cantor set (or Cantor dust).} \label{fig:CantorDust} \end{figure} It is clear from Theorem \ref{thm:dec} that whether or not the limiting solution of the sequence of screen problems is zero depends not on whether the limiting set $F$, thought of as a subset of $\Gamma_\infty$ which we identify with $\mathbb{R}^2$, has Lebesgue measure zero, but rather on whether this set $F$ is $(-1/2)$-null. From a physical perspective this may seem surprising: thinking of the screen as having a certain mass per unit area, a screen with zero surface Lebesgue measure is a screen with zero mass, in some sense a screen that is not there! But to those familiar with potential theory (e.g., \cite{AdHe}) this will be less surprising. In particular from \eqref{eq:capbie}, in the case $k={\mathrm{i}}$ and choosing $f_\infty$ so that $f_\infty=1$ in a neighbourhood of $\Gamma^R$, it holds that $$ \mathrm{cap}_{1,\mathbb{R}^3}(\Gamma_j) = \langle 1, \phi_j\rangle. $$ Taking the limit as $j\to\infty$, and applying elementary capacity theoretic arguments (see, e.g., \cite[Proposition 3.4]{HewMoi:15}), it follows that $$ \mathrm{cap}_{1,\mathbb{R}^3}(F) = \langle 1, \phi\rangle. $$ Moreover, for $\widetilde G \subset \mathbb{R}^2$, defining $G = \{(x_1,x_2,0)\in \mathbb{R}^3: (x_1,x_2)\in \widetilde G\}$, it is clear from the definition of capacity (which involves smooth functions only) and standard Sobolev trace and extension theorems (e.g.\ \cite{McLean}) that, for some positive constants $c_1$ and $c_2$ independent of $\widetilde G$, \begin{align} c_1 \mathrm{cap}_{1,\mathbb{R}^3}(G) \leq \mathrm{cap}_{1/2,\mathbb{R}^2}(\widetilde G) \leq c_2 \mathrm{cap}_{1,\mathbb{R}^3}(G). \end{align} Thus, where $\widetilde F = \{(x_1,x_2)\in \mathbb{R}^2: (x_1,x_2,0)\in F\}$, it is clear that $\phi=0$ iff $\mathrm{cap}_{1,\mathbb{R}^3}(F) =0$ iff $\mathrm{cap}_{1/2,\mathbb{R}^2}(\widetilde F)=0$, i.e. iff $\widetilde F$ is $(-1/2)$-null as a subset of $\mathbb{R}^2$, where the latter equivalence follows from \cite[13.2.2]{Maz'ya} (restated in \cite[Theorem 2.5]{HewMoi:15}).
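For concreteness we record the arithmetic in the classical middle-third case $\alpha=\lambda=1/3$ of Figure \ref{fig:CantorDust}: here $$ {\rm dim_H}(E^2)=\frac{2\log 2}{\log 3}\approx 1.26>1, $$ so, by Theorem \ref{thm:dec}, there exist data $f_\infty$ for which the limiting solution $\phi$ is non-zero, even though $m(F)=0$. By contrast, at the threshold value $\alpha=1/4$ one has ${\rm dim_H}(E^2)=2\log 2/\log 4=1$, and the nullity criterion quoted in the example above shows that $E^2$ is then still $(-1/2)$-null, consistent with the vanishing of $\phi$ for all $0<\alpha\leq 1/4$.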
Turning now to the Neumann problem, consider a sequence of open screens $\Gamma_1, \Gamma_2,\ldots$, with $\Gamma_1 \subset \Gamma_2 \subset\ldots$, such that: (i) $\Gamma := \bigcup_{j=1}^\infty \Gamma_j$ is bounded; and (ii) each $\Gamma_j$ is either Lipschitz or is a finite union of Lipschitz open sets whose closures intersect in at most a finite number of points (the case discussed in Remark \ref{rem:finite_union}, which ensures, inter alia, that $\widetilde{H}^{1/2}(\Gamma_j) = H^{1/2}_{\overline{\Gamma_j}}$). Suppose that $g_j\in H^{-1/2}(\Gamma_j)$ and let $\phi_j\in V_j:= \widetilde{H}^{1/2}(\Gamma_j)= H^{1/2}_{\overline{\Gamma_j}}$ denote the solution $[u]$ to \eqref{eq:wfN} (equivalently to \eqref{eq:hyp}) when $\Gamma=\Gamma_j$ and $g=g_j$. Analogously to the Dirichlet case we assume that \begin{equation} \label{eq:assum2} \mbox{there exists }g_\infty\in H^{-1/2}(\Gamma_\infty) \;\mbox{ such that } g_j = g_\infty|_{\Gamma_j}, \; \mbox{for } j\in \mathbb{N}, \end{equation} and choose $R>0$ such that $\Gamma \subset \Gamma^R$. Then, as noted after \eqref{eq:rest2}, and where $g= g_\infty|_{\Gamma^R}$, we see that $\phi_j\in V_j\subset \widetilde{H}^{1/2}(\Gamma^R)$ is the solution of $$ a_\mathrm{N}^R(\phi_j,\psi) = \langle g,\psi\rangle, \quad \psi\in V_j. $$ By Proposition \ref{prop:nestedopen}, $ V := \overline{\bigcup_{j\in\mathbb{N}} V_j} = \widetilde{H}^{1/2}(\Gamma). $ The first sentence of the following proposition is immediate from \eqref{eq:V-conver}, and the second sentence is clear. \begin{prop} \label{thm:neu} In the case that \eqref{eq:assum2} holds, $\|\phi_j-\phi\|_{H^{1/2}(\Gamma_\infty)}=\|\phi_j-\phi\|_{\widetilde{H}^{1/2}(\Gamma^R)}=\|\phi_j-\phi\|_{\widetilde{H}^{1/2}(\Gamma)}\to 0$ as $j\to\infty$, where $\phi\in \widetilde{H}^{1/2}(\Gamma)$ is the unique solution of $$ a_\mathrm{N}^R(\phi,\psi) = \langle g,\psi\rangle, \quad \psi\in \widetilde{H}^{1/2}(\Gamma). $$ Further, if $\widetilde{H}^{1/2}(\Gamma) \neq H^{1/2}_{\overline{\Gamma}}$, then there exists $g_\infty\in H^{-1/2}(\Gamma_\infty)$ such that $\phi\neq \phi^*$, where $\phi^*\in H^{1/2}_{\overline{\Gamma}}$ is the unique solution of $$ a_\mathrm{N}^R(\phi^*,\psi) = \langle g,\psi\rangle, \quad \psi\in H^{1/2}_{\overline{\Gamma}}. $$ \end{prop} \begin{rem} \label{rem:BIE} The question: ``for which $s\in \mathbb{R}$ and open $\Omega\subset \mathbb{R}^n$ is $\widetilde{H}^{s}(\Omega) \neq H^{s}_{\overline{\Omega}}$'' was addressed in \S\ref{subsec:3spaces}. From Lemma \ref{lem:equalityNullity} we have, in particular, that if $G:={\mathrm{int}}(\overline{\Gamma})\setminus \Gamma$ is not $(-1/2)$-null then $\widetilde{H}^{1/2}(\Gamma) \subsetneqq H^{1/2}_{\overline{\Gamma}}$. Indeed, by Lemma \ref{lem:equalityNullity}(v), $\widetilde{H}^{1/2}(\Gamma) = H^{1/2}_{\overline{\Gamma}}$ if and only if $G$ is $(-1/2)$-null, provided that $\widetilde{H}^{1/2}({\mathrm{int}}(\overline{\Gamma})) = H^{1/2}_{\overline{\Gamma}}$, which holds in particular if ${\mathrm{int}}(\overline{\Gamma})$ is $C^0$. And, by Lemma \ref{lem:polarity}(xii) and (xiii), $G$ is $(-1/2)$-null if ${\rm dim_H}(G)<1$, while $G$ is not $(-1/2)$-null if ${\rm dim_H}(G)>1$. As a specific example, consider the sequence of closed sets $F_0\supset F_1\supset \ldots$ that are the prefractal approximations to the Sierpinski triangle $F:= \bigcap_{j=0}^\infty F_j$ \cite[Example 9.4]{Fal}. $F_0$ is a (closed) triangle and $F_j$ is the union of $3^j$ closed triangles; the first four sets $F_0$, \ldots, $F_3$ in this sequence are shown in Figure \ref{fig:TSExamples}(a).
For $j\in \mathbb{N}$ let $\Gamma_j := F_0\setminus F_j$, and let $\Gamma := \bigcup_{j\in\mathbb{N}} \Gamma_j$, so that $\overline{\Gamma} = F_0$ and $\partial \Gamma = \overline{\Gamma}\setminus \Gamma = F$. Then, using standard results on fractal dimension (e.g., \cite{Fal}), $\dim_H(\partial F_0) = 1$ while $\dim_H(F) = \log 3/\log 2$, so that also $\dim_H({\mathrm{int}}(\overline{\Gamma})\setminus \Gamma) = \dim_H(F\setminus \partial F_0) = \log 3/\log 2 >1$, which implies that $\widetilde{H}^{1/2}(\Gamma) \subsetneqq H^{1/2}_{\overline{\Gamma}}$. On the other hand, since $\Gamma^*:={\mathrm{int}}(\overline{\Gamma})$ is $C^0$, $H^{1/2}_{\overline{\Gamma}} = \widetilde{H}^{1/2}(\Gamma^*)$, and $\phi^*\in \widetilde{H}^{1/2}(\Gamma^*)$ (defined in Proposition~\ref{thm:neu}) is the solution $[u]$ to \eqref{eq:hyp} in the case when the screen is $\Gamma^*$ and $g$ in \eqref{eq:hyp} is the restriction of $g_\infty$ to $\Gamma^*$. This specific example illustrates that the limit of the solutions $\phi_j\in \widetilde{H}^{1/2}(\Gamma_j)$ to the BIE for the Neumann problem when the screen is $\Gamma_j$ can be different to the solution $\phi^* \in \widetilde{H}^{1/2}(\Gamma^*)$ when the screen is $\Gamma^*$. It is surprising that this happens even though $\Gamma_j\to \Gamma^*$ in a number of senses. In particular, $\Gamma_j$ can be viewed as the screen $\Gamma^*$ with ``holes'' in it, but with the size of these holes, as measured by the 2D Lebesgue measure $m(\Gamma^*\setminus \Gamma_j)$, tending to $0$ as $j\to\infty$. \end{rem}
\section{Introduction} Galilean invariance is generally broken in solids due to the presence of a lattice background. For typical semiconductor materials, however, Galilean symmetry is preserved for low-energy states near the band edge, where the only remaining effect of the underlying lattice is a renormalization of the electron mass from its bare value~\cite{Yu:2010Springer_Semiconductors}. In a Galilean-invariant system, interaction effects do not affect electronic transport, which is carried only by the center-of-mass motion of the electron liquid. The absence of interaction corrections to the Drude weight in the conventional two-dimensional electron gas (2DEG) has been demonstrated in several experiments~\cite{Hirjibehedin:2002is,Pellegrini:2006el}. On the other hand, electronic transport in multilayer graphene systems is incompatible with Galilean invariance due to the chiral pseudospin texture of their low-energy states. Electronic states in the Brillouin zone are characterized not only by their respective crystal momenta, but also by their pseudospin orientations, which originate from the underlying lattice structure. A Galilean boost in graphene systems will not only shift the momentum of the occupied quantum states but also change their average pseudospin orientations. As a result, electronic states in chiral multilayer graphene do not respect Galilean symmetry. Therefore, unlike in a conventional 2DEG, optical properties of graphene systems can be subject to renormalization effects from many-body interactions~\cite{James:2009PRB_TwobandModel,Polini:2011PRB_Drudeweight}. The above theoretical expectations receive reasonable support from experiments but remain an open issue to date. In single-layer graphene, several measurements of the Drude weight indeed observe a deviation from its free-carrier behavior, though the experimental interpretations are not yet fully conclusive. In two earlier optical spectroscopy experiments~\cite{Horng:2011PRB_DrudeCond,Yan:2011ACS_DrudeCond}, the results suggest up to a $40\%$ suppression of the Drude weight. However, in a recent cyclotron-resonance absorption experiment~\cite{Orlita:2012NJP_Drude}, the measured Drude weight is reported to be in quantitative agreement with the prediction in Ref.~\onlinecite{Polini:2011PRB_Drudeweight}. The optical Drude weight in bilayer graphene is less studied experimentally. To date, most optical absorption measurements on bilayer graphene focus on the higher-frequency absorption features in the spectrum, such as interband absorption thresholds as well as the asymmetry between electron- and hole-doped regions~\cite{Abergel:2007PRB_OpticalInfrared,Nicol:2008PRB_BLGCond,Nilsson:2008PRB_ElePropertyBLG,Benfatto:2008PRB_OpticalSumRule,Wang:2008Science_GateTunableTransition,Malard:2007PRB_Raman,Zhang:2008PRB_BLGconductivity,Kuzmenko:2009PRB_InfraredSpect,Li:2009PRL_BandAsymmetry,Zhang:2009Nature_TunableGap,Mak:2009PRL_BandGapOpening,Lui:2011NatPhys_BandgapTuning}. Additional studies are imperative to better understand the intraband absorption processes represented by the optical Drude weight in bilayer and multilayer graphene. In this paper, we present a quantum kinetic theory for the renormalization of the Drude weight and plasmon frequency in multilayer graphene. The current work generalizes the theory developed in Ref.~\onlinecite{James:2009PRB_TwobandModel} for bilayer graphene to the multilayer case.
The quantum kinetic approach~\cite{Rammer:1986RMP_QKE} captures important quantum coherence effects among energy bands beyond the semiclassical Boltzmann theory. We first build our theory on the low-energy two-band description of multilayer graphene and study the effects of chirality on the interaction-induced renormalization. Using the full four-band Hamiltonian, we then focus on bilayer graphene as an example to illustrate the effects of the higher energy bands ignored in the two-band model. In these two calculations, we obtain a set of generalized optical Bloch equations that govern the frequency dependence of the nonequilibrium density matrix under the influence of an optical field and electron-electron interactions. We obtain leading-order solutions to these equations and demonstrate that the Drude weight and plasmon frequency are enhanced, with substantial corrections from higher-band contributions that are ignored in the two-band calculations. We organize the rest of our paper as follows. We first develop the formalism of our kinetic theory for multilayer graphene using the two-band model in Sections~\ref{Sec:Model} and~\ref{Sec:SLGDIC}. Then in Section~\ref{Sec:4bandModel} we lay out the necessary ingredients for a more elaborate theory for bilayer graphene using the four-band description. In Section~\ref{Sec:Drudeweight} we proceed to obtain the leading-order solution to the theory and obtain the optical Drude weight of bilayer graphene. In Section~\ref{Sec:Discussions} we compare and discuss the results obtained using the two-band and four-band models of bilayer graphene, as well as the renormalization of the plasmon frequency. Finally, Section~\ref{Sec:Conclusion} summarizes our main results. \section{Quantum kinetic formalism \label{Sec:Model}} We make use of a quantum kinetic equation~\cite{Rammer:1986RMP_QKE} to study the influence of electron-electron interactions on the optical conductivity. Such an approach is well established in connection with studies of the carrier and exciton kinetics in conventional semiconductors under optical excitation~\cite{haug2008quantum}. While fully equivalent to the Bethe-Salpeter equation, the present approach has the advantage of the gauge-invariant structure of the kinetic equation, in which electron self-energy effects as well as excitonic effects are built in consistently in a conserving approximation. The density matrix $\rho$ is the central quantity in our theory. In the presence of an \textit{a.c.} electric field $\bm{E}$, the dynamics of the density matrix $\rho=\rho(\bm{k})$ is governed by the following quantum kinetic equation~\cite{James:2009PRB_TwobandModel} \begin{align} -i\omega\rho+e\bm{E}\cdot\partii{\rho}{\bm{k}}+i[\mathcal{H},\rho]=0, \label{Eq:QKE-general} \end{align} where $\omega$ is the frequency of the \textit{a.c.} field and $[\hat{A},\hat{B}]\equiv \hat{A}\hat{B}-\hat{B}\hat{A}$ denotes the commutator between operators $\hat{A}$ and $\hat{B}$. The system Hamiltonian $\mathcal{H}$ generally comprises a noninteracting part $\mathcal{H}_0$ and a self-energy correction due to many-body interactions. In linear response, the density matrix is given by $\rho=\rho_{0}+\rho_{1}$, where $\rho_{0}$ is its equilibrium value and $\rho_{1}$ the first-order correction due to the external electric field. Keeping only terms up to first order in the electric field, we write Eq.~\eqref{Eq:QKE-general} as \begin{align} -i\omega\rho_{1} + e\bm{E}\cdot\partii{\rho_{0}}{\bm{k}}+i[\mathcal{H},\rho_{0}+\rho_{1}]=0.
\label{Eq:QKE} \end{align} Our focus is on obtaining the Drude weight from the optical conductivity. Because the Drude weight is obtained from the residue of the $\omega=0$ pole in the real part of the optical conductivity in the absence of disorder, we can limit our discussion to the clean limit $\omega\tau\gg1$, where collision terms in the kinetic equation can be ignored. The optical conductivity $\sigma(\omega)$ can then be obtained from the average total current $J = \sigma(\omega)\bm{E}$, which is the quantum mechanical average of the current operator $j$ \begin{align} J &= g_sg_v e\sum_{\bm{k}}\;\text{tr}(\rho_{1} j).\label{Eq:currentdef} \end{align} In the above equation, $g_s=g_v=2$ account for the spin and valley degeneracies, respectively, in multilayer graphene systems and `tr' denotes the trace over the pseudospin degrees of freedom. In the following, we will solve for the nonequilibrium density matrix $\rho_{1}$ from Eq.~\eqref{Eq:QKE} both in the absence and in the presence of electron-electron interaction. To clearly delineate these two limits, we separate $\rho_{1}$ into two parts $\rho_{1}=\rho_{1}^{(0)}+\rkle$, with the first term $\rho_{1}^{(0)}$ being the noninteracting result and the second term $\rkle$ containing corrections from interaction effects. \section{Chiral multilayer graphene \label{Sec:SLGDIC}} In this section, we generalize the method for obtaining the Drude weight renormalization developed in Ref.~\onlinecite{James:2009PRB_TwobandModel} from bilayer to multilayer graphene. Focusing on the two lowest energy bands around the charge neutrality point, one can write~\cite{Min:2008PRB_PseudospinMag,McCann:2011PRB_MultilayerGraphene,Li2014} the effective Hamiltonian for an $l$-layer ABC-stacked multilayer graphene system as $\mathcal{H}_0=\epsilon_{\bk}\hat{\bm{n}}\cdot\bm{\sigma}$, where $\hat{\bm{n}}=(\cos l\phi_k,\sin l\phi_k)$ is the pseudospin vector responsible for the chirality of the band structure, $\bm{\sigma}$ is a vector comprising the set of Pauli matrices acting on the pseudospin degrees of freedom, and $\epsilon_{\bk}\equiv \mathcal{A}_l k^l$ is the band energy dispersion, with $\mathcal{A}_l=(\hbar v_0)^l/\gamma_1^{l-1}$. In this low-energy description, the pseudospin degrees of freedom correspond to the outermost top and bottom layers in multilayer graphene (including bilayer graphene) and the two sublattice sites for single-layer graphene. As the electronic wave vector undergoes one full rotation around the Dirac point, the pseudospin vector undergoes $l$ rotations. In other words, the pseudospin winding number is equal to the number of layers~\cite{Park:2011ps}. We note that this two-band model is valid within a limited energy range; in particular, for bilayer and multilayer graphene it does not capture either the higher-energy bands or the low-energy remote hopping processes that can lead to trigonal warping effects~\cite{Varlet2014:BLG}. \subsection{Pseudospin Bloch Equation} In equilibrium, the density matrix in the band basis is diagonal with the elements $n_F(\xi_{k\lambda})$, where $n_F(x)$ is the Fermi-Dirac distribution function, $\xi_{k\lambda}=\lambda\epsilon_{\bk}-\varepsilon_F$ is the quasiparticle energy measured from the Fermi energy $\varepsilon_F$, and $\lambda=+(-)$ labels the conduction (valence) band. For clarity we denote $n_F(\xi_{k\lambda})$ simply by $n_\lambda(k)$ in the following.
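As a quick numerical illustration of the chirality discussed above (a minimal sketch of our own, not part of the derivation; it sets $\mathcal{A}_l=k=1$), one can track the gauge-invariant pseudospin orientation $\langle\sigma_x\rangle+i\langle\sigma_y\rangle$ of the conduction-band state as $\phi_k$ sweeps a full circle, and read off the winding number $l$:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def conduction_state(l, phi):
    # H = cos(l*phi)*sx + sin(l*phi)*sy   (A_l = k = 1)
    H = np.cos(l * phi) * sx + np.sin(l * phi) * sy
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, np.argmax(vals)]

for l in (1, 2, 3):
    phis = np.linspace(0.0, 2.0 * np.pi, 401)
    ang = np.unwrap([np.angle(u.conj() @ sx @ u + 1j * (u.conj() @ sy @ u))
                     for u in (conduction_state(l, p) for p in phis)])
    print(l, round((ang[-1] - ang[0]) / (2.0 * np.pi)))  # winding = l
\end{verbatim}
The printed winding number equals the layer number $l$, independently of the arbitrary phase that the numerical diagonalization attaches to each eigenvector.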
Transforming this diagonal equilibrium matrix from the band basis to the pseudospin basis yields the equilibrium density matrix \begin{align} \rho_{0} = \dfrac{1}{2}\sum_{\lambda=\pm} n_\lambda(k)(1-\lambda\bm{\sigma}\cdot\hat{\bm{n}}). \end{align} To obtain the nonequilibrium density matrix $\rho$, we first express it in the complete basis of a set of transformed Pauli matrices (see Appendix~\ref{Appendix:GammaMatrices}) \begin{align} \rho_{1} &= i(\bsigma\times\nn)_z P(\bm{k})+Q(\bm{k})\sigma_z+R(\bm{k})\bsigma\cdot\nn+S(\bm{k}). \label{Eq:TwobandRk} \end{align} Such a decomposition carries a clear physical meaning: the $\bm{1}$, $\bsigma\cdot\nn$, $(\bsigma\times\nn)_z$, and $\sigma_z$ components describe the total density change, interband polarization, interlayer coherence, and interlayer polarization, respectively. We insert the above ansatz for $\rho_{1}$ into the quantum kinetic equation in Eq.~\eqref{Eq:QKE} and obtain the following equations for the functions $P$, $Q$, $R$, and $S$. In particular, $R(\bm{k})$ and $S(\bm{k})$ have the following closed-form solutions \begin{align} S(\bm{k})&=-\dfrac{ie(\bm{E}\cdot\hat{\bm{k}})}{2\omega}\left[n'_+(k)+n'_-(k)\right],\notag\\ R(\bm{k})&= \dfrac{ie(\bm{E}\cdot\hat{\bm{k}})}{2\omega}\left[n'_+(k)-n'_-(k)\right], \label{Eq:Two-band-Bk} \end{align} while $P(\bm{k})$ and $Q(\bm{k})$ satisfy the following coupled integral equations, \begin{align} \omega P(\bm{k})+\delta_k Q(\bm{k})&=(n_{+}-n_{-})\left[\dse_{+-}+e\bm{E}\cdot\mathcal{A}_{+-}\right],\notag\\ \delta_k P(\bm{k})+\omega Q(\bm{k})&=-(n_{+}-n_{-})\dse_{-+},\label{Eq:two-bandEquations} \end{align} where $\delta_k=2\epsilon_{\bk}+\Sigma_{+}^{(0)}(\bm{k})-\Sigma_{-}^{(0)}(\bm{k})$ is the interband excitation energy, and \begin{align} \Sigma_{\lambda}^{(0)}(\bm{k})=-\sum_{\lambda'=\pm,\bm{k}'}V_{\bm{k}\bk'}n_{\lambda'}(k')\left[1+\lambda\lambda'\cos (l\phi_{k'k})\right]/2\notag \end{align} is the equilibrium self-energy for band $\lambda=\pm$. The electric dipole term consists of a coupling between the electric field and a gauge potential $\mathcal{A}_{+-}$. Here $\mathcal{A}_{\lambda\lambda'}$ is the \emph{non-Abelian Berry connection}~\cite{DiXiao:2009RMP_BerryPhase}, \begin{align} \mathcal{A}_{\lambda\lambda'}(\bm{k}) = i\bra{u_{\lambda}(\bm{k})}\frac{\partial}{\partial \bm{k}}\ket{u_{\lambda'}(\bm{k})},\label{Eq:BerryConnection} \end{align} with $u_{\lambda}(\bm{k})$ denoting the wave function for the band $\lambda$. $\mathcal{A}_{+-}$ is therefore the off-diagonal matrix element of $\mathcal{A}(\bm{k})$ between the conduction and valence band states, and for multilayer graphene we have $\mathcal{A}_{+-} =(l/2k)\hat{\phi}$ in Eq.~\eqref{Eq:two-bandEquations}. Finally, the right-hand side of Eq.~\eqref{Eq:two-bandEquations} arises from changes in the self-energy from the nonequilibrium density matrix \begin{align} \dse_{+-}(\bm{k}) &=\sum_{\bm{k}'}V_{\bm{k}\bk'} Q(\bm{k}'), \label{Eq:DrudeContributionToCk}\\ \dse_{-+}(\bm{k}) &=\sum_{\bm{k}'}V_{\bm{k}\bk'}[\cos l\phi_{k'k} P(\bm{k}')-i\sin l\phi_{k'k} R(\bm{k}')].\notag \end{align} Eqs.~\eqref{Eq:Two-band-Bk}-\eqref{Eq:two-bandEquations} comprise a set of pseudospin Bloch equations, reminiscent of the optical Bloch equations commonly used for two-level atoms \cite{allen2012optical} and conventional two-band semiconductors~\cite{haug2008quantum,boyd2013nonlinear}.
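The purely azimuthal form of $\mathcal{A}_{+-}$ can be verified directly. The sketch below (an illustrative check of our own; it uses the analytic eigenvectors $u_\pm=(1,\pm e^{il\phi})^{T}/\sqrt{2}$ of $\hat{\bm{n}}\cdot\bm{\sigma}$ to fix the gauge) evaluates $i\langle u_+|k^{-1}\partial_\phi|u_-\rangle$ by a central difference and recovers $l/2k$:
\begin{verbatim}
import numpy as np

def u(lam, l, phi):
    # analytic band eigenvectors of n.sigma for the l-chiral two-band model
    return np.array([1.0, lam * np.exp(1j * l * phi)]) / np.sqrt(2.0)

def berry_A_phi(l, k, phi, h=1.0e-6):
    # azimuthal Berry connection  i <u_+| (1/k) d/dphi |u_->
    du = (u(-1, l, phi + h) - u(-1, l, phi - h)) / (2.0 * h)
    return (1j * np.vdot(u(+1, l, phi), du) / k).real

for l in (1, 2, 3):
    k = 0.7   # arbitrary radius in momentum space (A_l = 1)
    print(l, berry_A_phi(l, k, 0.3), l / (2.0 * k))  # two columns agree
\end{verbatim}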
Importantly, Eqs.~\eqref{Eq:Two-band-Bk}-\eqref{Eq:two-bandEquations} are also different from the conventional optical Bloch equations in the following way. First we note that the solutions of $R(\bm{k})$ and $S(\bm{k})$ in Eq.~\eqref{Eq:Two-band-Bk} describe the Drude responses of the total density and interband polarization. Eq.~\eqref{Eq:two-bandEquations} determines the interband response from the coupled dynamics of the coherence and polarization in the layer degrees of freedom. An important observation is that the interband response is coupled to the Drude response through the nonequilibrium self-energy $\dse_{-+}(\bm{k})$ in Eq.~\eqref{Eq:DrudeContributionToCk} due to its dependence on $R(\bm{k})$. This Drude-interband coupling is the central piece of physics that gives rise to the renormalization effects on the optical conductivity and plasmon frequency we discuss in this paper. It arises from the purely azimuthal ($\hat{\phi}$) dependence of the Berry connection $\mathcal{A}_{+-}$ in Eq.~\eqref{Eq:two-bandEquations}, which reflects the chirality of the graphene band structure. Solutions of Eq.~\eqref{Eq:two-bandEquations} yield the interaction corrections to the density matrix $\rho_{1}$. To obtain the optical conductivity, we need to compute the current induced by an applied \emph{a.c.} electric field. We decompose the current density operator in the following way \begin{align} \bm{j}_{\bm{k}} = \partii{\mathcal{H}_0}{\bm{k}}=\partii{\mathcal{H}_0}{k}\hat{k}+\dfrac{1}{k}\partii{\mathcal{H}_0}{\phi}\hat{\phi}\equiv j_k\hat{k}+j_{\phi}\hat{\phi}, \end{align} where $\hat{k}$ and $\hat{\phi}$ are the unit vectors for the radial and azimuthal directions, respectively. The current density along the $x$-direction is then $ j_x= j_k\cos\phi - j_\phi \sin\phi$. As a result, in the linear response regime, the total current induced by an electric field in the $x$ direction reads \begin{align} J_x &= 4e\int d\bm{k}\;\text{Tr}(\rho_{1} j_x) \equiv J_1 - J_2, \label{Eq:totalcurrents} \end{align} with $J_1$ and $J_2$ defined as \allowdisplaybreaks[2] \begin{align*} J_1= 4e\int d\bm{k}\;\text{Tr}(\rho_{1} j_k)\cos\phi, \; J_2 = 4e\int d\bm{k}\;\text{Tr}(\rho_{1} j_\phi)\sin\phi. \end{align*} For the two-band model we find that the current operator $j_x = \partial \mathcal{H}_0/\partial k_x$ evaluates to $j_x = l\mathcal{A}_lk^{l-1}\{\sigma_x\cos[(l-1)\phi_k]+\sigma_y\sin[(l-1)\phi_k]\}$. As a result, the total current of the system is given by \begin{align} J_x = \dfrac{2e}{\pi^2}l\mathcal{A}_l \int_0^{\infty} dk k^l\int_0^{2\pi} d\phi_k \left[R(\bm{k})\cos\phi_k+iP(\bm{k})\sin\phi_k\right], \label{Eq:Two-band-current} \end{align} from which we can find the interaction corrections to the conductivity. \subsection{Leading-order interaction-induced Drude weight renormalization} We now use the above formalism to obtain the leading-order interaction-induced Drude weight renormalization $\bar{\mathcal{D}}$ in multilayer graphene. The coupled integral equations~\eqref{Eq:two-bandEquations} can be solved numerically to obtain the nonequilibrium density matrix $\rho_{1}$ to all orders of the interaction potential within our theory. To maintain analytic tractability, however, in this work we will only solve these coupled integral equations perturbatively up to first order and obtain the corresponding interaction corrections to the Drude weight.
In addition, as we are concerned only with the Drude weight, it is sufficient to evaluate terms with an $\omega^{-1}$ dependence in Eq.~\eqref{Eq:Two-band-current}. First, the noninteracting contribution to the Drude weight comes only from $R(\bm{k})$ in Eq.~\eqref{Eq:Two-band-Bk}, yielding \begin{align} \mathcal{D}^{(0)}(\varepsilon_F) = \dfrac{e^2}{\pi}l\mathcal{A}_l k_F^l=\dfrac{e^2}{\pi}l\varepsilon_F, \end{align} where $k_F$ is the Fermi wave vector. The interaction contributions to the Drude weight are contained in the $P(\bm{k})$ term from Eq.~\eqref{Eq:Two-band-current}, originating from the nonequilibrium self-energy $\dse_{-+}(\bm{k})$ due to the Drude-interband coupling. To leading order in the interaction potential, we find that the part of $P(\bm{k})$ having an $\omega^{-1}$ dependence (denoted by a subscript `$\mathrm{Drude}$' below) is given by \begin{align} P_{\mathrm{Drude}}^{(e)}&= \dfrac{e(\bm{E}\times\hat{\bm{k}})_z(n_{+}-n_{-})}{4\epsilon_{\bk}}\times \notag\\ &\sum_{\bm{k}'}V_{\bm{k}\bk'}\sin\phi_{k'k}\sin l\phi_{k'k}(n_{-}'-n_{+}'). \end{align} The leading-order interaction correction to the Drude weight then follows from substituting the above into Eq.~\eqref{Eq:Two-band-current}. To illustrate the behavior of the Drude weight correction in the presence of screening effects, we assume static screening for the Coulomb potential \begin{align} V_{\bm{k}\bk'}= \dfrac{2\pi e^2}{\kappa(|\bm{k}-\bm{k}'|+\eta k_{\text{TF}})}, \label{Eq:CoulombPotential} \end{align} where $\kappa$ is the effective dielectric constant of the environment in which the multilayer graphene sheet is embedded, $k_{\text{TF}}$ is the Thomas-Fermi screening wave vector, \begin{align} k_{\text{TF}}=\dfrac{g_sg_ve^2}{l\kappa}\mathcal{A}_l^{-1}k_F^{2-l}, \end{align} and $\eta$ is a control parameter that can be adjusted to represent the weaker screening at high frequencies. In the following we first consider two limits where analytical expressions for the Drude weight correction can be obtained. We first study the long-range interaction limit corresponding to negligible screening by ignoring $k_{\text{TF}}$ in Eq.~\eqref{Eq:CoulombPotential}. In the opposite limit, when the interaction is heavily screened, the Thomas-Fermi screening wave vector $k_{\text{TF}}$ will be much larger than typical values of $|\bm{k}-\bm{k}'|$. Thus, we use a constant $V_0$ interaction to represent $V_{\bm{k}\bk'}$. Finally, we evaluate the Drude weight correction numerically in the full Thomas-Fermi approximation [Eq.~\eqref{Eq:CoulombPotential}] and compare the results from the three cases. \subsubsection{Long-range interaction limit} In the limit of long-range Coulomb potential, the expression for $P_{\mathrm{Drude}}^{(e)}(\bm{k})$ reads \begin{align} P_{\mathrm{Drude}}^{(e)}(\bm{k})=\dfrac{e^3(\bm{E}\times\hat{\bm{k}})_zk_F(n_{+}-n_{-})}{16\pi\omega\epsilon_{\bk}\kappa}\notag\\\times [\Phi_{l-1}(k,k_F)-\Phi_{l+1}(k,k_F)],\notag \end{align} from which we obtain the following correction to the Drude weight from Eq.~\eqref{Eq:Two-band-current} \begin{align} \mathcal{D}^{(e)}= \dfrac{e^4l k_F}{8\pi^2\kappa} \int_{1}^{k_c/k_F}dx [\Phi_{l-1}(x,1)-\Phi_{l+1}(x,1)], \end{align} where the function $\Phi_{l}$ is defined in Appendix~\ref{Appendix:AuxFunctions}. In addition, we have defined the dimensionless variable $x=k/k_F$, and $k_c$ is the momentum cutoff up to which the two-band description of the low-energy multilayer graphene model remains valid, which depends on the number of layers.
For Fermi energies with $k_F \ll k_c$, where the two-band description holds to a good approximation, the upper limit of the integral above becomes large and the value of the integral becomes independent of $k_F$. Therefore, unlike the noninteracting Drude weight, the power-law dependence on $k_F$ of the leading-order $\mathcal{D}^{(e)}$ is independent of the number of layers $l$. We define the Drude weight renormalization factor $\bar{\mathcal{D}}$ by $\bar{\mathcal{D}}-1= {\mathcal{D}^{(e)}}/{\mathcal{D}^{(0)}}$, and find that in the long-range limit \begin{align} \bar{\mathcal{D}}-1 =&\dfrac{\alpha^{\ast}}{8\pi(\hbar v_0k_F/\gamma_1)^{l-1}}\notag\\&\times \int_{1}^{k_c/k_F}dx [\Phi_{l-1}(x,1)-\Phi_{l+1}(x,1)], \end{align} where $\alpha^{\ast} = e^2/\kappa \hbar v$ is the effective fine structure constant in graphene and $\kappa$ is the dielectric constant of the environment. Among graphene systems, we note that single-layer graphene ($l=1$) is special because the Drude weight renormalization factor is independent of electron density in this long-range interaction limit, \begin{align} \bar{\mathcal{D}}-1\big|_\text{SLG}=\dfrac{\alpha^{\ast}}{8\pi} \int_{1}^{k_c/k_F}dx [\Phi_{0}(x,1)-\Phi_{2}(x,1)]. \label{Eq:SLG_lr_DIC} \end{align} This agrees with results obtained from the diagrammatic formalism up to the same leading order~\cite{Polini:2011PRB_Drudeweight}. For bilayer graphene $(l=2)$, we have \begin{align} \bar{\mathcal{D}}-1\big|_\text{BLG}=\dfrac{\alpha^{\ast}\gamma_1}{8\pi\hbar v k_F} \int_{1}^{k_c/k_F}dx [\Phi_{1}(x,1)-\Phi_{3}(x,1)], \label{Eq:BLG_lr_DIC} \end{align} which agrees with the result previously obtained in Ref.~\onlinecite{James:2009PRB_TwobandModel}. \subsubsection{Short-range interaction limit} We now turn to the limit of short-range interaction, where the electron-electron interaction is assumed to be a constant $V_0$. The expression for $P_{\mathrm{Drude}}^{(e)}(\bm{k})$ in this limit is \begin{align} P_{\mathrm{Drude}}^{(e)} &=\dfrac{e(\bm{E}\times\hat{\bm{k}})_z(n_+-n_-)k_F V_0}{32\pi^2\omega\epsilon_{\bk}}\notag\\ &\times\int_0^{2\pi}d\phi\left\{\cos[(l-1)\phi_{\bm{k}}]-\cos[(l+1)\phi_{\bm{k}}]\right\}. \notag \end{align} Interestingly, we note that when the number of layers $l>1$, the above expression vanishes due to azimuthal symmetry. This finding generalizes our previous result~\cite{James:2009PRB_TwobandModel} for bilayer graphene to $l > 2$ multilayer graphene. Therefore the leading-order correction $\bar{\mathcal{D}}-1$ vanishes in the short-range interaction limit for pseudospin winding number $l \geq 2$. Single-layer graphene ($l=1$) is special, as it alone has a nonzero leading-order correction in the short-range limit. If we take the effective interaction strength to be $V_0 = 2\pi e^2/(\kappa\eta k_{\text{TF}})$, the Drude weight correction is then \begin{align} \mathcal{D}^{(e)}\big|_\text{SLG} =\dfrac{e^4k_F}{4\pi\kappa\eta k_{\text{TF}}}(k_c-k_F) =\dfrac{e^2\hbar v}{16\pi\eta}(k_c-k_F), \end{align} and the corresponding renormalization factor is \begin{align} \bar{\mathcal{D}}-1\big|_\text{SLG}=\dfrac{1}{16\eta}\left(\dfrac{k_c}{k_F}-1\right), \label{Eq:SLG_sr_DIC} \end{align} in agreement with the result obtained in Ref.~\onlinecite{Polini:2011PRB_Drudeweight}. \subsubsection{Thomas-Fermi Screening} \begin{figure}[!] \centering \includegraphics[scale=0.3]{Fig-SLGDIC} \caption{Comparison of the interaction-induced Drude weight correction (DIC) in monolayer graphene in the long-range and short-range limits. We also include the numerical evaluation with the full screened potential for comparison.
Here we use $\eta = 0.1$ and a dielectric constant of $\kappa =2.5$. \label{Fig:SLGDIC}} \end{figure} We now evaluate the Drude weight renormalization numerically for finite static screening. The expression for the Drude weight correction for finite $k_{\text{TF}}$ is given by \begin{align} \mathcal{D}^{(e)} =\dfrac{e^4l}{4\pi^2\kappa}\int_{k_F}^{k_c}dk I_l(k), \end{align} with \begin{align} I_l(k)=\int_0^{2\pi}d\phi\dfrac{k_F}{|\bm{k}-\bm{k}_F|+\eta k_{\text{TF}}}\sin\phi\sin l\phi, \end{align} from which we obtain the Drude weight renormalization factor as \begin{align} \bar{\mathcal{D}}-1 =\dfrac{\alpha^{\ast}}{4\pi(\hbar v_0k_F/\gamma_1)^{l-1}}\int_{1}^{k_c/k_F}dx I_l(x). \label{Eq:TwobandDIC} \end{align} In Fig.~\ref{Fig:SLGDIC}, we show the numerical result from Eq.~\eqref{Eq:TwobandDIC} and the analytical results in the long-range [Eq.~\eqref{Eq:SLG_lr_DIC}] and short-range limits [Eq.~\eqref{Eq:SLG_sr_DIC}]. We note that the short-range limit result drastically overestimates the Drude weight renormalization as compared to the Thomas-Fermi screening result, which is better approximated by the long-range limit. \begin{figure}[!] \centering \includegraphics[scale=0.3]{Fig-DICTwoband} \caption{Interaction-induced Drude weight renormalization in few-layer graphene [see the definition in Eq.~\eqref{Eq:TwobandDIC}]. Here we use $\eta = 0.1$ and a dielectric constant of $\kappa =2.5$. \label{Fig:DICTwoband}} \end{figure} Our theory further predicts that the Drude weight renormalization effects become smaller with increasing number of layers, as shown in Fig.~\ref{Fig:DICTwoband}. Also, an increase in electron density will tend to weaken the Drude weight renormalization. \section{Generalization to four bands \label{Sec:4bandModel}} In this section we generalize our kinetic equation formalism to more than two bands, using the $4\times4$ bilayer graphene model as a prototypical example. This serves to extend the validity of our theory to a wider frequency range encompassing higher frequency optical excitations, and to include the interband coherence effects between the two conduction bands as well as the two valence bands. Our starting point is the four-band continuum description of Bernal-stacked bilayer graphene, in which we only include the in-plane hopping energy and the nearest-neighbor interlayer coupling. The resulting Hamiltonian is given by~\cite{CastroNeto:2009RMP_Graphene} \begin{align} \mathcal{H}_0= \begin{pmatrix} 0 & v\hbar k e^{-i\phi} & -\gamma_1 & 0\\ v\hbar k e^{i\phi} & 0 & 0 & 0\\ -\gamma_1 & 0 & 0 & v\hbar k e^{-i\phi}\\ 0 & 0 & v\hbar k e^{i\phi} & 0 \end{pmatrix},\label{Eq:BilayerH0} \end{align} where $v=1.0\times 10^6$\,m/s is the Fermi velocity of the Dirac fermions in single-layer graphene, $\phi=\tan^{-1}(k_y/k_x)$, and $\gamma_1=0.4$\,eV is the interlayer hopping energy. We will set $\hbar=1$ and $v=1$ hereafter and only restore them in our final results. The four bands derived from the above Hamiltonian are \begin{align} \eps{1}=\dfrac{1}{2}\left(\sqrt{4k^2+\gamma_1^2}+ \gamma_1 \right)=-\eps{4},\notag\\ \eps{2}=\dfrac{1}{2}\left(\sqrt{4k^2+\gamma_1^2}- \gamma_1 \right)=-\eps{3}, \label{Eq:BLband} \end{align} which are shown in Fig.~\ref{Fig:BLGBands}, and the corresponding wavefunctions will be denoted by $u_{i}(\bm{k})$. For convenience, we will adopt the notation $\Delta_{\bk}\equiv\sqrt{4v^2k^2+\gamma_1^2}$ in this paper. \begin{figure}[!] \includegraphics[scale=1]{Fig-BLGBands.pdf} \caption{\label{Fig:BLGBands} Bandstructure of bilayer graphene.
In this figure $\gamma_1 = \SI{0.4}{eV}$ is the interlayer hopping energy, which is also equal to the energy difference between the two conduction bands as well as that between the two valence bands. } \end{figure} \subsection{The density matrix and its dynamics \label{Sec:QKE}} At equilibrium, the density matrix for the Hamiltonian $\mathcal{H}_0$ can be written in the energy band basis as follows, \begin{align} \rho_{0}= \begin{pmatrix} n_1 & 0 & 0 & 0\\ 0 & n_2 & 0 & 0\\ 0 & 0 & n_3 & 0\\ 0 & 0 & 0 & n_4 \end{pmatrix}. \end{align} At zero temperature, each of the distribution functions is a step function $n_i = \Theta(\varepsilon_F-\eps{i})$, where $\varepsilon_F$ is the Fermi energy. With Eq.~(\ref{Eq:QKE}) as our starting point, $\rho_{1}$ is again composed of two parts, $\rho_{1}=\rho_{1}^{(0)}+\rkle$, where $\rho_{1}^{(0)}$ is the density matrix in the absence of interaction while $\rkle$ is the correction due to electron-electron interaction. Because of the $4\times 4$ matrix structure of Eq.~\eqref{Eq:QKE}, we introduce a complete set of 16 $\Gamma$ matrices [see Appendix~\ref{Appendix:GammaMatrices}] and expand the density matrix in the basis of these $\Gamma$ matrices. Specifically, the density matrix $\rho_{1}$ is written as \begin{align} \rho_{1}=\sum_{i=1}^{16} f_i \Gamma_i=\rho_{1}^{(0)}+\rkle, \label{Eq:DensityMatrixExpansionFormal} \end{align} and the two terms are \begin{align} \rho_{1}^{(0)}=\sum_{i=1}^{16} f^{(0)}_i\Gamma_i,\quad \rkle=\sum_{i=1}^{16} f^{(e)}_i\Gamma_i, \label{Eq:DensityMatrixExpansion} \end{align} where we have $f_i=f^{(0)}_i+f^{(e)}_i$ for each $i$. In this way, the above matrix equation will be reduced to a set of coupled equations for these expansion coefficients. In addition, such a decomposition of the density matrix enables us to rewrite the current in Eq.~\eqref{Eq:totalcurrents} in a convenient form: the explicit expressions for $\bm{j}_{\bm{k}}$ now read \begin{align} j_k &\equiv \partii{\mathcal{H}_0}{k} = \dfrac{\gamma_1}{\Delta_{\bk}}\Gamma_1-\dfrac{2k}{\Delta_{\bk}} \Gamma_{5}, \notag \\ j_\phi &\equiv \dfrac{1}{k}\partii{\mathcal{H}_0}{\phi} = \dfrac{-i\gamma_1}{\Delta_{\bk}} \Gamma_7 +\dfrac{2k}{\Delta_{\bk}} \Gamma_{14}, \end{align} and the two currents $J_1$ and $J_2$ become \begin{align} J_1 &= 16e\int d\bm{k} \cos\phi \left(\dfrac{\gamma_1}{\Delta_{\bk}}f_1-\dfrac{2k}{\Delta_{\bk}}f_{5}\right), \notag\\ J_2 &= 16e\int d\bm{k}\sin\phi \left(\dfrac{i\gamma_1}{\Delta_{\bk}}f_{7}+\dfrac{2k}{\Delta_{\bk}}f_{14}\right).\label{Eq:TotalCurrents} \end{align} Note that only four expansion coefficients $f_1$, $f_5$, $f_7$, and $f_{14}$ contribute to the current. \subsection{Nonequilibrium density matrix $\rho_{1}$ in the noninteracting limit \label{Sec:NonInteractingDensityMatrix}} We first solve the nonequilibrium density matrix $\rho_{1}$ in the absence of electron-electron interaction. Such a solution is obtained by using the noninteracting $\mathcal{H}_0$ [Eq.~\eqref{Eq:BilayerH0}] in the quantum kinetic equation [Eq.~\eqref{Eq:QKE}]. The resulting density matrix is just $\rho_{1}^{(0)}$, according to our convention in Eq.~\eqref{Eq:DensityMatrixExpansion}. The corresponding 16 coefficients $f^{(0)}_i$ are given below. 
First of all, four coefficients are proportional to the derivatives of the distribution functions: \begin{align} \begin{Bmatrix} f^{(0)}_{5}\\ f^{(0)}_{8}\\ f^{(0)}_{9}\\ f^{(0)}_{16} \end{Bmatrix} &=\dfrac{i\beta_{1}(k)}{4\omega} \begin{Bmatrix} (n_1'-n_2'+n_3'-n_4')\\ -i(n_1'-n_2'-n_3'+n_4')\\ -(n_1'+n_2'-n_3'-n_4')\\ -(n_1'+n_2'+n_3'+n_4') \end{Bmatrix}, \label{Eq:DeltaDistributionFunctions} \end{align} where the prime denotes partial derivatives, i.e., $n_i'(k)\equiv\partial n_i(k)/\partial k$. Secondly, we have the following coefficients, \allowdisplaybreaks[2] \begin{align*} \begin{Bmatrix} f^{(0)}_{6}\\ f^{(0)}_{7}\\ f^{(0)}_{10}\\ f^{(0)}_{11} \end{Bmatrix} &=\dfrac{\gamma_1\beta_{1}(k)}{2\Delta_{\bk}^2(\Delta_{\bk}^2-\omega^2)} \begin{Bmatrix} -\omega(n_1-n_2-n_3+n_4)\\ -i\Delta_{\bk}(n_1-n_2-n_3+n_4)\\ \Delta_{\bk}(n_1+n_2-n_3-n_4)\\ i\omega(n_1+n_2-n_3-n_4) \end{Bmatrix},\notag\\ \begin{Bmatrix} f^{(0)}_{1}\\ f^{(0)}_{4}\\ f^{(0)}_{12}\\ f^{(0)}_{15} \end{Bmatrix} &=\dfrac{\beta_{2}(k)}{2\Delta_{\bk}(\gamma_1^2-\omega^2)} \begin{Bmatrix} \gamma_1 (n_1-n_2-n_3+n_4)\\ \omega (n_1-n_2+n_3-n_4)\\ \omega(n_1-n_2-n_3+n_4)\\ \gamma_1(n_1-n_2+n_3-n_4) \end{Bmatrix}, \end{align*} and finally, \begin{align*} f^{(0)}_{3}=&\dfrac{ik\beta_{2}(k)}{\Delta_{\bk}}\left[\dfrac{n_1-n_4}{\omega^2-(\Delta_{\bk}+\gamma_1)^2}+ \dfrac{n_2-n_3}{\omega^2-(\Delta_{\bk}-\gamma_1)^2}\right],\\ f^{(0)}_{13}=&\dfrac{k\beta_{2}(k)}{\Delta_{\bk}}\left[\dfrac{n_1-n_4}{\omega^2-(\Delta_{\bk}+\gamma_1)^2}- \dfrac{n_2-n_3}{\omega^2-(\Delta_{\bk}-\gamma_1)^2}\right],\\ f^{(0)}_{2}=&\\ \dfrac{\omega\beta_{2}(k)}{4k\Delta_{\bk}}&\left[\dfrac{(n_1-n_4)(\Delta_{\bk}-\gamma_1)}{\omega^2-(\Delta_{\bk}+\gamma_1)^2}+\dfrac{(n_2-n_3)(\Delta_{\bk}+\gamma_1)}{\omega^2-(\Delta_{\bk}-\gamma_1)^2}\right],\\ f^{(0)}_{14}=&\\ \dfrac{i\omega\beta_{2}(k)}{4k\Delta_{\bk}}&\left[\dfrac{(n_1-n_4)(\Delta_{\bk}-\gamma_1)}{\omega^2-(\Delta_{\bk}+\gamma_1)^2}-\dfrac{(n_2-n_3)(\Delta_{\bk}+\gamma_1)}{\omega^2-(\Delta_{\bk}-\gamma_1)^2}\right]. \end{align*} In the above expressions $\beta_{1}(k)$ and $\beta_{2}(k)$ are given by \begin{align} \beta_{1}(k)=e(\bm{E}\cdot\hat{\bm{k}}), \quad \beta_{2} (k)=e(\bm{E}\times\hat{\bm{k}})_z. \label{Eq:DefinitionOfBeta} \end{align} They represent two different couplings to the electric field. The total current in the noninteracting limit is then obtained by inserting the above results into the general equation Eq.~\eqref{Eq:TotalCurrents}. \subsection{Interaction corrections to the nonequilibrium density matrix \texorpdfstring{$\rho_{1}$}{rho} \label{SubSection:Equations}} In the presence of electron-electron interaction, the nonequilibrium density matrix $\rho_{1}$ will be further modified. The effect of interaction is incorporated by a quasiparticle exchange self-energy term in the Hamiltonian, which is given by \begin{align} \Sigma(\bm{k}) = -\sum_{\bm{k}'}V_{\bm{k}\bk'}\rho(\bm{k}'), \end{align} where $V_{\bm{k}\bk'}$ is the Coulomb potential. The property that the self-energy matrix at one wave vector is simply an interaction-weighted average of the density matrix at different wave vectors can be attributed to the model's pseudospin-independent interaction $V_{\bm{k}\bk'}$. 
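As a small consistency check on the four-band ingredients used below (the band energies enter both the interband excitation energies $\delta_{ij}$ and the equilibrium occupations $n_i$), the following sketch, with $\hbar=v=1$ as in the text, diagonalizes $\mathcal{H}_0$ of Eq.~\eqref{Eq:BilayerH0} numerically and compares the spectrum with the closed-form bands of Eq.~\eqref{Eq:BLband}:
\begin{verbatim}
import numpy as np

gamma1 = 0.4   # interlayer hopping in eV; hbar = v = 1

def H0(k, phi):
    f = k * np.exp(-1j * phi)
    return np.array([[0,          f,          -gamma1,     0],
                     [np.conj(f), 0,           0,          0],
                     [-gamma1,    0,           0,          f],
                     [0,          0,           np.conj(f), 0]], dtype=complex)

def bands_closed_form(k):
    d = np.sqrt(4.0 * k**2 + gamma1**2)
    return np.array([(d + gamma1) / 2, (d - gamma1) / 2,
                     -(d - gamma1) / 2, -(d + gamma1) / 2])

k, phi = 0.3, 1.1    # arbitrary test point
numeric = np.sort(np.linalg.eigvalsh(H0(k, phi)))[::-1]
print(np.allclose(numeric, bands_closed_form(k)))   # True
\end{verbatim}
Any $(k,\phi)$ gives \texttt{True}, and the spectrum is independent of $\phi$, as required by azimuthal symmetry.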
The quantum kinetic equation in Eq.~\eqref{Eq:QKE} now reads \begin{align} -i\omega \rho_{1}+e\bm{E}\cdot\partii{\rho_{0}}{\bm{k}}+i&\left[\mathcal{H}_0, \rho_{1}\right]+i\left[ \Sigma^{(0)}(\bm{k}), \rho_{1}^{(0)}\right] \notag\\ &=i\sum_{\bm{k}'}V_{\bm{k}\bk'}\left[\rho_{1}^{(0)}(\bm{k}'), \rho_{0} \right],\label{Eq:firstorderequation} \end{align} where $\Sigma^{(0)}(\bm{k})$ is the equilibrium self-energy matrix, \begin{align} \Sigma^{(0)}(\bm{k}) = -\sum_{\bm{k}'} V_{\bm{k}\bk'} \rho_{0}(\bm{k}'). \label{Eq:Self-Energy} \end{align} In the band basis, the diagonal entries of this matrix $\Sigma^{(0)}_{\lambda}\equiv\bra{u_{\lambda}}\Sigma^{(0)}\ket{u_{\lambda}}$ (where $\lambda=1,2,3,4$) represent the electron self-energy in each band. In addition, in the presence of electron-electron interaction the quasiparticle states are no longer the band eigenstates of the noninteracting electrons. As such, the equilibrium self-energy acquires off-diagonal entries in the band basis. We find that only four off-diagonal self-energies are nonzero, $\Sigma^{(0)}_{13}\equiv \bra{u_{1}(\bm{k})}\Sigma^{(0)}\ket{u_{3}(\bm{k})}=(\Sigma^{(0)}_{31})^{\ast}$ and $\Sigma^{(0)}_{24}\equiv \bra{u_{2}(\bm{k})}\Sigma^{(0)}\ket{u_{4}(\bm{k})}=(\Sigma^{(0)}_{42})^{\ast}$; the other entries vanish because of azimuthal symmetry. This is to be contrasted with the two-band model, where the self-energy matrix only contains diagonal entries~\cite{James:2009PRB_TwobandModel}. The explicit expressions for these matrix elements are given in Appendix~\ref{Appendix:SelfEnergy}. Directly substituting the expansion in Eq.~\eqref{Eq:DensityMatrixExpansionFormal} into the matrix equation~\eqref{Eq:firstorderequation} yields a set of 16 coupled equations that is quite cumbersome and obscures the underlying physical meaning. We have found it better to organize these 16 coupled equations by defining a new set of variables from the $f_i$ as follows: $A_{\pm}=f_2\pm if_{14}$, $B_{\pm}=if_3\pm f_{13}$, $C_{\pm}=f_{1}\pm f_{15}$, $D_{\pm}=f_{4}\pm f_{12}$, $E_{\pm}=f_{6}\pm if_{11}$, $F_{\pm}=f_{10}\pm if_{7}$, $G_{\pm}=f_{8}\pm if_{9}$, and $H_{\pm} =if_{5}\pm if_{16}$.
With these new variables, the equations greatly simplify, and can be expressed as \allowdisplaybreaks[2] \begin{align} \omega A_{+}+\delta_{23}B_{+}&-(n_2-n_3)e\bm{E}\cdot\mathcal{A}_{23}\label{Eq:FirstTwoEq_1}\\&-\Sigma^{(0)}_{13}C_{+}-\Sigma^{(0)}_{24}C_{-}=\dse_{23,-},\notag\\ \delta_{23}A_{+}+\omega B_{+}&-\Sigma^{(0)}_{13}D_{+}+\Sigma^{(0)}_{24}D_{-}=\dse_{23,+},\label{Eq:FirstTwoEq_2}\\ \omega C_{+}+\delta_{21}D_{+}&-\Sigma^{(0)}_{13}A_{+}-\Sigma^{(0)}_{24}A_{-}=\dse_{21,+},\label{Eq:FirstTwoEq_3}\\ \delta_{21}C_{+}+\omega D_{+}&-(n_2-n_1)e\bm{E}\cdot\mathcal{A}_{21} \label{Eq:FirstTwoEq_4} \\&-\Sigma^{(0)}_{13}B_{+}+\Sigma^{(0)}_{24}B_{-}=\dse_{21,-},\notag\\ \omega E_{+}+\delta_{13}F_{+}&-i(n_1-n_3)e\bm{E}\cdot\mathcal{A}_{13}=\dse_{13,-},\label{Eq:FirstTwoEq_5}\\ \delta_{13}E_{+}+\omega F_{+}&-2\Sigma^{(0)}_{13}G_{+}=\dse_{13,+},\label{Eq:FirstTwoEq_6}\\ \omega G_{+}&-2\Sigma^{(0)}_{13}F_{+}=e(\bm{E}\cdot\hat{\bm{k}}) (n_1'-n_3')/2,\label{Eq:FirstTwoEq_7}\\ \omega H_{+}&=e(\bm{E}\cdot\hat{\bm{k}})(n_2'+n_4')/2;\label{Eq:FirstTwoEq_8}\\ \notag\\ \omega A_{-}+\delta_{14}B_{-}&-(n_1-n_4)e\bm{E}\cdot\mathcal{A}_{14}\label{Eq:FirstTwoEq_9}\\&-\Sigma^{(0)}_{13}C_{-}-\Sigma^{(0)}_{24}C_{+}=\dse_{14,-},\notag\\ \delta_{14}A_{-}+\omega B_{-}&-\Sigma^{(0)}_{13}D_{-}+\Sigma^{(0)}_{24}D_{+}=\dse_{14,+},\label{Eq:FirstTwoEq_10}\\ \omega C_{-}+\delta_{34}D_{-}&-\Sigma^{(0)}_{13}A_{-}-\Sigma^{(0)}_{24}A_{+}=\dse_{34,+},\label{Eq:FirstTwoEq_11}\\ \delta_{34}C_{-}+\omega D_{-}& -(n_3-n_4)e\bm{E}\cdot\mathcal{A}_{34}\label{Eq:FirstTwoEq_12}\\&-\Sigma^{(0)}_{13}B_{-}+\Sigma^{(0)}_{24}B_{+}=\dse_{34,-},\notag\\ \omega E_{-}+\delta_{42}F_{-}&-i(n_4-n_2)e\bm{E}\cdot\mathcal{A}_{42}=\dse_{42,-},\label{Eq:FirstTwoEq_13}\\ \delta_{42}E_{-}+\omega F_{-}&-2\Sigma^{(0)}_{24}G_{-}=\dse_{42,+},\label{Eq:FirstTwoEq_14}\\ \omega G_{-}&-2\Sigma^{(0)}_{24}F_{-}=e(\bm{E}\cdot\hat{\bm{k}})(n_4'-n_2')/2,\label{Eq:FirstTwoEq_15}\\ \omega H_{-}&=-e(\bm{E}\cdot\hat{\bm{k}})(n_1'+n_3')/2,\label{Eq:LastEq} \end{align} where $\delta_{ij}=\eps{i}+\Sigma^{(0)}_{i}-\eps{j}-\Sigma^{(0)}_{j}$ is the energy needed to create a vertical interband excitation between bands $i$ and $j$. The right-hand sides of Eqs.~\eqref{Eq:FirstTwoEq_1}-\eqref{Eq:LastEq} represent the nonequilibrium self-energy changes, whose detailed expressions are presented in Appendix~\ref{Appendix:EquationForG}. $\mathcal{A}_{ij}$ is the non-Abelian Berry connection defined in Eq.~\eqref{Eq:BerryConnection}. This set of pseudospin Bloch equations generalizes Eqs.~\eqref{Eq:Two-band-Bk}-\eqref{Eq:two-bandEquations}, obtained for the two-band Hamiltonian in Section~\ref{Sec:SLGDIC}, to the four-band case. Eq.~\eqref{Eq:two-bandEquations} for the case of bilayer graphene ($l=2$) can be reproduced by Eqs.~\eqref{Eq:FirstTwoEq_1}-\eqref{Eq:FirstTwoEq_2} in the limit of large interlayer hopping energy ($\gamma_1\rightarrow\infty$), with $A_{+}$ ($B_{+}$) in Eqs.~\eqref{Eq:FirstTwoEq_1}-\eqref{Eq:FirstTwoEq_2} reducing to $P$ ($Q$). Let us comment briefly on the physical meaning of the non-Abelian Berry connection appearing in our equations. It was shown in the context of semiclassical wavepacket dynamics~\cite{Yao:2005ke,DiXiao:2009RMP_BerryPhase} that such a coupling between the electric field $\bm{E}$ and the non-Abelian Berry connection $\mathcal{A}_{ij}$ governs the redistribution of the electron occupations among different bands. These terms in our equations play a similar role.
To see this, note that such a coupling can be written explicitly as \begin{align} e\bm{E}\cdot \mathcal{A}_{ij}(\bm{k})&=i\beta_{1}(k)\bra{u_i(\bm{k})} \frac{\partial}{\partial k} \ket{u_j(\bm{k})}\notag\\ &\quad+i\beta_{2}(k)\bra{u_i(\bm{k})} \dfrac{1}{k}\frac{\partial}{\partial \phi} \ket{u_j(\bm{k})}, \end{align} where $\beta_{1}(k)$ and $\beta_{2}(k)$ are given by Eq.~\eqref{Eq:DefinitionOfBeta}. Interestingly, the six coupling terms in our equations fall naturally into two categories: $e\bm{E}\cdot \mathcal{A}_{13}(\bm{k})$ and $e\bm{E}\cdot \mathcal{A}_{42}(\bm{k})$ are proportional to $\beta_{1}(k)$, while the other four couplings are proportional to $\beta_{2}(k)$. This correspondence is strikingly similar to the off-diagonal elements of the equilibrium self-energy matrix, where only $\Sigma^{(0)}_{13}=(\Sigma^{(0)}_{31})^{\ast}$ and $\Sigma^{(0)}_{24}=(\Sigma^{(0)}_{42})^{\ast}$ are nonzero [see Appendix~\ref{Appendix:SelfEnergy}]. We now explain the physical meaning of this set of coupled equations. The functions $G_{\pm}, H_{\pm}$ in Eqs.~(\ref{Eq:FirstTwoEq_7})-(\ref{Eq:FirstTwoEq_8}), (\ref{Eq:FirstTwoEq_15})-(\ref{Eq:LastEq}) describe the Drude intraband dynamics of the four bands, whereas $A_{\pm}, B_{\pm}, C_{\pm}, D_{\pm}, E_{\pm}, F_{\pm}$ in the other equations describe interband dynamics. The coupling between intraband and interband responses in these equations can be seen clearly as follows. We first note that the source terms $\bm{E}\cdot\hat{\bm{k}}$ and $(\bm{E}\times\hat{\bm{k}})_z$ in the kinetic equations respectively drive the intraband and interband responses; the appearance of the Berry-connection coupling $e\bm{E}\cdot\mathcal{A}_{13} \propto \bm{E}\cdot\hat{\bm{k}}$ [$e\bm{E}\cdot\mathcal{A}_{42}$] in Eqs.~\eqref{Eq:FirstTwoEq_5}-\eqref{Eq:FirstTwoEq_6} [\eqref{Eq:FirstTwoEq_13}-\eqref{Eq:FirstTwoEq_14}] therefore corresponds to a direct coupling of the interband transitions between bands $1$ and $3$ [$4$ and $2$] with the Drude intraband response. Due to the exchange interaction, an indirect mechanism of Drude-interband coupling also occurs through the equilibrium $\Sigma^{(0)}_{13}$ [$\Sigma^{(0)}_{24}$] and nonequilibrium $\dse_{13,+}$ [$\dse_{42,+}$] self-energies. It is this interaction-induced Drude-interband coupling that gives rise to the renormalization of the optical Drude weight. The interband responses $A_{\pm}, B_{\pm}, C_{\pm}, D_{\pm}$ in Eqs.~\eqref{Eq:FirstTwoEq_1}-\eqref{Eq:FirstTwoEq_4} and \eqref{Eq:FirstTwoEq_9}-\eqref{Eq:FirstTwoEq_12} couple to the intraband responses only through the nonequilibrium self-energies $\dse_{23,+}, \dse_{21,+}, \dse_{14,+}, \dse_{34,+}$, i.e., through exchange effects. These coupled equations can be solved numerically to yield $\rkle$, the interaction correction to the nonequilibrium density matrix $\rho_{1}$. We then invert the change of variables to recover the original coefficients $f_i$ and insert $f^{(e)}_{1}$, $f^{(e)}_{5}$, $f^{(e)}_{7}$, and $f^{(e)}_{14}$ into Eq.~\eqref{Eq:TotalCurrents} to obtain the interaction corrections to the optical conductivity and Drude weight. \section{Optical Drude weight in bilayer graphene \label{Sec:Drudeweight}} In this section, we apply the above formalism to obtain the optical Drude weight for bilayer graphene. We will first compute the optical conductivity in the noninteracting limit and show that it agrees with existing results. We will then turn on electron-electron interaction and study how it modifies the Drude weight. From now on, we will assume for concreteness that the Fermi energy $\varepsilon_F>0$ is above the charge neutrality point.
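Before turning to the Drude weight, we note that the $\Gamma$-matrix bookkeeping of Section~\ref{Sec:4bandModel} is straightforward to realize numerically. Since the appendix defining the $\Gamma_i$ is not reproduced here, the sketch below adopts one concrete orthogonal choice, $\Gamma=\sigma_i\otimes\sigma_j$ (an assumption on our part; any complete orthogonal set works the same way), and checks that a generic Hermitian $\rho_{1}$ is recovered from its 16 expansion coefficients $f_i$:
\begin{verbatim}
import numpy as np
from itertools import product

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# one concrete orthogonal basis choice: Gamma = sigma_i (x) sigma_j
Gammas = [np.kron(a, b) for a, b in product((s0, sx, sy, sz), repeat=2)]

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho1 = (M + M.conj().T) / 2.0                   # a generic Hermitian matrix

f = [np.trace(G @ rho1) / 4.0 for G in Gammas]  # Tr(G_i G_j) = 4 delta_ij
rho_rec = sum(fi * G for fi, G in zip(f, Gammas))
print(np.allclose(rho1, rho_rec))               # True: the set is complete
\end{verbatim}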
\subsection{Noninteracting results for the Drude weight} In the noninteracting limit, the optical conductivity of bilayer graphene is obtained by inserting the noninteracting density matrix $\rho_{1}^{(0)}$ found in Section~\ref{Sec:NonInteractingDensityMatrix} into Eq.~\eqref{Eq:TotalCurrents}. As a result, the real and imaginary parts of the conductivity are given explicitly by \begin{widetext} \begin{align} \dfrac{\text{Re}[\sigma(\Omega)]}{\sigma_0} &= \dfrac{\Theta(\Omega-1)}{4 \Omega^2}[\Theta(\Omega-2\mu-1)+\Theta(\Omega-2\mu+1)] +\Theta(\Omega-2\mu)\left[\dfrac{\Omega+2}{4(\Omega+1)} +\Theta(\Omega-2)\dfrac{\Omega-2}{4(\Omega-1)} \right]\notag\\ &\quad +\mathcal{D}^{(0)}_1 \delta(\Omega)+\mathcal{D}^{(0)}_2\delta(\Omega-1),\notag\\ \dfrac{\text{Im}[\sigma(\Omega)]}{\sigma_0/\pi} &= \dfrac{1}{4\Omega^2}\left[\ln\left|\dfrac{1-\Omega}{1+\Omega}\right| \Theta(1-\mu) +\left(\ln\left|\dfrac{1+2\mu-\Omega}{1+2\mu+\Omega}\right| +\dfrac{4\Omega(\mu+1)}{2\mu+1}\right) -\Theta(\mu-1)\left(\ln\left|\dfrac{1-2\mu-\Omega}{1-2\mu+\Omega}\right|+\dfrac{4\Omega(\mu-1)}{2\mu-1}\right)\right]\notag\\ &\quad+\dfrac{\mathcal{D}^{(0)}_1}{\Omega}+\dfrac{2\Omega\mathcal{D}^{(0)}_2}{\Omega^2-1}-\dfrac{1}{4}\big[r(\Omega,1)-r(\Omega,-\mu)+\Theta(\mu-1)(r(\Omega,\mu)-r(\Omega,1))\big], \label{Eq:Conductivity} \end{align} \end{widetext} where $\sigma_0=e^2/\hbar$. In the above results, $\mu\equiv\varepsilon_F/\gamma_1$ and $\Omega\equiv \omega/\gamma_1$ are the Fermi energy and optical frequency normalized by the interlayer hopping energy $\gamma_1$, respectively. In addition, the function $r(\Omega,\mu)$ is given by \begin{align} r(\Omega,\mu)= \dfrac{\Omega+2}{\Omega+1}\ln|\Omega+2 \mu| -\dfrac{\Omega-2}{\Omega-1}\ln|\Omega-2 \mu|\notag\\ -\dfrac{2 \Omega}{\Omega^2-1}\ln|2 \mu-1|. \label{Eq:r-function} \end{align} Finally, the coefficients in front of the two delta functions are the optical weights for the $\omega=0$ and $\omega=\gamma_1$ peaks, respectively, \begin{align} \mathcal{D}^{(0)}_1(\mu) &= \dfrac{2\mu(\mu+1)}{2\mu+1}+\dfrac{2\mu(\mu-1)}{2\mu-1}\Theta(\mu-1),\notag\\ \mathcal{D}^{(0)}_2(\mu) &= \dfrac{1}{4}[\ln(2\mu+1)-\Theta(\mu-1)\ln(2\mu-1)].\label{Eq:DrudeZero} \end{align} This result agrees with previous studies~\cite{Nicol:2008PRB_BLGCond,Nilsson:2008PRB_ElePropertyBLG,Zhang:2008PRB_BLGconductivity,Abergel:2007PRB_OpticalInfrared,McCann:2007SSCom_ElectronsBLG,Benfatto:2008PRB_OpticalSumRule}, and has been discussed extensively in the literature. Here we just want to emphasize that only $\mathcal{D}^{(0)}_1$ arises from intraband contributions and is the Drude weight we are looking for. The $\mathcal{D}^{(0)}_2$ peak at $\omega=\gamma_1$ arises from optical transitions between the two conduction bands, and its delta-function form is due to the constant energy difference $\gamma_1$ between the two bands in our model. When a band gap is opened~\cite{Nicol:2008PRB_BLGCond} or remote hopping parameters are taken into account~\cite{Wright:2009Nanotechnology_EffectOfNNNhopping}, the two bands will no longer be energetically equidistant and the sharp peak at $\omega=\gamma_1$ will then be broadened. \subsection{Interaction corrections to the Drude weight \label{Sec:CondInteraction}} Now we are going to study how electron-electron interaction modifies the Drude weight in bilayer graphene. In general, Eqs.~\eqref{Eq:FirstTwoEq_1}-\eqref{Eq:LastEq} can be solved numerically to obtain the nonequilibrium density matrix $\rho_{1}$ to all orders of the interaction potential.
However, this is quite complicated, and we will introduce some simplifications. First, we will only solve these coupled equations perturbatively and obtain the lowest-order interaction corrections to the Drude weight. Therefore, we will keep terms up to first order in the interaction potential in these equations, which allows us to obtain closed-form solutions for the coefficients $f^{(e)}_1$, $f^{(e)}_5$, $f^{(e)}_7$, and $f^{(e)}_{14}$ as follows, \begin{widetext} \begin{align} f^{(e)}_{1} &= \dfrac{i\gamma_1(\dse_{21,-}-\dse_{34,-})+i\omega(\dse_{21,+}+\dse_{34,+})}{2(\omega^2-\gamma_1^2)}, \; f^{(e)}_{7} = \dfrac{-i\Delta_{\bm{k}} (\dse_{13,-}+\dse_{42,-})-i\omega (\dse_{42,+}-\dse_{13,+})}{2(\Delta_{\bm{k}}^2-\gamma_1^2)}, \; f^{(e)}_{5}=0, \notag\\ f^{(e)}_{14}&=[2(\omega^2-(\Delta_{\bk}+\gamma_1)^2)(\omega^2-(\gamma_1-\Delta_{\bk})^2)]^{-1} \Big[2i\omega\gamma_1\Delta_{\bk}(\dse_{21,-}-\dse_{34,-}) +i\gamma_1(\gamma_1^2-\Delta_{\bk}^2-\omega^2)(\dse_{23,+}+\dse_{14,+})\notag\\ &-i\Delta_{\bk}(\gamma_1^2+\omega^2-\Delta_{\bk}^2)(\dse_{23,+}-\dse_{14,+}) +i\omega(\Delta_{\bk}^2+\gamma_1^2-\omega^2)(\dse_{14,-}-\dse_{23,-})\Big].\label{Eq:F14} \end{align} \end{widetext} Here, to first order in the interaction potential, the nonequilibrium self-energy changes in the above expression are given in Appendix~\ref{Appendix:EquationForG} with $A_{\pm}$ to $H_{\pm}$ taking their noninteracting values given in Sections~\ref{Sec:NonInteractingDensityMatrix}-\ref{SubSection:Equations}. In addition, because we are only concerned with the Drude weight, it is sufficient to extract the $\omega^{-1}$ dependence in these coefficients. We then note that $f^{(e)}_5$ vanishes, and hence does not contribute to the conductivity. In addition, neither $f^{(e)}_1$ nor $f^{(e)}_7$ contains an overall $\omega^{-1}$ dependence. Therefore, only the $f^{(e)}_{14}$ term matters. We can then extract the coefficient in front of $\omega^{-1}$ as \allowdisplaybreaks[2] \begin{align} f^{(e)}_{14}\sim &\Big\{(n_1-n_2+n_3-n_4)\Delta_{\bk}\left[2\gamma_1(S_1+S_3)+8kS_2\right]\notag\\ -(n_1+n_2&-n_3-n_4)\left[(\Delta_{\bk}^2+\gamma_1^2)(S_1+S_3)+8k\gamma_1S_2\right]\Big\}\notag\\ &\times(16\Delta_{\bk} k^2)^{-1}, \label{Eq:G14} \end{align} where the functions $S_1(k)$, $S_2(k)$ and $S_3(k)$ are \begin{align} S_1(k) &=\sum_{\bm{k}'}V_{\bm{k}\bk'} f^{(0)}_{5}(\bm{k}')\sin2\phi_{k'k},\notag\\ S_2(k) &= \sum_{\bm{k}'}V_{\bm{k}\bk'} f^{(0)}_{9}(\bm{k}')\sin\phi_{k'k}\dfrac{k'}{\Delta_{\bk'}},\notag\\ S_3(k) &=\sum_{\bm{k}'}V_{\bm{k}\bk'} f^{(0)}_{9}(\bm{k}')\sin2\phi_{k'k} \dfrac{\gamma_1}{\Delta_{\bk'}}, \label{Eq:S1S2S3} \end{align} and $\phi_{k'k}$ is the angle between momenta $\bm{k}$ and $\bm{k}'$. For future convenience, we also define \begin{align} s_{j}=16\pi S_{j}/ie^2\beta_{2}(k), \;(j=1,2,3).\label{Eq:s1s2s3} \end{align} Note that the $\omega^{-1}$ dependence of $f^{(e)}_{14}$ comes from $f^{(0)}_{5}$ and $f^{(0)}_{9}$ only [see Eq.~\eqref{Eq:DeltaDistributionFunctions}]. The explicit expressions for $s_1(k)$, $s_2(k)$ and $s_3(k)$ will depend on the form of the Coulomb potential we adopt in the calculation. Other than this choice, Eqs.~\eqref{Eq:G14}-\eqref{Eq:s1s2s3} represent the most general form for the $\omega^{-1}$ dependence in $f^{(e)}_{14}$ and hence $\rkle$, which can then be inserted into Eq.~\eqref{Eq:TotalCurrents} to obtain its contribution to the conductivity as follows \begin{align} \sigma^{(e)} &= \dfrac{ie^4\gamma_1}{32\pi^2\omega}\int_0^{\tilde{k}_c}d\tilde{k}\dfrac{\mathcal{G}(\tilde{k})}{4\tilde{k}^2+1}.
\label{Eq:Sigmae} \end{align} Here the integration cutoff is set by the Brillouin zone boundary $k_c=1/a=\SI{7.0e9}{\per\m}$, where $a=\SI{1.43}{\angstrom}$ is the carbon-carbon distance. This corresponds to an energy scale of $\Lambda=\hbar vk_c\approx 11.50\gamma_1$. Also, we have changed the integration variable to a dimensionless one $\tilde{k}\equiv \hbar vk/\gamma_1$, and similarly $\tilde{k}_c\equiv\hbar vk_c/\gamma_1$. The function $\mathcal{G}(\tilde{k})$ in the integrand is given by \begin{align} \mathcal{G}(\tilde{k})&= \omega\big[8\tilde{k} s_2(\tilde{k})+(4\tilde{k}^2+2)( s_1(\tilde{k})+ s_3(\tilde{k}))\big]\notag\\ &\quad\times(n_1+n_2-n_3-n_4)\notag\\ &-\omega\sqrt{4\tilde{k}^2+1}\big[2( s_1(\tilde{k})+ s_3(\tilde{k}) )+8\tilde{k} s_2(\tilde{k})\big]\notag\\ &\quad\times (n_1-n_2+n_3-n_4). \end{align} Because the functions $s_j$ ($j=1,2,3$) are all proportional to $\omega^{-1}$ by virtue of Eqs.~\eqref{Eq:DeltaDistributionFunctions},~\eqref{Eq:S1S2S3},~and \eqref{Eq:s1s2s3}, $\mathcal{G}(\tilde{k})$ is independent of $\omega$. If we then replace $\omega^{-1}$ by $-i\delta(\omega)$ and restore $\hbar$ and $v$ in Eq.~\eqref{Eq:Sigmae}, the leading-order interaction correction to the Drude weight now reads \begin{align} \mathcal{D}^{(e)}_1= \dfrac{e^2}{\hbar}\dfrac{\alpha^\ast\gamma_1}{32\pi}\int_0^{\tilde{k}_c}d\tilde{k}\dfrac{\mathcal{G}(\tilde{k})}{4\tilde{k}^2+1}. \label{Eq:Drude-2} \end{align} It was shown in Ref.~\onlinecite{James:2009PRB_TwobandModel} that broken Galilean invariance gives rise to a peculiar mechanism that couples the Drude response to the interband response in bilayer graphene. This is the very reason why electron-electron interaction can modify the Drude weight in bilayer graphene. The importance of such a coupling can be quantified by the interaction-induced Drude weight renormalization $\bar{\mathcal{D}}$, \begin{align} \bar{\mathcal{D}}-1= \dfrac{\mathcal{D}^{(e)}_1}{\mathcal{D}^{(0)}_1} = \dfrac{\displaystyle\dfrac{\alpha^{\ast}}{32\pi\mu}\int_0^{\tilde{k}_c}d\tilde{k}\dfrac{\mathcal{G}(\tilde{k})}{4\tilde{k}^2+1}} {\dfrac{2(\mu+1)}{2\mu+1}+\dfrac{2(\mu-1)}{2\mu-1}\Theta(\mu-1)}. \label{Eq:DIC-General} \end{align} The rest of the section will be devoted to calculations of $\bar{\mathcal{D}}$. Before presenting the results, however, we note that in this work we will only include static screening effects, and use the following Coulomb potential \begin{align} V_{\bm{k}\bk'}= \dfrac{2\pi e^2}{|\bm{k}-\bm{k}'|+k_{\text{TF}}}, \label{Eq:CoulombPotential4} \end{align} where $k_{\text{TF}}$ is the Thomas-Fermi screening wave vector [see Appendix~\ref{Appendix:Thomas-Fermi} for a derivation] given by \begin{align} k_{\text{TF}} = \dfrac{2\alpha^{\ast}}{v} \left[2\varepsilon_F+\gamma_1+\Theta(\varepsilon_F-\gamma_1)(2\varepsilon_F-\gamma_1)\right].\label{Eq:TFScreening} \end{align} Also note that there is a discontinuity in $k_{\text{TF}}$ at $\varepsilon_F=\gamma_1$, i.e., when the Fermi energy moves into the higher conduction band. In the limit of small Fermi energy $\varepsilon_F\ll \gamma_1$, this result correctly reduces to $2\alpha^{\ast}\gamma_1/v$, the result deduced from a two-band model of bilayer graphene~\cite{DasSarma:2011RMP_GrapheneTransport}. In what follows, we first consider two limits where analytical expressions for $\bar{\mathcal{D}}$ can be obtained. When the interaction is weakly screened, we can neglect the $k_{\text{TF}}$ term in the Coulomb potential [see Eq.~\eqref{Eq:CoulombPotential4}]. This corresponds to the limit of long-range interaction.
In contrast, when the interaction is heavily screened, the Thomas-Fermi screening wave vector $k_{\text{TF}}$ will be much larger than typical values of $|\bm{k}-\bm{k}'|$. Thus, we can keep only the $k_{\text{TF}}$ term in the denominator of the Coulomb potential, which corresponds to the limit of short-range interaction. Finally, we will compare these two results to exact numerical evaluations of Eq.~\eqref{Eq:DIC-General} using the full Coulomb potential in Eq.~\eqref{Eq:CoulombPotential4}. \subsubsection{Long-range interaction limit} In this limit, we ignore screening effects entirely and set $k_{\text{TF}}$ in Eq.~\eqref{Eq:CoulombPotential4} to zero. We then obtain analytical expressions for the functions $s_i$ in Eq.~\eqref{Eq:S1S2S3} as follows \begin{align} \allowdisplaybreaks s_1(k) &= \dfrac{k_{F+}}{\omega}[\Phi_3(k,k_{F+})-\Phi_1(k,k_{F+})]\notag\\ &\quad-\dfrac{k_{F-}}{\omega}\Theta(\varepsilon_F-\gamma_1)[\Phi_3(k,k_{F-})-\Phi_1(k,k_{F-})], \notag\\ s_2(k) &= \dfrac{k_{F+}^2}{\omega\Delta_{+}}[\Phi_2(k,k_{F+})-\Phi_0(k,k_{F+})]\notag\\ &\quad+\dfrac{k_{F-}^2\Theta(\varepsilon_F-\gamma_1)}{\omega\Delta_{-}}[\Phi_2(k,k_{F-})-\Phi_0(k,k_{F-})],\notag\\ s_3(k) &= \dfrac{\gamma_1k_{F+}}{\omega\Delta_{+}}[\Phi_3(k,k_{F+})-\Phi_1(k,k_{F+})] \label{Eq:S1S2S3ForLongRangeCoulomb}\\ &\quad+\dfrac{\gamma_1k_{F-}\Theta(\varepsilon_F-\gamma_1)}{\omega\Delta_{-}}[\Phi_3(k,k_{F-})-\Phi_1(k,k_{F-})],\notag \end{align} where $\Delta_{\pm}=\sqrt{4k_{F\pm}^2+\gamma_1^2}$. In addition, $k_{F+}$ ($k_{F-}$) is the Fermi wave-vector at which the Fermi energy intersects the lower (upper) conduction band, defined as \begin{align} k_{F-} &= \Theta(\varepsilon_F-\gamma_1)\sqrt{\varepsilon_F(\varepsilon_F-\gamma_1)}, \notag\\ k_{F+}&=\sqrt{\varepsilon_F(\varepsilon_F+\gamma_1)}.\label{Eq:FermiWaveVector} \end{align} The special functions $\Phi_i(k,k')$ arise from the integration of the long-range Coulomb potential over $\phi_{k'k}$, and their expressions are given in Eq.~\eqref{Eq:AngularIntegrals}. Before the numerical evaluation of $\bar{\mathcal{D}}$, we want to show that our result correctly reduces to the one obtained by a two-band model of bilayer graphene~\cite{James:2009PRB_TwobandModel} in the $\varepsilon_F\ll\gamma_1$ limit. This proceeds as follows. First, in the limit of $\varepsilon_F\rightarrow 0$, the two Fermi wave-vectors satisfy $\tilde{k}_{F-}\equiv \hbar vk_{F-}/\gamma_1\rightarrow 0$ and $\tilde{k}_{F+}\equiv\hbar vk_{F+}/\gamma_1\simeq\sqrt{\varepsilon_F/\gamma_1}=\sqrt{\mu}$. The three special functions $s_i(k)$ in this limit thus satisfy $s_1 = s_3$ and $s_2=0$. In addition, $\Delta_{\bk}$ can be approximated by $\gamma_1$. As a result, the integrand of Eq.~\eqref{Eq:DIC-General} reduces to \begin{align} \dfrac{\mathcal{G}(\tilde{k})}{4\tilde{k}^2+1}\rightarrow -8\omega s_1(\tilde{k})\Theta(\tilde{k}-\tilde{k}_{F+}).
\end{align} We further note that the function $s_1(k)$ in the limit of $\varepsilon_F\ll \gamma_1$ can be written as \begin{align*} \omega s_1(\tilde{k}) \equiv \tilde{k}_{F+}[\Phi_3(\tilde{k},\tilde{k}_{F+})-\Phi_{1}(\tilde{k},\tilde{k}_{F+})]=-4\mathcal{R}\left(\dfrac{\tilde{k}}{\tilde{k}_{F+}}\right), \end{align*} where the function $\mathcal{R}(y)$ is given by \begin{align} \mathcal{R}(y)= \dfrac{4(y+1)(y^4-y^2+1)}{15y^3}\mathbb{E}\left(\dfrac{4y}{(y+1)^2}\right)\notag\\ -\dfrac{4(y^2+1)(y-1)^2(y+1)}{15y^3}\mathbb{K}\left(\dfrac{4y}{(y+1)^2}\right), \end{align} and $\mathbb{K}(z)$ [$\mathbb{E}(z)$] is the complete elliptic integral of the first (second) kind [see Eq.~\eqref{Eq:EllipticIntegrals}]. Finally, in the limit of $\mu\ll 1$, the denominator in Eq.~\eqref{Eq:DIC-General} reduces to a constant 2. Putting everything together we can obtain $\bar{\mathcal{D}}$ in the limit $\mu\ll 1$ as follows, \begin{align} \bar{\mathcal{D}}-1 = \dfrac{\alpha^{\ast}}{64\pi\mu}\int_{\sqrt{\mu}}^{\tilde{k}_c}d\tilde{k}\; 32\,\mathcal{R}\big(\tilde{k}/\sqrt{\mu}\big) =\dfrac{\alpha^{\ast}\displaystyle\int_1^{\frac{\tilde{k}_c}{\sqrt{\mu}}}\mathcal{R}(y)\, dy}{2\pi\sqrt{\mu}}.\label{Eq:Two-band} \end{align} This result agrees with the one obtained in Ref.~\onlinecite{James:2009PRB_TwobandModel}, which confirms the validity of our theory in the $\mu\ll 1$ limit. \begin{figure}[!] \centering \includegraphics[scale=0.5]{Fig-DIC} \caption{(a) Interaction corrections to the Drude weight in bilayer graphene [see Eq.~\eqref{Eq:Drude-2}]. We compare the long-range limit (black solid line), short-range limit (dashed line), and the numerical result (red solid line). Here $\mu\equiv \varepsilon_F/\gamma_1$, and the dielectric constant is $\kappa=1$. The discontinuity in the short-range and numerical results is due to the discontinuity of $k_{\text{TF}}$ at $\mu=1$ (see the discussions in Appendix~\ref{Appendix:Thomas-Fermi}). (b) Interaction-induced Drude weight renormalization $\bar{\mathcal{D}}$ [see Eq.~\eqref{Eq:DIC-General}]. As a comparison, we also show the leading-order (in interaction strength) $\bar{\mathcal{D}}$ in the long-range limit obtained previously by the two-band model (gray dotted line) in Ref.~\onlinecite{James:2009PRB_TwobandModel}. Note that the leading-order short-range correction $\bar{\mathcal{D}}-1$ in the two-band model vanishes. \label{Fig:DIC}} \end{figure} We now evaluate $\bar{\mathcal{D}}$ in the long-range interaction limit, which is shown as the black solid line in Fig.~\ref{Fig:DIC}. Several comments are in order. First, the Drude weight correction is $\bar{\mathcal{D}}-1\sim10\%-40\%$ [Fig.~\ref{Fig:DIC}(b)], which indicates that the interaction correction to the Drude weight remains substantially smaller than the noninteracting Drude weight. This shows that our perturbative solution of the quantum kinetic equation is well controlled, even though the expansion parameter $\alpha^{\ast}\equiv e^2/\kappa v$ may not be small. In addition, the evaluation of $\bar{\mathcal{D}}$ in this long-range limit yields a convergent result, in contrast to the short-range limit, which has a logarithmic dependence on the cutoff $\Lambda$ [see Eq.~\eqref{Eq:Short-range} below]. In contrast, $\bar{\mathcal{D}}$ in single-layer graphene has a logarithmic dependence on the cutoff $\Lambda$ in the long-range limit and a linear dependence in the short-range limit~\cite{Polini:2011PRB_Drudeweight}.
Finally, although the Drude weight renormalization $\bar{\mathcal{D}}$ in our theory can be reduced to the one obtained by the two-band model in the limit of $\mu\ll1$, the latter tends to underestimate the interaction corrections at higher electron densities [Fig.~\ref{Fig:DIC}(b)]. This is expected because the analysis based on the two-band model cannot account for the contributions from the higher energy bands $\varepsilon_1$ and $\varepsilon_4$. \subsubsection{Short-range interaction limit} We now consider the opposite limit, where the electron-electron interaction is heavily screened and hence effectively short-ranged, so that we can ignore the momentum dependence in the Coulomb potential in Eq.~\eqref{Eq:CoulombPotential}. Therefore, $V_{\bm{k}\bm{k}'}$ is now a constant and no longer subject to the angular integration over $\phi_{k'k}$. As a result, both $s_1(k)$ and $s_3(k)$ vanish, while $s_2(k)$ becomes \begin{align} s_2(k)=-2\pi\left[\dfrac{\tilde{k}_{F+}}{\tilde{\Delta}_{+}}\dfrac{k_{F+}}{k_{\text{TF}}}+\Theta(\varepsilon_F-\gamma_1)\dfrac{\tilde{k}_{F-}}{\tilde{\Delta}_{-}}\dfrac{k_{F-}}{k_{\text{TF}}}\right], \end{align} where $\tilde{k}_{F\pm}\equiv\hbar vk_{F\pm}/\gamma_1$ and $\tilde{\Delta}_{\pm}\equiv \sqrt{4\tilde{k}_{F\pm}^2+1}$. The $\bar{\mathcal{D}}$ in this limit is given by \begin{align} \allowdisplaybreaks[4] \bar{\mathcal{D}}-& 1 =\dfrac{\alpha^{\ast}\gamma_1\left[\dfrac{\tilde{k}_{F+}}{\tilde{\Delta}_{+}}\dfrac{k_{F+}}{k_{\text{TF}}}+\Theta(\mu-1)\dfrac{\tilde{k}_{F-}}{\tilde{\Delta}_{-}}\dfrac{k_{F-}}{k_{\text{TF}}}\right]}{16\pi\left[\dfrac{2\mu(\mu+1)}{2\mu+1}+\dfrac{2\mu(\mu-1)}{2\mu-1}\Theta(\mu-1)\right]}\label{Eq:Short-range}\\ \times \Big[& \ln\dfrac{4\tilde{k}_c^2+1}{4\tilde{k}_{F+}^2+1}-\Theta(\mu-1)\ln\dfrac{4\tilde{k}_c^2+1}{4\tilde{k}_{F-}^2+1}\notag\\ -2&\big(\sqrt{4\tilde{k}_{F+}^2+1}-1\big)+2\Theta(\mu-1)\big(\sqrt{4\tilde{k}_{F-}^2+1}-1\big)\Big],\notag \end{align} where $\tilde{k}_c$ is the dimensionless momentum cutoff corresponding to the energy cutoff $\Lambda$ introduced in Eq.~\eqref{Eq:Drude-2}. In the limit that the Fermi energy is much higher than the bottom of the higher conduction band ($\varepsilon_F\gg\gamma_1$), this result can be simplified to \begin{align} \bar{\mathcal{D}}-1 \simeq \dfrac{\gamma_1}{64\pi\varepsilon_F}\left[\ln(\hbar vk_c/\varepsilon_F)-1\right], \quad \varepsilon_F\gg\gamma_1. \end{align} We find that this result approximates the full expression in Eq.~\eqref{Eq:Short-range} to within 1\% when $\varepsilon_F\geq 2\gamma_1$. The short-range result in Eq.~\eqref{Eq:Short-range} is shown by the black dashed line in Fig.~\ref{Fig:DIC}. Several comments are in order. First, this correction is much smaller than the corresponding correction in the long-range limit. This is because the Thomas-Fermi screening wavevector $k_{\text{TF}}$ is actually fairly large in bilayer graphene [see Eq.~\eqref{Eq:TFScreening}]. In fact, when the Fermi energy exceeds $\gamma_1$, $k_{\text{TF}}$ can be much larger than the momentum cutoff introduced in Eq.~\eqref{Eq:Short-range}. Secondly, we note that this is a new result that cannot be obtained by the two-band model of bilayer graphene, as Ref.~\onlinecite{James:2009PRB_TwobandModel} predicts a vanishing $\bar{\mathcal{D}}$ in this short-range interaction limit. This further suggests that interaction corrections to the optical conductivity may not vanish even in the short-range limit.
Finally, the short-range $\bar{\mathcal{D}}$ in Eq.~\eqref{Eq:Short-range} is actually independent of the effective fine-structure constant $\alpha^{\ast}$, as the $\alpha^{\ast}$ in the numerator cancels the $\alpha^{\ast}$ dependence of $k_{\text{TF}}$ in the denominator. This suggests that the short-range result is insensitive to the interaction strength of the system. \subsubsection{Comparison with numerical results} Having studied the interaction corrections to the Drude weight in the above two limits, we now compare them with numerical evaluations [red solid lines in Fig.~\ref{Fig:DIC}]. We note that the short-range limit [Eq.~\eqref{Eq:Short-range}] gives a very good approximation to the numerical result. To shed light on this result, note that the Thomas-Fermi screening wavevector $k_{\text{TF}}$ is extremely large in bilayer graphene. From Eq.~\eqref{Eq:TFScreening}, we can see that $k_{\text{TF}}$ does not vanish even when the Fermi energy is at the charge neutrality point $\varepsilon_F=0$. This reflects the constant density of states in bilayer graphene, even at the charge neutrality point. We therefore find the lower bound for $k_{\text{TF}}$ to be $2\alpha^{\ast}\gamma_1/\hbar v=\SI{2.65e9}{\per\m}$. Such a momentum corresponds to a band energy of about $4\gamma_1$, i.e., four times the interlayer coupling energy. The large screening wavevector thus makes the electron-electron interaction effectively short-ranged, which explains why the numerical calculation of $\bar{\mathcal{D}}$ can be approximated reasonably well by the short-range limit. \section{Discussions \label{Sec:Discussions}} In regular semiconductors with a parabolic band dispersion, Galilean invariance also protects the plasmon frequency $\omega_p$ from interaction renormalization at long wavelengths~\cite{Kohn:1961_PR_CyclotronResonance}. One may therefore wonder whether $\omega_p$ is modified in graphene. We argue that, because the Drude weight is closely related to the plasmon frequency, the latter should also be modified in graphene. To show this, we start from the well-known relation between the conductivity and the polarizability, \begin{align} \sigma(\omega) = \lim_{q\rightarrow 0} \dfrac{ie^2\omega}{q^2}\Pi(q,\omega), \end{align} which indicates that the real part of the polarizability in the limit of $vq\ll \omega$ and $\omega\ll \varepsilon_F$ is given by \begin{align} \text{Re}\,\Pi(q,\omega) = \dfrac{\gamma_1 \tilde{\mathcal{D}}_1}{\pi}\left(\dfrac{q}{\omega}\right)^2. \end{align} The renormalized plasmon frequency $\omega_p$ is then determined by the zero of the dielectric function, $\epsilon(q,\omega)=1-V(q)\,\text{Re}\,\Pi(q,\omega)=0$. Using the two-dimensional Coulomb potential $V(q)=2\pi e^2/q$, for bilayer graphene we find that \begin{align} \omega_p^2 = 2e^2\gamma_1\tilde{\mathcal{D}}_1 q. \end{align} One can see immediately that the squared plasmon frequency $\omega_p^2$ is directly proportional to $\tilde{\mathcal{D}}_1$ and therefore inherits the Drude weight renormalization $\bar{\mathcal{D}}$. Our prediction of an interaction-modified plasmon frequency can be verified experimentally by using electron energy-loss spectroscopy on suspended bilayer graphene samples. We mention that the theory presented in this work is applicable for Fermi energies $\varepsilon_F>\varepsilon_{c}\equiv \gamma_1(\gamma_3/\gamma_0)^2/2\sim \SI{10}{\meV}$~\cite{McCann:2006ev}.
For a smaller carrier density, the effects of trigonal warping and electron-hole asymmetry become non-negligible~\cite{Malard:2007PRB_Raman,Zhang:2008PRB_BLGconductivity,Kuzmenko:2009PRB_InfraredSpect,Li:2009PRL_BandAsymmetry,Varlet2014:BLG} and can be easily incorporated into our theory through the Hamiltonian in Eq.~\eqref{Eq:BilayerH0}. We expect such effects to give rise to quantitative differences in the renormalized Drude weight, but not to alter our main qualitative conclusions. In addition, we wish to emphasize that some effects can only be captured by the full four-band model and not the two-band model, even at relatively low doping levels $\varepsilon_F<\gamma_1$. The reason is two-fold. First of all, our full four-band calculation can capture the additional interband transitions involving the higher conduction and lower valence bands, as well as the transitions between the two conduction (valence) bands. These ingredients cannot be included in the two-band treatments~\cite{James:2009PRB_TwobandModel}. Secondly, even the low-energy bands are not well captured in the two-band description of bilayer graphene, because the dispersion quickly deviates from being parabolic when $\varepsilon_F\gtrsim \gamma_1/4$~\cite{McCann:2006ev}. Indeed, our calculations show that these effects give rise to important differences. For example, in the long-range limit the interaction corrections to the Drude weight are qualitatively different between the two treatments. In addition, in the short-range limit the interaction corrections to the Drude weight completely vanish in the two-band calculation, while we find finite corrections in the four-band treatment [see Fig.~\ref{Fig:DIC}(b)]. One of the interesting properties of bilayer and multilayer graphene systems is the opening of a band gap, achieved by breaking the symmetry of the layer degrees of freedom with an applied out-of-plane voltage. We expect the effects of Drude weight and plasmon frequency renormalization to be suppressed by a band gap. The physical reason is that the renormalization effects arise from the coupling between the interband and the Drude intraband responses. Such a coupling is diminished by an increasing band gap, as the Dirac sea of valence-band electrons at $k > k_F$ moves farther away from the conduction-band Fermi surface. Another way to look at this is by noting that the pseudospins become increasingly aligned with the out-of-plane $z$ direction as the band gap increases. The pseudospin texture therefore becomes more uniform, with the pseudospins of different quantum states becoming more similar. When an external electric field is applied under transport conditions, the breaking of Galilean invariance is then less severe, suppressing the interaction-induced renormalization effects. \section{Conclusions\label{Sec:Conclusion}} In this paper we have developed a theory for the optical conductivity of chiral multilayer graphene based on a quantum kinetic approach including the effects of electron-electron interaction. Our theory is first applied to the two-band model of chiral multilayer graphene and then generalized to the four-band model of bilayer graphene. We have obtained the equations of motion for the pseudospin components of the density matrix, which generalize the semiconductor Bloch equations of conventional parabolic-band electron systems to chiral electron systems with pseudospins.
From these equations we have calculated the interaction-induced corrections to the optical Drude weight, quantified by the Drude weight renormalization $\bar{\mathcal{D}}$. We find that $\bar{\mathcal{D}}$ increases with decreasing number of layers, and hence pseudospin winding number, reaching its largest value in single-layer graphene. $\bar{\mathcal{D}}$ is also found to increase with decreasing electron density. Finally, we note that the renormalization effects of the Drude weight and plasmon frequency are not limited to graphene systems. Our work has direct implications for the optical properties of other materials whose electronic states are also chiral or helical. In many topological states of matter, Galilean invariance of electronic states near the nodal points is explicitly broken due to the helicity of the low-energy electrons. As a result, we expect interaction-induced renormalization effects of the Drude weight and plasmon frequency also in chiral systems such as monolayer MoS$_2$~\cite{Xiao:2012dv,Li:2013tx,Tianyi_MagneticControl}, topological insulators~\cite{Hao:2010ku,Qi:2011hb,Vafek2014,Qiao_BLG}, and topological crystalline insulators~\cite{Hsieh:2012NatCommn_SnTe,Dziawa:2012NatMater_PbSnSe,Tanaka:2012NatPhys_SnTe,chiu2016type-II}. \begin{acknowledgments} We are greatly indebted to A.~H.~MacDonald and M.~Polini, who shared with us many insightful discussions. X. L. was supported by the U.S. DOE (Grant No. DE-FG03-02ER45958, Division of Materials Science and Engineering) and the Welch Foundation (Grant No. F-1255) in Austin, Texas, and is currently supported by JQI-NSF-PFC and LPS-MPO-CMTC in Maryland. W.-K. is supported by a startup fund from the University of Alabama. \end{acknowledgments}
\section{Introduction} In 1989 Ahlswede and Zhang\cite{ahlswede_zhang_1989} introduced the model of write efficient memories. In full generality they describe a memory system with some alphabet of possible characters, some number of slots that can contain any character from the alphabet, and an arbitrary transition cost matrix that denotes the cost of transitioning any slot between any two characters. However, most memory systems have highly symmetric read and write costs across values. This lack of motivating hardware resulted in minimal follow-up work on the 1989 paper until the emergence of interest in phase-change RAM. Each bit in a phase-change RAM is stored as one of two crystalline configurations of chalcogenide glass. These two configurations have different resistances, and so the configuration can be easily checked by measuring the drop in voltage when running a small current across the bit. However, changing the configuration requires running enough current through the bit to melt it and let it recrystallize in the other configuration. So for this memory system the overwhelming cost, in terms of power and damage to the hardware, comes from the number of bit flips that a computation uses. The emergence of phase-change RAM has re-motivated the study of binary write efficient memories, leading to a slew of recent papers. Many of these papers are concerned with constructing codes using additional coding bits to create multiple codewords representing each possible value. These codes attempt to minimize the Hamming distance between each codeword of each value and the nearest codeword of every other value \cite{jacobvitz2013coset} \cite{Cho2009FlipNWriteAS} \cite{li_jiang_2013}. These codes are now a fairly well plowed field and will be referenced only in passing in this paper. Instead this paper will be concerned with bit flipping wins that can be found by taking advantage of properties at the data structure layer. We call the main property in question local order agnosticism, in which the order of nearby elements either can be inferred from their values or does not matter. This property shows up in two classes of data structures that seem to be polar opposites: data structures that are either sorted or unsorted. In inherently sorted data structures, such as hash tables with linear probing, large sets of possible arrangements of values will never be written. In inherently unsorted data structures, such as unsorted sets, all rearrangements of the same set of values are equivalent. In both cases this redundancy can be taken advantage of to allow for write efficient coding ``for free", where ``for free" means with no additional physical memory costs. Encoding and decoding processes may take additional time and computational resources. \section{Locality and The General Memory Model} Previous work on bit flip efficient methods has focused primarily on ways to represent or arrange individual values in such a way that they can be overwritten with other values in a bit flip efficient manner. Our key insight was to take a step back and think about codewords of larger blocks of data. This focus on local neighborhoods allows us to take advantage of local order agnosticism as well as any other structural restrictions on the valid sets of values, or equivalencies between them. This paper is concerned with the ``local" behavior of data structures.
When we refer to ``locality" or a ``local neighborhood" we are referring to an area of memory containing $k$ slots, where each slot contains values that are $n$ bits long. Within this framework we will have codewords of length $n \times k$, which will decode into a set of up to $k$ values, each of which is $n$ bits long. Depending on the memory model we are working under, this decoded set may be ordered or unordered. The least restrictive model over these $n \times k$ bits is the General Memory Model. In this model any possible value over the full $n \times k$ bits is valid and has unique meaning, and transitioning between any two values is allowed. A good example of this might be a Cartesian Point structure in $\mathbb{Z}^k$, with its $k$ coordinates being stored as $k$ integers in order. The important observation here is that the order of those coordinates conveys which dimension each coordinate varies over. If you rearrange the coordinates, you change the point. In the General Memory Model any or all of the coordinates can be changed in a single write. Every set of $k$ coordinates represents a valid Cartesian Point and each Cartesian Point can be written down exactly one way. \section{Breaking Down the Data Model} As we examined the linear probing hash table and related data structures, we realized that they had three principal properties. \begin{itemize} \item Local Order Agnosticism (LOA), which we also refer to as the Multi-set model: the data structure can be broken up into blocks within which local order does not convey information. There are two main ways a data structure can have this property: either order can be inferred from the data (for instance with a sorted list), or local order does not affect the meaning of the data structure (for instance in an unsorted collection where A,B,C is equivalent to C,B,A). \item Uniqueness of Elements (UoE): the data structure cannot contain duplicate elements within any one block (so A,A,B is an invalid set of data for a block to have). \item Single Cell Modifications (SCM): during an update to a given block, at most one element will be added or deleted. \end{itemize} The first property depends exclusively on the data structure. The second property can be ensured by the data structure itself (for instance, hash tables that store key-value pairs will have no duplicates because each key can only appear once) or by guarantees that the inputs will be unique. The third property typically requires assumptions about the memory hierarchy that the program is operating within. If the program is writing directly to non-volatile RAM then this would be a very reasonable assumption; however, pretty much all modern computers have some cache hierarchy. In the presence of a cache hierarchy, our work only accurately models programs and data structures with bad cache locality that write to the same block only once before getting flushed from cache to RAM. Hash tables with linear probing provide a good example of a data structure that will have all three properties even with a cache hierarchy. Losing information about local ordering will not stop us from successfully probing. Because we can't have a key stored with multiple values, each key-value pair will be unique. And because each block will be written to with roughly uniform probability, assuming we have a good hash function, with high probability each block will be flushed from the cache with only one modification.
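To make the ``for free" win concrete, the following sketch (our own illustration, not an implementation from the literature; the helper names and example values are ours) exploits local order agnosticism at write time: among all slot orderings of the new multiset, it picks the one closest in Hamming distance to the block's current contents. \begin{verbatim}
# Hedged sketch: exploit LOA by picking the slot assignment of the
# new multiset that minimizes bit flips against the current block.
from itertools import permutations

def flips(a, b):
    # Hamming distance between two slot values
    return bin(a ^ b).count("1")

def write_block(old_block, new_values):
    # old_block: current k slot values; new_values: multiset to store.
    # LOA lets us pick any ordering, so choose the cheapest one.
    best = min(permutations(new_values),
               key=lambda p: sum(flips(o, v) for o, v in zip(old_block, p)))
    cost = sum(flips(o, v) for o, v in zip(old_block, best))
    return list(best), cost

old = [0b1010, 0b0110, 0b1111]
new = [0b1111, 0b1010, 0b0110]   # same multiset, presented in a new order
naive_cost = sum(flips(o, v) for o, v in zip(old, new))
_, loa_cost = write_block(old, new)
print(naive_cost, loa_cost)      # 6 bit flips naively, 0 with LOA
\end{verbatim} The brute force over permutations is only practical for small $k$; the same assignment problem can be solved in $O(k^3)$ time with a min-cost matching between slots and values.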
These three properties can be combined to form eight possible memory models, ranging from the ``General Memory Model" with none of these properties to the full LOADS memory model. These memory models seem to break down into roughly three groups. The Multi-set, Set, and Ordered Unique models behave optimally under a combination of compression and classic WEM codes. SCM alone and the General Memory Model appear to have no wins that we can take advantage of. Where things appear to get most interesting is in the combination of SCM with uniqueness and/or local order agnosticism. \begin{figure}[H] \centering \includegraphics[scale=0.4]{venn} \caption{The 8 memory models}\label{models} \end{figure} Before we get to the interactions of the properties, we will dig a little bit deeper into the precise ways Local Order Agnosticism and Uniqueness of Elements decrease the number of possible values our data structure needs to be able to represent. \section{Properties that Limit the Symbol Space} In the General Memory Model every bit-string of length $nk$ represents a unique and valid possible value for our block. If we assume our data structure is constrained by uniqueness of elements, then any bit-string in which two slots contain the same value is invalid. The number of possible values is then the number of ordered arrangements of $k$ distinct $n$-bit values: $$\frac{(2^n)!}{(2^n-k)!}$$ If our data structure is local order agnostic, then any two bit-strings with the same multiset of slot values are equivalent. The number of unique values is the number of multisets of up to $k$ bit-strings of length $n$, given by the equation below, where the double parentheses are the notation for multi-set choose. $$ \multiset{2^n}{k} = \set{2^n+k-1}{k} = \frac{(2^n+k-1)!}{(k)!(2^n-1)!}$$ If our data structure is both local order agnostic and has unique elements, then the distinct valid values are all the sets of up to $k$ bit-strings of length $n$, a number given by the equation below. $$ \set{2^n}{k} = \frac{(2^n)!}{(k)!(2^n-k)!}$$ For these three models we are dealing with a fairly simple situation in which we have a certain number of valid symbols and $2^{n \times k}$ possible bit-strings that we can use to represent them. Without the addition of single cell modification, all transitions within the symbol set are valid. The only difference between this and the generally studied area of write efficient memory is that our number of valid symbols might not be a power of two. We would be surprised if this has a huge effect, and one can simply add a collection of ``NULL" values to pad out the symbol set to a power of two. Write efficient memory is a fairly well studied field with optimality results proven for polar codes \cite{li_jiang_2013}. The one other obstacle is compressing the set of symbols down into an efficient representation; however, work has been done for both sets and multisets \cite{kovacevic_tan_2018} \cite{steinruecken_2014}. While we decided not to pursue an attempt to combine these two techniques during this project, we believe that this is one of the lowest hanging fruit for future researchers. We instead saw the juicier but higher hanging fruit of SCM and doggedly failed to pick it. Separately, in the case of the Unique Elements Model, $k$ has to be very high before multiple codewords of every value are possible, and so we chose not to concern ourselves with additional study of that memory model. \section{Analysis of SCM} Under the General Memory Model any transition between block values is valid.
Under the Single Cell Modification Model, only one slot may be changed during each write. There are two main variations on this model: first the write/delete model, where we can either clear a slot or, if a slot is empty, write to that slot; and second the overwrite model, where any one slot (empty or filled) can be set to any value, including empty. In both models only a single slot can be changed in any one write. We first examine the SCM overwrite model. In this model any of the $2^{n \times k}$ bit-strings is valid and unique, but for any given setting there are only $k \times (2^n-1)$ possible transitions. Each valid transition involves choosing one of the $k$ slots and changing its value to one of the $2^n -1$ values it currently does not have (this includes $n$ 0's, which is how we typically encode that the slot is empty in write/delete models). Under the General Memory Model each bit-string has $2^{n \times k}-1$ possible valid transitions. For building write efficient codes we need to have a much smaller set of codewords nearby any given point. \subsection{The Hypercube Representation of Bit-strings} When we say ``at a given point" we are implicitly referring to the hypercube model of the world of bit-strings, which provides an incredibly helpful visualization and model for understanding many ideas in Coding Theory. To use this model we think of the universe of length $n$ bit-strings as existing at the corners of an $n$-dimensional hypercube. \begin{figure}[H] \centering \includegraphics[scale=0.28]{hypercube} \caption{Hypercubes of the first three dimensions.}\label{hypercube} \end{figure} Under this model it's clear that the goal of any Write Efficient Code is to have a representation of every state that can be transitioned to at a small Hamming distance away on the hypercube, Hamming distance being the number of bit-flips necessary to move from one bit-string to another. This contrasts with Error Correcting Codes, in which the goal is to have no valid representations of any state that are a small Hamming distance away. \subsection{Information Theoretic Limits} On the hypercube, the number of ``neighbors" within distance at most $d$ of a given point (excluding the point itself) is equal to: $${\displaystyle \sum_{i=1}^{d}{\set{n \times k}{i}}}$$ Under the General Memory Model, we have to be able to transition to any of the $2^{n \times k} - 1$ possible bit-strings. This requires $d$ to equal $n \times k$, meaning that there are transitions we may be forced to take with cost as high as $n \times k$. Under the General Memory Model we may be asked to flip every single bit in our bit-string. Under overwrite SCM, though, the number of valid transitions is only $k \times (2^n -1)$, which is much smaller than $2^{n \times k} -1$. For example, if $k = 4$ and $n = 4$, then we need to be able to transition to $4 \times (2^4-1) = 60$ possible states. Under a trivial encoding we can only be asked to flip the 4 bits of any one slot, but if we allow ourselves to modify any of our 16 bits there are 136 bit-strings within distance at most 2, which is more than enough to (in theory) have a codeword of every one of those 60 states within distance at most 2. So we have an opportunity to halve our maximum cost; the relevant counts are verified in the short sketch below. This behavior gives us increased opportunities as the number of slots $k$ increases, until $k > 2^n$, at which point every possible transition may be placed at Hamming distance 1.
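The counts in this example, together with the symbol-space sizes from the previous section, are easy to verify; a minimal sketch of ours (values for $n = k = 4$): \begin{verbatim}
# Verify the n = k = 4 example and the earlier symbol-space counts.
from math import comb

n, k = 4, 4
print(2 ** (n * k))                               # 65536 bit-strings in all
print(k * (2 ** n - 1))                           # 60 overwrite-SCM transitions
print(sum(comb(n * k, i) for i in (1, 2)))        # 136 neighbors within distance 2
print(comb(2 ** n + k - 1, k))                    # 3876 multisets (LOA)
print(comb(2 ** n, k))                            # 1820 sets (LOA + UoE)
\end{verbatim}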
If we also add in Uniqueness of Elements and Local Order Agnosticism, we could even use a simple indicator vector, with the $i^{\text{th}}$ bit representing whether the value $i$ is part of our set. \section{Conjectures 1 and 2} SCM provides some very promising opportunities for bit flip efficiency. However, the hypercube is a nasty beast. Under pure SCM (without Local Order Agnosticism or Uniqueness of Elements) we conjecture that these opportunities cannot be realized. This is because under pure SCM, every value has exactly one codeword, which must be placed somewhere in the hypercube. So creating a code is equivalent to doing a series of element swaps. Every swap can decrease the Hamming distance for some transitions, but usually at the cost of increasing the Hamming distance for many others. From experimenting with very small $n$ and $k$ we were unable to find an example of a coding mechanism that decreased maximum or average bit-flips. \begin{customconj}{1}\label{one} In the SCM model there do not exist codes such that the average or maximum cost of valid transitions is less than that of the trivial encoding. \end{customconj} However, we do believe that the flexibility of having multiple codewords of each value should allow us to realize some of the promise of SCM. We believe that wins should be possible by using built-in redundancy. This redundancy can come from guarantees on the data, or be uncovered through compressing the elements left after taking into account Local Order Agnosticism, Uniqueness of Elements, or both. \begin{customconj}{2}\label{two} Given an SCM model with sufficient redundancy, there exist codes such that the average or maximum cost of valid transitions is less than the costs using normal encoding methods on each slot individually. \end{customconj} \section{Difficulties in proving the conjectures} Because a pure SCM code is a mapping from bit-strings of length $nk \rightarrow$ bit-strings of length $nk$, it can be thought of as a reordering of the $2^{n \times k}$ bit-strings. There are $(2^{n \times k})!$ such reorderings. This is an absolutely enormous space to be working in, even for very, very small $n$ and $k$. This makes any sort of brute force proof of Conjecture 1 very difficult even for case studies of small $n$ and $k$. And if Conjecture 1 turned out to be wrong, finding the useful codes within that huge space is a monstrously difficult task without some principled method. The first proof technique that we attempted for Conjecture 1 was to argue that each swap made during the creation of an ordering representation of the code would have cost higher than gain. However, while we conjecture that the gain at each step can never outweigh the costs imposed by all previous swaps, a single swap could have very positive effects if some very poor swaps were made previously. For instance, if we had swapped all elements with their complements ($10001$ would be swapped with $01110$) except for the all 0's and all 1's strings, then swapping those two would get them much closer to their sets of valid neighbors (the ones where all slots but one are 0's or 1's, respectively). We failed to come up with a promising technique to prove that the gain at each step can never outweigh the previous costs. The second proof technique we attempted was inspired by coding theory proofs of optimality for codes such as Huffman codes. This technique works by a clever use of induction and contradiction. The proof goes something like this. Assume code $H_n$ is optimal for any alphabet of size $n$.
We then assume that code $H_{n+1}$ is not optimal for an alphabet of size $n+1$. This means there is a $H'_{n+1}$ that is better than $H_{n+1}$; we then use $H'_{n+1}$ to construct a code $H'_n$ that is better than $H_n$. This is a contradiction, because by the inductive hypothesis $H_n$ was optimal. This proof technique transfers over pretty well. We hoped to show that there did not exist a code $C_{n,k}$ whose total cost function $${\displaystyle \sum_{e_1 \in C}\sum_{e_2 \in C}{| e_1 \oplus e_2 |}} $$ is less than the trivial representation cost of $(2^n)(n \times k/2)$. When $k = 1$ we are in the General Memory Model of length $n$: every transition is valid, and so there is no way to reduce the total transition cost. So all we need to do is induct over $k$. We had hoped that the existence of improved codes with $k + 1$ slots would allow for the construction of improved codes with $k$ slots. In one case, where a collection of $k$ bits is affected only by the value in a single cell, these ``isolated" bits could be removed, creating an improved code for $k$ slots. But outside of that special case we struggled to find a way to construct $k$ slot codes from $k+1$ slot codes. We still believe that Conjecture 1 is true, and we suspect some of the tools in the original WEM paper \cite{ahlswede_zhang_1989} might be used to prove this result. However, that paper's notation makes many of its main results very unclear, and the techniques used to find those results even more so. Further research on this question is recommended. It seemed like the simplest approach to proving Conjecture 2 would be to find a code with better performance. We failed to do this. The other approach that was considered was a proof by the probabilistic method popularized by Erd\H{o}s\cite{alon2004probabilistic}. For this we would show that a random code would have performance $x$ with probability $p_x$, and then hopefully show that the probability of having nontrivial performance was nonzero. However, we were unable to find a way of calculating the performance of a random code, so this approach stalled out. Further research into Conjecture 2 is strongly recommended. \section{Conclusion} After this section we include additional appendices on short paths that our research took us down; some attempts at formalizing our two Conjectures; calculations, formulas, and remarks related to the behavior of our various memory models; and advice for those attempting to follow this work. However, the core of our results and the narrative of our intuition ends here. The vast majority of work in Bit-Flip Efficient Methods has focused on using redundancy to pack codewords of every symbol closer to each other on the hypercube; a second large body of work concerns using patterns in the data itself to reduce bit-flips. We decided to eschew both of these to examine the restrictions on a data structure's locally valid values and transitions. The first upside of this approach turned out to be that we could extract redundancy directly out of a data structure's rules without having to devote extra physical resources to memory. The second upside was to discover the potentially huge wins of the Single Cell Modification model, which massively restricts the set of legal transitions over a larger local area. This is an incredibly promising area with the possibility for large wins for bit-flip efficiency that require fairly low computational overhead and no additional hardware costs.
However, we spent most of our energy on the biggest and most interesting potential win, taking advantage of SCM, and we failed both to prove our two main conjectures about the achievability of those wins and to find codes which realize them. For future work, the lowest hanging fruit is to take a deeper look at the set encoding mechanisms and to see if the redundancy left over from set compression could be used for classical error correction and bit-flip efficient codes. This would give significant free wins for data structures including the linear probing hash table. The more difficult but more rewarding research path would be to try to verify Conjectures 1 and 2. The potential wins here are much larger, but the path forward likely involves the use of serious tools from information theory and coding theory. \section{Acknowledgements} I cannot thank my collaborator Justin Raizes enough. He has been an incredible friend for the last four years, and his ideas, inspiration, and excitement made this work possible. I would like to thank my sister Nelle Gray for her help with multi-sets. It's wonderful having family I can reach out to when I find myself in combinatorial hot water. And I would like to thank Andrew Stolman and Daniel Bittman for providing skeptical but encouraging listening ears. I'd like to thank my advisors, Peter Alvaro, Seshadhri Comandur, and Ethan Miller. All three have served as fonts of inspiration and advice. And lastly I'd like to thank Darrell Long, who first exposed me to NVRAM and bit-flip efficiency. He opened my eyes to the fact that professors are among the most interesting people in the University, and inspired me to go talk to them more.
\section{Introduction}\label{sec1} Quantum Chromodynamics (QCD), the theory of the strong interaction, undergoes a phase transition at high temperature to a plasma of quarks and gluons. Heavy-ion collisions (HIC) at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) provide the opportunity to study the phase structure of QCD and the properties of the quark-gluon plasma (QGP) \cite{Ackermann:2000tr,Lacey:2001va,Park:2001gm,Aamodt:2010pa,ALICE:2011ab,Chatrchyan:2012ta,ATLAS:2011ah,Aad:2014vba}. One feature of this plasma is its collective behavior, which can be successfully described by hydrodynamic models \cite{Luzum:2008cw}. Among the various probes of the dynamics of relativistic HIC, the most used one is anisotropic flow, quantified by the harmonics $v_n$, which measure the azimuthal asymmetry of the emitted hadrons. The flow distribution and the cumulants are used to gain more information on the event-by-event fluctuations \cite{Ollitrault:1992bk}. This sheds light on the collision geometry and the quantum fluctuations in the initial state, as well as on the effects of the different evolution stages of the heavy-ion process \cite{Schenke:2012wb,Miller:2003kd}. From the experimental point of view, the distributions of $v_{2}$ and $v_{3}$ are accessible through the unfolding method \cite{Jia:2013tja}. This leads to the observation of a Bessel-Gaussian distribution for collisions of spherical nuclei, i.e., Pb-Pb, in central collisions \cite{Aad:2013xma}. On the other hand, cumulants can be obtained as a measure of multi-particle correlation functions \cite{Borghini:2001vi}. Measuring the correlations of particles gives us the chance to map the shape of nuclei \cite{Bally:2022vgo}. Conventionally, in low-energy nuclear physics, a Woods-Saxon profile describes the density of nucleons inside a nucleus, \begin{equation} \rho(r,\theta,\phi)\propto\frac{1}{1+e^{\frac{r-R(\theta,\phi)}{a_0}}}, \end{equation} where $a_0$ and $R(\theta,\phi)$ are the surface diffuseness and the nuclear surface parameter, respectively. In general, to take into account the deformation of the nucleus, $R(\theta,\phi)$ is expanded in terms of spherical harmonics $Y_{\ell}^{m}(\theta,\phi)$. In particular, the quadrupole deformation is defined by $R(\theta,\phi)=R_0(1+\beta_2(\cos\gamma Y_2^0(\theta,\phi)+\sin\gamma Y_2^2(\theta,\phi)))$ \cite{Moller:1994cnm}. Here, $R_0$ is the half-density radius, $\gamma$ determines the relative length of the three axes of the ellipsoid, and $\beta_2$ is the magnitude of the quadrupole deformation. Experimental results show a noticeable difference between the $v_{2}$ of deformed and spherical nuclei \cite{STAR:2015mki}, with the largest difference being in the most central collisions. Besides the quadrupole deformation, octupole deformation may also significantly affect observables. This form of deformation arises due to the breaking of parity symmetry in the intrinsic nuclear shape. This kind of deformation is modeled by including $\beta_3Y_3^0(\theta,\phi)$ in the nuclear surface parameter. \par The structure of this paper is as follows: In Sec.~\ref{sec2}, we present a brief review of the standard Gram-Charlier series method. Using this approach, we find the cumulants of flow harmonics in Sec.~\ref{sec3}. We argue that to include the effect of deformation on the cumulants we have to consider a shift parameter in the definition. Then we obtain the flow distribution for the magnitude of the flow harmonics. We observe that the conventional Bessel-Gaussian distribution is not appropriate for central collisions.
Once we consider higher-order corrections, the data is explained accurately. Having the appropriate distribution, we compare the spherical and deformed nuclei in Sec.~\ref{sec5}. We see that, for the particular case of quadrupole deformation, the corresponding distribution is broader than the spherical one. In Sec.~\ref{sec6}, we present an approach to observe the shift parameter in experiments. We summarize in Sec.~\ref{sec.con} and present our concluding remarks. \section{Method of Analysis}\label{sec2} In this section, we use the so-called standard Gram-Charlier (sGC) series to find the distribution of a random variable. In cases where the available information about the desired variable, such as its moments, is incompatible with a Gaussian distribution, this series can be used to modify the Gaussian distribution. This method relates the probability distribution $P(\mathbf{Z})$ to the Gaussian distribution by applying an appropriate differential operator \cite{Kendall:1945,Cramer:1999,Krzanowski:2000}. To do this, let us start with the characteristic function $\Phi_{\mathbf{Z}}(\mathbf{t})$ of a $k$-dimensional random vector $\mathbf{Z}$, which is defined as: \begin{equation}\label{qq1} \Phi_{\mathbf{Z}}(\mathbf{t})=\int \mathbf{d}\mathbf{Z}\; e^{i\mathbf{t}^{T}\mathbf{Z}} P(\mathbf{Z}). \end{equation} Here, $\mathbf{t}$ belongs to $\mathcal{C}^k$ ($\mathcal{R}^k$) when $\mathbf{Z}$ is complex (real). As it turns out, if the random vector $\mathbf{Z}$ has a moment-generating function $\mathcal{M}$, the domain of the characteristic function can be extended to the complex plane, and thus we have $\Phi(-i\mathbf{t})=\mathcal{M}(\mathbf{t})$. Moreover, the second characteristic function is defined as $\mathcal{K}(\mathbf{t})=\log \mathcal{M}(\mathbf{t})$. This leads to the cumulants of the random vector as $\mathcal{K}^{(n)}(\mathbf{t})|_{\mathbf{t}=0}=(\log \mathcal{M}(\mathbf{t}))^{(n)}|_{\mathbf{t}=0}$\footnote{Assuming a real random variable $x$, the corresponding equation leads to: \begin{align*} 1+\sum_{n=1}^{\infty}\frac{\mu_n t^n}{n!}=\exp\Big(\sum_{n=1}^{\infty}\frac{\kappa_n t^n}{n!} \Big), \end{align*} where, for the particular choice $\mu=\langle x\rangle$, it implies: \begin{align*} \mu_1&=\mu =\kappa_1,\\ \mu_2&=\mu ^2+\sigma ^2=\kappa_1^2+\kappa_2, \\ \mu_3&=\mu ^3+3 \mu \sigma ^2+\kappa_3=\kappa_1^3+3\kappa_1\kappa_2+\kappa_3. \end{align*} }. Furthermore, the probability function defined in Eq.~\ref{qq1} is determined by an inverse Fourier transformation \cite{Cramer:1999} as follows: \begin{equation}\label{qq2} P(\mathbf{Z})=\frac{1}{2\pi}\int \mathbf{d}\mathbf{t}\; e^{-i\mathbf{t}^T\mathbf{Z}} \Phi_{\mathbf{Z}}(\mathbf{t}). \end{equation} Expanding the characteristic function around its Gaussian (second-cumulant) part and rewriting powers of $i\mathbf{t}$ as an appropriate differential operator acting on $e^{-i\mathbf{t}^T\mathbf{Z}}$, we arrive at an expression for $P(\mathbf{Z})$. Following the steps described above to find the distribution function for a real random variable $x$, we find the corrections to the conventional Gaussian distribution. The corresponding terms are given in terms of the probabilist's Hermite polynomials, $He_n$, as $\sum_{n=3}^{\infty}\frac{\kappa_n}{n!\sigma^n}He_{n}(\frac{x-\mu}{\sigma})$. The sGC method aims to find the non-Gaussianity corrections needed to obtain a complete description of our variable. This leads us to focus on the characteristic function instead. Since this quantity gives us the desired information about the moments and cumulants, it provides insight into the probability distribution.
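As a concrete illustration of the one-dimensional expansion above (our own sketch, not from the cited references; the test distribution, sample size, and \texttt{skewnorm} parameters are illustrative only), one can estimate the first few cumulants from samples and build the corrected Gaussian: \begin{verbatim}
# Hedged sketch: 1D Gram-Charlier A series with sample cumulants.
import numpy as np
from scipy.stats import kstat, skewnorm
from numpy.polynomial.hermite_e import hermeval

x = skewnorm.rvs(a=4, size=200_000, random_state=1)    # skewed test sample
k1, k2, k3, k4 = (kstat(x, n) for n in (1, 2, 3, 4))   # sample cumulants
mu, sig = k1, np.sqrt(k2)

grid = np.linspace(mu - 4 * sig, mu + 4 * sig, 400)
z = (grid - mu) / sig
gauss = np.exp(-z**2 / 2) / (np.sqrt(2 * np.pi) * sig)
He3 = hermeval(z, [0, 0, 0, 1])                        # He_3(z) = z^3 - 3z
He4 = hermeval(z, [0, 0, 0, 0, 1])                     # He_4(z) = z^4 - 6z^2 + 3
sgc = gauss * (1 + k3 / (6 * sig**3) * He3 + k4 / (24 * sig**4) * He4)
# 'sgc' tracks a histogram of x noticeably better than 'gauss' alone.
\end{verbatim}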
It is known that both the collision geometry and the event-by-event fluctuations are encoded in the flow harmonic distributions $P(v_n)$ as well as in the cumulants \cite{Voloshin:2007pc}. In the following sections, we study the connection between them to gain a deeper insight into the effects of nuclear deformation using the sGC series in the two cases of symmetric and deformed ion collisions. \section{spherical and deformed nuclei collisions}\label{sec3} In this section, we examine the effect of deformation on the flow anisotropy. To start, we present the $2k$-particle correlation functions $c_n\{2k\}$ \cite{Borghini:2001vi} as well as the shifted cumulants \cite{Mehrabpour:2020wlu} for symmetric and deformed ions. The present work aims to study the effects of deformation on the cumulants resulting from the initial stage of the collision. To do this, we use the approximate relation between $v_n$ and the initial anisotropy $\varepsilon_n$ for the second and third harmonics, $v_n=\alpha_n\varepsilon_n$ \cite{Giacalone:2017dud,Giacalone:2018apa}, so we do not need to compute $v_2$ and $v_3$ by means of full hydrodynamic simulations. Here, $\alpha_n$ is a response coefficient that depends on the properties of the medium, such as its viscosity, and it is the same for all events at a given centrality\footnote{Since this linear response works for $n=2,3$ but not for higher harmonics, and gives us a simple picture of the relationship between initial and final states, we do not discuss higher harmonics here. The method employed here, however, works for higher flow harmonics as well, although that would be more complicated.}. Its value has been determined at both RHIC and LHC energies (see Ref.\cite{Giacalone:2019pca}). Thus, we have generated data for PbPb and UU, as well as ZrZr, collisions at center-of-mass energies of $\sqrt{s_{NN}}=5.02\;\text{TeV}$ and $200\;\text{GeV}$, respectively, motivated by the LHC \cite{ALICE:2022xir} and RHIC \cite{STAR:2015mki} experiments \footnote{We performed the same analysis for UU collisions at a $5.02\;\text{TeV}$ center-of-mass energy and the results are exactly the same.}. Also, we use the same T$_{\text{R}}$ENTO parametrization \footnote{In this study, we use the geometric thickness function with $p=0$. The nucleus thickness function is a superposition of the nucleon thickness functions, whose Gaussian width is chosen as $w=0.5$. Moreover, the fluctuations of the nucleon thickness function are modeled by a gamma distribution with variance $1/k$, where we have used $k=1$.} for these simulations at different centrality classes, to have the same situation for both deformed and symmetric nuclei. To better understand the effect of deformation in deformed collisions, we consider three sets of deformation parameters for UU collisions as follows: \begin{equation*} \begin{split} 1)&\; \beta_2=0 \;\text{and}\; \beta_3=0 \;\text{: spherical U},\\ 2)&\; \beta_2=0.265 \;\text{and}\; \beta_3=0 \;\text{: effect of $\beta_2$ },\\ 3)&\; \beta_2=0.265 \;\text{and}\; \beta_3=0.1 \;\text{: effect of $\beta_2$ and $\beta_3$}. \end{split} \end{equation*} \begin{figure}[t!] \begin{tabular}{c} \includegraphics[scale=0.4]{1.pdf}\\ \includegraphics[scale=0.4]{2.pdf}\\ \includegraphics[scale=0.4]{3.pdf} \end{tabular} \caption{(Color online) Comparing $2k$-particle correlation functions $c_n\{2k\}$ for spherical and deformed nuclei collisions.
Data has been generated at a center-of-mass energy of $\sqrt{s_{NN}}=5.02\;\text{TeV}$.} \label{fig1} \end{figure} In the previous section, it was shown that one way to study a random variable is to scrutinize its moments, cumulants, and probability distribution. We propose that, in order to obtain more insight into the impact of deformation, we need to study the flow harmonics in this way. Thus, we will compare the observables of symmetric and deformed nuclei. To show how this method works in flow studies, we plug $t=\frac{1}{2}(t_x-it_y)$ and $Z=v_{n,x}+iv_{n,y}$ into Eq.~\ref{qq1} for any harmonic (see more details in Ref.\cite{Mehrabpour:2020wlu}). Following these considerations, we arrive at the $2k$-particle correlation functions \cite{Borghini:2001vi}: \begin{equation} c_n\{2\}=\langle v_n^2\rangle,\; c_n\{4\}=\langle v_n^4\rangle-2\langle v_n^2\rangle^2,\;\cdots. \end{equation} In Fig.~\ref{fig1}, we plot the correlation functions $c_2\{2k\}$ for $k=1,2,3$ at various centralities. As demonstrated in this figure, we find a finite difference between the correlation function $c_2\{2\}$ of symmetric and asymmetric nuclei, whose magnitude is larger for the deformed ones. Concerning the four- and six-particle correlation functions, we see there is no significant difference at the $0-5\%$ and $5-10\%$ centralities, whereas we expected to observe a considerable deformation effect. In Fig.~\ref{fig9}, we present the 2-dimensional distributions of the elliptic flow for different spherical and deformed nuclei collisions at $0-5\%$ centrality. The values of the average ellipticity $\bar{v}_2$ are shown in each panel as well. Fig.~\ref{fig9} suggests that the flow distributions of the different symmetric and asymmetric nuclei have the same behavior in the most central collisions. Furthermore, it is known that the experimental data \cite{ATLAS:2019peb} favor the Bessel-Gaussian (BG) distribution, \begin{align*} p(v_n)=\frac{2v_n}{c_n\{2\}}e^{-\frac{v_n^2+\bar{v}_n^2}{c_n\{2\}}}I_0\left(\frac{2v_n \bar{v}_n}{c_n\{2\}}\right), \end{align*} to explain the elliptic flow distribution of spherical nuclei, due to $c_n\{4\}\approx c_n\{6\}\approx\cdots\approx0$ in the most central collisions. One may wonder if the BG distribution can be a suitable choice for the flow distribution of \emph{deformed ions} at low centralities. The data of STAR \cite{STAR:2015mki} show a noticeable difference between the measured $v_2$ of deformed and spherical nuclei collisions. This compels us to challenge our assumptions. As mentioned in Refs.\cite{hadi} and \cite{Mehrabpour:2020wlu}, we have to consider a shift in the $x$-direction, $Z=(v_{n,x}-\bar{v}_n)+iv_{n,y}$, where $\bar{v}_n=\langle v_{n,x}\rangle\neq0$ for even harmonics, as depicted in Fig.~\ref{fig9}. However, this is not the case for odd harmonics, where we have $\bar{v}_{2n+1}=0$. Imposing this change and following the previous steps, the relation between the generating functions of the moments and cumulants of a complex variable is given by: \begin{equation}\label{qq4} \log \langle e^{t^*Z}\rangle =\sum_{k}\frac{(tt^*)^k}{(k!)}K_n\{2k\}. \end{equation} \begin{figure}[t!] \begin{tabular}{c} \includegraphics[scale=0.2]{88.jpg} \includegraphics[scale=0.2]{99.jpg}\\ \includegraphics[scale=0.2]{1010.jpg} \includegraphics[scale=0.2]{1111.jpg} \end{tabular} \caption{(Color online) These plots present $v_{2,x}=\alpha_2 \varepsilon_{2,x}$ vs.
$v_{2,y}=\alpha_2 \varepsilon_{2,y}$, obtained from T$_{\text{R}}$ENTO using the same parametrization for different collisions.} \label{fig9} \end{figure} Therefore, one finds the desired order of the real cumulants $K_n\{2k\}$ \cite{Mehrabpour:2020wlu} as follows\footnote{It should be mentioned that they are derived by differentiating both sides of Eq.(\ref{qq4}) at $t_x=0$ and $t_y=0$. Also, we mention that the $K_n\{2k\}$ are 2-dimensional cumulants.}: \begin{equation}\label{qq5} \begin{aligned} K_n\{2\}&=\langle ZZ^*\rangle,\\ K_n\{4\}&=\langle (ZZ^*)^2\rangle-2\langle ZZ^*\rangle^2,\\ K_n\{6\}&=\langle (ZZ^*)^3\rangle+12\langle ZZ^*\rangle^3-9\langle ZZ^*\rangle\langle (ZZ^*)^2\rangle. \end{aligned} \end{equation} Note that the cumulants $K_n\{2k\}$ contain $c_n\{2k\}$ as well as moments such as $\bar{v}_n^i \langle v_{n,x}^j v_{n,y}^l \rangle$, where $i+j+l=2k$ \footnote{By fixing the reaction plane in the experiment and computing an event-by-event $\mathbf{V}_n=v_ne^{in\Psi_n}=v_{n,x}+iv_{n,y}$ from the $\overrightarrow{\rm q}=(q_x,q_y)$ vectors, these cumulants can be found experimentally (see Ref.\cite{Bilandzic:2010jr} for more details). $v_n$ is the amplitude of anisotropic flow in the $n$-th harmonic, and $\Psi_n$ is the corresponding symmetry plane.}. We call the above correlation functions the shifted cumulants, due to the non-vanishing $\bar{v}_n$. In Fig.~\ref{fig2}, we plot Eq.~\ref{qq5} for $n=2$ for both spherical and deformed collisions. As depicted in the top plot ($K_2\{2\}$) of this figure, we observe a difference between spherical-spherical (SS) and deformed-deformed (DD) collisions. In addition, the effect of octupole deformation with non-zero $\beta_3$ in UU can be seen in this plot as well. For the particular cumulant $K_{2}\{2\}$, the effect of $\beta_3$ manifests itself more noticeably in mid-central collisions. It turns out that the centrality dependence of the cumulants for PbPb and spherical UU is very similar. Concerning the next-order cumulant, i.e., $K_{2}\{4\}$, there is a considerable difference between its magnitudes for deformed and spherical nuclei. We have $K_2\{4\}\approx 0$ for SS collisions, as expected. The difference between deformed uranium collisions with and without $\beta_3$ manifests itself in $K_2\{4\}$ and $K_2\{6\}$. It can be seen that the effect of $\beta_3$ decreases in $K_{2}\{6\}$, whereas it increases in $K_{2}\{4\}$ and $K_{2}\{2\}$. This difference appears more clearly in $K_2\{6\}$ compared to $K_2\{4\}$. As demonstrated in Figs.~\ref{fig1} and \ref{fig2}, the splitting between different values of the quadrupole $\beta_2$ and octupole $\beta_3$ can be obtained from the shifted cumulants $K_n\{2k\}$, in contrast with the $2k$-particle correlations $c_n\{2k\}$. This splitting appears more strongly in the higher orders of $K_n\{2k\}$. In other words, the results show that if we want to study the effect of deformation on flow anisotropies, it is helpful to investigate the shifted cumulants $K_n\{2k\}$. Also, the difference between different spherical nuclei at mid-central centralities can be extracted from $K_2\{6\}$. To see this difference, one needs to include higher-order terms of the cumulants in the probability distributions, which we leave to future work. \begin{figure}[h!] \begin{tabular}{c} \includegraphics[scale=0.4]{11.pdf}\\ \includegraphics[scale=0.4]{22.pdf}\\ \includegraphics[scale=0.4]{33.pdf} \end{tabular} \caption{(Color online) This is similar to Fig.~\ref{fig1} but for the shifted cumulants $K_n\{2k\}$.
The results show that the relationship between the different orders of the shifted cumulants is $K_2\{2\}>K_2\{4\}>K_2\{6\}$, the same as for the $2k$-particle correlation functions.} \label{fig2} \end{figure} As we observed, including the correction $\bar{v}_n$ in our assumptions led us to extract some fascinating properties. Now, we study the effect of this modification on the flow distributions. As discussed in Sec.~\ref{sec2}, one can study a stochastic variable using the sGC method and its characteristic function, which gives the modification to the Gaussian distribution. Thus, using Eq.~\ref{qq4}, we can investigate the shifted flow harmonics $Z=v_{n,x}-\bar{v}_n+iv_{n,y}$. To have an approach accessible in experiments, we should obtain a distribution for the magnitudes of the flow harmonics. To do this, we find the radial distribution $P_r(v_n)$ by writing the distribution given in Cartesian coordinates in polar coordinates and then integrating the two-dimensional distribution $P(v_{n,x},v_{n,y})$ over $\Psi_n$ (see Ref.\cite{Mehrabpour:2020wlu}) as follows: \begin{equation}\label{qq6} \begin{split} P_r(v_n) &=\frac{d}{dv_n}\int P(v_x,v_y)\;dv_x\;dv_y\\ &=\frac{d}{dv_n}\int v_n\; P(v_n,\Psi_n)\;dv_n\;d\Psi_n, \end{split} \end{equation} where \begin{equation}\label{qq7} P(v_n,\Psi_n)\approx\left[1+\sum_{k=2} \frac{R_n\{2k\}\mathcal{D}^{k}_{v_n,\Psi_n}}{4^k(k!)^2}\right] \mathcal{G}(v_n,\Psi_n). \end{equation} Here, $\sqrt{2\pi}\sigma \mathcal{G}(v_n,\Psi_n)$ is a 2D Gaussian distribution with mean $\bar{v}_n$ and standard deviation $\sqrt{R_n\{2\}/2}$. Moreover, $\mathcal{D}=\partial_{v_n}^2+(1/v_n)\partial_{v_n}+(1/v_n^2)\partial_{\Psi_n}^2$. Bearing in mind the importance of the shift parameter introduced in Eq.~\ref{qq5}, we have to use the radial shifted cumulants $R_n\{2k\}$. This means that the cumulants in Eq.~\ref{qq4} are not applicable anymore. Therefore, we employ the main definition of the moments, \begin{equation}\label{q7} \langle v_n^{2k}\rangle= \int_{0}^{\infty} v_n^{2k} P_r(v_n) \;dv_n, \end{equation} to obtain $R_n\{2k\}$ (see Ref.\cite{hadi}). In order to obtain an expression for $R_{n}\{2k\}$, we truncate Eq.~\ref{qq7} at the desired order of $k$. For example, if we keep only the first term in Eq.~\ref{qq7}, $P_r(v_n)$ is a BG distribution: \begin{equation}\label{qq8} P_r(v_n)\equiv BG(v_n)=\mathcal{G}(v_n;\bar{v}_n)I_0\left(\frac{2v_n\bar{v}_n}{R_n\{2\}}\right), \end{equation} where $\mathcal{G}(v_n;\bar{v}_n)=(2v_n/R_n\{2\})\exp\left[-\frac{v_n^2+\bar{v}_n^2}{R_n\{2\}}\right]$ is a 1D Gaussian distribution with a non-zero central moment, and $I_j(z)$ is the modified Bessel function of the first kind. In this case, we just have $R_n\{2\}$: \begin{align*} \langle v_n^2\rangle&= \int v_n^2 P_r(v_n)\;dv_n=\int v_n^2 BG(v_n)\;dv_n=R_n\{2\}+\bar{v}_n^2. \end{align*} Now, we obtain the form of the first shifted cumulant using the equality above as $R_n\{2\}=\langle v_n^2\rangle-\bar{v}_n^2=c_n\{2\}-\bar{v}_n^2$. However, we can keep higher order terms in this expansion as well, to arrive at a distribution that includes corrections to the Bessel-Gaussianity of Eq.~\ref{qq8}, $P_r(v_n)=BG(v_n)+P_k(v_n)$. Here, $P_k(v_n)$ includes the corrections from higher cumulants. We know from experiments \cite{Aad:2014vba} that the BG distribution presents a suitable description of the flow harmonics of spherical nuclei from most central to mid-central collisions. Now, one may wonder if we can use this expression for deformed nuclei in the same centrality classes.
To answer this, we consider the correction to the BG distribution that includes just the second and third radial shifted cumulants, $R_n\{4\}$ and $R_n\{6\}$, as follows: \begin{equation}\label{qqq8} \begin{split} P_r(v_n)&=BG(v_n)+P_2(v_n)+P_3(v_n) \\&=\mathcal{G}(v_n;\bar{v}_n)I_0\left(\frac{2v_n\bar{v}_n}{R_n\{2\}}\right) \\&+\frac{1}{2}\gamma_4\mathcal{G}(v_n;\bar{v}_n)\sum_{j=0}^{2}\alpha_{2,j} I_j\left(2v_n\bar{v}_n/R_n\{2\}\right) \\&+\frac{1}{6}\gamma_6\mathcal{G}(v_n;\bar{v}_n)\sum_{j=0}^{3}\alpha_{3,j} I_j\left(2v_n\bar{v}_n/R_n\{2\}\right). \end{split} \end{equation} The coefficients $\gamma_{2k}$ and $\alpha_{i,j}$ in the correction terms of Eq.~\ref{qqq8} are: \begin{align*} \gamma_4&=R_n\{4\}/R_n\{2\}^2,\;\gamma_6=R_n\{6\}/R_n\{2\}^3,\\ \alpha_{2,0}&=\mathit{L}_2 \left(\frac{v_n^2+\bar{v}_n^2}{R_n\{2\}}\right)+\frac{v_n^2\bar{v}_n^2}{R_n\{2\}^2},\\ \alpha_{3,0}&=\mathit{L}_3 \left(\frac{v_n^2+\bar{v}_n^2}{R_n\{2\}}\right)+\frac{3v_n^2\bar{v}_n^2}{R_n\{2\}^2}\mathit{L}_1\left(\frac{v_n^2+\bar{v}_n^2}{3 R_n\{2\}}\right),\\ \alpha_{2,1}&=\frac{4v_n\bar{v}_n}{R_n\{2\}}\mathit{L}_1 \left(\frac{v_n^2+\bar{v}_n^2}{2R_n\{2\}}\right),\\ \alpha_{3,1}&=\frac{6v_n\bar{v}_n}{R_n\{2\}}\mathit{L}_2 \left(\frac{v_n^2+\bar{v}_n^2}{2R_n\{2\}}\right)+\frac{v_n^4+6v_n^2\bar{v}_n^2+\bar{v}_n^4}{8R_n\{2\}^2},\\ \alpha_{2,2}&=\frac{v_n^2\bar{v}_n^2}{R_n\{2\}^2},\; \alpha_{3,2}=\frac{3v_n^2\bar{v}_n^2}{R_n\{2\}^2}\mathit{L}_1\left(\frac{v_n^2+\bar{v}_n^2}{3 R_n\{2\}}\right),\\ \alpha_{3,3}&=\frac{v_n^3\bar{v}_n^3}{3R_n\{2\}^3}. \end{align*} Here, the $\mathit{L}_i(z)$ are the Laguerre polynomials. Let us emphasize that we kept terms up to third order, $k=3$, since the higher-order cumulants are small \footnote{To examine this claim, we included them and confirmed they are small and negligible.}. Plugging Eq.~\ref{qqq8} into Eq.~\ref{q7}, the cumulants $R_{n}\{2k\}$ with $k=1,2,3$ are given by: \begin{equation}\label{qR} \begin{split} R_n\{2\}&=\langle v_n^2\rangle-\bar{v}_n^2=c_n\{2\}-\bar{v}_n^2,\\ R_n\{4\}&=\langle v_n^4\rangle-2\langle v_n^2\rangle^2+\bar{v}_n^4=c_n\{4\}+\bar{v}_n^4,\\ R_n\{6\}&=\langle v_n^6\rangle-9\langle v_n^4\rangle\langle v_n^2\rangle+12\langle v_n^2\rangle^3-4\bar{v}_n^6=c_n\{6\}-4\bar{v}_n^6. \end{split} \end{equation} We see that $R_n\{2\}=K_n\{2\}$, as expected. So, if one obtains $R_n\{2\}$ for the different collisions, the results in the top panel of Fig.~\ref{fig2} are reproduced. To investigate $P_r(v_n)$ for both SS and DD collisions, we focus on the $0-5\%$ centrality, where we expect the largest nuclear deformity effects \cite{STAR:2015mki}. Fig.~\ref{fig3} shows a comparison of the distribution of PbPb with that of UU. As depicted in this figure, the leading-order truncation, Eq.~\ref{qq8}, is a reliable estimate for the PbPb data in the most central collisions. In contrast to PbPb, the distribution of UU indicates a trace of non-Bessel-Gaussianity. This comes from the term involving $R_2\{4\}$ being comparable to the leading term in deformed UU collisions. In this context, Fig.~\ref{fig4} shows that there is a noticeable difference between the values of $R_2\{4\}$ for spherical and deformed collisions. Furthermore, once $\beta_3$ is turned on, we observe an increase in the magnitude of $R_2\{4\}$, in contrast to $c_2\{4\}$ and $K_2\{4\}$. However, there is a splitting between the different deformed nuclei as well. \begin{figure}[t!]
\begin{tabular}{c} \includegraphics[scale=0.42]{1.jpg}\\ \includegraphics[scale=0.42]{2.jpg} \end{tabular} \caption{(Color online) A comparison of the obtained elliptic flow distribution with $BG(v_2)$ and with the corrections $P_2$ and $P_3$, shown by dashed black, red, and blue lines, respectively. The top panel displays the data of PbPb at $0-5\%$, while the bottom panel shows the results of UU collisions in the same centrality class.} \label{fig3} \end{figure} \begin{figure}[t!] \begin{tabular}{c} \includegraphics[scale=0.45]{3b.pdf} \end{tabular} \caption{(Color online) Comparison of the cumulants $R_2\{4\}$ as a function of centrality for different spherical and deformed nuclei collisions. The mini panel presents the values of $R_2\{4\}$ in the region where we expect maximum deformity.} \label{fig4} \end{figure} \begin{figure}[t!] \begin{tabular}{c} \includegraphics[scale=0.42]{3.jpg}\\ \includegraphics[scale=0.42]{4.jpg} \end{tabular} \caption{(Color online) Similar to Fig.~\ref{fig3} but for the third harmonic $v_3$ distribution of PbPb (top) and ZrZr (bottom) collisions.} \label{fig5} \end{figure} If we want to investigate the deformation effect on $v_3$ or the octupole structure of nuclei, it seems that increasing $\beta_3$ would lead to a correction to Bessel-Gaussianity as well. In Fig.~\ref{fig5}, we show the distribution of $v_3$ both for SS (PbPb) and DD (ZrZr) collisions at $0-5\%$ centrality. The results imply that large values of $\beta_3$ play a significant role in the $v_3$ distribution. This effect appears as a non-Bessel-Gaussian distribution, while the Bessel-Gaussian approximation works well for spherical nuclei. \begin{figure}[t!] \begin{tabular}{c} \includegraphics[scale=0.445]{6.pdf}\\ \includegraphics[scale=0.45]{7.pdf} \end{tabular} \caption{(Color online) The results show the values of $R_3\{2\}$ and $R_3\{4\}$, which are obtained from PbPb and ZrZr data.} \label{fig6} \end{figure} To study the correction part of the $v_3$ distribution, we should investigate the coefficient $\gamma_4=R_3\{4\}/R_3\{2\}^2$. Fig.~\ref{fig6} shows a comparison of $R_3\{2\}$ and $R_3\{4\}$ for PbPb with ZrZr collisions. Note that due to $\bar{v}_3=0$, one can find $R_3\{2k\}=c_3\{2k\}$. As illustrated in Fig.~\ref{fig6}, we find that the cumulants of ZrZr have a larger magnitude than the spherical ones. This leads to a non-negligible difference in $\gamma_4$, and thus the correction terms are crucial in ZrZr, as shown in Fig.~\ref{fig5}. The deformation of nuclei manifests itself in the shape of the colliding nuclei. On the other hand, the change in the overlapping area can have effects on the cumulants and distributions of flow harmonics. In this regard, we believe that the standard Gram-Charlier series would be an ideal tool as a probe of nuclear structure in distribution analysis. \section{Relation between observables}\label{sec5} In Sec.~\ref{sec3}, we presented the comparison of SS and DD distributions as a probe of nuclear structure. We showed that the effect of deformation appears in the second and third harmonics. It is useful to have an estimate of the observables of DD collisions. Thus, we want to estimate the observables by fitting a known SS distribution, e.g., PbPb, to deformed-nuclei data such as UU. To do that, we need to modify the distribution of the spherical one.
Since the correction at $k=3$ is negligible, we modify $P_r(v_n)$ in Eq.~\ref{qqq8} by considering the truncation at $k=2$: \begin{equation}\label{qq9} \begin{split} P_r^{M}&(v_n)=\mathcal{G}(v_n';\bar{v}_{est})I_0\left(2v_n'\bar{v}_{est}/R_n\{2\}_{est}\right)\\ &+\frac{1}{2}\gamma_4^{est}\mathcal{G}(v_n';\bar{v}_{est})\sum_{j=0}^{2}\alpha_{2,j}^{est}(v_n') I_j\left(2v_n'\bar{v}_{est}/R_n\{2\}_{est}\right). \end{split} \end{equation} Inspired by Refs.\cite{Jia:2021tzt} and \cite{Jia:2021qyu}, the estimated parameters in the above are defined by: \begin{equation}\label{qq10} \begin{split} &v_n'=v_{n,0}+\sum_{m=2}p_m\beta_m,\quad \bar{v}_{est}=\bar{v}_{0}+\sum_{m=2}\delta_{1,m} \beta_m,\\ &R_n\{2\}_{est}=R_n\{2\}_{0}+\left(\sum_{m=2}\delta_{2,m} \beta_m\right)^2,\\ &R_n\{4\}_{est}=R_n\{4\}_{0}+\left(\sum_{m=2}\delta_{3,m} \beta_m\right)^4, \end{split} \end{equation} \begin{figure}[t!] \begin{tabular}{c} \includegraphics[scale=0.42]{5.jpg}\\ \includegraphics[scale=0.42]{66.jpg}\\ \includegraphics[scale=0.42]{7.jpg} \end{tabular} \caption{(Color online) In the top panel, the different corrections of spherical uranium in most-central collisions are compared, while in the middle panel, the distributions of spherical nuclei are compared to the flow distribution of deformed uranium collisions. The bottom panel shows different estimations of DD using the SS distribution.} \label{fig7} \end{figure} where the index $0$ in the above indicates spherical observables. In fact, Eq.~\ref{qq10} is the simplest ansatz for studying the impact of deformation directly in terms of observables, in analogy to Ref.\cite{Jia:2021tzt}. Here, we show that, given the cumulants obtained from PbPb data, one arrives at the UU observables using Eqs.~\ref{qq9} and \ref{qq10}. To do this, we first show that the distributions of PbPb and spherical uranium are equivalent. In the top panel of Fig.~\ref{fig7}, we plot the distribution of spherical uranium with vanishing deformation parameters $\beta_{2}=\beta_{3}=0$. It is obvious that the BG distribution can explain the data for the spherical uranium accurately. In the middle panel of this plot, the comparison of PbPb and spherical uranium shows good agreement between them. This allows us to estimate the deformed UU observables using PbPb data. Since we want to study the quadrupole deformation of nuclei, as a simple case study, we generate UU collisions by setting $\beta_2=0.265$ and $\beta_3=0$; this removes the $\beta_3$ effect on the cumulants $R_2\{2k\}$. As illustrated in the middle plot of Fig.~\ref{fig7}, there is a noticeable difference between the SS and DD distributions. It should be mentioned that the truncation at $k=2$ was considered for both SS and DD. \begin{figure}[t!] \begin{tabular}{c} \includegraphics[scale=0.42]{10.pdf} \end{tabular} \caption{(Color online) $\chi^2/$NDF values of fitting the distribution $P^M_r(v_2)$ to simulation data as a function of centrality.} \label{fig8} \end{figure} Finding the coefficients $p$ and $\delta_i$ leads us to the distribution of deformed nuclei. Since we just considered non-zero $\beta_2$, we rename the coefficients in Eq.~\ref{qq10} as $p=p_2$, $\delta_{1,est}=\delta_{1,2}$, $\delta_{2,est}=\delta_{2,2}^2$, and $\delta_{3,est}=\delta_{3,2}^4$.
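A minimal sketch of such a fit, keeping only the leading Bessel-Gaussian term of Eq.~\ref{qq9} (the $P_2$ correction with $\gamma_4^{est}$ is omitted for brevity, and the data arrays below are hypothetical placeholders for the binned simulation histograms), could look as follows:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import i0

def bg(vn, vbar_est, R2_est):
    # Leading (Bessel-Gaussian) term of the modified distribution;
    # the P_2 correction is omitted here to keep the sketch short.
    g = (2.0 * vn / R2_est) * np.exp(-(vn**2 + vbar_est**2) / R2_est)
    return g * i0(2.0 * vn * vbar_est / R2_est)

# 'vn_bins' and 'pdf_dd' stand for a binned v_2 distribution of the
# deformed (UU) sample -- hypothetical placeholders for simulation data.
vn_bins = np.linspace(0.005, 0.25, 50)
pdf_dd  = bg(vn_bins, 0.06, 0.003)     # stand-in for the DD histogram

popt, pcov = curve_fit(bg, vn_bins, pdf_dd, p0=[0.05, 0.002])
vbar_fit, R2_fit = popt
print(vbar_fit, R2_fit)
\end{verbatim}
The fitted $\bar{v}_{est}$ and $R_2\{2\}_{est}$ then translate into $\delta_{1,est}$ and $\delta_{2,est}$ through Eq.~\ref{qq10}, given $\beta_2$ and the spherical baseline.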
As demonstrated in the bottom panel of Fig.~\ref{fig7}, we found different estimates of the UU distribution as follows: \begin{equation}\label{qq11} \begin{split} P^{DD}_{est}&=P^{M}(p,\delta_{1,est},\delta_{2,est},\delta_{3,est}),\\ P^{DD}_{est}&=P^{M}(0,\delta_{1,est},\delta_{2,est},\delta_{3,est}). \end{split} \end{equation} The results show that the estimated distributions are qualitatively compatible with the $v_2$ distribution obtained from UU data. In other words, the definitions in Eq.~\ref{qq10} work. To test their consistency, one can investigate them at other centralities. The values of the coefficients $\delta_{i,est}$ are presented in Table~\ref{tab1} for different centrality classes. We can see that the effect of deformation would be different in each centrality class due to the various estimated values. Also, the $\chi^2/$NDF values of the fit in each centrality class are illustrated in Fig.~\ref{fig8}. Since these values are close to 1, one can conclude that Eqs.~\ref{qq9} and \ref{qq10} provide a good estimation of the deformed UU observables. Of course, the value of $\chi^2/$NDF grows in mid-central collisions. This means that if we go to higher centralities we need to keep further terms in the truncation of Eq.~\ref{qq7}, e.g., the terms including $R_n\{6\}$, $R_n\{8\}$, and so on, to explain the data without any fitting. Moreover, if one wants to study the $2k$-particle correlation functions $c_n\{2k\}$ in this context, they should be written as functions including $\beta_n$: \begin{equation}\label{qcn} \begin{split} c_n\{2k\}_{est}&=c_n\{2k\}_0+\left(\sum_{m=2}\xi_{2k,m} \beta_m\right)^{2k}. \end{split} \end{equation} As mentioned in Eq.~\ref{qR}, $R_n\{2k\}_{est}$ is a function of $c_n\{2k\}_{est}$ and $\bar{v}_{n,est}$. Plugging Eq.~\ref{qcn} into Eq.~\ref{qR} and separating the terms containing $\beta_n$ from the spherical terms, one can find the following relations: \begin{equation}\label{qxi} \begin{split} R_n&\{2\}_{est}=R_n\{2\}_{0}-\left(\sum_{m=2}\delta_{1,m} \beta_m\right)^2\\&-2\left(\sum_{m=2}\delta_{1,m} \beta_m\right)\bar{v}_0+\left(\sum_{m=2}\xi_{2,m} \beta_m\right)^{2},\\ R_n&\{4\}_{est}=R_n\{4\}_{0}+\left(\sum_{m=2}\delta_{1,m} \beta_m\right)^4\\&+4\left(\sum_{m=2}\delta_{1,m} \beta_m\right)^3\bar{v}_0+6\left(\sum_{m=2}\delta_{1,m} \beta_m\right)^2\bar{v}_0^2\\&+4\left(\sum_{m=2}\delta_{1,m} \beta_m\right)\bar{v}_0^3+\left(\sum_{m=2}\xi_{4,m} \beta_m\right)^{4}, \end{split} \end{equation} keeping in mind $R_n\{2\}_0=c_n\{2\}_0-\bar{v}_{n,0}^2$ and $R_n\{4\}_0=c_n\{4\}_0+\bar{v}_{n,0}^4$. \begin{figure}[t!] \begin{tabular}{c} \includegraphics[scale=0.4]{14.pdf} \end{tabular} \caption{(Color online) Comparison of $c_2\{2\}$ and $c_2\{4\}$ obtained from DD data with their estimates using SS. The value of $c_2\{4\}$ was multiplied by $100$.} \label{fig11} \end{figure} Now, we obtain $\xi_{2k,m}$ by equating Eqs.~\ref{qq10} and \ref{qxi}. Since we are interested in the $\beta_{2}$ terms, we seek an expression for $\xi_{2k,2}$ as a function of $\delta$ and $\beta_{2}$. This is given by: \begin{equation}\label{qqA16} \begin{split} \xi_2\equiv\xi_{2,2}^2&=\delta_{1,2}^2+\delta_{2,2}^2+2\frac{\delta_{1,2}\bar{v}_{2,0}}{\beta_2},\\ \xi_4\equiv\xi_{4,2}^4&=-\delta_{1,2}^4+\delta_{3,2}^4-4\frac{\delta_{1,2}^3\bar{v}_{2,0}}{\beta_2}\\&-6\frac{\delta_{1,2}^2\bar{v}_{2,0}^2}{\beta_2^2}-4\frac{\delta_{1,2}\bar{v}_{2,0}^3}{\beta_2^3}.
\end{split} \end{equation} Plugging $\delta_{1,est}=\delta_{1,2}$, $\delta_{2,est}=\delta_{2,2}^2$, and $\delta_{3,est}=\delta_{3,2}^4$ into Eq.~\ref{qqA16}, we arrive at: \begin{equation} \begin{split} \xi_2\equiv\xi_{2,2}^2&=\delta_{1,est}^2+\delta_{2,est}+2\frac{\delta_{1,est}\bar{v}_{2,0}}{\beta_2},\\ \xi_4\equiv\xi_{4,2}^4&=-\delta_{1,est}^4+\delta_{3,est}-4\frac{\delta_{1,est}^3\bar{v}_{2,0}}{\beta_2}\\&-6\frac{\delta_{1,est}^2\bar{v}_{2,0}^2}{\beta_2^2}-4\frac{\delta_{1,est}\bar{v}_{2,0}^3}{\beta_2^3}. \end{split} \end{equation} For the particular values listed in Table~\ref{tab1}, the coefficients $\xi_{2}$ and $\xi_{4}$ are found. \begin{table}[t!] \caption{The estimated coefficients in Eq.~\ref{qq10} are shown at different centralities.} \begin{supertabular}{| c | c | c | c |} \hline $\%$& $\delta_{1,est}$ & $\delta_{2,est}$ & $\delta_{3,est}$ \\ \hline $0-5$ & $0.014\pm0.0082$ & $0.020\pm0.0014$ & $-0.0002\pm0.00002$ \\ \hline $5-10$ & $0.001\pm0.0001$ & $0.022\pm0.0004$ & $-0.0004\pm0.00003$ \\ \hline $10-20$ & $0.0088\pm0.0006$ & $0.017\pm0.0003$ & $-0.0004\pm0.00003$ \\ \hline $20-30$ & $0.020\pm0.0006$ & $0.004\pm0.0002$ & $0.0015\pm0.00003$ \\ \hline \end{supertabular} \label{tab1} \end{table} We plot $c_{2}\{2\}$ and $c_{2}\{4\}$ in Fig.~\ref{fig11}. In this plot, the solid black and brown lines represent the true centrality dependence of the aforementioned quantities. Moreover, the dashed red and pink lines are derived from our estimation. There is a good agreement between the true and estimated values. Moreover, this figure shows that if we consider $\xi_2\approx\delta_{2,est}$ (blue dashed line) and $\xi_4\approx\delta_{3,est}$ (green dashed line), we can find a reasonable approximation for them from $0$ to $20\%$ centrality. Plugging Eq.~\ref{qcn} into Eq.~\ref{qq10} and using the approximation described, we obtain: \begin{equation} R_2\{2k\}_{D}-R_2\{2k\}_{0}=c_2\{2k\}_{D}-c_2\{2k\}_{0}. \end{equation} This leads to the same value of the averaged ellipticity for both SS and DD collisions, i.e. $\bar{v}_D\approx\bar{v}_S$ (see Sec.~\ref{sec6}). Since the studied nuclei have mass numbers close to one another, it is possible to access this information. To compare the observables of DD with those of SS, we study the ratio of $2k$-particle correlation functions: \begin{equation}\label{q19} \begin{split} \frac{c_2\{2\}_{D}}{c_2\{2\}_0}&=1+\frac{\xi_{2}\beta_2^{2}}{c_2\{2\}_0},\\ \frac{c_2\{4\}_{D}}{c_2\{4\}_0}&=1+\frac{\xi_{4}\beta_2^{4}}{c_2\{4\}_0}. \end{split} \end{equation} Keep in mind that we have only turned on the quadrupole deformation in Eq.~\ref{qcn}. Using the generated data for both SS (i.e., PbPb) and DD (i.e., UU), we obtain $c_2\{2\}_{D}/c_2\{2\}_0\approx2$ and $c_2\{4\}_{D}/c_2\{4\}_0\approx-5$ in the most central collisions. These values imply that we have $c_2\{2\}_0\approx\xi_{2}\beta_2^{2}$ and $c_2\{4\}_0\approx(-1/6)\xi_{4}\beta_2^{4}$. The effect of deformation on the 2- and 4-particle correlation functions is significant and cannot be ignored. \section{Ellipticity}\label{sec6} One of the main results from studying flow harmonics is that the averaged ellipticity $\bar{v}_{2}$ is non-zero. This leads us to look for an experimentally accessible estimation of $\bar{v}_{2}$. Since we are interested in $v_2$ for SS and DD collisions, we present a possible approach to observe this quantity. Let us start with the 2D distribution of $(v_{2,x},v_{2,y})$ in Fig.~\ref{fig9}. As it turns out, there is a non-vanishing $\bar{v}_{2}$ for collisions of deformed as well as spherical nuclei.
Despite the larger width of the $(v_{2,x},v_{2,y})$ distributions for DD collisions, the values of the averaged ellipticity are the same. However, the path to finding the estimates of $\bar{v}_2$ for SS and DD collisions is different. \begin{figure}[t!] \begin{tabular}{c} \includegraphics[scale=0.42]{12.pdf}\\ \includegraphics[scale=0.42]{13.pdf} \end{tabular} \caption{(Color online) Here we show a comparison of $R_2\{4\}$ with $R_2\{6\}$ in the top panel, and, in the bottom panel, the estimated values of $\bar{v}_{2}$ for different collisions as a function of centrality.} \label{fig10} \end{figure} We first present this estimation for deformed-nuclei collisions. To do this, we start with the closest estimate of the distribution $P_r(v_2)$ for DD collisions, which is given by $BG+P_2(v_2)$. This means that the higher-order correction terms, i.e., those involving $R_2\{6\}$, are very small, such that $R_2\{6\}\approx0$. To verify this, we plotted this cumulant in Fig.~\ref{fig10}. As demonstrated, the magnitude of $R_2\{4\}$ for various centralities is larger than that of $R_2\{6\}$. Moreover, at $0-5\%$ and $5-10\%$ centralities $R_2\{6\}$ is close to zero. Therefore, we estimate the value of $R_2\{2k\}$ for $k=1,2,3$ by considering: \begin{equation}\label{qq13} \begin{split} R_2\{2\}&=c_2\{2\}-\bar{v}_2^2\approx 0 \;\implies\; \bar{v}_2\{2\}\approx\left(c_2\{2\}\right)^{1/2},\\ &\hspace*{3cm}\text{or}\\ R_2\{4\}&=c_2\{4\}+\bar{v}_2^4\approx 0 \;\implies\; \bar{v}_2\{4\}\approx\left(-c_2\{4\}\right)^{1/4},\\ &\hspace*{3cm}\text{or}\\ R_2\{6\}&=c_2\{6\}-4\bar{v}_2^6\approx 0 \;\implies\; \bar{v}_2\{6\}\approx\left(c_2\{6\}/4\right)^{1/6}. \end{split} \end{equation} Focusing on the first condition, we find that in this case all the $\gamma_{2k}$ in Eq.~\ref{qqq8} diverge unless $R_2\{2k\}=0$. This leads to a delta function for $P(v_{2,x},v_{2,y})$, which is not compatible with experimental observation. As the bottom panel in Fig.~\ref{fig10} depicts, $\bar{v}_2\{2\}$ is not a suitable candidate for $\bar{v}_2$. Choosing the second line results in a Bessel-Gaussian distribution \cite{Jia:2022qgl}. This would imply that the behavior of SS and DD distributions is similar and that we would see no effect of nuclear deformity in a distribution analysis, in contrast to our conclusions so far. The mini panel in the bottom plot in Fig.~\ref{fig10} indicates that this estimation is not accurate in the most central collisions. Of course, $\bar{v}_2\{4\}$ is a suitable choice to estimate the averaged ellipticity at larger centralities. Finally, we arrive at the last line of Eq.~\ref{qq13}. This implies a truncation at $k=2$, in agreement with our results in Sec.~\ref{sec3}. To conclude this section, the closest estimate of $\bar{v}_2$ is given by $\bar{v}_2\{6\}$. In contrast to $\bar{v}_2\{4\}$, only $\bar{v}_2\{6\}$ explains $\bar{v}_2$ in the most central collisions, where the maximum deformity is expected to be observed. Since the PbPb data can be explained by the BG distribution, we find that $\bar{v}_{2,S}=\bar{v}_{2,S}\{4\}$. Combining this with the relation $\bar{v}_D\approx\bar{v}_S$ derived in Sec.~\ref{sec5}, one finds $\bar{v}_{2,D}=\bar{v}_{2,D}\{6\}=\bar{v}_{2,S}\{4\}$ as well. This means that we can determine the averaged ellipticity of DD collisions with the observables of SS ones. \\ \vspace*{.25cm} \section{Conclusions}\label{sec.con} Motivated by the collisions of deformed nuclei, in this paper, we studied the flow distribution of symmetric and deformed nuclei.
In the first part of this manuscript, we presented a systematic approach to calculating the corresponding cumulants for SS and DD collisions. It was shown that in the most central collisions there is no difference between different nuclei for $c_2\{2k\}|_{k>1}$. To be able to distinguish between different ions, we considered the effect of the shift parameter $\bar v_{n}$. Then, we scrutinized the effect of different forms of deformation, including the quadrupole $\beta_{2}$ as well as the octupole $\beta_{3}$, through the shifted cumulants. We observed that the shift parameter clearly manifests the differences between the cumulants of deformed and spherical nuclei. \par Using the information obtained from the cumulant studies, we calculated the corresponding distribution of flow harmonics. It was shown that, after keeping an appropriate number of terms, the resulting distribution described the data very well. Comparing the distributions of deformed and spherical nuclei reveals the effect of various kinds of deformation on flow harmonics. As it turns out, increasing the quadrupole magnitude $\beta_{2}$ results in a broader distribution compared to the symmetric one; see for example Fig.~\ref{fig7}. We further discussed the possibility of interpolating from spherical to deformed nuclei by including appropriate corrections; examining this idea, we could reproduce the deformed correlation functions as well as the cumulants with high precision. \par Finally, we discussed a possible way to measure the shift parameter through the analysis of different radial cumulants for deformed nuclei in central collisions. We observed that for deformed nuclei the most appropriate choice is the measurement of $\bar{v}_{2}\{6\}$. It would be interesting to extend this work to the collisions of $\text{Ru}$-$\text{Ru}$ and $\text{Zr}$-$\text{Zr}$ using full hydrodynamic simulations, which are more relevant for the isobar program. The aim of such studies is to disentangle the effect related to the chiral magnetic effect (CME) from the background. We postpone these subjects to future studies. \section*{Acknowledgment} We thank Jiangyong Jia and Giuliano Giacalone for their useful comments. We are thankful to Wilke van der Schee for helpful discussions and invaluable feedback. H.M.~thanks the CERN-TH group for support. H.M.~is funded by the Cluster of Excellence {\em Precision Physics, Fundamental Interactions, and Structure of Matter} (PRISMA$^+$ EXC 2118/1) funded by the German Research Foundation (DFG) within the German Excellence Strategy (Project ID 39083149).
\section{Introduction} Let $X$ be a smooth projective variety of dimension $n$ with Hodge numbers $h^{p,q} (X)$. It follows from the Hirzebruch-Riemann-Roch theorem that \begin{equation}\label{eq:LWIntro} \frac{d^2}{du^2} E (X; u,1)_{|u=1}=\frac{n(3n-5)}{12}c_n (X)+\frac{1}{6} c_1 (X) c_{n-1}(X) \end{equation} where $E( X; u,v)=\sum_{p,q} (-1)^{p+q}h^{p,q} (X) u^p v^q$, see Libgober and Wood \cite{LW} and also Borisov \cite[Proposition 2.2]{Borisov}. By duality, we get \begin{equation}\label{eq:VarianceDegreCohomologie} \sum_{p,q} (-1)^{p+q} h^{p,q}(X) (p-\frac{n}{2})^2 =\frac{n}{12}c_n (X)+\frac{1}{6} c_1 (X) c_{n-1}(X) \end{equation} If $X$ is more generally an $n$-dimensional projective variety with at most log-terminal singularities (we will focus in this paper on the toric case), Batyrev \cite{Batyrev3} has proved a stringy version of formula (\ref{eq:LWIntro}) \begin{equation}\label{eq:VarStringyIntro} \frac{d^2}{du^2} E_{st} (X; u,1)_{|u=1}=\frac{n(3n-5)}{12}e_{st} (X)+\frac{1}{6} c_{st}^{1,n}(X) \end{equation} where $E_{st}$ is the stringy $E$-function of $X$, $e_{st}$ is the stringy Euler number and $c_{st}^{1,n}(X)$ is a stringy version of $c_1 (X) c_{n-1}(X)$. On the singularity theory side, the expected mirror partners of toric varieties are the Givental-Hori-Vafa models \cite{Giv}, \cite{HV}, in general a class of Laurent polynomials. One associates to such functions their {\em spectrum at infinity}, a sequence $\alpha_{1},\cdots ,\alpha_{\mu}$ of rational numbers, suitable logarithms of the eigenvalues of the monodromy at infinity of the function involved (see \cite{Sab1}; the main features are recalled in section \ref{sec:SpecAlgebriquePolytope}). A prediction of mirror symmetry is that the spectrum at infinity of a given Givental-Hori-Vafa model is related to the degrees of the (orbifold) cohomology groups of its mirror variety (orbifold). So one can expect a formula similar to (\ref{eq:VarianceDegreCohomologie}) involving the spectrum at infinity of any tame regular function: the aim of this text is to look for such a counterpart. The key observation is that the spectrum at infinity of a Laurent polynomial can be described (under a tameness condition due to Kouchnirenko \cite{K}, see section \ref{sec:SpecAlgebriquePolytope}) with the help of the Newton filtration of its Newton polytope. Since a polytope determines a stacky fan \cite{BCS}, we are led to define a ``stacky'' version of the $E$-polynomial. Given a Laurent polynomial $f$ with (simplicial) Newton polytope $P$, global Milnor number $\mu$ and spectrum at infinity $\alpha_1 ,\cdots ,\alpha_{\mu}$, the program is thus as follows: \begin{itemize} \item to construct a stacky version of the $E$-polynomial, the {\em geometric spectrum} of $P$: we define $Spec_P^{geo} (z) :=(z-1)^{n} \sum_{v\in N} z^{-\nu (v)}$ where $\nu$ is the Newton function of the polytope $P$, see section \ref{sec:SpecGeometriquePolytope}.
This geometric spectrum is closely related to the Ehrhart series and $\delta$-vector of the polytope $P$, more precisely to their twisted versions studied by Stapledon \cite{Stapledon} and Mustata-Payne \cite{MustataPayne}; this function is also an orbifold Poincar\'e series (see corollary \ref{coro:SpecGeoOrbifold}), thanks to the description of orbifold cohomology given by Borisov, Chen and Smith \cite[Proposition 4.7]{BCS}, \item to show that this geometric spectrum is equal to the (generating function of the) spectrum at infinity of $f$ and this is done by showing that both functions are Hilbert-Poincar\'e series of isomorphic graded rings (see corollary \ref{coro:SpecGeoegalSpecAalg}); this gives the expected identification between the spectrum at infinity and orbifold degrees, \item to show a formula \begin{equation}\nonumber \frac{d^2}{dz^2} Spec_P^{geo} (z)_{|z=1}=\frac{n(3n-5)}{12}\mu +\frac{1}{6} \widehat{\mu} \end{equation} where $\widehat{\mu}$ is a linear combination of intersection numbers: this is done using the analog of Batyrev's stringy formula (\ref{eq:VarStringyIntro}), see theorem \ref{theo:VarianceSpectreGeometrique}. \end{itemize} At the end we get a version of (\ref{eq:VarianceDegreCohomologie}) for the spectrum at infinity of Laurent polynomials, see theorem \ref{theo:VarianceMiroirToriqueChampetre}: \begin{equation}\label{eq:VarianceIntro} \sum_{i=1}^{\mu} (\alpha_i -\frac{n}{2})^2 =\frac{n}{12}\mu +\frac{1}{6}\widehat{\mu} \end{equation} It should be emphasized that this formula is in essence produced by the Hirzebruch-Riemann-Roch theorem. In order to illustrate formula (\ref{eq:VarianceIntro}), assume that $N=\zit^{2}$ and that $P$ is a full dimensional reflexive lattice polytope in $N_{\mathbb{R}}$. Then we have the following well-known Noether's formula \begin{equation}\label{eq:Noether} 12 =\mu_P +\mu_{P^{\circ}} \end{equation} where $P^{\circ}$ is the polar polytope of $P$ and $\mu_{P}$ ({\em resp.} $\mu_{P^{\circ}}$) is the normalized volume of $P$ ({\em resp.} $P^{\circ}$), see equation (\ref{eq:MuVol}) (by Pick's formula, $\mu_P =\card (\partial P\cap N)$ if $P$ is reflexive). We show in section \ref{sec:NoetherFanoPolytope} that if $P$ is a Fano lattice polytope (a polytope is Fano if its vertices are primitive lattice points) we have $\widehat{\mu_{P}}=\mu_{P^{\circ}}$. From formula (\ref{eq:VarianceIntro}), we then get \begin{equation}\label{eq:VarianceIntroDim2} \sum_{i=1}^{\mu} (\alpha_i -1)^2 =\frac{1}{6}\mu_P +\frac{1}{6}\mu_{P^{\circ}} \end{equation} which is a generalization of Noether's formula (\ref{eq:Noether}): indeed, a reflexive polytope $P$ is Fano and its algebraic and/or geometric spectrum satisfies $\sum_{i=1}^{\mu} (\alpha_i -1)^2 =2$ ({\em version 3 of July 2016}: I am informed that an analogous result has been proved independently by Batyrev and Schaller, see \cite{BS}). Last, it follows from equation (\ref{eq:VarianceIntro}) that if $\widehat{\mu}\geq 0$ we have \begin{equation}\label{eq:ConjIntro} \frac{1}{\mu}\sum_{i=1}^{\mu}(\alpha_i -\frac{n}{2})^2 \geq \frac{\alpha_{max}-\alpha_{min}}{12} \end{equation} where $\alpha_{max}$ ({\em resp.} $\alpha_{min}$) denotes the maximal ({\em resp.} minimal) spectral value, because $\alpha_{max}-\alpha_{min} =n$ for Laurent polynomials. This inequality is expected to be true for any tame regular function: this is the global version of Hertling's conjecture about the variance of the spectrum, see section \ref{sec:Conj}.
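As a quick check of formula (\ref{eq:VarianceIntroDim2}) on the simplest possible example, take for $P$ the reflexive square $[-1,1]^2$ in $N_{\mathbb{R}}=\mathbb{R}^2$: its geometric spectrum is $1+6z+z^{2}$ (it coincides with the $\delta$-vector of $P$, see corollary \ref{coro:SpecGeoReflexif} below), $\mu_P =8$ and $\mu_{P^{\circ}}=4$. The following Python sketch (spectrum and volumes are hardcoded for this single example) verifies the equality:
\begin{verbatim}
from fractions import Fraction

# Reflexive square P = [-1,1]^2: spectrum 1 + 6z + z^2 as a multiset
spec = [0, 1, 1, 1, 1, 1, 1, 2]
mu_P  = len(spec)   # normalized volume of P  (= 2! * vol(P) = 8)
mu_Po = 4           # normalized volume of the polar square P°

lhs = sum(Fraction(a - 1)**2 for a in spec)   # sum of (alpha_i - n/2)^2, n = 2
rhs = Fraction(mu_P + mu_Po, 6)               # (mu_P + mu_{P°})/6
print(lhs, rhs)                               # both equal 2
\end{verbatim}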
Formula (\ref{eq:VarianceIntroDim2}) shows, for instance, that this inequality holds in the two-dimensional case if the Newton polytope of $f$ is Fano. This paper is organized as follows: in section \ref{sec:PolytopeVarTor} we recall the basic facts on polytopes and toric varieties that we will use. In section \ref{sec:CahierChargesSpectre}, we discuss what the spectrum of a polytope should be. The geometric spectrum is defined in section \ref{sec:SpecGeometriquePolytope} and the algebraic spectrum is defined in section \ref{sec:SpecAlgebriquePolytope}: both are compared in section \ref{sec:Comparaison}. The previous results are used in section \ref{sec:ConjectureVarianceSpectre} in order to get formula (\ref{eq:VarianceIntro}). We show Noether's formula (\ref{eq:VarianceIntroDim2}) for Fano polytopes in section \ref{sec:NoetherFanoPolytope}. Last, we use our results in order to motivate (and show in some cases) the conjecture about the variance of the algebraic spectrum in section \ref{sec:Conj}. This text owes much to Batyrev's work \cite{Batyrev2}, \cite{Batyrev3}. The starting point was \cite[Remark 3.13]{Batyrev2} and its close resemblance with Hertling's conjecture about the variance of the spectrum of an isolated singularity \cite{Her}: this link was previously alluded to in \cite{Her1}. \section{Polytopes and toric varieties (framework)} \label{sec:PolytopeVarTor} We give in this section an overview of the results that we will use and we set the notation. \subsection{Polytopes and reflexive polytopes} Let $N$ be the lattice $\zit^n$, $M$ its dual lattice, $\langle \ ,\ \rangle$ the pairing between $N_{\mathbb{R}}=N\otimes_{\zit}\mathbb{R}$ and $M_{\mathbb{R}}=M\otimes_{\zit}\mathbb{R}$. A full dimensional lattice polytope $P\subset N_{\mathbb{R}}$ is the convex hull of a finite subset of $N$ such that $\dim P=n$. If $P$ is a full dimensional lattice polytope containing the origin in its interior, there exists, for each facet (face of dimension $n-1$) $F$ of $P$, $u_F \in M_{\qit}$ such that \begin{equation}\label{eq:PresentationFacette} P\subset \{n\in N_{\mathbb{R}},\ \langle u_F ,n\rangle \leq 1 \}\ \mbox{and}\ F=P\cap \{n\in N_{\mathbb{R}},\ \langle u_F ,n\rangle = 1 \} \end{equation} This gives the hyperplane presentation \begin{equation}\label{eq:PresentationPolytope} P=\cap_F \{n\in N_{\mathbb{R}},\ \langle u_F ,n\rangle \leq 1 \} \end{equation} We define, for $v\in N_{\mathbb{R}}$, $\nu_F (v):=\langle u_F ,v\rangle $ and $\nu (v):=\max_{F}\nu_{F}(v)$ where the maximum is taken over the facets of $P$. \begin{definition}\label{def:FonctionSupport} The function $\nu : N_{\mathbb{R}}\rightarrow \mathbb{R}$ is the Newton function of $P$. \end{definition} If $P$ is a full dimensional lattice polytope in $N_{\mathbb{R}}$ containing the origin, the polytope \begin{equation}\nonumber P^{\circ}=\{m\in M_{\mathbb{R}},\ \langle m,n \rangle\leq 1\ \mbox{for all}\ n\in P\} \end{equation} is the {\em polar polytope} of $P$. The vertices of $P^{\circ}$ are in correspondence with the facets of $P$ {\em via} \begin{equation}\label{eq:CorrVerticePolar} u_{F}\ \mbox{vertex of}\ P^{\circ} \leftrightarrow \ F =P\cap \{x\in N_{\mathbb{R}}, \langle u_{F}, x\rangle =1\} \end{equation} A lattice polytope $P$ is {\em reflexive} if it contains the origin and if $P^{\circ}$ is a lattice polytope. All the polytopes considered in this paper are full dimensional lattice polytopes containing the origin in their interior.
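These objects are easy to compute in examples. The following Python sketch (two-dimensional and specific to a fixed triangle; the facet computation assumes that no two vertices are collinear with the origin) determines the $u_F$ of the presentation (\ref{eq:PresentationPolytope}), hence the vertices of $P^{\circ}$ by (\ref{eq:CorrVerticePolar}), and evaluates the Newton function $\nu$:
\begin{verbatim}
from fractions import Fraction
from itertools import combinations

# P = conv{(1,0), (0,1), (-1,-1)}: a reflexive triangle in N_R = R^2
verts = [(1, 0), (0, 1), (-1, -1)]

def facet_normal(a, b):
    # solve <u,a> = <u,b> = 1 for u in M_Q (2x2 Cramer's rule)
    det = a[0] * b[1] - a[1] * b[0]
    return (Fraction(b[1] - a[1], det), Fraction(a[0] - b[0], det))

# keep only genuine facets: every vertex must satisfy <u_F, v> <= 1
facets = [u for u in (facet_normal(a, b) for a, b in combinations(verts, 2))
          if all(u[0] * v[0] + u[1] * v[1] <= 1 for v in verts)]

def nu(v):
    # Newton function of P: nu(v) = max over facets of <u_F, v>
    return max(u[0] * v[0] + u[1] * v[1] for u in facets)

print(facets)                # (1,1), (1,-2), (-2,1): the vertices of P°
print(nu((0, 0)), nu((1, 0)), nu((2, 1)))    # 0, 1, 3
\end{verbatim}
Since the $u_F$ found here are lattice points, $P^{\circ}$ is a lattice polytope and this triangle is reflexive, in accordance with the definition above.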
For such a polytope $P$, we define its {\em normalized volume} \begin{equation}\label{eq:MuVol} \mu_P :=n! \vol (P) \end{equation} where the volume $\vol (P)$ is normalized such that the volume of the unit cube is equal to $1$. \subsection{Ehrhart polynomial and Ehrhart series} \label{sec:Ehrhart} Let $Q$ be a full dimensional lattice polytope. The function $\ell \mapsto Ehr_Q (\ell ):= \card ( (\ell Q )\cap M)$ is a polynomial of degree $n$, the {\em Ehrhart polynomial}. We have \begin{equation}\label{eq:serie Ehrhart} F_Q (z):= \sum_{m\geq 0}Ehr_Q (m) z^m =\frac{\delta_0 +\delta_1 z +\cdots +\delta_n z^n}{(1-z)^{n+1}} \end{equation} where the $\delta_j$'s are nonnegative integers \cite[Theorem 3.12]{BeckRobbins}: $F_Q$ is the {\em Ehrhart series} and the vector \begin{equation}\label{eq:deltavecteur} \delta =( \delta_0 ,\cdots ,\delta_n )\in\nit^{n+1} \end{equation} is the {\em $\delta$-vector} of the polytope $Q$. We have \begin{equation}\label{eq:DiversDelta} \delta_0 =1,\ \delta_1 =\card (Q\cap M)-(n+1),\ \delta_n =\card (\Inter(Q)\cap M) \end{equation} and \begin{equation}\label{eq:deltaVolume} \delta_0 +\cdots +\delta_n =n! \vol (Q) \end{equation} see \cite[Chapter 3]{BeckRobbins}. The $\delta$-vector gives a characterization of reflexive polytopes, see for instance \cite[Theorem 4.6]{BeckRobbins}: the polytope $Q$ is reflexive if and only if $\delta_i =\delta_{n-i}$ for $i=0,\cdots ,n$. \subsection{Toric varieties} \label{sec:VarPolFano} Let $\Delta$ be a fan in $N_{\mathbb{R}}$ and denote by $\Delta (i)$ the set of its cones of dimension $i$. The {\em rays} of $\Delta$ are its one-dimensional cones. Let $X := X_{\Delta}$ be the toric variety of the fan $\Delta$: $X$ is {\em simplicial} if each cone of $\Delta$ is generated by independent vectors of $N_{\mathbb{R}}$, {\em complete} if the support of its fan (the union of its cones) is $N_{\mathbb{R}}$. One can get toric varieties from polytopes in the following ways: a full dimensional lattice polytope $Q$ in $M_{\mathbb{R}}$ yields a toric variety $X_Q$, associated with the normal fan $\Sigma_Q$ of $Q$, which is a fan in $N$; alternatively, if $P\subset N_{\mathbb{R}}$ is a full dimensional lattice polytope containing the origin in its interior we get a complete fan $\Delta_P$ in $N_{\mathbb{R}}$ by taking the cones over the proper faces of $P$ and we will denote by $X_{\Delta_P}$ the associated toric variety. Both constructions are dual, see for instance \cite[Exercise 2.3.4]{CLS}: if $P^{\circ}$ is the polar polytope of the polytope $P$ in $N_{\mathbb{R}}$ then $\Delta_P$ is the normal fan of $\ell P^{\circ}$ where $\ell$ is an integer such that $\ell P^{\circ}$ is a lattice polytope and $X_{\Delta_P}=X_{\ell P^{\circ}}$. In particular, $X_{\Delta_P}=X_{P^{\circ}}$ if $P$ is reflexive. Recall that a projective normal toric variety $X$ is {\em Fano} ({\em resp.} {\em weak Fano}) if the anticanonical divisor $-K_X$ is $\qit$-Cartier and ample ({\em resp.} nef and big). The variety $X$ is {\em Gorenstein} if $K_X$ is a Cartier divisor. (Weak) Fano toric varieties play an important role in our vision of mirror symmetry, see section \ref{sec:ConstMiroirGen}. We will say that a full dimensional lattice polytope $P$ containing the origin in its interior is {\em Fano} if its vertices are primitive lattice points of $N$, {\em smooth Fano} if each of its facets has exactly $n$ vertices forming a basis of the lattice $N$. It should be emphasized that a Fano polytope is not necessarily smooth.
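For concreteness, the $\delta$-vector can be computed directly from lattice-point counts via (\ref{eq:serie Ehrhart}); here is a minimal Python sketch for the square $Q=[-1,1]^{2}$ (the point count is a brute-force box scan, adequate only for such small examples):
\begin{verbatim}
from itertools import product

n = 2
def ehr(m):
    # Ehr_Q(m) = card((mQ) cap Z^2) for the square Q = [-1,1]^2
    return sum(1 for _ in product(range(-m, m + 1), repeat=2))

series = [ehr(m) for m in range(n + 2)]     # 1, 9, 25, 49
# delta_k = coefficient of z^k in (1-z)^{n+1} * F_Q(z), for k = 0..n
binom = [1, -3, 3, -1]                      # coefficients of (1-z)^3
delta = [sum(binom[j] * series[k - j] for j in range(k + 1))
         for k in range(n + 1)]
print(delta)                                # [1, 6, 1]
\end{verbatim}
The output is palindromic, so $Q$ is reflexive by the characterization just recalled, and $\delta_0+\delta_1+\delta_2=8=2!\,\vol (Q)$, in accordance with (\ref{eq:deltaVolume}).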
{\em Unless otherwise stated, all toric varieties that we will consider are complete and simplicial.} \subsection{Stacky fans and orbifold cohomology} \label{sec:EventailChampetre} Let $\Delta$ be a complete simplicial fan, $\rho_1 ,\cdots ,\rho_r$ be its rays generated respectively by the primitive vectors $v_1 ,\cdots ,v_r$ of $N$. Choose $b_1 ,\cdots , b_r \in N$ whose images in $N_{\qit}$ generate the rays $\rho_1 ,\cdots ,\rho_r$: the data $\mathbf{\Delta} =(N, \Delta ,\{b_i\})$ is a {\em stacky fan}, see \cite{BCS}. In particular, let $P$ be a lattice polytope containing the origin such that $\Delta :=\Delta_P$ is simplicial: there are $a_i$ such that $b_i :=a_i v_i \in \partial P\cap N$ and we will call the stacky fan $\mathbf{\Delta} =(N, \Delta ,\{b_i\})$ the {\em stacky fan of $P$}. In this situation, we define, for a cone $\sigma\in \Delta$, \begin{itemize} \item $N_{\sigma}$ the subgroup generated by $b_i$, $\rho_i \subseteq\sigma$, \item $N(\sigma )=N/ N_{\sigma}$, \item the fan $\Delta/ \sigma$ in $N(\sigma )_{\qit}$: this is the set $\{\tilde{\tau}=\tau +(N_{\sigma})_{\qit}, \ \sigma\subseteq \tau, \tau\in\Delta \}$ \item $\boite (\sigma):=\{\sum_{\rho_{i}\subseteq\sigma}\lambda_i b_i,\ \lambda_i \in ]0,1[\}$ \end{itemize} One associates to this stacky fan a (separated) Deligne-Mumford stack ${\cal X}(\mathbf{\Delta})$, see \cite[Proposition 3.2]{BCS}. We will denote by $H_{orb}^{\bullet} ( {\cal X}(\mathbf{\Delta}), \qit )$ its orbifold cohomology (with rational coefficients) and by $A_{orb}^* ( {\cal X}(\mathbf{\Delta}))$ its orbifold Chow ring (with rational coefficients). By \cite[Proposition 4.7]{BCS} we have \begin{equation}\label{eq:OrbiCohDecomposition} H_{orb}^{2i} ({\cal X}(\mathbf{\Delta}), \qit )=\oplus_{\sigma\in\Delta }\oplus_{v\in \boite (\sigma)\cap N }H^{2(i-\nu (v))}(X_{\Delta/ \sigma }, \qit ) \end{equation} where $\nu$ is the Newton function of $P$ (see definition \ref{def:FonctionSupport}). \subsection{Batyrev's stringy functions} \label{sec:FonctionsFiliformes} Let $X_{\Delta}$ be a normal $\qit$-Gorenstein toric variety and $\rho :Y\rightarrow X_{\Delta}$ be a toric (log-)resolution defined by a refinement $\Delta'$ of $\Delta$, see for instance \cite[Proposition 11.2.4]{CLS}. The irreducible components of the exceptional divisor of $\rho$ are in one-to-one correspondence with the primitive generators $v_{1}' ,\cdots ,v_{q}'$ of the rays of $\Delta' (1)$ that do not belong to $\Delta (1)$ and in the formula \begin{equation}\label{eq:ResolutionDiviseurCanonique} K_{Y}=\rho^{*}K_{X_{\Delta}}+\sum_{i=1}^{q}a_{i}D_{i} \end{equation} we have $a_{i}=\varphi (v'_{i})-1$ where $\varphi$ is the support function of the divisor $K_{X_{\Delta}}$, see for instance \cite[Lemma 11.4.10]{CLS}. In our toric situation we have $a_{i}>-1$ because $\varphi (v'_{i})>0$. Recall the $E$-polynomial of a smooth variety $X$ defined by \begin{equation}\label{eq:Epolynome} E (X, u,v):=\sum_{p,q=0}^n (-1)^{p+q} h^{p,q} (X) u^p v^q \end{equation} where the $h^{p,q}(X)$'s are the Hodge numbers of $X$. It is possible to extend this definition to singular spaces having log-terminal singularities and to get {\em stringy} invariants that extend topological invariants of smooth varieties.
Here is the construction: let $\rho : Y\rightarrow X$ be a resolution of $X:=X_{\Delta}$ as above, $I=\{1,\cdots ,q\}$ and put, for any subset $J\subset I$, \begin{equation}\nonumber D_J := \cap_{j\in J}D_j\ \mbox{if}\ J\neq \emptyset ,\ D_J := Y \ \mbox{if}\ J= \emptyset\ \mbox{and}\ D_J^{\circ}=D_J - \bigcup_{j\in I-J}D_j \end{equation} The following definition is due to Batyrev \cite{Batyrev2} (we assume that the product over $\emptyset$ is $1$; recall that $a_i >-1$): \begin{definition}\label{def:StringyFunction} Let $X$ be a toric variety. The function \begin{equation}\label{eq:DefStringyFunction} E_{st}(X, u,v):=\sum_{J\subset I}E( D_J^{\circ}, u,v)\prod_{j\in J}\frac{uv-1}{(uv)^{a_j +1} -1} \end{equation} is the stringy $E$-function of $X$. The number \begin{equation}\label{def:StringyeulerNumber} e_{st}(X):=\lim_{u,v\rightarrow 1}E_{st}(X, u,v) \end{equation} is the stringy Euler number. \end{definition} \noindent The stringy $E$-function can be defined using motivic integrals, see \cite{Batyrev2} and \cite{Veys}. By \cite[Theorem 3.4]{Batyrev2}, $E_{st}(X,u ,v)$ does not depend on the resolution. In our setting, $E_{st}$ depends only on the variable $z:=uv$, and we will write $E_{st}(X, z)$ instead of $E_{st}(X, u,v)$. In section \ref{sec:InterpretationResolutionSing} we will use a modified version of the stringy $E$-function in order to compute the geometric spectrum of a polytope. \section{The spectrum of a polytope} \label{sec:CahierChargesSpectre} Let $P$ be a full dimensional lattice polytope in $N_{\mathbb{R}}$. In this text, a spectrum $Spec_P$ of $P$ is {\em a priori} an ordered sequence of rational numbers $\alpha_{1}\leq \cdots\leq \alpha_{\mu}$ that we will identify with the generating function $Spec_P (z) :=\sum_{i=1}^{\mu}z^{\alpha_{i}}$. The specifications are the following ($d(\alpha_i )$ denotes the multiplicity of $\alpha_i$ in $Spec_P$): \begin{itemize} \item {\em Rationality} : the $\alpha_i$'s are rational numbers, \item {\em Positivity} : the $\alpha_i$'s are positive numbers, \item {\em Poincar\'e duality} : $Spec_P (z)=z^n Spec_P (z^{-1})$, \item {\em Volume} : $\lim_{z\rightarrow 1} Spec_P (z)= n! \vol(P)=:\mu_{P}$ \item {\em Normalisation} : $d(\alpha_1 )=1$ \item {\em Modality (Lefschetz)} : $d(\alpha_1) \leq d(\alpha_2 )\leq\cdots \leq d(\alpha_{\ell})$ if $\alpha_{\ell}\leq [\frac{n}{2}]$ \end{itemize} In particular, $Spec_P$ is contained in $[0,n]$ and $\sum_{i=1}^{\mu}\alpha_{i}=\frac{n}{2}\mu_P $. Basic example: if $P$ is a smooth Fano polytope in $\mathbb{R}^n$, the Poincar\'e polynomial $\sum_{i=0}^n b_{2i}(X_{\Delta_P}) z^i$ is a spectrum of $P$. \section{Geometric spectrum of a polytope} \label{sec:SpecGeometriquePolytope} We define here the geometric spectrum of a polytope and we give several methods in order to compute it. It will follow that the geometric spectrum is indeed a spectrum in the sense of the previous section. Recall that the toric varieties considered here are assumed to be simplicial. \subsection{The geometric spectrum} Let $P$ be a full dimensional lattice polytope in $N_{\mathbb{R}}$, containing the origin in its interior. Recall the Newton function $\nu$ of $P$ of definition \ref{def:FonctionSupport}. \begin{definition}\label{def:specgeopoltope} The function \begin{equation}\nonumber Spec_P^{geo} (z) :=(z-1)^{n} \sum_{v\in N} z^{-\nu (v)} \end{equation} is the geometric spectrum of the polytope $P$. The number $e_P :=\lim_{z\rightarrow 1} Spec_P^{geo} (z)$ is the geometric Euler number of $P$.
\end{definition} \noindent It will follow from proposition \ref{prop:SpecGeohFunction} that $Spec_P^{geo} (z)=\sum_{i=1}^{e_P} z^{\beta_{i}}$ for an ordered sequence $\beta_{1}\leq\cdots\leq \beta_{e_P}$ of non-negative rational numbers. We shall also say that the sequence $\beta_{1}, \beta_{2},\cdots , \beta_{e_{P}}$ is the {\em geometric spectrum} of the polytope $P$. We shall also see that $e_P$ is the normalized volume of $P$. \subsection{Various interpretations} We give three methods to compute $Spec_P^{geo}$, showing that it yields finally a spectrum of $P$ in the sense of section \ref{sec:CahierChargesSpectre}. The first one and the third one are inspired by the works of Mustata-Payne \cite{MustataPayne} and Stapledon \cite{Stapledon}. The second one is inspired by Batyrev's stringy $E$-functions. \subsubsection{First interpretation: fundamental domains} Let $P$ be a full dimensional lattice polytope in $N_{\mathbb{R}}$, containing the origin in its interior, $\Delta:=\Delta_P$ be the corresponding complete fan as in section \ref{sec:VarPolFano}. We identify each vertex of $P$ with an element $b_i \in N$. If $\sigma \in \Delta (r)$ is generated by $b_1 ,\cdots ,b_r$, set \begin{equation}\nonumber \Box (\sigma ):=\{\sum_{i=1}^{r} q_i b_i,\ q_i \in [0,1[,\ i=1,\cdots ,r\}, \end{equation} and \begin{equation}\nonumber \boite (\sigma ):=\{\sum_{i=1}^{r} q_i b_i,\ q_i \in ]0,1[,\ i=1,\cdots ,r\} \end{equation} ($\boite (\sigma )$ has already been defined in section \ref{sec:EventailChampetre}). \begin{lemma}\label{lemma:SpecGeoBoxDimn} We have \begin{equation}\label{eq:DescriptionSpecGeon} Spec_P^{geo} (z)=\sum_{r=0}^{n}(z-1)^{n-r}\sum_{\sigma\in \Delta (r)} \sum_{v\in \Box (\sigma )\cap N}z^{\nu (v)} \end{equation} and $e_P =n!\vol(P)=:\mu_P$. \end{lemma} \begin{proof} Let $\sigma\in \Delta (r)$. A lattice element $v\in \stackrel{\circ}{\sigma}$ has one of the following decompositions: \begin{itemize} \item $v =w +\sum_{i=1}^r \lambda_i b_i $ with $w\in \boite (\sigma)\cap N$ and $\lambda_i \geq 0$ for all $i$, \item $v =w +\sum_{i=1}^r \lambda_i b_i $ with $w\in \boite^c (\sigma )\cap N-\{0\}$, $\lambda_i \geq 0$ for all $i\geq 2$ and $\lambda_1 >0$, \item $v= \sum_{i=1}^r \lambda_i b_i$ where $\lambda_i >0$ for all $i$ \end{itemize} where $\boite^c (\sigma )$ is the complement of $\boite (\sigma )$ in $\Box (\sigma )$. We get \begin{equation}\label{eq:intermediaireBox} (z-1)^r \sum_{v\in \stackrel{\circ}{\sigma}\cap N}z^{-\nu (v)} =\sum_{v\in \boite (\sigma)\cap N }z^{r-\nu (v)} + \sum_{v\in \boite^c (\sigma )\cap N-\{0\}} z^{r-1-\nu (v)} +1 \end{equation} because \begin{itemize} \item $\sum_{\lambda_1 ,\cdots, \lambda_r \geq 0}z^{-\nu (w)}z^{-\lambda_1}\cdots z^{-\lambda_r}=\frac{z^{r-\nu (w)}}{(z-1)^r}$ if $w\in \boite (\sigma)\cap N$, \item $\sum_{\lambda_1 >0 ,\lambda_2 ,\cdots, \lambda_r \geq 0}z^{-\nu (w)}z^{-\lambda_1}\cdots z^{-\lambda_r}=\frac{z^{r-1-\nu (w)}}{(z-1)^r}$ if $w\in \boite^c (\sigma )\cap N - \{ 0\}$, \item $\sum_{\lambda_1 ,\cdots, \lambda_r > 0}z^{-\lambda_1}\cdots z^{-\lambda_r}=\frac{1}{(z-1)^r}$ \end{itemize} (and we use the fact that $\nu (b_i )=1$). Moreover, \begin{itemize} \item $\alpha\in \nu (\boite (\sigma)):=\{\nu (v), v\in \boite (\sigma)\}$ if and only if $r-\alpha\in \nu (\boite (\sigma))$, \item $\alpha\in \nu (\boite^c (\sigma )):=\{\nu (v), v\in \boite^c (\sigma ) \}$ if and only if $r-1-\alpha\in \nu (\boite^c (\sigma ))$. \end{itemize} because $q_i \in ]0,1[$ if and only if $1-q_i \in ]0,1[$.
We then deduce from (\ref{eq:intermediaireBox}) that \begin{equation}\label{eq:intermediaireBoxBis} (z-1)^n \sum_{v\in \stackrel{\circ}{\sigma}\cap N}z^{-\nu (v)} =(z-1)^{n-r}\sum_{v\in \Box (\sigma)\cap N }z^{\nu (v)} \end{equation} for any $\sigma\in\Delta (r)$. The expected equality follows because the relative interiors of the cones of the complete fan $\Delta$ give a partition of its support. For the assertion about the Euler number, notice that \begin{equation}\nonumber \lim_{z\rightarrow 1} Spec_P^{geo} (z)=\sum_{\sigma\in \Delta (n)} \sum_{v\in \Box (\sigma )\cap N}1= n!\vol(P) \end{equation} because the normalized volume of $\sigma \cap \{v\in N_{\mathbb{R}},\ \nu (v)\leq 1\}$ is equal to the number of lattice points in $\Box (\sigma )$. \end{proof} \begin{proposition}\label{prop:SpecGeohFunction} We have \begin{equation}\nonumber Spec_P^{geo} (z)=\sum_{\sigma\in \Delta } \sum_{v\in \boite (\sigma )\cap N}h_{\sigma }(z) z^{\nu (v)} \end{equation} where $h_{\sigma}(z):=\sum_{\sigma \subseteq \tau} (z-1)^{n-\dim\tau}$. \end{proposition} \begin{proof} The expected equality follows from (\ref{eq:DescriptionSpecGeon}). \end{proof} \noindent It turns out that $h_{\sigma}(z)$ is the Hodge-Deligne polynomial of the orbit closure $V(\sigma )$ (as defined for instance in \cite[page 121]{CLS}) of the orbit $O(\sigma )$. Because $V(\sigma )$ is a toric variety, the coefficients of $h_{\sigma}(z)$ are non-negative natural numbers (see for instance \cite[Lemma 2.4]{Stapledon} and the references therein) and we get $Spec_P^{geo} (z)=\sum_{i=1}^{\mu_P} z^{\beta_i}$ for a sequence $\beta_1 ,\cdots ,\beta_{\mu_P}$ of nonnegative rational numbers. We will also call this sequence the {\em geometric spectrum of $P$}. \begin{corollary}\label{coro:SymetrieSpecGeo} The geometric spectrum satisfies $z^n Spec^{geo}_P (z^{-1})=Spec^{geo}_P (z)$. \end{corollary} \begin{proof} Because $z^{n-\dim\sigma}h_{\sigma}(z^{-1})=h_{\sigma}(z)$ (see again \cite[Lemma 2.4]{Stapledon}) and $\sum_{i}(1-m_{i})b_{i}\in \boite (\sigma )$ if $\sum_{i}m_{i}b_{i}\in \boite (\sigma )$ the assertion follows from proposition \ref{prop:SpecGeohFunction}. \end{proof} \begin{corollary}\label{coro:SpecGeoOrbifold} Let $P$ be a full dimensional simplicial lattice polytope in $N_{\mathbb{R}}$, containing the origin in its interior, and $\mathbf{\Delta} =(N, \Delta_P ,\{b_i\})$ be its stacky fan. Then \begin{equation}\nonumber Spec_{P}^{geo}(z)=\sum_{\alpha\in\qit} \dim_{\cit}H^{2\alpha}_{orb}({\cal X}(\mathbf{\Delta}),\cit) z^{\alpha} \end{equation} In words, the geometric spectrum of $P$ is the Hilbert-Poincar\'e series of the graded vector space $H^{2*}_{orb}({\cal X}(\mathbf{\Delta}),\cit)$. \end{corollary} \begin{proof} We have $H_{orb}^{2i} ({\cal X}(\mathbf{\Delta}), \qit )=\oplus_{\sigma\in\Delta}\oplus_{v\in\boite (\sigma)\cap N } H^{2(i-\nu (v))}(X_{\Delta/ \sigma}, \qit )$, see formula (\ref{eq:OrbiCohDecomposition}). 
It follows that the orbifold degrees are $\alpha =j+\nu (v)$ where $v\in \boite (\sigma )$ and $j=0,\cdots , n-\dim\sigma$ and we get \begin{equation}\nonumber \sum_{\alpha}\dim_{\qit} H_{orb}^{2\alpha}({\cal X}(\mathbf{\Delta}), \qit )z^{\alpha}=\sum_{\sigma\in\Delta}\sum_{v\in\boite (\sigma)\cap N } \sum_{j=0}^{n-\dim \sigma}\dim_{\qit} H^{2j}(X_{\Delta/ \sigma}, \qit )z^{j} z^{\nu (v)} \end{equation} Now, $\sum_{j=0}^{n-\dim \sigma}\dim_{\qit} H^{2j}(X_{\Delta/ \sigma}, \qit )z^{j}=h_{\sigma} (z)$ because the orbit closure $V(\sigma )$ and the toric variety $X_{\Delta /\sigma}$ are isomorphic (see for instance \cite[Proposition 3.2.7]{CLS}) and we get \begin{equation}\nonumber \sum_{\alpha}\dim_{\qit} H_{orb}^{2\alpha}({\cal X}(\mathbf{\Delta}), \qit )z^{\alpha}= \sum_{\sigma\in\Delta}\sum_{v\in\boite (\sigma)\cap N } h_{\sigma}(z) z^{\nu (v)} \end{equation} The assertion then follows from proposition \ref{prop:SpecGeohFunction}. \end{proof} \noindent It follows that if $P$ is smooth we have $Spec_{P}^{geo}(z)=\sum_{i=0}^n \dim_{\cit}H^{2i}(X_{\Delta_P}, \cit) z^{i}$: the geometric spectrum is thus the Poincar\'e polynomial in this case. To sum up, the geometric spectrum of a simplicial polytope is a spectrum in the sense of section \ref{sec:CahierChargesSpectre}. Rationality, positivity and the volume property are given by lemma \ref{lemma:SpecGeoBoxDimn} and proposition \ref{prop:SpecGeohFunction}, symmetry (Poincar\'e duality) by corollary \ref{coro:SymetrieSpecGeo} and modality by corollary \ref{coro:SpecGeoOrbifold}. \subsubsection{Second interpretation: stacky $E$-function of a polytope (resolution of singularities)} \label{sec:InterpretationResolutionSing} Let $P$ be a full dimensional lattice polytope in $N_{\mathbb{R}}$, containing the origin in its interior. Let $\rho :Y\rightarrow X$ be a resolution of $X:=X_{\Delta_P}$ as in section \ref{sec:FonctionsFiliformes}, $\rho_1 ,\cdots ,\rho_r$ be the rays of $Y$ with primitive generators $v_1 ,\cdots ,v_r$ and associated divisors $D_1 ,\cdots, D_r$. Put, for a subset $J\subset I:=\{1,\cdots ,r\}$, $D_J := \cap_{j\in J}D_j$ if $ J\neq \emptyset$ and $D_J := Y$ if $J= \emptyset$ and define \begin{equation}\label{eq:DefStringyFunctionChampetre} E_{st, P}(z):=\sum_{J\subset I}E( D_J , z)\prod_{j\in J}\frac{z-z^{\nu_j}}{z^{\nu_j}-1} \end{equation} where $\nu_j =\nu (v_j)$ and $\nu$ is the Newton function of $P$ of definition \ref{def:FonctionSupport}. \begin{proposition}\label{prop:SpecGeoEgalEstP} We have $Spec_{P}^{geo}(z)= E_{st, P}(z)$. In particular, $E_{st, P}(z)$ does not depend on the resolution $\rho$. \end{proposition} \begin{proof} Using the notations of section \ref{sec:FonctionsFiliformes} we have $E(D_J^{\circ}, z)=\sum_{J' \subset J}(-1)^{|J|-|J'|}E(D_{J'},z)$ and \begin{equation}\nonumber E_{st, P}(z)=\sum_{J\subset I}E( D_J^{\circ} , z)\prod_{j\in J}\frac{z-1}{z^{\nu_j}-1} \end{equation} as in \cite[Proof of theorem 3.7]{Batyrev2}. Let $\sigma$ be a smooth cone of $\Delta'$, the fan of $Y$, generated by $v_{i_1},\cdots , v_{i_r}$ and $v\in \stackrel{\circ}{\sigma}$: we have $v=a_1 v_{i_1}+\cdots +a_r v_{i_r}$ for $a_1 ,\cdots ,a_r >0$ and $\nu (v)=a_1 \nu (v_{i_1} )+\cdots +a_r \nu (v_{i_r})$. Thus $$\sum_{v\in\stackrel{\circ}{\sigma}\cap N}z^{-\nu (v)}= \frac{1}{z^{\nu (v_{i_1} )}-1}\cdots \frac{1}{z^{\nu (v_{i_r} )}-1}$$ With these two observations in mind, the proof of the proposition is similar to the one of \cite[Theorem 4.3]{Batyrev2}.
\end{proof} \begin{remark} Applying Poincar\'e duality to the smooth subvarieties $D_J$, we get once again the symmetry relation $z^n Spec^{geo}_P (z^{-1})=Spec^{geo}_P (z)$ of corollary \ref{coro:SymetrieSpecGeo}. \end{remark} \subsubsection{Third interpretation: twisted $\delta$-vector} Let $P$ be a full dimensional lattice polytope in $N_{\mathbb{R}}$, containing the origin in its interior. Following \cite{Stapledon}, we define $$F^0_P (z)=\sum_{m\geq 0 }\sum_{v\in mP\cap N}z^{\nu (v)-\lceil \nu (v)\rceil +m}$$ which is a twisted version of the Ehrhart series $F_P (z)$ defined in section \ref{sec:Ehrhart}. \begin{proposition}\label{prop:SpecGeodelta0} We have $Spec_{P}^{geo}(z)=(1-z)^{n+1} F^0_P (z)$. \end{proposition} \begin{proof} Notice first that $v\in mP$ if and only if $\nu (v)\leq m$: this follows from the presentation (\ref{eq:PresentationPolytope}) and the definition of the Newton function $\nu$. We thus have $$F^0_P (z^{-1}) =\sum_{m\geq 0 }\sum_{\nu (v)\leq m}z^{-\nu (v)+\lceil \nu (v)\rceil -m} =\sum_{v\in N}\sum_{\lceil \nu (v)\rceil \leq m}z^{-\nu (v)+\lceil \nu (v)\rceil -m}= \frac{1}{1-z^{-1}} \sum_{v\in N}z^{-\nu (v)}$$ and this gives $(z-1)^n (1-z^{-1})F^0_P (z^{-1})=Spec_{P}^{geo}(z)$. Thus $$(1-z)^{n+1} F^0_P (z)=z^n Spec_{P}^{geo}(z^{-1})=Spec_{P}^{geo}(z)$$ where the last equality follows from corollary \ref{coro:SymetrieSpecGeo}. \end{proof} \begin{corollary}\label{coro:SpecGeo01} The part of the geometric spectrum contained in $[0,1[$ is the sequence $\nu (v),\ v\in \Inter P\cap N$. In particular, the multiplicity of $0$ in the geometric spectrum is equal to one. Moreover the multiplicity of $1$ in $Spec^{geo}_{P}$ is equal to $\card (\partial P\cap N )-n$. \end{corollary} \begin{proof} Inspection of the coefficients of $z^{a}$, $a\leq 1$, in the formula of proposition \ref{prop:SpecGeodelta0} (see also \cite[Lemma 3.13]{Stapledon}). \end{proof} If $P$ is reflexive, we have the following link between the $\delta$-vector of $P$ from section \ref{sec:Ehrhart} and its geometric spectrum, see also \cite{MustataPayne}: \begin{corollary}\label{coro:SpecGeoReflexif} Let $P$ be a reflexive full dimensional lattice polytope containing the origin in its interior. Then $Spec_{P}^{geo}(z)=\delta_0 +\delta_1 z +\cdots +\delta_n z^n$ where $\delta =(\delta_0 ,\cdots ,\delta_n )$ is the $\delta$-vector of $P$. \end{corollary} \begin{proof} By (\ref{eq:CorrVerticePolar}) we have $\nu (v)\in\nit$ for all $v\in N$ because $P$ is reflexive. We thus get $F^0_P (z)=F_P (z)$ where $F_P (z)$ is the Ehrhart series of $P$ of section \ref{sec:Ehrhart} because $$F_P (z)=\sum_{m\geq 0 }\card (mP\cap N) z^{m}=\sum_{m\geq 0 }\sum_{v\in mP\cap N}z^{m}$$ By proposition \ref{prop:SpecGeodelta0} we have $Spec_{P}^{geo}(z)=(1-z)^{n+1}F_P (z)$ and we use formula (\ref{eq:serie Ehrhart}). \end{proof} \section{Algebraic spectrum of a polytope} \label{sec:SpecAlgebriquePolytope} Singularity theory associates to a (tame) Laurent polynomial function $f$ a {\em spectrum at infinity}, see \cite{Sab1}. We recall its definition and its main properties in section \ref{sec:SpecPolLaurent}. We can shift this notion to the Newton polytope $P$ of $f$ ($P$ is assumed to be simplicial) and get in this way the {\em algebraic spectrum} of $P$. In order to motivate the next sections, we describe the Givental-Hori-Vafa models \cite{Giv}, \cite{HV} which are the expected mirror partners of toric varieties. In order to make the text as self-contained as possible, we first recall Kouchnirenko's results.
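Before doing so, let us point out that proposition \ref{prop:SpecGeodelta0} lends itself to direct computation. The following Python sketch evaluates the truncated twisted Ehrhart series of the simplicial, non-reflexive triangle $P$ with vertices $(1,0)$, $(0,1)$, $(-1,-3)$, whose facet normals $(1,1)$, $(1,-2/3)$, $(-4,1)$ are hardcoded (the truncation level and bounding box are crude but sufficient for the exponents $\leq n$):
\begin{verbatim}
from fractions import Fraction
from math import ceil
from collections import defaultdict
from itertools import product

n = 2
# facet normals u_F of P = conv{(1,0),(0,1),(-1,-3)}
facets = [(Fraction(1), Fraction(1)),
          (Fraction(1), Fraction(-2, 3)),
          (Fraction(-4), Fraction(1))]

def nu(v):
    return max(u[0] * v[0] + u[1] * v[1] for u in facets)

M = n + 2               # truncation level: exact for exponents <= n
F0 = defaultdict(int)   # twisted Ehrhart series, {exponent: coefficient}
for m in range(M + 1):
    for v in product(range(-3 * M, 3 * M + 1), repeat=2):  # box around mP
        a = nu(v)
        if a <= m:      # v in mP  <=>  nu(v) <= m
            F0[a - ceil(a) + m] += 1

spec = defaultdict(int) # multiply F0 by (1-z)^{n+1} = 1 - 3z + 3z^2 - z^3
for j, b in enumerate([1, -3, 3, -1]):
    for e, c in F0.items():
        if e + j <= n:
            spec[e + j] += b * c
print(sorted((e, c) for e, c in spec.items() if c))
# [(0,1), (2/3,1), (1,1), (4/3,1), (2,1)]
\end{verbatim}
The result, $Spec_P^{geo}(z)=1+z^{2/3}+z+z^{4/3}+z^{2}$, agrees with corollary \ref{coro:SpecGeo01}: the interior lattice points of $P$ are $(0,0)$ and $(0,-1)$, with $\nu$-values $0$ and $2/3$, and $\card (\partial P\cap N)-n=3-2=1$.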
\subsection{Preliminaries: Kouchnirenko's framework} \label{sec:K} We briefly recall the setting of \cite{K}. Let $f: (\cit^*)^n \rightarrow \cit$ be a Laurent polynomial, $f (\underline{u})=\sum_{a\in\zit^n} c_a \underline{u}^a$ where $\underline{u}^a := u_1^{a_1}\cdots u_n^{a_n}$. The Newton polytope $P$ of $f$ is the convex hull of the multi-indices $a$ such that $c_a \neq 0$. We say that $f$ is {\em convenient} if $P$ contains the origin in its interior, {\em nondegenerate} if, for any face $F$ of $P$, the system $$u_1 \frac{\partial f_F}{\partial u_1}=\cdots =u_n \frac{\partial f_F}{\partial u_n}=0$$ has no solution on $(\cit^* )^n$ where $f_F (\underline{u})=\sum_{a\in F\cap \zit^n} c_a \underline{u}^a$, the sum being taken over the multi-indices $a$ such that $c_a \neq 0$. A convenient and nondegenerate Laurent polynomial $f$ has only isolated critical points and its global Milnor number $\mu_f$ (the number of critical points with multiplicities) is $\mu_{P}:=n! \vol (P)$. Moreover, $f$ is tame in the sense that the set outside which $f$ is a locally trivial fibration consists of critical values of $f$, and these critical values belong to this set only because of the critical points at finite distance. \subsection{Givental-Hori-Vafa models and mirror symmetry} \label{sec:ConstMiroirGen} Let $N=\zit^n$, $M$ be the dual lattice, $\Delta$ be a complete and simplicial fan and $v_1 ,\cdots ,v_r$ be the primitive generators of its rays. Consider the exact sequence \begin{equation}\nonumber 0\longrightarrow \zit^{r-n}\stackrel{\psi}{\longrightarrow} \zit^r \stackrel{\varphi}{\longrightarrow} \zit^n \longrightarrow 0 \end{equation} where $\varphi (e_i) =v_i$ for $i=1,\cdots ,r$ ($(e_i )$ denotes the canonical basis of $\zit^r$) and $\psi$ describes the relations between the $v_i$'s. Applying $Hom_{\zit} (--, \cit^* )$ to this exact sequence, we get \begin{equation}\nonumber 1\longrightarrow (\cit^*)^{n}\stackrel{}{\longrightarrow} (\cit^* )^r \stackrel{\pi}{\longrightarrow} (\cit^* )^{r-n} \longrightarrow 1 \end{equation} where \begin{equation}\label{eq:defpi} \pi (u_1 ,\cdots , u_r )= (q_1 ,\cdots , q_{r-n})= (u_1^{a_{1,1}}\cdots u_r^{a_{r,1}},\cdots , u_1^{a_{1, r-n}}\cdots u_r^{a_{r, r-n}}) \end{equation} and the integers $a_{i,j}$ satisfy $\sum_{j=1}^r a_{j, i} v_j =0$ for $i=1,\cdots , r-n$. The {\em Givental-Hori-Vafa model} of the toric variety $X_{\Delta}$ is the function \begin{equation}\nonumber u_1 +\cdots + u_r \ \mbox{restricted to}\ U:= \pi^{-1}(q_1 ,\cdots , q_{r-n}) \end{equation} We will denote it by $f_{X_{\Delta}}$. The stacky version of this construction is straightforward: replace the $v_i$'s by the $b_i$'s. If $\Delta$ contains a smooth cone, the function $f_{X_{\Delta}}$ is easily described: \begin{proposition}\label{prop:HVLaurent} Assume that $(v_{1},\cdots , v_{n})$ is the canonical basis of $N$. Then $f_{X_{\Delta}}$ is the Laurent polynomial defined on $(\cit^*)^n$ by \begin{equation}\nonumber f_{X_{\Delta}} (u_1 ,\cdots , u_n )= u_1 +\cdots + u_n + \sum_{i=n+1}^r q_i u_1^{v^i_1}\cdots u_n^{v^i_n} \end{equation} if $v_i = (v_1^i ,\cdots , v_n^i )\in\zit^{n}$ for $i=n+1,\cdots ,r$. \end{proposition} \begin{proof} We have $v_i = \sum_{j=1}^{n}v_j^i v_j$ for $i=n+1,\cdots ,r$ and we use presentation (\ref{eq:defpi}).
\end{proof} To a convenient and nondegenerate Laurent polynomial $f$ one can attach a differential system (see \cite{DoSa1} and also \cite{DoMa} for explicit computations on weighted projective spaces) and we say that $f$ is a mirror partner of a variety $X$ if this differential system is isomorphic to the one associated with the (small quantum, orbifold) cohomology of $X$. If $f$ is the mirror partner of a smooth variety $X$ the following properties are in particular expected (non-exhaustive list): \begin{itemize} \item the Milnor number of $f$ is equal to the rank of the cohomology of $X$, \item the spectrum at infinity of $f$ (see section \ref{sec:SpecPolLaurent} below) is equal to half of the degrees of the cohomology groups of $X$, \item multiplication by $f$ on its Jacobi ring yields the cup-product by $c_1 (X)$ on the cohomology algebra of $X$. \end{itemize} The first thing to do is to compare the dimension of the Jacobi ring of $f_{X_{\Delta}}$, hence its Milnor number $\mu_{f_{X_{\Delta}}}$, and the rank of the cohomology algebra of $X_{\Delta}$. In the smooth case, we have equality if (and only if) $X_{\Delta}$ is weak Fano. If $X_{\Delta}$ is smooth, complete, but not weak Fano we have $\mu_{f_{X_{\Delta}}}>\chi (X_{\Delta})$: see section \ref{ex:SurfacesHirz} for a picture of this phenomenon. In the singular simplicial case ({\em i.e.} $X$ is an orbifold), cohomology should be replaced by orbifold cohomology, see section \ref{sec:Comparaison}: the rank of the orbifold cohomology is not a number of cones but a normalized volume, and the cohomology degrees are the orbifold degrees. \subsection{The spectrum at infinity of a tame Laurent polynomial} \label{sec:SpecPolLaurent} We assume in this section that $f$ is a convenient and nondegenerate Laurent polynomial, defined on $U:=(\cit^{*})^n$, with global Milnor number $\mu$. For the (very small) $D$-module part, we use the notations of \cite[2.c]{DoSa1}. Let $G$ be the Fourier-Laplace transform of the Gauss-Manin system of $f$, $G_{0}$ be its Brieskorn lattice ($G_0$ is indeed a free $\cit [\theta ]$-module because $f$ is convenient and nondegenerate and $G=\cit [\theta ,\theta^{-1}]\otimes G_0$, see \cite[Remark 4.8]{DoSa1}) and $V_{\bullet}$ be the $V$-filtration of $G$ at infinity, that is along $\theta^{-1}=0$. From these data we get by projection a $V$-filtration on the $\mu$-dimensional vector space $\Omega_{f}:=\Omega^n (U)/df\wedge \Omega^{n-1}(U)=G_0 / \theta G_0$, see \cite[Section 2.e]{DoSa1}. The {\em spectrum at infinity} of $f$ is the spectrum of the $V$-filtration defined on $\Omega_{f}$, that is the (ordered) sequence $\alpha_{1}, \alpha_{2},\cdots , \alpha_{\mu}$ of rational numbers with the following property: the frequency of $\alpha$ in the sequence is equal to $\dim_{\cit}gr_{V}^{\alpha}\Omega_{f}$. We will denote it by $Spec_f$ and we will write $Spec_f (z) =\sum_{i=1}^{\mu}z^{\alpha_{i}}$. Recall the following facts, see \cite{Sab1}: we have $\alpha_{i}\geq 0$ for all $i$ and $Spec_f (z)=z^{n}Spec_f (z^{-1})$. In particular, $Spec_f \subset [0,n]$.
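To make the comparison between $\mu_{f_{X_{\Delta}}}$ and the rank of the cohomology in section \ref{sec:ConstMiroirGen} concrete, one can compute normalized volumes of Newton polytopes directly. Here is a minimal Python sketch (two-dimensional only; the vertices of each Newton polytope are hardcoded in counterclockwise order):
\begin{verbatim}
def normalized_volume(poly):
    # 2! * vol of a lattice polygon by the shoelace formula; by Kouchnirenko
    # this is the global Milnor number of a convenient nondegenerate
    # Laurent polynomial with this Newton polytope
    s = 0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s)

# Newton polytope of the Givental-Hori-Vafa model of P^2
# (f = u1 + u2 + q/(u1 u2), cf. proposition above)
print(normalized_volume([(1, 0), (0, 1), (-1, -1)]))          # 3 = rk H*(P^2)

# Newton polytope of the model of the Hirzebruch surface F_1
print(normalized_volume([(1, 0), (0, 1), (-1, 1), (0, -1)]))  # 4 = rk H*(F_1)
\end{verbatim}
Both surfaces are Fano and the Milnor numbers match the cohomology ranks, as asserted above.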
If $f$ is convenient and nondegenerate, its spectrum at infinity can be computed using the Newton function of its Newton polytope: let us define the Newton filtration $\mathcal{N}_{\bullet}$ on $\Omega^n (U)$ by $$ \mathcal{N}_{\alpha}\Omega^{n}(U)=\{ \sum_{c\in N} a_c \omega_c ,\ \nu (\omega_c)\leq\alpha \ \mbox{for all}\ c\ \mbox{such that}\ a_c\neq 0\} $$ where $\nu (\omega_c):=\nu (c)$ if $\omega_c =u_1^{c_1}\cdots u_n^{c_n} \frac{du_1\wedge\cdots\wedge du_n}{u_1\cdots u_n}$ and $ c=(c_1 ,\cdots , c_n )\in N$ (notice the normalization $\nu (\frac{du_1\wedge\cdots\wedge du_n}{u_1\cdots u_n})=0$). This filtration induces a filtration on $\Omega_f$ by projection and the spectrum at infinity of $f$ is equal to the spectrum of this filtration, see \cite[Corollary 4.13]{DoSa1}. \subsection{The algebraic spectrum of a polytope} \label{sec:DefAlgSpec} We define the algebraic spectrum $Spec_P^{alg}$ of a simplicial full dimensional lattice polytope $P$ containing the origin in its interior to be the spectrum at infinity of the Laurent polynomial $f_P (\underline{u})=\sum_{b\in {\cal V}(P)}\underline{u}^b$ where ${\cal V}(P)$ denotes the set of the vertices of $P$. Notice that $f_P$ is a convenient and nondegenerate Laurent polynomial and that its Milnor number is $\mu_{f_{P}}=\mu_{P}$: indeed, $f_P$ is convenient by definition because $P$ contains the origin in its interior and it is nondegenerate because of the simpliciality assumption; the assertion about the Milnor number then follows from \cite{K}. We identify the algebraic spectrum with its generating function $Spec_P^{alg} (z) =\sum_{i=1}^{\mu}z^{\alpha_{i}}$. We have $Spec_P^{alg} (z)=z^{n}Spec_P^{alg}(z^{-1})$. \begin{proposition}\label{prop:SpectreAlg01} Let $P$ be a full dimensional simplicial lattice polytope in $N_{\mathbb{R}}$ containing the origin in its interior. Then the part of the algebraic spectrum contained in $[0,1[$ is the sequence $\nu (v),\ v\in \Inter P\cap N$ where $\nu$ is the Newton function of $P$ of definition \ref{def:FonctionSupport}. In particular, the multiplicity of $0$ in $Spec_P^{alg}$ is equal to one. \end{proposition} \begin{proof} The assertion for $Spec_{f_P}$ follows from \cite[Lemma 4.6]{DoSa1}, as in \cite[Example 4.17]{DoSa1}. \end{proof} In the two dimensional case, we deduce the following description of the algebraic spectrum: \begin{proposition}\label{prop:SpecAlgDim2} Let $P$ be a full dimensional lattice polytope in $N_{\mathbb{R}}=\mathbb{R}^2$ containing the origin in its interior. Then \begin{equation}\nonumber Spec_ P ^{alg}(z) =(\card (\partial P\cap N)-2)z+ \sum_{v\in \Inter P\cap N}(z^{\nu (v)}+z^{2-\nu (v)}) \end{equation} where $\nu$ is the Newton function of $P$. \end{proposition} \begin{proof} Let $f_P (\underline{u})=\sum_{b\in {\cal V}(P)}\underline{u}^b$ be as above. From proposition \ref{prop:SpectreAlg01}, and because $Spec_{f_P} (z)=z^{2}Spec_{f_P}(z^{-1})$, we get \begin{equation}\nonumber Spec_ {f_P} (z) =(\card (\partial P\cap N)-2)z+ \sum_{v\in \Inter P\cap N}(z^{\nu (v)}+z^{2-\nu (v)}) \end{equation} where the coefficient of $z$ is computed using Pick's formula because $\mu_{f_P} =2 \vol(P)$ by \cite{K}. \end{proof} In any dimension, we also have the following description for reflexive polytopes: \begin{proposition}\label{prop:SpectrePolytopeLisse} Let $P$ be a full dimensional reflexive simplicial polytope in $N_{\mathbb{R}}=\mathbb{R}^n$ containing the origin in its interior. 
Then: \begin{itemize} \item $Spec_P^{alg} (z) =\sum_{i=0}^{n}d(i)z^{i}$ where $d(i)\in\nit$, \item $d(i)=d(n-i)$ for $i=0,\cdots ,n$ with $d(0)=d(n)=1$, \item $\sum_{i=0}^{n}d(i)=\mu_P$. \end{itemize} \end{proposition} \begin{proof} Because $P$ is reflexive, the Newton function takes integer values at the lattice points, see (\ref{eq:CorrVerticePolar}). This gives the first point because $Spec_P^{alg}\subset [0,n]$. For the second one, use the symmetry and the fact that $0$ is in the spectrum with multiplicity one. \end{proof} \subsection{Examples: Hirzebruch surfaces and their Givental-Hori-Vafa models} \label{ex:SurfacesHirz} Let $m$ be a positive integer. The fan $\Delta_{\mathbb{F}_{m}}$ of the Hirzebruch surface $\mathbb{F}_m$ is the one whose rays are generated by the vectors $ v_{1}=(1,0)$, $v_{2}=(0,1)$, $v_{3}=(-1,m)$, $v_{4}=(0,-1)$, see for instance \cite{Ful}. The surface ${\mathbb{F}_m}$ is Fano if $m=1$ and weak Fano if $m=2$. Its Givental-Hori-Vafa model is the Laurent polynomial \begin{equation}\nonumber f_m (u_1 ,u_2 )=u_1 +u_2 +\frac{q_1}{u_2}+ q_2\frac{u_2^m}{u_1} \end{equation} defined on $(\cit^*)^2$, where $q_1$ and $q_2$ are two nonzero parameters. We have \begin{enumerate} \item $\mu_{f_1} =4$ and $Spec_{f_1}(z)= 1+2z +z^2$, \item $\mu_{f_2} =4$ if $q_2 \neq \frac{1}{4}$ and $Spec_{f_2}(z)= 1+2z +z^2$, \item $\mu_{f_m} =m+2$ if $m\geq 3$ and $$Spec_{f_m}(z)= 1+2z+z^2 +z^{\frac{1}{p}}+z^{\frac{2}{p}}+\cdots + z^{\frac{p-1}{p}}+ z^{2-\frac{1}{p}}+z^{ 2-\frac{2}{p}}+\cdots +z^{2-\frac{p-1}{p}}$$ if $m=2p$ and $p\geq 2$, $$Spec_{f_m}(z)=1+z+z^2 +z^{\frac{2}{m}}+z^{ \frac{4}{m}}+\cdots +z^{ \frac{2p}{m}}+ z^{2-\frac{2}{m}}+z^{2-\frac{4}{m}}+\cdots + z^{2-\frac{2p}{m}}$$ if $m=2p+1$ and $p\geq 1$. \end{enumerate} Indeed, for $m\neq 2$ we have $\mu_{f_m} =2! \vol(P)$, where $P$ is the Newton polytope of $f_m$, because $f_m$ is convenient and nondegenerate for all nonzero values of the parameters, see section \ref{sec:K}. For $m=2$, $f_2$ is nondegenerate if and only if $q_2 \neq \frac{1}{4}$ and the previous argument applies in this case (if $q_2 =1/4$ the Milnor number is $2$). The spectrum is given by proposition \ref{prop:SpecAlgDim2}. The function $f_2$ is a genuine mirror partner of the surface $\mathbb{F}_2$, see \cite{D}, \cite{RS}. If $m\geq 3$, we have $\mu_{f_m} >4$ and the model $f_m$ has too many critical points: because $\mathbb{F}_m$ is not weak Fano in this case, this is consistent with the results of section \ref{sec:ConstMiroirGen}. \section{Geometric spectrum {\em vs} algebraic spectrum} \label{sec:Comparaison} We show in this section the equality $Spec_P^{alg}(z)=Spec_P^{geo} (z)$, see corollary \ref{coro:SpecGeoegalSpecAalg} (in the two dimensional case, this follows immediately from corollary \ref{coro:SpecGeo01}, proposition \ref{prop:SpectreAlg01}, and the symmetry property). The idea is to show that these functions are Hilbert-Poincar\'e series of isomorphic graded rings (theorem \ref{theo:IsoGradedRings}). This is achieved using the description of the orbifold Chow ring given in \cite{BCS}. \subsection{An isomorphism of graded rings} Let $P$ be a simplicial polytope containing the origin in its interior and let $f_P (\underline{u})=\sum_{j=1}^r \underline{u}^{b_{j}}$ be the corresponding convenient and nondegenerate Laurent polynomial as in section \ref{sec:DefAlgSpec}. In what follows, we will write $\underline{u}^c := u_1^{c_1}\cdots u_n^{c_n}$ if $c=(c_1 ,\cdots ,c_n )\in N$ and $K:=\qit [u_1, u_1^{-1},\cdots, u_n ,u_n^{-1}]$.
Let \begin{equation}\nonumber \mathcal{A}_{f_{P}}=\frac{K} {\langle u_1 \frac{\partial f_P}{\partial u_{1}},\cdots , u_n \frac{\partial f_P}{\partial u_{n}} \rangle } \end{equation} be the Jacobi ring of $f_{P}$. Define on $K$ the Newton filtration $\mathcal{N}_{\bullet}$ by \begin{equation}\nonumber \mathcal{N}_{\alpha}K= \{ \sum_{c\in N} a_c \underline{u}^c \in K,\ \nu (c)\leq\alpha \ \mbox{for all}\ c\ \mbox{such that}\ a_c\neq 0\} \end{equation} where $\nu$ is the Newton function of $P$ (the vector space $\mathcal{N}_{<\alpha}K$ is defined similarly: replace the condition $\nu (c)\leq\alpha$ by $\nu (c)<\alpha$). This filtration induces by projection a filtration, also denoted by $\mathcal{N}_{\bullet}$, on $\mathcal{A}_{f_P}$, and we get the graded ring $Gr_{\mathcal{N}}\mathcal{A}_{f_{P}}=\oplus_{\alpha} \mathcal{N}_{\alpha}\mathcal{A}_{f_{P}}/\mathcal{N}_{<\alpha}\mathcal{A}_{f_{P}}$ (see equation (\ref{eq:MultiplicationNewton}) below). \begin{theorem}\label{theo:IsoGradedRings} There is an isomorphism of graded rings \begin{equation}\nonumber A_{orb}^* ({\cal X}(\mathbf{\Delta}))\cong Gr_{\mathcal{N}}\mathcal{A}_{f_{P}} \end{equation} where $\mathbf{\Delta}$ is the stacky fan associated with $P$ (see section \ref{sec:EventailChampetre}). \end{theorem} \begin{proof} We first recall the setting of \cite[Section 5]{BCS}. Let $\mathbf{\Delta} =(N, \Delta ,\{b_i\})$ be the stacky fan of $P$. We define its deformed group ring $\qit [N]^{\mathbf{\Delta}}$ as follows: \begin{itemize} \item as a $\qit$-vector space, $\qit [N]^{\mathbf{\Delta}}=\oplus_{c\in N}\qit\ y^c$ where $y$ is a formal variable, \item the ring structure is given by $y^{c_1}.y^{c_2}=y^{c_1 +c_2}$ if $c_1$ and $c_2$ belong to the same cone, $y^{c_1}.y^{c_2}= 0$ otherwise, \item the grading is defined as follows: if $c=\sum_{\rho_i \subseteq \sigma (c)} m_i b_i$ then $\deg (y^c )=\sum m_i \in\qit$ ($\sigma (c)$ is the minimal cone containing $c$). \end{itemize} It will be important to notice that $\deg (y^c )=\nu (c)$ where $\nu$ is the Newton function. Because $\Delta$ is simplicial, we have an isomorphism of $\qit$-graded rings \begin{equation}\label{eq:IsoOrbifoldChowRing} \frac{\qit [N]^{\mathbf{\Delta}}}{\langle \sum_{i=1}^r \langle m, b_i \rangle y^{b_i} , \ m\in M\rangle}\longrightarrow \oplus_{v\in \Box (\Delta )} A^* ({\cal X}(\mathbf{\Delta /\sigma (v)}))[\deg (y^v )] \end{equation} where $\sigma (v)$ is the minimal cone containing $v$, $\Box (\sigma):=\{\sum_{\rho_{i}\subset\sigma}\lambda_i b_i ,\ \lambda_i \in [0,1[\}\cap N$ and $\Box (\Delta )$ is the union of $\Box (\sigma )$ for all $n$-dimensional cones $\sigma\in\Delta$, see \cite[Theorem 1.1]{BCS}. On the other hand, we have \begin{equation}\nonumber \mathcal{A}_{f_{P}}=\frac{K} {\langle \sum_{j=1}^{r}\langle m, b_{j}\rangle \underline{u}^{b_{j}}, \ m\in M\rangle } \end{equation} because $u_i \frac{\partial f_P}{\partial u_{i}} =\sum_{j=1}^{r} \langle e_{i}^* , b_{j}\rangle \underline{u}^{b_{j}} $ where $(e_{i}^*)$ is the dual basis of the canonical basis of $N$.
Define the ring \begin{equation}\nonumber A_{f_{P}}=\frac{K^g} {\langle \sum_{j=1}^{r}\langle m, b_{j}\rangle \underline{u}^{b_{j}}, \ m\in M\rangle } \end{equation} where $K^g =K$ as a vector space and the multiplication on $K^g$ is defined as follows: \begin{align}\nonumber \underline{u}^{c_{1}}.\underline{u}^{c_{2}}= \left\{ \begin{array}{ll} \underline{u}^{c_{1}+c_{2}} & \mbox{if}\ c_{1}\ \mbox{ and}\ c_{2}\ \mbox{ belong to the same cone}\ \sigma\ \mbox{of}\ \Delta_P,\\ 0 & \mbox{otherwise} \end{array} \right . \end{align} \noindent Define a grading on $K^g$ by $\deg (\underline{u}^{c})=\nu (c)$. Because $\nu (c_1 +c_2 )= \nu (c_1 )+\nu (c_2)$ if and only if $c_{1}$ and $c_{2}$ belong to the same cone $\sigma$ of $\Delta_P$ and because $\nu (b_j )=1$, the ring $A_{f_{P}}$ is graded. Moreover, because \begin{align}\label{eq:MultiplicationNewton} \underline{u}^{v}. \underline{u}^{w}\in \left\{ \begin{array}{ll} \mathcal{N}_{\alpha +\beta}K & \mbox{if}\ \underline{u}^{v}\in \mathcal{N}_{\alpha}K\ \mbox{and}\ \underline{u}^{w}\in \mathcal{N}_{\beta}K ,\\ \mathcal{N}_{<\alpha +\beta}K & \mbox{if $v$ and $w$ do not belong to the same cone of $\Delta_P$} \end{array} \right . \end{align} the graded ring $A_{f_P}$ is isomorphic to $Gr_{\mathcal{N}}\mathcal{A}_{f_{P}}$. Finally, the rings $\frac{\qit [N]^{\mathbf{\Delta}}}{\langle \sum_{i=1}^r \langle m, b_i \rangle y^{b_i} , \ m\in M\rangle}$ and $A_{f_P}$ are isomorphic (via $y^{c}\mapsto \underline{u}^{c}$), and the theorem now follows from the isomorphism (\ref{eq:IsoOrbifoldChowRing}). \end{proof} \begin{corollary}\label{coro:SpecGeoegalSpecAalg} Assume that $P$ is a simplicial polytope containing the origin in its interior. Then $Spec_P^{alg}(z)=Spec_P^{geo} (z)$. \end{corollary} \begin{proof} As explained in section \ref{sec:SpecPolLaurent}, $Spec_P^{alg} (z)$ is the Hilbert-Poincar\'e series of the graded vector space $Gr_{\mathcal{N}} \mathcal{A}_{f_{P}}$. Now, theorem \ref{theo:IsoGradedRings} shows that the latter coincides with the Hilbert-Poincar\'e series of $A_{orb}^* ({\cal X}(\mathbf{\Delta}))$ and we get the assertion by corollary \ref{coro:SpecGeoOrbifold}. \end{proof} \subsection{A significant class of examples: weighted projective spaces} \label{sec:EPP} Let $(\lambda_{0},\cdots ,\lambda_{n})\in (\nit^*)^{n+1}$ be such that $\gcd (\lambda_{0},\cdots ,\lambda_{n})=1$ and let $X$ be the weighted projective space $\ppit (\lambda_{0},\cdots ,\lambda_{n})$. The (stacky) fan of $X$ is the simplicial complete fan whose rays are generated by vectors $b_0 ,\cdots , b_n$ in $N$ such that \begin{enumerate} \item $\lambda_{0}b_{0}+\cdots + \lambda_{n}b_{n}=0$, \item the $b_{i}$'s generate $N$. \end{enumerate} Such a family is unique, up to isomorphism. We have $\lambda_{0}=1$ if and only if $(b_{1},\cdots , b_{n})$ is a basis of $N$ and this will be our favorite situation: {\em we assume from now on that} $\lambda_0 =1$. In this situation, we will call the convex hull $P$ of $b_{0},\cdots , b_{n}$ {\em the polytope of} $\ppit (1,\lambda_{1},\cdots ,\lambda_{n})$. We have $\mu_P =1+\lambda_{1}+\cdots +\lambda_{n}$ and $\mu_{P^{\circ}}=\frac{(1+\lambda_1 +\cdots +\lambda_n)^n}{\lambda_1 \cdots \lambda_n}$. The polytope $P$ is reflexive if and only if $\lambda_{i}$ divides $\mu_P$ for all $i$.
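As a quick illustration of these formulas (the specific weights $(1,1,2)$ below are chosen only as an example; the verification is elementary), take $n=2$ and $(\lambda_{0},\lambda_{1},\lambda_{2})=(1,1,2)$: with $b_{1}=(1,0)$ and $b_{2}=(0,1)$, the relation $b_{0}+b_{1}+2b_{2}=0$ gives $b_{0}=(-1,-2)$, so that $P$ is the convex hull of $(1,0)$, $(0,1)$ and $(-1,-2)$. We get \begin{equation}\nonumber \mu_P =1+1+2=4=2\vol (P)\ \mbox{and}\ \mu_{P^{\circ}}=\frac{(1+1+2)^{2}}{1\cdot 2}=8. \end{equation} Since $1$ and $2$ divide $\mu_P =4$, the polytope $P$ is reflexive; notice that $\mu_P +\mu_{P^{\circ}}=12$, in accordance with Noether's formula (\ref{eq:Noetherformula}) of section \ref{sec:NoetherFanoPolytope} below.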
By proposition \ref{prop:HVLaurent}, the Givental-Hori-Vafa model of $\ppit (1,\lambda_1 ,\cdots ,\lambda_n)$ is the Laurent polynomial defined on $(\cit^*)^n$ by \begin{equation}\nonumber f (u_1 ,\cdots , u_n )= u_1 +\cdots + u_n + \frac{q}{u_1^{\lambda_1}\cdots u_n^{\lambda_n}} \end{equation} where $q\in \cit^*$. Its Milnor number is $\mu_f = 1+\lambda_1 +\cdots +\lambda_n =\mu_P$. A mirror theorem is shown in \cite{DoMa}. Let $$F:=\left\{\frac{\ell}{\lambda_{i}}|\, 0\leq\ell\leq \lambda_{i}-1,\ 0\leq i\leq n\right\}$$ and let $f_{1},\cdots , f_{k}$ be the elements of $F$ arranged in increasing order. We then define \begin{align}\nonumber S_{f_i}:=\{j|\ \lambda_{j}f_i \in\zit\}\subset \{0,\cdots ,n\}\ \mbox{and}\ d_{i}:=\card S_{f_{i}} \end{align} Let $c_{0},c_{1},\cdots , c_{\mu -1}$ be the sequence $$\underbrace{f_{1},\cdots ,f_{1}}_{d_{1}},\underbrace{f_{2},\cdots ,f_{2}}_{d_{2}},\cdots ,\underbrace{f_{k},\cdots ,f_{k}}_{d_{k}}$$ arranged in increasing order. By \cite[Theorem 1]{DoSa2}, the spectrum at infinity of $f$ is the sequence $\alpha_0 ,\alpha_1 ,\cdots , \alpha_{\mu -1}$ where $$ \alpha_{k}:=k-\mu c_{k}\ \mbox{for}\ k=0,\cdots ,\mu -1$$ \noindent Notice that the spectrum of $f$ is integral if and only if the polytope of $\ppit (1, \lambda_1 ,\cdots ,\lambda_n )$ is reflexive. \begin{example} \label{ex:SpectreGeoEPP} We test corollary \ref{coro:SpecGeoegalSpecAalg} on some weighted projective spaces. The computation of the geometric spectrum is done using proposition \ref{prop:SpecGeoEgalEstP}. \begin{enumerate} \item Let $a$ be a positive integer, and $P$ be the convex hull of $(1,0)$, $(-1, -a)$ and $(0,1)$: this is the polytope of $\ppit (1,1,a)$. We consider the resolution obtained by adding the ray generated by $(0,-1)$. Using the notations of proposition \ref{prop:SpecGeoEgalEstP}, we have $\nu_{1}=1$, $\nu_2 =\frac{2}{a}$, $\nu_{3}=1$ and $\nu_{4}=1$ (the rays are numbered clockwise) and we get ($\mathbb{F}_{a}$ is the Hirzebruch surface) \begin{equation}\nonumber Spec_{P}^{geo} (z)=E(\mathbb{F}_a ,z)+E(\ppit^1 ,z)\frac{z-z^{2/a }}{z^{2/a }-1} =1+2z+z^2 +(1+z)(\frac{z-1}{z^{2/a}-1}-1) \end{equation} \begin{equation}\nonumber =1+z+z^{2}+z^{2/a}+z^{4/a}+\cdots +z^{2(a-1)/a}=Spec_P^{alg} (z) \end{equation} \item Let $P$ be the convex hull of $(1,0)$, $(-2, -5)$ and $(0,1)$: $P$ is the polytope of $\ppit (1,2,5)$. We consider the resolution obtained by adding the rays generated by $(0,-1)$, $(-1,-3)$ and $(-1,-2)$. Using the notations of proposition \ref{prop:SpecGeoEgalEstP}, we have $\nu_1 =1$, $\nu_{2}=\frac{3}{5}$, $\nu_{3}=\frac{4}{5}$, $\nu_{4}=1$, $\nu_{5}=1$ and $\nu_{6}=1$ (the rays are numbered clockwise) and we get \begin{equation}\nonumber Spec_{P}^{geo}(z)=z^2 +4z +1 +(z+1)\frac{z-z^{3/5}}{z^{3/5}-1}+(z+1)\frac{z-z^{4/5}}{z^{4/5}-1} +\frac{z-z^{3/5}}{z^{3/5}-1}.\frac{z-z^{4/5}}{z^{4/5}-1} \end{equation} \begin{equation}\nonumber =z^2 +2z +1 +z^{3/5} + z^{4/5}+z^{6/5}+z^{7/5}=Spec_P^{alg} (z) \end{equation} \item Let $\ell\in\nit^*$ and $P$ be the convex hull of $(1,0)$, $(-\ell , -\ell )$ and $(0,1)$: $P$ is the polytope of $\ppit (1,\ell , \ell)$. The variety $X_{\Delta_P}$ is $\ppit^2$, whose fan is generated by the rays $v_1 =(-1, -1)$, $v_2=(0,1)$ and $v_3 =(1,0)$.
Because $\nu (v_1 )=\frac{1}{\ell}$, we get \begin{equation}\nonumber Spec_{P}^{geo}(z)=E(\ppit^2 ,z)+E(\ppit^1 ,z)\frac{z-z^{1/\ell }}{z^{1/\ell }-1} =1+z+z^2 +(1+z)\frac{z-z^{1/\ell }}{z^{1/\ell }-1} \end{equation} \begin{equation}\nonumber =z^2 +z +1 + z^{1/\ell}+\cdots +z^{(\ell -1)/\ell} + z^{1+1/\ell}+\cdots + z^{1+(\ell -1)/\ell }=Spec_P^{alg} (z) \end{equation} \item Let $P$ be the convex hull of $(-2, -2, -2)$, $(1, 0, 0)$, $(0,1,0)$ and $(0,0,1)$: $P$ is the polytope of $\ppit (1, 2 , 2 , 2)$. We have $X_{\Delta_P}=\ppit^3$, whose fan is generated by the rays $v_1 =(-1, -1,-1)$, $v_2=(1,0,0)$, $v_3 =(0, 1,0)$ and $v_4 =(0,0,1)$. Because $\nu (v_1 )=\frac{1}{2}$, we get $$Spec_{P}^{geo}(z) =E(\ppit^3 ,z)-E(\ppit^2 ,z)+E(\ppit^2 ,z)\frac{z-1}{z^{1/2}-1}$$ $$=z^3 +z^2 +z +1 + z^{1/2}+z^{3/2} +z^{5/2}=Spec_{P}^{alg}(z)$$ \end{enumerate} \end{example} \section{A formula for the variance of the spectra} \label{sec:ConjectureVarianceSpectre} We are now ready to prove formula (\ref{eq:VarianceIntro}) of the introduction. We first show it for the geometric spectrum of a full dimensional simplicial lattice polytope. \subsection{Libgober-Wood's formula for the spectra} In order to first get a stacky version of the Libgober-Wood formula (\ref{eq:LWIntro}), we give the following definition, inspired by Batyrev's stringy number $c_{st}^{1, n-1} (X)$, see \cite[Definition 3.1]{Batyrev3}: \begin{definition} Let $P$ be a full dimensional simplicial lattice polytope in $N$ containing the origin in its interior and let $\rho :Y\rightarrow X$ be a resolution of $X:=X_{\Delta_P}$. We define the rational number \begin{equation}\label{eq:ClasseBatyrevChampetre} \widehat{\mu}_P:=c_1 (Y) c_{n-1} (Y) +\sum_{J\subset I,\ J\neq\emptyset}c_1 (D_J )c_{n-|J|-1}(D_J ) \prod_{j\in J}\frac{1-\nu_j}{\nu_j} \end{equation} \begin{equation}\nonumber -\sum_{J\subset I,\ J\neq\emptyset} (\sum_{j\in J} \nu_j )c_{n-|J|}(D_J ) \prod_{j\in J}\frac{1-\nu_j}{\nu_j } \end{equation} where the notations in the right-hand side are the ones of section \ref{sec:InterpretationResolutionSing} (convention: $c_{r}(D_J )=0$ if $r<0$). \end{definition} \begin{remark}\label{remark:DiversCstP} We have $\widehat{\mu}_P =c_1 (Y) c_{n-1} (Y)$ if $\nu_i =1$ for all $i$ (crepant resolutions) and $\widehat{\mu}_P=c_1 (X) c_{n-1} (X)$ if $X$ is smooth. \end{remark} \begin{theorem} \label{theo:VarianceSpectreGeometrique} Let $P$ be a full dimensional simplicial lattice polytope in $N$ containing the origin in its interior and $Spec_{P}^{geo}(z)=\sum_{i=1}^{e_P}z^{\beta_i}$ be its geometric spectrum. Then \begin{equation}\label{eq:VarianceSpectreGeometrique} \sum_{i=1}^{e_P}(\beta_i -\frac{n}{2})^2 =\frac{n}{12} \mu_P +\frac{1}{6} \widehat{\mu}_P \end{equation} where $\widehat{\mu}_P$ is defined by formula (\ref{eq:ClasseBatyrevChampetre}) and $\mu_P :=n!\vol (P)$. \end{theorem} \begin{proof} Recall the stacky $E$-function $E_{st, P}(z):=\sum_{J\subset I}E( D_J , z)\prod_{j\in J}\frac{z-z^{\nu_j}}{z^{\nu_j}-1}$, see (\ref{eq:DefStringyFunctionChampetre}). Then we have \begin{equation}\label{eq:VarianceLWChampetre} E''_{st,P} (1)=\frac{n(3n-5)}{12} e_{P}+\frac{1}{6}\widehat{\mu}_P \end{equation} where $e_P$ is the geometric Euler number of $P$, see definition \ref{def:specgeopoltope}.
The proof of this formula is a straightforward computation and is similar to the one of \cite[Theorem 3.8]{Batyrev3}: if $V$ is a smooth variety of dimension $n$, we have the Libgober-Wood formula \begin{equation}\label{eq:LW} E''(V,1)=\frac{n(3n-5)}{12}c_n (V)+\frac{1}{6}c_1 (V) c_{n-1}(V) \end{equation} where $E$ is the $E$-polynomial of $V$, see \cite[Proposition 2.3]{LW}; in order to get (\ref{eq:VarianceLWChampetre}), apply this formula to the components $E(D_J ,z)$ of $E_{st, P}(z)$ and use the equalities \begin{equation}\nonumber E(D_J ,1)=c_{n-|J|}(D_J ),\ \frac{d}{dz} (E(D_J ,z))_{|z=1}=\frac{n-|J|}{2}c_{n-|J|}(D_J ) \end{equation} (the first one follows from the fact that the value at $z=1$ of the Poincar\'e polynomial is the Euler characteristic and we get the second one using Poincar\'e duality for $D_J$) and \begin{equation}\nonumber \frac{d}{dz}(\frac{z-z^{\nu}}{z^{\nu}-1})_{| z=1}=\frac{1-\nu}{2\nu},\ \frac{d^2}{dz^2}(\frac{z-z^{\nu}}{z^{\nu}-1})_{| z=1}=\frac{(\nu -1)(\nu +1)}{6\nu} \end{equation} if $\nu$ is a positive rational number. By proposition \ref{prop:SpecGeoEgalEstP}, we have $Spec^{geo}_P (z)=E_{st, P}(z)$. Finally, we get \begin{equation}\label{eq:Specf1} \frac{d^2}{dz^2}(Spec_P^{geo} (z))_{| z=1}=\frac{n(3n-5)}{12}e_P+\frac{1}{6} \widehat{\mu}_P \end{equation} We have $\frac{d}{dz}(Spec_P^{geo}(z)) _{| z=1}= \frac{n}{2} e_P$, because the geometric spectrum is symmetric with respect to $\frac{n}{2}$ (see corollary \ref{coro:SymetrieSpecGeo}), and we deduce that \begin{equation}\label{eq:Specf1bis} \frac{d^2}{dz^2}(Spec_P^{geo} (z))_{| z=1} = \sum_{i=1}^{e_P}(\beta_i -\frac{n}{2})^2 +\frac{n(n-2)}{4} e_P \end{equation} Now, formulas (\ref{eq:Specf1}) and (\ref{eq:Specf1bis}) give equality (\ref{eq:VarianceSpectreGeometrique}) because $e_P =\mu_P$ by lemma \ref{lemma:SpecGeoBoxDimn}. \end{proof} \begin{corollary} \label{coro:BLWindepedantRho} The number $\widehat{\mu}_P$ does not depend on the resolution $\rho$. \hfill~~\mbox{$\Box$} \end{corollary} \noindent The version for singularities is then given by the following result: \begin{theorem} \label{theo:VarianceMiroirToriqueChampetre} Let $f$ be a convenient and nondegenerate Laurent polynomial with global Milnor number $\mu$ and spectrum at infinity $Spec_{f} (z)=\sum_{i=1}^{\mu}z^{\alpha_i}$. Let $P$ be its Newton polytope (assumed to be simplicial). Then \begin{equation}\label{eq:VarianceMiroirToriqueChampetre} \sum_{i=1}^{\mu}(\alpha_i -\frac{n}{2})^2 =\frac{n}{12}\mu_P +\frac{1}{6} \widehat{\mu}_P \end{equation} where $\widehat{\mu}_P$ is defined by formula (\ref{eq:ClasseBatyrevChampetre}) and $\mu_P :=n!\vol (P)=\mu$. \end{theorem} \begin{proof} By \cite{NS} we have $Spec^{alg}_P (z)=Spec_{f} (z)$ and the assertion thus follows from theorem \ref{theo:VarianceSpectreGeometrique} and corollary \ref{coro:SpecGeoegalSpecAalg}. Finally, because $f$ is convenient and nondegenerate, we have $\mu =\mu_P$ by \cite{K}. \end{proof} \begin{example}\label{ex:VarianceMiroirLisse} Assume that $f$ is the mirror partner of a projective, smooth, weak Fano toric variety $X$ of dimension $n$ (mirror symmetry is discussed in section \ref{sec:ConstMiroirGen}).
Then \begin{equation}\label{eq:VarianceMiroirLisse} \sum_{i=1}^{\mu}(\alpha_i -\frac{n}{2})^2 =\mu\frac{n}{12} +\frac{1}{6}c_1 (X) c_{n-1}(X) \end{equation} Indeed, the convex hull $P$ of the primitive generators of the rays of the fan of $X$ is reflexive because $X$ is weak Fano and (\ref{eq:VarianceMiroirLisse}) follows from theorem \ref{theo:VarianceMiroirToriqueChampetre} and remark \ref{remark:DiversCstP}. We have $c_1 (X) c_{n-1}(X)\geq 0$ because $X$ is weak Fano and we get in particular $$\sum_{i=1}^{\mu}(\alpha_i -\frac{n}{2})^2 \geq \mu\frac{n}{12}$$ This is a first step towards Hertling's conjecture, see section \ref{sec:Conj}. \end{example} \subsection{The number $\widehat{\mu}_P$ in the two dimensional case} \label{sec:TwoDimCase} In this section we give an explicit formula for $\widehat{\mu}_P$ in the two dimensional case. Let $P$ be a full dimensional polytope in $N_{\mathbb{R}}=\mathbb{R}^2$, containing the origin, and let $\rho :Y\rightarrow X$ be a resolution of $X:=X_{\Delta_P}$ as in section \ref{sec:InterpretationResolutionSing}. We assume that the primitive generators $v_1 ,\cdots ,v_r$ of the rays of the fan of $Y$ are numbered clockwise and we consider indices as integers modulo $r$, so that $\nu_{r+1}:=\nu_1$ (recall that $\nu_i :=\nu (v_i )$ where $\nu$ is the Newton function of $P$). \begin{proposition}\label{prop:MuPDim2} We have \begin{equation}\nonumber \widehat{\mu}_P=c_1 ^2 (Y) -2r +\sum_{i=1}^r (\frac{\nu_i}{\nu_{i+1}}+\frac{\nu_{i +1}}{\nu_{i}})=(\sum_{i=1}^r \nu_i D_i ) (\sum_{j=1}^r \frac{1}{\nu_{j}}D_j) \end{equation} \end{proposition} \begin{proof} By definition, we have \begin{equation}\label{eq:ClasseBatyrevChampetreDim2} \widehat{\mu}_P:=c_1 (Y) c_{1} (Y) +\sum_{J\subset I,\ J\neq\emptyset}c_1 (D_J )c_{1-|J|}(D_J ) \prod_{j\in J}\frac{1-\nu_j}{\nu_j} \end{equation} \begin{equation}\nonumber -\sum_{J\subset I,\ J\neq\emptyset} (\sum_{j\in J} \nu_j )c_{2-|J|}(D_J ) \prod_{j\in J}\frac{1-\nu_j}{\nu_j } \end{equation} and thus \begin{equation}\nonumber \widehat{\mu}_P=c_1 ^2 (Y) + 2\sum_{i=1}^r \frac{(1-\nu_i )^2}{\nu_{i}}- \sum_{i=1}^r( \nu_i +\nu_{i+1} )\frac{(1-\nu_i )}{\nu_{i}}.\frac{(1-\nu_{i+1})}{\nu_{i+1}} \end{equation} It follows that \begin{equation}\nonumber \widehat{\mu}_P -c_1 ^2 (Y) = \sum_{i=1}^r (\frac{1}{\nu_{i}}+\nu_i - \frac{1}{\nu_{i+1}}- \nu_{i+1} +\frac{\nu_i }{\nu_{i+1}} +\frac{\nu_{i+1}}{\nu_{i}}-2 ) \end{equation} \begin{equation}\nonumber =-2r + \sum_{i=1}^r (\frac{\nu_i }{\nu_{i+1}} +\frac{\nu_{i+1}}{\nu_{i}}) \end{equation} and this gives the first equality. For the second one, notice that \begin{equation}\nonumber (\sum_{i=1}^r \nu_i D_i ) (\sum_{j=1}^r \frac{1}{\nu_{j}}D_j) =\sum_{i=1}^r (D_i^2 +\frac{\nu_i}{\nu_{i+1}}D_i D_{i+1} +\frac{\nu_i}{\nu_{i-1}}D_i D_{i-1}) \end{equation} \begin{equation} \nonumber =\sum_{i=1}^r (D_i^2 +\frac{\nu_i}{\nu_{i+1}} +\frac{\nu_{i+1}}{\nu_{i}}) =c_1^2 (Y)-2r+\sum_{i=1}^r (\frac{\nu_i}{\nu_{i+1}} +\frac{\nu_{i+1}}{\nu_{i}})=\widehat{\mu}_P \end{equation} \end{proof} \begin{corollary}\label{coro:VarianceDim2Vraie} Let $P$ be a full dimensional lattice polytope in $N_{\mathbb{R}}=\mathbb{R}^2$ containing the origin in its interior and $Spec_{P}^{geo}(z)=\sum_{i=1}^{e_P}z^{\beta_i}$ be its geometric spectrum. Then \begin{equation}\label{eq:VarianceMiroirToriqueChampetreDim2} \sum_{i=1}^{\mu_P}(\beta_i -1)^2 =\frac{\mu_P}{6} +\frac{\widehat{\mu}_P}{6} \end{equation} where $\widehat{\mu}_P\geq c_1 ^2 (Y)$ for any resolution $\rho : Y\rightarrow X_{\Delta_P}$.
\end{corollary} \begin{proof} The first equality follows from theorem \ref{theo:VarianceSpectreGeometrique}. By proposition \ref{prop:MuPDim2} we have $\widehat{\mu}_P\geq c_1 ^2 (Y)$ because $\nu +\frac{1}{\nu}\geq 2$ for every positive real number $\nu$. \end{proof} \noindent In particular, $\sum_{i=1}^{\mu}(\alpha_i -1)^2 \geq \frac{\mu_P}{6}$ if there exists a resolution $\rho$ such that $c_1 ^2 (Y) \geq 0$. The singularity version is straightforward. \subsection{Examples} \label{ex:VarianceEPP} \subsubsection{Weighted projective spaces (examples \ref{ex:SpectreGeoEPP} continued)} We test theorem \ref{theo:VarianceMiroirToriqueChampetre} on weighted projective spaces: given $X=\ppit (1,\lambda_1 ,\cdots ,\lambda_n )$, $P$ will denote its polytope, $f(\underline{u})=\sum_{i=0}^n \underline{u}^{b_i}$ will denote its Givental-Hori-Vafa model with spectrum at infinity $\sum_{i=1}^{\mu} z^{\alpha_{i}}$ as in section \ref{sec:EPP}. We put $V(\alpha ):= \sum_{i=1}^{\mu}(\alpha_i -\frac{n}{2})^2$. \begin{itemize} \item The polytope $P$ is Fano (see section \ref{sec:VarPolFano}): \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline\hline X & $\mu$ & $V(\alpha )$ & $\mu n/12$ & $ \widehat{\mu}_P$ \\ \hline\hline $\ppit (1,1,a)$ & $a+2$ & $(2a^2 +6a+4)/6a$ & $(a+2)/6$ & $\frac{(a+2)^2}{a}$ \\ \hline $\ppit (1,2,5)$ & $8$ & $12/5$ & $4/3$ & $32/5$\\ \hline \end{tabular} \end{center} \noindent For $\ppit (1,1,a)$, the polytope $P$ is the convex hull of $(1,0)$, $(0,1)$ and $(-1 ,-a )$ and we consider the resolution obtained by adding the ray generated by $(0,-1)$. We use proposition \ref{prop:MuPDim2} in order to compute $\widehat{\mu}_P$, with $\nu_{1}=1$, $\nu_2 =\frac{2}{a}$, $\nu_{3}=1$, $\nu_{4}=1$ and $c_1^2 (Y)=8$. \noindent For $\ppit (1,2,5)$, the polytope $P$ is the convex hull of $(1,0)$, $(0,1)$ and $(-2 ,-5 )$ and we consider the resolution obtained by adding the rays generated by $(0,-1)$, $(-1,-3)$ and $(-1,-2)$. We use proposition \ref{prop:MuPDim2} in order to compute $\widehat{\mu}_P$, with $\nu_1 =1$, $\nu_{2}=\frac{3}{5}$, $\nu_{3}=\frac{4}{5}$, $\nu_{4}=1$, $\nu_{5}=1$, $\nu_{6}=1$ and $c_1^2 (Y)=6$. \noindent Notice that in these examples we have $\widehat{\mu}_P=\mu_{P^{\circ}}$ where $\mu_{P^{\circ}}$ is the normalized volume of the polar polytope: this is not a coincidence, see section \ref{sec:NoetherFanoPolytope} below. \item The polytope $P$ is not Fano: \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline\hline Example & $\mu$ & $V(\alpha )$ & $\mu n/12$ & $\widehat{\mu}_P$ \\ \hline\hline $\ppit (1,\ell ,\ell)$ & $1+2\ell$ & $2+\frac{(\ell -1)(2\ell -1)}{3\ell}$ & $(2\ell +1)/6$ & $9+2\frac{(\ell -1)^2}{\ell}$\\ \hline $\ppit (1,2,2,2)$ & $7$ & $7$ & $7/4$ & $63/2$\\ \hline \end{tabular} \end{center} \vspace{5mm} \noindent For $\ppit (1,\ell , \ell)$, $\ell\geq 2$, the polytope $P$ is the convex hull of $(1,0)$, $(0,1)$ and $(-\ell ,-\ell )$. Formula (\ref{eq:ClasseBatyrevChampetre}) gives \begin{equation}\nonumber \widehat{\mu}_P =c_1 (\ppit^2) c_{1} (\ppit^2 ) +c_1 (\ppit^1 ) (\ell -1) -\frac{1}{\ell} c_{1}(\ppit^1 ) (\ell -1) \end{equation} In this case we have $\widehat{\mu}_P\neq \mu_{P^{\circ}}$. \noindent For $\ppit (1, 2 , 2 , 2)$, $P$ is the convex hull of $(-2, -2, -2)$, $(1, 0, 0)$, $(0,1,0)$ and $(0,0,1)$.
Formula (\ref{eq:ClasseBatyrevChampetre}) gives \begin{equation}\nonumber \widehat{\mu}_P =c_1 (\ppit^3) c_{2} (\ppit^3 ) +c_1 (\ppit^2 ) c_1 (\ppit^2 ) -\frac{1}{2} c_{2}(\ppit^2 ) \end{equation} \end{itemize} \subsubsection{Miscellaneous} In order to complete the panorama, let us now consider somewhat different situations: \begin{itemize} \item let $P_{1,2,2}$ be the polytope with vertices $b_1 =(1,0)$, $b_2 =(0,2)$ and $b_3 =(-2 ,-2)$. Its stacky fan is $\mathbf{\Delta} =(N, \Delta ,\{b_1 ,b_2 ,b_3\})$ where $\Delta$ is the fan whose rays are $v_1 =(1,0)$, $v_2 =(0,1)$, $v_3 =(-1 ,-1)$. \item Let $P_{\ell ,\ell ,\ell }$ be the polytope with vertices $b_1 =(\ell ,0)$, $b_2 =(0,\ell )$ and $b_3 =(-\ell ,-\ell )$ where $\ell$ is a positive integer. Its stacky fan is $\mathbf{\Delta} =(N, \Delta ,\{b_1 ,b_2 ,b_3\})$ where $\Delta$ is as above. \end{itemize} \noindent We have the following table: \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline\hline Example & $\mu$ & $V(\alpha )$ & $\mu n/12$ & $ \widehat{\mu}_P$ \\ \hline\hline $P_ {1,2,2}$ & $8$ & $3$ & $4/3$ & $10$ \\ \hline $P_{\ell ,\ell ,\ell }$ & $3\ell^2$ & $(\ell^2 +3)/2$ & $\ell^2 /2$ & $9$ \\ \hline \end{tabular} \end{center} \noindent This agrees with formula (\ref{eq:VarianceMiroirToriqueChampetre}). \section{A Noether's formula for two dimensional Fano polytopes} \label{sec:NoetherFanoPolytope} In this section, we still focus on the two dimensional case: $P$ is a polytope in $N_{\mathbb{R}}=\mathbb{R}^2$. Recall that $\mu_P :=2\vol (P)$. Observe first the following: if $P$ is a reflexive polytope, $X_{\Delta_P}$ has a crepant resolution and $\widehat{\mu}_P=c_1^2 (Y)$ by remark \ref{remark:DiversCstP}; the anticanonical divisor of $Y$ is nef and $c_1^2 (Y)=\mu_{P^{\circ}}$ by \cite[Theorem 13.4.3]{CLS}, so that finally $\widehat{\mu}_P=\mu_{P^{\circ}}$. Moreover, by corollary \ref{coro:SpecGeoReflexif}, the geometric spectrum of $P$ satisfies $\sum_{i=1}^{\mu}(\beta_i -1)^2 =2$ and we thus get from corollary \ref{coro:VarianceDim2Vraie} the well-known Noether's formula \begin{equation}\label{eq:Noetherformula} 12=\mu_P +\mu_{P^{\circ}} \end{equation} for a reflexive polytope $P$. Recall that a convex lattice polytope $P$ is {\em Fano} if the origin is contained in the strict interior of $P$ and if its vertices are primitive lattice points of $N$, see section \ref{sec:VarPolFano}. We have the following generalization of equation (\ref{eq:Noetherformula}) for Fano polytopes (a reflexive polytope is Fano): \begin{theorem}\label{theo:VarianceMuPolytopePolaire} Assume that $P$ is a full dimensional Fano polytope in $N_{\mathbb{R}}$ with geometric spectrum $Spec_P^{geo} (z)=\sum_{i=1}^{\mu_P} z^{\beta_i}$. Then \begin{equation}\label{eq:VarianceMuPolytopePolaireEq} \sum_{i=1}^{\mu_P}(\beta_i -1)^2 =\frac{\mu_P}{6} +\frac{\mu_{P^{\circ}}}{6} \end{equation} where $P^{\circ}$ is the polar polytope of $P$. \end{theorem} \begin{proof} Notice first that, because of the Fano assumption, the support function of the $\qit$-Cartier divisor $K_X$ is equal to the Newton function of $P$ and thus $\rho^* (-K_X)=\sum_{i=1}^r \nu_i D_i $ since $\rho^* (-K_X)$ and $-K_X$ have the same support function.
We shall show that \begin{equation}\label{eq:RhoChapeauEgalRho} \widehat{\mu}_P =\rho^* (-K_X)\rho^* (-K_X) \end{equation} Because $(\rho^* (-K_X))^2 =(-K_X)^2 =\mu_{P^{\circ}}$ (for the first equality see \cite[Lemma 13.4.2]{CLS} and for the second one see the $\qit$-Cartier version of \cite[Theorem 13.4.3]{CLS}), equation (\ref{eq:VarianceMuPolytopePolaireEq}) will follow from theorem \ref{theo:VarianceSpectreGeometrique}. By proposition \ref{prop:MuPDim2}, we have \begin{equation}\nonumber \widehat{\mu}_P =\sum_{i=1}^r (D_i^2 +\frac{\nu_i}{\nu_{i+1}} +\frac{\nu_{i+1}}{\nu_{i}})=\sum_{i=1}^r D_i^2 +\sum_{i=1}^r (\frac{\nu_{i-1}}{\nu_{i}} +\frac{\nu_{i+1}}{\nu_{i}}) \end{equation} and, as noticed above, \begin{equation}\nonumber \rho^* (-K_X)\rho^* (-K_X)=(\sum_{i=1}^r \nu_{i} D_i )^2 =\sum_{i=1}^r \nu_{i}^2 D_i^2 +\sum_{i=1}^r (\nu_i \nu_{i+1} +\nu_i \nu_{i-1}) \end{equation} so (\ref{eq:RhoChapeauEgalRho}) reads \begin{equation}\label{eq:RhoChapeauEgalRhoInterm} \sum_{i=1}^r (\nu_{i}^2 -1)D_i^2 = \sum_{i=1}^r (\nu_{i}^2 -1)(-\frac{\nu_{i-1}}{\nu_{i}} -\frac{\nu_{i+1}}{\nu_{i}}) \end{equation} Notice the following: \begin{itemize} \item if $v_{i-1}$, $v_{i}$ and $v_{i+1}$ are primitive generators of rays of $Y$ inside the same cone of the fan of $X$, we have \begin{equation}\nonumber \nu (v_{i-1}+v_{i+1})=(\frac{\nu_{i-1}}{\nu_{i}} +\frac{\nu_{i+1}}{\nu_{i}})\nu (v_{i}) \end{equation} because $\nu (v_{i-1})=\nu_{i-1}$, $\nu (v_{i})=\nu_{i}$ and $\nu (v_{i+1})=\nu_{i+1}$ and the Newton function is linear on each cone of the fan of $X$. Because $Y$ is smooth and complete, it follows that \begin{equation}\nonumber v_{i-1} +v_{i+1}=(\frac{\nu_{i-1}}{\nu_{i}} +\frac{\nu_{i+1}}{\nu_{i}})v_i \end{equation} and we get $D_{i}^{2}=-\frac{\nu_{i-1}}{\nu_{i}} -\frac{\nu_{i+1}}{\nu_{i}}$. \item otherwise, $v_i$ generates a ray of the fan of $X$, hence is a vertex of $P$, and $\nu_{i}=1$ by the Fano condition; the $i$-th terms on both sides of (\ref{eq:RhoChapeauEgalRhoInterm}) then vanish. \end{itemize} Equation (\ref{eq:RhoChapeauEgalRhoInterm}), hence equation (\ref{eq:RhoChapeauEgalRho}), follows from these two observations. \end{proof} \begin{example}\label{ex:EPP} Let $P$ be the convex hull of $(1,0)$, $(-\lambda_1 ,-\lambda_2 )$ and $(0,1)$ where $\lambda_{1}$ and $\lambda_{2}$ are relatively prime integers: this is the polytope of $\ppit (1,\lambda_{1} ,\lambda_{2})$, see section \ref{sec:EPP}. Then, $$\sum_{i=1}^{\mu_P}(\beta_i -1)^2 =\frac{1}{6}((1+\lambda_{1}+\lambda_{2})+\frac{(1+\lambda_{1}+\lambda_{2})^2}{\lambda_{1}\lambda_{2}})$$ \end{example} \begin{remark} Theorem \ref{theo:VarianceMuPolytopePolaire} is not true if we drop the Fano assumption: for instance, if $P$ is the polytope of $\ppit (1,\ell ,\ell )$, $\ell\geq 2$, we have $\widehat{\mu}_P =9+2(\ell -1)^2 /\ell$ and $\mu_{P^{\circ}}= (1+2\ell )^2 /\ell^2$, see examples \ref{ex:VarianceEPP}. Moreover, it follows from proposition \ref{prop:MuPDim2} that $\widehat{\mu_{\ell P}}=\widehat{\mu_{P}}$ if $\ell$ is an integer greater than or equal to one, and thus $\widehat{\mu_{P}}$ cannot be seen as a volume in general. \end{remark} Finally, we get the following statement for singularities: \begin{corollary}\label{coro:ConjFanoPolytope} Let $f$ be a nondegenerate and convenient Laurent polynomial on $(\cit^* )^2$ with spectrum at infinity $\alpha_1 , \cdots , \alpha_{\mu}$. Assume that the Newton polytope $P$ of $f$ is Fano. Then \begin{equation}\label{eq:VarianceMuPolytopePolaireSing} \sum_{i=1}^{\mu}(\alpha_i -1)^2 =\frac{\mu_P}{6} +\frac{\mu_{P^{\circ}}}{6} \end{equation} where $P^{\circ}$ is the polar polytope of $P$.
In particular, $\frac{1}{\mu}\sum_{i=1}^{\mu}(\alpha_i -1)^2 \geq \frac{1}{6}$.\hfill~~\mbox{$\Box$} \end{corollary} \section{Hertling's conjecture for regular functions} \label{sec:Conj} From theorem \ref{theo:VarianceMiroirToriqueChampetre} we get: \begin{proposition} Let $f$ be a convenient and nondegenerate Laurent polynomial with global Milnor number $\mu$ and spectrum at infinity $Spec_{f} (z)=\sum_{i=1}^{\mu}z^{\alpha_i}$. Let $P$ be its Newton polytope (assumed to be simplicial). Assume that $\widehat{\mu}_P\geq 0$. Then \begin{equation}\label{eq:VarianceSpectreInfini} \frac{1}{\mu}\sum_{i=1}^{\mu}(\alpha_i -\frac{n}{2})^2 \geq \frac{n}{12} \end{equation}\hfill~~\mbox{$\Box$} \end{proposition} \noindent In example \ref{ex:VarianceMiroirLisse}, corollary \ref{coro:VarianceDim2Vraie}, examples \ref{ex:VarianceEPP} and corollary \ref{coro:ConjFanoPolytope} we have $\widehat{\mu}_P\geq 0$, and we expect this to be a general rule. Notice that, if this expectation holds, inequality (\ref{eq:VarianceSpectreInfini}) is the best possible: for instance, in the situation of example \ref{ex:EPP} we have $$\frac{1}{\mu}\sum_{i=1}^{\mu}(\alpha_i -1)^2 =\frac{1}{6} +\frac{(1+\lambda_{1}+\lambda_{2})}{6\lambda_{1}\lambda_{2}}$$ and the last term on the right can be made as small as we want ({\em e.g.} $\lambda_1 =p$ and $\lambda_2 =p+1$ with $p$ large enough). This motivates the following conjecture (recall that $\alpha_1 =0$ and $\alpha_{\mu}=n$ if $f$ is a convenient and nondegenerate Laurent polynomial, see proposition \ref{prop:SpectreAlg01}), which has already been stated, without further comment, in \cite[Remark 4.15]{DoSa1} as a global counterpart of C. Hertling's conjecture for germs of holomorphic functions (see \cite{Her}, where the inequality is reversed). The tameness assumption is discussed in section \ref{sec:K}.\\ \noindent {\bf Conjecture on the variance of the spectrum (global version)} {\em Let $f$ be a regular, tame function on a smooth $n$-dimensional affine variety $U$. Then \begin{equation}\label{eq:Conj} \frac{1}{\mu}\sum_{i=1}^{\mu }(\alpha_{i}-\frac{n}{2})^{2}\geq\frac{1}{12}(\alpha_{\mu }-\alpha_{1}) \end{equation} where $\alpha_{1}\leq \cdots \leq\alpha_{\mu}$ is the (ordered) spectrum of $f$ at infinity.} \hfill~~\mbox{$\Box$} \\ \noindent This is another story, but, as suggested by example \ref{ex:VarianceMiroirLisse}, one should expect $$\frac{1}{\mu}\sum_{i=1}^{\mu}(\alpha_i -\frac{n}{2})^2 = \frac{\alpha_{\mu }-\alpha_{1}}{12}$$ if $f$ belongs to the ideal generated by its partial derivatives (this is the case for quasi-homogeneous polynomials, see \cite{Dimca} and \cite{Her}) because, under mirror symmetry, the multiplication by $f$ on its Jacobi ring corresponds to the cup-product by $c_{1}(X)$ on the cohomology algebra. Example: $f(x,y)=xy(x-1)$, for which we have $\mu =2$ and $\alpha_1 =\alpha_2 =1$.
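For this last example, the stated values can be checked by a direct computation (we sketch it for the reader's convenience; all steps are elementary): the partial derivatives of $f$ are \begin{equation}\nonumber \frac{\partial f}{\partial x}=y(2x-1),\ \frac{\partial f}{\partial y}=x(x-1), \end{equation} so the critical points of $f$ are $(0,0)$ and $(1,0)$, both nondegenerate, and $\mu =2$. Moreover, $f=y\frac{\partial f}{\partial y}$ belongs to the ideal generated by the partial derivatives and, since $\alpha_1 =\alpha_2 =1$ and $n=2$, both sides of the expected equality above vanish.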
\section{Introduction} As the prevalence of interconnected devices grows, vulnerable communication networks must be able to counter the actions of malicious actors; a unified understanding of the fundamental communication limits of these networks is therefore paramount. The correction of errors introduced by adversaries in networks has been studied in a number of previous works. Cai and Yeung give generalizations of several classical coding bounds to the network setting in \cite{YC06,CY06}. Refined bounds and related code constructions for adversarial networks are presented in, e.g., \cite{YY07, JLKHKM07, M07, YNY07, YYZ08, RK18}. The work most closely related to this paper is \cite{RK18}, where a unified combinatorial framework for adversarial networks and a method for porting point-to-point coding-theoretic results to the network setting are established. In contrast to works that address random errors in networks, or a combination of random and adversarial errors, \cite{RK18} focuses purely on adversarial, or \textit{worst-case}, errors. The results presented here assume the same model in a single-use regime. We focus on networks whose inputs are drawn from a finite alphabet and whose intermediate nodes may process information before forwarding. We assume that an omniscient adversary can corrupt up to some fixed number of alphabet symbols sent along a subset of network edges. The \textit{one-shot capacity} of such an adversarial network measures the number of symbols that can be sent with zero error during a single transmission round. A universal approach to forming \textit{cut-set bounds}, which are derived by reducing the capacity problem to a minimization across cut-sets of the underlying directed graph of the network, is presented in \cite{RK18}. Any coding-theoretic bound may be ported to the networking setting, including the famous \textit{Singleton Bound}. By exhibiting a minimal example, we show that even when the Singleton bound gives the best established upper bound on one-shot capacity for a network, it is not always tight (regardless of the size of the network alphabet). Our example, which we call the \textit{Diamond Network}, requires that a single symbol be sacrificed to the task of locating the adversary within the network. Interestingly, this requirement results in a non-integer-valued one-shot capacity. We note that the requirement that the receiver locate the adversary is related to the problem of \textit{authentication} in networks (see, e.g. \cite{KK16,SBDP19, BGKKY20}). In our capacity-achieving scheme for the Diamond Network, one intermediate vertex must be able to either sound an alarm (if the adversary is detected), or decode correctly (when the adversary is absent). On the other hand, in our presented scheme for a modification of the Diamond Network, called the \textit{Mirrored Diamond Network}, the way in which intermediate vertices sound the alarm must simultaneously serve as the way in which a particular alphabet symbol is transmitted. This interplay between authentication and correction is reminiscent of the work in \cite{BGKKY20}, where the idea of \textit{partial correction} over arbitrarily-varying multiple-access channels is introduced. This paper is organized as follows. In Section \ref{sec:prelims} we introduce necessary notation and background. Sections \ref{sec:min-achiev} and \ref{sec:min-conv} together establish the exact one-shot capacity of the Diamond Network, proving that the Singleton Cut-Set Bound is not tight. 
In Section \ref{sec:mirrored-diamond}, we establish the (bound-achieving) one-shot capacity of the Mirrored Diamond Network. Section \ref{sec:2-level} expands our focus to the broader class of \textit{two-level} networks, and gives a sufficient condition for a network in this class to meet the best cut-set bound. We conclude and give future directions in Section \ref{sec:conclusion}. \section{Preliminaries} \label{sec:prelims} We introduce the terminology and notation for the remainder of the paper. We start by formally defining communication networks as in~\cite{RK18}. \begin{definition} A (\textbf{single-source communication}) \textbf{network} is a 4-tuple $\mN=(\mV,\mE,S, {\bf T})$, where: \begin{itemize} \item[(A)] $(\mV,\mE)$ is a finite, directed and acyclic multigraph; \item[(B)] $S \in \mV$ is the \textbf{source}; \item[(C)] ${\bf T} \subseteq \mV$ is the set of \textbf{terminals}. \end{itemize} We also assume the following: \begin{itemize} \item[(D)] $|{\bf T}| \ge 1$ and $S \notin {\bf T}$; \item[(E)] there exists a directed path from $S$ to any $T \in {\bf T}$; \item[(F)] for every $V \in \mV \setminus (\{S\} \cup {\bf T})$ there exists a directed path from $S$ to $V$ and from $V$ to some terminal $T \in {\bf T}$. \end{itemize} The elements of $\mV$ are called \textbf{vertices} or \textbf{nodes}, and those of~$\mE$ are called \textbf{edges}. The elements of $\mV \setminus (\{S\} \cup {\bf T})$ are the \textbf{intermediate vertices}/\textbf{nodes}. The set of incoming and outgoing edges for a vertex $V$ are denoted by $\inn(V)$ and $\out(V)$, respectively. Their cardinalities are the \textbf{indegree} and \textbf{outdegree} of $V$, which are denoted by $\ein(V)$ and $\eout(V)$, respectively. \end{definition} Our communication model is as follows: all edges of a network $\mN$ can carry precisely one element from a set $\mA$ of cardinality at least 2, which we call the \textbf{alphabet}. The vertices of the network collect alphabet symbols over the incoming edges, process them according to \textit{functions}, and send the outputs over the outgoing edges. Vertices are \textit{memoryless} and transmissions are \textit{delay-free}. We model errors as being introduced by an adversary $\adv$, who can corrupt the value of up to $t$ edges from a fixed set $\mU \subseteq \mE$. An alphabet symbol sent along one of the edges in $\mU$ can be changed to any other alphabet symbol at the discretion of the adversary. In particular, the noise we consider is \textit{not} probabilistic in nature, but rather worst-case: we focus on correcting \textit{any} error pattern that can be introduced by the adversary. We call the pair $(\mN,\adv)$ an \textbf{adversarial network}. It is well-known that an acyclic directed graph $(\mV,\mE)$ defines a partial order on the set of its edges, $\mE$. More precisely, $e_1 \in \mE$ \textbf{precedes} $e_2 \in \mE$ (in symbols, $e_1 \preccurlyeq e_2$) if there exists a directed path in $(\mV,\mE)$ whose first edge is~$e_1$ and whose last edge is $e_2$. We may extend this partial order to a total order on $\mE$, which we fix once and for all and denote by $\le$. It is important to note that the results in this paper do not depend on the particular choice of $\le$. \begin{definition} Let $\mN=(\mV,\mE,S,{\bf T})$ be a network. A \textbf{network code} $\mF$ for $\mN$ is a family of functions $\{\mF_V \mid V \in \mV \setminus (\{S\} \cup {\bf T})\}$, where $\mF_V: \mA^{\ein(V)} \to \mA^{\eout(V)}$ for all $V$.
\end{definition} A network code $\mF$ describes how the vertices of a network~$\mN$ process the inputs received on the incoming edges. There is a unique interpretation for these operations thanks to the choice of the total order $\le$. \begin{definition}\label{def:isolates} Let $\mN=(\mV,\mE,S,{\bf T})$ be a network and let $\mU,\mU' \subseteq \mE$ be non-empty subsets. We say that $\mU$ \textbf{precedes}~$\mU'$ if every path from $S$ to an edge of $\mU'$ contains an edge from~$\mU$. \end{definition} Our next step is to define outer codes for a network and give necessary and sufficient conditions for decodability. We do this by introducing the notion of an adversarial channel as proposed in \cite[Section IV.B]{RK18}. \begin{notation} Let $(\mN,\adv)$ be an adversarial network with $\mN=(\mV,\mE,S,{\bf T})$ and let $\mU,\mU' \subseteq \mE$ be non-empty such that $\mU$ {precedes}~$\mU'$. Let $\mF$ be a network code for~$\mN$. For $\mathbf{x} \in \mA^{|\mU|}$, we denote by \begin{equation} \label{nnn}\Omega[\mN,\adv,\mF,\mU \to \mU'](\mathbf{x}) \subseteq \mA^{|\mU'|} \end{equation} the set of vectors over the alphabet that can exit the edges of $\mU'$ when: \begin{itemize} \item the coordinates of the vector $\mathbf{x}$ are the alphabet values entering the edges of $\mU$, \item vertices process information according to $\mF$, \item everything is interpreted according to the total order $\le$. \end{itemize} Note that~\eqref{nnn} is well-defined because $\mU$ precedes $\mU'$. Furthermore, $\mU\cap \mU'$ need not be empty. We refer to the discussion following~\cite[Definition~41]{RK18}; see also~\cite[Example 42]{RK18}. \end{notation} \begin{example} \label{eee} Let $(\mN,\adv)$ be the network in Figure~\ref{fig:eee}, where the edges are ordered according to their indices. We consider an adversary capable of corrupting up to one of the dashed edges. At each intermediate node $V\in \{ V_{1},V_{2}\}$, let $\mF_{V}$ be the identity function. Then, for example, for $\mathbf{x}=(x_1,x_2,x_3) \in \mA^3$ we have that \[\Omega[\mN,\adv,\mF,\{e_1,e_2,e_3\} \to \{e_2,e_4,e_5\}](\mathbf{x}) \subseteq \mA^3\] is the set of all alphabet vectors $\mathbf{y}=(y_1,y_2,y_3) \in \mA^3$ for which $\dH((y_2,y_1,y_3),(x_1,x_2,x_3)) \le 1$, where $\dH$ denotes the Hamming distance.
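To make this fully concrete (the binary alphabet and the input vector below are chosen only for illustration), take $\mA=\{0,1\}$ and $\mathbf{x}=(0,1,0)$, and read the values carried by $e_2$, $e_4$, $e_5$ in this order. The set above then consists of the four vectors \[(1,0,0),\ (1,1,0),\ (0,0,0),\ (1,0,1),\] obtained when the adversary corrupts no edge, $e_1$, $e_2$, or $e_3$, respectively.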
\begin{figure}[htbp] \centering \begin{tikzpicture} \tikzset{vertex/.style = {shape=circle,draw,inner sep=0pt,minimum size=1.9em}} \tikzset{nnode/.style = {shape=circle,fill=myg,draw,inner sep=0pt,minimum size=1.9em}} \tikzset{edge/.style = {->,> = stealth}} \tikzset{dedge/.style = {densely dotted,->,> = stealth}} \tikzset{ddedge/.style = {dashed,->,> = stealth}} \node[vertex] (S1) {$S$}; \node[shape=coordinate,right=\mynodespace of S1] (K) {}; \node[nnode,above=0.5\mynodespace of K] (V1) {$V_1$}; \node[nnode,below=0.5\mynodespace of K] (V2) {$V_2$}; \node[vertex,right=\mynodespace of K] (T) {$T$}; \draw[ddedge,bend left=0] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_1$} (V1); \draw[ddedge,bend left=0] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_2$} (T); \draw[ddedge,bend right=0] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_3$} (V2); \draw[edge,bend left=0] (V1) to node[near start,sloped,fill=white, inner sep=1pt]{\small $e_4$} (T); \draw[edge,bend left=0] (V2) to node[sloped,fill=white, inner sep=1pt]{\small $e_{5}$} (T); \end{tikzpicture} \caption{{{Network for Example~\ref{eee}.}}}\label{fig:eee} \end{figure} \end{example} We now define error-correcting codes in the context of adversarial networks. Informally, a code consists of the alphabet vectors that may be emitted by the source. \begin{definition} An (\textbf{outer}) \textbf{code} for a network $\mN=(\mV,\mE,S,{\bf T})$ is a subset $C \subseteq \mA^{\eout(S)}$ with $|C| \ge 1$. If $\mF$ is a network code for $\mN$ and $\adv$ is an adversary, then we say that $C$ is \textbf{unambiguous} (or \textbf{good}) for $(\mN,\adv,\mF)$ if for all $\mathbf{x}, \mathbf{x}' \in C$ with $\mathbf{x} \neq \mathbf{x}'$ and for all $T \in {\bf T}$ we have \begin{equation*} \Omega[\mN,\adv,\mF,\out(S) \to \inn(T)](\mathbf{x}) \, \cap \Omega[\mN,\adv,\mF,\out(S) \to \inn(T)](\mathbf{x}') = \emptyset. \end{equation*} \end{definition} The last condition in the above definition guarantees that every element of $C$ can be uniquely recovered by every terminal, despite the action of the adversary. Finally, we define the one-shot capacity of an adversarial network. \begin{definition} The (\textbf{one-shot}) \textbf{capacity} of an adversarial network $(\mN,\adv)$ is the maximum $\alpha \in \R$ for which there exists a network code $\mF$ and an unambiguous code $C$ for $(\mN,\adv,\mF)$ with $\alpha=\log_{|\mA|}(|C|)$. We denote this maximum value by $\CA(\mN,\adv)$. \end{definition} In~\cite{RK18}, a general method was developed to ``lift'' bounds for Hamming-metric channels to the networking context; it allows any classical coding bound to be carried over to the network setting. The next result states the lifted version of the well-known Singleton bound. Recall that an edge-cut between source $S$ and terminal $T$ is a set of edges whose removal would separate $S$ from~$T$. \begin{theorem}[The Singleton Cut-Set Bound] \label{cutset} Let $\mN$ be a network with edge set $\mE$. Assume an adversary $\adv$ can corrupt up to $t \ge 0$ edges from a subset $\mU \subseteq \mE$. Then $$\CA(\mN,\adv) \le \min_{T \in {\bf T}} \min_{\mE'} \left( |\mE' \setminus \mU| +\max\{0, |\mE' \cap \mU|-2t\} \right),$$ where $\mE' \subseteq \mE$ ranges over all edge-cuts between $S$ and $T$. \end{theorem} \section{The Diamond Network: Achievability} \label{sec:min-achiev} We present a minimal example of a network for which the best bound in~\cite{RK18}, namely the Singleton Cut-Set Bound, is not sharp.
The example will serve to illustrate the necessity of performing \textit{partial} decoding at the intermediate nodes in order to achieve capacity. \begin{example}[The Diamond Network] \label{minimal} The network $\mD$ of Figure~\ref{figminimal} has one source $S$, one terminal~$T$, and two intermediate vertices $V_1$ and $V_2$. The vertices are connected as in the figure. We consider an adversary $\adv_\mD$ able to corrupt at most one of the dashed edges, and we call the pair $(\mD,\adv_\mD)$ the \textbf{Diamond Network}. \begin{figure}[htbp] \centering \begin{tikzpicture} \tikzset{vertex/.style = {shape=circle,draw,inner sep=0pt,minimum size=1.9em}} \tikzset{nnode/.style = {shape=circle,fill=myg,draw,inner sep=0pt,minimum size=1.9em}} \tikzset{edge/.style = {->,> = stealth}} \tikzset{dedge/.style = {densely dotted,->,> = stealth}} \tikzset{ddedge/.style = {dashed,->,> = stealth}} \node[vertex] (S1) {$S$}; \node[shape=coordinate,right=\mynodespace of S1] (K) {}; \node[nnode,above=0.5\mynodespace of K] (V1) {$V_1$}; \node[nnode,below=0.5\mynodespace of K] (V2) {$V_2$}; \node[vertex,right=\mynodespace of K] (T) {$T$}; \draw[ddedge,bend left=0] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_1$} (V1); \draw[ddedge,bend left=15] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_2$} (V2); \draw[ddedge,bend right=15] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_3$} (V2); \draw[edge,bend left=0] (V1) to node[near start,sloped,fill=white, inner sep=1pt]{\small $e_4$} (T); \draw[edge,bend left=0] (V2) to node[sloped,fill=white, inner sep=1pt]{\small $e_{5}$} (T); \end{tikzpicture} \caption{{{The Diamond Network of Example~\ref{minimal}.}}}\label{figminimal} \end{figure} \end{example} For the Diamond Network, the best bound among those proved in~\cite{RK18} is the Singleton Cut-Set Bound, given below. \begin{corollary} \label{csD} For the Diamond Network $(\mD,\adv_\mD)$, $\CA(\mD,\adv_\mD) \le 1$. \end{corollary} We will prove in this section and the next that the Diamond Network has capacity \begin{equation}\label{DNC} \CA(\mD,\adv_\mD) = \log_{|\mA|}(|\mA|-1). \end{equation} This shows that the bounds of~\cite{RK18} are not sharp in general. The results presented in the remainder of the paper offer an intuitive explanation for why Corollary~\ref{csD} is not sharp: in order to achieve the capacity of the Diamond Network, one alphabet symbol needs to be reserved to implement an adversary detection strategy. We will elaborate on this idea following the proof of achievability, given below. \begin{proposition} \label{achiev} For the Diamond Network $(\mD,\adv_\mD)$, $\CA(\mD,\adv_\mD) \ge \log_{|\mA|}(|\mA|-1)$. \end{proposition} \begin{proof} We isolate a symbol $* \in \mA$ and define $\mA'=\mA \setminus \{*\}$. Consider the scheme where the source~$S$ can send any symbol of $\mA'$ via a three-times repetition code over its outgoing edges. Vertex $V_1$ simply forwards the received input, while vertex $V_2$ proceeds as follows: If the two received inputs coincide and are equal to $a \in \mA'$, then it forwards $a$. Otherwise, it transmits~$*$. To see that any symbol from $\mA'$ can be uniquely decoded, let the terminal proceed as follows: if the symbol received on $e_5$ is not $*$, output it; otherwise, output the symbol received on $e_4$. Since at most one edge is corrupted, $V_2$ outputs a symbol $a \in \mA'$ only if $a$ is the sent symbol, and it outputs $*$ only if $e_2$ or $e_3$ was corrupted, in which case $e_1$, and hence $e_4$, carries the sent symbol. The proposed scheme is therefore unambiguous. This concludes the proof.
\end{proof} The communication strategy on which the previous proof is based reserves an alphabet symbol $* \in \mA$ to pass information about the location of the adversary (more precisely, the symbol~$*$ reveals whether or not the adversary is acting on the lower ``stream'' of the Diamond Network). Note that the source is not allowed to emit the reserved symbol~$*$, rendering $\log_{|\mA|}(|\mA|-1)$ the maximum rate achievable by this scheme. It is then natural to ask whether the reserved symbol~$*$ can simultaneously be a part of the source's codebook, achieving a rate of $1=\log_{|\mA|}(|\mA|)$, that is, one transmitted symbol per channel use. In the next section, we will formally answer this question in the negative; see Proposition~\ref{prop:conv}. In Section~\ref{sec:mirrored-diamond}, we consider a modification of the Diamond Network and present a scheme where one symbol is reserved for adversary detection, but can nonetheless also be used as a message symbol. \section{The Diamond Network: The Converse} \label{sec:min-conv} In this section, we establish an inequality for the cardinality of any unambiguous code $C$ for the Diamond Network. The inequality is quadratic in the code's size and implies that $|C| \le |\mA|-1$. Together with Proposition~\ref{achiev}, this computes the exact capacity of $(\mD,\adv_\mD)$. \begin{proposition} \label{prop:conv} Let $\mF$ be a network code for $(\mD,\adv_\mD)$ and let $C \subseteq \mA^3$ be an outer code. If $C$ is unambiguous for $(\mD,\adv_\mD,\mF)$, then $$|C|^2 + |C| -1-|\mA|^2 \le 0.$$ In particular, we have $|C| \le |\mA|-1$. \end{proposition} \begin{proof} The argument is organized into various claims. We denote by $\pi: \mA^3 \to \mA$ the projection onto the first coordinate. \begin{claim}\label{clA} We have $|\pi(C)|=|C|$. \end{claim} \begin{clproof} This follows from the fact that $C \subseteq \mA^3$ must have minimum Hamming distance 3 in order to be unambiguous, as one can easily check. \end{clproof} \begin{claim}\label{clB} The restriction of $\mF_{V_1}$ to $\pi(C)$ is injective. \end{claim} \begin{clproof} Suppose by contradiction that there exist $\mathbf{x},\mathbf{y} \in C$ with $\pi(\mathbf{x}) \neq \pi(\mathbf{y})$ and $\mF_{V_1}(\pi(\mathbf{x})) = \mF_{V_1}(\pi(\mathbf{y}))$. Then it is easy to see that the sets $\Omega[\mD,\adv_\mD,\mF,\out(S) \to \inn(T)](\mathbf{x})$ and $\Omega[\mD,\adv_\mD,\mF,\out(S) \to \inn(T)](\mathbf{y})$ intersect non-trivially. Indeed, if $\mathbf{x}=(x_1,x_2,x_3)$ and $\mathbf{y}=(y_1,y_2,y_3)$, then the final output $$(\mF_{V_1}(x_1), \mF_{V_2}(x_2,y_3)) \in \mA^2$$ belongs to both sets. \end{clproof} We now concentrate on the transfer from the edges in $\{e_1,e_2,e_3\}$ to $e_5$. To simplify the notation, let $$\Omega:=\Omega[\mD,\adv_\mD,\mF,\{e_1,e_2,e_3\} \to \{e_5\}],$$ which is well-defined because $\{e_1,e_2,e_3\}$ precedes $e_5$; see Definition~\ref{def:isolates}. \begin{claim}\label{clC} There exists at most one codeword $\mathbf{x} \in C$ for which the cardinality of $\Omega(\mathbf{x})$ is 1. \end{claim} \begin{clproof} Towards a contradiction, suppose that there are $\mathbf{x},\mathbf{y} \in C$ with $\mathbf{x} \neq \mathbf{y}$ and $|\Omega(\mathbf{x})|=|\Omega(\mathbf{y})|=1$. We write $\Omega':=\Omega[\mD,\adv_\mD,\mF,\{e_1,e_2,e_3\} \to \{e_2,e_3\}]$ and observe that $|\mF_{V_2}(\Omega'(\mathbf{x}))| = |\mF_{V_2}(\Omega'(\mathbf{y}))|=1$. Let $\mathbf{x}=(x_1,x_2,x_3)$, $\mathbf{y}=(y_1,y_2,y_3)$.
Since $(x_2,x_3), (x_2,y_3) \in \Omega'(\mathbf{x})$ and $(y_2,y_3), (x_2,y_3) \in \Omega'(\mathbf{y})$, we have $$\mF_{V_2}(x_2,x_3)=\mF_{V_2}(x_2,y_3)=\mF_{V_2}(y_2,y_3).$$ By observing that the adversary may corrupt the symbol sent on $e_{1}$, this implies that the sets $\Omega[\mD,\adv_\mD,\mF,\out(S) \to \inn(T)](\mathbf{x})$ and $\Omega[\mD,\adv_\mD,\mF,\out(S) \to \inn(T)](\mathbf{y})$ intersect non-trivially, a contradiction. \end{clproof} To simplify the notation further, denote the transfer from $S$ to $T$ by $$\Omega'':=\Omega[\mD,\adv_\mD,\mF,\{e_1,e_2,e_3\} \to \{e_4,e_5\}].$$ Since $C$ is unambiguous, we have \begin{equation} \label{bb} \sum_{\mathbf{x} \in C} |\Omega''(\mathbf{x})| \le |\mA|^2. \end{equation} For all $\mathbf{x} \in C$, write $\Omega''(\mathbf{x}) = \Omega_1''(\mathbf{x}) \cup \Omega_2''(\mathbf{x})$, where \begin{align*} \Omega_1''(\mathbf{x}) &=\{\mathbf{z} \in \Omega''(\mathbf{x}) \mid z_1=\mF_{V_1}(x_1)\}, \\ \Omega_2''(\mathbf{x})&=\{\mathbf{z} \in \Omega''(\mathbf{x}) \mid z_2=\mF_{V_2}(x_2,x_3)\}. \end{align*} By definition, we have $$|\Omega''(\mathbf{x})| = |\Omega''_1(\mathbf{x})| + |\Omega''_2(\mathbf{x})| -1.$$ Summing the previous identity over all $\mathbf{x} \in C$ and using Claims~\ref{clA}, \ref{clB} and \ref{clC} we find \begin{align*} \sum_{\mathbf{x} \in C}|\Omega''(\mathbf{x})| &\ge 1+ 2(|C|-1) + \sum_{\mathbf{x} \in C} |C| - |C|\\ &= 2|C| -1 +|C|^2 -|C| \\ &= |C|^2+|C|-1. \end{align*} Combining this with~\eqref{bb}, we find $|C|^2+|C|-1 \le |\mA|^2$, which is the desired inequality. \end{proof} We can now compute the capacity of the Diamond Network by combining Propositions~\ref{achiev} and~\ref{prop:conv}. \begin{theorem} For the Diamond Network $(\mD,\adv_\mD)$, $\CA(\mD,\adv_\mD) = \log_{|\mA|}(|\mA|-1)$. \end{theorem} The Diamond Network is admittedly a small example. However, we believe that it will provide valuable insight into the general behavior of the one-shot capacity of larger networks. \section{The Mirrored Diamond Network} \label{sec:mirrored-diamond} It is interesting to observe that by adding a single edge to the Diamond Network as in Figure~\ref{fig:no-conv}, the capacity is exactly the one predicted by the Singleton Cut-Set Bound of Theorem~\ref{cutset}. We call the network in Figure~\ref{fig:no-conv} the \textbf{Mirrored Diamond Network}. Again, the adversary can corrupt at most one edge from the dashed ones. The notation for the network-adversary pair is~$(\mS,\adv_\mS)$. 
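Both the scheme of Proposition~\ref{achiev} and the scheme for the Mirrored Diamond Network given below are small enough to be verified mechanically. The following Python sketch (our own illustration, with hypothetical function names; it is a sanity check rather than part of the formal argument) enumerates all messages and all admissible single-edge corruptions of the Diamond Network and confirms that the decoder from the proof of Proposition~\ref{achiev} always recovers the transmitted symbol:
\begin{verbatim}
# Exhaustive check of the Diamond Network scheme of Proposition "achiev".
# Alphabet is {0, ..., q-1}; the reserved symbol '*' is encoded as q-1.

def f_v2(a, b, star):
    # V_2 forwards the common symbol, or '*' upon a mismatch.
    return a if a == b else star

def decode(z4, z5, star):
    # T trusts e_5 unless it carries '*'; then e_4 must be clean.
    return z5 if z5 != star else z4

def outputs(x, q):
    # All (e_4, e_5) pairs reachable when the source sends x and the
    # adversary corrupts at most one of e_1, e_2, e_3.
    star = q - 1
    outs = {(x, f_v2(x, x, star))}              # no corruption
    for edge in (1, 2, 3):
        for forged in range(q):
            vals = [x, x, x]
            vals[edge - 1] = forged
            e1, e2, e3 = vals
            outs.add((e1, f_v2(e2, e3, star)))  # V_1 forwards e_1
    return outs

q = 5
star = q - 1
for x in range(q - 1):                          # source never emits '*'
    assert all(decode(z4, z5, star) == x for z4, z5 in outputs(x, q))
print("unambiguous for all", q - 1, "messages")
\end{verbatim}
The same exhaustive check, adapted to a four-times repetition code and the decoder described in the proof below, confirms that the Mirrored Diamond Network scheme is unambiguous on all of $\mA$.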
\begin{figure}[htbp] \centering \begin{tikzpicture} \tikzset{vertex/.style = {shape=circle,draw,inner sep=0pt,minimum size=1.9em}} \tikzset{nnode/.style = {shape=circle,fill=myg,draw,inner sep=0pt,minimum size=1.9em}} \tikzset{edge/.style = {->,> = stealth}} \tikzset{dedge/.style = {densely dotted,->,> = stealth}} \tikzset{ddedge/.style = {dashed,->,> = stealth}} \node[vertex] (S1) {$S$}; \node[shape=coordinate,right=\mynodespace of S1] (K) {}; \node[nnode,above=0.5\mynodespace of K] (V1) {$V_1$}; \node[nnode,below=0.5\mynodespace of K] (V2) {$V_2$}; \node[vertex,right=\mynodespace of K] (T) {$T$}; \draw[ddedge,bend left=15] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_1$} (V1); \draw[ddedge,bend right=15] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_2$} (V1); \draw[ddedge,bend left=15] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_3$} (V2); \draw[ddedge,bend right=15] (S1) to node[sloped,fill=white, inner sep=1pt]{\small $e_4$} (V2); \draw[edge,bend left=0] (V1) to node[near start,sloped,fill=white, inner sep=1pt]{\small $e_5$} (T); \draw[edge,bend left=0] (V2) to node[sloped,fill=white, inner sep=1pt]{\small $e_{6}$} (T); \end{tikzpicture} \caption{{{The Mirrored Diamond Network.}}}\label{fig:no-conv} \end{figure} \begin{proposition} \label{prop:sym-dia} We have $\CA(\mS,\adv_\mS) = 1$. \end{proposition} \begin{proof} By Theorem \ref{cutset}, $\CA(\mS,\adv_\mS) \leq 1$, so we need only prove achievability. Select $* \in \mA$, and consider the scheme where the source $S$ sends any symbol of $\mA$ via a four-times repetition code. Vertices $V_1$ and $V_2$ both proceed as follows: If the two received inputs coincide and are equal to $a \in \mA$, the vertex forwards $a$; otherwise it transmits $*$. At $T$, if the received symbols match and are equal to $a\in \mA$, decode to $a$. Otherwise, decode to the symbol that is not equal to $*$. It is clear that any symbol from $\mA$ can be uniquely decoded, including $*$, showing that the proposed scheme is unambiguous. This concludes the proof. \end{proof} Note that, as in the proof of Proposition~\ref{achiev}, the above scheme uses an alphabet symbol to pass information about the location of the adversary. In strong contrast with the Diamond Network however, in the Mirrored Diamond Network this strategy comes at no cost, as the ``reserved'' alphabet symbol can be used by the source like any other symbol. \section{Two-Level Networks} \label{sec:2-level} We initiate a systematic study of communication with restricted adversaries. Since a global treatment is out of reach at the moment, we start by concentrating on a small but sufficiently interesting family of highly structured networks. These are defined as follows. \begin{definition} A \textbf{two-level network} is a network $\mN=(\mV,\mE,S,\{T\})$ with a single terminal $T$ such that any path from $S$ to $T$ is of length $2$. 
\end{definition} \begin{figure}[htbp] \centering \begin{tikzpicture} \tikzset{vertex/.style = {shape=circle,draw,inner sep=0pt,minimum size=1.9em}} \tikzset{nnode/.style = {shape=circle,fill=myg,draw,inner sep=0pt,minimum size=1.9em}} \tikzset{edge/.style = {->,> = stealth}} \tikzset{dedge/.style = {densely dotted,->,> = stealth}} \tikzset{ddedge/.style = {dashed,->,> = stealth}} \node[vertex] (S1) {$S$}; \node[shape=coordinate,right=\mynodespace of S1] (K) {}; \node[nnode,above=0.7\mynodespace of K] (V1) {$V_1$}; \node[nnode,above=0.1\mynodespace of K] (V2) {$V_2$}; \node[below=0\mynodespace of K] {$\vdots$}; \node[nnode,below=0.6\mynodespace of K] (Vn) {$V_n$}; \node[vertex,right=\mynodespace of K] (T) {$T$}; \draw[ddedge,bend left=10] (S1) to node[sloped,fill=white, inner sep=1pt]{} (V1); \draw[ddedge,bend right=10] (S1) to node[sloped,fill=white, inner sep=1pt]{} (V1); \draw[ddedge,bend left=10] (S1) to node[sloped,fill=white, inner sep=1pt]{} (V2); \draw[ddedge,bend right=10] (S1) to node[sloped,fill=white, inner sep=1pt]{} (V2); \draw[ddedge,bend left=15] (S1) to node[sloped,fill=white, inner sep=1pt]{} (Vn); \draw[ddedge,bend left=5] (S1) to node[sloped,fill=white, inner sep=1pt]{} (Vn); \draw[ddedge,bend right=5] (S1) to node[sloped,fill=white, inner sep=1pt]{} (Vn); \draw[ddedge,bend right=16] (S1) to node[sloped,fill=white, inner sep=1pt]{} (Vn); \draw[edge,bend left=16] (V1) to node[sloped, inner sep=1pt]{} (T); \draw[edge,bend left=3] (V1) to node[sloped, inner sep=1pt]{} (T); \draw[edge,bend right=10] (V1) to node[sloped, inner sep=1pt]{} (T); \draw[edge,bend left=10] (V2) to node[sloped, inner sep=1pt]{} (T); \draw[edge,bend right=10] (V2) to node[sloped, inner sep=1pt]{} (T); \draw[edge,bend left=10] (Vn) to node[sloped, inner sep=1pt]{} (T); \draw[edge,bend right=10] (Vn) to node[sloped, inner sep=1pt]{} (T); \end{tikzpicture} \caption{\label{fig:2level-ex} An example of a two-level network where vulnerable edges are restricted to those in the first level. In general, there may be any number of edges between the source/sink and each intermediate node.} \end{figure} An example of a two-level network is given in Figure \ref{fig:2level-ex}. By applying the Singleton Cut-Set Bound of Theorem \ref{cutset} to two-level networks with vulnerable edges restricted to the first level, we establish the following bound. \begin{theorem} \label{cor:cut-set-2-level} Consider a two-level network $\mN$ where the adversary $\adv$ can act on up to $t$ edges of the first level. Then, $C_{1}(\mN,\adv)$ is upper bounded by the following value: \[\min_{\mV_{1},\mV_{2}}\left(\sum_{V_{i}\in \mV_{1}}\eout(V_{i})+\max\left\{0,\sum_{V_{i}\in \mV_{2}}\ein(V_{i})-2t\right\}\right),\] where the minimum is taken over all 2-partitions $\mV_{1}, \mV_{2}$ of the set of intermediate vertices $\{V_{1},\ldots,V_{n}\}$. \end{theorem} To understand when the Singleton Cut-Set Bound is achievable in a two-level network, we introduce the following terminology. \begin{definition} Consider a network where an adversary can act simultaneously on up to $t$ edges. We call an intermediate vertex in the network \textbf{damming} if \[\eout(V_{i})+1 \leq \ein(V_{i})\leq \eout(V_{i})+2t-1.\] \end{definition} Notice that if the adversary can change at most one symbol, the above definition reduces to $\ein(V_{i})= \eout(V_{i})+1$; such a vertex is present in both the Diamond Network and the Mirrored Diamond Network. 
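Since the bound of Theorem~\ref{cor:cut-set-2-level} minimizes over all $2$-partitions of the intermediate vertices, it can be evaluated by brute force for small networks. The following Python sketch (our own illustration; the function names are hypothetical) computes the bound and flags damming vertices, reproducing Corollary~\ref{csD} for the Diamond Network:
\begin{verbatim}
from itertools import combinations

def cut_set_bound(e_in, e_out, t):
    # Bound of Theorem "cor:cut-set-2-level": minimize over 2-partitions
    # (V1, V2) the quantity sum_{V1} e_out + max(0, sum_{V2} e_in - 2t).
    n = len(e_in)
    best = float("inf")
    for r in range(n + 1):
        for v1 in combinations(range(n), r):
            v2 = [i for i in range(n) if i not in v1]
            val = sum(e_out[i] for i in v1) \
                + max(0, sum(e_in[i] for i in v2) - 2 * t)
            best = min(best, val)
    return best

def damming_vertices(e_in, e_out, t):
    # Vertices with e_out + 1 <= e_in <= e_out + 2t - 1.
    return [i for i in range(len(e_in))
            if e_out[i] + 1 <= e_in[i] <= e_out[i] + 2 * t - 1]

# Diamond Network: V_1 has (e_in, e_out) = (1, 1), V_2 has (2, 1), t = 1.
print(cut_set_bound([1, 2], [1, 1], 1))      # 1, matching Corollary "csD"
print(damming_vertices([1, 2], [1, 1], 1))   # [1]: V_2 is damming
# Mirrored Diamond Network: both vertices have (e_in, e_out) = (2, 1).
print(cut_set_bound([2, 2], [1, 1], 1))      # 1, achieved by Prop. "prop:sym-dia"
\end{verbatim}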
\begin{theorem} \label{thm:2-level} In a two-level network where an adversary can act on up to $t$ edges of the first level, if no intermediate vertex is damming, then the Singleton Cut-Set Bound is achievable for sufficiently large alphabet size. \end{theorem} \begin{proof} Suppose that no intermediate vertex is damming. That is, for every intermediate vertex $V_{i}\in \{V_{1},\ldots,V_{n}\}$, either $\ein(V_{i})\leq \eout(V_{i})$ or $\ein(V_{i})\geq \eout(V_{i})+2t$. In this case, the minimum in the bound of Theorem \ref{cor:cut-set-2-level} is attained at \begin{align*} \mV_{1}&=\{V_{i} \mid \ein(V_{i})\geq\eout(V_{i})+2t\},\\ \mV_{2}&=\{V_{i} \mid \ein(V_{i})\leq\eout(V_{i})\}. \end{align*} We exhibit a scheme that achieves the Singleton Cut-Set Bound. Choose a sufficiently large alphabet (determined by the required MDS codes below), and let $\mV_{1}$ and $\mV_{2}$ be as above. On $\eout(V_{i})+2t$ of the $\ein(V_{i})$ edges from the source, $S$, to vertex $V_{i}\in \mV_{1}$, send $\eout(V_{i})$ information symbols encoded using an MDS code of minimum distance $2t+1$; any extra edges from $S$ to $V_{i}$ may be disregarded. At vertex~$V_{i}$, decode the $\eout(V_{i})$ information symbols and forward them to the sink, $T$. Meanwhile, if $\sum_{V_{i}\in \mV_{2}}\ein(V_{i})> 2t$, encode $\sum_{V_{i}\in \mV_{2}}\ein(V_{i})-2t$ symbols using an MDS code with parameters $$\left[\sum_{V_{i}\in \mV_{2}}\ein(V_{i}),\sum_{V_{i}\in \mV_{2}}\ein(V_{i})-2t,2t+1\right],$$ and send this codeword along the edges from $S$ to the intermediate vertices in $\mV_{2}$. At the intermediate vertices, forward the received symbols; extra outgoing edges may be disregarded. If $\sum_{V_{i}\in \mV_{2}}\ein(V_{i})\leq 2t$, edges to $\mV_{2}$ may be disregarded. At terminal $T$, decode the codeword sent through the vertices in $\mV_{2}$, if one exists, to retrieve $\sum_{V_{i}\in \mV_{2}}\ein(V_{i})-2t$ information symbols. An additional $\sum_{V_{i}\in \mV_{1}}\eout(V_{i})$ symbols were sent faithfully through the vertices of $\mV_{1}$. Altogether, this gives us $$\sum_{V_{i}\in \mV_{1}}\eout(V_{i})+\max\left\{0,\sum_{V_{i}\in \mV_{2}}\ein(V_{i})-2t\right\}$$ information symbols, achieving the Singleton Cut-Set Bound. \end{proof} The results of Section~\ref{sec:mirrored-diamond} demonstrate that the converse of Theorem \ref{thm:2-level} does not hold. Indeed, both intermediate vertices of the Mirrored Diamond Network are damming but its capacity is as predicted by the Singleton Cut-Set Bound; see Proposition~\ref{prop:sym-dia}. \section{Discussion and Future Work} \label{sec:conclusion} We considered the problem of determining the one-shot capacity of communication networks with adversarial noise. In contrast with the typical scenario considered in the context of network coding, we allow the noise to affect only a subset of the network's edges. We defined the Diamond Network and computed its capacity, illustrating that previously known cut-set bounds are not sharp in general. We then studied the family of two-level networks, giving a sufficient condition under which the Singleton Cut-Set Bound is sharp over a sufficiently large alphabet. Natural problems inspired by these results are the complete characterization of two-level networks for which cut-set bounds are sharp, and the development of techniques to derive upper bounds for the capacity of more general adversarial networks. These will be the subject of future work. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} \input{sections/introduction.tex} \section{Related Work} \label{sec:related} \input{sections/related.tex} \section{Problem Formulation} \label{sec:formulation} \input{sections/formulation.tex} \section{Algorithm} \label{sec:algorithm} \input{sections/algorithm.tex} \subsection{Performance Guarantees} \label{sec:theory} \input{sections/theory.tex} \section{Experiments} \label{sec:experiments} \input{sections/experiments.tex} \section{Conclusion} \label{sec:conclusion} \input{sections/conclusion.tex} \subsection{Learning with noisy labels} To begin, we ask, \emph{what is the optimal approach to learn the predictor function $\widehat f$ when for each worker we have $\widehat \pi$, a good estimation of the true confusion matrix $\pi$, and $\widehat q$, an estimate of the prior}? A recent paper, \citet{natarajan2013learning}, proposes minimizing an unbiased loss function: specifically, a weighted sum of the original loss over each possible ground truth label. They provide weights for binary classification where each example is labeled by only one worker. Consider a worker with confusion matrix $\pi$, where $\pi_{y} > 1/2$ and $\pi_{-y} > 1/2$ represent her probability of correctly labeling the examples belonging to class $y$ and $-y$, respectively. Then their weights are $\pi_{-y}/(\pi_{y} + \pi_{-y} - 1)$ for class $y$ and $-(1-\pi_{y})/(\pi_{y} + \pi_{-y}-1)$ for class $-y$. It is evident that their weights become arbitrarily large when the probabilities of correct classification $\pi_{y}$ and $\pi_{-y}$ are close to $1/2$, limiting the method's usefulness in practice. As explained below, for the same scenario, our weights would be $\pi_{y}/(1 + \pi_{y} - \pi_{-y})$ for class $y$ and $(1-\pi_{-y})/(1 + \pi_{y} - \pi_{-y})$ for class $-y$. Inspired by their idea, we propose weighting the loss function according to the posterior distribution of the true label given the observed labels $Z^{(r)}$ and estimates of the confusion matrices of the workers who provided those labels. In particular, we define $\ell_{\widehat\pi,\widehat q}$ to be \begin{eqnarray}\label{eq:ell_pi} \ell_{\widehat\pi,\widehat q}(f(X),Z^{(r)},w^{(r)}) & \coloneqq & \sum_{k \in \mathcal{K}} \mathbb{P}_{\widehat \pi, \widehat q}[Y = k \;|\; Z^{(r)}; w^{(r)}] \;\ell(f(X),Y=k)\,. \end{eqnarray} If the observed labels are uniformly random, then the posterior reduces to the prior $\widehat q$; in particular, for a uniform prior all weights are equal and the loss is identical for all predictor functions $f$. Absent noise, we recover the original loss function. Under the Dawid-Skene model, given the observed noisy labels $Z^{(r)}$, an estimate of the confusion matrices $\widehat \pi$, and an estimate of the prior $\widehat q$, the posterior distribution of the true labels can be computed as follows: \begin{eqnarray} \label{eq:posterior} \mathbb{P}_{\widehat \pi, \widehat q}[Y_i = k \;|\; Z_i^{(r)}; w_i^{(r)}] & = & \frac{\widehat q_{k}\prod_{j=1}^r \Big(\sum_{s \in \mathcal{K}}\mathbb{I}[Z_{ij} =s]\widehat\pi_{ks}^{(w_{ij})}\Big)}{\sum_{k' \in \mathcal{K}} \Big(\widehat q_{k'}\prod_{j=1}^r \Big(\sum_{s \in \mathcal{K}}\mathbb{I}[Z_{ij} =s]\widehat\pi_{k's}^{(w_{ij})}\Big)\Big)}\,, \end{eqnarray} where $\mathbb{I}[.]$ is the indicator function, which takes the value one if the statement inside it is true, and zero otherwise. We give guarantees on the performance of the proposed loss function in Theorem \ref{thm:main}. In practice, it is robust to the noise level and significantly outperforms the unbiased loss function.
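For concreteness, the posterior weights of Equation \eqref{eq:posterior} can be computed as in the following minimal NumPy sketch (the naming and the toy numbers are ours, for illustration only):
\begin{verbatim}
import numpy as np

def posterior(z, w, pi_hat, q_hat):
    # P[Y = k | Z = z; w] of Equation (eq:posterior) for one example:
    # z[j] is the label given by worker w[j], pi_hat[a] is the K x K
    # estimated confusion matrix of worker a, q_hat the estimated prior.
    post = np.array(q_hat, dtype=float)
    for lbl, a in zip(z, w):
        post *= pi_hat[a][:, lbl]   # multiply entrywise by pi_hat[a][k, z_j]
    return post / post.sum()

# Toy example: two workers label one binary example (r = 2, K = 2).
pi_hat = {0: np.array([[0.8, 0.2], [0.3, 0.7]]),   # a fairly good worker
          1: np.array([[0.5, 0.5], [0.5, 0.5]])}   # a spammer
q_hat = [0.5, 0.5]
print(posterior([0, 1], [0, 1], pi_hat, q_hat))
# -> [0.727 0.273]: the spammer's label leaves the posterior unchanged
\end{verbatim}
These posterior probabilities are exactly the weights multiplying the per-class losses in Equation \eqref{eq:ell_pi}.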
Given $\ell_{\widehat\pi,\widehat q}$, we learn the predictor function $\widehat f$ by minimizing the empirical risk \begin{eqnarray} \label{eq:f_hat} \widehat f & \leftarrow & \arg\min_{f \in \mathcal{F}}\; \frac{1}{n}\sum_{i=1}^n \ell_{\widehat\pi,\widehat q} (f(X_i),Z_{i}^{(r)}, w_{i}^{(r)})\,. \end{eqnarray} \subsection{Estimating annotator noise} The next question is: how do we get a good estimate $\widehat\pi$ of the true confusion matrix $\pi$ for each worker? If the redundancy $r$ is sufficiently large, we can employ the EM algorithm. However, in practical applications, redundancy is typically three or five. With so little redundancy, the standard applications of EM are of limited use. In this paper we look to transcend this problem, posing the question: Can we estimate confusion matrices of workers even when there is only one label per example? While this isn't possible in the standard approach, we can overcome this obstacle by incorporating a supervised learning model into the process of assessing worker quality. Under the Dawid-Skene model, the EM algorithm estimates the ground truth labels and the confusion matrices in the following way: It alternately fixes either the ground truth labels or the confusion matrices at their current estimates and updates its estimate of the other by maximizing the likelihood of the observed labels. The alternating maximization begins by initializing the ground truth labels with a majority vote. With only $1$ label per example, EM estimates that all the workers are perfect. We propose using model predictions as estimates of the ground truth labels. Our model is initially trained on the majority vote of the labels. In particular, if the model predictions are $\{t_i\}_{i \in [n]}$, where $t_i \in \mathcal{K}$, then the maximum likelihood estimate of the confusion matrices and the prior distribution is given below. For the $a$-th worker, $\widehat \pi^{(a)}_{ks}$ for $k, s \in \mathcal{K}$, and $\widehat q_k$ for $k \in \mathcal{K}$, we have, \begin{eqnarray} \label{eq:conf_prior_est} \widehat\pi^{(a)}_{ks} & = & \frac{\sum_{i=1}^n \sum_{j=1}^r \mathbb{I}[w_{ij} =a] \mathbb{I}[t_i = k]\mathbb{I}[Z_{ij} = s]}{\sum_{i=1}^n \sum_{j=1}^r \mathbb{I}[w_{ij}=a] \mathbb{I}[t_i = k]}\,, \qquad \widehat q_k = (1/n) \sum_{i=1}^n \mathbb{I}[t_i =k]\,. \end{eqnarray} The estimate is effective when the hypothesis class $\mathcal{F}$ is expressive enough and the learner is robust to noise. Thus the model should, in general, have small training error on correctly labeled examples and large training error on wrongly labeled examples. Consider the case when there is only one label per example. The model will be trained on the raw noisy labels given by the workers. For simplicity, assume that each worker is either a \emph{hammer} (always correct) or a \emph{spammer} (chooses labels uniformly at random). By comparing model predictions with the training labels, we can identify which workers are hammers and which are spammers, as long as each worker labels sufficiently many examples. We expect a hammer to agree with the model more often than a spammer. \subsection{Iterative Algorithm} Building upon the previous two ideas, we present `Model Bootstrapped EM', an iterative algorithm for efficient learning from noisy labels with small redundancy. MBEM takes data, noisy labels, and the corresponding worker IDs, and returns the best predictor function $\widehat f$ in the hypothesis class $\mathcal{F}$.
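The maximum likelihood step in Equation \eqref{eq:conf_prior_est} is equally simple to implement. The following minimal NumPy sketch (our own, hypothetical naming) computes $\widehat\pi$ and $\widehat q$ from the model predictions $\{t_i\}_{i \in [n]}$:
\begin{verbatim}
import numpy as np

def estimate_confusions(t, Z, W, m, K):
    # Plug-in estimates of Equation (eq:conf_prior_est): t[i] is the model
    # prediction for example i; Z[i][j] and W[i][j] are the j-th noisy
    # label of example i and the worker who provided it.
    counts = np.zeros((m, K, K))
    for i in range(len(t)):
        for z, a in zip(Z[i], W[i]):
            counts[a, t[i], z] += 1          # joint counts (worker, t_i, Z_ij)
    row = counts.sum(axis=2, keepdims=True)
    pi_hat = counts / np.maximum(row, 1)     # rows never observed stay zero
    q_hat = np.bincount(t, minlength=K) / len(t)
    return pi_hat, q_hat
\end{verbatim}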
In the first round, we compute the weights of the modified loss function $\ell_{\widehat \pi,\widehat q}$ by using the weighted majority vote. Then we obtain an estimate of the worker confusion matrices $\widehat \pi$ using the maximum likelihood estimator by taking the model predictions as the ground truth labels. In the second round, the weights of the loss function are computed as the posterior probability distribution of the ground truth labels conditioned on the noisy labels and the estimate of the confusion matrices obtained in the previous round. In our experiments, only two rounds are required to achieve substantial improvements over baselines. \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{center} \begin{algorithm} \caption{Model Bootstrapped EM (MBEM)} \begin{algorithmic} \label{algo:algo1} \REQUIRE $\{(X_i, Z_i^{(r)},w_i^{(r)})\}_{i \in[n]}$, $T:$ number of iterations \ENSURE $\widehat f:$ predictor function \STATE \hspace{-1em}\textbf{Initialize posterior distribution using weighted majority vote} \STATE $\mathbb{P}_{\widehat \pi, \widehat q}[Y_i = k \;|\; Z_i^{(r)};w_i^{(r)}] \leftarrow (1/r) \sum_{j=1}^r \mathbb{I}[Z_{ij} = k]$, for $k \in \mathcal{K}, i \in [n]$ \STATE \hspace{-1em}\textbf{Repeat $T$ times:} \STATE \textbf{learn predictor function} $\widehat f$ \STATE $\widehat f \leftarrow \arg\min_{f \in \mathcal{F}}\; \frac{1}{n}\sum_{i=1}^n \sum_{k \in \mathcal{K}} \mathbb{P}_{\widehat \pi,\widehat q}[Y_i = k \;|\; Z_i^{(r)};w_i^{(r)}] \;\ell(f(X_i),Y_i=k)$ \STATE \textbf{predict on training examples} \STATE $t_i \leftarrow \arg\max_{k \in \mathcal{K}} \widehat f(X_i)_k$, for $i \in [n]$ \STATE \textbf{estimate confusion matrices $\widehat\pi$ and prior class distribution $\widehat q$ given $\{t_i\}_{i \in [n]}$} \STATE $\widehat \pi^{(a)} \leftarrow$ Equation \eqref{eq:conf_prior_est}, for $a \in [m]$; $\widehat q \leftarrow$ Equation \eqref{eq:conf_prior_est} \STATE \textbf{estimate label posterior distribution given} $\widehat \pi, \widehat q$ \STATE $\mathbb{P}_{\widehat\pi,\widehat q}[Y_i = k \;|\; Z_i^{(r)};w_i^{(r)}] \leftarrow$ Equation \eqref{eq:posterior}, for $k \in \mathcal{K}, i \in [n]$ \STATE \hspace{-1em}\textbf{Return} $\widehat f$ \end{algorithmic} \end{algorithm} \end{center} \vspace{-10px} \subsection{Proof of Lemma \ref{lem:lem1}} Let $f^* \coloneqq \arg\min_{f \in \mathcal{F}} R_{\ell,\mathcal{D}}(f)$. Let us denote the distribution of $(X,Z^{(r)},w^{(r)})$ by $\mathcal{D}_{W,\pi,r}$. For ease of notation, we denote $\mathcal{D}_{W,\pi,r}$ by $\mathcal{D}_{\pi}$.
Similarly to $R_{\ell,\mathcal{D}}$, the risk of a decision function $f$ with respect to the modified loss function $\ell_{\widehat \pi}$ is characterized by the following quantities: \begin{enumerate} \item $ \ell_{\widehat\pi}$-risk under $\mathcal{D}_{\pi}$: $R_{\ell_{\widehat\pi},\mathcal{D}_{\pi}}(f) \coloneqq \mathbb{E}_{(X,Z^{(r)},w^{(r)})\sim \mathcal{D}_{\pi}}\big[ \ell_{\widehat\pi} (f(X),Z^{(r)}, w^{(r)})\big].$ \item Empirical $ \ell_{\widehat\pi}$-risk on samples: $\widehat R_{\ell_{\widehat\pi},\mathcal{D}_{\pi}}(f) \coloneqq \frac{1}{n}\sum_{i=1}^n \ell_{\widehat\pi} (f(X_i),Z_{i}^{(r)}, w_{i}^{(r)}).$ \end{enumerate} With the above definitions, we have the following: \begin{eqnarray} &&R_{\ell,\mathcal{D}}(\widehat f) - R_{\ell,\mathcal{D}}(f^*) \nonumber\\ & = & R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(\widehat f) - R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(f^*) + \left(R_{\ell,\mathcal{D}}(\widehat f) - R_{\ell_{\widehat\pi},\mathcal{D}_{\pi}}(\widehat f)\right) - \left(R_{\ell,\mathcal{D}}(f^*)- R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(f^*)\right)\nonumber \\ & \leq & R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(\widehat f) - R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(f^*) + 2\beta_{\widehat \pi} \left(R_{\ell,\mathcal{D}}(\widehat f) - R_{\ell,\mathcal{D}}(f^*)\right) \label{eq:pf1}\\ & = & \widehat R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(\widehat f) - \widehat R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(f^*) + \left(R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(\widehat f) - \widehat R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(\widehat f) \right) + \left(\widehat R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(f^*) - R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(f^*) \right) \nonumber\\ && + 2\beta_{\widehat \pi} \left(R_{\ell,\mathcal{D}}(\widehat f) - R_{\ell,\mathcal{D}}(f^*)\right)\nonumber\\ & \leq & 2\max_{f \in \mathcal{F}} \left|\widehat R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(f) - R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(f) \right| + 2\beta_{\widehat \pi} \left(R_{\ell,\mathcal{D}}(\widehat f) - R_{\ell,\mathcal{D}}(f^*)\right) \label{eq:pf31}\\ & \leq & C\Bigg(\sqrt{\frac{V}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\Bigg) + 2\beta_{\widehat \pi} \left(R_{\ell,\mathcal{D}}(\widehat f) - R_{\ell,\mathcal{D}}(f^*)\right) \label{eq:pf3}\,, \end{eqnarray} where \eqref{eq:pf1} follows from Equation \eqref{eq:pf4}, \eqref{eq:pf31} follows from the fact that $\widehat f$ is the minimizer of $\widehat R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}$ as computed in \eqref{eq:f_hat}, and \eqref{eq:pf3} follows from the basic excess-risk bound. Here $V$ is the VC dimension of the hypothesis class $\mathcal{F}$, and $C$ is a universal constant. The following derivation establishes the inequality used in \eqref{eq:pf1}. For binary classification, we denote the two classes by $Y,-Y$.
\begin{eqnarray} && R_{\ell,\mathcal{D}}(\widehat f) - R_{\ell_{\widehat\pi},\mathcal{D}_{\pi}}(\widehat f) - \left(R_{\ell,\mathcal{D}}(f^*)- R_{\ell_{\widehat \pi},\mathcal{D}_{\pi}}(f^*)\right) \nonumber\\ &=& \mathbb{E}_{(X,Y)\sim \mathcal{D}}\left[ \beta_{\widehat \pi}(Y)\left(\left( \ell (\widehat f(X),Y) - \ell (f^*(X),Y) \right) - \left(\ell (\widehat f(X),- Y) - \ell (f^*(X),- Y)\right)\right)\right] \label{eq:pf6}\\ &=& 2\mathbb{E}_{(X,Y)\sim \mathcal{D}}\left[ \beta_{\widehat \pi}(Y)\left( \ell (\widehat f(X),Y) - \ell (f^*(X),Y) \right)\right] \label{eq:pf7}\\ &\leq & 2\beta_{\widehat \pi} \left(R_{\ell,\mathcal{D}}(\widehat f) - R_{\ell,\mathcal{D}}(f^*)\right)\label{eq:pf4}\,, \end{eqnarray} where \eqref{eq:pf6} follows from Equation \eqref{eq:pf8}, \eqref{eq:pf7} follows from the fact that the $0$-$1$ loss function satisfies $\ell(f(X),Y) + \ell(f(X),-Y) = 1$, and \eqref{eq:pf4} follows from the definition of $\beta_{\widehat \pi}$ in Equation \eqref{eq:beta}. When $\ell_{\widehat \pi}$ is computed using the weighted majority vote of the workers, \eqref{eq:pf4} holds with $\beta_{\widehat \pi}$ replaced by $\alpha$, defined in \eqref{eq:alpha}. The following computation establishes the equality used in \eqref{eq:pf6}. Using the notations $\rho_{\widehat \pi}$ and $\tau_{\pi}$, in the following, for any function $f \in \mathcal{F}$, we compute the excess risk due to the bias of the modified loss function $\ell_{\widehat \pi}$. \begin{eqnarray} && R_{\ell,\mathcal{D}}(f) - R_{\ell_{\widehat\pi},\mathcal{D}_{\pi}}(f)\nonumber\\ &=&\mathbb{E}_{(X,Y)\sim \mathcal{D}}\left[\ell(f(X),Y)\right] - \mathbb{E}_{(X,Z^{(r)},w^{(r)})\sim \mathcal{D}_{\pi}}[ \ell_{\widehat \pi} (f(X),Z^{(r)}, w^{(r)})]\nonumber\\ &=& \mathbb{E}_{(X,Y)\sim \mathcal{D}}\left[\ell(f(X),Y)\right] \nonumber\\ && - \mathbb{E}_{(X,Y,w^{(r)})\sim \mathcal{D}_{\pi}}\Bigg[ \sum_{Z^{(r)} \in \{\pm 1\}^{r}}\Big( (1-\rho_{\widehat \pi}(-Y,Z^{(r)}, w^{(r)}))\ell (f(X),Y) \label{eq:pf61}\\ && + \rho_{\widehat \pi}(-Y,Z^{(r)}, w^{(r)})\ell (f(X),-Y)\Big)\tau_{\pi}(Y,Z^{(r)}, w^{(r)}) \Bigg]\nonumber\\ &=& \mathbb{E}_{(X,Y)\sim \mathcal{D}}\left[ \beta_{\widehat \pi}(Y)\left( \ell (f(X),Y) - \ell (f(X),- Y)\right)\right] \label{eq:pf8}\,, \end{eqnarray} where $\beta_{\widehat \pi}(Y)$ is defined in \eqref{eq:beta_y}, and \eqref{eq:pf61} follows from the definition of $\ell_{\widehat \pi}$ given in Equation \eqref{eq:ell_pi}. Observe that when $\ell_{\widehat \pi}$ is computed using the weighted majority vote of the workers, Equation \eqref{eq:pf8} holds with $\beta_{\widehat \pi}(Y)$ replaced by $\alpha(y)$, defined in \eqref{eq:alpha_y}. \subsection{Proof of Lemma \ref{lem:lem2}} Recall that we have \begin{eqnarray} \widehat\pi^{(a)}_{ks} & = & \frac{\sum_{i=1}^n \sum_{j=1}^r \mathbb{I}[w_{ij} =a] \mathbb{I}[t_i = k]\mathbb{I}[Z_{ij} = s]}{\sum_{i=1}^n \sum_{j=1}^r \mathbb{I}[w_{ij}=a] \mathbb{I}[t_i = k]}\,. \end{eqnarray} Let $t_i$ denote the prediction of $\widehat f$ on $X_i$. By the definition of risk, for any $k \in \mathcal{K}$, we have $$\mathbb{P}\Big[\big|\mathbb{I}[Y_i = k] - \mathbb{I}[t_i = k]\big| = 1\Big] = \delta\,.$$ Let $|\mathcal{K}| = K$.
Define, for fixed $a \in [m]$ and $k,s \in \mathcal{K}$, \begin{eqnarray} A & \coloneqq & \sum_{i=1}^n \sum_{j=1}^r \mathbb{I}[w_{ij} =a] \mathbb{I}[t_i = k]\mathbb{I}[Z_{ij} = s]\,, \qquad \bar A \coloneqq \frac{nr \pi_{ks}}{mK}\\ B &\coloneqq & \sum_{i=1}^n \sum_{j=1}^r \mathbb{I}[w_{ij} =a] \mathbb{I}[t_i = k]\,, \qquad \bar B \coloneqq \frac{nr}{m K} \\ C &\coloneqq & \sum_{i=1}^n\sum_{j=1}^r \mathbb{I}[w_{ij}=a]\Big|\mathbb{I}[Y_i = k] - \mathbb{I}[t_i = k]\Big|\,, \qquad \bar C \coloneqq \frac{nr \delta}{m}\,,\\ D & \coloneqq & \sum_{i=1}^n \sum_{j=1}^r \mathbb{I}[w_{ij} =a] \mathbb{I}[Y_i = k]\mathbb{I}[Z_{ij} = s]\,,\\ E & \coloneqq & \sum_{i=1}^n \sum_{j=1}^r \mathbb{I}[w_{ij} =a] \mathbb{I}[Y_i = k]\,. \end{eqnarray} Note that $A,B,C,D,E$ depend upon $a \in [m]$, $k,s \in \mathcal{K}$. However, for ease of notation, we do not include these subscripts. We have \begin{eqnarray}\label{eq:l1} \Big|\widehat\pi_{ks}^{(a)} - \pi_{ks}^{(a)} \Big| \; = \; \frac{|A - B\pi_{ks}|}{B} & = & \frac{|(A-\bar A) - (B-\bar B)\pi_{ks}|}{|\bar B + (B- \bar B)|} \nonumber\\ & \leq & \frac{|A-\bar A| + |(B-\bar B)|\pi_{ks}}{|\bar B| - |B- \bar B|} \end{eqnarray} Next, we have \begin{eqnarray}\label{eq:l2} |A -\bar A| & \leq & |A - D| \;+\; |D - \bar A|\nonumber\\ & \leq & C \;+\; |D - \bar A| \,. \end{eqnarray} Similarly, we have \begin{eqnarray}\label{eq:l3} |B - \bar B| & \leq & |B - E| \;+\; |E - \bar B|\nonumber\\ & \leq & C \;+\; |E - \bar B| \end{eqnarray} Observe that $C$ is a sum of $nr$ i.i.d. Bernoulli random variables with mean $\delta/m$. Using the Chernoff bound, we get that \begin{eqnarray}\label{eq:l4} C & \leq & \frac{nr \delta}{m} + \sqrt{\frac{3nr\delta \log(2mK/\delta_1)}{m}}\,, \end{eqnarray} for all $a \in [m]$, and $k \in \mathcal{K}$ with probability at least $1-\delta_1$. Similarly, $D$ is a sum of $nr$ i.i.d. Bernoulli random variables with mean $\pi_{ks}/(mK)$. Again, using the Chernoff bound, we get that \begin{eqnarray}\label{eq:l5} \big|D - \bar A \big| & \leq & \sqrt{\frac{3nr \pi_{ks} \log(2mK^2/\delta_1)}{mK}}\,, \end{eqnarray} for all $a \in [m]$, $k,s \in \mathcal{K}$ with probability at least $1-\delta_1$. From the bound on $|D -\bar A|$, it follows that \begin{eqnarray}\label{eq:l6} |E - \bar B| & \leq & \sqrt{\frac{3nr \log(2mK^2/\delta_1)}{m}} \end{eqnarray} Collecting Equations \eqref{eq:l1}-\eqref{eq:l6}, we have, for all $a \in [m]$ and $k,s\in \mathcal{K}$, \begin{eqnarray} \Big|\widehat\pi_{ks}^{(a)} - \pi_{ks}^{(a)} \Big| & \leq & \frac{2\delta + 16\sqrt{m\log(2mK^2/\delta_1)/(nr)}}{1/K - \delta - 8\sqrt{m\log(2mK^2/\delta_1)/(nr)}}\,, \end{eqnarray} with probability at least $1-2\delta_1$.
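The behavior captured by this bound is easy to observe numerically. The following Monte Carlo sketch (an informal illustration with assumed parameters, not part of the analysis) instantiates the hammer/spammer setting with a single label per example and prediction error rate $\delta$, and shows that the plug-in estimate of $\pi^{(a)}_{00}$ lands within an $O(\delta)$ error of its target, as the lemma predicts:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, K, delta = 30000, 10, 2, 0.1
y = rng.integers(0, K, size=n)                    # ground truth labels
t = np.where(rng.random(n) < delta, 1 - y, y)     # noisy model predictions
hammer = rng.random(m) < 0.5                      # worker types
w = rng.integers(0, m, size=n)                    # one worker per example
z = np.where(hammer[w], y, rng.integers(0, K, size=n))

for a in range(m):
    sel = (w == a) & (t == 0)
    est = np.mean(z[sel] == 0)        # plug-in estimate of pi^(a)_{00}
    true_p = 1.0 if hammer[a] else 0.5
    # hammers land near 1 - delta: the O(delta) bias matches the bound
    print(f"worker {a:2d}: est {est:.2f}, true {true_p:.2f}")
\end{verbatim}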
\subsection{Polynomial-time Approximation Scheme} We start by presenting our PTAS. At a high level, our PTAS is similar to that of Bansal et al.: our algorithm uses brute force to try all possible values of $\pi(1), \dots, \pi(\exp(\tilde{O}(1/\epsilon)))$. Once these are fixed, we solve the remaining problem using linear programming (LP). We use the same LP as Bansal et al., except with a slightly more refined rounding procedure, which allows us to achieve a better approximation guarantee. The remainder of this section is organized as follows. In \Cref{subsec:lp-rounding}, we present our LP rounding algorithm and its guarantees. Then, we show how to use it to yield our PTAS in \Cref{subsec:ptas-ranking}. \subsubsection{Improved LP Rounding} \label{subsec:lp-rounding} For convenience in the analysis below, let us also define a more generic objective function where $\frac{1}{\log(t_\pi(S) + 1)}$ in~\Cref{eq:dcg-def} can be replaced by any non-increasing function $f: [n] \to (0, 1]$: \begin{align*} \DCG^f_{\mathcal{S}, \mathbf{k}}(\pi) := \sum_{S \in \mathcal{S}} f(t_{\pi}(S)). \end{align*} The main result of this subsection is the following polynomial-time LP rounding algorithm for the above general version of DCG: \begin{lemma} \label{lem:rounding-final} There exists an absolute constant $C$ such that for any $\alpha \in (0, 0.5)$ the following holds: there is a polynomial-time algorithm that computes a ranking with expected DCG at least $(1 - \alpha) \cdot \tau_{f, \alpha}$ times that of the optimum, where $$\tau_{f, \alpha} := \min_{t \in [n]} \frac{f\left(\frac{C \log(1/\alpha)}{\alpha} \cdot \frac{t}{f(t)}\right)}{f(t)}.$$ \end{lemma} Informally speaking, the term $\tau_{f, \alpha}$ measures ``how slowly $f$ decays''. In the next subsection, once we fix the first $u$ elements of the ranking, $f$ will become $f(t) := 1/\log(t + u)$, which is ``slowly decaying'' when $u$ is sufficiently large. This allows us to ensure that the guarantee in \Cref{lem:rounding-final} yields a $(1 - O(\epsilon))$-approximation as desired. \paragraph{LP Formulation.} To prove \Cref{lem:rounding-final}, we use the same knapsack constraint-enhanced LP as in~\cite{BansalJKN10}, stated below. Note that the number of knapsack constraints can be super-polynomial. However, it is known that such an LP can be solved in polynomial time; see e.g.~\cite[Section 3.1]{BansalGK10} for more detail. \begin{align*} &\text{Maximize} & \sum_{S \in \mathcal{S}} \sum_{t \in [n]} (y_{S, t} - y_{S, t - 1}) \cdot f(t) & & \\ &\text{subject to} &\sum_{e \in [n]} x_{e,t} = 1 & &\forall t \in [n] \\ & &\sum_{t \in [n]} x_{e,t} = 1 & &\forall e \in [n] \\ & &\sum_{e \in S \setminus A} \sum_{t' < t} x_{e,t'} \geq (k_S - |A|) \cdot y_{S,t} & &\forall S \in \mathcal{S}, A \subseteq S, t \in [n] \\ & &y_{S, t} \geq y_{S, t - 1} & &\forall S \in \mathcal{S}, t \in \{2, \dots, n\} \\ & &x_{e,t}, y_{S,t} \in [0, 1] & &\forall e, t \in [n], S \in \mathcal{S}. \end{align*} \paragraph{Rounding Algorithm.} Let $\gamma \in (0, 0.1)$ be a parameter to be chosen later. Our rounding algorithm works as follows: \begin{enumerate} \item $\pi \leftarrow \emptyset$ \item For $i = 1, \dots, \lceil\log n\rceil$ do: \begin{enumerate} \item Let $t_i = \min\{n, 2^i\}$. \item Let $z_{e, i} = \sum_{t \leq t_i} x^*_{e, t}$ and $p_{e, i} = \min\{1, \frac{z_{e, i}}{\gamma \cdot f(t_i)}\}$ for all $e \in [n]$. \item Let $A_i$ be the random set in which each $e \in [n]$ is independently included with probability $p_{e, i}$.
\end{enumerate} \end{enumerate} Finally, our permutation $\pi$ is defined by adding elements from $A_1, \dots, A_{\lceil \log n\rceil}$ in that order, where the order within each $A_i$ can be arbitrary and we do not add an element if it already appears in the permutation. Once again, we remark that our algorithm closely follows that of~\cite{BansalJKN10}, except that Bansal et al. simply chose their $p_{e, i}$ to be $\min\{1, O(\log^2 n) \cdot z_{e, i}\}$, whereas our $p_{e, i}$ is a more delicate $\min\{1, \frac{z_{e, i}}{\gamma \cdot f(t_i)}\}$. This allows our analysis below to produce a better approximation ratio. \paragraph{Analysis.} We will now proceed to analyze our proposed randomized rounding procedure. Let $\eta \in (0, 0.1)$ be a parameter to be chosen later, and let $(\mathbf{x}^*, \mathbf{y}^*)$ denote an optimal solution to the LP. For each $S$, let $t^*(S)$ be the largest positive integer $t^*$ such that \begin{align} \label{eq:tstar-def} y^*_{S,t^* - 1} \leq \eta \cdot f(t^*). \end{align} We start with the following lemma, which is a refinement of~\cite[Lemma 1]{BansalJKN10}. \begin{lemma} \label{lem:opt-sep-bound} $\OPT \leq (1 + \eta) \cdot \sum_{S \in \mathcal{S}} f(t^*(S))$. \end{lemma} \begin{proof} We have \begin{align*} \OPT &\leq \sum_{S \in \mathcal{S}} \sum_{t \in [n]} (y^*_{S, t} - y^*_{S, t - 1}) \cdot f(t) \\ &= \sum_{S \in \mathcal{S}} \left(\sum_{t=1}^{t^*(S)-1} (y^*_{S, t} - y^*_{S, t - 1}) \cdot f(t) + \sum_{t=t^*(S)}^{n} (y^*_{S, t} - y^*_{S, t - 1}) \cdot f(t)\right) \\ &\leq \sum_{S \in \mathcal{S}} \left(\sum_{t=1}^{t^*(S)-1} (y^*_{S, t} - y^*_{S, t - 1})+ \sum_{t=t^*(S)}^{n} (y^*_{S, t} - y^*_{S, t - 1}) \cdot f(t^*(S))\right) \\ &\leq \sum_{S \in \mathcal{S}} \left(y^*_{S, t^*(S) - 1} + f(t^*(S))\right) \\ &\overset{\eqref{eq:tstar-def}}{\leq} \sum_{S \in \mathcal{S}} (1 + \eta) \cdot f(t^*(S)). \qedhere \end{align*} \end{proof} Next, we show via standard concentration inequalities that the sets $A_i$ are small with large probability. \begin{lemma} With probability $1 - 2\exp\left(-\frac{1}{3\gamma}\right)$, we have $|A_i| \leq \frac{2 t_i}{\gamma f(t_i)}$ for all $i \in [\lceil \log n\rceil]$. \end{lemma} \begin{proof} Notice that $\sum_{e \in [n]} p_{e, i} \leq \frac{\sum_{e \in [n]} z_{e, i}}{\gamma f(t_i)} = \frac{t_i}{\gamma f(t_i)}$. As a result, by the Chernoff bound (\Cref{lem:chernoff}), we have \begin{align*} \Pr\left[|A_i| > \frac{2 t_i}{\gamma f(t_i)}\right] \leq \exp\left(-\frac{t_i}{3\gamma f(t_i)}\right) \leq \exp\left(-\frac{t_i}{3\gamma}\right), \end{align*} where the last inequality uses $f(t_i) \leq 1$. By the union bound, we thus have $|A_i| \leq \frac{2 t_i}{\gamma f(t_i)}$ for all $i \in [\lceil \log n\rceil]$ with probability at least \begin{align*} 1 - \sum_{i \in [\lceil \log n\rceil]} \exp\left(-\frac{t_i}{3\gamma}\right) \geq 1 - 2\exp\left(-\frac{1}{3\gamma}\right). \end{align*} \end{proof} Let $i^*(S)$ denote the smallest $i$ such that $t_i \geq t^*(S)$. We now bound the probability that $S$ is covered ($k_S$ times) by the end of the $i^*(S)$-th iteration of the algorithm. Our bound is stated below. Unlike the analysis of~\cite{BansalJKN10}, which yields a bound of $1 - o(1/n)$, our bound does not hold with high probability. Such a strong bound is not necessary here: since we are dealing with a maximization problem, a constant success probability already suffices to bound the expected DCG. \begin{lemma} Assume that $\eta \geq 2\gamma$.
For each $S \in \mathcal{S}$, we have $t_{\pi}(S) \leq |A_1| + \cdots + |A_{i^*(S)}|$ with probability $1 - \exp\left(-\frac{\eta}{8\gamma}\right)$. \end{lemma} \begin{proof} It suffices to show that at least $k_S$ elements of $S$ are selected in $A_{i^*(S)}$. Let $S_g$ denote the set of elements $e \in S$ for which $p_{e, i^*(S)} = 1$. If $|S_g| \geq k_S$, then we are done. Otherwise, from the knapsack constraints, we have \begin{align*} \sum_{e \in S \setminus S_g} z_{e, i^*(S)} \geq (k_S - |S_g|) y^*_{S, t_{i^*(S)}} \geq (k_S - |S_g|) y^*_{S, t^*(S)} &\geq \eta \cdot f(t^*(S)) \cdot (k_S - |S_g|) \\ &\geq \eta \cdot f(t_{i^*(S)}) \cdot (k_S - |S_g|), \end{align*} where the third inequality follows from our choice of $t^*(S)$. This implies that \begin{align*} \sum_{e \in S \setminus S_g} p_{e, i^*(S)} \geq \eta / \gamma \cdot (k_S - |S_g|). \end{align*} Recall that $\eta / \gamma \geq 2$. This means that the probability that at least $k_S$ elements of $S$ are selected in $A_{i^*(S)}$ is at least \begin{align*} &1 - \Pr[|(S \setminus S_g) \cap A_{i^*(S)}| \leq 0.5\eta / \gamma \cdot (k_S - |S_g|)] \\ &\geq 1 - \exp\left(-\frac{1}{8} \cdot \eta / \gamma \cdot (k_S - |S_g|)\right) \\ &\geq 1 - \exp\left(-\frac{\eta}{8\gamma}\right), \end{align*} where the first inequality follows from the Chernoff bound. \end{proof} Applying the union bound to the two previous lemmas, we immediately arrive at the following: \begin{lemma} \label{lem:term-by-term} Assume that $\eta \geq 2\gamma$. For all $S \in \mathcal{S}$, we have $$\mathbb{E}_\pi[f(t_\pi(S))] \geq \left(1 - 2\exp\left(-\frac{1}{3\gamma}\right) - \exp\left(-\frac{\eta}{8\gamma}\right)\right) \cdot f\left(\frac{8 t^*(S)}{\gamma f(t^*(S))}\right).$$ \end{lemma} Finally, combining~\Cref{lem:opt-sep-bound,lem:term-by-term} and selecting $\eta = 2\alpha, \gamma = O(\eta / \log(1/\eta))$ yields \Cref{lem:rounding-final}. \subsubsection{From LP Rounding to PTAS} \label{subsec:ptas-ranking} As stated earlier, we may now use brute force to try all possible values of the first few elements in the ranking and then use our LP rounding to arrive at the PTAS: \begin{proof}[Proof of \Cref{thm:main-ptas}] For any $\epsilon < 0.1$, we use brute force for the first $u = (4C/\epsilon)^{100/\epsilon}$ elements and then use~\Cref{lem:rounding-final} on the remaining instance but with $f(t) := \frac{1}{\log(t + u)}$. The expected approximation ratio we have is at least \begin{align*} &(1 - 0.5\epsilon) \cdot \tau_{f, 0.5\epsilon} \\ &\geq (1 - 0.5\epsilon) \cdot \min_{t \in [n]} f\left(\frac{4C \log(1/\epsilon)}{\epsilon} \cdot \frac{t}{f(t)}\right) / f(t) \\ &= (1 - 0.5\epsilon) \cdot \min_{t \in [n]} \frac{\log(t + u)}{\log\left(\frac{4C \log(1/\epsilon)}{\epsilon} \cdot \frac{t}{f(t)} + u\right)} \\ &\geq (1 - 0.5\epsilon) \cdot \min_{t \in [n]} \frac{\log(t + u)}{\log\left(\frac{4C \log(1/\epsilon)}{\epsilon} \cdot (t + u)\log(t + u)\right)} \\ &= (1 - 0.5\epsilon) \cdot \min_{t \in [n]} \frac{1}{1 + \frac{\log\left(\frac{4C \log(1/\epsilon)}{\epsilon}\right)}{\log(t + u)} + \frac{\log\log(t + u)}{\log(t + u)}} \\ &\geq (1 - 0.5\epsilon) \cdot \frac{1}{1 + \frac{\log\left(\frac{4C \log(1/\epsilon)}{\epsilon}\right)}{\log(u)} + \frac{\log\log(u)}{\log(u)}} \\ &\geq (1 - 0.5\epsilon) \cdot \frac{1}{1 + 0.1\epsilon + 0.1\epsilon} \\ &\geq 1 - \epsilon, \end{align*} as desired. \end{proof} \subsection{Running Time Lower Bound} To prove our running time lower bound, we will reduce from the \emph{Maximum $k$-Coverage} problem.
Recall that in Maximum $k$-Coverage, we are given a collection $\mathcal{T} \subseteq 2^{[M]}$ of subsets of $[M]$ and an integer $k$; the goal is to find $T^*_1, \dots, T^*_k \in \mathcal{T}$ that maximize $|T^*_1 \cup \cdots \cup T^*_k|$. We write $\Cov(\mathcal{T}, k)$ to denote this optimum. Furthermore, we say that a Maximum $k$-Coverage instance is \emph{regular} if $|T| = M/k$ for all $T \in \mathcal{T}$. Finally, we use $N$ to denote $|\mathcal{T}| \cdot M$, which upper bounds the ``size'' of the problem. Manurangsi~\cite{Manurangsi20} showed the following lower bound for this problem: \begin{theorem}[\cite{Manurangsi20}] \label{thm:max-coverage-lb} Assuming the Gap Exponential Time Hypothesis (Gap-ETH), for any constant $\delta > 0$, there is no $N^{o(k)}$-time algorithm that can, given a regular instance $(\mathcal{T}, k)$, distinguish between the following two cases: \begin{itemize} \item (YES) $\Cov(\mathcal{T}, k) \geq M$. \item (NO) $\Cov(\mathcal{T}, k) \leq (1 - 1/e + \delta) M$. \end{itemize} \end{theorem} \begin{proof}[Proof of \Cref{thm:dcg-lb}] Fix $\delta = 0.1$. We reduce from the Maximum $k$-Coverage problem. Suppose that $(\mathcal{T}, k)$ is a regular Maximum $k$-Coverage instance; we assume w.l.o.g. that $k$ is divisible by 10. We construct the instance $(\mathcal{S}, \{k_S\}_{S \in \mathcal{S}})$ of the DCG maximization as follows: \begin{itemize} \item Let $n = |\mathcal{T}|$ where we associate each $j \in [n]$ with $T_j \in \mathcal{T}$. \item Let $\mathcal{S} = \{S_1, \dots, S_M\}$ where $S_i = \{j \in [n] \mid i \in T_j\}$. \item Let $k_S = 1$ for all $S \in \mathcal{S}$. \end{itemize} In the YES case, let $T_{j_1}, \dots, T_{j_k}$ be such that $|T_{j_1} \cup \cdots \cup T_{j_k}| = M$. Let $\pi^*: [n] \to [n]$ be any permutation such that $\pi^*(\ell) = j_\ell$ for all $\ell \in [k]$. From the regularity of $(\mathcal{T}, k)$, for every $i \in [k]$ there are exactly $q := M/k$ sets $S \in \mathcal{S}$ such that $t_{\pi^*}(S) = i$. Therefore, we have \begin{align*} \DCG_{\mathcal{S}, \mathbf{k}}(\pi^*) &= \sum_{i \in [k]} \frac{M}{k} \cdot \frac{1}{\log(i + 1)}. \end{align*} Let $\OPT^*$ denote the RHS quantity. Notice that \begin{align} \OPT^* \geq \frac{M}{\log(k+1)}. \end{align} In the NO case, consider any permutation $\pi: [n] \to [n]$. Let $t_i$ denote the $i$-th smallest value in the multiset $\{t_{\pi}(S)\}_{S \in \mathcal{S}}$. Regularity of $(\mathcal{T}, k)$ implies that \begin{align} \label{eq:t-gap} t_i \geq t_{i - q} + 1 \end{align} for all $i > q$. This in turn implies that \begin{align} \label{eq:t-from-size} t_i \geq \left\lceil i/q \right\rceil. \end{align} Furthermore, $\Cov(\mathcal{T}, k) \leq (1 - 1/e + \delta)M \leq 0.8M$ implies that \begin{align*} t_{0.8M} > k. \end{align*} Then, applying~\eqref{eq:t-gap} to the above, we have \begin{align} \label{eq:t-from-unconver} t_{0.9M} \geq t_{0.8M} + \left\lfloor\frac{0.1M}{q}\right\rfloor \geq k + 0.1k = 1.1k.
\end{align} With the above notation, we may write $\DCG_{\mathcal{S}, \mathbf{k}}(\pi) - \OPT^*$ as \begin{align*} \DCG_{\mathcal{S}, \mathbf{k}}(\pi) - \OPT^* &= \sum_{i=1}^M \frac{1}{\log(t_i + 1)} - \sum_{i=1}^M \frac{1}{\log(\lceil i/q \rceil + 1)} \\ &\overset{\eqref{eq:t-from-size}}{\leq} \sum_{i=0.9M}^M \left(\frac{1}{\log(t_i + 1)} - \frac{1}{\log(\lceil i/q \rceil + 1)}\right) \\ &\overset{\eqref{eq:t-from-unconver}}{\leq} \sum_{i=0.9M}^M \left(\frac{1}{\log(1.1k + 1)} - \frac{1}{\log(\lceil i/q \rceil + 1)}\right) \\ &\leq \sum_{i=0.9M}^M \left(\frac{1}{\log(1.1k + 1)} - \frac{1}{\log(k + 1)}\right) \\ &= 0.1M \cdot \left(\frac{1}{\log(1.1k + 1)} - \frac{1}{\log(k + 1)}\right) \\ &= -\Theta\left(\frac{M}{\log^2 k}\right). \end{align*} Finally, observe also that \begin{align*} \OPT^* = \frac{M}{k} \cdot \sum_{i\in [k]} \frac{1}{\log(i+1)} = \frac{M}{k} \Theta\left(\frac{k}{\log k}\right) = \Theta\left(\frac{M}{\log k}\right). \end{align*} Combining the above two inequalities, we have \begin{align*} \DCG_{\mathcal{S}, \mathbf{k}}(\pi) \leq \left(1 - \Theta\left(\frac{1}{\log k}\right)\right) \cdot \OPT^*. \end{align*} Now, suppose that there is a PTAS for maximizing DCG that runs in time $f(\epsilon) \cdot (nm)^{2^{o(1/\epsilon)}}$. If we run the algorithm with $\epsilon = \gamma / \log k$ where $\gamma > 0$ is a sufficiently small constant, then we can distinguish between the YES case and the NO case in time $f(\gamma/\log k) \cdot (nm)^{2^{o(\log k)}} \leq f(\gamma/\log k) \cdot (nm)^{o(k)} = g(k) \cdot N^{o(k)}$, which, from \Cref{thm:max-coverage-lb}, violates Gap-ETH. \end{proof} \subsection{Additional Preliminaries: Approximation Algorithms for Densest $k$-Subgraph} \subsection{A Structural Lemma} Henceforth, we write $\disp(S, T)$ to denote $\sum_{u \in S, v \in T} d(u, v)$ and $\disp(u, T)$ as a shorthand for $\disp(\{u\}, T)$. Furthermore, we use $\cB{u}{D}$ to denote $\{z \in U \mid d(z, u) \leq D\}$ and let $\bcB{u}{D} := U \setminus \cB{u}{D}$. We now formalize our structural lemma. It gives a lower bound on the objective based on a vertex in the optimal solution and another vertex \emph{not} in the optimal solution. Later on, by guessing these two vertices, we can reduce to \textsc{DkS}\ while avoiding the ``small optimum'' issue. \begin{lemma} \label{lem:max-dispersion-structural} Let $S^{\OPT}$ be any optimal solution of Max-Sum Dispersion and let $u^{\min}$ be the vertex in $S^{\OPT}$ that minimizes $\disp(u^{\min}, S^{\OPT})$. Furthermore, let $v$ be any vertex \emph{not} in $S^{\OPT}$ and let $\Delta = d(u^{\min}, v)$. Then, we have \begin{align*} \disp(S^{\OPT}) \geq \frac{p(p - 1)\Delta}{16}. \end{align*} \end{lemma} \begin{proof}[Proof of \Cref{lem:max-dispersion-structural}] Let $S^{\OPT}_{\text{close}} := S^{\OPT} \cap \cB{u^{\min}}{0.5\Delta}$. Consider two cases, based on the size of $S^{\OPT}_{\text{close}}$: \begin{itemize} \item Case I: $|S^{\OPT}_{\text{close}}| \leq p / 2$. In this case, we have \begin{align*} \disp(u^{\min}, S^{\OPT}) \geq \disp(u^{\min}, S^{\OPT} \setminus S^{\OPT}_{\text{close}}) \geq (p/2)(\Delta/2) = \Delta p / 4. \end{align*} Furthermore, by our definition of $u^{\min}$, we have \begin{align*} \disp(S^{\OPT}) = \frac{1}{2} \sum_{u \in S^{\OPT}} \disp(u, S^{\OPT}) \geq \frac{p}{2} \disp(u^{\min}, S^{\OPT}). \end{align*} Combining the two inequalities, we have $\disp(S^{\OPT}) \geq p^2\Delta / 8$. \item Case II: $|S^{\OPT}_{\text{close}}| > p / 2$.
In this case, since $S^{\OPT}$ is an optimal solution, replacing any $z \in S^{\OPT}_{\text{close}}$ with $v$ must not increase the solution value, i.e., \begin{align*} \disp(z, S^{\OPT}) &\geq \disp(v, S^{\OPT} \setminus \{z\}) \\ &\geq \disp(v, S^{\OPT}_{\text{close}} \setminus \{z\}) \\ &\geq ((p - 1)/2)(0.5\Delta), \end{align*} where the second inequality uses the fact that for any $z' \in S^{\OPT}_{\text{close}}$ we have $d(v, z') \geq d(u^{\min}, v) - d(u^{\min}, z') \geq \Delta - 0.5\Delta$. From this, we once again have \begin{align*} \disp(S^{\OPT}) = \frac{1}{2} \sum_{u \in S^{\OPT}} \disp(u, S^{\OPT}) \geq \frac{1}{2} \sum_{z \in S^{\OPT}_{\text{close}}} \disp(z, S^{\OPT}) &\geq |S^{\OPT}_{\text{close}}| \cdot \frac{(p - 1)\Delta}{8} \\ &> \frac{p(p - 1)\Delta}{16}, \end{align*} where the last inequality follows from our assumption of this case. \qedhere \end{itemize} \end{proof} \subsection{QPTAS for Max-Sum Dispersion} We now present our QPTAS, which simply guesses $u^{\min}$ and $v = \argmax_{z \notin S^{\OPT}} d(z, u^{\min})$ and then reduces the problem to \textsc{DkS}. By definition of $v$, if we let $\Delta = d(u^{\min}, v)$, every point outside $\cB{u^{\min}}{\Delta}$ must be in $S^{\OPT}$. The actual reduction to \textsc{DkS}\ is slightly more complicated than that described at the beginning of this section. Specifically, among the points of $\bcB{u^{\min}}{\Delta}$, which surely belong to $S^{\OPT}$, we ignore all points outside $\cB{u^{\min}}{ 20\Delta/\epsilon}$ (i.e., they do not appear in the \textsc{DkS}\ instance) and we let $\cB{u^{\min}}{20\Delta/\epsilon} \setminus \cB{u^{\min}}{\Delta}$ be the ``must pick'' part. Ignoring the former can be done because the contribution to the objective from those points can be approximated to within $(1 \pm O(\epsilon))$ regardless of the points picked in the ball $\cB{u^{\min}}{\Delta}$. This is not true for the latter, which means that we need to include them in our \textsc{DkS}\ instance. \begin{proof}[Proof of \Cref{thm:qptas-dispersion}] Our algorithm works as follows: \begin{enumerate} \item For every distinct $u, v \in U$ do: \begin{enumerate} \item Let $\Delta := d(u, v)$ and $\Delta^* = 20 \Delta / \epsilon$. \item If $|\bcB{u}{\Delta}| \geq p$, then skip the following steps and continue to the next pair $u, v$. \item Otherwise, create a \textsc{DkS}\ instance where $V := \cB{u}{\Delta^*}, I := V \setminus \cB{u}{\Delta}$, $k = p - |\bcB{u}{\Delta^*}|$ and $w$ is defined as $w(\{y, z\}) := 0.5 d(y, z) / \Delta^*$ for all $y, z \in V$. \item Use the additive QPTAS from \Cref{thm:qptas-dks} to solve the above instance to within an additive error of $\epsilon' := 0.00005\epsilon^2$. Let $T$ be the solution found. \item Finally, let $S^{u, v} := T \cup \bcB{u}{\Delta^*}$. \end{enumerate} \item Output the best solution among $S^{u, v}$ considered. \end{enumerate} It is obvious that the running time is dominated by the running time of the QPTAS, which takes $n^{O(\log n / (\epsilon')^2)} = n^{O(\log n / \epsilon^4)}$ as desired. Next, we show that the algorithm indeed yields a $(1 - \epsilon)$-approximation. To do this, let us consider $S^{\OPT}, u^{\min}$ as defined in \Cref{lem:max-dispersion-structural}, and let $u = u^{\min}, v := \argmax_{z \notin S^{\OPT}} d(u, z)$. Let $T$ be the solution found by the \textsc{DkS}\ algorithm for this $u, v$ and let $T' := T \setminus I$.
We have \begin{align} &\disp(S^{u, v}) \nonumber \\ &= \disp(\bcB{u}{\Delta^*}) + \disp(\bcB{u}{\Delta^*}, T) + \disp(T) \nonumber \\ &= \disp(\bcB{u}{\Delta^*}) + \disp(\bcB{u}{\Delta^*}, I) + \disp(\bcB{u}{\Delta^*}, T') + \disp(T). \label{eq:current-s-decompose} \end{align} Similarly, letting $S := S^{\OPT} \cap \cB{u}{\Delta^*}$ and $S' := S \setminus I$, we have \begin{align} &\disp(S^{\OPT}) \nonumber \\ &= \disp(\bcB{u}{\Delta^*}) + \disp(\bcB{u}{\Delta^*}, I) + \disp(\bcB{u}{\Delta^*}, S') + \disp(S). \label{eq:optimal-s-decompose} \end{align} Now, observe from the definition of the \textsc{DkS}\ instance (for this $u, v$) that for any $J$ such that $I \subseteq J \subseteq V$, we have \begin{align*} \den(J) = \frac{1}{k(k - 1)/2} \cdot \frac{0.5}{\Delta^*} \disp(J). \end{align*} The additive approximation guarantee from \Cref{thm:qptas-dks} implies that $\den(T) \geq \den(S) - \epsilon'$. Using the above equality, we can rewrite this guarantee as \begin{align} \label{eq:dks-qptas-guarantee} \disp(S) - \disp(T) \leq \epsilon' \cdot \Delta^* \cdot k(k - 1). \end{align} Taking the difference between~\Cref{eq:optimal-s-decompose} and~\Cref{eq:current-s-decompose} and applying~\Cref{eq:dks-qptas-guarantee}, we have \begin{align*} \disp(S^{\OPT}) - \disp(S^{u, v}) &\leq \disp(\bcB{u}{\Delta^*}, S') - \disp(\bcB{u}{\Delta^*}, T') + \epsilon' \cdot \Delta^* \cdot k(k - 1) \\ (\text{Our choice of } \epsilon') &\leq \disp(\bcB{u}{\Delta^*}, S') - \disp(\bcB{u}{\Delta^*}, T') + 0.001\epsilon \Delta \cdot p(p - 1) \\ (\text{\Cref{lem:max-dispersion-structural}}) &\leq \disp(\bcB{u}{\Delta^*}, S') - \disp(\bcB{u}{\Delta^*}, T') + 0.1\epsilon \disp(S^{\OPT}). \end{align*} Now, since $|S'| = |T'| \leq p$ and $S', T' \subseteq \cB{u}{\Delta}$, we have \begin{align*} \disp(\bcB{u}{\Delta^*}, S') - \disp(\bcB{u}{\Delta^*}, T') &\leq |\bcB{u}{\Delta^*}| \cdot |S'| \cdot ((\Delta^* + \Delta) - (\Delta^* - \Delta)) \\ &\leq 2 |\bcB{u}{\Delta^*}| \cdot |S'| \cdot \Delta \\ (\text{Our choice of } \Delta^*) &\leq 0.1\epsilon \cdot |\bcB{u}{\Delta^*}| \cdot |S'| \cdot (\Delta^* - \Delta) \\ &\leq 0.1\epsilon \cdot \disp(\bcB{u}{\Delta^*}, S') \\ &\leq 0.1\epsilon \cdot \disp(S^{\OPT}). \end{align*} Combining the above two inequalities, we get $\disp(S^{u, v}) \geq (1 - 0.2\epsilon) \cdot \disp(S^{\OPT})$, as desired. \end{proof} \subsection{Approximating Densest Subgraph and Submodular Function} \label{subsec:submodular-dks} \input{submodular-dks} \subsection{From Submodular \textsc{DkS}\ to Max-Sum Diversification} \ificalp Having provided an approximation algorithm for Submodular \textsc{DkS}, we can use it to approximate Max-Sum Diversification via a similar approach to the reduction from Max-Sum Dispersion to \textsc{DkS}\ in the previous section. In particular, we can prove a structural lemma for Max-Sum Diversification that is analogous to \Cref{lem:max-dispersion-structural} for Max-Sum Dispersion. We can then use a reduction nearly identical to the one in the proof of \Cref{thm:qptas-dispersion} to arrive at \Cref{thm:diversification-main-detailed}. The full details are deferred to the appendix. \fi \iffullversion Having provided an approximation algorithm for Submodular \textsc{DkS}, we now turn our attention back to how to use it to approximate Max-Sum Diversification. \subsubsection{A Structural Lemma} We start by proving a structural lemma for Max-Sum Diversification that is analogous to \Cref{lem:max-dispersion-structural} for Max-Sum Dispersion.
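Before stating the lemma, we record the guess-and-reduce outer loop shared by the algorithm of \Cref{thm:qptas-dispersion} and the one given below. The following Python sketch is schematic and assumes a placeholder \texttt{dks\_qptas} for the additive QPTAS of \Cref{thm:qptas-dks} (for Max-Sum Diversification one would substitute the Submodular \textsc{DkS}\ algorithm of \Cref{thm:submodular-dks}); \texttt{dist} stands for the metric $d$:
\begin{verbatim}
def max_sum_dispersion(U, dist, p, eps, dks_qptas):
    # Guess u (playing the role of u_min) and v; reduce to DkS;
    # return the best candidate set found over all guesses.
    best, best_val = None, -1.0
    for u in U:
        for v in U:
            if u == v:
                continue
            delta = dist(u, v)
            delta_star = 20 * delta / eps
            if sum(dist(u, z) > delta for z in U) >= p:
                continue                               # infeasible guess
            V = [z for z in U if dist(u, z) <= delta_star]
            I = [z for z in V if dist(u, z) > delta]   # "must pick" part
            outside = [z for z in U if dist(u, z) > delta_star]
            k = p - len(outside)
            T = dks_qptas(V, I, k, additive_err=0.00005 * eps ** 2)
            S = set(T) | set(outside)
            val = sum(dist(a, b) for a in S for b in S) / 2
            if val > best_val:
                best, best_val = S, val
    return best
\end{verbatim}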
\begin{lemma} \label{lem:max-diversification-structural} Let $S^{\OPT}$ be any optimal solution of Max-Sum Diversification and let $u^{\min}$ be the vertex in $S^{\OPT}$ that minimizes $\disp(u^{\min}, S^{\OPT})$. Furthermore, let $v$ be any vertex \emph{not} in $S^{\OPT}$ and let $\Delta = d(u^{\min}, v)$. Then, we have \begin{align*} \dive(S^{\OPT}) \geq \frac{p(p - 1)\Delta}{16}. \end{align*} \end{lemma} \begin{proof}[Proof of \Cref{lem:max-diversification-structural}] Let $S^{\OPT}_{\text{close}} := S^{\OPT} \cap \cB{u^{\min}}{0.5\Delta}$. Consider two cases, based on the size of $S^{\OPT}_{\text{close}}$: \begin{itemize} \item Case I: $|S^{\OPT}_{\text{close}}| \leq p / 2$. This is similar to the first case in the proof of \Cref{lem:max-dispersion-structural}: we have \begin{align*} \disp(u^{\min}, S^{\OPT}) \geq \disp(u^{\min}, S^{\OPT} \setminus S^{\OPT}_{\text{close}}) \geq (p/2)(\Delta/2) = \Delta p / 4. \end{align*} Furthermore, by our definition of $u^{\min}$, we have \begin{align*} \disp(S^{\OPT}) = \frac{1}{2} \sum_{u \in S^{\OPT}} \disp(u, S^{\OPT}) \geq \frac{p}{2} \disp(u^{\min}, S^{\OPT}). \end{align*} Combining the two inequalities, we have $\dive(S^{\OPT}) \geq \disp(S^{\OPT}) \geq p^2\Delta / 8$. \item Case II: $|S^{\OPT}_{\text{close}}| > p / 2$. In this case, since $S^{\OPT}$ is an optimal solution, replacing any $z \in S^{\OPT}_{\text{close}}$ with $v$ must not increase the solution value, i.e. \begin{align*} \left[f(S^{\OPT}) - f(S^{\OPT} \setminus \{z\})\right] + \disp(z, S^{\OPT}) &\geq \disp(v, S^{\OPT} \setminus \{z\}) \\ &\geq \disp(v, S^{\OPT}_{\text{close}} \setminus \{z\}) \\ &\geq ((p - 1)/2)(0.5\Delta), \end{align*} where the second inequality uses the fact that for any $z' \in S^{\OPT}_{\text{close}}$ we have $d(v, z') \geq d(u^{\min}, v) - d(u^{\min}, z') \geq \Delta - 0.5\Delta$. From this, we have \begin{align*} \dive(S^{\OPT}) = f(S^{\OPT}) + \disp(S^{\OPT}) &= f(S^{\OPT}) + \frac{1}{2} \sum_{u \in S^{\OPT}} \disp(u, S^{\OPT}) \\ &\geq \sum_{z \in S^{\OPT}} \left[f(S^{\OPT}) - f(S^{\OPT} \setminus \{z\})\right] + \frac{1}{2} \sum_{u \in S^{\OPT}} \disp(u, S^{\OPT}) \\ &\geq \frac{1}{2} \sum_{z \in S^{\OPT}_{\text{close}}} \left(\left[f(S^{\OPT}) - f(S^{\OPT} \setminus \{z\})\right] + \disp(z, S^{\OPT})\right) \\ &\geq |S^{\OPT}_{\text{close}}| \cdot \frac{(p - 1)\Delta}{8} \\ &> \frac{p(p - 1)\Delta}{16}, \end{align*} where the first inequality uses the submodularity of $f$, and the last inequality follows from our assumption of this case. \qedhere \end{itemize} \end{proof} \subsubsection{Putting Things Together: Proof of \Cref{thm:diversification-main-detailed}} Finally, we use the structural lemma to reduce Max-Sum Diversification to Submodular \textsc{DkS}. Again, this reduction is analogous to that of Max-Sum Dispersion to \textsc{DkS}\ presented in the previous section. \begin{proof}[Proof of \Cref{thm:diversification-main-detailed}] Our algorithm works as follows: \begin{enumerate} \item For every distinct $u, v \in U$ do: \begin{enumerate} \item Let $\Delta := d(u, v)$ and $\Delta^* = 20 \Delta / \epsilon$. \item If $|\bcB{u}{\Delta}| \geq p$, then skip the following steps and continue to the next pair $u, v$. \item Otherwise, create a submodular \textsc{DkS}\ instance where $V := \cB{u}{\Delta^*}, I := V \setminus \cB{u}{\Delta}$, $k = p - |\bcB{u}{\Delta^*}|$, define $h$ by $h(C) := \frac{1}{k(k - 1) \Delta^*} \cdot f(\bcB{u}{\Delta} \cup C)$, and define $w$ by $w(\{y, z\}) := (0.5 / \Delta^*) \cdot d(y, z)$ for all $y, z \in V$.
\item Use the algorithm from \Cref{thm:submodular-dks} to solve the above instance with $\gamma := 0.00005\epsilon^2$. Let $T$ be the solution found. \item Finally, let $S^{u, v} := T \cup \bcB{u}{\Delta^*}$. \end{enumerate} \item Output the best solution among $S^{u, v}$ considered. \end{enumerate} The running time is dominated by that of the algorithm from \Cref{thm:submodular-dks}, which is $n^{O(\log n / \gamma^2)} = n^{O(\log n / \epsilon^4)}$, as desired. Next, we prove the algorithm's approximation guarantee. To do this, let us consider $S^{\OPT}, u^{\min}$ as defined in \Cref{lem:max-diversification-structural}, and let $u = u^{\min}, v := \argmax_{z \notin S^{\OPT}} d(u, z), \Delta = d(u, v)$. Recall that by the definition of $v$, we have $S^{\OPT} \supseteq \bcB{u}{\Delta} = \bcB{u}{\Delta^*} \cup I$. Let $T$ be the solution found by the submodular \textsc{DkS}\ algorithm for this $u, v$ and let $T' := T \setminus I$. We have \begin{align} &\disp(S^{u, v}) \nonumber \\ &= \disp(\bcB{u}{\Delta^*}) + \disp(\bcB{u}{\Delta^*}, T) + \disp(T) \nonumber \\ &= \disp(\bcB{u}{\Delta^*}) + \disp(\bcB{u}{\Delta^*}, I) + \disp(\bcB{u}{\Delta^*}, T') + \disp(T). \label{eq:current-s-decompose-2} \end{align} Similarly, letting $S := S^{\OPT} \cap \cB{u}{\Delta^*}$ and $S' := S \setminus I$, we have \begin{align} &\disp(S^{\OPT}) \nonumber \\ &= \disp(\bcB{u}{\Delta^*}) + \disp(\bcB{u}{\Delta^*}, I) + \disp(\bcB{u}{\Delta^*}, S') + \disp(S). \label{eq:optimal-s-decompose-2} \end{align} Now, observe from the definition of the submodular \textsc{DkS}\ instance (for this $u, v$) that for any $J$ such that $I \subseteq J \subseteq V$, we have \begin{align*} \den(J) = \frac{1}{k(k - 1)\Delta^*} \cdot \disp(J) \end{align*} and \begin{align*} h(J) = \frac{1}{k(k - 1)\Delta^*} \cdot f(\bcB{u}{\Delta} \cup J). \end{align*} The approximation guarantee from \Cref{thm:submodular-dks} ensures that $\mathbb{E}[h(T) + \den(T)] \geq (1 - 1/e - \gamma)h(S) + \den(S) - \gamma$. Using the above two equalities, together with the observations that $\bcB{u}{\Delta} \cup T = S^{u, v}$ (since $I \subseteq T$) and $\bcB{u}{\Delta} \cup S = S^{\OPT}$, we can rewrite this guarantee as \begin{align} \label{eq:dks-submodular-guarantee} \disp(S) + (1 - 1/e - \gamma)f(S^{\OPT}) - \mathbb{E}[\disp(T) + f(S^{u, v})] \leq \gamma \cdot \Delta^* \cdot k(k - 1). \end{align} Taking the difference between~\eqref{eq:optimal-s-decompose-2} and~\eqref{eq:current-s-decompose-2} and applying~\eqref{eq:dks-submodular-guarantee}, we have \begin{align*} &\disp(S^{\OPT}) + (1 - 1/e - \gamma)f(S^{\OPT}) - \mathbb{E}[\disp(S^{u, v}) + f(S^{u, v})] \\ &\leq \disp(\bcB{u}{\Delta^*}, S') - \disp(\bcB{u}{\Delta^*}, T') + \gamma \cdot \Delta^* \cdot k(k - 1). \\ (\text{Our choice of } \gamma) &\leq \disp(\bcB{u}{\Delta^*}, S') - \disp(\bcB{u}{\Delta^*}, T') + 0.001\epsilon \Delta \cdot p(p - 1) \\ (\text{\Cref{lem:max-diversification-structural}}) &\leq \disp(\bcB{u}{\Delta^*}, S') - \disp(\bcB{u}{\Delta^*}, T') + 0.1\epsilon \dive(S^{\OPT}). \end{align*} Now, since $|S'| = |T'| \leq p$ and $S', T' \subseteq \cB{u}{\Delta}$, we have \begin{align*} \disp(\bcB{u}{\Delta^*}, S') - \disp(\bcB{u}{\Delta^*}, T') &\leq |\bcB{u}{\Delta^*}| \cdot |S'| \cdot ((\Delta^* + \Delta) - (\Delta^* - \Delta)) \\ &\leq 2 |\bcB{u}{\Delta^*}| \cdot |S'| \cdot \Delta \\ (\text{Our choice of } \Delta^*) &\leq 0.1\epsilon \cdot |\bcB{u}{\Delta^*}| \cdot |S'| \cdot (\Delta^* - \Delta) \\ &\leq 0.1\epsilon \cdot \disp(\bcB{u}{\Delta^*}, S') \\ &\leq 0.1\epsilon \cdot \disp(S^{\OPT}).
\end{align*} Combining the above two inequalities, we get \begin{align*} \mathbb{E}[\dive(S^{u, v})] &\geq \disp(S^{\OPT}) + (1 - 1/e - \gamma)f(S^{\OPT}) - 0.2\epsilon \dive(S^{\OPT}) \\ &\geq (1 - \epsilon) \disp(S^{\OPT}) + (1 - 1/e - \epsilon) f(S^{\OPT}), \end{align*} completing our proof. \end{proof} \fi \section{Introduction} A fundamental task in databases in general and in search engines in particular is the selection and ordering of the results to a given query. Suppose that we have already retrieved the set of appropriate answers $S_q$ to a query $q$ by a certain preliminary process. Which item from the (possibly huge) set $S_q$ should be presented \emph{first}? Which should be the first ten? Besides the obvious approach of ranking the \emph{most relevant} answers first, perhaps the second most important consideration is that the output set should satisfy certain \emph{diversity} requirements. If a user searches for ``Barcelona'', it would be desirable that the first ten results contain a mix of items covering, e.g., general details of the city, tourist information, and news about the associated soccer team, even though the most relevant items in certain absolute terms may only pertain to the latter. There are various natural ways to formalize what makes a set of results diverse, and much research has gone into this \emph{Search Diversification} topic in the past two and a half decades in various contexts (see e.g. \cite{CarbonellG98,AgrawalGHI09,GollapudiS09,BhaskaraGMS16,BansalJKN10,kulesza2012determinantal,rodrygo2015search,BorodinJLY17,DrosouJPS17,BasteJMPR19,FominGPP021,Moumoulidou0M21,HKKLO21,DBLP:conf/kdd/AbbassiMT13,DBLP:conf/pods/IndykMMM14,DBLP:conf/spaa/EpastoMZ19,DBLP:conf/aaai/ZadehGMZ17}). Recently, there have also been extensive research efforts into algorithmic fairness (see e.g. a survey~\cite{fairness-survey}). Some of these fairness notions (e.g.~\cite{Chierichetti0LV17,BackursIOSVW19}) are also closely related to diversity: a set of results that is not diverse enough (e.g. returning only pictures of members of one group when a user searches for ``scientists'') could be problematic in terms of fairness. A well-known work on search diversification \cite{CarbonellG98} suggests that a diverse set of results is one that satisfies the following: the $k^{th}$ result in the list should maximize the sum\footnote{To be more precise, it is a weighted average of the two terms.} of: (1) the relevance to the query, and (2) the total distance to the first $k-1$ results in the list. The success of this natural notion of diversification may be attributed to the fact that it can be computed efficiently with a greedy algorithm. However, it may be a bit too simplistic, and the objectives that real-world search engines seem to optimize for are actually closer to other, more complicated (to compute) notions of diversity that have been proposed in follow-up works (e.g. \cite{BansalJKN10,GollapudiS09,BorodinJLY17}). The goal of this paper is to investigate the time complexity of computing these latter, more intricate definitions of the search diversification task. Since such problems are NP-hard even in restricted settings, and since approximate solutions are typically acceptable in this context, our focus is on understanding their time vs. approximation trade-offs. Our results reduce the gaps in the literature, completely resolving the complexity of some of the most natural notions.
\subsection{Diversified Search Ranking} The first problem we study is a diversified search ranking problem formulated by Bansal et al.~\cite{BansalJKN10}. Here we are given a collection $\mathcal{S}$ of subsets of $[n]$ and, for each $S \in \mathcal{S}$, a positive integer $k_S$. Our goal is to find a permutation $\pi: [n] \to [n]$ that maximizes the \emph{discounted cumulative gain (DCG)} defined as \begin{align} \label{eq:dcg-def} \DCG_{\mathcal{S}, \mathbf{k}}(\pi) := \sum_{S \in \mathcal{S}} \frac{1}{\log(t_{\pi}(S) + 1)}, \end{align} where $t_{\pi}(S)$ is defined as the earliest time the set $S$ is covered $k_S$ times, i.e., $\min \{i \in [n] : |S \cap \pi([i])| \geq k_S\}$. This formulation relates to diversification by viewing the output $\pi$ as the ranking of the documents to be shown, where each topic corresponds to a set $S$ of documents related to that topic. With this interpretation, the DCG favors rankings that display ``diverse topics as early in the ranking as possible''. Bansal et al.~\cite{BansalJKN10} gave a polynomial-time approximation scheme (PTAS) for the problem in the special case that $k_S = 1$ for all $S \in \mathcal{S}$ with running time $n^{2^{O(\log(1/\epsilon)/\epsilon)}} m^{O(1)}$. On the other hand, for the case of general $k_S$'s, they gave a quasipolynomial-time approximation scheme with running time $n^{(\log \log n)^{O(1/\epsilon)}} m^{O(1)}$ and left as an open question whether a PTAS exists. We resolve this open question by giving a PTAS for the general problem; the running time we obtain is similar to that of Bansal et al.'s PTAS for the special case $k_S = 1$. We then show that this is indeed the best possible (under a suitable complexity assumption). \begin{theorem} \label{thm:main-ptas} There is a randomized PTAS for maximizing DCG that runs in time $n^{2^{O(\log(1/\epsilon)/\epsilon)}} \cdot m^{O(1)}$. \end{theorem} The above running time is doubly exponential in $1/\epsilon$, and Bansal et al.~\cite{BansalJKN10} asked whether this dependency is necessary even for the special case $k_S = 1$. We also answer this question by showing that the doubly exponential dependency is indeed necessary, assuming the Gap Exponential Time Hypothesis (Gap-ETH)\footnote{Gap-ETH~\cite{Din16,ManurangsiR17} asserts that there is no $2^{o(n)}$-time algorithm to distinguish between a satisfiable $n$-variable 3SAT formula and one which is not even $(1 - \epsilon)$-satisfiable, for some $\epsilon > 0$.}: \begin{theorem} \label{thm:dcg-lb} Assuming Gap-ETH, for any function $g$, there is no PTAS for maximizing DCG that runs in time $g(\epsilon) \cdot (nm)^{2^{o(1/\epsilon)}}$. Moreover, this holds even when restricted to instances with $k_S = 1$ for all $S \in \mathcal{S}$. \end{theorem} \subsection{Max-Sum Dispersion} The second problem we consider is the so-called \emph{Max-Sum Dispersion} problem, in which we are given a metric space $(U, d)$ with $|U| = n$ and an integer $p \geq 2$. The goal is to select $S \subseteq U$ of size $p$ that maximizes \begin{align*} \disp(S) := \sum_{\{u, v\} \subseteq S} d(u, v). \end{align*} Roughly speaking, if the metric determines how different the items are, then our goal is to pick items that are ``as diverse as possible'' according to the $\disp$ objective. The Max-Sum Dispersion problem is a classic problem that has been studied since the 80s~\cite{MoonC84,Kuby87,RaviRT94,HassinRT97,BorodinJLY17}.
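To make the objective concrete, the following Python sketch computes $\disp$ and runs a simple pair-greedy heuristic in the spirit of the known 0.5-approximations~\cite{HassinRT97,BorodinJLY17}; it is shown for illustration only (in particular, \texttt{d} is an arbitrary caller-supplied metric, and we do not claim this is exactly the algorithm analyzed in those works):

\begin{verbatim}
from itertools import combinations

def disp(S, d):
    # Max-Sum Dispersion objective: sum of pairwise distances within S.
    return sum(d(u, v) for u, v in combinations(S, 2))

def pair_greedy(U, d, p):
    # Repeatedly add the farthest remaining pair; if p is odd,
    # top up with an arbitrary remaining point.
    S, R = [], set(U)
    while len(S) + 1 < p:
        u, v = max(combinations(R, 2), key=lambda e: d(*e))
        S += [u, v]
        R -= {u, v}
    if len(S) < p:
        S.append(R.pop())
    return S
\end{verbatim}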
Previous works have given a polynomial-time 0.5-approximation algorithm for the problem~\cite{HassinRT97,BorodinJLY17}. We observe that the known NP-hardness reduction, together with newer hardness of approximation results for the Densest $k$-Subgraph problem with perfect completeness, yields strong lower bounds for the problem (see \Cref{app:hardness-from-dks}). For example, if we assume the Strongish Planted Clique Hypothesis~\cite{ManurangsiRS21}, then no $(0.5 + \epsilon)$-approximation algorithm is possible in $n^{o(\log n)}$ time. In other words, to achieve an improvement over the known approximation ratio, the algorithm must run in $n^{\Omega(\log n)}$ time. Complementing this, we provide a quasipolynomial-time approximation scheme that runs in time $n^{O_\epsilon(\log n)}$: \begin{theorem} \label{thm:qptas-dispersion} There is a QPTAS for Max-Sum Dispersion that runs in time $n^{O(\log n / \epsilon^4)}$. \end{theorem} \subsection{Max-Sum Diversification} Finally, we consider a generalization of Max-Sum Dispersion where, in addition to the metric space $(U, d)$, we are now also given a monotone set function $f$ (which we can access via a value oracle) and the goal is to select a set $S \subseteq U$ of size $p$ that maximizes $$\dive(S) := \disp(S) + f(S).$$ This problem is referred to as \emph{Max-Sum Diversification}. The Max-Sum Diversification problem is more expressive than Max-Sum Dispersion. For example, the value $f(S)$ in the objective may be used to encode how relevant the selected set $S$ is to the given query, in addition to the diversity objective expressed by $\disp(S)$. Borodin et al.~\cite{BorodinJLY17} gave a 0.5-approximation algorithm for the problem when $f$ is a monotone submodular function. Since Max-Sum Diversification is a generalization of Max-Sum Dispersion, our aforementioned lower bounds also imply that improving on this 0.5 factor requires at least $n^{\Omega(\log n)}$ time. Furthermore, submodular Max-Sum Diversification is also a generalization of maximizing a monotone submodular function subject to a cardinality constraint. For this problem, a $(1 - 1/e)$-approximation algorithm is known, and achieving a better ratio is NP-hard~\cite{Feige98}. Therefore, it is impossible to achieve a better-than-$(1 - 1/e)$ approximation even in (randomized) quasi-polynomial time, assuming NP $\nsubseteq RTIME(n^{O(\log n)})$. Here we manage to provide such a tight quasi-polynomial time approximation algorithm: \begin{theorem} \label{thm:max-sum-div-apx} For any $\epsilon > 0$, there exists a randomized $n^{O(\log n / \epsilon^4)}$-time $(1 - 1/e - \epsilon)$-approximation algorithm for submodular Max-Sum Diversification. \end{theorem} We remark that an interesting special case of submodular Max-Sum Diversification is when $f$ is linear, i.e., $f(S) = \sum_{u \in S} f(u)$. In this case, Gollapudi and Sharma~\cite{GollapudiS09} provided an approximation-preserving reduction from the problem to Max-Sum Dispersion. Therefore, our QPTAS for the latter (\Cref{thm:qptas-dispersion}) also yields a QPTAS for this special case of Max-Sum Diversification.
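To illustrate the combined objective, the sketch below (reusing \texttt{disp} from the previous sketch) evaluates $\dive$ when $f$ is a coverage function, one standard example of a monotone submodular function; the topic sets \texttt{topics} are hypothetical inputs used purely for illustration:

\begin{verbatim}
def coverage(S, topics):
    # f(S) = number of distinct topics covered by S;
    # coverage functions are monotone and submodular.
    covered = set()
    for u in S:
        covered |= topics[u]
    return len(covered)

def dive(S, d, topics):
    # Max-Sum Diversification objective: dispersion plus relevance.
    return disp(S, d) + coverage(S, topics)
\end{verbatim}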
\section{Preliminaries} \input{prelim} \section{Diversified Search Ranking} \input{dcg} \section{Max-Sum Dispersion} \label{sec:disp} \input{dispersion} \section{Max-Sum Diversification} \label{sec:diversification} \input{diversification} \section{Conclusion} In this work, we consider three problems related to diversification: DCG in diversified search ranking, Max-Sum Dispersion, and Max-Sum Diversification. For DCG, we give a PTAS and prove a nearly matching running time lower bound. For Max-Sum Dispersion, we give a QPTAS and similarly provide evidence for nearly matching running time lower bounds. Finally, we give a quasi-polynomial time algorithm for Max-Sum Diversification that achieves an approximation ratio arbitrarily close to $(1 - 1/e)$, which is also tight given the $(1 - 1/e + o(1))$-factor NP-hardness of approximating Maximum $k$-Coverage~\cite{Feige98}. Our algorithms for DCG and Max-Sum Diversification are randomized, and it remains an interesting open question whether there are deterministic algorithms with similar running times and approximation ratios. \iffullversion \section*{Acknowledgment} We are grateful to Karthik C.S. for insightful discussions, and to Badih Ghazi for encouraging us to work on the problems. \fi \ificalp \bibliographystyle{plainurl} \fi \iffullversion \bibliographystyle{alpha} \fi \subsection{Concentration Inequalities} For our randomized approximation algorithms, we will need some standard concentration inequalities. First, we will use the following version of the Chernoff bound, which gives a tail bound on the sum of i.i.d. random variables. (See e.g.~\cite{MitzenmacherU-book} for a proof.) \begin{lemma}[Chernoff bound] \label{lem:chernoff} Let $X_1, \dots, X_r \in [0, 1]$ be independent random variables, $S := X_1 + \cdots + X_r$ and $\mu := \mathbb{E}[S]$. Then, for any $\delta \in [0, 1]$, we have \begin{align*} \Pr[|S - \mu| > \delta \mu] \leq 2 \exp\left(-\frac{\delta^2 \mu}{3}\right). \end{align*} Furthermore, for any $\delta \geq 0$, we have \begin{align*} \Pr[S > (1 + \delta)\mu] \leq \exp\left(- \frac{\delta^2 \mu}{2 + \delta}\right). \end{align*} \end{lemma} It will also be convenient to have a concentration inequality for sums of random variables that are drawn without replacement from a given set. For this, we will use a without-replacement version of Hoeffding's inequality, stated below. (See e.g.~\cite{bardenet2015concentration}.) \begin{lemma}[Hoeffding's inequality] \label{lem:hoeffding} Let $X_1, \dots, X_r$ be random variables drawn without replacement from a multiset $\mathcal{X} \subseteq [0, 1]$, $A := \frac{1}{r}\left(X_1 + \cdots + X_r\right)$ and $\mu := \mathbb{E}[A]$. Then, for any $\delta \in [0, 1]$, we have \begin{align*} \Pr[|A - \mu| > \delta] \leq 2 \exp\left(-2\delta^2 r\right). \end{align*} \end{lemma} \subsection{Densest $k$-Subgraph} For both our Max-Sum Dispersion and Max-Sum Diversification problems, we will use as a subroutine algorithms for (variants of) the \emph{Densest $k$-Subgraph (\textsc{DkS})} problem. In \textsc{DkS}, we are given a set $V$ of nodes, weights $w: \binom{V}{2} \to [0, 1]$, and an integer $k$; the goal is to find a subset $T \subseteq V$ with $|T| = k$ that maximizes $\den(T) := \frac{1}{|T|(|T|-1)/2} \sum_{\{u, v\} \subseteq T} w(\{u, v\})$. An {\em additive QPTAS} is an algorithm running in quasipolynomial time for any fixed $\epsilon > 0$ such that its output $T$ satisfies $\den(T) \geq \OPT - \epsilon$; Barman~\cite{Barman18} gave such an algorithm for \textsc{DkS}.
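For concreteness, the objective and an exact brute-force baseline (feasible only for very small instances) can be written as follows; \texttt{w} is a caller-supplied symmetric weight function:

\begin{verbatim}
from itertools import combinations

def den(T, w):
    # Normalized density: average weight over the k(k-1)/2 pairs in T.
    # Assumes k >= 2.
    k = len(T)
    return sum(w(u, v) for u, v in combinations(T, 2)) / (k * (k - 1) / 2)

def dks_bruteforce(V, w, k):
    # Exact DkS by enumeration; exponential in k, for sanity checks only.
    return max(combinations(V, k), key=lambda T: den(T, w))
\end{verbatim}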
We will in fact use a slightly generalized version of the problem where a subset $I \subseteq V$ of vertices is given as input and these vertices must be picked in the solution $T$ (i.e., $I \subseteq T$). To avoid cumbersome terminology, we also refer to this generalized version as \textsc{DkS}. It is not hard to see\footnote{In fact, in \Cref{subsec:submodular-dks}, we also give a more general algorithm than the one stated in~\Cref{thm:qptas-dks} which can also handle an additional monotone submodular function.} that Barman's algorithm~\cite{Barman18} extends easily to this setting: \begin{theorem} \label{thm:qptas-dks} There is an additive QPTAS for \textsc{DkS}\ that runs in time $n^{O(\log n / \epsilon^2)}$. \end{theorem} \textsc{DkS}\ is a classic problem in the approximation algorithms literature, and many approximation algorithms~\cite{FS97,SW98,FL01,FPK01,AHI02,GL09,BCCFV10,Barman18} and hardness results~\cite{Feige02,Kho06,RS10,alon2011inapproximability,BCVGZ12,BravermanKRW17,Manurangsi17,ChalermsookCKLM20} have been proved over the years. Most of these works focus on \emph{multiplicative} approximation; the best known polynomial-time algorithm in this setting has an approximation ratio of $n^{1/4 + \epsilon}$ for any constant $\epsilon > 0$~\cite{BCCFV10}, and there is evidence that achieving a subpolynomial ratio in polynomial time is unlikely~\cite{Manurangsi17,BCVGZ12,CMMV17}. As for \emph{additive} approximation, it is known that an approximation scheme that runs in time $n^{\tilde{o}(\log n)}$ would break the exponential time hypothesis (ETH)~\cite{BravermanKRW17}; therefore, the running time in~\Cref{thm:qptas-dks} (in terms of $n$) is tight up to a $\poly\log \log n$ factor in the exponent. We provide additional discussions on related results in~\Cref{app:hardness-from-dks}. \subsection{Submodular Maximization over a Matroid Constraint} For our approximation algorithm for Max-Sum Diversification, we will also need an approximation algorithm for \emph{monotone submodular maximization under a matroid constraint}. In this problem, we are given a monotone submodular set function $f: 2^X \to \mathbb{R}_{\geq 0}$ over a ground set $X$ together with a matroid $\mathcal{M} = (X, \mathcal{I})$. The function $f$ is given via a value oracle and $\mathcal{M}$ can be accessed via a membership oracle (which answers questions of the form ``does $S$ belong to $\mathcal{I}$?''). The goal is to find $S \in \mathcal{I}$ that maximizes $f(S)$. C{\u{a}}linescu et al. gave a randomized algorithm with approximation ratio $(1 - 1/e)$ for the problem, which we will use in our algorithm. \begin{theorem}[\cite{CalinescuCPV11}] \label{thm:submodular-matroid} There exists a randomized polynomial-time $(1 - 1/e)$-approximation algorithm for maximizing a monotone submodular function over a matroid constraint. \end{theorem}
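The algorithm behind \Cref{thm:submodular-matroid} (the continuous greedy) is beyond the scope of a short sketch, but the special case of a uniform matroid, i.e., a cardinality constraint $|S| \leq k$, is already handled by the classic discrete greedy of Nemhauser, Wolsey, and Fisher, which also achieves the $(1 - 1/e)$ ratio. A minimal illustrative sketch of that special case:

\begin{verbatim}
def greedy_cardinality(f, X, k):
    # Discrete greedy: repeatedly add the element of largest marginal
    # gain.  Achieves (1 - 1/e) when f is monotone submodular and the
    # constraint is |S| <= k; general matroid constraints require the
    # continuous greedy algorithm instead.
    S = set()
    for _ in range(min(k, len(X))):
        u = max(set(X) - S, key=lambda x: f(S | {x}) - f(S))
        S.add(u)
    return S
\end{verbatim}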
\section{Introduction} The problem of estimating a sparse unknown parameter vector from noisy measurements has been analyzed intensively in the past few years \cite{tropp06, donoho06, candes06, candes07}, and has already given rise to numerous successful signal processing algorithms \cite{elad06, dabov07, dabov08, protter09, elad05}. In this paper, we consider the setting in which noisy measurements of a deterministic vector $\mb{x}_0$ are available. It is assumed that $\mb{x}_0$ has a sparse representation $\mb{x}_0 = \mb{D}{\boldsymbol \alpha}_0$, where $\mb{D}$ is a given dictionary and most of the entries of ${\boldsymbol \alpha}_0$ equal zero. Thus, only a small number of ``atoms,'' or columns of $\mb{D}$, are required to represent $\mb{x}_0$. The challenge confronting an estimation technique is to recover either $\mb{x}_0$ itself or its sparse representation ${\boldsymbol \alpha}_0$. Several practical approaches turn out to be surprisingly successful in this task. Such approaches include the Dantzig selector (DS) \cite{candes07} and basis pursuit denoising (BPDN), which is also referred to as the Lasso \cite{chen98, tropp06, donoho06}. A standard measure of estimator performance is the mean-squared error (MSE). Several recent papers analyzed the MSE obtained by methods such as the DS and BPDN \cite{candes07, ben-haim09c}. To determine the quality of estimation approaches, it is of interest to compare their achievements with theoretical performance limits: if existing methods approach the performance bound, then they are nearly optimal and further improvements in the current setting are impossible. This motivates the development of lower bounds on the MSE of estimators in the sparse setting. Since the parameter to be estimated is deterministic, the MSE is in general a function of the parameter value. While there are lower bounds on the worst-case achievable MSE among all possible parameter values \cite[\S7.4]{candes06b}, the actual achievable MSE for a specific value, or even for most values, might be substantially lower. Our goal is therefore to characterize the minimum MSE obtainable for each particular parameter vector. A standard method of achieving this objective is the Cram\'er--Rao bound (CRB) \cite{kay93, shao03}. The fact that $\mb{x}_0$ has a sparse representation is of central importance for estimator design. Indeed, many sparse estimation settings are underdetermined, meaning that without the assumption of sparsity, it is impossible to identify the correct parameter from its measurements, even without noise. In this paper, we treat the sparsity assumption as a deterministic prior constraint on the parameter. Specifically, we assume that $\mb{x}_0 \in \SS$, where $\SS$ is the set of all parameter vectors which can be represented by no more than $s$ atoms, for a given integer $s$. Our results are inspired by the well-studied theory of the constrained CRB \cite{gorman90, marzetta93, StoicaNg98, ben-haim09}. This theory is based on the assumption that the constraint set can be defined by a system of equations and inequalities $\mb{f}(\mb{x})=\mb{0}$, $\mb{g}(\mb{x})\le\mb{0}$, where $\mb{f}$ and $\mb{g}$ are continuously differentiable functions. The resulting bound depends on the derivatives of the function $\mb{f}$. However, sparsity constraints cannot be written in this form. This necessitates the development of a bound suitable for non-smooth constraint sets \cite{ben-haim09d}.
In obtaining this modified bound, we also provide new insight into the meaning of the general constrained CRB\@. In particular, we show that the fact that the constrained CRB is lower than the unconstrained bound results from an expansion of the class of estimators under consideration. With the aforementioned theoretical tools at hand, we obtain lower bounds on the MSE in a variety of sparse estimation problems. Our bound limits the MSE achievable by any estimator having a pre-specified bias function, for each parameter value. Particular emphasis is given to the unbiased case; the reason for this preference is twofold: First, when the signal-to-noise ratio (SNR) is high, biased estimation is suboptimal. Second, for high SNR values, the unbiased CRB is achieved by the maximum likelihood (ML) estimator. While the obtained bounds differ depending on the exact problem definition, in general terms and for unbiased estimation the bounds can be described as follows. For parameters having maximal support, i.e., parameters whose representation requires the maximum allowed number $s$ of atoms, the lower bound equals the MSE of the ``oracle estimator'' which knows the locations (but not the values) of the nonzero representation elements. On the other hand, for parameters which do not have maximal support (a set which has Lebesgue measure zero in $\SS$), our lower bound is identical to the CRB for an unconstrained problem, which is substantially higher than the oracle MSE\@. The correspondence between the CRB and the MSE of the oracle estimator (for all but a zero-measure subset of the feasible parameter set $\SS$) is of practical interest since, unlike the oracle estimator, the CRB is achieved by the ML estimator at high SNR\@. Our bound can thus be viewed as an alternative justification for the common use of the oracle estimator as a baseline against which practical algorithms are compared. This gives further merit to recent results, which demonstrate that BPDN and the DS both achieve near-oracle performance \cite{candes07, ben-haim09c}. However, the existence of parameters for which the bound is much higher indicates that oracular performance cannot be attained for \emph{all} parameter values, at least using unbiased techniques. Indeed, as we will show, in many sparse estimation scenarios, one cannot construct \emph{any} estimator which is unbiased for all sparsely representable parameters. Our contribution is related to, but distinct from, the work of Babadi et al.\ \cite{babadi09}, in which the CRB of the oracle estimator was derived (and shown to equal the aforementioned oracle MSE). Our goal in this work is to obtain a lower bound on the performance of estimators which are not endowed with oracular knowledge; consequently, as explained above, for some parameter values the obtained CRB will be higher than the oracle MSE\@. It was further shown in \cite{babadi09} that when the measurements consist of Gaussian random mixtures of the parameter vector, there exists an estimator which achieves the oracle CRB at high SNR; this is shown to hold on average over realizations of the measurement mixtures. The present contribution strengthens this result by showing that for any given (deterministic) well-behaved measurement setup, there exists a technique (namely, the ML estimator) achieving the CRB at high SNR\@. Thus, convergence to the CRB is guaranteed for all measurement settings, and not merely when averaging over an ensemble of such settings. The rest of this paper is organized as follows. 
In Section~\ref{se:sparse backgnd}, we review the sparse setting as a constrained estimation problem. Section~\ref{se:crb} defines a generalization of sparsity constraints, which we refer to as locally balanced constraint sets; the CRB is then derived in this general setting. In Section~\ref{se:sparse bounds}, our general results are applied back to some specific sparse estimation problems. In Section~\ref{se:numer}, the CRB is compared to the empirical performance of estimators of sparse vectors. Our conclusions are summarized in Section~\ref{se:discuss}. Throughout the paper, boldface lowercase letters $\v$ denote vectors while boldface uppercase letters $\mb{M}$ denote matrices. Given a vector function $\mb{f}: {\mathbb R}^n \rightarrow {\mathbb R}^k$, we denote by $\partial \mb{f} / \partial \mb{x}$ the $k \times n$ matrix whose $ij$th element is $\partial f_i / \partial x_j$. The support of a vector, denoted $\supp(\v)$, is the set of indices of the nonzero entries in $\v$. The Euclidean norm of a vector $\v$ is denoted $\|\v\|_2$, and the number of nonzero entries in $\v$ is $\|\v\|_0$. Finally, the symbols $\Ra{\mb{M}}$, $\Nu{\mb{M}}$, and $\mb{M}^\dagger$ refer, respectively, to the column space, null space, and Moore--Penrose pseudoinverse of the matrix $\mb{M}$. \section{Sparse Estimation Problems} \label{se:sparse backgnd} In this section, we describe several estimation problems whose common theme is that the unknown parameter has a sparse representation with respect to a known dictionary. We then review some standard techniques used to recover the unknown parameter in these problems. In Section~\ref{se:numer} we will compare these methods with the performance bounds we develop. \subsection{The Sparse Setting} \label{ss:sparse setting} Suppose we observe a measurement vector $\mb{y} \in {\mathbb R}^m$, given by \begin{equation} \label{eq:y=Ax+w} \mb{y} = \mb{A}\mb{x}_0 + \mb{w} \end{equation} where $\mb{x}_0 \in {\mathbb R}^n$ is an unknown deterministic signal, $\mb{w}$ is independent, identically distributed (IID) Gaussian noise with zero mean and variance $\sigma^2$, and $\mb{A}$ is a known $m \times n$ matrix. We assume the prior knowledge that there exists a sparse representation of $\mb{x}_0$, or, more precisely, that \begin{equation} \label{eq:def S} \mb{x}_0 \in \SS \triangleq \left\{ \mb{x} \in {\mathbb R}^n : \mb{x} = \mb{D} {\boldsymbol \alpha}, \|{\boldsymbol \alpha}\|_0 \le s \right\}. \end{equation} In other words, the set $\SS$ describes signals $\mb{x}$ which can be formed from a linear combination of no more than $s$ columns, or atoms, from $\mb{D}$. The dictionary $\mb{D}$ is an $n \times p$ matrix with $n \le p$, and we assume that $s < p$, so that only a subset of the atoms in $\mb{D}$ can be used to represent any signal in $\SS$. We further assume that $\mb{D}$ and $s$ are known. Quite a few important signal recovery applications can be formulated using the setting described above. For example, if $\mb{A}=\mb{I}$, then $\mb{y}$ consists of noisy observations of $\mb{x}_0$, and recovering $\mb{x}_0$ is a denoising problem \cite{elad06, dabov07}. If $\mb{A}$ corresponds to a blurring kernel, we obtain a deblurring problem \cite{dabov08}. In both cases, the matrix $\mb{A}$ is square and invertible. Interpolation and inpainting can likewise be formulated as \eqref{eq:y=Ax+w}, but in those cases $\mb{A}$ is an underdetermined matrix, i.e., we have $m<n$ \cite{elad05}. 
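It will occasionally be convenient to experiment with synthetic instances of \eqref{eq:y=Ax+w}. The following Python sketch generates such an instance; the Gaussian choices for $\mb{A}$, $\mb{D}$, and the nonzero entries of ${\boldsymbol \alpha}_0$ are illustrative only and are not prescribed by the model:

\begin{verbatim}
import numpy as np

def make_instance(m, n, p, s, sigma, seed=0):
    # Draw random A (m x n) and D (n x p), an s-sparse alpha0, and
    # noisy measurements y = A x0 + w with w ~ N(0, sigma^2 I).
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))
    D = rng.standard_normal((n, p))
    alpha0 = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    alpha0[support] = rng.standard_normal(s)
    x0 = D @ alpha0
    y = A @ x0 + sigma * rng.standard_normal(m)
    return y, A, D, alpha0, x0
\end{verbatim}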
For all of these estimation scenarios, our goal is to obtain an estimate ${\widehat{\x}}$ whose MSE is as low as possible, where the MSE is defined as \begin{equation} \label{eq:def MSE} {\mathrm{MSE}} \triangleq \E{ \| {\widehat{\x}} - \mb{x}_0 \|_2^2 }. \end{equation} Note that $\mb{x}_0$ is deterministic, so that the expectation in \eqref{eq:def MSE} (and throughout the paper) is taken over the noise $\mb{w}$ but not over $\mb{x}_0$. Thus, the MSE is in general a function of $\mb{x}_0$. In the above settings, the goal is to estimate the unknown signal $\mb{x}_0$. However, it may also be of interest to recover the coefficient vector ${\boldsymbol \alpha}_0$ for which $\mb{x}_0 = \mb{D}{\boldsymbol \alpha}_0$, e.g., for the purpose of model selection \cite{tropp06, candes07}. In this case, the goal is to construct an estimator ${\widehat{\alf}}$ whose MSE $E\{ \|{\widehat{\alf}}-{\boldsymbol \alpha}_0\|_2^2 \}$ is as low as possible. Unless $\mb{D}$ is unitary, estimating ${\boldsymbol \alpha}_0$ is not equivalent to estimating $\mb{x}_0$. Note, however, that when estimating ${\boldsymbol \alpha}_0$, the matrices $\mb{A}$ and $\mb{D}$ can be combined to obtain the equivalent problem \begin{equation} \label{eq:y=H alf + w} \mb{y} = \H {\boldsymbol \alpha}_0 + \mb{w} \end{equation} where $\H \triangleq \mb{A}\mb{D}$ is an $m \times p$ matrix and \begin{equation} \label{eq:def T} {\boldsymbol \alpha}_0 \in {\mathcal T} = \{ {\boldsymbol \alpha} \in {\mathbb R}^p : \|{\boldsymbol \alpha}\|_0 \le s \} . \end{equation} Therefore, this problem can also be seen as a special case of \eqref{eq:y=Ax+w} and \eqref{eq:def S}. Nevertheless, it will occasionally be convenient to refer specifically to the problem of estimating ${\boldsymbol \alpha}_0$ from \eqref{eq:y=H alf + w}. Signal estimation problems differ in the properties of the dictionary $\mb{D}$ and measurement matrix $\mb{A}$. In particular, problems of a very different nature arise depending on whether the dictionary is a basis or an overcomplete frame. For example, many approaches to denoising yield simple shrinkage techniques when $\mb{D}$ is a basis, but deteriorate to NP-hard optimization problems when $\mb{D}$ is overcomplete \cite{natarajan95}. A final technical comment is in order. If the matrix $\H$ in \eqref{eq:y=H alf + w} does not have full column rank, then there may exist different feasible parameters ${\boldsymbol \alpha}_1$ and ${\boldsymbol \alpha}_2$ such that $\H{\boldsymbol \alpha}_1 = \H{\boldsymbol \alpha}_2$. In this case, the probability distribution of $\mb{y}$ will be identical for these two parameter vectors, and the estimation problem is said to be unidentifiable \cite[\S1.5.2]{lehmann98}. A necessary and sufficient condition for identifiability is \begin{equation} \label{eq:spark req} \spark(\H) > 2s \end{equation} where $\spark(\H)$ is defined as the smallest integer $k$ such that there exist $k$ linearly dependent columns in $\H$ \cite{donoho03}. We will adopt the assumption \eqref{eq:spark req} throughout the paper. Similarly, in the problem \eqref{eq:y=Ax+w} we will assume that \begin{equation} \label{eq:spark req D} \spark(\mb{D}) > 2s. \end{equation} \subsection{Estimation Techniques} \label{ss:est techniques} We now review some standard estimators for the sparse problems described above. These techniques are usually viewed as methods for obtaining an estimate ${\widehat{\alf}}$ of the vector ${\boldsymbol \alpha}_0$ in \eqref{eq:y=H alf + w}, and we will adopt this perspective in the current section. 
One way to estimate $\mb{x}_0$ in the more general problem \eqref{eq:y=Ax+w} is to first estimate ${\boldsymbol \alpha}_0$ with the methods described below and then use the formula ${\widehat{\x}} = \mb{D}{\widehat{\alf}}$. A widely-used estimation technique is the ML approach, which provides an estimate of ${\boldsymbol \alpha}_0$ by solving \begin{equation} \label{eq:ml} \min_{\boldsymbol \alpha} \|\mb{y} - \H{\boldsymbol \alpha}\|_2^2 \quad \text{s.t. } \|{\boldsymbol \alpha}\|_0 \le s. \end{equation} Unfortunately, \eqref{eq:ml} is a nonconvex optimization problem and solving it is NP-hard \cite{natarajan95}, meaning that an efficient algorithm providing the ML estimator is unlikely to exist. In fact, to the best of our knowledge, the most efficient method for solving \eqref{eq:ml} for general $\H$ is to enumerate the $\binom{p}{s}$ possible $s$-element support sets of ${\boldsymbol \alpha}$ and choose the one for which $\|\mb{y} - \H{\boldsymbol \alpha}\|_2^2$ is minimal. This is clearly an impractical strategy for reasonable values of $p$ and $s$. Consequently, several efficient alternatives have been proposed for estimating ${\boldsymbol \alpha}_0$. One of these is the $\ell_1$-penalty version of BPDN \cite{tropp06}, which is defined as a solution $\half_{\mathrm{BP}}$ to the quadratic program \begin{equation} \label{eq:bpdn} \min_{\boldsymbol \alpha} \tfrac{1}{2} \|\mb{y} - \H{\boldsymbol \alpha}\|_2^2 + \gamma \|{\boldsymbol \alpha}\|_1 \end{equation} with some regularization parameter $\gamma$. More recently, the DS was proposed \cite{candes07}; this approach estimates ${\boldsymbol \alpha}_0$ as a solution $\half_{\mathrm{DS}}$ to \begin{equation} \label{eq:ds} \min_{\boldsymbol \alpha} \|{\boldsymbol \alpha}\|_1 \quad \text{s.t. } \|\H^T (\mb{y} - \H{\boldsymbol \alpha}) \|_\infty \le \tau \end{equation} where $\tau$ is again a user-selected parameter. A modification of the DS, known as the Gauss--Dantzig selector (GDS) \cite{candes07}, is to use $\half_{\mathrm{DS}}$ only to estimate the support of ${\boldsymbol \alpha}_0$. In this approach, one solves \eqref{eq:ds} and determines the support set of $\half_{\mathrm{DS}}$. The GDS estimate is then obtained as \begin{equation} \label{eq:gds} \half_{\mathrm{GDS}} = \begin{cases} \H_{\half_{\mathrm{DS}}}^\dagger \mb{y} & \text{on the support set of $\half_{\mathrm{DS}}$} \cr \mb{0} & \text{elsewhere} \end{cases} \end{equation} where $\H_{\half_{\mathrm{DS}}}$ consists of the columns of $\H$ corresponding to the support of $\half_{\mathrm{DS}}$. Previous research on the performance of these estimators has primarily examined their worst-case MSE among all possible values of ${\boldsymbol \alpha}_0 \in {\mathcal T}$. Specifically, it has been shown \cite{candes07} that, under suitable conditions on $\H$, $s$, and $\tau$, the DS of \eqref{eq:ds} satisfies \begin{equation} \label{eq:ds wc bound} \|{\boldsymbol \alpha}_0 - \half_{\mathrm{DS}}\|_2^2 \le C s \sigma^2 \log p \quad \text{with high probability} \end{equation} for some constant $C$. It follows that the MSE of the DS is also no greater than a constant times $s \sigma^2 \log p$ for all ${\boldsymbol \alpha}_0 \in {\mathcal T}$ \cite{candes06b}. An identical property was also demonstrated for BPDN \eqref{eq:bpdn} with an appropriate choice of $\gamma$ \cite{ben-haim09c}. Conversely, it is known that the worst-case error of \emph{any} estimator is at least a constant times $s \sigma^2 \log p$ \cite[\S7.4]{candes06b}. 
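As an aside, while the ML problem \eqref{eq:ml} is intractable in general, the convex program \eqref{eq:bpdn} can be solved efficiently. The following is a minimal sketch of the iterative soft-thresholding algorithm (ISTA), one standard solver for \eqref{eq:bpdn}; it is included for concreteness only and is not necessarily the implementation used in the works cited above:

\begin{verbatim}
import numpy as np

def soft(z, t):
    # Soft thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bpdn_ista(y, H, gamma, n_iter=500):
    # Proximal gradient descent on (1/2)||y - H a||^2 + gamma ||a||_1,
    # with fixed step 1/L, where L = ||H||_2^2 is the gradient's
    # Lipschitz constant.
    step = 1.0 / np.linalg.norm(H, 2) ** 2
    alpha = np.zeros(H.shape[1])
    for _ in range(n_iter):
        alpha = soft(alpha - step * H.T @ (H @ alpha - y), step * gamma)
    return alpha
\end{verbatim}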
Both BPDN and the DS are thus optimal, up to a constant, in terms of worst-case error. Nevertheless, the MSE of these approaches for specific values of ${\boldsymbol \alpha}_0$, even for a vast majority of such values, might be much lower. Our goal differs from this line of work in that we characterize the \emph{pointwise} performance of an estimator, i.e., the MSE for specific values of ${\boldsymbol \alpha}_0$. Another baseline with which practical techniques are often compared is the oracle estimator, given by \begin{equation}\label{eq:def xo} \half_{\mathrm{oracle}} = \begin{cases} \H_{{\boldsymbol \alpha}_0}^\dagger \mb{y} & \text{on the set $\supp({\boldsymbol \alpha}_0)$} \\ \mb{0} & \text{elsewhere} \end{cases} \end{equation} where $\H_{{\boldsymbol \alpha}_0}$ is the submatrix constructed from the columns of $\H$ corresponding to the nonzero entries of ${\boldsymbol \alpha}_0$. In other words, $\half_{\mathrm{oracle}}$ is the least-squares (LS) solution among vectors whose support coincides with $\supp({\boldsymbol \alpha}_0)$, which is assumed to have been provided by an ``oracle.'' Of course, in practice the support of ${\boldsymbol \alpha}_0$ is unknown, so that $\half_{\mathrm{oracle}}$ cannot actually be implemented. Nevertheless, one often compares the performance of true estimators with $\half_{\mathrm{oracle}}$, whose MSE is given by \cite{candes07} \begin{equation} \label{eq:oracle mse} \sigma^2 \Tr((\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1}). \end{equation} Is \eqref{eq:oracle mse} a bound on estimation MSE\@? While $\half_{\mathrm{oracle}}$ is a reasonable technique to adopt if $\supp({\boldsymbol \alpha}_0)$ is known, this does not imply that \eqref{eq:oracle mse} is a lower bound on the performance of practical estimators. Indeed, as will be demonstrated in Section~\ref{se:numer}, when the SNR is low, both BPDN and the DS outperform $\half_{\mathrm{oracle}}$, thanks to the use of shrinkage in these estimators. Furthermore, if $\supp({\boldsymbol \alpha}_0)$ is known, then there exist biased techniques which are better than $\half_{\mathrm{oracle}}$ for \emph{all} values of ${\boldsymbol \alpha}_0$ \cite{ben-haim06}. Thus, $\half_{\mathrm{oracle}}$ is neither achievable in practice, nor optimal in terms of MSE\@. As we will see, one can indeed interpret \eqref{eq:oracle mse} as a lower bound on the achievable MSE, but such a result requires a certain restriction of the class of estimators under consideration. \section{The Constrained Cram\'er--Rao Bound} \label{se:crb} A common technique for determining the achievable performance in a given estimation problem is to calculate the CRB, which is a lower bound on the MSE of estimators having a given bias \cite{kay93}. In this paper, we are interested in calculating the CRB when it is known that the parameter $\mb{x}$ satisfies sparsity constraints such as those of the sets $\SS$ of \eqref{eq:def S} and ${\mathcal T}$ of \eqref{eq:def T}. The CRB for constrained parameter sets has been studied extensively in the past \cite{gorman90, marzetta93, StoicaNg98, ben-haim09}. However, in prior work, the derivation of the CRB assumed that the constraint set is given by \begin{equation} \label{eq:constr} {\mathcal X} = \{ \mb{x} \in {\mathbb R}^n: \mb{f}(\mb{x})=\mb{0}, \ \mb{g}(\mb{x})\le\mb{0} \} \end{equation} where $\mb{f}(\mb{x})$ and $\mb{g}(\mb{x})$ are continuously differentiable functions. We will refer to such ${\mathcal X}$ as continuously differentiable sets.
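As a simple example of such a set, the unit sphere in ${\mathbb R}^n$ is obtained from \eqref{eq:constr} by choosing $\mb{f}(\mb{x}) = \|\mb{x}\|_2^2 - 1$ (with no inequality constraints), in which case \begin{equation*} \pd{\mb{f}}{\mb{x}} = 2\mb{x}^T. \end{equation*}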
As shown in prior work \cite{gorman90}, the resulting bound depends on the derivatives of the function $\mb{f}$. Yet in some cases, including the sparse estimation scenarios discussed in Section~\ref{se:sparse backgnd}, the constraint set cannot be written in the form \eqref{eq:constr}, and the aforementioned results are therefore inapplicable. Our goal in the current section is to close this gap by extending the constrained CRB to constraint sets ${\mathcal X}$ encompassing the sparse estimation scenario. We begin this section with a general discussion of the CRB and the class of estimators to which it applies. This will lead us to interpret the constrained CRB as a bound on estimators having an incompletely specified bias gradient. This interpretation will facilitate the application of the existing constrained CRB to the present context. \begin{figure*} \centerline{\includegraphics{locally_balanced.eps}} \caption{In a locally balanced set such as a union of subspaces (a) and an open ball (b), each point is locally defined by a set of feasible directions along which an infinitesimal movement does not violate the constraints. The curve (c) is not characterized in this way and thus is not locally balanced.} \label{fi:loc bal} \end{figure*} \subsection{Bias Requirements in the Constrained CRB} \label{ss:bias req} In previous settings for which the constrained CRB was derived, it was noted that the resulting bound is typically lower than the unconstrained version \cite[Remark~4]{gorman90}. At first glance, one would attribute the reduction in the value of the CRB to the fact that the constraints add information about the unknown parameter, which can then improve estimation performance. On the other hand, the CRB separately characterizes the achievable performance for each value of the unknown parameter $\mb{x}_0$. Thus, the CRB at $\mb{x}_0$ applies even to estimators designed specifically to perform well at $\mb{x}_0$. Such estimators surely cannot achieve further gain in performance if it is known that $\mb{x}_0 \in {\mathcal X}$. Why, then, is the constrained CRB lower than the unconstrained bound? The answer to this apparent paradox involves a careful definition of the class of estimators to which the bound applies. To obtain a meaningful bound, one must exclude some estimators from consideration. Unless this is done, the bound will be tarnished by estimators of the type ${\widehat{\x}} = \x_\mathrm{u}$, for some constant $\x_\mathrm{u}$, which achieve an MSE of $0$ at the specific point $\mb{x} = \x_\mathrm{u}$. It is standard practice to circumvent this difficulty by restricting attention to estimators having a particular bias $\b(\mb{x}) \triangleq \E{{\widehat{\x}}} - \mb{x}$. In particular, it is common to examine unbiased estimators, for which $\b(\mb{x}) = \mb{0}$. However, in some settings, it is impossible to construct estimators which are unbiased for all $\mb{x} \in {\mathbb R}^n$. For example, suppose we are to estimate the coefficients ${\boldsymbol \alpha}_0$ of an overcomplete dictionary based on the measurements given by \eqref{eq:y=H alf + w}. Since the dictionary is overcomplete, its nullspace is nontrivial; furthermore, each coefficient vector in the nullspace yields an identical distribution of the measurements, so that an estimator can be unbiased for at most one of these vectors. The question is whether it is possible to construct estimators which are unbiased for some, but not all, values of $\mb{x}$.
One possible approach is to seek estimators which are unbiased for all $\mb{x} \in {\mathcal X}$. However, as we will see later in this section, even this requirement can be too strict: in some cases it is impossible to construct estimators which are unbiased for all $\mb{x} \in {\mathcal X}$. More generally, the CRB is a \emph{local} bound, meaning that it determines the achievable performance at a particular value of $\mb{x}$ based on the statistics at $\mb{x}$ and at nearby values. Thus, it is irrelevant to introduce requirements on estimation performance for parameters which are distant from the value $\mb{x}$ of interest. Since we seek a locally unbiased estimator, one possibility is to require unbiasedness at a single point, say $\x_\mathrm{u}$. As it turns out, it is always possible to construct such a technique: this is again ${\widehat{\x}} = \x_\mathrm{u}$, which is unbiased at $\x_\mathrm{u}$ but nowhere else. To avoid this loophole, one can require an estimator to be unbiased in the neighborhood \begin{equation} {{\mathcal B}_\eps(\x_0)} = \left\{ \mb{x} \in {\mathbb R}^n : \|\mb{x}-\mb{x}_0\|_2 < \varepsilon \right\} \end{equation} of $\mb{x}_0$, for some small $\varepsilon$. It follows that both the bias $\b(\mb{x})$ and the bias gradient \begin{equation} \label{eq:def B} \mb{B}(\mb{x}) \triangleq \pd{\b}{\mb{x}} \end{equation} vanish at $\mb{x} = \mb{x}_0$. This formulation is the basis of the unconstrained unbiased CRB, a lower bound on the covariance at $\mb{x}_0$ which applies to all estimators whose bias gradient is zero at $\mb{x}_0$. It turns out that even this requirement is too stringent in constrained settings. As we will see in Section~\ref{ss:sparse}, estimators of the coefficients of an overcomplete dictionary must have a nonzero bias gradient matrix. The reason is related to the fact that unbiasedness is required over the set ${{\mathcal B}_\eps(\x_0)}$, which, in the overcomplete setting, has a higher dimension than the number of measurements. However, it can be argued that one is not truly interested in the bias at all points in ${{\mathcal B}_\eps(\x_0)}$, since many of these points violate the constraint set ${\mathcal X}$. A reasonable compromise is to require unbiasedness over ${{\mathcal B}_\eps(\x_0)} \cap {\mathcal X}$, i.e., over the neighborhood of $\mb{x}_0$ restricted to the constraint set ${\mathcal X}$. This leads to a weaker requirement on the bias gradient $\mb{B}$ at $\mb{x}_0$. Specifically, the derivatives of the bias need only be specified in directions which do not violate the constraints. The exact formulation of this requirement depends on the nature of the set ${\mathcal X}$. In the following subsections, we will investigate various constraint sets and derive the corresponding requirements on the bias function. It is worth emphasizing that the dependence of the CRB on the constraints is manifested through the class of estimators being considered, or more specifically, through the allowed estimators' bias gradient matrices. By contrast, the unconstrained CRB applies to estimators having a fully specified bias gradient matrix. Consequently, the constrained bound applies to a wider class of estimators, and is thus usually lower than the unconstrained version of the CRB\@. In other words, estimators which are unbiased in the constrained setting, and thus applicable to the unbiased constrained CRB, are likely to be biased in the unconstrained context.
Since a wider class of estimators is considered by the constrained CRB, the resulting bound is lower, thus explaining the puzzling phenomenon described in the beginning of this subsection. \subsection{Locally Balanced Constraints} \label{ss:loc bal} We now consider a class of constraint sets, called locally balanced sets, which encompass the sparsity constraints of Section~\ref{se:sparse backgnd}. Roughly speaking, a locally balanced set is one which is locally defined at each point by the directions along which one can move without leaving the set. Formally, a metric space ${\mathcal X}$ is said to be locally balanced if, for all $\mb{x} \in {\mathcal X}$, there exists an open set ${\mathcal C} \subset {\mathcal X}$ such that $\mb{x} \in {\mathcal C}$ and such that, for all $\mb{x}' \in {\mathcal C}$ and for all $|\lambda| \le 1$, we have \begin{equation} \label{eq:def loc bal} \mb{x} + \lambda (\mb{x}' - \mb{x}) \in {\mathcal C}. \end{equation} As we will see, locally balanced sets are useful in the context of the constrained CRB, as they allow us to identify the feasible directions along which the bias gradient must be specified. An example of a locally balanced set is given in Fig.~\ref{fi:loc bal}(a), which represents a union of two subspaces. In Fig.~\ref{fi:loc bal}(a), for any point $\mb{x} \in {\mathcal X}$, and for any point $\mb{x}' \in {\mathcal X}$ sufficiently close to $\mb{x}$, the entire line segment between $\mb{x}$ and $\mb{x}'$, as well as the line segment in the opposite direction, are also in ${\mathcal X}$. This illustrates the fact that any union of subspaces is locally balanced, and, in particular, so are the sparse estimation settings of Section~\ref{se:sparse backgnd} \cite{eldar09, eldar09b, gedalyahu09}. As another example, consider any open set, such as the open ball in Fig.~\ref{fi:loc bal}(b). For such a set, any point $\mb{x}$ has a sufficiently small neighborhood ${\mathcal C}$ such that, for any $\mb{x}' \in {\mathcal C}$, the line segment connecting $\mb{x}$ to $\mb{x}'$ is contained in ${\mathcal X}$. On the other hand, the curve in Fig.~\ref{fi:loc bal}(c) is not locally balanced, since the line connecting $\mb{x}$ to any other point on the set does not lie within the set.\footnote{We note in passing that since the curve in Fig.~\ref{fi:loc bal}(c) is continuously differentiable, it can be locally approximated by a locally balanced set. Our derivation of the CRB can be extended to such approximately locally balanced sets in a manner similar to that of \cite{gorman90}, but such an extension is not necessary for the purposes of this paper.} Observe that the neighborhood of a point $\mb{x}$ in a locally balanced set ${\mathcal X}$ is entirely determined by the set of feasible directions $\v$ along which infinitesimal changes of $\mb{x}$ do not violate the constraints. These are the directions $\v = \mb{x}' - \mb{x}$ for all points $\mb{x}' \ne \mb{x}$ in the set ${\mathcal C}$ of \eqref{eq:def loc bal}. Recall that we seek a lower bound on the performance of estimators whose bias gradient is defined over the neighborhood of $\mb{x}_0$ restricted to the constraint set ${\mathcal X}$. Suppose for concreteness that we are interested in unbiased estimators. For a locally balanced constraint set ${\mathcal X}$, this implies that \begin{equation} \label{eq:pre T-unbias} \mb{B} \v = \mb{0} \end{equation} for any feasible direction $\v$. In other words, all feasible directions must be in the nullspace of $\mb{B}$. 
This is a weaker condition than requiring the bias gradient to equal zero, and is thus more useful for constrained estimation problems. If an estimator ${\widehat{\x}}$ satisfies \eqref{eq:pre T-unbias} for all feasible directions $\v$ at a certain point $\mb{x}_0$, we say that ${\widehat{\x}}$ is ${\mathcal X}$-unbiased at $\mb{x}_0$. This terminology emphasizes the fact that ${\mathcal X}$-unbiasedness depends both on the point $\mb{x}_0$ and on the constraint set ${\mathcal X}$. Consider the subspace $\mathcal F$ spanned by the feasible directions at a certain point $\mb{x} \in {\mathcal X}$. We refer to $\mathcal F$ as the feasible subspace at $\mb{x}$. Note that $\mathcal F$ may include infeasible directions, if these are linear combinations of feasible directions. Nevertheless, because of the linearity of \eqref{eq:pre T-unbias}, any vector $\u \in \mathcal F$ satisfies $\mb{B}\u = \mb{0}$, even if $\u$ is infeasible. Thus, ${\mathcal X}$-unbiasedness is actually a property of the feasible subspace $\mathcal F$, rather than the set of feasible directions. Since ${\mathcal X}$ is a subset of a finite-dimensional Euclidean space, $\mathcal F$ is also finite-dimensional, although different points in ${\mathcal X}$ may yield subspaces having differing dimensions. Let $\u_1, \ldots, \u_l$ denote an orthonormal basis for $\mathcal F$, and define the matrix \begin{equation} \label{eq:def U} \mb{U} = [ \u_1, \ldots, \u_l ]. \end{equation} Note that $\u_i$ and $\mb{U}$ are functions of $\mb{x}$. For a given $\mb{x}$, different orthonormal bases can be chosen, but the choice of a basis is arbitrary and will not affect our results. As we have seen, ${\mathcal X}$-unbiasedness at $\mb{x}_0$ can alternatively be written as $\mb{B}\u = \mb{0}$ for all $\u \in \mathcal F$, or, equivalently \begin{equation} \label{eq:T-unbias} \mb{B}\mb{U} = \mb{0}. \end{equation} The constrained CRB can now be derived as a lower bound on all ${\mathcal X}$-unbiased estimators, which is a weaker requirement than ``ordinary'' unbiasedness. Just as ${\mathcal X}$-unbiasedness was defined by requiring the bias gradient matrix to vanish when multiplied by any feasible direction vector, we can define ${\mathcal X}$-biased estimators by requiring a specific value (not necessarily zero) for the bias gradient matrix when multiplied by a feasible direction vector. In an analogy to \eqref{eq:T-unbias}, this implies that one must define a value for the matrix $\mb{B}\mb{U}$. Our goal is thus to construct a lower bound on the covariance at a given $\mb{x}$ achievable by any estimator whose bias gradient $\mb{B}$ at $\mb{x}$ satisfies $\mb{B}\mb{U} = \P$, for a given matrix $\P$. This is referred to as specifying the ${\mathcal X}$-bias of the estimator at $\mb{x}$. \subsection{The CRB for Locally Balanced Constraints} It is helpful at this point to compare our derivation with prior work on the constrained CRB, which considered continuously differentiable constraint sets of the form \eqref{eq:constr}. It has been previously shown \cite{gorman90} that inequality constraints of the type $\mb{g}(\mb{x}) \le \mb{0}$ have no effect on the CRB\@. Consequently, we will consider constraints of the form \begin{equation} \label{eq:eq constr} {\mathcal X} = \{ \mb{x} \in {\mathbb R}^n: \mb{f}(\mb{x})=\mb{0} \}. \end{equation} Define the $k \times n$ matrix $\mb{F}(\mb{x}) = \partial \mb{f} / \partial \mb{x}$. For simplicity of notation, we will omit the dependence of $\mb{F}$ on $\mb{x}$.
Assuming that the constraints are non-redundant, $\mb{F}$ is a full-rank matrix, and thus one can define an $n \times (n-k)$ matrix $\mb{W}$ (also dependent on $\mb{x}$) such that \begin{equation} \mb{F}\mb{W} = \mb{0}, \quad \mb{W}^T\mb{W} = \mb{I}. \end{equation} The matrix $\mb{W}$ is closely related to the matrix $\mb{U}$ spanning the feasible direction subspace of locally balanced sets. Indeed, the column space $\Ra{\mb{W}}$ of $\mb{W}$ is the tangent space of ${\mathcal X}$, i.e., the subspace of ${\mathbb R}^n$ containing all vectors which are tangent to ${\mathcal X}$ at the point $\mb{x}$. Thus, the vectors in $\Ra{\mb{W}}$ are precisely those directions along which infinitesimal motion from $\mb{x}$ does not violate the constraints, up to a first-order approximation. It follows that if a particular set ${\mathcal X}$ is both locally balanced and continuously differentiable, its matrices $\mb{U}$ and $\mb{W}$ coincide. Note, however, that there exist sets which are locally balanced but not continuously differentiable (and vice versa). With the above formulation, the CRB for continuously differentiable constraints can be stated as a function of the matrix $\mb{W}$ and the bias gradient $\mb{B}$ \cite{ben-haim09}. In fact, the resulting bound depends on $\mb{B}$ only through $\mb{B}\mb{W}$. This is to be expected in light of the discussion of Section~\ref{ss:bias req}: The bias should be specified only for those directions which do not violate the constraint set. Furthermore, the proof of the CRB in \cite[Theorem~1]{ben-haim09} depends not on the formulation \eqref{eq:eq constr} of the constraint set, but merely on the class of bias functions under consideration. Consequently, one can state the bound without any reference to the underlying constraint set. To do so, let $\mb{y}$ be a measurement vector with pdf $p(\mb{y};\mb{x})$, which is assumed to be differentiable with respect to $\mb{x}$. The Fisher information matrix (FIM) $\mb{J}(\mb{x})$ is defined as \begin{equation} \label{eq:def J} \mb{J}(\mb{x}) = \E{{\boldsymbol \Delta} {\boldsymbol \Delta}^T} \end{equation} where \begin{equation} \label{eq:def bD} {\boldsymbol \Delta} = \pd{\log p(\mb{y};\mb{x})}{\mb{x}}. \end{equation} We assume that the FIM is well-defined and finite. We further assume that integration with respect to $\mb{y}$ and differentiation with respect to $\mb{x}$ can be interchanged, a standard requirement for the CRB\@. We then have the following result. \begin{theorem} \label{th:crb} Let ${\widehat{\x}}$ be an estimator and let $\mb{B} = \partial \b / \partial \mb{x}$ denote the bias gradient matrix of ${\widehat{\x}}$ at a given point $\mb{x}_0$. Let $\mb{U}$ be an orthonormal matrix, and suppose that $\mb{B}\mb{U}$ is known, but that $\mb{B}$ is otherwise arbitrary. If \begin{equation} \label{eq:UUM in UUJUU} \Ra{\mb{U}(\mb{U}+\mb{B}\mb{U})^T} \subseteq \Ra{{\U\U^T\J\U\U^T}} \end{equation} then the covariance of ${\widehat{\x}}$ at $\mb{x}_0$ satisfies \begin{equation} \label{eq:th:crb} \Cov({\widehat{\x}}) \succeq (\mb{U}+\mb{B}\mb{U}) \left(\mb{U}^T\mb{J}\mb{U} \right)^\dagger (\mb{U}+\mb{B}\mb{U})^T. \end{equation} Equality is achieved in \eqref{eq:th:crb} if and only if \begin{equation} \label{eq:th:crb eq cond} {\widehat{\x}} = \mb{x}_0 + \b(\mb{x}_0) + (\mb{U}+\mb{B}\mb{U}) \left( \mb{U}^T\mb{J}\mb{U} \right)^\dagger \mb{U}^T {\boldsymbol \Delta} \end{equation} in the mean square sense, where ${\boldsymbol \Delta}$ is defined by \eqref{eq:def bD}.
Conversely, if \eqref{eq:UUM in UUJUU} does not hold, then there exists no finite-variance estimator with the required bias gradient. \end{theorem} As required, no mention of constrained estimation is made in Theorem~\ref{th:crb}; instead, partial information about the bias gradient is assumed. Apart from this restatement, the theorem is identical to \cite[Theorem~1]{ben-haim09}, and its proof is unchanged. However, the above formulation is more general in that it can be applied to any constrained setting, once the constraints have been translated to bias gradient requirements. In particular, Theorem~\ref{th:crb} provides a CRB for locally balanced sets if the matrix $\mb{U}$ is chosen as a basis for the feasible direction subspace of Section~\ref{ss:loc bal}. \section{Bounds on Sparse Estimation} \label{se:sparse bounds} In this section, we apply the CRB of Theorem~\ref{th:crb} to several sparse estimation scenarios. We begin with an analysis of the problem of estimating a sparse parameter vector. \subsection{Estimating a Sparse Vector} \label{ss:sparse} Suppose we would like to estimate a parameter vector ${\boldsymbol \alpha}_0$, known to belong to the set ${\mathcal T}$ of \eqref{eq:def T}, from measurements $\mb{y}$ given by \eqref{eq:y=H alf + w}. To determine the CRB in this setting, we begin by identifying the feasible subspaces $\mathcal F$ corresponding to each of the elements in ${\mathcal T}$. To this end, consider first vectors ${\boldsymbol \alpha} \in {\mathcal T}$ for which $\|{\boldsymbol \alpha}\|_0 = s$, i.e., vectors having maximal support. Denote by $\{ i_1, \ldots, i_s \}$ the support set of ${\boldsymbol \alpha}$. Then, for all $\delta$, we have \begin{equation} \|{\boldsymbol \alpha} + \delta \mb{e}_{i_k}\|_0 = \|{\boldsymbol \alpha}\|_0 = s, \quad k=1,\ldots,s \end{equation} where $\mb{e}_j$ is the $j$th column of the identity matrix. Thus ${\boldsymbol \alpha} + \delta \mb{e}_{i_k} \in {\mathcal T}$, and consequently, the vectors $\{ \mb{e}_{i_1}, \ldots, \mb{e}_{i_s} \}$ are all feasible directions, as is any linear combination of these vectors. On the other hand, for any $j \notin \supp({\boldsymbol \alpha})$ and for any nonzero $\delta$, we have $\|{\boldsymbol \alpha} + \delta \mb{e}_j\|_0 = s+1$, and thus $\mb{e}_j$ is not a feasible direction; neither is any other vector which is not in $\spn\{\mb{e}_{i_1}, \ldots, \mb{e}_{i_s}\}$. It follows that the feasible subspace $\mathcal F$ for points having maximal support is given by $\spn\{\mb{e}_{i_1}, \ldots, \mb{e}_{i_s}\}$, and a possible choice for the matrix $\mb{U}$ of \eqref{eq:def U} is \begin{equation} \label{eq:U when =s} \mb{U} = [ \mb{e}_{i_1}, \ldots, \mb{e}_{i_s} ] \quad \text{for } \|{\boldsymbol \alpha}\|_0 = s. \end{equation} The situation is different for points ${\boldsymbol \alpha}$ having $\|{\boldsymbol \alpha}\|_0 < s$. In this case, vectors $\mb{e}_i$ corresponding to \emph{any} direction $i$ are feasible directions, since \begin{equation} \|{\boldsymbol \alpha} + \delta \mb{e}_i\|_0 \le \|{\boldsymbol \alpha}\|_0 + 1 \le s. \end{equation} Because the feasible subspace is defined as the span of all feasible directions, we have \begin{equation} \mathcal F \supseteq \spn\{ \mb{e}_1, \ldots, \mb{e}_p \} = {\mathbb R}^p. \end{equation} It follows that $\mathcal F = {\mathbb R}^p$ and thus a convenient choice for the matrix $\mb{U}$ is \begin{equation} \label{eq:U when <s} \mb{U} = \mb{I} \quad \text{for } \|{\boldsymbol \alpha}\|_0 < s. 
\end{equation} Consequently, whenever $\|{\boldsymbol \alpha}\|_0 < s$, a specification of the ${\mathcal T}$-bias amounts to completely specifying the usual estimation bias $\b(\mb{x})$. To invoke Theorem~\ref{th:crb}, we must also determine the FIM $\mb{J}({\boldsymbol \alpha})$. Under our assumption of white Gaussian noise, $\mb{J}({\boldsymbol \alpha})$ is given by \cite[p.~85]{kay93} \begin{equation} \label{eq:half J} \mb{J}({\boldsymbol \alpha}) = \frac{1}{\sigma^2} \H^T\H. \end{equation} Using \eqref{eq:U when =s}, \eqref{eq:U when <s}, and \eqref{eq:half J}, it is readily shown that \begin{equation} \label{eq:half UJU} \mb{U}^T\mb{J}\mb{U} = \begin{cases} \frac{1}{\sigma^2} \H_{\boldsymbol \alpha}^T \H_{\boldsymbol \alpha} & \text{when } \|{\boldsymbol \alpha}\|_0 = s \\ \frac{1}{\sigma^2} \H^T \H & \text{when } \|{\boldsymbol \alpha}\|_0 < s \end{cases} \end{equation} where $\H_{\boldsymbol \alpha}$ is the matrix consisting of the $s$ columns of $\H$ indexed by $\supp({\boldsymbol \alpha})$. We now wish to determine under what conditions \eqref{eq:UUM in UUJUU} holds. Consider first points ${\boldsymbol \alpha}_0$ for which $\|{\boldsymbol \alpha}_0\|_0 = s$. Since, by \eqref{eq:spark req}, we have $\spark(\H)>s$, it follows that in this case $\mb{U}^T\mb{J}\mb{U}$ is invertible. Therefore \begin{equation} \Ra{{\U\U^T\J\U\U^T}} = \Ra{\mb{U}\U^T}. \end{equation} Since \begin{equation} \Ra{\mb{U}\U^T(\mb{I}+\mb{B}^T)} \subseteq \Ra{\mb{U}\U^T} \end{equation} we have that condition \eqref{eq:UUM in UUJUU} holds when $\|{\boldsymbol \alpha}_0\|_0=s$. The condition \eqref{eq:UUM in UUJUU} is no longer guaranteed when $\|{\boldsymbol \alpha}_0\|_0 < s$. In this case, $\mb{U}=\mb{I}$, so that \eqref{eq:UUM in UUJUU} is equivalent to \begin{equation} \label{eq:I+B in HH} \Ra{\mb{I}+\mb{B}^T} \subseteq \Ra{\H^T\H}. \end{equation} Using the fact that $\Ra{\H^T\H} = \Ra{\H^T}$ and that, for any matrix $\mb{Q}$, $\Ra{\mb{Q}^T} = \Nu{\mb{Q}}^\perp$, we find that \eqref{eq:I+B in HH} is equivalent to \begin{equation} \label{eq:N(H) in N(I+B)} \Nu{\H} \subseteq \Nu{\mb{I}+\mb{B}}. \end{equation} Combining these conclusions with Theorem~\ref{th:crb} yields the following CRB for the problem of estimating a sparse vector. \begin{theorem} \label{th:alf} Consider the estimation problem \eqref{eq:y=H alf + w} with ${\boldsymbol \alpha}_0$ given by \eqref{eq:def T}, and assume that \eqref{eq:spark req} holds. For a finite-variance estimator ${\widehat{\alf}}$ of ${\boldsymbol \alpha}_0$ to exist, its bias gradient matrix $\mb{B}$ must satisfy \eqref{eq:N(H) in N(I+B)} whenever $\|{\boldsymbol \alpha}_0\|_0 < s$. Furthermore, the covariance of any estimator whose ${\mathcal T}$-bias gradient matrix is $\mb{B}\mb{U}$ satisfies \begin{align} \label{eq:th:alf} \Cov({\widehat{\alf}}) &\succeq \sigma^2 (\mb{I}+\mb{B}) (\H^T\H)^\dagger (\mb{I}+\mb{B}^T) \notag\\ &\hspace{11em} \text{ when } \|{\boldsymbol \alpha}_0\|_0 < s, \notag\\ \Cov({\widehat{\alf}}) &\succeq \sigma^2 (\mb{U}+\mb{B}\mb{U}) (\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1} (\mb{U}+\mb{B}\mb{U})^T \notag\\ &\hspace{11em} \text{ when } \|{\boldsymbol \alpha}_0\|_0 = s. \end{align} Here, $\H_{{\boldsymbol \alpha}_0}$ is the matrix containing the columns of $\H$ corresponding to $\supp({\boldsymbol \alpha}_0)$. \end{theorem} Let us examine Theorem~\ref{th:alf} separately in the underdetermined and well-determined cases.
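Before doing so, we note that the bound \eqref{eq:th:alf} is straightforward to evaluate numerically. The following Python sketch does so for the ${\mathcal T}$-unbiased case $\mb{B}\mb{U}=\mb{0}$; it assumes NumPy, and the helper name \texttt{crb\_sparse\_unbiased} is ours rather than part of any library.
\begin{verbatim}
import numpy as np

def crb_sparse_unbiased(H, sigma, alpha0, s):
    # Covariance lower bound of Theorem th:alf for T-unbiased
    # estimators (B = 0); assumes spark(H) > 2s.
    support = np.flatnonzero(alpha0)
    if support.size == s:                    # maximal support
        U = np.eye(H.shape[1])[:, support]   # U = [e_{i_1}, ..., e_{i_s}]
        Hs = H[:, support]
        return sigma**2 * U @ np.linalg.inv(Hs.T @ Hs) @ U.T
    # Non-maximal support: U = I and the bound is the unconstrained one;
    # finite-variance T-unbiased estimators then exist only if H has
    # full column rank, as discussed below.
    return sigma**2 * np.linalg.pinv(H.T @ H)
\end{verbatim}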
In the well-determined case, in which $\H$ has full column rank, the nullspace of $\H$ is trivial, so that \eqref{eq:N(H) in N(I+B)} always holds. It follows that the CRB is always finite, in the sense that we cannot rule out the existence of an estimator having any given bias function. Some insight can be obtained by examining the ${\mathcal T}$-unbiased case. Noting also that $\H^T\H$ is invertible in the well-determined case, the bound for ${\mathcal T}$-unbiased estimators is given by \begin{align} \label{eq:alf well-det unbiased} \Cov({\widehat{\alf}}) &\succeq \sigma^2 (\H^T\H)^{-1} &\text{ when } \|{\boldsymbol \alpha}_0\|_0 &< s, \notag\\ \Cov({\widehat{\alf}}) &\succeq \sigma^2 \mb{U} (\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1} \mb{U}^T &\text{ when } \|{\boldsymbol \alpha}_0\|_0 &= s. \end{align} From this formulation, the behavior of the CRB can be described as follows. When ${\boldsymbol \alpha}_0$ has non-maximal support ($\|{\boldsymbol \alpha}_0\|_0 < s$), the CRB is identical to the bound which would have been obtained had there been no constraints in the problem. This is because $\mb{U}=\mb{I}$ in this case, so that ${\mathcal T}$-unbiasedness and ordinary unbiasedness are equivalent. As we have seen in Section~\ref{ss:bias req}, the CRB is a function of the class of estimators under consideration, so the unconstrained and constrained bounds are equivalent in this situation. The bound $\sigma^2 (\H^T\H)^{-1}$ is achieved by the unconstrained LS estimator \begin{equation} {\widehat{\alf}} = (\H^T\H)^{-1}\H^T\mb{y} \end{equation} which is the minimum variance unbiased estimator in the unconstrained case. Thus, we learn from Theorem~\ref{th:alf} that for values of ${\boldsymbol \alpha}_0$ having non-maximal support, no ${\mathcal T}$-unbiased technique can outperform the standard LS estimator, which does not assume any knowledge about the constraint set ${\mathcal T}$. On the other hand, consider the case in which ${\boldsymbol \alpha}_0$ has maximal support, i.e., $\|{\boldsymbol \alpha}_0\|_0 = s$. Suppose first that $\supp({\boldsymbol \alpha}_0)$ is known, so that one must estimate only the nonzero values of ${\boldsymbol \alpha}_0$. In this case, a reasonable approach is to use the oracle estimator \eqref{eq:def xo}, whose covariance matrix is given by $\sigma^2 \mb{U} (\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1} \mb{U}^T$ \cite{candes07}. Thus, when ${\boldsymbol \alpha}_0$ has maximal support, Theorem~\ref{th:alf} states that ${\mathcal T}$-unbiased estimators can perform, at best, as well as the oracle estimator, which is equivalent to the LS approach when the support of ${\boldsymbol \alpha}_0$ is known. The situation is similar, but somewhat more involved, in the underdetermined case. Here, the condition \eqref{eq:N(H) in N(I+B)} for the existence of an estimator having a given bias gradient matrix no longer automatically holds. To interpret this condition, it is helpful to introduce the mean gradient matrix $\mb{M}({\boldsymbol \alpha})$, defined as \begin{equation} \mb{M}({\boldsymbol \alpha}) = \pd{\E{{\widehat{\alf}}}}{{\boldsymbol \alpha}} = \mb{I} + \mb{B}. \end{equation} The matrix $\mb{M}({\boldsymbol \alpha})$ is a measure of the sensitivity of an estimator to changes in the parameter vector. For example, a ${\mathcal T}$-unbiased estimator is sensitive to any \emph{feasible} change in ${\boldsymbol \alpha}$. Thus, $\Nu{\mb{M}}$ denotes the subspace of directions to which ${\widehat{\alf}}$ is insensitive.
Likewise, $\Nu{\H}$ is the subspace of directions for which a change in ${\boldsymbol \alpha}$ does not modify $\H{\boldsymbol \alpha}$. The condition \eqref{eq:N(H) in N(I+B)} therefore states that for an estimator to exist, it must be insensitive to changes in ${\boldsymbol \alpha}$ which are unobservable through $\H{\boldsymbol \alpha}$, at least when $\|{\boldsymbol \alpha}\|_0 < s$. No such requirement is imposed in the case $\|{\boldsymbol \alpha}\|_0 = s$, since in this case there are far fewer feasible directions. The lower bound \eqref{eq:th:alf} is similarly a consequence of the wide range of feasible directions obtained when $\|{\boldsymbol \alpha}\|_0 < s$, as opposed to the tight constraints when $\|{\boldsymbol \alpha}\|_0 = s$. Specifically, when $\|{\boldsymbol \alpha}\|_0 < s$, a change to any component of ${\boldsymbol \alpha}$ is feasible and hence the lower bound equals that of an unconstrained estimation problem, with the FIM given by $\sigma^{-2} \H^T \H$. On the other hand, when $\|{\boldsymbol \alpha}\|_0 = s$, the bound is effectively that of an estimator with knowledge of the particular subspace to which ${\boldsymbol \alpha}$ belongs; for this subspace the FIM is the submatrix $\mb{U}^T\mb{J}\mb{U}$ given in \eqref{eq:half UJU}. This phenomenon is discussed further in Section~\ref{se:discuss}. Another difference between the well-determined and underdetermined cases is that when $\H$ is underdetermined, an estimator cannot be ${\mathcal T}$-unbiased for all ${\boldsymbol \alpha}$. To see this, recall from \eqref{eq:T-unbias} that ${\mathcal T}$-unbiased estimators are defined by the fact that $\mb{B}\mb{U}=\mb{0}$. When $\|{\boldsymbol \alpha}\|_0 < s$, we have $\mb{U}=\mb{I}$ and thus ${\mathcal T}$-unbiasedness implies $\mb{B}=\mb{0}$, so that $\Nu{\mb{I}+\mb{B}} = \{ \mb{0} \}$. But since $\H$ is underdetermined, $\Nu{\H}$ is nontrivial. Consequently, \eqref{eq:N(H) in N(I+B)} cannot hold for ${\mathcal T}$-unbiased estimators when $\|{\boldsymbol \alpha}\|_0 < s$. The lack of ${\mathcal T}$-unbiased estimators when $\|{\boldsymbol \alpha}_0\|_0 < s$ is a direct consequence of the fact that the feasible direction set at such ${\boldsymbol \alpha}_0$ contains all of the directions $\mb{e}_1, \ldots, \mb{e}_p$. The conclusion from Theorem~\ref{th:alf} is then that no estimator can be expected to be unbiased in such a high-dimensional neighborhood, just as unbiased estimation is impossible in the $p$-dimensional neighborhood ${{\mathcal B}_\eps({\boldsymbol \alpha}_0)}$, as explained in Section~\ref{ss:bias req}. However, it is still possible to obtain a finite CRB in this setting by further restricting the constraint set: if it is known that $\|{\boldsymbol \alpha}_0\|_0 = \tilde{s} < s$, then one can redefine ${\mathcal T}$ in \eqref{eq:def T} by replacing $s$ with $\tilde{s}$. This will enlarge the class of estimators considered ${\mathcal T}$-unbiased, and Theorem~\ref{th:alf} would then provide a finite lower bound on those estimators. Such estimators will not, however, be unbiased in the sense implied by the original constraint set. While an estimator cannot be unbiased for \emph{all} ${\boldsymbol \alpha} \in {\mathcal T}$, unbiasedness is possible at points ${\boldsymbol \alpha}$ for which $\|{\boldsymbol \alpha}\|_0 = s$. In this case, Theorem~\ref{th:alf} produces a bound on the MSE of a ${\mathcal T}$-unbiased estimator, obtained by calculating the trace of \eqref{eq:th:alf} in the case $\mb{B}\mb{U}=\mb{0}$. 
This bound is given by \begin{equation} \label{eq:crb T-unbiased} \E{\|{\widehat{\alf}} - {\boldsymbol \alpha}_0\|_2^2} \ge \sigma^2 \Tr((\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1}), \quad \|{\boldsymbol \alpha}_0\|_0 = s. \end{equation} The most striking feature of \eqref{eq:crb T-unbiased} is that it is identical to the oracle MSE \eqref{eq:oracle mse}. However, the CRB is of additional importance because the ML estimator achieves the CRB in the limit when a large number of independent measurements are available, a situation which is equivalent in our setting to the limit $\sigma \rightarrow 0$. In other words, an MSE of \eqref{eq:crb T-unbiased} is achieved at high SNR by the ML approach \eqref{eq:ml}, as we will illustrate numerically in Section~\ref{se:numer}. While the ML approach is computationally intractable in the sparse estimation setting, it is still implementable in principle, as opposed to ${\widehat{\alf}}_{\mathrm{oracle}}$, which relies on unavailable information (namely, the support set of ${\boldsymbol \alpha}_0$). Thus, Theorem~\ref{th:crb} gives an alternative interpretation to comparisons of estimator performance with the oracle. Observe that the bound \eqref{eq:crb T-unbiased} depends on the value of ${\boldsymbol \alpha}_0$ (through its support set, which defines $\H_{{\boldsymbol \alpha}_0}$). This implies that some values of ${\boldsymbol \alpha}_0$ are more difficult to estimate than others. For example, suppose the $\ell_2$ norms of some of the columns of $\H$ are significantly larger than those of the remaining columns. Measurements of a parameter ${\boldsymbol \alpha}_0$ whose support corresponds to the large-norm columns of $\H$ will then have a much higher SNR than measurements of a parameter corresponding to small-norm columns, and this will clearly affect the accuracy with which ${\boldsymbol \alpha}_0$ can be estimated. To analyze the behavior beyond this effect, it is common to consider the situation in which the columns $\mb{h}_i$ of $\H$ are normalized so that $\|\mb{h}_i\|_2 = 1$. In this case, for sufficiently incoherent dictionaries, $\Tr((\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1})$ is bounded above and below by a small constant times $s$, so that the CRB is similar for all values of ${\boldsymbol \alpha}_0$. To see this, let $\mu$ be the coherence of $\H$ \cite{tropp06}, defined (for $\H$ having normalized columns) as \begin{equation} \mu \triangleq \max_{i \ne j} \left| \mb{h}_i^T \mb{h}_j \right| . \end{equation} By the Gershgorin disc theorem, the eigenvalues of $\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0}$ are in the range $[1 - s\mu, 1 + s\mu]$. It follows that the unbiased CRB \eqref{eq:crb T-unbiased} is bounded above and below by \begin{equation} \frac{s\sigma^2}{1+s\mu} \le \sigma^2 \Tr((\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1}) \le \frac{s\sigma^2}{1-s\mu}. \end{equation} Thus, when $s$ is somewhat smaller than $1/\mu$, the CRB is roughly equal to $s \sigma^2$ for all values of ${\boldsymbol \alpha}_0$. As we have seen in Section~\ref{ss:est techniques}, for sufficiently small $s$, the worst-case MSE of practical estimators, such as BPDN and the DS, is $O(s \sigma^2 \log p)$. Thus, practical estimators come almost within a constant of the unbiased CRB, implying that they are close to optimal for all values of ${\boldsymbol \alpha}_0$, at least when compared with unbiased techniques.
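The coherence computation and the two bounds above are simple to verify numerically. A minimal sketch, assuming NumPy and a dictionary with unit-norm columns (the function names are ours):
\begin{verbatim}
import numpy as np

def coherence(H):
    # Mutual coherence: largest off-diagonal entry of |H^T H|.
    G = np.abs(H.T @ H)
    np.fill_diagonal(G, 0.0)
    return G.max()

def unbiased_crb_bounds(H, sigma, support):
    # Exact unbiased CRB (eq. crb T-unbiased) and its coherence-based
    # Gershgorin bounds; the upper bound is meaningful only if s*mu < 1.
    Hs = H[:, support]
    crb = sigma**2 * np.trace(np.linalg.inv(Hs.T @ Hs))
    s, mu = len(support), coherence(H)
    lower = s * sigma**2 / (1 + s * mu)
    upper = s * sigma**2 / (1 - s * mu) if s * mu < 1 else np.inf
    return crb, lower, upper
\end{verbatim}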
\subsection{Denoising and Deblurring} \label{ss:deblur} We next consider the problem \eqref{eq:y=Ax+w}, in which it is required to estimate not the sparse vector ${\boldsymbol \alpha}_0$ itself, but rather the vector $\mb{x}_0 = \mb{D} {\boldsymbol \alpha}_0$, where $\mb{D}$ is a known dictionary matrix. Thus, $\mb{x}_0$ belongs to the set $\SS$ of \eqref{eq:def S}. We assume for concreteness that $\mb{D}$ has full row rank and that $\mb{A}$ has full column rank. This setting encompasses the denoising and deblurring problems described in Section~\ref{ss:sparse setting}, with the former arising when $\mb{A}=\mb{I}$ and the latter obtained when $\mb{A}$ represents a blurring kernel. Similar calculations can be carried out when $\mb{A}$ is rank-deficient, a situation which occurs, for example, in some interpolation problems. Recall from Section~\ref{ss:sparse setting} the assumption that every $\mb{x} \in \SS$ has a \emph{unique} representation $\mb{x} = \mb{D}{\boldsymbol \alpha}$ for which ${\boldsymbol \alpha}$ is in the set ${\mathcal T}$ of \eqref{eq:def T}. We denote by $\r(\cdot)$ the mapping from $\SS$ to ${\mathcal T}$ which returns this representation. In other words, $\r(\mb{x})$ is the unique vector in ${\mathcal T}$ for which \begin{equation} \mb{x} = \mb{D} \r(\mb{x}) \quad \text{and} \quad \|\r(\mb{x})\|_0 \le s. \end{equation} Note that while the mapping $\r$ is well-defined, actually calculating the value of $\r(\mb{x})$ for a given vector $\mb{x}$ is, in general, NP-hard. In the current setting, unlike the scenario of Section~\ref{ss:sparse}, it is always possible to construct an unbiased estimator. Indeed, even without imposing the constraint \eqref{eq:def S}, there exists an unbiased estimator. This is the LS or maximum likelihood estimator, given by \begin{equation} {\widehat{\x}} = (\mb{A}^T\mb{A})^{-1} \mb{A}^T \mb{y}. \end{equation} A standard calculation demonstrates that the covariance of ${\widehat{\x}}$ is \begin{equation} \label{eq:LS cov} \sigma^2 (\mb{A}^T\mb{A})^{-1}. \end{equation} On the other hand, the FIM for the setting \eqref{eq:y=Ax+w} is given by \begin{equation} \label{eq:deb J} \mb{J} = \frac{1}{\sigma^2} \mb{A}^T\mb{A}. \end{equation} Since $\mb{A}$ has full column rank, the FIM is invertible. Consequently, it is seen from \eqref{eq:LS cov} and \eqref{eq:deb J} that the LS approach achieves the CRB $\mb{J}^{-1}$ for unbiased estimators. This well-known property demonstrates that in the unconstrained setting, the LS technique is optimal among all unbiased estimators. The LS estimator, like any unbiased approach, is also $\SS$-unbiased. However, with the addition of the constraint $\mb{x}_0 \in \SS$, one would expect to obtain improved performance. It is therefore of interest to obtain the CRB for the constrained setting. To this end, we first note that since $\mb{J}$ is invertible, we have $\Ra{{\U\U^T\J\U\U^T}} = \Ra{\mb{U}\U^T}$ for any $\mb{U}$, and consequently \eqref{eq:UUM in UUJUU} holds for any matrix $\mb{B}$. The bound \eqref{eq:th:crb} of Theorem~\ref{th:crb} thus applies regardless of the bias gradient matrix. For simplicity, in the following we derive the CRB for $\SS$-unbiased estimators. A calculation for arbitrary $\SS$-bias functions can be performed along similar lines. Consider first values $\mb{x} \in \SS$ such that $\|\r(\mb{x})\|_0 < s$. Then, $\|\r(\mb{x}) + \delta \mb{e}_i\|_0 \le s$ for any $\delta$ and for any $\mb{e}_i$.
Therefore, \begin{equation} \mb{x} + \delta \mb{D} \mb{e}_i \in \SS \end{equation} for any $\delta$ and $\mb{e}_i$. In other words, the feasible directions include all columns of $\mb{D}$. Since it is assumed that $\mb{D}$ has full row rank, this implies that the feasible subspace $\mathcal F$ equals ${\mathbb R}^n$, and the matrix $\mb{U}$ of \eqref{eq:def U} can be chosen as $\mb{U}=\mb{I}$. \begin{figure*} \centerline{% \subfigure[]{% \includegraphics{plot_ssp_snr.eps} \label{fi:snr}} \hfil % \subfigure[]{% \includegraphics{plot_ssp.eps} \label{fi:spar}} } \caption{MSE of various estimators compared with the unbiased CRB \eqref{eq:crb T-unbiased}, for (a) varying SNR and (b) varying sparsity levels.} \label{fi:sim} \end{figure*} Next, consider values $\mb{x} \in \SS$ for which $\|\r(\mb{x})\|_0 = s$. Then, for sufficiently small $\delta>0$, we have $\|\r(\mb{x}) + \delta \v\|_0 \le s$ if and only if $\v = \mb{e}_i$ for some $i \in \supp(\r(\mb{x}))$. Equivalently, \begin{equation} \mb{x} + \delta \v \in \SS \text{ if and only if } \v = \mb{D}\mb{e}_i \text{ and } i \in \supp(\r(\mb{x})). \end{equation} Consequently, the feasible direction subspace in this case corresponds to the column space of the matrix $\mb{D}_\mb{x}$ containing the $s$ columns of $\mb{D}$ indexed by $\supp(\r(\mb{x}))$. From \eqref{eq:spark req D} we have $\spark(\mb{D})>s$, and therefore the columns of $\mb{D}_\mb{x}$ are linearly independent. Thus the orthogonal projector onto $\mathcal F$ is given by \begin{equation} \label{eq:def P} \P \triangleq \mb{U}\U^T = \mb{D}_\mb{x} (\mb{D}_\mb{x}^T \mb{D}_\mb{x})^{-1} \mb{D}_\mb{x}^T. \end{equation} Combining these calculations with Theorem~\ref{th:crb} yields the following result. \begin{theorem} \label{th:deblur} Consider the estimation setting \eqref{eq:y=Ax+w} with the constraint \eqref{eq:def S}, and suppose $\spark(\mb{D}) > 2s$. Let ${\widehat{\x}}$ be a finite-variance $\SS$-unbiased estimator. Then, \begin{align} \Cov({\widehat{\x}}) &\succeq \sigma^2 (\mb{A}^T \mb{A})^{-1} &\text{when } \|\r(\mb{x})\|_0 < s, \notag\\ \Cov({\widehat{\x}}) &\succeq \sigma^2 \left( \P \mb{A}^T\mb{A} \P \right)^\dagger &\text{when } \|\r(\mb{x})\|_0 = s. \label{eq:th:deblur} \end{align} Here, $\P$ is given by \eqref{eq:def P}, in which $\mb{D}_\mb{x}$ is the $n \times s$ matrix consisting of the columns of $\mb{D}$ participating in the (unique) $s$-element representation $\mb{D}{\boldsymbol \alpha}$ of $\mb{x}$. \end{theorem} As in Theorem~\ref{th:alf}, the bound exhibits a dichotomy between points having maximal and non-maximal support. In the former case, the CRB is equivalent to the bound obtained when the support set is known, whereas in the latter the bound is equivalent to an unconstrained CRB. This point is discussed further in Section~\ref{se:discuss}. \section{Numerical Results} \label{se:numer} In this section, we demonstrate the use of the CRB for measuring the achievable MSE in the sparse estimation problem \eqref{eq:y=H alf + w}. To this end, a series of simulations was performed. In each simulation, a random $100 \times 200$ dictionary $\H$ was constructed from a zero-mean Gaussian IID distribution, whose columns $\mb{h}_i$ were normalized so that $\|\mb{h}_i\|_2=1$. A parameter ${\boldsymbol \alpha}_0$ was then selected by choosing a support uniformly at random and selecting the nonzero elements as Gaussian IID variables with mean $0$ and variance $1$. 
Noisy measurements $\mb{y}$ were obtained from \eqref{eq:y=H alf + w}, and ${\boldsymbol \alpha}_0$ was then estimated using BPDN \eqref{eq:bpdn}, the DS \eqref{eq:ds}, and the GDS \eqref{eq:gds}. The regularization parameters were chosen as $\tau = 2\sigma \sqrt{\log p}$ and $\gamma = 4\sigma \sqrt{\log(p-s)}$, rules of thumb which are motivated by a theoretical analysis \cite{ben-haim09c}. The MSE of each estimate was then calculated by repeating this process with different realizations of the random variables. The unbiased CRB was calculated using \eqref{eq:crb T-unbiased}. In this case, the unbiased CRB equals the MSE of the oracle estimator \eqref{eq:def xo}, but as we will see below, interpreting \eqref{eq:crb T-unbiased} as a bound on unbiased estimators provides further insight into the estimation problem. A first set of experiments was conducted to examine the CRB at various SNR levels. In this simulation, the ML estimator \eqref{eq:ml} was also computed, in order to verify its convergence to the CRB at high SNR\@. Since the ML approach is computationally prohibitive when $p$ and $s$ are large, this necessitated the selection of the rather low support size $s=3$. The MSE and CRB were calculated for 15 SNR values by changing the noise standard deviation $\sigma$ between $1$ and $10^{-3}$. The MSE of the ML approach, as well as the other estimators of Section~\ref{ss:est techniques}, is compared with the CRB in Fig.~\ref{fi:snr}. The convergence of the ML estimator to the CRB is clearly visible in this figure. The performance of the GDS is also impressive, being as good as or better than the ML approach. Apparently, at high SNR, the DS tends to correctly recover the true support set, in which case the GDS \eqref{eq:gds} equals the oracle \eqref{eq:def xo}. Perhaps surprisingly, applying an LS estimate on the support set obtained by BPDN (which could be called a ``Gauss--BPDN'' strategy) does not work well at all, and in fact results in higher MSE than a direct application of BPDN. (The results for the Gauss--BPDN method are not plotted in Fig.~\ref{fi:sim}.) Note that some estimation techniques outperform the oracle MSE (or CRB) at low SNR\@. It may appear surprising that a practical technique such as the DS outperforms the oracle. The explanation for this stems from the fact that the CRB \eqref{eq:crb T-unbiased} is a lower bound on the MSE of \emph{unbiased} estimators. The bias of most estimators tends to be negligible in low-noise settings, but often increases with the noise variance $\sigma^2$. Indeed, when $\sigma^2$ is as large as $\|{\boldsymbol \alpha}_0\|_2^2$, the measurements carry very little useful information about ${\boldsymbol \alpha}_0$, and an estimator can improve performance by shrinkage. Such a strategy, while clearly biased, yields lower MSE than a naive reliance on the noisy measurements. This is indeed the behavior of the DS and BPDN, since for large $\sigma^2$, the $\ell_1$ regularization becomes the dominant term, resulting in heavy shrinkage. Consequently, it is to be expected that such techniques will outperform even the best unbiased estimator at low SNR, as indeed occurs in Fig.~\ref{fi:snr}. The performance of the estimators of Section~\ref{ss:est techniques}, excluding the ML method, was also compared for varying sparsity levels. To this end, the simulation was repeated for 15 support sizes in the range $1 \le s \le 30$, with a constant noise standard deviation of $\sigma = 0.01$. The results are plotted in Fig.~\ref{fi:spar}.
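For concreteness, the core of this simulation setup can be reproduced with the following sketch (assuming NumPy; the BPDN, DS, and GDS solvers are omitted, and the random seed is arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)          # arbitrary seed
n, p, s, sigma = 100, 200, 3, 0.01

# Gaussian IID dictionary with unit-norm columns.
H = rng.standard_normal((n, p))
H /= np.linalg.norm(H, axis=0)

# Support chosen uniformly at random; N(0,1) nonzero entries.
support = rng.choice(p, size=s, replace=False)
alpha0 = np.zeros(p)
alpha0[support] = rng.standard_normal(s)

# Noisy measurements y = H alpha0 + w.
y = H @ alpha0 + sigma * rng.standard_normal(n)

# Unbiased CRB (eq. crb T-unbiased), equal to the oracle MSE.
Hs = H[:, support]
crb = sigma**2 * np.trace(np.linalg.inv(Hs.T @ Hs))

# Oracle estimator: LS restricted to the true support.
alpha_oracle = np.zeros(p)
alpha_oracle[support] = np.linalg.lstsq(Hs, y, rcond=None)[0]
\end{verbatim}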
While a substantial gap exists between the CRB and the MSE of the practical estimators in this case, both curves exhibit a similar rate of increase as $s$ grows. Interestingly, a drawback of the GDS approach is visible in this setting: as $s$ increases, correct support recovery becomes more difficult, and shrinkage becomes a valuable asset for reducing the sensitivity of the estimate to random measurement fluctuations. The LS approach practiced by the GDS, which does not perform shrinkage, leads to gradual performance deterioration. Results similar to Fig.~\ref{fi:sim} were obtained for a variety of related estimation scenarios, including several deterministic, rather than random, dictionaries $\H$. \section{Discussion} \label{se:discuss} In this paper, we extended the CRB to constraint sets satisfying the local balance condition (Theorem~\ref{th:crb}). This enabled us to derive lower bounds on the achievable performance in various estimation problems (Theorems \ref{th:alf} and~\ref{th:deblur}). In simple terms, Theorems \ref{th:alf} and~\ref{th:deblur} can be summarized as follows. The behavior of the CRB differs depending on whether or not the parameter has maximal support (i.e., $\|{\boldsymbol \alpha}\|_0 = s$). In the case of maximal support, the bound equals that which would be obtained if the sparsity pattern were known; this can be considered an ``oracle bound''. On the other hand, when $\|{\boldsymbol \alpha}\|_0 < s$, performance is identical to the unconstrained case, and the bound is substantially higher. We now discuss some practical implications of these conclusions. To simplify the discussion, we consider the case of unbiased estimators, though analogous conclusions can be drawn for any bias function. When $\|{\boldsymbol \alpha}\|_0 = s$ and all nonzero elements of ${\boldsymbol \alpha}$ are considerably larger than the standard deviation of the noise, the support set can be recovered correctly with high probability (at least if computational considerations are ignored). Thus, in this case an estimator can mimic the behavior of the oracle, and the CRB is expected to be tight. Indeed, in the high SNR limit, the ML estimator achieves the unbiased CRB\@. On the other hand, when the support of ${\boldsymbol \alpha}$ is not maximal, the unbiasedness requirement demands sensitivity to changes in all components of ${\boldsymbol \alpha}$, and consequently the bound coincides with the unconstrained CRB\@. Thus, as claimed in Section~\ref{se:crb}, in underdetermined cases no estimator is unbiased for all ${\boldsymbol \alpha} \in {\mathcal T}$. An interesting observation can also be made concerning maximal-support points ${\boldsymbol \alpha}$ for which some of the nonzero elements are close to zero. The CRB in this ``low-SNR'' case corresponds to the oracle MSE, but as we will see, the bound is loose for such values of ${\boldsymbol \alpha}$. Intuitively, at low-SNR points, any attempt to recover the sparsity pattern will occasionally fail. Consequently, despite the optimistic CRB, it is unlikely that the oracle MSE can be achieved. Indeed, the covariance matrix of any finite-variance estimator is a continuous function of ${\boldsymbol \alpha}$ \cite{lehmann98}, and the fact that performance is bounded by the (much higher) unconstrained bound when $\|{\boldsymbol \alpha}\|_0 < s$ implies that performance must be similarly poor for low SNR\@.
This excessive optimism is a result of the local nature of the CRB\@: The bound is a function of the estimation setting only in an $\varepsilon$-neighborhood of the parameter itself. Indeed, the CRB depends on the constraint set only through the feasible directions, which were defined in Section~\ref{ss:loc bal} as those directions which do not violate the constraints for \emph{sufficiently small} deviations. Thus, for the CRB, it is entirely irrelevant if some of the components of ${\boldsymbol \alpha}$ are close to zero, as long as $\supp({\boldsymbol \alpha})$ is held constant. A tighter bound for sparse estimation problems may be obtained using the Hammersley--Chapman--Robbins (HCR) approach \cite{Hammersley50, ChapmanRobbins51, gorman90}, which depends on the constraints at points beyond the local neighborhood of $\mb{x}$. Such a bound is likely to yield tighter results for low SNR values, and will create a smooth transition between the regions of maximal and non-maximal support. However, the bound will depend on more complex properties of the estimation setting, such as the distance between $\mb{D}{\boldsymbol \alpha}$ and feasible points with differing supports. The derivation of such a bound is a subject for further research. \section*{Acknowledgement} The authors would like to thank Yaniv Plan for helpful discussions. The authors are also grateful to the anonymous reviewers for their comments, which considerably improved the presentation of the paper. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Hypothesis testing, or the problem of deciding the probability distribution that generated a given observation, is one of the main problems in statistics, finding applications in the social, biological, medical and data sciences, as well as in information theory, image processing and signal processing. Depending on the specific subject and underlying assumptions, hypothesis testing has been termed classification, model selection, discrimination, or signal detection. The simplest instance of this problem is the binary case, i.e., deciding which of two probability distributions has generated the observation. Depending on whether or not priors on the hypotheses are available, the problem is referred to as Bayesian or non-Bayesian. In the Bayesian setting, the average probability of error emerges as the key performance measure. Instead, when priors are not available, the average error probability cannot be computed, and one must consider a tradeoff among the pairwise error probabilities (see e.g. \cite{Lehmann,poor2013introduction}). In \cite{Neyman}, Neyman and Pearson studied the non-Bayesian binary case and considered the minimization of one pairwise error probability subject to the other being upper-bounded by a constant. In this setting, they proved that a hypothesis test that computes the ratio between the two probability distributions for the given observation and compares it to a threshold is optimal. In order to implement the optimal likelihood ratio test, one must wait for the whole block of observation data and process it at once. In some applications, it may be preferable to attempt to make a decision as promptly as possible. In \cite{Wald1}, Wald proposed a sequential test that attempts to make a decision at every time instant, or waits one time instant for the arrival of a new observation sample; the optimality of the above test was established in \cite{Wald}. A critical underlying assumption in the above and follow-up works is that the probability distributions of each of the hypotheses are known and, thus, can be employed by testing algorithms. While highly desirable, this is an optimistic assumption that is difficult to guarantee in practice. A number of solutions have been proposed in the literature. Composite hypothesis testing considers the case where the distributions that generate the observation belong to known families of distributions. Hoeffding proposed an asymptotically optimal test for this setting in \cite{Hoeffding}. Classification assumes no knowledge of the underlying probability distributions but assumes the availability of training data (see e.g. \cite{ziv1988classification,CoverClass,Gutman}). Robust hypothesis testing assumes a statistical model of the variability of the true distribution, which is then used to design the optimal test for that robustness model \cite{Huber, Kassam, Levy, poor2013introduction}. The simplicity of Neyman and Pearson's likelihood ratio test and Wald's sequential test has brought them into practice, even in settings where the distributions are unknown. In this work, we consider an alternative to the above methods. We assume that the true probability distributions $P_0$ and $P_1$ are unknown, but instead, two fixed probability distributions $\hat P_0$ and $\hat P_1$ are used in the tests that are optimal when the distributions are known. We refer to this setting as mismatched hypothesis testing.
In this work, we study the exponential decay of the probability of error, or error exponents for short, as a proxy for the performance of hypothesis testing. In particular, we consider the error exponent tradeoff between both pairwise error probabilities. We consider the worst-case error exponent tradeoff when the actual distributions generating the observation are within a certain distance of the test distributions. As a measure of distance, we use the family of R\'enyi relative entropies \cite{renyi1961measures} (see also \cite{Harremos}) as well as $f$-divergences \cite{ali1966general,csiszar1967information} with $f(t)$ such that its second derivative at $t=1$ is bounded. We study the behavior of the worst-case tradeoff when the distance between the test and the true distributions is small and provide an approximation of the worst-case exponent as an expansion around the matched exponent, i.e., the exponent attained when the distributions are known. In addition, we extend the results to the case where the mismatch is not in the testing distributions but in the observation: an adversary has modified the observation data within a certain divergence of the observation type. This paper is structured as follows. Section \ref{sec:pre} introduces notation and reviews the preliminaries about likelihood ratio, Hoeffding's generalized likelihood ratio and sequential testing. Sections \ref{sec:lrt}, \ref{sec:glrt} and \ref{sec:sequentialHT} discuss our main results for the likelihood ratio, generalized likelihood ratio and sequential probability ratio tests in the mismatched setting. Section \ref{sec:adv} discusses the case where the mismatch is not in the distribution, but in the observation, i.e., the cases where the observation data has been tampered with by an adversary. Proofs of the main results can be found in the Appendices. \section{Preliminaries} \label{sec:pre} Consider the binary hypothesis testing problem \cite{Lehmann} where an observation $\xv=(x_1,\dotsc,x_k)\in\Xc^k$ is generated from one of two possible distributions $P^k_0$ and $P^k_1$ in the probability simplex $\Pc(\Xc^k)$, for some observation alphabet $\Xc$. We assume that $P^k_0$ and $P^k_1$ are product distributions, i.e., $P_0^k(\xv)=\prod_{i=1}^k P_0(x_i)$, and similarly for $P_1^k$. For simplicity, we assume that both $P_0(x)>0$ and $P_1(x)>0$ for each $x\in\Xc$. In the following, we describe the settings considered in the paper. \subsection{Likelihood Ratio Test} In this setting, the number of observations $k$ is assumed to be a fixed integer $n$. Let $\phi: \Xc^n \rightarrow \{0,1\}$ be a hypothesis test that decides which distribution generated the observation $\xv$. We consider deterministic tests $\phi$ that decide in favor of $P_0$ if $\xv\in \Ac_0$, where $\Ac_0\subset \Xc^n$ is the decision region for the first hypothesis, and in favor of $P_1$ otherwise. We define $\Ac_1=\Xc^n \setminus \Ac_0$ to be the decision region for the second hypothesis. The test performance is determined by the pairwise error probabilities \begin{equation}\label{eq:e1} \epsilon_0 (\phi)= \sum_{\bx \in \Ac_1} P_0^n(\bx),~~~~ \epsilon_1 (\phi)= \sum_{\bx \in \Ac_0} P_1^n(\bx). \end{equation} A hypothesis test is said to be optimal whenever it achieves the optimal error probability tradeoff given by \begin{equation}\label{eq:trade} \min_{\phi: \epsilon_1 (\phi) \leq \xi} \epsilon_0 (\phi), \end{equation} where $\xi\in[0,1]$.
The likelihood ratio test defined as \begin{equation} \phi(\xv)= \mathbbm{1} \bigg \{ \frac{P_1^n(\bx)}{P_0^n(\bx)} \geq e^{n\gamma} \bigg\}, \end{equation} was shown in \cite{Neyman} to attain the optimal tradeoff \eqref{eq:trade} for every $\gamma$. The type of a sequence $\bx= (x_1,\ldots,x_n)$ is defined as $\Th(a)=\frac{N(a|\bx)}{n}$, where $N(a|\bx)$ is the number of occurrences of the symbol $a\in\Xc$ in the string. The likelihood ratio test can also be expressed as a function of the type of the observation $\Th$ as \cite{Cover} \begin{align}\label{eq:LRTtype} \phi(\Th)= \mathbbm{1} \big\{ D(\Th\|P_0)-D(\Th\|P_1) \geq \gamma \big\}, \end{align} where $D(P\|Q)= \sum_{x\in\Xc} P(x) \log \frac{P(x)}{Q(x)}$ is the relative entropy between distributions $P$ and $Q$. This expression of the likelihood ratio test will be used extensively throughout the paper. In this paper, we study the asymptotic exponential decay of the pairwise error probabilities as the observation length $n$ tends to infinity, i.e., \begin{equation} E_0 \triangleq \liminf_{n\to\infty} -\frac{1}{n} \log \epsilon_0 (\phi), ~~~E_1 \triangleq \liminf_{n\to\infty} -\frac{1}{n} \log \epsilon_1 (\phi). \label{eq:exponents} \end{equation} These limits are known to exist for i.i.d. observations, as is the case in this paper. In order to study the tradeoff between error exponents, it is sufficient to consider deterministic tests. The optimal error exponent tradeoff $(E_0,E_1)$ is defined as \begin{align}\label{eq:tradefix} E_1(E_0) \triangleq \sup \big\{E_1\in \mathbb{R}_{+}: \exists \phi , \exists N_0 \in \ZZ_+ \ \text{s.t.} \ \forall n>N_0, ~ \epsilon_0(\phi) \leq e^{-nE_0} ~ \text{and} ~ \epsilon_1(\phi) \leq e^{-nE_1}\big\}. \end{align} By using Sanov's Theorem \cite{Cover,Dembo}, the optimal error exponent tradeoff $(E_0,E_1)$, attained by the likelihood ratio test, can be shown to be \cite{Blahut} \begin{align} E_0=\min_{Q \in \Qc_0} D(Q\|P_0)\label{eq:min1},\\ E_1=\min_{Q \in \Qc_1} D(Q\|P_1)\label{eq:min2}, \end{align} where \begin{align} \Qc_0&= \big\{Q\in \Pc(\Xc): D(Q\| P_0)-D(Q\|P_1) \geq \gamma \big\}, \label{eq:constraint1} \\ \Qc_1&= \big\{Q\in \Pc(\Xc): D(Q\| P_0)-D(Q\|P_1) \leq \gamma \big\}. \label{eq:constraint2} \end{align} The minimizing distribution in \eqref{eq:min1} and \eqref{eq:min2} is the tilted distribution \begin{equation}\label{eq:tilted} Q_{\lambda}(x)= \frac{ P_{0}^{1-\lambda}(x) P_{1}^{\lambda}(x) } {\sum_{a \in \Xc } P_{0}^{1-\lambda}(a) P_{1}^{\lambda}(a) }, ~~~0\leq\lambda \leq 1, \end{equation} whenever $\gamma$ satisfies $-D(P_0\|P_1) \leq \gamma \leq D(P_1\|P_0)$. In this case, $\lambda$ is the solution of \begin{equation}\label{eq:KKTgamma} D(Q_{\lambda}\| P_0)-D(Q_{\lambda} \| P_1) = \gamma. \end{equation} Instead, if $\gamma<-D(P_0\|P_1)$, the optimal distribution in \eqref{eq:min1} is $Q_\lambda(x)= P_0(x)$ and $E_0=0$, and if $\gamma>D(P_1\|P_0)$, the optimal distribution in \eqref{eq:min2} is $Q_\lambda(x)= P_1(x)$ and $E_1=0$. Equivalently, the dual expressions of \eqref{eq:min1} and \eqref{eq:min2} can be derived by substituting the minimizing distribution \eqref{eq:tilted} into the Lagrangian, yielding \cite{Blahut,Dembo} \begin{align} E_0&=\max_{\lambda \geq 0 } \lambda \gamma - \log \Big ( \sum_{x\in \Xc} P_0^{1-\lambda}(x) P_1^{\lambda}(x) \Big ), \\ E_1&=\max_{\lambda \geq 0 } -\lambda \gamma - \log \Big ( \sum_{x\in \Xc} P_0^{\lambda}(x) P_1^{1-\lambda}(x) \Big ).
\end{align} The Stein regime is defined as the highest error exponent under one hypothesis when the error probability under the other hypothesis is at most some fixed $ \epsilon \in (0,\frac{1}{2})$ \cite{Cover} \begin{align}\label{eq:steindef} E_1^{(\epsilon)} \triangleq \sup \big \{E_1\in \mathbb{R}_{+}: \exists \phi , \exists n_0 \in \ZZ_+ \ \text{s.t.} \ \forall n>n_0 ~ \epsilon_0 (\phi)\leq \epsilon \quad \text{and} \quad \epsilon_1(\phi) \leq e^{-nE_1} \big \}. \end{align} The optimal $E_1^{(\epsilon)}$, given by \cite{Cover} \begin{equation}\label{eq:stein2} E_1^{(\epsilon)} = D(P_0\|P_1), \end{equation} can be achieved by setting the threshold in \eqref{eq:LRTtype} to be ${\gamma} = -D(P_0\|P_1)+\frac{c}{\sqrt{n}}$, where $c$ is a constant that depends on the distributions $P_0, P_1$ and $\epsilon$. \subsection{Generalized Likelihood Ratio Test} In this setting, $k$ is also a fixed integer $n$, similarly to the likelihood ratio test case. We consider the composite hypothesis testing problem where only the first distribution $P_0$ is known, and no prior information is available regarding the second distribution $P_1$. Hoeffding proposed in \cite{Hoeffding} the following generalized likelihood ratio test for the case where $P_0$ is known and the second distribution is restricted to $\Pc(\Xc)$, \begin{equation} \phi(\bx)= \mathbbm{1} \bigg \{ \frac{\sup_{\substack{ P_1 \in \Pc(\Xc) } }P_1^n (\xv)}{P_0^n(\bx)} \geq e^{n\gamma} \bigg\}. \end{equation} Similarly to \eqref{eq:LRTtype}, Hoeffding's generalized likelihood ratio test can be expressed as a function of the type of the observation $\Th$ as \cite{Cover,Merhav} \begin{align}\label{eq:GLRTtype} \phi(\Th)= \mathbbm{1} \big\{ D(\Th\|P_0) \geq \gamma \big\}. \end{align} The pairwise error probabilities are defined exactly as in \eqref{eq:e1}, where $\Ac_0$ and $\Ac_1$ are the corresponding decision regions of Hoeffding's test. The optimal error exponent tradeoff is defined as in \eqref{eq:tradefix}. By using Sanov's Theorem, for any $0 \leq \gamma \leq D(P_1\|P_0)$ the error exponents of Hoeffding's generalized likelihood ratio test are given by \begin{align} E_0&=\min_{Q: D(Q\|P_0) \geq \gamma } D(Q\|P_0)=\gamma\label{eq:minH1},\\ E_1&=\min_{Q: D(Q\|P_0) \leq \gamma} D(Q\|P_1)\label{eq:minH2}. \end{align} The minimizing distribution in \eqref{eq:minH2} is the tilted distribution \begin{equation}\label{eq:tiltedH} Q_{\mu}(x)= \frac{ P_{0}^{\frac{\mu}{1+\mu}}(x) P_{1}^{\frac{1}{1+\mu}}(x) } {\sum_{a \in \Xc } P_{0}^{\frac{\mu}{1+\mu}}(a) P_{1}^{\frac{1}{1+\mu}}(a) }, ~~~0\leq\mu, \end{equation} where $\mu$ is the solution to $D(Q_\mu\|P_0)=\gamma$. Therefore, by comparing \eqref{eq:tilted} and \eqref{eq:tiltedH}, the optimizing distributions have the same form, and there exist thresholds for the likelihood ratio test and Hoeffding's test such that $Q_\mu =Q_\lambda$. Hence, Hoeffding's test can achieve the optimal error exponent tradeoff \cite{Merhav}. \subsection{Sequential Probability Ratio Test} In the sequential setting, the number of samples $k$ is a random variable called the stopping time $\tau$, taking values in $\ZZ_{+}$. A sequential hypothesis test is a pair $\Phi=(\phi: \Xc^\tau \rightarrow \{0,1\},\tau)$ where for every $n\geq 0$ the event $\{\tau\leq n\} \in \mathscr{F}_n$, where $\mathscr{F}_n = \sigma(X_1,\ldots,X_n)$ is the $\sigma$-algebra induced by the random variables $X_1,\ldots,X_n$.
Moreover, $\phi$ is an $\mathscr{F}_{\tau}$-measurable decision rule, i.e., a decision rule determined by causally observing the sequence $X_i$. In other words, at each time instant, the test attempts to make a decision in favor of one of the hypotheses or chooses to take a new sample. The two possible pairwise error probabilities that measure the performance of the test are defined as \begin{equation}\label{eq:errprob} \epsilon_0(\Phi)=\PP_0\big [\phi(X^{\tau})\neq 0 \big] ~ \text{,} ~ \epsilon_1(\Phi)=\PP_1\big[\phi(X^{\tau})\neq 1 \big], \end{equation} where the probabilities are over $P_0, P_1$, respectively. There are two definitions of achievable error exponents in the literature. According to \cite{Csiszar}, the optimal error exponent tradeoff is defined as \begin{align}\label{eq:tradeseq1} E_1(E_0) \triangleq \sup \Big \{E_1\in \mathbb{R}^{+}: \exists \Phi ,\ \exists \ n \in \ZZ_{+} \text{ s.t.} \ \mathbb{E}_{P_0} [\tau] \leq n, \mathbb{E}_{P_1} [\tau] \leq n, \ \epsilon_0(\Phi) \leq 2^{- n E_0} ~ \text{and} ~ \epsilon_1(\Phi) \leq 2^{- n E_1} \Big \} . \end{align} Alternatively, the expected stopping time can by design be different under each hypothesis, increasing the reliability under one of the hypotheses by taking a larger number of samples than under the alternative hypothesis. Accordingly, \cite{Poly} defined the error exponent tradeoff as \begin{align}\label{eq:tradeseq2} E_1(E_0) \triangleq \sup \Big \{E_1\in \mathbb{R}^{+}&: \exists \Phi ,\ \exists \ n_0, n_1 \in \ZZ_{+}, \text{s.t.} \ \mathbb{E}_{P_0} [\tau] \leq n_0, \nonumber \\ & \mathbb{E}_{P_1}[\tau] \leq n_1, \ \epsilon_0(\Phi) \leq 2^{- n_0 E_0} ~ \text{and} ~ \epsilon_1(\Phi) \leq 2^{- n_1 E_1} \Big \}, \end{align} which allows different stopping times under different hypotheses. The sequential probability ratio test (SPRT) $\Phi=(\phi,\tau)$ was proposed by Wald in \cite{Wald1}. The stopping time is defined as follows \begin{align} \tau=\inf \big\{n\geq1:S_n& \geq \gamma_0 \ \text{or} \ S_n \leq -\gamma_1\big\}, \label{eq:deftau} \end{align} where \begin{align}\label{eq:LLR} S_n=\sum_{i=1}^n \log \frac{P_0(x_i)}{P_1(x_i)}, \end{align} is the accumulated log-likelihood ratio (LLR) of the observed sequence $\xv$ and $\gamma_0, \gamma_1$ are two positive real numbers. Moreover, the test makes a decision according to the rule \begin{align} \phi= \begin{cases} 0& \text{if } S_\tau \geq \gamma_0 \\ 1 & \text{if } S_\tau \leq - \gamma_1.\\ \end{cases} \label{eq:defsprt} \end{align} It is shown in \cite{Berk} that the above test attains the optimal error exponent tradeoff, i.e., as the thresholds $\gamma_0, \gamma_1$ approach infinity, the test achieves the best error exponent tradeoff in \eqref{eq:tradeseq1} and \eqref{eq:tradeseq2}. It is known that the error probabilities of the sequential probability ratio test as a function of $\gamma_0$ and $\gamma_1$ are \cite{Woodroofe} \begin{align} \epsilon_0 = c_0 \cdot e^{-\gamma_1 } \quad , \quad \epsilon_1 =c_1 \cdot e^{-\gamma_0 } , \end{align} as $\gamma_0, \gamma_1 \rightarrow \infty$, where $c_0, c_1$ are positive constants. Moreover, it can also be shown that \begin{align} \mathbb{E}_{P_0} [\tau] &= \frac{ \gamma_0}{D(P_0\|P_1)}(1+o(1)) \label{eq:SPRT1}, \\ \mathbb{E}_{P_1} [\tau] &= \frac{ \gamma_1}{D(P_1\|P_0)}(1+o(1)) \label{eq:SPRT2}.
\end{align} Therefore, according to definition \eqref{eq:tradeseq1}, the optimal error exponent tradeoff is given by \begin{equation} E_0 = D(P_1\|P_0) , ~ E_1= D(P_0\|P_1), \end{equation} where the thresholds $\gamma_0, \gamma_1$ are chosen as \begin{equation}\label{eq:thresh} \gamma_0=n\big (D(P_0\|P_1)+o(1)\big),~ \gamma_1= n\big(D(P_1\|P_0)+o(1)\big). \end{equation} Hence, the sequential probability ratio test simultaneously achieves the Stein-regime error exponents of the standard likelihood ratio test. Moreover, according to definition \eqref{eq:tradeseq2}, the optimal error exponent tradeoff is given by \begin{equation} E_0 = \ell D(P_1\|P_0) , ~ E_1= \frac{1}{\ell} D(P_0\|P_1), \end{equation} where $\ell= \frac{n_1}{n_0} $. Equivalently, we have \begin{equation} \label{eq:seqexp} E_0 E_1 = D(P_0\|P_1)D(P_1\|P_0). \end{equation} To achieve \eqref{eq:seqexp}, the thresholds $\gamma_0, \gamma_1$ should be chosen as \begin{equation}\label{eq:threshell} \gamma_0=n_0\big (D(P_0\|P_1)+o(1)\big) ,~ \gamma_1= n_1\big(D(P_1\|P_0)+o(1)\big). \end{equation} \section{Likelihood Ratio Testing Sensitivity} \label{sec:lrt} In this section, we study the robustness of mismatched likelihood ratio testing. We first derive the optimal error exponent tradeoff and find the worst-case error exponent when the true distribution lies in a small relative entropy ball around the testing distribution. Then, we study the deviation of the worst-case error exponent from the matched likelihood ratio test exponent for small divergence balls, where the divergence is either the R\'enyi divergence of order $\alpha$ or the $f$-divergence with $\frac{d^2f(t)}{dt^2}\big|_{t=1}=\alpha$. Let $\hat{P}_0(x)$ and $\hat{P}_1(x)$ be the testing distributions used in the likelihood ratio test with threshold $\hat{\gamma}$ given by \begin{align}\label{eq:LRTtypeMM} \hat{\phi}(\Th)= \mathbbm{1} \big\{ D(\Th\|\hat{P}_0)-D(\Th\|\hat{P}_1) \geq \hat{\gamma} \big\}. \end{align} For simplicity, we assume that both $\hat P_0(x)>0$ and $\hat P_1(x)>0$ for each $x\in\Xc$. We are interested in the achievable error exponent tradeoff of the mismatched likelihood ratio test, i.e., \begin{align} \hat{E}_1(\hat{E}_0) \triangleq \sup& \big\{\hat{E}_1\in \mathbb{R}_{+}: \exists \hat{\gamma} , \exists N_0 \in \ZZ _+ \ \text{s.t.} \ \forall n>N_0, \epsilon_0 \leq e^{-n\hat{E}_0} ~ \text{and} ~ \epsilon_1 \leq e^{-n\hat{E}_1}\big\}. \label{eq:opttestmism} \end{align} \begin{theorem}\label{thm:mismatchLRT} For fixed $\hat{P}_0, \hat{P}_1 \in \Pc(\Xc)$, the optimal error exponent tradeoff in \eqref{eq:opttestmism} is given by \begin{align} \hat{E}_0&=\min_{Q \in \hat{\Qc}_0} D(Q\|P_0) \label{eq:LRTmis1},\\ \hat{E}_1&=\min_{Q \in \hat{\Qc}_1} D(Q\|P_1)\label{eq:LRTmis2}, \end{align} where \begin{align} \hat{\Qc}_0&= \big\{Q\in \Pc(\Xc): D(Q\| \hat{P}_0)-D(Q\|\hat{P}_1) \geq \hat{\gamma} \big \}, \label{eq:qhat1}\\ \hat{\Qc}_1&= \big\{Q\in \Pc(\Xc): D(Q\| \hat{P}_0)-D(Q\|\hat{P}_1) \leq \hat{\gamma} \big \}.
\label{eq:qhat2} \end{align} The minimizing distributions in \eqref{eq:LRTmis1} and \eqref{eq:LRTmis2} are \begin{equation}\label{eq:tiltedMM1} \hat{Q}_{\lambda_0}(x)= \frac{ P_0(x) \hat{P}_{0}^{-\lambda_0}(x) \hat{P}_{1}^{\lambda_0}(x) } {\sum_{a \in \Xc } P_0 (a) \hat{P}_{0}^{-\lambda_0}(a) \hat{P}_{1}^{\lambda_0}(a) },~~\lambda_0\geq0, \end{equation} \begin{equation}\label{eq:tiltedMM2} \hat{Q}_{\lambda_1}(x)= \frac{ P_1(x) \hat{P}_{1}^{-\lambda_1}(x) \hat{P}_{0}^{\lambda_1}(x) } {\sum_{a \in \Xc } P_1 (a) \hat{P}_{1}^{-\lambda_1}(a) \hat{P}_{0}^{\lambda_1}(a) },~~\lambda_1\geq0 \end{equation} respectively, where $ \lambda_0$ is chosen so that \begin{equation}\label{eq:KKTgamma1} D(\hat{Q}_{\lambda_0}\|\hat{P}_0)-D(\hat{Q}_{\lambda_0} \| \hat{P}_1) = \hat{\gamma}, \end{equation} whenever $D(P_0\|\hat{P}_0)- D(P_0\|\hat{P}_1) \leq \hat{\gamma}$, and otherwise, $\hat{Q}_{\lambda_0}(x)=P_0(x)$ and $\hat{E}_0=0$. Similarly, $ \lambda_1 \geq 0$ is chosen so that \begin{equation}\label{eq:KKTgamma2} D(\hat{Q}_{\lambda_1}\|\hat{P}_0)-D(\hat{Q}_{\lambda_1} \| \hat{P}_1) = \hat{\gamma}, \end{equation} whenever $ D(P_1 \|\hat{P}_0) -D(P_1 \|\hat{P}_1) \geq\hat{\gamma},$ and otherwise, $\hat{Q}_{\lambda_1}(x)=P_1(x)$ and $\hat{E}_1=0$. Furthermore, the dual expressions for the type-\RNum{1} and type-\RNum{2} error exponents are \begin{align}\label{eq:dual} \hat{E}_0&=\max_{\lambda \geq 0 } \lambda \hat{\gamma} - \log \Big ( \sum_{x\in \Xc} P_0(x) \hat{P}_0^{-\lambda}(x) \hat{P}_1^{\lambda}(x) \Big ), \\ \hat{E}_1&=\max_{\lambda \geq 0 } -\lambda \hat{\gamma} - \log \Big ( \sum_{x\in \Xc} \hat{P}_0^{\lambda}(x) P_1(x) \hat{P}_1^{-\lambda}(x) \Big ). \end{align} \end{theorem} \begin{proof} Theorem \ref{thm:mismatchLRT}, proved in Appendix \ref{apx:TmismatchLRT}, follows from a direct application of Sanov's Theorem. \end{proof} \begin{remark} For mismatched likelihood ratio testing, the optimizing distributions $\hat{Q}_{\lambda_0}, \hat{Q}_{\lambda_1}$ can be different, since the decision regions only depend on the mismatched distributions. However, if $\hat{P}_0, \hat{P}_1$ are tilted with respect to $P_0$ and $P_1$, then both $\hat{Q}_{\lambda_0}, \hat{Q}_{\lambda_1}$ are also tilted with respect to $P_0$ and $P_1$. This implies that for any set of mismatched distributions $\hat{P}_0, \hat{P}_1$ that are tilted with respect to the generating distributions, the mismatched likelihood ratio test achieves the optimal error exponent tradeoff in \eqref{eq:tradefix}. \end{remark} \begin{theorem}\label{thm:stein} In the Stein regime, the mismatched likelihood ratio test achieves \begin{equation}\label{eq:steinMM2} \hat{E}_1^{(\epsilon)}=\min_{Q \in \hat{\Qc}_1} D(Q\|P_1), \end{equation} with threshold \begin{equation}\label{eq:steinthresh2} \hat{\gamma}=D(P_0\|\hat{P}_0) -D(P_0\|\hat{P}_1) +\sqrt{\frac{V(P_0,\hat{P}_0,\hat{P}_1)}{n}}\Qsf^{-1}(\epsilon), \end{equation} where \begin{equation} V(P_0,\hat{P}_0,\hat{P}_1) = {\rm Var}_{P_0}\bigg[\log \frac{\hat{P}_0(X)}{\hat{P}_1(X)} \bigg ], \end{equation} is the variance of the random variable $\log \frac{\hat{P}_0(X)}{\hat{P}_1(X)}$ where $X$ is distributed according to $P_0$, and $\Qsf^{-1}(\epsilon)$ is the inverse of the complementary cumulative distribution function of a zero-mean unit-variance Gaussian random variable. \end{theorem} \begin{proof} Theorem \ref{thm:stein}, proved in Appendix \ref{apx:Tstein}, follows from the central limit theorem.
\end{proof} \begin{remark} Note that since $P_0$ satisfies the constraint in \eqref{eq:steinMM2}, we have $\hat{E}_1^{(\epsilon)} \leq {E}_1^{(\epsilon)}$. In fact, if $\hat{P}_0, \hat{P}_1$ are tilted with respect to $P_0, P_1$, then this inequality is met with equality. Moreover, it is easy to find a set of data and test distributions where $\hat{E}_1^{(\epsilon)} < {E}_1^{(\epsilon)}$. \end{remark} Next, we study the worst-case error-exponent performance of mismatched likelihood ratio testing when the distributions generating the observation fulfill \begin{equation}\label{eq:ball2} P_0 \in \Bc(\hat P_0,r_0),~~ P_1 \in \Bc(\hat P_1, r_1), \end{equation} where \begin{equation}\label{eq:ball} \Bc(Q,r)= \big\{P\in\Pc(\Xc): d(Q,P) \leq r \big\}, \end{equation} is a ball centered at distribution $Q$ containing all distributions whose distance from $Q$ is smaller than or equal to the radius $r$, and for the R\'enyi divergence of positive order $\alpha$ where $\alpha\neq 1$ we set $d(Q,P)=D_\alpha(Q\|P)=\frac{1}{\alpha-1} \log \sum_{x\in\Xc} Q(x)^\alpha P(x)^{1-\alpha}$, and for $\alpha=1$, the continuity in $\alpha$ leads to defining the R\'enyi divergence of order $1$ to be the relative entropy. Similarly, given a convex and twice differentiable function $f$, we set $d(Q,P)=D_{f}(Q\|P)=\sum_{x\in\Xc} P(x) f\Big( \frac{P(x)}{Q(x)} \Big) $ to be the $f$-divergence, and we set $\alpha=\frac{d^2f(t)}{dt^2}\Big|_{t=1}$. For every $P_0$, the achievable error exponent $\hat{E}_0$ does not depend on $P_1$; therefore, for every $r_0, r_1 \geq 0$, the least favorable exponents $\underline{\hat{E}}_0(r_0), \underline{\hat{E}}_1(r_1)$ can be written as \begin{align} \underline{\hat{E}}_0(r_0)&=\min_{P_0 \in \Bc(\hat P_0,r_0) } \ \ \min_{Q \in \hat{\Qc}_0} D(Q\|P_0),\label{eq:MMlower1}\\ \underline{\hat{E}}_1(r_1)&=\min_{P_1 \in \Bc(\hat P_1,r_1)} \ \ \min_{ Q \in \hat{\Qc}_1 } D(Q\|P_1), \label{eq:MMlower2} \end{align} where $\hat{\Qc}_0, \hat{\Qc}_1 $ are defined in \eqref{eq:qhat1}, \eqref{eq:qhat2}. Then, for any distribution pair $P_0 \in\Bc (\hat P_0,r_0), P_1 \in \Bc(\hat P_1,r_1)$, the corresponding error exponent pair $(\hat{E}_0, \hat{E}_1)$ satisfies \begin{equation}\label{eq:LU1} \underline{\hat{E}}_0(r_0) \leq \hat{E}_0 \text{,} \quad \underline{\hat{E}}_1(r_1) \leq \hat{E}_1. \end{equation} Figure \ref{fig:mismatch} depicts the mismatched probability distributions and the mismatched likelihood ratio test as a hyperplane dividing the probability space into the two decision regions. The worst-case achievable error exponents of mismatched likelihood ratio testing for data distributions in a divergence ball are essentially the minimum relative entropy between two convex sets of probability distributions. Specifically, the minimum relative entropy between $\Bc(\hat P_0,r_0)$ and $\hat \Qc_0$ gives $\underline{\hat{E}}_0(r_0)$, and similarly for $\underline{\hat{E}}_1(r_1)$. Observe that in the matched case, i.e., $\hat P_0=P_0$ and $\hat P_1=P_1$, $\hat{Q}_{\lambda_0}= \hat{Q}_{\lambda_1}$.
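As an illustration, the dual expressions \eqref{eq:dual} reduce the computation of $(\hat E_0,\hat E_1)$ to one-dimensional maximizations over $\lambda$, which can be carried out numerically. A minimal sketch, assuming NumPy/SciPy, full-support probability mass functions stored as arrays, and an arbitrary truncation of the half-line $\lambda\geq0$ to $[0,50]$; setting $\hat P_0=P_0$ and $\hat P_1=P_1$ recovers the matched exponents:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def mismatched_exponents(P0, P1, P0h, P1h, gamma_h):
    # Dual expressions (eq. dual): maximize over lambda >= 0,
    # truncated here to [0, 50] for the bounded search.
    log_ratio = np.log(P1h) - np.log(P0h)

    def neg_e0(lam):
        return -(lam * gamma_h
                 - np.log(np.sum(P0 * np.exp(lam * log_ratio))))

    def neg_e1(lam):
        return -(-lam * gamma_h
                 - np.log(np.sum(P1 * np.exp(-lam * log_ratio))))

    E0 = -minimize_scalar(neg_e0, bounds=(0.0, 50.0),
                          method='bounded').fun
    E1 = -minimize_scalar(neg_e1, bounds=(0.0, 50.0),
                          method='bounded').fun
    return max(E0, 0.0), max(E1, 0.0)
\end{verbatim}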
\begin{figure}[!h] \centering \begin{tikzpicture}[scale=0.85] \draw (5,1.2) -- (0,-5) -- (10,-5) --(5,1.2) ; \draw [line width=0.3mm, dashed] (4,-0.05) -- (5.,-5) ; \node at (7.29,-4.65) {\small $D(Q\|\hat{P}_0)-D(Q\|\hat{P}_1) = \hat{\gamma}$}; \node[draw,circle,inner sep=1pt,fill] at (3,-3.5) {}; \node at (3.3,-3.5) {\small $ \hat{P}_0$}; \node[draw,circle,inner sep=1pt,fill] at (6.5,-2.6) {}; \node at (6.7,-2.8) {\small $\hat{P}_1$}; \draw (2.7,-3.5) ellipse (1cm and 0.7cm); \draw (6.5,-2.5) ellipse (1 cm and 0.7 cm); \node at (1,-4.6) {$\Pc({\Xc})$}; \draw [->,>=stealth] (6.5,-2.6) -- (6.8,-1.85); \draw [->,>=stealth] (3,-3.5) -- (3.2,-4.1); \node at (2.9,-3.9) {\small $r_0$}; \node at (6.9,-2.25) {\small $r_1$}; \node[draw,circle,inner sep=1pt,fill] at (3,-3) {}; \node[draw,circle,inner sep=1pt,fill] at (6,-2.2) {}; \draw [line width=0.25mm, gray]plot [smooth, tension=1] coordinates{(3,-3) (4,-2.75) (4.55,-2.7)} ; \draw [line width=0.25mm, gray]plot [smooth, tension=1] coordinates{ (6,-2.2) (5.5,-2.1) (4.47,-2.4)} ; \node[draw,circle,inner sep=1pt,fill] at (4.55,-2.7) {}; \node[draw,circle,inner sep=1pt,fill] at (4.47,-2.4) {}; \node at (4.9,-2.8) {\tiny $\hat{Q}_{\lambda_0}$}; \node at (4.15,-2.2) {\tiny $\hat{Q}_{\lambda_1}$}; \node at (3.8,-2.52) {\small $\hat E_0$}; \node at (5.2,-1.9) {\small $\hat E_1$}; \node at (2.75,-3.1) {\small $P_0$}; \node at (6,-2.5) {\small $P_1$}; \node at (3.6,-1.4) {\small $\Ac_0$}; \node at (5.3,-0.7) {\small $\Ac_1$}; \node at (2.8,-4.5) {\small $\Bc(\hat P_0,r_0)$}; \node at (7,-3.5) {\small $\Bc(\hat P_1,r_1)$}; \end{tikzpicture} \caption{Mismatched likelihood ratio test with real distributions in divergence balls $\Bc(\hat P_0,r_0), \Bc(\hat P_1,r_1)$. } \label{fig:mismatch} \end{figure} Furthermore, since the R\'enyi divergence $D_\alpha(Q\|P)$ is convex in $P$ for $\alpha\geq0$ \cite{Harremos}, and the $f$-divergence $D_f(Q\|P)$ is convex in $P$ \cite{Csiszar}, \eqref{eq:MMlower1} is a convex optimization problem and the KKT conditions are also sufficient. In addition, for the relative entropy, writing the Lagrangian gives \begin{align}\label{eq:lagrange} L(Q,P_0,\lambda_0,\lambda_0', \nu_0, \nu_0')&= D(Q\|P_0) + \lambda_0 \big( D(Q\|\hat{P}_1) -D(Q\|\hat{P}_0) +\hat{\gamma} \big) \nonumber \\ &~~~~+ \lambda_0' \big ( D(\hat{P}_0\|P_0) -r_0\big ) + \nu_0 \Big ( \sum_{x\in \Xc} Q(x)-1\Big )+ \nu_0' \Big ( \sum_{x\in \Xc} P_0(x)-1\Big ), \end{align} where $\lambda_0,\lambda_0', \nu_0,\nu_0'$ are the Lagrange multipliers corresponding to the constraints of the optimization \eqref{eq:MMlower1}. Differentiating with respect to $Q(x)$ and $P_0(x)$ and setting the derivatives to zero, we have \begin{align} 1+\log \frac{Q(x)}{P_0(x)} +\lambda_0 \log \frac{\hat{P}_0(x)}{\hat{P}_1(x)} + \nu_0&=0,\label{eq:lagrange1}\\ -\frac{Q(x)}{P_0(x)}-\lambda_0' \frac{\hat{P}_0(x)}{P_0(x)}+\nu_0'&=0, \label{eq:lagrange2} \end{align} respectively. Solving equations \eqref{eq:lagrange1}, \eqref{eq:lagrange2} for every $x\in\Xc$, we obtain \begin{align}\label{eq:lowerworstKKT1} \underline{Q}_{\lambda_0}(x)&= \frac{ \underline{P}_0(x) \hat{P}_0^{-\lambda_0}(x) \hat{P}_1^{\lambda_0}(x) } {\sum_{a \in \Xc } \underline{P}_0(a) \hat{P}_0^{-\lambda_0}(a) \hat{P}_1^{\lambda_0}(a) },\\ \underline{P}_0(x)&=\frac{1}{1+\lambda_0'} \underline{Q}_{\lambda_0}(x) + \Big(1-\frac{1}{1+\lambda_0'}\Big) \hat{P}_0(x), \label{eq:lowerworstKKT11} \end{align} where $\lambda_0 \geq 0$.
Moreover, from the complementary slackness condition \cite{Boyd}, if $D(P_0\|\hat{P}_0)- D(P_0\|\hat{P}_1) < \hat{\gamma}$ for all $P_0$ in $\Bc(\hat{P}_0,r_0)$, then \begin{align} D(\underline{Q}_{\lambda_0}\|\hat{P}_0)-D(\underline{Q}_{\lambda_0} \| \hat{P}_1) &=\hat{\gamma},\\ D(\hat{P}_0\| \underline{P}_0) &= r_0. \end{align} Otherwise, if there exists a $\underline{P}_0$ in $\Bc(\hat P_0,r_0) $ such that $D(\underline{P}_0\|\hat{P}_0)- D(\underline{P}_0\|\hat{P}_1) \geq \hat{\gamma}$, then for this distribution $\hat{E}_0=0$. Therefore, if \begin{equation}\label{eq:gammaworsL} \max_{P_0 \in\Bc(\hat P_0,r_0) } D(P_0\| \hat{P}_0) - D(P_0\|\hat{P}_1) < \hat{\gamma} \end{equation} holds, i.e., the condition is satisfied for all $P_0$ in the relative entropy ball, then $\underline{\hat{E}}_0(r_0) >0$; otherwise $\underline{\hat{E}}_0(r_0) =0$. Similar steps hold for the second hypothesis by interchanging the roles of the distributions. Armed with the previous results, we are ready to study how the worst-case error exponents $(\underline{\hat{E}}_0, \underline{\hat{E}}_1)$ behave when the divergence ball radii $r_0,r_1$ are small. In particular, we derive a Taylor series expansion of the worst-case error exponent. This approximation can also be interpreted as the worst-case sensitivity of the test, i.e., how the test performs when the actual distributions are very close to the mismatched distributions. \begin{theorem}\label{thm:lowerworst} Consider a hypothesis testing setting with mismatch, with true distributions $P_0, P_1$ and testing distributions $\hat P_0,\hat P_1$. For every $r_i \geq 0$, where $i\in \{0,1\}$, and threshold $\hat \gamma$ satisfying \begin{equation}\label{eq:threshcodsen} -D(\hat{P}_0\|\hat{P}_1) \leq \hat{\gamma} \leq D(\hat{P}_1\|\hat{P}_0), \end{equation} the worst-case error exponents $\underline{\hat{E}}_i(r_i)$ can be expressed as \begin{equation}\label{eq:worstapproxlrt} \underline{\hat{E}}_i (r_i) = E_i - \sqrt{r_i \cdot \theta_i(\hat{P}_0,\hat{P}_1,\hat{\gamma})}+ o \big(\sqrt{r_i} \big), \end{equation} where \begin{equation}\label{eq:sensitivity} \theta_i(\hat{P}_0,\hat{P}_1,\hat{\gamma}) =\frac{2}{\alpha} {\rm Var}_{\hat{P}_i} \bigg(\frac{\hat{Q}_{\lambda}(X)}{\hat{P}_i(X)} \bigg) \end{equation} and $\hat{Q}_{\lambda}(X)$ is the minimizing distribution in \eqref{eq:tilted} for the test $\hat{\phi}$. \end{theorem} Observe that as a result of the $\sqrt{r}$ expansion, the slope of the exponent tends to infinity as $r$ approaches zero, which implies that the likelihood ratio test is very sensitive to mismatch. \begin{corollary}\label{cor:varderivative} For every $\hat{P}_0,\hat P_1 \in \Pc(\Xc)$, and $\hat{\gamma}$ satisfying \eqref{eq:threshcodsen} \begin{align} \frac{\partial }{\partial \hat{\gamma}}\theta_0(\hat{P}_0,\hat{P}_1,\hat{\gamma}) \geq 0, ~~~ \frac{\partial }{\partial \hat{\gamma}}\theta_1(\hat{P}_0,\hat{P}_1,\hat{\gamma}) \leq 0. \end{align} \end{corollary} This corollary shows that $\theta_0(\hat{P}_0,\hat{P}_1,\hat{\gamma})$ is a non-decreasing function of $\hat{\gamma}$, i.e., as $\hat{\gamma}$ increases from $-D(\hat{P}_0\|\hat{P}_1) $ to $D(\hat{P}_1\|\hat{P}_0)$, the worst-case exponent $\underline{\hat E}_0(r_0)$ becomes more sensitive to mismatch. Conversely, $\theta_1(\hat{P}_0,\hat{P}_1,\hat{\gamma})$ is a non-increasing function of $\hat{\gamma}$, i.e., as $\hat{\gamma}$ increases from $-D(\hat{P}_0\|\hat{P}_1) $ to $D(\hat{P}_1\|\hat{P}_0)$, the worst-case exponent $ \underline{\hat E}_1(r_1)$ becomes less sensitive (more robust) to mismatch.
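The sensitivity \eqref{eq:sensitivity} is also simple to evaluate numerically once the tilted distribution $\hat{Q}_{\lambda}$ is computed. The following Python sketch is an illustration under stated assumptions: the alphabet is finite, the threshold satisfies \eqref{eq:threshcodsen}, and the bisection relies on the fact that $\mathbb{E}_{\hat{Q}_{\lambda}}[\log(\hat{P}_1(X)/\hat{P}_0(X))]$ is increasing in $\lambda$ along the tilted family.
\begin{verbatim}
import numpy as np

def tilted(hP0, hP1, lam):
    # Tilted (geometric-mixture) distribution between hP0 and hP1.
    q = hP0 ** (1.0 - lam) * hP1 ** lam
    return q / q.sum()

def lrt_sensitivities(hP0, hP1, gamma, alpha=1.0, tol=1e-12):
    # Find lam in [0,1] with D(Q_lam||hP0) - D(Q_lam||hP1) = gamma by bisection.
    lo, hi = 0.0, 1.0
    llr = np.log(hP1 / hP0)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.dot(tilted(hP0, hP1, lam), llr) < gamma:
            lo = lam
        else:
            hi = lam
    q = tilted(hP0, hP1, 0.5 * (lo + hi))
    # theta_i = (2/alpha) Var_{hPi}[q(X)/hPi(X)] = (2/alpha)(sum q^2/hPi - 1).
    theta = lambda p: (2.0 / alpha) * (np.dot(q, q / p) - 1.0)
    return theta(hP0), theta(hP1)

hP0, hP1 = np.array([0.9, 0.1]), np.array([0.2, 0.8])
print(lrt_sensitivities(hP0, hP1, gamma=0.0))
\end{verbatim}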
Moreover, when $\lambda=\frac{1}{2}$, we have \begin{equation} \hat{Q}_{\frac{1}{2}}(x)=\frac{\sqrt{\hat{P}_0(x) \hat{P}_1(x) }}{ \sum_{a \in \Xc } \sqrt{\hat{P}_0(a) \hat{P}_1(a) } }, \end{equation} and then $\theta_0(\hat{P}_0,\hat{P}_1,\hat{\gamma})=\theta_1(\hat{P}_0,\hat{P}_1,\hat{\gamma})$. In addition, $\hat{Q}_{\frac{1}{2}}$ minimizes ${E}_0 + {E}_1$, yielding \cite{Veeravalli} \begin{align} {E}_0 + {E}_1&= \min_{Q \in \Pc(\Xc)} D(Q\|\hat{P}_0) + D(Q\|\hat{P}_1)\\& = 2 B(\hat{P}_0,\hat{P}_1) \end{align} where $B(\hat{P}_0,\hat{P}_1)=-\log\sum_{x\in\Xc}\sqrt{\hat{P}_0(x)\hat{P}_1(x)}$ is the Bhattacharyya distance between the mismatched distributions $\hat{P}_0$ and $\hat{P}_1$. This suggests that having equal sensitivity (or robustness) under both hypotheses minimizes the sum of the exponents. \begin{example} When $\gamma=0$, the likelihood ratio test becomes the maximum-likelihood test, which is known to achieve the lowest average probability of error in the Bayes setting for equal priors. For fixed priors $\pi_0,\pi_1$, the error probability in the Bayes setting is $\bar\epsilon= \pi_0\epsilon_0 +\pi_1\epsilon_1$, resulting in the following error exponent \cite{Cover} \begin{equation} \bar E= \lim_{n\rightarrow \infty} -\frac{1}{n} \log \bar \epsilon = \min \{E_0,E_1\}, \end{equation} assuming that the priors $\pi_0,\pi_1$ are independent of $n$. Consider $\hat{P}_0 =\text{Bern}(0.1)$, $\hat{P}_1 =\text{Bern}(0.8)$. Also, assume $r_0=r_1=r$. Figure \ref{fig:worstRsen} shows the worst-case error exponent in the Bayes setting given by $\min \{\underline{\hat E}_0,\underline{\hat E}_1\}$, obtained by solving \eqref{eq:MMlower1} and \eqref{eq:MMlower2}, as well as $\min \{\underline{\tilde{E}}_0,\underline{\tilde{E}}_1\}$, obtained from the approximations in \eqref{eq:worstapproxlrt}, for the R\'enyi divergence balls of order $\alpha\in \{\frac{1}{2},1,2 \}$. Similarly, Figure \ref{fig:worstRsenfdiver} shows the worst-case error exponent in the Bayes setting for two $f$-divergences, namely the $\chi^2$ and Hellinger divergences. We can see that the approximation is good, especially for small radii $r$. Moreover, it can be seen that the error exponents are very sensitive to mismatch for small $r$, i.e., the slope of the worst-case exponent goes to infinity as $r$ approaches zero. \begin{figure}[htp] \centering \input{renyi.tex} \caption{Worst-case achievable Bayes error exponent for the R\'enyi divergence balls of order $\alpha\in \{\frac{1}{2},1,2 \}$. The solid lines correspond to the optimization problems in \eqref{eq:MMlower1}, \eqref{eq:MMlower2} and the dashed lines correspond to the approximated Bayes exponent using Theorem \ref{thm:lowerworst}. } \label{fig:worstRsen} \end{figure} \begin{figure}[htp] \centering \input{fdiver.tex} \caption{Worst-case achievable Bayes error exponent for $\chi^2$ and Hellinger divergence balls. The solid lines correspond to the optimization problems in \eqref{eq:MMlower1}, \eqref{eq:MMlower2} and the dashed lines correspond to the approximated Bayes exponent using Theorem \ref{thm:lowerworst}. } \label{fig:worstRsenfdiver} \end{figure} \end{example} \section{Generalized Likelihood Ratio Testing Sensitivity } \label{sec:glrt} Next, we study the performance of Hoeffding's test under mismatch. Similarly to the previous section, $P_0$ denotes the actual distribution that generated the observation and $\hat{P}_0$ indicates the mismatched distribution used in the test.
Hoeffding's test using the mismatched distribution $\hat{P}_0$ with threshold $\hat{\gamma}$ is given by \begin{equation} \hat{\phi}(\Th)= \mathbbm{1}\{ D(\Th\|\hat{P}_0) \geq \hat \gamma \}. \end{equation} Note that since the original test by Hoeffding does not depend on the second distribution (cf. \eqref{eq:GLRTtype}), the test is also independent of the second probability distribution in the mismatched case. Therefore, by Sanov's theorem, for every $P_0, P_1$ the error exponent $\hat{E}_1$ is equal to \begin{equation} \hat{E}_1=\min_{Q\in \Pc(\Xc): D(Q\|\hat{P}_0)\leq \hat{\gamma}} D(Q\|P_1). \end{equation} The above optimization is a convex problem and, by the KKT conditions, the minimizer is the tilted distribution between $\hat{P}_0$ and $P_1$ given by \begin{equation}\label{eq:tiltedmu} \hat{Q}_{\mu}(x)= \frac{ \hat{P}_{0}^{\frac{\mu}{1-\mu}}(x) P_{1}^{\frac{1}{1-\mu}}(x) } {\sum_{a \in \Xc } \hat{P}_{0}^{\frac{\mu}{1-\mu}}(a) P_{1}^{\frac{1}{1-\mu}}(a) }, \end{equation} where $\mu$ is the solution to \begin{equation}\label{eq:KKTgamma} D(\hat{Q}_{\mu}\| \hat{P}_0)= \hat{\gamma}. \end{equation} Similarly, by Sanov's theorem, the error exponent $\hat{E}_0$ is \begin{equation} \hat{E}_0=\min_{Q\in \Pc(\Xc): D(Q\|\hat{P}_0)\geq \hat{\gamma}} D(Q\|P_0). \label{eq:e0hoeffmism} \end{equation} Unfortunately, the solution to the above error exponent cannot be derived by convex optimization since the constraint set is the complement of a convex set. In the following, we introduce an upper bound to the achievable exponents. \begin{theorem}\label{thm:hoeffupper} For fixed $\hat{P}_0, P_0 \in \Pc(\Xc)$, the error exponent $\hat{E}_0$ of Hoeffding's test with mismatch is upper bounded by \begin{equation} \hat{E}_0 \leq \hat{\gamma} - \sqrt{2\hat{\gamma}} \|\hat{P}_0-P_0\|_{{\rm TV}}, \end{equation} where $\|\cdot\|_{{\rm TV}}$ is the total variation distance. \end{theorem} Observe from Theorem \ref{thm:hoeffupper} that the highest achievable exponent in Hoeffding's test is equal to the achievable exponent when $\hat{P}_0=P_0$, i.e., the mismatch will always result in a suboptimal error-exponent tradeoff. However, likelihood ratio testing can still achieve the optimal error-exponent tradeoff under mismatch if the mismatched distributions are tilted versions of the actual distributions. The universality of Hoeffding's test can explain its higher sensitivity to mismatch. We now focus on the worst-case error-exponent performance of the mismatched Hoeffding test when the distributions generating the observation fulfill \eqref{eq:ball2}, i.e., they are inside a divergence ball of radii $r_0, r_1$. Figure \ref{fig:mismatchGLRT} illustrates the mismatched probability distributions and the mismatched Hoeffding test as the relative entropy ball centered at $\hat{P}_0$ divides the probability space into the two decision regions.
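As a numerical aside, the exponent $\hat{E}_1$ characterized by \eqref{eq:tiltedmu} and \eqref{eq:KKTgamma} can be computed by a one-dimensional bisection. The following Python sketch is a minimal illustration (function names and tolerances are arbitrary choices): it uses the fact that the minimizer lies, up to a reparametrization of $\mu$, on the geometric-mixture path between $P_1$ and $\hat{P}_0$, along which the divergence to $\hat{P}_0$ decreases as the mixture moves towards $\hat{P}_0$.
\begin{verbatim}
import numpy as np

def kl(p, q):
    # Relative entropy D(p||q) for strictly positive numpy arrays.
    return float(np.dot(p, np.log(p / q)))

def hoeffding_type2_exponent(hP0, P1, gamma, tol=1e-12):
    # Minimize D(Q||P1) subject to D(Q||hP0) <= gamma.
    if kl(P1, hP0) <= gamma:
        return 0.0  # P1 itself is feasible, so the exponent is zero
    lo, hi = 0.0, 1.0  # Q_s ~ hP0^s * P1^(1-s); s=0 gives P1, s=1 gives hP0
    while hi - lo > tol:
        s = 0.5 * (lo + hi)
        q = hP0 ** s * P1 ** (1.0 - s)
        q /= q.sum()
        if kl(q, hP0) > gamma:
            lo = s  # still outside the ball: move towards hP0
        else:
            hi = s
    return kl(q, P1)

hP0, P1 = np.array([0.9, 0.1]), np.array([0.2, 0.8])
print(hoeffding_type2_exponent(hP0, P1, gamma=0.05))
\end{verbatim}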
\begin{figure}[!h] \centering \begin{tikzpicture}[scale=0.85] \draw (5,2) -- (0,-5) -- (10,-5) --(5,2) ; \node[draw,circle,inner sep=1pt,fill] at (4,-2.5) {}; \node at (3.85,-2.7) {\small $ \hat{P}_0$}; \node[draw,circle,inner sep=1pt,fill] at (7.5,-2.5) {}; \node at (7.5,-2.8) {\small $P_1$}; \draw (4,-2.5) ellipse (1*0.8 cm and 0.9*0.8cm); \draw (4,-2.5) ellipse (1.8*0.8 cm and 1.5*0.8cm); \node at (1.,-4.7) {\small $\Pc({\Xc})$}; \draw [->,>=stealth] (4,-2.5) -- (4.5,-1.9); \node at (4.1,-2.1) {\small $r_0$}; \draw [->,>=stealth] (4,-2.5) -- (3.1,-1.55); \node at (3.45,-1.7) {\small $\hat{\gamma}$}; \node[draw,circle,inner sep=1pt,fill] at (4.57,-3) {}; \draw [gray]plot [smooth, tension=1] coordinates{(4.57,-3) (4.9,-3.2) (5.08,-3.3)} ; \draw [gray]plot [smooth, tension=1] coordinates{ (5.43,-2.5) (6.5,-2.6) (7.5,-2.5)} ; \node at (5,-2.9) {\small $\hat{E}_0$}; \node at (6.5,-2.3) {\small $\hat{E}_1$}; \node at (4.6,-3.3) {\small $P_0$}; \node at (3.,-2.9) {\small $\Ac_0$}; \node at (5.3,-0.7) {\small $\Ac_1$}; \end{tikzpicture} \caption{Mismatched Hoeffding's test with real distribution $P_0$ from a divergence ball $\Bc(\hat{P}_0,r_0)$.} \label{fig:mismatchGLRT} \end{figure} For every $P_0$, the achievable error exponent $\hat{E}_0$ does not depend on $P_1$; therefore, for every $r_0$, the least favorable exponent $\underline{\hat{E}}_0(r_0)$, defined from \eqref{eq:e0hoeffmism}, can be written as \begin{align}\label{eq:worstHoef} \underline{\hat{E}}_0(r_0)&=\min_{\substack{P_0:P_0 \in \Bc(\hat P_0,r_0) \\ Q\in \Pc(\Xc): D(Q\|\hat{P}_0) \geq \hat{\gamma}}} D(Q\|P_0), \end{align} where $\Bc(\hat P_0,r_0)$ is the divergence ball centered at $\hat P_0$ with radius $r_0$, defined in \eqref{eq:ball}, with the divergence measure parametrized by $\alpha$. As opposed to the mismatched likelihood ratio test, where the worst achievable exponent could be found by solving a convex problem, here the optimization problem in \eqref{eq:worstHoef} is nonconvex and in principle difficult to solve. However, as the next theorem states, we are still able to perform a Taylor series expansion to find the behavior of the worst-case exponent $\underline{\hat{E}}_0(r_0)$ when the radius $r_0$ of the divergence ball is small. \begin{theorem}\label{thm:lowerworstHoef} Consider a mismatched generalized likelihood ratio test with real distributions $P_0,P_1$ and test distribution $\hat P_0$. For every $ r_0 \geq 0$, the error exponent $\underline{\hat{E}}_0(r_0)$ can be approximated as \begin{equation}\label{eq:worstapproxHoef} \underline{\hat{E}}_0(r_0) = E_0 - \sqrt{r_0\cdot \theta_0(\hat{P}_0,\hat{\gamma})}+o(\sqrt{r_0}), \end{equation} where \begin{equation}\label{eq:hoefsen} \theta_0(\hat{P}_0,\hat{\gamma}) = \max_{\substack{ Q: D(Q\|\hat{P}_0) = \hat{\gamma}}}\frac{2}{\alpha} {\rm Var}_{\hat{P}_0} \bigg(\frac{Q(X)}{\hat{P}_0(X)} \bigg), \end{equation} is the sensitivity of Hoeffding's test with mismatch. \end{theorem} Observe that while the expressions \eqref{eq:sensitivity} and \eqref{eq:hoefsen} are structurally similar, \eqref{eq:hoefsen} involves an additional maximization step. The following result compares the worst-case sensitivities of the mismatched likelihood ratio test and Hoeffding's test. \begin{corollary}\label{cor:comparing} Let $\hat{P}_0$ be fixed and $\hat{P}_1$ be some distribution used in the likelihood ratio test.
Also, let $\theta_0^{\rm h}(\hat{P}_0,\hat\gamma^{\rm h})$ denote Hoeffding's test sensitivity, and let $\theta_0^{\rm lrt}(\hat{P}_0,\hat{P}_1,\hat\gamma^{\rm lrt})$ be the sensitivity of the likelihood ratio test when the threshold $\hat\gamma^{\rm lrt}$ is chosen such that the type-\RNum{1} error exponent is equal to $\hat \gamma^{\rm h}$. Then, we have \begin{equation}\label{eq:cor} 1 \leq \frac{\theta_0^{\rm h}(\hat P_0,\hat\gamma^{\rm h})}{\theta_0^{\rm lrt}(\hat P_0,\hat P_1,\hat\gamma^{\rm lrt})}\leq \sqrt{ \frac{4}{\min_{x\in \Xc} \hat P_0(x)}}. \end{equation} \end{corollary} \section{Sequential Probability Ratio Testing Sensitivity } \label{sec:sequentialHT} The sequential probability ratio test $\hat{\Phi}=(\hat{\phi},\hta)$ with thresholds $\hgo, \hgt$ and mismatched distributions $\hpo, \hpt$ is given by \begin{align} \hta=\inf \{n\geq1: \hat{S}_n& \geq \hgo \ \mathrm{or} \ \hat{S}_n \leq -\hgt\}, \end{align} where \begin{align} \hat{S}_n=\sum_{i=1}^n \log \frac{\hat{P}_0(x_i)}{\hat{P}_1(x_i)}, \end{align} and \begin{align} \hat{\phi}= \begin{cases} 0 & \text{if } \hat{S}_{\hta} \geq \hgo \\ 1 & \text{if } \hat{S}_{\hta} \leq - \hgt.\\ \end{cases} \end{align} Similarly to the previous sections, in order to study the sensitivity of the mismatched sequential probability ratio test, we first study the highest achievable error exponents, i.e., \begin{align}\label{eq:tradeseqMM1} \hEt(\hEo) \triangleq \sup \Big \{\hEt \in \mathbb{R}_{+}&: \exists \hgo,\hgt ,\ \exists \ n \in \ZZ_{+} \ \text{ s.t.} \notag\\ &\forall \ \mathbb{E}_{P_0} [\hta] \leq n, \mathbb{E}_{P_1} [\hta] \leq n, \ \epsilon_0(\hat{\Phi}) \leq 2^{- n \hEo} \quad \text{and} \quad \epsilon_1(\hat{\Phi}) \leq 2^{- n\hEt } \Big \} , \end{align} which is analogous to the definition in \eqref{eq:tradeseq1}. Similarly to \eqref{eq:tradeseq2}, we can also define the following tradeoff \begin{align}\label{eq:tradeseqMM2} \hEt(\hEo) \triangleq \sup \Big \{\hEt \in \mathbb{R}_{+}&: \exists \hgo, \hgt ,\ \exists n_0, n_1 \in \ZZ_{+}, \text{s.t.} \nonumber \\ & \mathbb{E}_{P_0} [\hta] \leq n_0, \mathbb{E}_{P_1}[\hta] \leq n_1, \ \epsilon_0(\hat{\Phi}) \leq 2^{- n_0 \hEo} ~ \text{and} ~ \epsilon_1(\hat{\Phi}) \leq 2^{- n_1 \hEt} \Big \}. \end{align} The next theorem provides the error exponents $\hEo,\hEt$ and the average stopping times $\mathbb{E}_{P_0}[\hta], \mathbb{E}_{P_1}[\hta]$ of the mismatched sequential probability ratio test as a function of the thresholds $\hgo, \hgt$. \begin{theorem}\label{thm:seqMM} For fixed probability measures $\hat{P}_0, \hat{P}_1$, suppose that \begin{equation}\label{eq:posdrift} D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)>0, \qquad D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)>0. \end{equation} Then, as $\hgo,\hgt \rightarrow \infty$, the pairwise probabilities of error $\heo,\het$ are given by \begin{align} \heo &= \hat c_0 \cdot e^{-\frac{D(P_0\|P_1)}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)}\hgt } \label{eq:MMexp1}, \\ \het &= \hat c_1 \cdot e^{-\frac{D(P_1\|P_0)}{D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)}\hgo } \label{eq:MMexp2}, \end{align} where $\hat c_0, \hat c_1$ are positive constants. Furthermore, the expected stopping times are given by \begin{align} \label{eq:SPRTthresh} \mathbb{E}_{P_0}[\hta]&=\frac{ \hgo}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)}(1+o(1)),\\ \mathbb{E}_{P_1}[\hta]&=\frac{ \hgt}{D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)} (1+o(1)). \end{align} \end{theorem} The next result states that if the average drift of the likelihood ratio changes sign under mismatch, the probability of error under that hypothesis tends to one.
\begin{theorem} \label{thm:negdrift} For fixed $\hat{P}_0, \hat{P}_1$, let \begin{equation}\label{eq:negdrift} D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)<0, \quad D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)<0. \end{equation} Then, as the thresholds $\hgo, \hgt$ approach infinity, \begin{equation} \hat{\epsilon}_0 \rightarrow 1, \quad \hat{\epsilon}_1 \rightarrow 1. \end{equation} \end{theorem} \begin{corollary}\label{cor:exp} Under the conditions of Theorem \ref{thm:seqMM}, the achievable error exponent tradeoff according to \eqref{eq:tradeseqMM1} is given by \begin{align} \hEo &= D(P_0\|P_1) \frac{ D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1) }{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0) } , \\ \hEt&= D(P_1\|P_0)\frac{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0) }{D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)} , \end{align} where, to achieve these exponents, the thresholds $\hgo, \hgt$ should be chosen as \begin{align}\label{eq:thresh1} \hgo&=n\big ( D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0) +o(1)\big), \\ \hgt&= n\big(D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1) +o(1)\big). \label{eq:thresh2} \end{align} Moreover, the achievable error exponents according to \eqref{eq:tradeseqMM2} satisfy \begin{equation} \hEo = \ell D(P_0\|P_1) , \quad \hEt= \frac{1}{\ell} D(P_1\|P_0), \end{equation} where $\ell= \frac{D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)}\frac{n_1}{n_0} $. Equivalently, we have that \begin{equation} \label{eq:seqexpMM} \hEo \hEt = D(P_0\|P_1)D(P_1\|P_0). \end{equation} To achieve \eqref{eq:seqexpMM}, the thresholds $\hgo, \hgt$ should be chosen as \begin{align} \hgo&=n_0\big (D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)+o(1)\big) \label{eq:threshMM1},\\ \hgt&= n_1\big(D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)+o(1)\big) \label{eq:threshMM2}. \end{align} \end{corollary} By comparing \eqref{eq:seqexp} and \eqref{eq:seqexpMM}, we can conclude that the mismatched sequential probability ratio test has the same performance as in the case with no mismatch, i.e., there exist thresholds $\hgo, \hgt$ such that the expected stopping time condition is met, and the error exponents satisfy \eqref{eq:seqexpMM}. The intuition behind the existence of thresholds such that the optimal tradeoff is achievable relies on the fact that the mismatched distributions only cause a change in the drifts of the random walk generated by $\hat{S}_n$; hence, one can choose the thresholds appropriately to rescale the random walk behavior and achieve the optimal exponents. However, choosing $\hgo, \hgt$ to achieve \eqref{eq:seqexpMM} requires the knowledge of the true probability measures $P_0, P_1$ by \eqref{eq:threshMM1}, \eqref{eq:threshMM2}, which might not be possible. Having this in mind, we consider the performance of the mismatched sequential probability ratio test when the thresholds are selected from \eqref{eq:thresh1}, \eqref{eq:thresh2}, \eqref{eq:threshMM1} and \eqref{eq:threshMM2} but replacing $P_0,P_1$ by the mismatched measures $\hat P_0,\hat P_1$. In fact, this is precisely the relevant practical scenario where only the testing probability measures $\hat P_0,\hat P_1$ are available. In this scenario, the mismatch in probability measures will induce a mismatch in the expected stopping time and the error exponents. Consider the case where \eqref{eq:thresh} is used with the mismatched measures $\hat P_0,\hat P_1$, \begin{equation}\label{eq:thresh3} \hgo=n\big (D(\hat{P}_0\|\hat{P}_1)+o(1)\big) ~,~ \hgt= n\big(D(\hat{P}_1\|\hat{P}_0)+o(1)\big).
\end{equation} Using \eqref{eq:thresh3} and Theorem \ref{thm:seqMM}, we obtain \begin{align} \mathbb{E}_{P_0}[\hta]&=n\frac{ D(\hat{P}_0\|\hat{P}_1)}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)}(1+o(1)),\\ \mathbb{E}_{P_1}[\hta]&=n\frac{ D(\hat{P}_1\|\hat{P}_0)}{D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)} (1+o(1)). \end{align} Therefore, the mismatch in the thresholds induces expected stopping times that may be larger than $n$. Letting $\eta^{-1}=\max \bigg \{\frac{ D(\hat{P}_0\|\hat{P}_1)}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)}, \frac{ D(\hat{P}_1\|\hat{P}_0)}{D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)} \bigg \}$, according to definition \eqref{eq:tradeseqMM1} we have the following exponents: \begin{align} \hEo &= \frac{D(P_0\|P_1)D(\hat{P}_1\|\hat{P}_0)}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} \eta, \label{eq:threshMMM1} \\ \hEt &= \frac{D(P_1\|P_0)D(\hat{P}_0\|\hat{P}_1)}{D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)} \eta \label{eq:threshMMM2}. \end{align} Similarly to \eqref{eq:threshell}, for the second definition of the exponents, we need to multiply one of the thresholds by $\ell$, and the corresponding exponents will be equal to $\ell$ and $\frac{1}{\ell}$ times the above exponents. We now analyze the worst-case error exponents, defined as \begin{align} \underline{\hat{E}}_i(r_i) \triangleq \min_{P_i\in\Bc(\hat P_i,r_i)} \hat E_i,~~ i\in \{0,1 \}, \label{eq:worst_case} \end{align} where $\Bc(Q,r)$ is the divergence ball of radius $r$ centered at distribution $Q$ defined in \eqref{eq:ball}. From \eqref{eq:threshMMM1} and \eqref{eq:threshMMM2}, we observe that the error exponents of the mismatched sequential probability ratio test are functions of both data distributions $P_0, P_1$, as opposed to the fixed sample-size setting, where $\hat E_0$ is independent of ${P}_1$. The next theorem shows the behavior of the worst-case exponents when the true distributions are within a small divergence ball of radii $r_0,r_1$ and centers $\hat{P}_0,\hat{P}_1$, respectively. \begin{theorem}\label{thm:lowerworstseq} Let $P_i, \hat{P}_i$ be distributions on the probability simplex $\Pc(\Xc)$ and let $r_i \geq 0$, for $i\in\{0,1\}$. Define $\bar\imath=1-i$ to be the complement of index $i$. Then, the worst-case error exponents can be approximated as \begin{equation}\label{eq:worstapproxseq} \underline{\hat{E}}_i (r_i) = E_i - \min\bigg \{ \sum_{j=0}^{1} \sqrt{r_j\cdot \theta_{i,j}(\hat{P}_0,\hat{P}_1)}, \sqrt{r_{\bar\imath}\cdot \theta_{\bar\imath}(\hat{P}_0,\hat{P}_1)} \bigg \} + o \big(\sqrt{r_0} +\sqrt{r_1} \ \big), \end{equation} where \begin{align}\label{eq:senseq} \theta_{i,j}(\hat{P}_0,\hat{P}_1) &= \begin{cases} \frac{2}{\alpha} {\rm Var}_{\hat{P}_i} \Big( \rho_i \log \frac{\hat{P}_i(X)}{\hat{P}_{\bar\imath}(X)} \Big) &i=j\\ \frac{2}{\alpha} {\rm Var}_{\hat{P}_j} \Big( \rho_i \frac{\hat{P}_i(X)}{\hat{P}_j(X)} \Big) & i\neq j \end{cases}\\ \theta_{\bar\imath}(\hat{P}_0,\hat{P}_1)&= \frac{2}{\alpha} {\rm Var}_{\hat{P}_{\bar\imath}} \bigg( \log \frac{\hat{P}_{i}(X)}{\hat{P}_{\bar\imath}(X)} +\rho_i \frac{\hat{P}_i(X)}{\hat{P}_{\bar\imath}(X)} \bigg),\\ \rho_i &= \frac{D(\hat{P}_{\bar\imath}\|\hat{P}_i)}{D(\hat{P}_i\|\hat{P}_{\bar\imath})}. \end{align} \end{theorem} Next, assuming $r_0=r_1=r$, we obtain the following result.
\begin{corollary}\label{cor:seq} For every $r=r_0= r_1 \geq 0$, $i\in\{0,1\}$, and $\bar\imath=1-i$, \begin{align}\label{eq:seqcor} & \underline{\hat{E}}_i (r) = E_i - \sqrt{r\cdot \theta_{\bar\imath}(\hat{P}_0,\hat{P}_1)} + o \big(\sqrt{r} \big), \end{align} where \begin{equation} \theta_{\bar\imath}(\hat{P}_0,\hat{P}_1)= \frac{2}{\alpha} {\rm Var}_{\hat{P}_{\bar\imath}} \bigg( \log \frac{\hat{P}_{i}(X)}{\hat{P}_{\bar\imath}(X)} +\rho_i \frac{\hat{P}_i(X)}{\hat{P}_{\bar\imath}(X)} \bigg). \end{equation} \end{corollary} As an example, consider $\hat{P}_0 =\text{Bern}(0.1)$, $\hat{P}_1 =\text{Bern}(0.8)$, and $r=r_0=r_1$, where the relative entropy is used as the divergence-ball measure of distance. Figure \ref{fig:worstRsenseq} shows the worst-case error exponent given by solving the non-convex optimization problem in \eqref{eq:worst_case} with a precision of $10^{-3}$, as well as the approximation $ \underline{\tilde{E}}_0$ obtained from \eqref{eq:seqcor} by ignoring the $o(\sqrt{r})$ term. Observe that there exists some gap between the approximation $\underline{\tilde{E}}_0$ and the actual exponent $\underline{\hat{E}}_0$ in \eqref{eq:worst_case}. The approximation consists of a linear approximation of the objective and a second-order approximation of the constraints, and computing it is straightforward for arbitrary distributions and radii. Instead, computing the exact optimization problem $\underline{\hat{E}}_0$ in \eqref{eq:worst_case} is difficult, as it is a nonconvex optimization problem involving a highly nonlinear objective, cf. Eqs. \eqref{eq:threshMMM1}--\eqref{eq:worst_case}. \begin{figure}[htp] \centering \input{seq.tex} \caption{Worst-case achievable type-\RNum{1} error exponent with a relative entropy ball of radius $r$. The solid line corresponds to the optimization problem in \eqref{eq:worst_case}, and the dashed line corresponds to the approximated exponent using Theorem \ref{thm:lowerworstseq}. } \label{fig:worstRsenseq} \end{figure} \section{Adversarial Setting} \label{sec:adv} In this section, we study the sensitivity of hypothesis testing under a perturbation of the observed samples by an adversary. \subsection{Likelihood Ratio Test} We consider the worst-case scenario where an adversary can change the sample type to $\Th'$, where the change is assumed to be limited to a divergence ball around the type $\Th$ of the actual sequence generated under either hypothesis, i.e., $d(\Th',\Th)\leq r$. Similarly to the case of distribution mismatch, by direct application of Sanov's theorem, we can find the worst-case exponents by solving the optimizations \begin{align} \underline{\hat{E}}_0(r)= \min_{\substack{ \hat{Q} \in \mathcal{Q}^{\rm adv}_0 \\ Q \in \Bc(\hat{Q},r) }} D(Q\|P_0),\label{eq:testworst1}\\ \underline{\hat{E}}_1(r)= \min_{\substack{ \hat{Q} \in \mathcal{Q}^{\rm adv}_1 \\ Q \in \Bc(\hat{Q},r) }} D(Q\|P_1),\label{eq:testworst2} \end{align} where \begin{align} \mathcal{Q}^{\rm adv}_0&= \big\{\hat{Q}\in \mathcal{P}(\mathcal{X}): D(\hat{Q}\| {P}_0)-D(\hat{Q}\|{P}_{1}) \geq {\gamma} \big \}, \label{eq:Q1set}\\ \mathcal{Q}^{\rm adv}_1&= \big\{\hat{Q}\in \mathcal{P}(\mathcal{X}): D(\hat{Q}\| {P}_0)-D(\hat{Q}\|{P}_{1}) \leq {\gamma} \big \}. \label{eq:Q2set} \end{align} Furthermore, in the case where $\Bc$ is the $f$-divergence ball or the R\'enyi divergence ball of order $\alpha \in[0,1]$, $d(\hat{Q},Q)$ is jointly convex \cite{Harremos,Csiszar}. Hence, \eqref{eq:testworst1} is a convex optimization problem and the KKT conditions are also sufficient.
Writing the Lagrangian for $\alpha=1$, we have \begin{align}\label{eq:lagrangeadv} L(Q,\hat{Q},\lambda_1,\lambda_2, \nu_1, \nu_2)&= D(Q\|P_0) + \lambda_1 \big( D(\hat{Q}\|{P}_1)-D(\hat{Q}\|P_0) +\gamma \big) + \lambda_2 \big ( D(Q\|\hat{Q}) -r\big ) \nonumber \\ &~~~~~+ \nu_1 \Big ( \sum_{x\in \mathcal{X}} Q(x)-1\Big )+ \nu_2 \Big ( \sum_{x\in \mathcal{X}} \hat{Q}(x)-1\Big ). \end{align} Differentiating with respect to $Q(x)$ and $\hat{Q}(x)$ and setting the derivatives to zero, we have \begin{align} 1+\log \frac{Q(x)}{P_0(x)} + \lambda_2 \Bigg (1+\log \frac{Q(x)}{\hat{Q}(x)}\Bigg) + \nu_1&=0,\label{eq:lagrangetest1}\\ \lambda_1 \log \frac{{P}_0(x)}{{P}_1(x)} - \lambda_2 \frac{Q(x)}{\hat{Q}(x)} + \nu_2&=0,\label{eq:lagrangetest2} \end{align} respectively. Solving equations \eqref{eq:lagrangetest1}, \eqref{eq:lagrangetest2} for every $x\in\Xc$, we obtain \begin{align}\label{eq:advKKT1} Q_{\lambda_1,\lambda_2}(x)&= \frac{ P_0(x)} {\Big( 1-\frac{\lambda_1}{\lambda_2}\gamma+\frac{\lambda_1}{\lambda_2}\log \frac{P_0(x)}{P_1(x)} \Big ) ^ {\lambda_2}} \times \Bigg( \sum_{a\in \Xc} \frac{ P_0(a)} {\Big( 1-\frac{\lambda_1}{\lambda_2}\gamma+\frac{\lambda_1}{\lambda_2}\log \frac{P_0(a)}{P_1(a)} \Big ) ^ {\lambda_2}} \Bigg)^{-1},\\ \hat{Q}_{\lambda_1,\lambda_2}(x)&=\frac{ P_0(x)} {\Big( 1-\frac{\lambda_1}{\lambda_2}\gamma+\frac{\lambda_1}{\lambda_2}\log \frac{P_0(x)}{P_1(x)} \Big ) ^ {1+\lambda_2}}\times \Bigg( \sum_{a\in \Xc} \frac{ P_0(a)} {\Big( 1-\frac{\lambda_1}{\lambda_2}\gamma+\frac{\lambda_1}{\lambda_2}\log \frac{P_0(a)}{P_1(a)} \Big ) ^ {1+\lambda_2}} \Bigg)^{-1} , \end{align} where $\lambda_1, \lambda_2$ can be found by solving $D(Q_{\lambda_1,\lambda_2}\|\hat{Q}_{\lambda_1,\lambda_2})=r$ and $D(\hat{Q}_{\lambda_1,\lambda_2}\|{P}_1) -D(\hat{Q}_{\lambda_1,\lambda_2}\|P_0) = \gamma$. Next, we study the worst-case sensitivity of the error exponents when the radius of the divergence ball is small. \begin{theorem}\label{thm:adverLRT} For every $r\geq0$ we have \begin{equation} \underline{\hat{E}}_i(r)= E_i - \sqrt{r \cdot \theta_i(P_0,P_1,\gamma) }+ o \big(\sqrt{r} \big), \end{equation} where \begin{equation}\label{eq:sampsen} \theta_i(P_0,P_1,\gamma)=\frac{2}{\alpha} \text{Var}_{Q_{\lambda}} \bigg(\log \frac{Q_{\lambda}(X)}{{P}_i(X)} \bigg), \end{equation} and ${Q}_{\lambda}(X)$ is the minimizing distribution in \eqref{eq:tilted}. \end{theorem} \begin{remark} Unlike the case of distribution mismatch, where $\frac{\partial \theta_0^{\rm dist}(\hat{P}_0,\hat{P}_1,\hat{\gamma})}{\partial \gamma} \geq 0$ and $\frac{\partial \theta_1^{\rm dist}(\hat{P}_0,\hat{P}_1,\hat{\gamma})}{\partial \gamma} \leq 0$, the sensitivities of the likelihood ratio test towards sample mismatch are neither monotonically increasing nor monotonically decreasing in the threshold. Instead, it can be shown that the derivatives of the sample sensitivities with respect to the threshold ${\gamma}$ are proportional to the skewness of the random variable $\log \frac{P_1(X)}{P_0(X)}$ under the distribution $Q_\lambda(X)$, which can have any sign, depending on $P_0$ and $P_1$. However, similarly to the case of distribution mismatch, the sensitivities under both distributions are equal when $\lambda=\frac{1}{2}$, i.e., $\theta_0({P}_0,{P}_1,{\gamma})=\theta_1({P}_0,{P}_1,{\gamma})$. More generally, for any $\lambda$, we have \begin{equation} \frac{\theta_0({P}_0,{P}_1,{\gamma})}{\theta_1({P}_0,{P}_1,{\gamma})} =\Big(\frac{\lambda}{1-\lambda} \Big) ^2.
\end{equation} \end{remark} Next, we compare the sensitivity of the likelihood ratio test to distribution mismatch and to sample mismatch. \begin{corollary} For every $P_0, P_1 \in \Pc(\Xc)$ and $i\in\{0,1\}$, we have \begin{equation}\label{eq:lowersenadv} \Bigg(\min_k\frac{P_i(k)}{Q_\lambda(k)} \Bigg ) \theta_i^{\rm dist} - E_i^2\leq \theta_i^{\rm adv}\leq\theta_i^{\rm dist}, \end{equation} where $\theta_i^{\rm dist}, \theta_i^{\rm adv}$ are the likelihood ratio test distribution and adversarial setting sensitivities in \eqref{eq:sensitivity} and \eqref{eq:sampsen}, respectively. \end{corollary} As can be seen from the above result, distribution mismatch renders the test performance more sensitive than an adversary tampering with the observation when the divergence ball radii are equal. \begin{example} Consider $\hat{P}_0 =\text{Bern}(0.1)$, $\hat{P}_1 =\text{Bern}(0.8)$. By finding the optimizing distribution $Q_\lambda$, we have $\theta_0^{\rm dist}=1.136$, $\theta_0^{\rm adv} = 0.854$, and the lower bound in \eqref{eq:lowersenadv} equals $0.151$. \end{example} Observe that for small radii, such that the approximations for the worst-case mismatch and worst-case adversarial settings are accurate, if $\frac{r^{\rm adv} }{r_i^{\rm dist} } =\frac{\theta_i^{\rm dist} }{\theta_i^{\rm adv} }$, then the worst-case error exponents under both scenarios would be equal. \subsection{Generalized Likelihood Ratio Test} Next, we consider the worst-case Hoeffding's generalized likelihood test in the adversarial setting. Similarly to the likelihood ratio test, we assume the observer receives samples with type $\Th'$, which satisfies $\Th \in \Bc(\Th',r)$, where $\Th$ is the type of the original sequence generated by the unknown hypothesis. By direct application of Sanov's theorem, we have \begin{align} \underline{\hat{E}}_0(r)= \min_{\substack{ D(\hat{Q}\|P_0)\geq \gamma \\ Q \in \Bc(\hat{Q},r) }} D(Q\|P_0),\label{eq:testworstH1}\\ \underline{\hat{E}}_1(r)= \min_{\substack{ D(\hat{Q}\|P_0)\leq \gamma \\ Q \in \Bc(\hat{Q},r) }} D(Q\|P_1).\label{eq:testworstH2} \end{align} In this scenario, unlike the case of distribution mismatch, the adversary can change both error exponents. The optimization defining $\underline{\hat{E}}_0(r)$ is nonconvex, while for the $f$-divergence or the R\'enyi divergence of order $\alpha \in [0,1]$, the optimization defining $\underline{\hat{E}}_1(r)$ is convex and hence easy to solve. As in previous sections, we study the sensitivity of the error exponents when the divergence ball radius is small. \begin{theorem}\label{thm:advGLRT} For every $r\geq0$, the worst-case error exponents can be approximated as \begin{equation}\label{eq:worstapproxHoefadv} \underline{\hat{E}}_i(r)= E_i - \sqrt{ r \cdot \theta_i(P_0,P_1, \gamma) }+o(\sqrt{r}), \end{equation} where \begin{align}\label{eq:hoefsensamp} \theta_0(P_0, P_1, \gamma) &= \frac{2}{\alpha} \max_{\substack{ \hat{Q}: D(\hat{Q}\|P_0) = \gamma}} {\rm Var}_{\hat{Q}} \bigg(\log \frac{\hat{Q}(X)}{P_0(X)} \bigg),\\ \theta_1(P_0,P_1,\gamma)&=\frac{2}{\alpha} {\rm Var}_{{Q}_{\mu}} \bigg(\log \frac{{Q}_{\mu}(X)}{P_1(X)} \bigg), \label{eq:Hoefadver1} \end{align} are the type-\RNum{1} and type-\RNum{2} error exponent sensitivities of Hoeffding's test, and ${Q}_{\mu}(X)$ is the minimizing distribution in \eqref{eq:tiltedH}. \end{theorem} In this scenario, both exponents will be affected by a change in the observation type, as opposed to distribution mismatch, where only the first exponent would change as a result of the mismatch.
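On a binary alphabet, the constraint set $\{Q: D(Q\|P_0)=\gamma\}$ in \eqref{eq:hoefsensamp} consists of at most two points, so both sensitivities can be computed exactly. The following Python sketch is an illustration under stated assumptions: $\gamma$ is small enough that both boundary points exist, $D(P_1\|P_0)>\gamma$ so that the type-\RNum{2} minimizer $Q_\mu$ lies on the boundary, and function names are arbitrary; SciPy's root finder is used for convenience.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def kl2(q, p):
    # Binary relative entropy D(Bern(q)||Bern(p)).
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def glrt_adversarial_sensitivities(p0, p1, gamma, alpha=1.0):
    # Two boundary points of {Q : D(Q||Bern(p0)) = gamma}, one on each side.
    q_lo = brentq(lambda q: kl2(q, p0) - gamma, 1e-9, p0)
    q_hi = brentq(lambda q: kl2(q, p0) - gamma, p0, 1 - 1e-9)

    def var_log(q, p):
        # Var_Q[log(Q(X)/P(X))] with Q = Bern(q), P = Bern(p).
        l1, l0 = np.log(q / p), np.log((1 - q) / (1 - p))
        m = q * l1 + (1 - q) * l0
        return q * (l1 - m) ** 2 + (1 - q) * (l0 - m) ** 2

    theta0 = (2 / alpha) * max(var_log(q_lo, p0), var_log(q_hi, p0))
    q_mu = min((q_lo, q_hi), key=lambda q: kl2(q, p1))  # closest to P1
    theta1 = (2 / alpha) * var_log(q_mu, p1)
    return theta0, theta1

print(glrt_adversarial_sensitivities(p0=0.1, p1=0.8, gamma=0.05))
\end{verbatim}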
Similarly to the likelihood ratio test, the test is more sensitive to distribution mismatch than to adversarial observation perturbation. \begin{corollary}\label{cor:samp_dist} For every $P_0, P_1 \in \Pc(\Xc)$ we have \begin{equation}\label{eq:lowersen} \theta_0^{\rm adv}\leq\theta_0^{\rm dist}, \end{equation} where $\theta_0^{\rm dist}, \theta_0^{\rm adv}$ are the generalized likelihood ratio test distribution and sample sensitivities in \eqref{eq:hoefsen} and \eqref{eq:hoefsensamp}, respectively. \end{corollary} \subsection{Sequential Probability Ratio Test} Finally, we study the impact of adversarial observation perturbation on the error exponents of the sequential probability ratio test. In this setting, we assume that the adversary can perturb the received sequence type in a divergence ball with a specific radius when the test stops. This model is the worst possible adversarial setting, as the adversary can see future samples and change the whole sequence up to the stopping time, both maximizing the stopping time and minimizing the error exponents. We find a lower bound on the error exponents and an upper bound on the average stopping time in this adversarial setting. \begin{theorem}\label{thm:SPRTadver} For every $r\geq0$ and $i\in\{0,1\}$, define $\bar\imath=1-i$ to be the complement of index $i$. The product of the worst-case error exponents satisfies \begin{equation} \underline{\hat{E}}_0 (r) \underline{\hat{E}}_1 (r) \geq \Big(D(P_1\|P_0)-2\sqrt{r\cdot \theta_0(P_0,P_1) }\Big)\Big(D(P_0\|P_1)-2\sqrt{r\cdot \theta_1(P_0,P_1) }\Big) +o(\sqrt{r}), \end{equation} where \begin{equation} \theta_i(P_0,P_1)=\frac{2}{\alpha} \text{Var}_{P_{\bar\imath}} \Bigg(\log \frac{P_{\bar\imath}(X)}{{P}_i(X)} \Bigg). \end{equation} \end{theorem} Theorem \ref{thm:SPRTadver} provides only a lower bound to the worst-case error exponents in the adversarial setting. Thus, we cannot directly compare the sensitivities derived in the mismatched case with the adversarial worst-case sensitivity. \appendices \section{Proof of Theorem \ref{thm:mismatchLRT}}\label{apx:TmismatchLRT} We show the result for $\hat{E}_0$; similar steps are valid for $\hat{E}_1$. The type-\RNum{1} probability of error can be written as \begin{align} \hat{\epsilon}_0 = \sum_{\substack{\bx\in\Xc^n\\D(\Th\|\hat{P}_0)-D(\Th\|\hat{P}_1) \geq \hat{\gamma}}} P_{0}^n(\xv). \label{eq:tailmismatch} \end{align} Applying Sanov's theorem to \eqref{eq:tailmismatch} to get \eqref{eq:LRTmis1} is immediate. The optimization problem in \eqref{eq:LRTmis1} consists of the minimization of a convex function over linear constraints. Therefore, the KKT conditions are also sufficient \cite{Boyd}. Writing the Lagrangian, we have \begin{align}\label{eq:lagrangeMM} L(Q,\lambda,\nu)= &D(Q\|P_0) + \lambda \big ( D(Q\|\hat{P}_1)-D(Q\|\hat{P}_0) +\hat{\gamma} \big ) +\nu \Big ( \sum_{x\in \Xc} Q(x)-1 \Big). \end{align} Differentiating with respect to $Q(x)$ and setting the derivative to zero, we have \begin{equation}\label{eq:lagrangeder} 1+\log \frac{Q(x)}{P_0(x)} +\lambda \log \frac{\hat{P}_0(x)}{\hat{P}_1(x)} + \nu=0. \end{equation} Solving \eqref{eq:lagrangeder} for every $x\in\Xc$, we obtain \eqref{eq:tiltedMM1}. Moreover, from the complementary slackness condition \cite{Boyd}, if \begin{equation}\label{eq:threshcondition} D(P_0\|\hat{P}_0)- D(P_0\|\hat{P}_1) \leq \hat{\gamma}, \end{equation} then \eqref{eq:KKTgamma1} should hold.
Otherwise, if \eqref{eq:threshcondition} does not hold, then $\lambda$ in \eqref{eq:lagrangeder} should be zero, and hence $\hat{Q}_{\lambda_0}=P_0$ and $\hat{E}_0=0$. Finally, substituting the minimizing distribution $\hat{Q}_{\lambda_0}$ in \eqref{eq:tiltedMM1} into \eqref{eq:lagrangeMM}, we get the dual function \begin{equation}\label{eq:dualg} g(\lambda)= \lambda \hat{\gamma} - \log \Big ( \sum_{x\in \Xc} P_0(x) \hat{P}_0^{-\lambda}(x) \hat{P}_1^{\lambda}(x) \Big ). \end{equation} Since the optimization problem in \eqref{eq:LRTmis1} is convex, the duality gap is zero \cite{Boyd}, which proves \eqref{eq:dual}. \section{Proof of Theorem \ref{thm:stein}}\label{apx:Tstein} For convenience, in this section, we make explicit the dependence of $\hat{\Qc}_1$ and $\hat{E}_1$ on the threshold of the test $\hat \gamma$, and denote them by $\hat{\Qc}_1({\hat{\gamma}})$ and $\hat{E}_1({\hat{\gamma}})$. First, notice that $\hat{E}_1$ is a non-increasing function of $\hat{\gamma}$ since for every $\hat{\gamma}_1 \leq \hat{\gamma}_2 $ we have \begin{equation} \hat{\Qc}_1({\hat{\gamma}_1}) \subset \hat{\Qc}_1({\hat{\gamma}_2}), \end{equation} hence \begin{equation} \hat{E}_1({\hat{\gamma}_2}) \leq \hat{E}_1({\hat{\gamma}_1}). \end{equation} Therefore, in the Stein regime we are looking for the smallest threshold such that $\limsup_{n\rightarrow \infty} \hat{\epsilon}_0 \leq \epsilon$. Let \begin{equation}\label{eq:steinthresh} \hat{\gamma}= D(P_0\|\hat{P}_0) - D(P_0\|\hat{P}_1) + \sqrt {\frac{ V(P_0, \hat{P}_0,\hat{P}_1) } {n} } \Qsf^{-1}(\epsilon), \end{equation} where \begin{align} V(P_0, \hat{P}_0,\hat{P}_1)&= {\rm Var}_{P_0}\bigg[ \log \frac{\hat{P}_0(X)}{\hat{P}_1(X)} \bigg ] \nonumber\\ & = \sum_{x\in \Xc} P_0(x) \bigg ( \log \frac{\hat{P}_0(x)}{\hat{P}_1(x)} \bigg )^2 - \big ( D(P_0\|\hat{P}_1) - D(P_0\|\hat{P}_0) \big)^2, \end{align} and $\Qsf^{-1}(\epsilon)$ is the inverse of the complementary cumulative distribution function (the Q-function) of a zero-mean unit-variance Gaussian random variable. For such $\hat{\gamma}$, the type-\RNum{1} error probability of the mismatched likelihood ratio test is \begin{align} \hat{\epsilon}_0=&\PP_0 \Bigg [\frac{1}{n} \sum_{i=1}^{n} \log \frac{\hat{P}_0(X_i)}{\hat{P}_1(X_i)} \leq D(P_0\|\hat{P}_1) - D(P_0\|\hat{P}_0) - \sqrt {\frac{ V(P_0, \hat{P}_0,\hat{P}_1) } {n} } \Qsf^{-1}(\epsilon) \Bigg ]. \end{align} Observe that $D(P_0\|\hat{P}_1) - D(P_0\|\hat{P}_0) = \mathbb{E}_{P_0} \Big [ \log \frac{\hat{P}_0(X)}{\hat{P}_1(X)} \Big ]$. Let $ \hat{S}_n=\frac{1}{n}\sum_{i=1}^n \hat\imath(x_i)$, where $\hat\imath(x_i)=\log \frac{\hat{P}_0(x_i)}{\hat{P}_1(x_i)}$. Letting $Z$ be a zero-mean unit-variance Gaussian random variable, by the central limit theorem we have \begin{align} &\limsup_{n\rightarrow \infty} \hat{\epsilon}_0 \notag\\ &= \limsup_{n\rightarrow \infty} \PP_0\Bigg [ \frac{ \sqrt{n} \big ( \hat{S}_n- \mathbb{E}_{P_0} [\hat\imath(X)] \big )}{\sqrt {V(P_0, \hat{P}_0,\hat{P}_1) } } \leq -\Qsf^{-1}(\epsilon) \Bigg]\\ &=\Pp\big [Z \leq -\Qsf^{-1}(\epsilon) \big]\\ &=\Pp\big [Z \geq \Qsf^{-1}(\epsilon) \big]\\ &= \epsilon, \end{align} where we have used the symmetry of the standard Gaussian distribution. Therefore, asymptotically, the type-\RNum{1} error probability of the mismatched likelihood ratio test with $\hat{\gamma}$ in \eqref{eq:steinthresh} is equal to $\epsilon$.
Next, we need to show that for any threshold $\hat{\gamma}$ and $\varepsilon>0$ such that \begin{equation}\label{eq:limsupthresh} \limsup_{n \rightarrow \infty} \hat{\gamma} +\varepsilon\leq D(P_0\|\hat{P}_0) - D(P_0\|\hat{P}_1), \end{equation} the type-\RNum{1} probability of error tends to $1$ as the number of observations approaches infinity, which implies that $D(P_0\|\hat{P}_0) - D(P_0\|\hat{P}_1)$ is the lowest possible threshold that meets the constraint $\limsup_{n\rightarrow \infty} \hat{\epsilon}_0 \leq \epsilon$. Hence, the corresponding $\hat E_1({\hat\gamma})$ is the highest type-\RNum{2} exponent that meets the constraint. In order to show this, define the following sets \begin{align} \Ec_{\delta} =& \Big \{ \bx\in\Xc^n:\, \| \Th -P_0\|_\infty < \delta \Big \},\\ \Dc =& \big\{ \bx\in\Xc^n:\, \big|D(\Th\|\hat{P}_0)-D(\Th\|\hat{P}_1) -D(P_0\|\hat{P}_0)+D(P_0\|\hat{P}_1) \big | < \varepsilon \big\}, \\ \bar \Dc =& \big\{ \bx\in\Xc^n: \,D(\Th\|\hat{P}_0)-D(\Th\|\hat{P}_1) - D(P_0\|\hat{P}_0)+D(P_0\|\hat{P}_1) \geq - \varepsilon \big\}, \end{align} where $\|\cdot\|_{\infty}$ is the infinity norm. From the continuity of $D(\cdot\|\hat{P})$, for any $\varepsilon >0$ there exists $\delta>0$ such that every type $\Th$ satisfying \begin{equation} \| \Th -P_0\|_\infty < \delta \end{equation} also satisfies \begin{equation}\label{eq:epsilondelta} \big |D(\Th\|\hat{P}_0)-D(\Th\|\hat{P}_1) - D(P_0\|\hat{P}_0)+D(P_0\|\hat{P}_1) \big | < \varepsilon. \end{equation} Therefore, when \eqref{eq:limsupthresh} holds, \begin{align} \liminf_{n\rightarrow \infty} \epsilon_0 (\hat{\phi} ) \geq& \liminf_{n\rightarrow \infty} \sum_{\xv\in\bar \Dc} P_0^n(\xv)\\ \geq &\liminf_{n\rightarrow \infty} \sum_{\xv\in \Dc} P_0^n(\xv). \end{align} Now, from the continuity argument, there exists a $\delta$ such that \begin{equation} \sum_{\xv\in\Dc} P_0^n(\xv) \geq \sum_{\xv\in\Ec_{\delta}} P_0^n(\xv). \end{equation} Set $\delta_n=\sqrt{\frac{\log n}{n}}$. Thus, for sufficiently large $n$, $\delta_n \leq \delta$. Therefore, we have \begin{align} \liminf_{n\rightarrow \infty} \epsilon_0 (\hat{\phi} ) &\geq \liminf_{n \rightarrow \infty} \sum_{\xv\in\Ec_{\delta_n}} P_0^n(\xv)\\ &\geq \lim_{n \rightarrow \infty} 1- \frac{2|\Xc|}{n}\\ &=1, \end{align} where the last step is by Hoeffding's inequality \cite{Hoeffdingineq} and the union bound. Therefore, for any $\hat{\gamma} < D(P_0\|\hat{P}_0) - D(P_0\|\hat{P}_1)$ the type-\RNum{1} error probability goes to unity, which concludes the theorem. \section{Proof of Theorem \ref{thm:lowerworst}}\label{apx:Tlowerworst} We will use the following two lemmas, which give local approximations of the R\'enyi divergence and $f$-divergences. \begin{lemma} Let $P$ and $Q$ be two probability distributions over the same alphabet $\Xc$. The R\'enyi divergence of order $\alpha$ can be locally approximated by \begin{equation} D_\alpha(P\|Q)=D_\alpha(Q\|P)=\frac{\alpha}{2} \sum_{x\in\Xc} \frac{\big(P(x)-Q(x)\big )^2}{P(x)}+ o\big(\|P-Q\|^2\big). \end{equation} \end{lemma} \begin{proof} Let $\delta P(x)=Q(x)-P(x)$ for $x\in\Xc$.
Using the second-order Taylor expansion, we have \begin{align} D_\alpha(P\|Q)&=\frac{1}{\alpha-1} \log \sum_{x\in\Xc} P(x)^\alpha Q(x)^{1-\alpha}\\ &=\frac{1}{\alpha-1} \log \sum_{x\in\Xc} P(x)^\alpha P(x)^{1-\alpha} \Bigg(1+\frac{\delta P(x)}{P(x)} \Bigg )^{(1-\alpha)} \\ &= \frac{1}{\alpha-1} \log \sum_{x\in\Xc} P(x) \Bigg(1+(1-\alpha)\frac{\delta P(x)}{P(x)} + \frac{\alpha(\alpha-1)}{2}\frac{\delta P^2(x)}{P^2(x)} +o\bigg(\frac{\delta P^2(x)}{P^2(x)}\bigg ) \Bigg )\\ &= \frac{1}{\alpha-1} \log \sum_{x\in\Xc} P(x)+(1-\alpha)\delta P(x) + \frac{\alpha(\alpha-1)}{2}\frac{\delta P^2(x)}{P(x)} +o\big(\delta P^2(x)\big ) \\ &= \frac{1}{\alpha-1} \log \Bigg( 1+ \sum_{x\in\Xc} \frac{\alpha(\alpha-1)}{2}\frac{\delta P^2(x)}{P(x)} +o\big(\delta P^2(x)\big ) \Bigg ) \\ &= \frac{\alpha}{2}\sum_{x\in\Xc} \frac{\delta P^2(x)}{P(x)} +o\big(\delta P^2(x)\big ). \end{align} $D_\alpha(Q\|P)$ can be approximated in the same way. \end{proof} \begin{lemma} Let $P$ and $Q$ be two probability distributions over the same alphabet $\Xc$, and let the convex function $f(t)$ be twice differentiable at $t = 1$. Then the $f$-divergence using such a function can be locally approximated by \begin{equation} D_f(P\|Q)=D_f(Q\|P)=\frac{f''(1)}{2} \sum_{x\in\Xc} \frac{\big(P(x)-Q(x)\big )^2}{P(x)}+ o\big(\|P-Q\|^2\big). \end{equation} \end{lemma} \begin{proof} The proof is similar to that of the previous lemma. \end{proof} We show the result under the first hypothesis, and similar steps are valid for the second hypothesis. Also, since the second-order approximations of the R\'enyi divergence and of the family of twice differentiable $f$-divergences differ only in a constant, by setting $\alpha$ to be the order of the R\'enyi divergence or $\alpha=f''(1)$ for the $f$-divergence, we can prove the result for both divergences simultaneously. Consider the first minimization in \eqref{eq:MMlower1} over $Q$, i.e., \begin{equation}\label{eq:LRTmis1perturb} \hat{E}_0= \min_{ Q \in \hat{\Qc}_0 } D(Q\|P_0). \end{equation} Observe that by assumption, $\hat P_0(x)>0$ for each $x\in\Xc$. Therefore, for every $\alpha$ there exists a positive $\bar{r}_0$ such that $P_0(x)>0$ for every $P_0 \in \Bc(\hat P_0,\bar{r}_0)$ (for example, in the case of the R\'enyi divergence with $\alpha\geq1$, $P_0(x)>0$ for every finite $r_0$). Hence, for $P_0 \in \Bc(\hat P_0,\bar{r}_0)$, the relative entropy $D(Q\|P_0)$ is continuously differentiable in both $Q$ and $P_0$. Moreover, the constraints in \eqref{eq:LRTmis1perturb} are continuously differentiable with respect to $Q$ and also trivially with respect to $P_0$, since the constraints do not depend on $P_0$. Hence, the optimization in \eqref{eq:LRTmis1perturb} consists of minimizing a continuously differentiable function over a compact set with continuously differentiable constraints. Thus, by the maximum theorem \cite{Walker}, $\hat{E}_0$ is a continuous function of $P_0$ for all $P_0 \in \Bc(\hat P_0,\bar{r}_0)$ with finite radius $\bar{r}_0$. Also, by the envelope theorem \cite{Segal}, we have \begin{equation} \frac{ \partial \hat{E}_0 }{\partial P_0(x)}= -\frac{\hat{Q}_{\lambda_0}(x)}{P_0(x)}. \end{equation} Define the vectors \begin{align} \nabla \hat{E}_0 &= \bigg( -\frac{\hat{Q}_{\lambda}(x_1)}{\hat{P}_0(x_1)},\dotsc, -\frac{\hat{Q}_{\lambda}(x_{|\Xc|})}{\hat{P}_0(x_{|\Xc|})}\bigg)^T,\\ \thetav_{P_0} &= \big(P_0(x_1)-\hat{P}_0(x_1),\dotsc,P_0(x_{|\Xc|})-\hat{P}_0(x_{|\Xc|})\big)^T.
\end{align} Applying the Taylor expansion to $\hat{E}_0$ around $P_0=\hat{P}_0$, we obtain \begin{align}\label{eq:linearapprox} \hat{E}_0=E_0 + \thetav_{P_0}^{T} \nabla \hat{E}_0 + o(\| \thetav_{P_0} \|_{\infty}). \end{align} By substituting the expansion \eqref{eq:linearapprox} into the first minimization in \eqref{eq:MMlower1}, we obtain \begin{equation}\label{eq:approxworstlrt} \underline{\hat{E}}_0 (r_0) = \min_{P_0 \in \Bc(\hat P_0,r_0) } E_0 + \thetav_{P_0}^{T} \nabla\hat{E}_0 + o(\|\thetav_{P_0} \|_{\infty}). \end{equation} Now, we further approximate the outer minimization constraint in \eqref{eq:MMlower1}. By approximating $d(\hat{P}_0, P_0 )$ we get \cite{Zheng} \begin{equation}\label{eq:KLapprox} d(\hat{P}_0 , P_0 ) = \frac{1}{2} \thetav_{P_0}^T \Jm(\hat{P}_0) \thetav_{P_0} + o (\| \thetav_{P_0} \|^2_{\infty}), \end{equation} where \begin{equation} \Jm(\hat{P}_0)=\diag\bigg( \frac{\alpha}{\hat{P}_0(x_1)},\dotsc,\frac{\alpha}{\hat{P}_0(x_{|\Xc|})}\bigg) \end{equation} is the Fisher information matrix. Therefore, \eqref{eq:approxworstlrt} can be approximated as \begin{align}\label{eq:worstapproxopt} \underline{\hat{E}}_0 (r_0) =& \min_{\substack{ \frac{1}{2} \thetav_{P_0}^T \Jm(\hat{P}_0) \thetav_{P_0} +o (\| \thetav_{P_0} \|^2_{\infty}) \leq r_{0} \\ \onev^T\thetav_{P_0}=0}} \Big \{ E_0 + \thetav_{P_0}^{T} \nabla \hat{E}_0+o(\|\thetav_{P_0} \|_{\infty}) \Big \} \\ =& \min_{\substack{ \frac{1}{2} \thetav_{P_0}^T \Jm(\hat{P}_0) \thetav_{P_0} \leq r_{0} \\ \onev^T\thetav_{P_0}=0}} \Big \{ E_0 + \thetav_{P_0}^{T} \nabla \hat{E}_0 \Big \}+o(\sqrt{r_0}), \label{eq:approxerrorlrt} \end{align} where, to get \eqref{eq:approxerrorlrt}, we have taken $o(\|\thetav_{P_0} \|_{\infty})$ out of the minimization and substituted it with $o(\|\thetav^*_{P_0}(r_0) \|_{\infty})$, where $\thetav^*_{P_0} $ is the optimal solution to the minimization. Moreover, in approximating the inequality constraint, we incur an error of the order $o(\sqrt{r_0})$ in $\|\thetav^*_{P_0}\|_{\infty}$. Also, from the inequality constraint and the restriction it imposes on the length of the vector $\thetav_{P_0} $, we have that $\|\thetav^*_{P_0}\|_{\infty} \leq c\sqrt{r_0}+o(\sqrt{r_0})$, where $c$ is independent of $r_0$, from which we obtain \eqref{eq:approxerrorlrt}. The optimization problem in \eqref{eq:approxerrorlrt} is convex and hence the KKT conditions are sufficient. The corresponding Lagrangian is given by \begin{align}\label{eq:laglrt} L(\thetav_{P_0}, \lambda,\nu) &= E_0 + \thetav_{P_0}^{T} \nabla \hat{E}_0 + \lambda \Big (\frac{1}{2}\thetav_{P_0}^T \Jm(\hat{P}_0) \thetav_{P_0} - r_{0} \Big) +\nu ( \onev^T\thetav_{P_0} ). \end{align} Differentiating with respect to $\thetav_{P_0}$ and setting to zero, we have \begin{equation}\label{eq:KKTsenlrt} \nabla \hat{E}_0 + \lambda \Jm(\hat{P}_0)\thetav_{P_0} +\nu \onev=0. \end{equation} Therefore, \begin{equation}\label{eq:deltaPsolutionlrt} \thetav_{P_0} =-\frac{1}{\lambda} \Jm^{-1}(\hat{P}_0) \big (\nabla \hat{E}_0 +\nu \onev \big ). \end{equation} Note that if $\lambda=0$, then from \eqref{eq:KKTsenlrt} we get $\nabla \hat{E}_0= -\nu \onev$, which cannot hold for thresholds satisfying \eqref{eq:threshcodsen} since $\hat{Q}_{\lambda} \neq \hat{P}_0$. Therefore, from the complementary slackness condition \cite{Boyd}, the inequality constraint in \eqref{eq:approxerrorlrt} should be satisfied with equality.
By solving $\frac{1}{2}\thetav_{P_0}^T \Jm(\hat{P}_0) \thetav_{P_0} = r_0$ and $\onev^T\thetav_{P_0} =0 $ and substituting $\lambda, \nu$ in \eqref{eq:deltaPsolutionlrt}, we obtain \begin{equation}\label{eq:deltaPlrt} \thetav_{P_0} =-\frac{ \psiv}{\sqrt{\psiv^T \Jm(\hat{P}_0)\psiv} }\sqrt{2r_0}, \end{equation} where \begin{equation} \psiv= \Jm^{-1}(\hat{P}_0)\Bigg (\nabla \hat{E}_0 -{\onev^T\Jm^{-1}(\hat{P}_0) \nabla \hat{E}_0 } \onev\Bigg ). \end{equation} Substituting \eqref{eq:deltaPlrt} into \eqref{eq:laglrt} yields \eqref{eq:worstapproxlrt}. \section{Proof of Corollary \ref{cor:varderivative}}\label{apx:Lvarderivative} We show the result under the first hypothesis; similar steps are valid under the second hypothesis. To prove the result, we need the following lemma. \begin{lemma}\label{lem:convex} Consider the following optimization problem \begin{equation} E(\gamma)= \min_{ \mathbb{E}_Q [X] \geq \gamma } D(Q\|P). \end{equation} Then $E(\gamma)$ is convex in $\gamma$. \end{lemma} \begin{proof} Let \begin{equation} Q^{*}_{1} = \argmin_{ \mathbb{E}_Q [X] \geq \gamma_1} D(Q\|P) ~~ Q^{*}_{2} = \argmin_{ \mathbb{E}_Q [X] \geq \gamma_2} D(Q\|P). \end{equation} From the convexity of the relative entropy, for any $\beta \in (0,1)$, \begin{align} D\big(\beta Q^*_1 +(1-\beta) Q^*_2 \| P \big) &\leq \beta D( Q^*_1 \| P) +(1-\beta) D( Q^*_2 \| P)\\ &= \beta \min_{ \mathbb{E}_Q [X] \geq \gamma_1 } D(Q\|P) +(1-\beta) \min_{ \mathbb{E}_Q [X] \geq \gamma_2 } D(Q\|P). \end{align} Furthermore, since $Q^*_1, Q^*_2$ satisfy their corresponding optimization constraints, we have $\mathbb{E}_{Q^*_1}[X] \geq \gamma_1$, $\mathbb{E}_{Q^*_2}[X] \geq \gamma_2$; hence \begin{equation} \mathbb{E}_{\beta Q^*_1 +(1-\beta) Q^*_2}[X] \geq \beta \gamma_1+ (1-\beta) \gamma_2. \end{equation} Therefore, $\beta Q^*_1 +(1-\beta) Q^*_2$ satisfies the optimization constraint when $\gamma= \beta \gamma_1 + (1-\beta) \gamma_2$, and thus \begin{align} \min_{ \mathbb{E}_{Q} [X] \geq \beta \gamma_1+(1-\beta) \gamma_2} D(Q\|P)& \leq D(\beta Q^*_1 +(1-\beta) Q^*_2 \| P)\\ &\leq \beta \min_{ \mathbb{E}_Q [X] \geq \gamma_1 } D(Q\|P) +(1-\beta) \min_{ \mathbb{E}_Q [X] \geq \gamma_2 } D(Q\|P). \end{align} Hence $E(\gamma)$ is convex in $\gamma$. \end{proof} From the above lemma we can show that $\lambda$ is a non-decreasing function of $\hat{\gamma}$. From the envelope theorem \cite{Segal} \begin{equation} \frac{\partial \hat{E}_0 }{\partial \hat{\gamma}} = \lambda^*, \end{equation} where $\lambda^*$ is the optimizing $\lambda$ in \eqref{eq:tilted} for the test $\hat{\phi}$. Therefore \begin{equation} \frac{\partial \lambda^* }{\partial \hat{\gamma}}= \frac{\partial ^2 \hat{E}_0 }{\partial \hat{\gamma}^2} \geq 0, \end{equation} where the inequality is from the convexity of $\hat{E}_0$ with respect to $\hat{\gamma}$. Therefore, we only need to consider the behavior of ${\rm Var}_{\hat{P}_0} \Big[\frac{\hat{Q}_{\lambda}(X)}{\hat{P}_0(X)} \Big]$ as $\lambda$ changes.
Taking the derivative of the variance with respect to $\lambda$, we have \begin{align} \frac{\partial }{\partial \lambda}{\rm Var}_{\hat{P}_0} \Bigg[\frac{\hat{Q}_{\lambda}(X)}{\hat{P}_0(X)} \Bigg]&=\sum_{x\in \Xc} \frac{2{\hat{Q}_{\lambda}}(x)}{\hat{P}_0(x)} \frac{\partial \hat{Q}_{\lambda}(x) }{\partial \lambda}\\ &= \sum_{x\in \Xc} \frac{2{\hat{Q}_{\lambda}}(x)}{\hat{P}_0(x)} \Bigg( \hat{Q}_{\lambda}(x) \log \frac{\hat{P}_1(x)}{\hat{P}_0(x)}-\hat{Q}_{\lambda}(x) \sum_{x' \in \Xc} \hat{Q}_{\lambda}(x')\log \frac{\hat{P}_1(x')}{\hat{P}_0(x')} \Bigg ) \\ &=2 \mathbb{E}_{\hat{Q}_{\lambda}} \bigg[ \frac{\hat{Q}_{\lambda}(X)}{\hat{P}_0(X)} \log \frac{\hat{P}_1(X)}{\hat{P}_0(X)} \bigg ]- 2 \mathbb{E}_{\hat{Q}_{\lambda}} \bigg [ \frac{\hat{Q}_{\lambda}(X)}{\hat{P}_0(X)}\bigg ] \mathbb{E}_{\hat{Q}_{\lambda}} \bigg [ \log \frac{\hat{P}_1(X)}{\hat{P}_0(X)} \bigg ]. \end{align} Substituting $\hat{Q}_{\lambda}(X)$ as a function of $\lambda$, we get \begin{align} &\frac{\sum_{a\in \Xc} \hat{P}_0^{1-\lambda}(a) \hat{P}_1^{\lambda}(a) }{2} \frac{\partial }{\partial \lambda}{\rm Var}_{\hat{P}_0} \bigg[\frac{\hat{Q}_{\lambda}(X)}{\hat{P}_0(X)} \bigg]\nonumber \\ &=\mathbb{E}_{\hat{Q}_{\lambda}} \bigg [ \bigg (\frac{\hat{P}_{1}(X)}{\hat{P}_0(X)} \bigg )^{\lambda} \log \frac{\hat{P}_1(X)}{\hat{P}_0(X)} \bigg ] - \mathbb{E}_{\hat{Q}_{\lambda}} \bigg [ \bigg (\frac{\hat{P}_1(X)}{\hat{P}_0(X)} \bigg ) ^{\lambda} \bigg ] \mathbb{E}_{\hat{Q}_{\lambda}} \bigg [ \log \frac{\hat{P}_1(X)}{\hat{P}_0(X)} \bigg ]. \label{eq:tiltedvarder} \end{align} Let $r(X)= \Big (\frac{\hat{P}_1(X)}{\hat{P}_0(X)}\Big )^{\lambda}$; then \begin{align} \mathbb{E}_{\hat{Q}_{\lambda}} &\bigg [ \bigg (\frac{\hat{P}_{1}(X)}{\hat{P}_0(X)} \bigg )^{\lambda} \log \frac{\hat{P}_1(X)}{\hat{P}_0(X)} \bigg ]-\mathbb{E}_{\hat{Q}_{\lambda}} \bigg [ \bigg (\frac{\hat{P}_1(X)}{\hat{P}_0(X)} \bigg ) ^{\lambda} \bigg ] \mathbb{E}_{\hat{Q}_{\lambda}} \bigg [ \log \frac{\hat{P}_1(X)}{\hat{P}_0(X)} \bigg ] \nonumber \\ &=\frac{1}{\lambda} \mathbb{E}_{\hat{Q}_{\lambda}} [ r(X) \log r(X) ]- \frac{1}{\lambda} \mathbb{E}_{\hat{Q}_{\lambda}} [ r(X) ] \mathbb{E}_{\hat{Q}_{\lambda}} [ \log r(X) ]. \label{eq:varlogsum} \end{align} Note that $\hat{Q}_{\lambda}(x),r(x)$ are positive for all $x\in \Xc$. Therefore, using the log-sum inequality \cite{Cover} for the first term and Jensen's inequality \cite{Cover} for the second term in \eqref{eq:varlogsum}, we obtain \begin{align} \frac{\lambda \sum_{a\in \Xc} \hat{P}_0^{1-\lambda}(a) \hat{P}_1^{\lambda}(a) }{2}\frac{\partial }{\partial \lambda}{\rm Var}_{\hat{P}_0} \Bigg[\frac{\hat{Q}_{\lambda}(X)}{\hat{P}_0(X)} \Bigg]& \geq \mathbb{E}_{\hat{Q}_{\lambda}} [ r(X) ] \log \mathbb{E}_{\hat{Q}_{\lambda}} [ r(X) ]- \mathbb{E}_{\hat{Q}_{\lambda}} [ r(X) ] \mathbb{E}_{\hat{Q}_{\lambda}} [ \log r(X) ] \\ &\geq \mathbb{E}_{\hat{Q}_{\lambda}} [ r(X) ] \log \mathbb{E}_{\hat{Q}_{\lambda}} [ r(X) ]- \mathbb{E}_{\hat{Q}_{\lambda}} [ r(X) ] \log \mathbb{E}_{\hat{Q}_{\lambda}} [ r(X) ]\\ &=0. \end{align} Also, the above inequalities are met with equality when both the log-sum and Jensen's inequalities are met with equality, which happens when $\lambda=0$. Therefore, for $\lambda>0$, ${\rm Var}_{\hat{P}_0} \Big[\frac{\hat{Q}_{\lambda}(X)}{\hat{P}_0(X)} \Big]$ is an increasing function of $\lambda$, and consequently \begin{equation} \frac{\partial }{\partial \hat{\gamma}}\theta_0(\hat{P}_0,\hat{P}_1,\hat{\gamma}) \geq 0.
\end{equation} \section{Proof of Theorem \ref{thm:hoeffupper}}\label{apx:Thoeffupper} By Sanov's theorem, the error exponent is given by \begin{align} \hat{E}_0&=\min_{Q: D(Q\|\hat{P}_0) \geq \hat{\gamma}} D(Q\|P_0). \end{align} The above optimization problem corresponds to the minimization of a convex function over the complement of a convex set, which attains its optimum value on the boundary. Let $Q^*$ be chosen such that $D(Q^*\|\hat{P}_0) = \hat{\gamma}$ and $P_0=\beta Q^* +(1-\beta) \hat{P}_0$ where $ 0 \leq \beta \leq 1$. \begin{align} \hat{E}_0 &= \min_{Q: D(Q\|\hat{P}_0) \geq \hat{\gamma}} D(Q\|\beta Q^* +(1-\beta) \hat{P}_0) \\ &\leq \min_{Q: D(Q\|\hat{P}_0) \geq \hat{\gamma}} \beta D(Q\| Q^*)+(1-\beta) D(Q\| \hat{P}_0) \\ &= (1-\beta) \hat{\gamma}, \label{eq:upperHoef} \end{align} where the inequality is by the convexity of relative entropy. To show that \eqref{eq:upperHoef} holds with equality, we lower bound the minimization by \begin{align} \min_{Q: D(Q\|\hat{P}_0) \geq \hat{\gamma}} &\beta D(Q\| Q^*)+(1-\beta) D(Q\| \hat{P}_0) \\ & \geq \min_{Q: D(Q\|\hat{P}_0) \geq \hat{\gamma}} (1-\beta) D(Q\| \hat{P}_0)\\ & \geq (1-\beta) \hat{\gamma}, \end{align} and by choosing $Q=Q^*$ we can achieve this lower bound. Next, we find a lower bound on $\beta$. By the definition of $Q^*$ we have \begin{equation} D\Big(\frac{1}{\beta} P_0 - \frac{1-\beta}{\beta} \hat{P}_0 \| \hat{P}_0 \Big) =\hat{\gamma}. \end{equation} Next, lower bounding the relative entropy via Pinsker's inequality, we get \begin{equation}\label{eq:alphalower} 2\Big\|\frac{1}{\beta} P_0 - \frac{1-\beta}{\beta} \hat{P}_0 - \hat{P}_0 \Big\|_{\rm TV}^2=\frac{2}{\beta^2}\| P_0-\hat{P}_0\|^2_{\rm TV} \leq \hat{\gamma}. \end{equation} By equations \eqref{eq:upperHoef} and \eqref{eq:alphalower} we conclude the result. \section{Proof of Theorem \ref{thm:lowerworstHoef}}\label{apx:TlowerworstHoef} As opposed to the likelihood ratio test, we first consider the minimization over $P_0$ for a fixed $Q$, and proceed with a Taylor expansion of $D(Q\|P_0)$ around $P_0=\hat{P}_0$ to get \begin{equation} \underline{\hat{E}}_0 (r_0)= \min_{\substack{P_0: D(\hat{P}_0\| P_0)\leq r_0 \\ Q: D(Q\|\hat{P}_0) = \hat{\gamma}}} D(Q\|\hat{P}_0) + \thetav_{P_0}^{T} \nabla {E}_0 +o(\| \thetav_{P_0} \|_{\infty}), \end{equation} where \begin{align} \nabla E_0 &= \bigg( -\frac{Q(x_1)}{\hat{P}_0(x_1)},\dotsc, -\frac{{Q}(x_{|\Xc|})}{\hat{P}_0(x_{|\Xc|})}\bigg)^T,\\ \thetav_{P_0} &= \big(P_0(x_1)-\hat{P}_0(x_1),\dotsc,P_0(x_{|\Xc|})-\hat{P}_0(x_{|\Xc|})\big)^T, \end{align} and we replaced the inequality constraint by an equality one, since the problem is that of optimizing a convex function over the complement of a convex set, which attains its optimal value on the boundary of the set. Next, optimizing over $P_0$ as in the proof of Theorem \ref{thm:lowerworst} and substituting $Q$ with $\hat{Q}_{\lambda}$, we get \begin{align} \underline{\hat{E}}_0 (r_0)&= \min_{\substack{ Q: D(Q\|\hat{P}_0) = \hat{\gamma}}} D(Q\|\hat{P}_0) - \sqrt{\frac{2}{\alpha} {\rm Var}_{\hat{P}_0} \Bigg[\frac{Q(X)}{\hat{P}_0(X)} \Bigg] r_0} + o(\sqrt{r_0}) \\ &= \hat{\gamma} - \max_{\substack{ Q: D(Q\|\hat{P}_0) = \hat{\gamma}}} \sqrt{\frac{2}{\alpha} {\rm Var}_{\hat{P}_0} \Bigg[\frac{Q(X)}{\hat{P}_0(X)} \Bigg] r_0} + o(\sqrt{r_0}), \end{align} which completes the proof. \section{Proof of Corollary \ref{cor:comparing}}\label{apx:Ccomparing} Let $\hat{Q}_{\lambda}$ be the minimizing distribution of the likelihood ratio test.
By letting $Q=\hat{Q}_{\lambda}$ in \eqref{eq:hoefsen} we get $D(\hat{Q}_{\lambda} \|\hat{P}_0)=\hat{\gamma}$, which satisfies the maximization constraint; therefore \begin{align} \theta_0^{\rm h}(\hat{P}_0,\hat{\gamma}^{\rm h}) &\geq \sqrt{\frac{2}{\alpha} {\rm Var}_{\hat{P}_0} \Bigg[\frac{\hat{Q}_{\lambda}(X)}{\hat{P}_0(X)} \Bigg]} \\ &= \theta_0^{\rm lrt}(\hat{P}_0,\hat{P}_1,\hat{\gamma}^{\rm lrt}), \end{align} which gives the lower bound. To upper bound the sensitivity ratio, we have \begin{align} \frac{\theta_0^{\rm h}(\hat{P}_0,\hat{\gamma}^{\rm h})}{\theta_0^{\rm lrt}(\hat{P}_0,\hat{P}_1,\hat{\gamma}^{\rm lrt})}\leq \frac{\max_{{ Q: D(Q\|\hat{P}_0) = {\hat{\gamma}^{\rm h}}}} \sqrt{ {\rm Var}_{\hat{P}_0} \Big[\frac{Q(X)}{\hat{P}_0(X)} \Big]} }{\min_{\hat{P}_1: D(\hat{Q}_\lambda\|\hat{P}_0) = \hat{\gamma}^{\rm h} } \sqrt{ {\rm Var}_{\hat{P}_0} \Big[\frac{\hat{Q}_\lambda(X)}{\hat{P}_0(X)} \Big]} }\,. \end{align} Also, note that ${\rm Var}_{\hat{P}_0} \Big[\frac{Q(X)}{\hat{P}_0(X)} \Big] = \chi^2(Q\|\hat{P}_0)$, where $\chi^2$ is the chi-squared distance. For every $\hat{P}_0, Q$ we have $D(Q\|\hat{P}_0) \leq \chi^2(Q\|\hat{P}_0)$ \cite{Choosing}. Therefore, \begin{equation}\label{eq:lower} \min_{\hat{P}_1 : D(\hat{Q}_\lambda\|\hat{P}_0) = \hat{\gamma}^{\rm h} } \sqrt{ {\rm Var}_{\hat{P}_0} \Bigg[\frac{\hat{Q}_\lambda(X)}{\hat{P}_0(X)} \Bigg]} \geq \sqrt{\hat{\gamma}^{\rm h}}. \end{equation} In addition, we upper bound the chi-squared distance as follows \begin{align} \chi^2(Q\|\hat{P}_0)&\leq \frac{1}{\min_{x\in \Xc} \hat{P}_0(x)} \|Q-\hat{P}_0\|_2^2 \nonumber \\ &\leq \frac{1}{\min_{x\in \Xc} \hat{P}_0(x)} \| Q-\hat{P}_0\|_1^2 \nonumber \\ &\leq \frac{4}{\min_{x\in \Xc} \hat{P}_0(x)} D(Q\|\hat{P}_0), \end{align} where we used Pinsker's inequality \cite{Cover} in the last step. Hence, we have \begin{equation}\label{eq:upper} \max_{{ Q: D(Q\|\hat{P}_0) = {\hat{\gamma}^{\rm h}}}} \sqrt{ {\rm Var}_{\hat{P}_0} \Bigg[\frac{Q(X)}{\hat{P}_0(X)} \Bigg] } \leq \sqrt{\frac{4\hat{\gamma}^{\rm h}}{\min_{x\in \Xc} \hat{P}_0(x)} }. \end{equation} Finally, from \eqref{eq:lower}, \eqref{eq:upper} we conclude \eqref{eq:cor}. \section{Proof of Theorem \ref{thm:seqMM}}\label{apx:TseqMM} The proof described below holds in general and can be used for continuous probability distributions. From the absolute continuity assumption, let the log-likelihood ratio be bounded by a positive constant $c$, i.e., \begin{equation} \bigg |\log \frac{\hpo}{\hpt}\bigg| \leq c ~~~ \forall x. \end{equation} We use the following results. \begin{theorem}[\cite{Woodroofe}] \label{thm:converge} Let $S_n=\sum_{i=1}^{n} Z_i$ be a random walk where $Z_i$ is some non-lattice random variable\footnote{A random variable $Z$ is said to be lattice if and only if $\sum_{k=-\infty}^{\infty} \text{Pr}[ Z=a+kd]=1$ for some non-negative $a,d$. Otherwise, it is said to be non-lattice.} generated in an i.i.d.\ fashion with $\mathbb{E}[Z_i] >0$. For $\gamma>0$, let \begin{equation} \tau=\inf\{n\geq 1: S_n \geq \gamma\}. \end{equation} Also, let $R_{\gamma}\triangleq S_{\tau}-\gamma$. Then $R_{\gamma}$ converges in distribution to a random variable $R$ with distribution $Q$ as $\gamma\to\infty$. Moreover, if $Z$ is a lattice random variable, then $R_{\gamma}$ has a limiting distribution $Q_d$ as $\gamma\to \infty$ through multiples of $d$. \end{theorem} The next result shows that under conditions \eqref{eq:posdrift}, the mismatched sequential probability ratio test stops at a finite time.
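Before stating it, the content of Theorem~\ref{thm:converge} can be visualized with a minimal Monte Carlo sketch. The sketch is purely illustrative; the Gaussian increments with mean $0.5$ are an arbitrary choice of a non-lattice $Z_i$ with positive drift. The empirical CDF of the overshoot $R_\gamma$ barely moves between a moderate and a large threshold:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def overshoot(gamma, n_paths=5000):
    # first-passage overshoot R_gamma = S_tau - gamma
    out = np.empty(n_paths)
    for i in range(n_paths):
        s = 0.0
        while s < gamma:
            s += rng.normal(0.5, 1.0)  # E[Z] > 0, non-lattice
        out[i] = s - gamma
    return out

r10, r50 = overshoot(10.0), overshoot(50.0)
for q in (0.25, 0.5, 1.0, 2.0):
    # empirical CDFs at a few points are nearly identical
    print(q, (r10 <= q).mean(), (r50 <= q).mean())
\end{verbatim}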
\begin{lemma}\label{lem:finite} Let $\hta_0$ be the smallest time that the mismatched sequential probability ratio test crosses threshold $\hgo$, i.e., \begin{align}\label{eq:tau1} \hta_{0}=\inf \{n\geq1: \hat{S}_n& \geq \hgo\}. \end{align} Also, assume that conditions \eqref{eq:posdrift} hold. Then, \begin{equation}\label{eq:finiteupper} \PP_0[\hta_0 \geq n] \leq e^{d\hgo} e^{-(n-1) E(0)}, \end{equation} where $E(0),d >0$. Also, as $ \hgo\to \infty $, $\hta_0\to \infty$ almost surely. \end{lemma} \begin{proof} By the Chernoff bound \cite{Dembo}, the probability of passing the threshold under the first hypothesis at a time after $n$ can be upper bounded by \begin{align} \PP_0[\hta_0 \geq n] &\leq \PP_0 \Big[ \hat{S}_{n-1} \leq \hgo\Big ]\\ &= \PP_0 \Bigg[ \sum_{i=1}^{n-1} \log \frac{\hat{P}_0(x_i)}{\hat{P}_1(x_i)} \leq \hgo\Bigg ] \\ & \leq e^{-(n-1) E\big(\frac{\hgo}{n-1}\big)} \label{eq:chernoff}, \end{align} where \begin{equation}\label{eq:lagrangelem} E(\gamma)= \sup_{s \geq 0} \Big \{- s{\gamma} - \hat{\kappa}(s) \Big\} , \end{equation} and \begin{equation} \hat{\kappa}(s)= \log \E_{P_0} \Bigg[ \frac{\hat{P}_1^s}{\hat{P}_0^s} \Bigg], \end{equation} is the cumulant function of the mismatched log-likelihood ratio. Note that for each $s$ the objective function in \eqref{eq:lagrangelem} is linear in $\gamma$, so $E(\gamma)$ is the pointwise supremum of a family of linear functions, hence convex \cite{Boyd}. By the convexity of $E(\gamma)$ we have the following lower bound \begin{equation}\label{eq:linearseq} E \Big (\frac{\hgo}{n-1} \Big ) \geq E(0)+ \frac{\partial E(\gamma)}{\partial \gamma} \Bigg|_{\gamma =0} \frac{\hgo}{n-1}. \end{equation} In order to show that $ E(0)>0$ it suffices to show that $\hat{\kappa}'(s=0) <0$. Taking the derivative of $\hat{\kappa}(s)$ with respect to $s$ and setting $s=0$, we have \begin{align} \hat{\kappa}'(0)&= \E_{P_0} \Bigg [\log \frac{\hat{P}_1}{\hat{P}_0} \Bigg ]\\ &= D(P_0\|\hat{P}_0)- D(P_0\|\hat{P}_1)<0, \end{align} where the last step is by the assumption in \eqref{eq:posdrift}. Finally, by the envelope theorem \cite{Segal} we get \begin{align} \frac{\partial E(\gamma)}{\partial \gamma} \Bigg|_{\gamma =0} = -s^*(\gamma=0), \end{align} where $s^*({\gamma=0})$ is the optimizing value of $s$ in \eqref{eq:lagrangelem} evaluated when $\gamma=0$. (Note that this value is unique since $\hat{\kappa}(s)$ is strictly convex in $s$ \cite{Dembo}.) By the constraint of the optimization problem in \eqref{eq:lagrangelem}, we have $s\geq0$. Also, from $\hat{\kappa}'(0)<0$, we get $s^*({\gamma=0})\neq0$. Therefore, we conclude that $s^*(\gamma=0) >0$ and \begin{align} d&\triangleq { \Bigg | \frac{\partial E(\gamma)}{\partial \gamma} \Big|_{\gamma =0} \Bigg | } = s^*(\gamma=0) > 0. \end{align} Finally, substituting \eqref{eq:linearseq} into \eqref{eq:chernoff} we get \eqref{eq:finiteupper}. Furthermore, \begin{align} \PP_0[\hta_0 \leq n] &\leq \PP_0 \Bigg[ \sum_{i=1}^{k} \log \frac{\hat{P}_0(x_i)}{\hat{P}_1(x_i)} \geq \hgo ~\text{for some}~ k \leq n\Bigg ]\\ &\leq \PP_0 \Big[ kc \geq \hgo ~\text{for some}~ k\leq n \Big]\\ &\leq \PP_0[nc\geq \hgo]. \end{align} Taking $\hgo > nc$ gives $\PP_0[\hta_0 \leq n]=0$. Therefore, as $\hgo\rightarrow \infty$, $\hta_0\rightarrow \infty$ a.s. \end{proof} We now proceed with the proof of the Theorem. We show the result for the type-\RNum{2} error probability; a similar proof holds for the type-\RNum{1} case.
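As a brief aside before the main argument, the exponential decay in \eqref{eq:finiteupper} is visible in simulation. The Python sketch below is illustrative only: the three distributions are arbitrary examples chosen so that the drift $D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)$ is positive, as required by \eqref{eq:posdrift}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
P0     = np.array([0.5, 0.3, 0.2])    # true distribution
P0_hat = np.array([0.45, 0.35, 0.2])  # mismatched H0 model
P1_hat = np.array([0.2, 0.3, 0.5])    # mismatched H1 model
llr = np.log(P0_hat / P1_hat)         # mismatched log-likelihood ratio
gamma0 = 5.0

def stop_time():
    # hat{tau}_0: first time the cumulative mismatched LLR exceeds gamma0
    s, n = 0.0, 0
    while s < gamma0:
        s += llr[rng.choice(3, p=P0)]
        n += 1
    return n

taus = np.array([stop_time() for _ in range(10000)])
for n in (20, 40, 60, 80):
    print(n, (taus >= n).mean())  # roughly geometric decay in n
\end{verbatim}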
The type-\RNum{2} probability of error of the mismatched sequential probability ratio test is \begin{align} \het &=\E_{P_1} \big[ \mathds{1} \{\hat{S}_{\hta} \geq \hgo\} \big ]\\ &=\E_{P_0} \big [ e^{-{S}_{\hta}} \mathds{1} \{\hat{S}_{\hta} \geq \hgo\} \big ], \label{eq:errprobseq1} \end{align} where $S_{\hta}$ is the log-likelihood ratio under no mismatch in \eqref{eq:LLR} evaluated at the time at which the mismatched test stops. Recall the definition of $\hta_{0}$ in \eqref{eq:tau1} and $\hat{R}_{\hgo}=\hat{S}_{\hta_{0}}-\hgo$. Observe that, if $\hat{S}_{\hta} \geq \hgo$, then we have $\hta=\hta_{0}$. Multiplying the exponent in \eqref{eq:errprobseq1} by $\frac{\hat{S}_{\hta_{0}} }{\hat{S}_{\hta_{0}} }$ and substituting $\hro$, we get \begin{align}\label{eq:errprobseq2} \het &= \E_{P_0} \Big [ e^{-{S}_{\hta_{0}}} \mathds{1} \{\hat{S}_{\hta} \geq \hgo\} \Big ] \\ &= \E_{P_0} \Big[ e^{-\frac{{S}_{\hta_{0}}}{\hat{S}_{\hta_{0}}}\cdot\hat{S}_{\hta_{0}} } \mathds{1} \{\hat{S}_{\hta} \geq \hgo\} \Big ] \\ &=\E_{P_0} \Big[ e^{-\frac{{S}_{\hta_{0}}}{\hat{S}_{\hta_{0}}}(\hro+\hgo)} \mathds{1}\{\hat{S}_{\hta} \geq \hgo\} \Big ]. \label{eq:err} \end{align} Let $\mu=\frac{S_{\hta_{0}}}{\hta_{0}}$, $\hat{\mu}=\frac{\hat{S}_{\hta_{0}}}{\hta_{0}}$. By Lemma \ref{lem:finite}, $\hta_{0} \rightarrow \infty$ as $\hgo \rightarrow \infty$ a.s., and therefore by the WLLN \begin{align} \mu &\xrightarrow[]{p} D(P_0\|P_1),\\ \hat{\mu} &\xrightarrow[]{p} D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0). \end{align} Also, since $\hat{\mu}>0$ almost surely, by using the continuous mapping theorem \cite{Resnick} we have \begin{align} \frac{\mu}{\hat{\mu}}= \frac{S_{\hta_{0}}}{\hat{S}_{\hta_{0}}}\xrightarrow[]{p} \frac{D(P_0\|P_1)}{ D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)}. \end{align} Moreover, by Theorem \ref{thm:converge}, $\hro$ converges in distribution to a random variable $\hat{R}_0$ with limiting distribution $\hat{Q}_0$ under $P_0$ (through multiples of $d$ in the lattice case). By Slutsky's theorem \cite{Resnick}, \begin{equation} {\frac{{S}_{\hta_{0}}}{\hat{S}_{\hta_{0}}}\cdot\hro} \xrightarrow[]{d} \frac{D(P_0\|P_1)}{ D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} \hat{R}_0. \end{equation} Thus, letting $\hgo \rightarrow \infty$ in \eqref{eq:err} we get \begin{align} \lim_{\hgo \rightarrow \infty}\het&= \E_{P_0} \Big [ e^{-\frac{D(P_0\|P_1)}{ D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} (\hat{R}_0+\hgo)} \Big ]\\ &=\hat{c}_1 \cdot e^{-\frac{D(P_0\|P_1)}{ D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} \hgo}. \end{align} To prove \eqref{eq:SPRTthresh}, we show the convergence of $\hta_0$ in probability as well as its uniform integrability; from these, we can conclude its convergence in $L^1$ norm (and hence in expectation). Finally, from the convergence of $\hta_0$, we obtain the convergence of $\hta$. First, by the finiteness of $\hta_0$ for every $\hgo$ and the definition of $\hta_0$, there exists a finite $\hta_0$ with probability one such that \begin{align}\label{eq:convergpseq} \hat{S}_{\hta_0-1} < \hgo \leq \hat{S}_{\hta_0}~~~ \text{w.p.}~1.
\end{align} Also, by the WLLN and Lemma \ref{lem:finite} as $\hgo \rightarrow \infty$, we get \begin{align} \frac{\hat{S}_{\hta_0}}{\hta_0} \xrightarrow[]{p} D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0),\label{eq:convergp1}\\ \frac{\hat{S}_{\hta_0-1}}{\hta_0-1} \xrightarrow[]{p} D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0).\label{eq:convergp2} \end{align} Therefore, by \eqref{eq:convergpseq}, \eqref{eq:convergp1}, and \eqref{eq:convergp2} we can conclude that \begin{equation}\label{eq:convP1adv} \frac{\hta_0}{\hgo} \xrightarrow[]{p} \frac{1}{ D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} \end{equation} as $\hgo\rightarrow \infty$. To show the convergence in $L^1$ we only need to prove the uniform integrability of the sequence of random variables $\frac{\hta_0}{ \hgo}$, where $\hta_0$ is a random variable that depends on the parameter $\hgo$. Equivalently, we need to show that \begin{equation}\label{eq:uniform} \lim_{t \rightarrow \infty} \sup_{\hgo} \mathbb{E}_{P_0} \Bigg [\frac{\hta_0}{ \hgo} \mathds{1}\Big \{\frac{\hta_0}{ \hgo} \geq t \Big \} \Bigg] =0. \end{equation} We can upper bound the expectation in \eqref{eq:uniform} as \begin{align} \underbrace{\mathbb{E}_{P_0} \Bigg [\frac{\hta_0- \floor{t \hgo} }{\hgo} \mathds{1}\Big \{{\hta_0} \geq \floor{t \hgo} \Big \} \Bigg]}_{A}+ \underbrace{ t \mathbb{E}_{P_0} \Bigg [ \mathds{1}\Big \{\frac{\hta_0}{ \hgo} \geq t \Big \} \Bigg]}_{B}. \end{align} The second term can be upper bounded by \eqref{eq:finiteupper} as \begin{align} B&= t \PP_0[ \hta_0 \geq t \hgo ] \leq t e^{E(0)} e^{-\hgo(tE(0)-d)}. \end{align} The first expectation can also be written as the following sum \begin{align} A&= \frac{1}{\hgo} \sum_{m=1}^{\infty} \PP_0\big [\hta_0-\floor{t\hgo}\geq m \big ], \end{align} and by \eqref{eq:finiteupper} \begin{align} A\leq \frac{1}{\hgo} t e^{-\hgo (t E(0)-d)} \sum_{m=1}^{\infty} e^{-(m-2) E(0)}. \end{align} Hence, $A$ and $B$ vanish as $t\rightarrow\infty$ for every $\hgo$, giving the uniform integrability of $\frac{\hta_0}{ \hgo}$, and hence convergence in $L^1$ \cite{Bill}, i.e., \begin{equation} \label{eq:convergT1} \lim_{\hgo \rightarrow \infty} \mathbb{E}_{P_0} \Bigg [ \Bigg|\frac{\hta_0}{ \hgo}- \frac{1}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} \Bigg | \Bigg]=0. \end{equation} Finally, we prove the convergence of $\hta$. By \eqref{eq:MMexp1}, \eqref{eq:convP1adv} and the union bound, we obtain \begin{align} \PP_0 \bigg[\bigg|\frac{\hta}{ \hgo}- \frac{1}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} \bigg | \geq \epsilon \bigg] &\leq \PP_0 \bigg[\bigg|\frac{\hta}{ \hgo}- \frac{1}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} \bigg | \geq \epsilon , \hat \phi =0 \bigg] + \PP_0[\hat{\phi}=1 ]\\ &=\PP_0 \bigg[\bigg|\frac{\hta_0}{ \hgo}- \frac{1}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} \bigg | \geq \epsilon\bigg] + \heo, \end{align} which tends to $0$ as $\hgo \rightarrow \infty$, establishing the convergence of $\frac{\hta}{ \hgo}$ in probability. Now, using that $\hta\leq \hta_0$ we have \begin{align}\label{eq:upper1} \mathbb{E}_{P_0} \Bigg [\frac{\hta}{ \hgo} \mathds{1}\Big \{\frac{\hta}{ \hgo} \geq t \Big \} \Bigg] \leq \mathbb{E}_{P_0} \Bigg [\frac{\hta_0}{ \hgo} \mathds{1}\Big \{\frac{\hta_0}{ \hgo} \geq t \Big \} \Bigg]. \end{align} Therefore, the uniform integrability of $\hta_0$ gives the uniform integrability of $\hta$, and hence the convergence of $\frac{\hta}{ \hgo}$ in $L^1$ norm and in expectation, which concludes the proof.
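The limit \eqref{eq:convP1adv} can also be reproduced numerically. The following illustrative sketch (with the same arbitrary example distributions used above) compares $\mathbb{E}_{P_0}[\hta_0]/\hgo$ with $1/(D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0))$ for growing thresholds:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
P0     = np.array([0.5, 0.3, 0.2])
P0_hat = np.array([0.45, 0.35, 0.2])
P1_hat = np.array([0.2, 0.3, 0.5])
llr = np.log(P0_hat / P1_hat)
drift = float(np.sum(P0 * llr))  # D(P0||P1_hat) - D(P0||P0_hat)

def mean_stop(gamma0, n_paths=2000):
    total = 0
    for _ in range(n_paths):
        s, n = 0.0, 0
        while s < gamma0:
            s += llr[rng.choice(3, p=P0)]
            n += 1
        total += n
    return total / n_paths

for g in (5.0, 20.0, 80.0):
    print(g, mean_stop(g) / g, 1.0 / drift)  # ratio approaches 1/drift
\end{verbatim}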
\section{Proof of Theorem \ref{thm:negdrift}}\label{apx:Tnegdrift} \begin{proof} Defining $\hta_1$ similarly to \eqref{eq:tau1}, we have \begin{equation} \hta_{1}=\inf \{n\geq1: \hat{S}_n < -\hgt\}. \end{equation} The probability of making the right decision can be bounded as \begin{align} \PP_0[\hat{\phi}=0] &=\PP_0\big [\hat{S}_n ~ \text{passes } \hgo \text{ before passing } - \hgt \big ] \\ & \leq \sum_{n=1}^{\infty} \PP_0 \big [ \hta_0 \leq n, \hta_1 > n\big]\\ &\leq \sum_{n=1}^{\infty} \min\big\{ \PP_0 [ \hta_0 \leq n ], \PP_0 [\hta_1 > n] \big \}. \end{align} We can bound both terms similarly to the proof of Lemma \ref{lem:finite}, obtaining \begin{align} \PP_0[\hat{\phi}=0]& \leq \sum_{n=1}^{\infty} \min\Big\{ e^{-a\hat{\gamma}_0} e^{-n \tilde{E}(0)}, e^{-a\hat{\gamma}_1} e^{-n \tilde{E}(0)} \Big \} \\ & =\min\Big\{{c_0}e^{-a\hat{\gamma}_0} , {c_1}e^{-a\hat{\gamma}_1} \Big \}, \end{align} where $a>0$ and $\tilde{E}(0) >0$ due to \eqref{eq:negdrift}. Finally, \begin{equation} \heo \geq 1- \min\Big\{{c_0}e^{-a\hat{\gamma}_0} , {c_1}e^{-a\hat{\gamma}_1} \Big \}, \end{equation} which goes to $1$ as $\hat{\gamma}_0, \hat{\gamma}_1$ approach infinity. \end{proof} \section{Proof of Corollary \ref{cor:exp}}\label{apx:Cexp} \begin{proof} Theorem \ref{thm:seqMM} gives an asymptotic expression for the error probability $\hat{\epsilon}_i$ and expected stopping time $\mathbb{E}_{P_i}[\tau]$ of the mismatched sequential probability ratio test for $i\in \{0,1\}$ as a function of the thresholds. To find the largest error exponents of the test as defined in \eqref{eq:tradeseqMM1} and \eqref{eq:tradeseqMM2}, we should find the largest thresholds $\hgo, \hgt$ satisfying the expected stopping time condition, since the type-\RNum{1} and type-\RNum{2} error exponents are increasing functions of $\hgo, \hgt$ by \eqref{eq:MMexp1}, \eqref{eq:MMexp2}. It is easy to check that the thresholds in the corollary are the largest ones satisfying the expected stopping time conditions, which concludes the proof. \end{proof} \section{Proof of Theorem \ref{thm:lowerworstseq}}\label{apx:Tlowerworstseq} We show the result under hypothesis $0$, and similar steps are valid for hypothesis $1$. Observe that \eqref{eq:threshMMM1} can be written as \begin{align} &\hEo = D(P_0\|P_1) \cdot \min\Bigg \{ \frac{D(\hat{P}_1\|\hat{P}_0)}{D(\hat{P}_0\|\hat{P}_1)}, \frac{D(P_1\|\hat{P}_0)-D(P_1\|\hat{P}_1)}{D(P_0\|\hat{P}_1)-D(P_0\|\hat{P}_0)} \Bigg \}. \label{eq:threshMMM1sim} \end{align} From \eqref{eq:worst_case} and \eqref{eq:threshMMM1sim}, we need to compute two minimizations, the first of which over $P_0$.
To this end, we exchange the order of these minimizations and apply a Taylor series expansion to the first term of \eqref{eq:threshMMM1sim} around $P_0=\hat{P}_0$, $P_1=\hat{P}_1$, obtaining \begin{align} \hEo =D(\hat{P}_1\|\hat{P}_0) &+ \min \Big \{ \rho_0 \dv_0^T \thetav_{P_0} +\rho_0 \dv_1^T \thetav_{P_1} , \dv_2^T \thetav_{P_1} \Big \} + o(\| \thetav_{P_0} \|_{\infty}+\| \thetav_{P_1} \|_{\infty}), \label{eq:expansion} \end{align} where for $i=0,1$, \begin{align} \thetav_{P_i}&= \big(P_i(x_1)-\hat{P}_i(x_1),\dotsc,P_i(x_{|\Xc|})-\hat{P}_i(x_{|\Xc|})\big)^T,\\ \dv_0&= \bigg( 1+\log \frac{\hat{P}_{0}(x_1)}{\hat{P}_1(x_1)},\dotsc,1+\log \frac{\hat{P}_{0}(x_{|\Xc|})}{\hat{P}_1(x_{|\Xc|})} \bigg)^T,\\ \dv_1&= \bigg( -\frac{\hat{P}_{0}(x_1)}{\hat{P}_1(x_1)},\dotsc, -\frac{\hat{P}_{0}(x_{|\Xc|})}{\hat{P}_1(x_{|\Xc|})}\bigg)^T,\\ \dv_2&= \bigg( 1+\log \frac{\hat{P}_{1}(x_1)}{\hat{P}_0(x_1)},\dotsc,1+\log \frac{\hat{P}_{1}(x_{|\Xc|})}{\hat{P}_0(x_{|\Xc|})} \bigg)^T+\rho_0\dv_1, \end{align} and $\rho_0=\frac{D(\hat{P}_1\|\hat{P}_0)}{D(\hat{P}_0\|\hat{P}_1)}$. By substituting the expansion \eqref{eq:expansion} into \eqref{eq:worst_case} we obtain \begin{align}\label{eq:approxworst} \underline{\hat{E}}_0(r_0) &= D(\hat{P}_1\|\hat{P}_0) + \min \Big \{ \rho_0 \min_{\substack{P_0 \in \Bc(\hat P_0,r_0)\\ P_1 \in \Bc(\hat P_1,r_1)}} \dv_0^T \thetav_{P_0} + \dv_1^T \thetav_{P_1},\min_{\substack{P_0 \in \Bc(\hat P_0,r_0)\\ P_1 \in \Bc(\hat P_1,r_1)}} \dv_2^T \thetav_{P_1} \Big \} + o(\| \thetav_{P_0} \|_{\infty}+\| \thetav_{P_1} \|_{\infty}). \end{align} Now, we further approximate the outer minimization constraint in \eqref{eq:worst_case}, or, equivalently, the minimizations over the divergence balls in \eqref{eq:approxworst} to get \begin{align}\label{eq:worstapproxopt} \underline{\hat{E}}_0(r_0) &= D(\hat{P}_1\|\hat{P}_0) + \min \Bigg \{ \rho_0 \min_{\substack{P_0 \in \underline\Bc(\hat P_0,r_0)\\ P_1 \in \underline\Bc(\hat P_1,r_1)}} \dv_0^T \thetav_{P_0} + \dv_1^T \thetav_{P_1}, \min_{\substack{P_0 \in \underline\Bc(\hat P_0,r_0)\\ P_1 \in \underline\Bc(\hat P_1,r_1)}} \dv_2^T \thetav_{P_1} \Bigg \} + o(\| \thetav_{P_0} \|_{\infty}+\| \thetav_{P_1} \|_{\infty}), \end{align} where \begin{equation} \underline\Bc(\hat P_i,r_i) = \big\{\thetav_i \in \R^{|\Xc|}: \thetav_{P_i}^T \Jm_i \thetav_{P_i} \leq2 r_{i} ,\onev^T\thetav_{P_i}=0 \big\}, \end{equation} and \begin{equation} \Jm_i=\diag\bigg( \frac{\alpha}{\hat{P}_i(x_1)},\dotsc,\frac{\alpha}{\hat{P}_i(x_{|\Xc|})}\bigg) \end{equation} is the Fisher information matrix corresponding to hypothesis $i$. Next, optimizing over $P_0$ and $P_1$ as in the proof of Theorem \ref{thm:lowerworst} and substituting $P_i$ with $\hat{P}_{i}$, we get \eqref{eq:worstapproxseq}. \section{Proof of Theorem \ref{thm:adverLRT}}\label{apx:adverLRT} Assume that $\hat{Q}$ is fixed. Let \begin{align} \underline{\hat{E}}_0(\hat{Q},r)= \min_{\substack{ Q \in \mathcal{B}(\hat{Q},r) }} D(Q\|P_0).
\end{align} To derive the Taylor expansion of the optimization, we expand $D(Q\|\hat{Q})$ and $D(Q\|P_0)$ around $Q=\hat{Q}$ to get \begin{equation} \underline{\hat{E}}_0 (\hat{Q},r)= \min_{\substack{Q: \frac{1}{2} \thetav_{\hat{Q}}^T \Jm(\hat{Q}) \thetav_{\hat{Q}} \leq r \\ \onev^T \thetav_{\hat{Q}}=0 }} D(\hat{Q}\|P_0) + \thetav_{\hat{Q}}^{T} \nabla {E}_0 +o(\| \thetav_{\hat{Q}} \|_{\infty}) \end{equation} where \begin{align} \nabla E_0 &= \bigg(1+\log \frac{\hat{Q}(x_1)}{{P}_0(x_1)},\dotsc, 1+\log \frac{\hat{Q}(x_{|\Xc|})}{{P}_0(x_{|\Xc|})}\bigg)^T,\\ \thetav_{\hat{Q}} &= \big(Q(x_1)-\hat{Q}(x_1),\dotsc,Q(x_{|\Xc|})-\hat{Q}(x_{|\Xc|})\big)^T,\\ \Jm&=\diag\bigg( \frac{\alpha}{\hat{Q}(x_1)},\dotsc,\frac{\alpha}{\hat{Q}(x_{|\Xc|})}\bigg). \end{align} Solving this convex optimization, we obtain \begin{equation} \underline{\hat{E}}_0 (\hat{Q},r)= D(\hat{Q}\|P_0) - \sqrt{ \frac{2}{\alpha}\text{Var}_{\hat{Q}} \Big( \log \frac{\hat{Q}}{P_0} \Big ) r} +o(\sqrt{r}). \end{equation} Next, minimizing over $\hat{Q}$ we have \begin{equation} \underline{\hat{E}}_0 (r)= \min_{\hat{Q} \in \mathcal{Q}_0 (\hat{Q}) } D(\hat{Q}\|P_0) - \sqrt{ \frac{2}{\alpha}\text{Var}_{\hat{Q}} \Big( \log \frac{\hat{Q}}{P_0} \Big ) r} +o(\sqrt{r}). \end{equation} We can expand $\underline{\hat{E}}_0$ around $r=0$ as \begin{equation} \underline{\hat{E}}_0 (r)= \underline{\hat{E}}_0 (r=0)+ \frac{\partial \underline{\hat{E}}_0 (r) }{\partial \sqrt{r}} \Bigg|_{r=0} \sqrt{r} +o(\sqrt{r}). \end{equation} By the envelope theorem \cite{Segal}, we have that \begin{equation}\label{eq:lrtsensamp} \frac{\partial \underline{\hat{E}}_0 (r) }{\partial \sqrt{r}} \Bigg|_{r=0}= -\sqrt{ \frac{2}{\alpha}\text{Var}_{{Q}_\lambda} \Big( \log \frac{{Q}_\lambda}{P_0} \Big ) }, \end{equation} where we used the fact that $\hat{Q}= Q_{\lambda}$ when $r=0$, where $Q_\lambda$ is the optimizing distribution in \eqref{eq:tilted}. Finally, noting that $\underline{\hat{E}}_0 (r=0)=E_0$ concludes the proof. \section{Proof of Corollary \ref{cor:samp_dist}} \label{apx:samp_dist} We prove the result for $i=0$; the same holds for the type-\RNum{2} sensitivity. We can write the sample sensitivity as \begin{equation}\label{eq:thetadv} \theta_0^{\rm adv}= \sum_{a \in \Xc } P_0(a) \frac{Q_\lambda(a)}{P_0(a)} \log ^2 \Bigg ( \frac{Q_\lambda(a)}{P_0(a)} \Bigg ) -D^2(Q_\lambda\|P_0). \end{equation} For every $x\geq0$, we can show the following inequality \begin{equation} x\log^2 x \leq (x-1)^2. \end{equation} Using this we have \begin{equation} \theta_0^{\rm adv}\leq \sum_{a \in \Xc } P_0(a) \Bigg(\frac{Q_\lambda(a)}{P_0(a)}-1 \Bigg)^2 =\chi^2(Q_\lambda\|P_0)=\theta_0^{\rm dist}. \end{equation} To prove the inequality, let $f_1(x)= (x-1)^2 - x\log^2 x$. Taking the second derivative, we have \begin{equation} f_1''(x)=2\Big(1-\frac{\log x}{x} -\frac{1}{x}\Big) \geq 0, \end{equation} where we used $\log x \leq x-1$. Hence, $f_1(x)$ is convex and the first-order condition is sufficient to find the minimum of $f_1(x)$. Setting $x=1$ we get $f_1(1)=0$, $f_1'(1)=0$, and therefore $f_1(x)\geq 0$. To prove the lower bound, for every $x\geq 0$ we have \begin{equation} x-1 \leq x \log x. \end{equation} Applying this inequality to \eqref{eq:thetadv} we get \begin{align} \theta_0^{\rm adv} \geq& \sum_{a \in \Xc } P_0(a) \Bigg (\frac{Q_\lambda(a)}{P_0(a)}\Bigg )^{-1} \Bigg(\frac{Q_\lambda(a)}{P_0(a)}-1 \Bigg)^2 -D^2(Q_\lambda\|P_0)\\ \geq & \Bigg(\min_i\frac{P_0(i)}{Q_\lambda(i)} \Bigg ) \theta_0^{\rm dist} -E_0^2, \end{align} which concludes the proof.
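The two elementary inequalities above, and the resulting ordering $\theta_0^{\rm adv}\leq\theta_0^{\rm dist}$, are easily confirmed numerically. The Python sketch below is illustrative only; the Dirichlet draws stand in for arbitrary pairs of distributions, and the small tolerances absorb floating-point error:
\begin{verbatim}
import numpy as np

# pointwise inequalities on a grid of x > 0
x = np.linspace(1e-6, 10.0, 100000)
assert np.all(x * np.log(x) ** 2 <= (x - 1) ** 2 + 1e-9)  # x log^2 x <= (x-1)^2
assert np.all(x - 1 <= x * np.log(x) + 1e-9)              # x - 1 <= x log x

rng = np.random.default_rng(4)
for _ in range(1000):
    P0 = rng.dirichlet(np.ones(6))
    Q  = rng.dirichlet(np.ones(6))
    ratio = Q / P0
    D = float(np.sum(Q * np.log(ratio)))                  # D(Q||P0)
    theta_adv  = float(np.sum(P0 * ratio * np.log(ratio) ** 2)) - D ** 2
    theta_dist = float(np.sum(P0 * (ratio - 1) ** 2))     # chi^2(Q||P0)
    assert theta_adv <= theta_dist + 1e-9
print("theta_adv <= theta_dist on all sampled pairs")
\end{verbatim}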
\section{Proof of Theorem \ref{thm:advGLRT}} \label{apx:advGLRT} We show the result under the first hypothesis; similar steps are valid for the second hypothesis. Unlike the likelihood ratio test, we first consider the minimization over $P_0$ for a fixed $Q$ and we perform a Taylor expansion of $D(Q\|P_0)$ around $Q=\hat{Q}$ to get \begin{equation}\label{eq:optsenH} \underline{\hat{E}}_0 (r)= \min_{\substack{ D(\hat{Q}\|P_0)= \gamma \\ Q \in \Bc(\hat{Q},r) }} D(\hat{Q}\|P_0) + \thetav_{Q}^{T} \nabla {E}_0 +o(\| \thetav_{Q} \|_{\infty}), \end{equation} where \begin{align} \nabla E_0 &= \bigg( 1+\log \frac{\hat{Q}(x_1)}{{P}_0(x_1)},\dotsc, 1+\log \frac{\hat{Q}(x_{|\Xc|})}{P_0(x_{|\Xc|})}\bigg)^T,\\ \thetav_{Q} &= \big(Q(x_1)-\hat{Q}(x_1),\dotsc,Q(x_{|\Xc|})-\hat{Q}(x_{|\Xc|})\big)^T. \end{align} We have replaced the inequality constraint with an equality since the optimal value of the minimization is attained at the boundary. Next, by solving \eqref{eq:optsenH} over $Q$ for fixed $\hat{Q}$, we get \begin{align} \underline{\hat{E}}_0 (r)&= \min_{\substack{ \hat{Q}: D(\hat{Q}\|P_0) = \gamma}} D(\hat{Q}\|P_0) - \sqrt{\frac{2}{\alpha} {\rm Var}_{\hat{Q}} \Bigg[\log \frac{\hat{Q}(X)}{P_0(X)} \Bigg] r} + o(\sqrt{r}) \\ &= \gamma- \max_{\substack{ \hat{Q}: D(\hat{Q}\|P_0) = \gamma}} \sqrt{\frac{2}{\alpha} {\rm Var}_{\hat{Q}} \Bigg[\log \frac{\hat{Q}(X)}{P_0(X)} \Bigg] r} + o(\sqrt{r}). \end{align} Similarly, for the type-\RNum{2} error exponent, we have \begin{align} \underline{\hat{E}}_1 (r)&= \min_{\substack{ \hat{Q}: D(\hat{Q}\|P_0) \leq \gamma}} D(\hat{Q}\|P_1) - \sqrt{\frac{2}{\alpha} {\rm Var}_{\hat{Q}} \Bigg[\log \frac{\hat{Q}(X)}{P_1(X)} \Bigg] r} + o(\sqrt{r}). \end{align} Next, by the envelope theorem \cite{Segal} and similarly to the proof of Theorem \ref{thm:adverLRT} we get \eqref{eq:Hoefadver1}. \section{Proof of Theorem \ref{thm:SPRTadver}} \label{apx:SPRTadver} Assume that the samples are drawn from $P_0$. First, we find a bound on the probability of error as a function of the threshold. The type-\RNum{1} probability of error of the sequential probability ratio test under perturbed samples with type $\Th'$ can be upper bounded by \begin{align} \epsilon_0 \leq \sum_{t=1}^{\infty} \PP_0\Big [ & t( D(\Th' \| {P}_0)-D(\Th'\| {P}_1)) \geq \tilde{\gamma}_1 , \Th \in \Bc(\Th',r) \Big ]. \end{align} By the method of types we have \begin{align}\label{eq:uppersensamp} \epsilon_0 &\leq \sum_{t=1}^{\infty} \sum_{\substack{\hat{Q} \in \hat{\mathcal{Q}}_{\tilde{\gamma}_1}(t)\\ Q \in \Bc(\hat{Q},r)} } e^{ -t D(Q\| P_0) }\\ &\leq \sum_{t=1}^{\infty} (t+1)^{|\mathcal{X}|} e^{- \underline{E}_{ \tilde{\gamma}_1}(r,t)}, \end{align} where \begin{equation}\label{eq:set} \hat{\mathcal{Q}}_{\gamma}(t)=\Big \{\hat{Q}: D(\hat{Q}\| {P}_0)-D(\hat{Q}\| {P}_1) \geq \frac{\gamma}{t} \Big \}, \end{equation} \begin{equation}\label{eq:Eadverseq} \underline{E}_{\gamma}(r,t)=t \min_{\hat{Q} \in \hat{\mathcal{Q}}_{{\gamma}}(t)} \min_{ Q \in \Bc(\hat{Q},r) } D(Q\|P_0). \end{equation} Let $ \tilde{\gamma}_1=\gamma_1+\frac{ |\Xc|+2}{\lambda^*_1} \log (t+1)$, where $\lambda_1^*$ is the optimal Lagrange multiplier corresponding to the constraint in \eqref{eq:set} of the optimization in \eqref{eq:Eadverseq} when $\gamma=\gamma_1$. We expand $\underline{E}_{ \tilde{\gamma}_1}(r,t)$ around $ \tilde{\gamma}_1=\gamma_1$.
Similarly to Lemma \ref{lem:convex}, it can be shown that $\underline{E}_{\gamma}(r,t)$ is convex in $\gamma$, hence \begin{equation} \underline{E}_{\tilde{\gamma}_1}(r,t) \geq \underline{E}_{\gamma_1}(r,t) + \frac{\partial \underline{E}_{\gamma}(r,t) }{\partial \gamma}\Bigg |_{\gamma=\gamma_1} \frac{ |\Xc|+2}{\lambda^*_1} \log (t+1). \end{equation} By the envelope theorem we have \begin{equation} \frac{\partial \underline{E}_{\gamma}(r,t) }{\partial \gamma}\Bigg |_{\gamma=\gamma_1} ={\lambda_1^*} \geq 0. \end{equation} Furthermore, the inequality is strict if $\frac{\gamma_1}{t}\geq -D(P_0\|P_1)$; hence, by choosing $\gamma_1\geq0$, this condition is satisfied for every $t$. Hence we can upper bound \eqref{eq:uppersensamp} by \begin{align} \epsilon_0 & \leq \sum_{t=1}^{\infty} (t+1)^{-2} e^{- \underline{E}_{\gamma_1}(r,t)},\\ & \leq \frac{\pi^2}{6} e^{- \min_{t\geq1} \underline{E}_{\gamma_1}(r,t)}. \label{eq:uppersensamp_app} \end{align} Next, by Taylor expanding $\underline{E}_{\gamma_1}(r,t)$, we have \begin{equation}\label{eq:Taylorseqsamp} \underline{E}_{\gamma_1}(r,t)= \underline{E}_{\gamma_1}\big(r=0,t\big)+\frac{\partial \underline{E}_{\gamma_1}(r,t) }{\partial \sqrt{r}}\Bigg |_{r=0,t} \sqrt{r} +o(\sqrt{r}). \end{equation} Also, using \eqref{eq:lrtsensamp} in the proof of Theorem \ref{thm:adverLRT}, we get \begin{equation}\label{eq:senseqsam} \frac{\partial \underline{E}_{\gamma_1}(r,t) }{\partial \sqrt{r}}\Bigg |_{r=0,t}=-t \sqrt{\frac{2}{\alpha}\text{Var}_{Q_{\lambda(t)}} \Bigg(\log \frac{Q_{\lambda(t)}}{{P}_0} \Bigg)}, \end{equation} where $Q_{\lambda(t)}$ is the optimizing distribution in \eqref{eq:tilted} for the case where $\gamma=\frac{\gamma_1}{t}$ in \eqref{eq:constraint1}. Let \begin{equation} \underline{E}_{\gamma_1}(r)=\min_{t \geq 0} \underline{E}_{\gamma_1}(r,t). \end{equation} From the first-order condition, we have \begin{align} \frac{d \underline{E}_{\gamma_1}(r) }{d \sqrt{r}} &=\frac{\partial \underline{E}_{\gamma_1}(r,t^*(r)) }{\partial \sqrt{r}} + \frac{\partial \underline{E}_{\gamma_1}(r,t^*(r))}{\partial t^*(r)} \cdot \frac{d t^*(r)}{d \sqrt{r}}\\ &=\frac{\partial \underline{E}_{\gamma_1}(r,t^*(r)) }{\partial \sqrt{r}}. \end{align} Hence \begin{equation}\label{eq:envseq} \frac{d \underline{E}_{\gamma_1}(r) }{d \sqrt{r}}\Bigg |_{r=0}= \frac{\partial \underline{E}_{\gamma_1}(r,t^*(r)) }{\partial \sqrt{r}} \Bigg |_{r=0,t^*(r=0)}. \end{equation} To find $t^*(r=0)$, note that $D({Q}\| {P}_1)\geq 0$, hence \begin{align} \underline{E}_{\gamma_1}(r=0,t)&=\min_{{Q}: D({Q}\| {P}_0) \geq \frac{\gamma_1}{t}+D({Q}\| {P}_1) } tD(Q\|P_0) \label{eq:optime}\\ &\geq \gamma_1. \end{align} Letting $\gamma_1=nD(P_1\|P_0)$, $t=n$ will achieve this minimum. Additionally, $t^*(r=0)=n$ is the unique solution. To see this, we can write the optimization in \eqref{eq:optime} in dual form as \begin{equation} \underline{E}_{\gamma_1}(r=0,t)= \max_{\lambda \geq 0} \gamma_1\lambda -t \log \Big ( \sum_{x\in \Xc} P_0^{1-\lambda}(x) P_1^{\lambda}(x) \Big ). \end{equation} Since $\underline{E}_{\gamma_1}(r=0,t)$ is the supremum of functions linear in $t$, it is convex in $t$.
Also, by the envelope theorem, we have \begin{equation} \frac{\partial \underline{E}_{\gamma_1}(r=0,t)}{\partial t}=- \log \Big ( \sum_{x\in \Xc} P_0^{1-\lambda^*}(x) P_1^{\lambda^*}(x) \Big ), \end{equation} and setting this to zero, we conclude that the first-order condition is satisfied only if $\lambda=0$ or $\lambda=1$, i.e., $Q_\lambda=P_0$ or $Q_\lambda=P_1$ should be the optimizer in \eqref{eq:optime}; it is clear that only $\frac{\gamma_1}{t}=D(P_1\|P_0)$ can satisfy this condition, which shows the uniqueness of the solution. Then, by \eqref{eq:senseqsam}, \eqref{eq:envseq} and substituting $t^*(r=0)=n$, we obtain \begin{equation}\label{eq:firstder} \frac{d \underline{E}_{\gamma_1}(r) }{d \sqrt{r}}\Bigg|_{r=0}=-n\sqrt{\frac{2}{\alpha}\text{Var}_{P_1} \Bigg(\log \frac{P_1}{{P}_0} \Bigg)}. \end{equation} Also, we have \begin{equation}\label{eq:zeroder} \underline{E}_{\gamma_1}(r=0)=\underline{E}_{\gamma_1}\big(r=0,t^*(r=0)\big)=nD(P_1\|P_0). \end{equation} Finally, by \eqref{eq:firstder}, \eqref{eq:zeroder}, and Taylor expanding $\underline{E}_{\gamma_1}(r)$ around $r=0$ as a function of $\sqrt{r}$, we get \begin{align}\label{eq:seqsampsenexp} \epsilon_0 \leq c \cdot e^{- n\Big (D(P_1\|P_0)-\sqrt{\frac{2}{\alpha}\text{Var}_{P_1} \big(\log \frac{P_1}{{P}_0} \big) r } \Big )}, \end{align} where $c$ is a positive constant. Next, we find the worst-case expected stopping times $ \mathbb{E}_{P_0}[\underline{\hta}]$, $\mathbb{E}_{P_1}[\underline{\hta}]$. We can write the accumulated log-likelihood ratio $\hat{S}_{n}$ evaluated at adversarial samples with type $\Th'$ as \begin{equation} \frac{\hat{S}_{n}}{n}=\frac{S_{n}}{n}+ \thetav_{\Th}^{T} \nabla \hat{S}, \end{equation} where \begin{align} \nabla \hat{S} &= \bigg( \log \frac{P_{0}(x_1)}{P_1(x_1)},\dotsc, \log \frac{P_{0}(x_{|\Xc|})}{P_1(x_{|\Xc|})}\bigg)^T,\\ \thetav_{\Th} &= \big(\Th'(x_1)-\Th(x_1),\dotsc,\Th'(x_{|\Xc|})-\Th(x_{|\Xc|})\big)^T, \end{align} and $\Th$ is the type of the original samples at time $n$. Assume that $\Th$ is fixed and the adversary tries to maximize the stopping time or, equivalently, to reduce $\hat{S}_n$ under the first hypothesis. Therefore, letting $\underline{\hat{S}}_n= \min_{ \hat{Q}: d(\hat{Q},\Th)\leq r}\hat{S}_n $ we have \begin{align} \frac{\underline{\hat{S}}_n}{n}&=\frac{S_n}{n}+ \min_{ \hat{Q}: d(\hat{Q},\Th)\leq r} \thetav_{\Th}^{T} \nabla \hat{S}\\ &=\frac{S_n}{n}- \sqrt{\frac{2}{\alpha}\text{Var}_{Q} \Bigg(\log \frac{P_0}{{P}_1} \Bigg)r } +o(\sqrt{r}). \end{align} Let \begin{align}\label{eq:tau1a} \underline{\hta}_{0}=\inf \{n\geq1: {\underline{\hat{S}}_n}& \geq\tilde{ \gamma}_0 \} \end{align} be the worst-case stopping time under the adversarial perturbation. Similarly to the proof of Theorem \ref{thm:seqMM}, it is easy to show that the worst-case stopping time $\underline{\hta}_{0}$ tends to infinity as $\tilde{\gamma}_0\rightarrow \infty$, and also that for every finite $\tilde{\gamma}_0$ the stopping time is finite with probability one. Let $\tilde{\gamma}_0 =\gamma_0 +\frac{ |\Xc|+2}{\lambda^*_2} \log (t+1)$, where $\lambda_2^*$ is the optimal Lagrange multiplier defined similarly to $\lambda_1^*$ for the type-\RNum{2} error exponent.
By the WLLN and the continuous mapping theorem, as $\gamma_0 \rightarrow \infty$ we have \begin{align} \frac{\underline{\hat{S}}_{\underline{\hta}_{0}}}{\underline{\hta}_{0}} \xrightarrow[]{p} D(P_0\|P_1)- \sqrt{\frac{2}{\alpha}\text{Var}_{P_0} \Bigg(\log \frac{P_0}{{P}_1} \Bigg) r } +o(\sqrt{r}),\label{eq:convergp1samp}\\ \frac{\underline{\hat{S}}_{\underline{\hta}_{0}-1}}{\underline{\hta}_{0}-1} \xrightarrow[]{p} D(P_0\|P_1)- \sqrt{\frac{2}{\alpha}\text{Var}_{P_0} \Bigg(\log \frac{P_0}{{P}_1} \Bigg)r } +o(\sqrt{r}). \label{eq:convergp2samp} \end{align} It is easy to show that there exists a finite $\underline{\hta}_{0}$ with probability one such that \begin{align}\label{eq:convergp} \underline{\hat{S}}_{\underline{\hta}_{0}-1} \leq \tilde{\gamma}_0 < \underline{\hat{S}}_{\underline{\hta}_{0}}~~~ \text{with probability } 1. \end{align} Therefore, from \eqref{eq:convergp1samp}, \eqref{eq:convergp2samp}, and \eqref{eq:convergp}, we conclude that \begin{equation}\label{eq:convP1} \frac{\underline{\hta}_{0}}{\gamma_0} \xrightarrow[]{p} \Bigg ( D(P_0\|P_1)- \sqrt{\frac{2}{\alpha}\text{Var}_{P_0} \Bigg(\log \frac{P_0}{{P}_1} \Bigg)r } +o(\sqrt{r}) \Bigg )^{-1}, \end{equation} as $\gamma_0 \rightarrow \infty$. Similarly to the proof of Theorem \ref{thm:seqMM}, we can show the convergence in expectation by proving the uniform integrability of $\frac{\underline{\hta}_0}{ \gamma_0}$ as $\gamma_0 \rightarrow \infty$, as well as the convergence of $\frac{\underline{\hta}}{\gamma_0}$ by using the convergence of $\frac{\underline{\hta}_{0}}{\gamma_0}$. Hence \begin{align} \mathbb{E}_{P_0}[\underline{\hta}]&=\frac{ \gamma_0}{D(P_0\|P_1)}+\frac{\gamma_0 \sqrt{\frac{2}{\alpha}\text{Var}_{P_0} \Big(\log \frac{P_0}{{P}_1} \Big)r }}{D^2(P_0\|P_1)} +o(1)+o(\sqrt{r}),\\ \mathbb{E}_{P_1}[\underline{\hta}]&=\frac{ \gamma_1}{D(P_1\|P_0)}+\frac{\gamma_1 \sqrt{\frac{2}{\alpha}\text{Var}_{P_1} \Big(\log \frac{P_1}{{P}_0} \Big)r }}{D^2(P_1\|P_0)} +o(1)+o(\sqrt{r}). \end{align} Finally, letting $\gamma_0=nD(P_0\|P_1)$, $\gamma_1=nD(P_1\|P_0)$ we get \begin{align} \label{eq:SPRTadvtime} \mathbb{E}_{P_0}[\underline{\hta}]&=n+\frac{ \sqrt{\frac{2}{\alpha}\text{Var}_{P_0} \Big(\log \frac{P_0}{{P}_1} \Big)r }}{D(P_0\|P_1)}n +o(1)+o(\sqrt{r}),\\ \mathbb{E}_{P_1}[\underline{\hta}]&=n+\frac{ \sqrt{\frac{2}{\alpha}\text{Var}_{P_1} \Big(\log \frac{P_1}{{P}_0} \Big)r }}{D(P_1\|P_0)}n +o(1)+o(\sqrt{r}), \end{align} and using \eqref{eq:seqsampsenexp} the worst-case error exponents satisfy \begin{equation} \underline{\hat{E}}_0 (r) \underline{\hat{E}}_1 (r) \geq D(P_0\|P_1)D(P_1\|P_0) \Bigg (1- \frac{ 2\sqrt{\frac{2}{\alpha}\text{Var}_{P_0} \big(\log \frac{P_0}{{P}_1} \big)r }}{D(P_0\|P_1)}-\frac{2 \sqrt{\frac{2}{\alpha}\text{Var}_{P_1} \big(\log \frac{P_1}{{P}_0} \big)r }}{D(P_1\|P_0)}\Bigg) +o(\sqrt{r}), \end{equation} which concludes the proof. \bibliographystyle{ieeetr}
\section{Introduction} Duality is an important phenomenon in quantum field theory allowing one to relate two different theories. One example in $(2+1)D$~\cite{Townsend:1983xs,nieu} is the equivalence between the self-dual (SD) model, which does not possess gauge invariance, and the gauge-invariant Maxwell-Chern-Simons (MCS) model~\cite{Deser:1981wh}. Different aspects of this equivalence were studied in the literature, see for example~\cite{Karlhede:1986qd,Fradkin:1994tt,Bralic:1995ip,Banerjee:1995yf,Banerjee:1996sp,Gomes:1997mf,Anacleto:2001rp,Minces:1999tp}. The most important results of these papers were to establish the mapping between a massive Thirring model and the Maxwell-Chern-Simons theory, and between the self-dual model and the Maxwell-Chern-Simons theory. The equivalence was also studied in the supersymmetric counterparts of the SD and MCS models, both in the free case~\cite{Karlhede:1986qf} as well as in the presence of interactions with a scalar matter superfield~\cite{Ferrari:2006vy}. However, as we will argue shortly, there remain some delicate intricacies which motivated us to reexamine the duality in the supersymmetric case. In the present decade, considerable interest has been devoted to the study of field theories in noncommutative spacetime and the possibility of Lorentz symmetry violation, mainly due to their relevance to quantum gravity. In this context, the duality in a noncommutative spacetime was considered in~\cite{Gomes:2008pi}, and in the presence of Lorentz violation in~\cite{Furtado:2008gs}. The duality between the models can in principle be proved within two frameworks. The first of them is the gauge embedding method~\cite{Anacleto:2001rp,Ferrari:2006vy}, whose essence consists in the extension of the self-dual model to a gauge theory by adding to its Lagrangian carefully chosen terms that vanish on-shell. The equivalence of the resulting gauge model and the starting SD theory can be seen by comparing their equations of motion, and can also be tested at the quantum level. The second framework is the master action method, used for example in~\cite{nieu,Gomes:1997mf}, based on some primordial action (the master action) involving both the MCS and SD fields, coupled to some matter. Integration of this master action over the MCS field yields the SD action, whereas integration over the SD field produces the MCS action, with appropriate couplings to the matter in both cases. Proceeding one step further, one can integrate over the remaining SD field in the first case, or over the MCS field in the second, finding the same effective self-interaction for the matter in both situations. When the SD field is coupled to bosonic matter, one complication arises, in the sense that the model is actually equivalent to a ``modified'' MCS theory, with a field-dependent factor in front of the Maxwell term~\cite{Anacleto:2001rp}. The source of this complication is essentially the appearance of quartic vertices involving the matter and the vector fields. When considering the duality in the supersymmetric case, the most natural matter supermultiplet is represented by a scalar superfield, which also couples to the vector (fermionic) superfields with a quartic vertex, so the same difficulty arises: the supersymmetric SD model is equivalent to a modified MCS theory~\cite{Ferrari:2006vy}.
The presence of the quartic vertices also precludes an extension of the proof of the duality for noncommutative theories (which, however, have been studied in the context of the Seiberg-Witten map, see for example~\cite{nccs,Harikumar:2005ry}). One might wonder whether an interaction with a fermionic superfield, which does not induce a quartic vertex in the classical action, could make the study of the duality more transparent, and the aim of this work is to show that this is so, at least in the commutative case. The price to pay is that the fermionic matter superfield we have to introduce in such a study describes a non-minimal supersymmetric multiplet, involving four bosonic and four fermionic degrees of freedom. The structure of this work is as follows. In Section~\ref{classical}, we present the master action, and use the equations of motion to establish the duality at the classical level. In Section~\ref{quantum}, we study the duality at the quantum level, by inspecting the generating functionals of the SMCS and SSD theories. All of this is done for quite general couplings; some particular cases are discussed in Section~\ref{instances}. In Section~\ref{matter}, the physical content of the fermionic matter superfield introduced by us is made explicit. In the Summary, the results are discussed; in particular, we comment on the possible extension of our work to the noncommutative spacetime. \section{The duality at the classical level}\label{classical} As a first step, we introduce the following master Lagrangian describing the interaction of a spinorial matter superfield $\Psi^\alpha$ with the spinor superfields $f_{\alpha}$ (which will be further identified with the self-dual superfield) and $A_{\alpha}$ (which will be further identified with the Maxwell-Chern-Simons superfield), \begin{eqnarray}\label{eq1} \mathcal{L}_{\rm master}=-\frac{m^2}{2}f^{\alpha}f_{\alpha}+m~f^{\alpha}W_{\alpha} +\frac{m}{2}A^{\alpha}W_{\alpha} +k^{\alpha}f_{\alpha}+j^{\alpha}A_{\alpha}+\mathcal{L}_{M}(\Psi)~, \end{eqnarray} \noindent where $\mathcal{L}_M(\Psi)$ is the quadratic Lagrangian for the spinor matter superfield $\Psi^{\alpha}$; $j^{\alpha}$ and $k^{\alpha}$ are currents depending on this superfield. Explicit forms for $\mathcal{L}_M(\Psi)$ and the currents will be presented later; at the moment, we can say that $j^{\alpha}$ is necessarily conserved ($D_{\alpha}j^{\alpha}=0$) due to gauge invariance. Here $W_{\alpha}\equiv\frac{1}{2}D^{\beta}D_{\alpha}A_{\beta}$ is the gauge invariant superfield strength constructed from the superfield $A_{\alpha}$. The Lagrangian $\mathcal{L}_{\rm master}$ is the natural superfield generalization of the one used in~\cite{Gomes:1997mf}, with the notations and conventions of~\cite{Gates:1983nr}. The equations of motion for the $A^\alpha$ and $f^\alpha$ superfields derived from Eq.~(\ref{eq1}) can be used to obtain the duality at the classical level. Varying the action $\int d^5 z\,\mathcal{L}_{\rm master}$ with respect to $f^\alpha$ we obtain \begin{equation}\label{ident1} f_{\alpha} =\frac{1}{m^{2}}k_{\alpha}+\frac{1}{m}W_{\alpha}\,, \end{equation} \noindent which, inserted in Eq.~(\ref{eq1}), yields $\mathcal{L}_{\rm master} = \mathcal{L}_{\rm SMCS}$, with \begin{eqnarray}\label{eq7a} \mathcal{L}_{\rm SMCS}&=&\frac{1}{2}W^{\alpha}W_{\alpha} +\frac{m}{2}A^{\alpha}W_{\alpha}-\frac{\alpha}{4}(D^{\alpha}A_{\alpha})^2 \nonumber\\ &&+\left(j^{\alpha}+\frac{1}{2m}D^{\beta}D^{\alpha}k_{\beta}\right)A_{\alpha} +\frac{1}{2m^2}k^{\alpha}k_{\alpha}+\mathcal{L}_{M}(\Psi)~.
\end{eqnarray} \noindent This last Lagrangian describes the supersymmetric Maxwell-Chern-Simons (SMCS) field coupled to the matter through the ``minimal'' coupling $A^{\alpha}j_{\alpha}$, plus a ``magnetic'' coupling $\frac{1}{2m}A^{\alpha}D^{\beta}D_{\alpha}k_{\beta}=\frac{1}{m}W^\alpha k_\alpha$, and a Thirring-like self-interaction $\frac{1}{2m^2}k^{\alpha}k_{\alpha}$ of the spinorial matter superfield. Varying the master action with respect to $A^\alpha$ provides us with \begin{equation} \label{ident2} W_\alpha + \Omega_\alpha + j_{\alpha} = 0\,, \end{equation} \noindent where $\Omega^{\alpha}\equiv (1/2)D^{\beta}D^{\alpha}f_{\beta}$. At this point, we recall the projectors on the transversal and longitudinal parts of a fermionic superfield $\eta^\alpha$, \begin{equation} \eta_{\parallel}^{\alpha}=-D^{\alpha}D^{\beta}\frac{1}{2D^{2}}\eta_{\beta}\quad;\quad \eta_{\perp}^{\alpha}=D^{\beta}D^{\alpha}\frac{1}{2D^{2}}\eta_{\beta}\,,\label{eq:project} \end{equation} \noindent so that $D^\alpha \, \eta_\alpha^{\perp} = 0$. The explicit form of the transversal projector in Eq.~(\ref{eq:project}) allows us to rewrite Eq.~(\ref{ident2}) as \begin{equation} \label{ident3} A_{\alpha}^{\perp} =-f_{\alpha}^{\perp}-\frac{1}{mD^{2}}j_{\alpha}\,. \end{equation} \noindent Substituting Eqs.~(\ref{ident3}) and~(\ref{ident2}) into the master Lagrangian, and taking into account that, if $\eta^\alpha$ is transversal, $\eta^\alpha \xi_\alpha = \eta^\alpha \xi^\perp_\alpha$ for any $\xi_\alpha$, we obtain $\mathcal{L}_{\rm master} = \mathcal{L}_{\rm SSD}$, with \begin{eqnarray}\label{eq9a} \mathcal{L}_{\rm SSD}=-\frac{m}{2}f^{\alpha}\Omega_{\alpha} -\frac{m^2}{2}f^{\alpha}f_{\alpha} +\left(k^{\alpha}-j^{\alpha}\right)f_{\alpha} -\frac{1}{2}j^{\alpha}\frac{1}{mD^2}j_{\alpha} +\mathcal{L}_{M}(\Psi)~. \end{eqnarray} \noindent This Lagrangian describes the dynamics of a supersymmetric Self-Dual (SSD) superfield which, besides the ``minimal'' coupling to the current $k_{\alpha}$, is also coupled in a nonlocal way to the current $j_{\alpha}$. Moreover, a nonlocal Thirring-like term for $j_{\alpha}$ shows up. Classically, the Lagrangians in Eqs.~(\ref{eq7a}) and (\ref{eq9a}) are equivalent, thus establishing the duality between these SMCS and SSD models at the level of equations of motion. Indeed, we can find an explicit mapping from the superfields and currents of the SMCS theory to their counterparts in the SSD model, such that the corresponding equations of motion are mapped one to the other. The equations of motion derived from the SSD Lagrangian in Eq.~(\ref{eq9a}) can be cast as \begin{equation} m\Omega_{\alpha}+m^{2}f_{\alpha}+j^{\alpha}-k^{\alpha}=0\,,\label{eqq:2} \end{equation} \noindent and \begin{equation} \frac{\delta}{\delta\Psi^\beta}\int d^5 z \mathcal{L}_{M} + \frac{\partial j^{\alpha}} {\partial\Psi^\beta} \left(-f_{\alpha}^{\perp}-\frac{1}{mD^{2}}j_{\alpha}\right)+ \frac{\partial k^{\alpha}}{\partial \Psi^\beta} f_{\alpha}=0\,.\label{eqq:3} \end{equation} \noindent Using the projection operators in Eq.~(\ref{eq:project}), we split Eq.~(\ref{eqq:2}) into the longitudinal part, \begin{equation} m^{2}f_{\alpha}^{\parallel}=k_{\alpha}^{\parallel}\,,\label{eqq:2a} \end{equation} \noindent and the transversal part, \begin{equation} m\Omega_{\alpha}^{\perp}+m^{2}f_{\alpha}^{\perp}+j_{\alpha}-k_{\alpha}^{\perp}=0\,.\label{eqq:2b} \end{equation} \noindent Hereafter, we omit the $\perp$ in the current $j$ since we know it is always transversal.
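Incidentally, the property invoked above Eq.~(\ref{eq9a}), namely that $\eta^{\alpha}\xi_{\alpha}=\eta^{\alpha}\xi^{\perp}_{\alpha}$ for a transversal $\eta^{\alpha}$, follows from a single superspace integration by parts (a one-line check, sketched here for completeness): \begin{equation} \int d^5z\,\eta^{\alpha}\xi^{\parallel}_{\alpha} =-\int d^5z\,\eta^{\alpha}D_{\alpha}\Big(\frac{D^{\beta}}{2D^{2}}\,\xi_{\beta}\Big) =\pm\int d^5z\,\big(D_{\alpha}\eta^{\alpha}\big)\,\frac{D^{\beta}}{2D^{2}}\,\xi_{\beta}=0\,, \end{equation} \noindent where the overall sign depends on the Grassmann parity of $\eta^{\alpha}$ and is immaterial, since $D_{\alpha}\eta^{\alpha}=0$ for a transversal superfield.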
We see that the longitudinal part of $f$ is not dynamical, but algebraically related to the longitudinal part of $k_{\alpha}$. The equations of motion derived from the SMCS Lagrangian in Eq.~(\ref{eq7a}) read \begin{equation} \frac{1}{2}D^{\beta}D_{\alpha}W_{\beta}+mW_{\alpha}+j_{\alpha}+\frac{D^{2}}{m}k_{\alpha}^{\perp}=0\,,\label{eqq:5} \end{equation} \noindent and \begin{equation} \frac{\delta}{\delta\Psi^\beta}\int d^5 z \mathcal{L}_{M} + \frac{\partial j^{\alpha}}{\partial\Psi^\beta} A_{\alpha}^{\perp}+ \frac{\partial k^{\alpha}}{\partial\Psi^\beta} \left(\frac{1}{m^{2}}k_{\alpha}+\frac{1}{m}W_{\alpha}\right)=0\,. \label{eqq:13} \end{equation} \noindent All terms in Eq.~(\ref{eqq:5}) are transversal. Since $A^\alpha$ is a gauge superpotential, under a gauge transformation $\delta A^\alpha = D^\alpha K$, the transversal part $A_{\perp}^{\alpha}$ is invariant, while $\delta A_{\parallel}^{\alpha}=D^{\alpha}K$. Hence, the equations of motion involve only the transversal (gauge invariant) part of $A^\alpha$; its longitudinal part is only constrained by the gauge fixing condition we will have to impose to quantize the theory~\cite{foot1}. Comparing Eqs.~(\ref{eqq:3}) and (\ref{eqq:13}), we conclude that one equation is mapped to the other by means of the equations of motion in Eqs.~(\ref{ident1}) and~(\ref{ident3}), so those furnish the identification we were looking for. Taking the longitudinal part of Eq.~(\ref{ident1}), we re-obtain Eq.~(\ref{eqq:2a}). Also, considering the transversal part of Eq.~(\ref{ident1}), \begin{equation} -f_{\alpha}^{\perp}+\frac{1}{m^{2}}k_{\alpha}^{\perp}+\frac{1}{m}W_{\alpha}^{\perp} =0\,,\label{eqq:18} \end{equation} \noindent and replacing $A_{\alpha}^{\perp}$ using Eq.~(\ref{ident3}), we re-obtain Eq.~(\ref{eqq:2b}). Finally, we can map the equation of motion for the SSD field in Eq.~(\ref{eqq:2b}) into the equation of motion for the transversal component of the MCS superfield. Indeed, starting from Eq.~(\ref{eqq:2b}), and substituting $f^{\perp}$ using Eq.~(\ref{ident3}), we have \begin{align} m\Omega_{\alpha}^{\perp} & +m^{2}f_{\alpha}^{\perp}+j_{\alpha}-k_{\alpha}^{\perp}=\nonumber \\ = & -mW_{\alpha}^{\perp}-m^{2}A_{\alpha}^{\perp}- \frac{m}{D^{2}}j_{\alpha}-k_{\alpha}^{\perp} = 0\,.\label{eqq:20} \end{align} \noindent Applying $\frac{1}{2}D^{\alpha}D_{\beta}$ to this equation, we obtain \begin{equation} m\left[\frac{1}{2}D^{\alpha}D_{\beta}W_{\alpha}^{\perp} +mW_{\beta}^{\perp}+\, j_{\beta}+\frac{D^{2}}{m}k_{\beta}^{\perp}\right]\,=\,0\,,\label{eqq:21} \end{equation} \noindent which is equivalent to Eq.~(\ref{eqq:5}). In summary, the transversal (gauge invariant) part of the SMCS superfield can be mapped to the transversal part of the SSD superfield. No relation exists between their longitudinal parts, however. In the SSD model, $f^\parallel$ is algebraically related to $k^\parallel$, while in the SMCS model, the longitudinal (gauge dependent) part of $A^\alpha$ is not coupled to other fields or currents, being constrained only by the choice of the gauge fixing. \section{The duality at the quantum level}\label{quantum} Having discussed the duality between $\mathcal{L}_{\rm SMCS}$ and $\mathcal{L}_{\rm SSD}$ at the level of equations of motion, we can now investigate whether this duality exists at the quantum level, by comparing the corresponding generating functionals.
We will see that both theories lead to the same generating functional for the $\Psi^\alpha$ superfield, and in this sense we will say that $\mathcal{L}_{\rm SMCS}$ and $\mathcal{L}_{\rm SSD}$, as given in Eqs.~(\ref{eq7a}) and~(\ref{eq9a}), are quantum equivalent. To this end, we have to include in the master Lagrangian a gauge fixing term, so that we can find a propagator for the SMCS superfield. We consider, then, the master generating functional, \begin{align}\label{eq2} Z(k^{\alpha},j^{\alpha},\Psi^{\alpha})& = \mathcal{N} \int \mathcal{D}f_{\alpha}\,\mathcal{D}A_{\alpha} \times \nonumber\\ &\times \exp \, i \int {d^5z} \left\{ \mathcal{L}_{\rm master}(f^{\alpha},A^{\alpha},\Psi^{\alpha},j^{\alpha},k^{\alpha}) -\frac{\alpha}{4}(D^{\alpha}A_{\alpha})^2 \right\}~, \end{align} \noindent where $\mathcal{N}$ is a field independent normalization factor. We will further need the formula for the Gaussian path integral over a Grassmannian field $X_{\alpha}$, \begin{eqnarray}\label{eq3} \int~\mathcal{D}X_{\alpha}~ \exp\left\{ \,i \left[ \frac{1}{2}X^{\alpha}{\mathcal{O}_{\alpha}}^{\beta}X_{\beta} +J^{\alpha}X_{\alpha}\right] \right\} \,=\, \exp\left\{ -i\left[\frac{1}{2}J^{\alpha}{(\mathcal{O}^{-1})_{\alpha}}^{\beta}J_{\beta} \right]\right\}, \end{eqnarray} \noindent up to a factor depending on ${\rm det}~{\mathcal{O}_{\alpha}}^{\beta}$, which will be irrelevant in this work, and omitting the proper superspace integrations in the exponents. By means of Eq.~(\ref{eq3}), we can perform the functional integration in Eq.~(\ref{eq2}) over the superfield $f^{\alpha}$, with \begin{eqnarray}\label{eq4} {({\mathcal{O}^{-1}_1})_{\alpha}}^{\beta}=-\frac{1}{m^2}{\delta_{\alpha}}^{\beta}~, \end{eqnarray} \noindent and we end up with \begin{eqnarray}\label{eq5} Z(k^{\alpha},j^{\alpha},\Psi^{\alpha})&=& \mathcal{N} \int \mathcal{D}A_{\alpha}\,\exp\Big{\{}i\int\!{d^5z}\Big[\frac{1}{2}(m~W^{\alpha}+k^{\alpha})\frac{{\delta_{\alpha}}^{\beta}}{m^2}(m~W_{\beta}+k_{\beta})\nonumber\\ &+&\frac{m}{2}A^{\alpha}W_{\alpha}+j^{\alpha}A_{\alpha}-\frac{\alpha}{4}(D^{\alpha}A_{\alpha})^2+\mathcal{L}_{M}\Big]\Big{\}}~, \end{eqnarray} \noindent which, after an integration by parts, can be cast as \begin{equation}\label{eq6} Z(k^{\alpha},j^{\alpha},\Psi^{\alpha})=\mathcal{N} \int \mathcal{D}A_{\alpha}\, \exp \, i \int {d^5z} \, \mathcal{L}_{\rm SMCS} ~, \end{equation} \noindent where $\mathcal{L}_{\rm SMCS}$ is the SMCS Lagrangian we found in Eq.~(\ref{eq7a}), which already contains the gauge-fixing term. To integrate the generating functional in Eq.~(\ref{eq2}) over $A_{\alpha}$, we use the inverse of the quadratic part in $A^{\alpha}$ of the master Lagrangian in Eq.~(\ref{eq1}), including the gauge-fixing term, \begin{eqnarray}\label{eq8} {(\mathcal{O}^{-1}_2)_{\beta}}^{\gamma}=\frac{1}{2}\Big(\frac{D^{\gamma}D_{\beta}}{m\Box} +\frac{D_{\beta}D^{\gamma}}{\alpha\Box}\Big)~. \end{eqnarray} \noindent Using Eq.~(\ref{eq3}), the functional integration in Eq.~(\ref{eq2}) over $A^{\alpha}$ can be performed, arriving at the following generating functional for the $f^{\alpha}$ and matter superfields, \begin{eqnarray}\label{eq9} Z(k^{\alpha},j^{\alpha},\Psi^{\alpha})&=&\int \mathcal{D}f_{\alpha} \,\exp\left\{ i\int {d^5z}~\mathcal{L}_{\rm SSD}\right\}~, \end{eqnarray} \noindent where $\mathcal{L}_{\rm SSD}$ is the SSD model defined in Eq.~(\ref{eq9a}).
To complete the proof of the equivalence of the SMCS and the SSD theories, we integrate the generating functionals in Eq.~(\ref{eq6}) over $A_{\alpha}$ and Eq.~(\ref{eq9}) over $f^{\alpha}$. The relevant propagators are \begin{subequations}\label{props} \begin{eqnarray} {(\mathcal{O}_{SMCS}^{-1})_{\beta}}^{\alpha}&=&\frac{1}{2}\Big[\frac{D^{\alpha}D_{\beta}}{\Box(D^2+m)} +\frac{1}{\alpha}\frac{D_{\beta}D^{\alpha}}{\Box}\Big]~,\label{prop1}\\ {(\mathcal{O}_{SSD}^{-1})_{\beta}}^{\alpha}&=&\frac{1}{2}\Big[\frac{D_{\beta}D^{\alpha}}{m^2D^2} -\frac{1}{m}\frac{D^{\alpha}D_{\beta}}{D^2(D^2+m)}\Big]~ \nonumber \\ &=& \frac{1}{2m^2} \left[ \frac{D^\alpha D_\beta}{D^2+m} - 2 \delta^\alpha_{\,\,\beta} \right] \label{prop2} \,. \end{eqnarray} \end{subequations} \noindent The integration over $A^{\alpha}$ and $f^{\alpha}$ respectively in Eq.~(\ref{eq6}) and Eq.~(\ref{eq9}) results in the same effective Lagrangian, \begin{eqnarray}\label{eq10} \mathcal{L}_{eff}&=&-\frac{1}{4m^2}j^{\alpha}\Big(\frac{D^{\beta}D_{\alpha}}{D^2+m}-2{\delta_\alpha}^{\beta}\Big)j_{\beta} -\frac{1}{2m}j^{\alpha}\frac{1}{D^2}j_{\alpha} +\frac{1}{m}k^{\alpha}\frac{1}{D^2+m}j_{\alpha}\nonumber\\ &-&\frac{1}{4m^2}k^{\alpha}\Big(\frac{D^{\beta}D_{\alpha}}{D^2+m}-2{\delta_\alpha}^{\beta}\Big)k_{\beta} +\mathcal{L}_{M}(\Psi)~, \end{eqnarray} \noindent as it should. This ensures the quantum equivalence between the two models, irrespective of the choice of the currents $j^{\alpha}$ and $k^{\alpha}$ (provided that $D^\alpha\,j_\alpha=0$). One last note before closing this section. The physical content of the master Lagrangian in Eq.~(\ref{eq1}) can be more clearly seen by means of the field redefinition, \begin{equation} A_\alpha \, = \, B_\alpha - f_\alpha \,, \end{equation} \noindent which allows us to rewrite Eq.~(\ref{eq1}), apart from surface terms, as \begin{align}\label{eq1eq} \mathcal{L}_{\rm master}=&-\frac{m^2}{2}f^{\alpha}f_{\alpha} -\frac{m}{2}~f^{\alpha}\Omega_{\alpha}+\frac{m}{2} B^{\alpha} \left( \frac{1}{2}D^{\beta}D_{\alpha}B_{\beta} \right) \nonumber\\ &+\left(k^{\alpha} - j^\alpha\right)f_{\alpha}+j^{\alpha}B_{\alpha} +\mathcal{L}_{M}(\Psi)~. \end{align} \noindent From Eq.~(\ref{eq1eq}), we see that $\mathcal{L}_{\rm master}$ describes a propagating field governed by a Self-Dual Lagrangian, together with a pure topological Chern-Simons field $B^{\alpha}$. We can use the propagator in Eq.~(\ref{prop2}) for the self-dual superfield and the gauge-fixed Chern-Simons propagator in Eq.~(\ref{eq8}) to find the superfields $f^{\alpha}$ and $B^{\alpha}$ in terms of the corresponding sources, \begin{subequations}\label{src} \begin{eqnarray} f_\beta &=& - {(\mathcal{O}_{SSD}^{-1})_{\beta}}^{\alpha} \left(k_\alpha - j_\alpha \right) \,, \label{src1} \\ B_\beta &=& - {(\mathcal{O}_{2}^{-1})_{\beta}}^{\alpha} j_\alpha \,. \label{src2} \end{eqnarray} \end{subequations} \noindent In particular, for $B^{\alpha}$, the gauge-dependent part of $\mathcal{O}_{2}^{-1}$ drops out, and we find \begin{equation} B_\alpha = - \frac{1}{m D^2} \left( D^{\beta}D_{\alpha}\frac{1}{2D^{2}} \right) j_\beta = - \frac{1}{m D^2} j_\alpha \,, \end{equation} \noindent since $j^\alpha$ is transversal. The field-strength corresponding to this superpotential is found to be \begin{equation}\label{CS} W_{B}^{\,\,\alpha} = \frac{1}{2}D^{\beta}D^{\alpha}B_{\beta} = -\frac{1}{m}j^\alpha \,. \end{equation} \noindent This is the supersymmetric version of the well-known relation between the source and the field strength generated by this source in the Chern-Simons model~\cite{morosov}.
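Incidentally, the second equality in Eq.~(\ref{prop2}) is most easily verified in the projector basis. Writing, schematically and with spinor indices suppressed, $\frac{1}{2}D^{\alpha}D_{\beta}=D^{2}P_{\perp}$ and $\frac{1}{2}D_{\beta}D^{\alpha}=-D^{2}P_{\parallel}$, which follows from Eq.~(\ref{eq:project}), both forms of the propagator reduce to \begin{equation} \mathcal{O}_{SSD}^{-1}=-\frac{P_{\parallel}}{m^{2}}-\frac{P_{\perp}}{m(D^{2}+m)}\,, \end{equation} \noindent where we used $\delta=P_{\perp}+P_{\parallel}$. The same algebra, together with the standard identity $\Box=(D^{2})^{2}$, also diagonalizes the propagators in Eq.~(\ref{prop1}) and Eq.~(\ref{eq8}).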
Substituting Eqs.~(\ref{src}) in the master Lagrangian, one obtains \begin{eqnarray}\label{eq10a} \mathcal{L}_{eff}&=& -\frac{1}{4m^2} \left(k^\alpha - j^{\alpha}\right) \left(\frac{D^{\beta}D_{\alpha}}{D^2+m}\right)\left(k_\beta- j_\beta \right) +\frac{1}{2 m^2} \left(k^\alpha - j^{\alpha}\right) \left(k_\alpha- j_\alpha \right) \nonumber\\ &&-\frac{1}{2m}j^{\alpha}\frac{1}{D^2}j_{\alpha} \, , \end{eqnarray} \noindent which is the same as~(\ref{eq10}). \section{Some particular instances of the duality}\label{instances} Having studied the correspondence between the SSD and the SMCS models in the presence of arbitrary matter currents, we now consider some interesting particular cases. Case (a) corresponds to the choice $j^{\alpha}=0$, the matter superfield interacting with the vector superfields only through the current $k^{\alpha}$. In this case, we can summarize our results as \begin{subequations}\label{sdeq11} \begin{align}\label{sdeq11a} \mathcal{L}_{\rm SMCS}^{(a)}=&\frac{1}{2}W^{\alpha}W_{\alpha} +\frac{m}{2}A^{\alpha}W_{\alpha}-\frac{\alpha}{4}(D^{\alpha}A_{\alpha})^2 +\frac{1}{2m}A^{\alpha}D^{\beta}D_{\alpha}k_{\beta}\nonumber\\ &+\frac{1}{2m^2}k^{\alpha}k_{\alpha}+\mathcal{L}_{M}(\Psi)~,\\ \mathcal{L}_{\rm SSD}^{(a)}=&-\frac{m}{2}f^{\alpha}\Omega_{\alpha} -\frac{m^2}{2}f^{\alpha}f_{\alpha}+k^{\alpha}f_{\alpha} +\mathcal{L}_{M}(\Psi)~,\label{sdeq11b}\\ \mathcal{L}_{eff}^{(a)}=& -\frac{1}{4m^2}k^{\alpha}\Big(\frac{D^{\beta}D_{\alpha}}{D^2+m}\Big)k_{\beta} +\frac{1}{2m^2}k^{\alpha}k_{\alpha} +\mathcal{L}_{M}(\Psi)~.\label{sdeq11c} \end{align} \end{subequations} \noindent Comparing Eqs.~(\ref{sdeq11b}) and~(\ref{sdeq11c}), we see that the minimal interaction $k^\alpha f_\alpha$ induces in the $\Psi^\alpha$ effective Lagrangian a non-local interaction mediated by a massive degree of freedom, plus a contact interaction between the matter currents. Besides, we note that Eq.~(\ref{sdeq11a}) already contains the contact term, and the non-minimal interaction $A^{\alpha}D^{\beta}D_{\alpha}k_{\beta}$ between the matter and the Chern-Simons field is responsible for the non-local interaction in Eq.~(\ref{sdeq11c}). Comparing the equations of motion for the matter, Eqs.~(\ref{eqq:3}) and~(\ref{eqq:13}), in this particular case we recognize the identification \begin{eqnarray}\label{sdeq11cc} f_{\gamma}=\frac{W_{\gamma}}{m}+\frac{k_{\gamma}}{m^2}\,, \end{eqnarray} \noindent which has been found in~\cite{Ferrari:2006vy}, when studying the duality in the presence of a scalar matter superfield. Indeed, Eq.~(\ref{sdeq11b}) is analogous to the starting point of~\cite{Ferrari:2006vy}; here, however, the dual SMCS description, Eq.~(\ref{sdeq11a}), is simpler, since it does not contain the field-dependent factor in front of the Maxwell term, due to the absence of quartic couplings. Case (b) is a theory where matter interacts only through the current $j^{\alpha}$, i.e., $k^{\alpha}=0$.
In this case we have \begin{subequations}\label{sdeq12} \begin{align} \mathcal{L}_{\rm SMCS}^{(b)}=&\frac{1}{2}W^{\alpha}W_{\alpha} +\frac{m}{2}A^{\alpha}W_{\alpha}-\frac{\alpha}{4}(D^{\alpha}A_{\alpha})^2 +j^{\alpha}A_{\alpha}+\mathcal{L}_{M}(\Psi)~,\label{sdeq12a} \\ \mathcal{L}_{\rm SSD}^{(b)}=&-\frac{m}{2}f^{\alpha}\Omega_{\alpha} -\frac{m^2}{2}f^{\alpha}f_{\alpha} -f^{\alpha}j_{\alpha} -\frac{1}{2m}j^{\alpha}\frac{1}{D^2}j_{\alpha} +\mathcal{L}_{M}(\Psi)~, \label{sdeq12b} \\ \mathcal{L}_{eff}^{(b)}=& -\frac{1}{4m^2}j^{\alpha}\Big(\frac{D^{\beta}D_{\alpha}}{D^2+m}\Big)j_{\beta} +\frac{1}{2m^2}j^{\alpha}j_{\alpha} -\frac{1}{2m}j^{\alpha}\frac{1}{D^2}j_{\alpha} +\mathcal{L}_{M}(\Psi)~.\label{sdeq12c} \end{align} \end{subequations} \noindent Now, the minimal coupling of $j^\alpha$ to the Chern-Simons superfield corresponds to a non-local interaction mediated by a massive degree of freedom and a contact term for the $j^\alpha$, both similar to the ones in Eq.~(\ref{sdeq11c}), plus an additional Chern-Simons interaction (see the discussion regarding Eq.~(\ref{CS})). Furthermore, if we substitute $j^\alpha=\frac{1}{2m}D^\beta D^\alpha g_\beta$ in Eqs.~(\ref{sdeq12a}) and~(\ref{sdeq12c}), we obtain in $\mathcal{L}_{eff}^{(b)}$ only the non-local interaction $\frac{1}{4m^2}g^{\alpha}\Big(\frac{D^{\beta}D_{\alpha}}{D^2+m}\Big)g_{\beta}$, which is consistent with the results discussed for case (a). Finally, the explicit mapping between the (transversal parts of the) SSD and the SMCS superfields is given by \begin{equation} A_{\alpha}^{\perp} =-f_{\alpha}^{\perp}-\frac{1}{mD^{2}}j_{\alpha}\,, \end{equation} \noindent while their longitudinal parts are unrelated, as we pointed out earlier. Case (c) corresponds to the choice $j^\alpha = k^\alpha$; in this case, from Eq.~(\ref{eq9a}), the matter decouples from the self-dual superfield, so we end up with a free SSD superfield plus a Chern-Simons interaction between the matter currents. This is equivalent, from Eq.~(\ref{eq7a}), to a model including a local Thirring interaction, along with a special coupling $\left(\frac{D^2+m}{m} j^\alpha\right)A_\alpha$ to the Maxwell-Chern-Simons superfield. In other words, the coupling $\left(\frac{D^2+m}{m} j^\alpha\right)A_\alpha$ induces, in the effective action, the terms $-\frac{1}{2m^2}j^\alpha j_\alpha - j^\alpha \frac{1}{2mD^2}j_\alpha$. Finally, case (d) is the choice $j^{\alpha}=-\frac{1}{2m}D^{\beta}D^{\alpha}k_{\beta}$. In this case, matter and SMCS decouple in Eq.~(\ref{eq7a}); from this, we immediately see that the effective Lagrangian for the matter contains only a Thirring interaction $\frac{1}{2m^2}k^\alpha k_\alpha$. On the other hand, from Eq.~(\ref{eq9a}), the same dynamics is described by a Self-Dual model with the coupling $\left(\frac{k^\alpha + \frac{D^2}{m} k^\alpha_\perp}{m} \right)f_\alpha$ between matter and self-dual superfields, plus a Thirring-like interaction $-\frac{1}{4m^3}k^\alpha D^\beta D_\alpha k_\beta$. \section{The matter content}\label{matter} As we discussed in the Introduction, the equivalence between the SD and the MCS theories is more simply established when these superfields interact with a fermionic matter superfield $\Psi_{\alpha}$ through the currents $j^{\alpha}$ and $k^{\alpha}$, which, together with the matter free Lagrangian $\mathcal{L}_{M}\left(\Psi\right)$, have not been specified so far (except for the requirement that $j^{\alpha}$ is conserved, so that it can be coupled to a gauge superfield).
In this section, we write explicitly a free Lagrangian $\mathcal{L}_{M}\left(\Psi\right)$ and investigate its physical degrees of freedom. We also point out a simple choice for the current $j^{\alpha}$. Even if we cannot give a deeper physical motivation for the introduction of such a matter supermultiplet, at least we can explicitly demonstrate that a sensible dynamics can be constructed for such a model. The component expansion of the fermionic superfield $\Psi^{\alpha}$ is given by \begin{equation} \Psi^{\alpha}=\psi^{\alpha}+\theta^{\alpha}b+i\theta_{\beta}b^{\beta\alpha}-\theta^{2}\varphi^{\alpha}\,, \label{eq:1} \end{equation} \noindent where $b^{\beta\alpha}$ is a symmetric bispinor (a three-dimensional vector field), $b$ is a scalar, and $\psi_{\alpha}$ and $\varphi_{\alpha}$ are three-dimensional spinors. Since we want to couple this matter to a gauge superfield, these fields are complex. The complex conjugate $\overline{\Psi}^{\alpha}$ can be written as \begin{equation} \overline{\Psi}^{\alpha}=\overline{\psi}^{\alpha}+\theta^{\alpha}\overline{b}+i\theta_{\beta}\overline{b}^{\beta\alpha}-\theta^{2}\overline{\varphi}^{\alpha}\,.\label{eq:2} \end{equation} We choose to study the case when the matter interacts through the current $j^\alpha$, minimally coupled to the gauge superfield $A^\alpha$: the corresponding Lagrangian appears in Eq.~(\ref{sdeq12a}). Here we will focus only on the part of the action involving the $\Psi_{\alpha}$, i.e., \begin{equation} S_{M} = \int d^5 z \, \mathcal{L}_{M}\left(\Psi\right) + \int d^5 z\, A^\alpha j_\alpha \,. \end{equation} \noindent The proposed quadratic action for the matter superfield is given by \begin{equation} \int d^5 z \, \mathcal{L}_{M}\left(\Psi\right)\,=\,-\int d^{5}z\,\overline{\Psi}^{\alpha}\left(i\partial_{\alpha\beta}-M\, C_{\alpha\beta}\right)\Psi^{\beta}\,,\label{eq:5} \end{equation} \noindent while the current $j^\alpha$ reads \begin{equation} j^{\alpha}\,=\,\frac{i g}{2} D_{\beta} \left(\overline{\Psi}^{\alpha}\Psi^{\beta}+\overline{\Psi}^{\beta}\Psi^{\alpha}\right)\,.\label{eq:15} \end{equation} \noindent This form of the matter current is obtained by the usual substitution \begin{equation} \partial_{\alpha\beta}\rightarrow\nabla_{\alpha\beta}=\partial_{\alpha\beta}-g\,D_{(\alpha}A_{\beta)} \,,\label{eq:13} \end{equation} \noindent in the quadratic Lagrangian $\mathcal{L}_{M}\left(\Psi\right)$, so that the action $S_M$ turns out to be invariant under the gauge transformations \begin{equation} \Psi^{\alpha}\rightarrow e^{igK}\Psi^{\alpha}\quad;\quad\overline{\Psi}^{\alpha}\rightarrow e^{-igK}\overline{\Psi}^{\alpha}\quad;\quad A_{\alpha}\rightarrow A_{\alpha}+D_{\alpha}K\,,\label{eq:14} \end{equation} \noindent $K$ being a real scalar superfield. The coupling constant $g$ has mass dimension $1/2$, which in principle signals a super-renormalizable theory. By explicit computation, we verify that $D_{\alpha}j^{\alpha}=0$, as required. Actually, for this particular form of the current $j^\alpha$, this conservation equation reduces to \begin{equation} i \partial_{\alpha\beta} \left( \overline{\Psi}^\alpha \Psi^\beta \right)\,=\,0\,. \end{equation} The component expansion of $S_{M}$ can be obtained with the help of the formulae in Appendix \ref{app}.
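\par For the reader's convenience, we record the dimensional analysis behind the statement that $g$ has mass dimension $1/2$, assuming the standard three-dimensional superspace assignments: $[\theta]=-1/2$, so that $[d^5z]=[d^3x\,d^2\theta]=-2$ and $[D_\alpha]=1/2$. A dimensionless action then requires $[\mathcal{L}]=2$, whence \begin{equation*} [\overline{\Psi}^{\alpha}\,i\partial_{\alpha\beta}\,\Psi^{\beta}]=2 \;\Rightarrow\; [\Psi^{\alpha}]=\frac{1}{2}\,, \qquad [W^{\alpha}W_{\alpha}]=2 \;\Rightarrow\; [A^{\alpha}]=0\,, \end{equation*} \noindent and Eq.~(\ref{eq:15}) gives $[j^{\alpha}]=[g]+\frac{1}{2}+2[\Psi^{\alpha}]=[g]+\frac{3}{2}$. Since the coupling $j^{\alpha}A_{\alpha}$ must also have dimension $2$, we find $[g]=\frac{1}{2}$, as stated.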
The expansion of the $A^\alpha$ superfield in components reads \begin{equation} A^{\alpha}=\alpha^{\alpha}+\theta^{\alpha}a+i\theta_{\beta}a^{\beta\alpha}-\theta^{2}\beta^{\alpha} \,,\label{eq:2a} \end{equation} \noindent and, for simplicity, we will work in the Wess-Zumino gauge, so that $\alpha^\alpha = 0$ and $a=0$. The remaining vector field corresponds to the photon and the spinor to the photino. The matter action $S_M$ can be written, in terms of component fields, as \begin{equation} S_M \, = \, S_M ^ {(1/2)} + S_M ^ {(1)} + S_M ^ {\textrm{(int)}}\,, \end{equation} \noindent where \begin{equation} S_M ^ {(1/2)}\,=\,\int d^{3}x \, \left[ \overline{\varphi} \left(i\,\gamma^a\partial_a-M\right) \psi +\overline{\psi} \left(i\,\gamma^a\partial_a-M\right) \varphi \right]\,,\label{eq:7} \end{equation} \begin{equation} S_M ^ {(1)}\,=\,-\int d^{3}x \,\left[ \frac{1}{2} \varepsilon^{abc}\overline{b}_{a}\partial_{b}b_{c} +\frac{M}{2}\,\overline{b}^{a}b_{a} +\overline{b} \partial^a b_a + {b} \partial^a \overline{b}_a - 2 M \overline{b} b \right]\,,\label{eq:8} \end{equation} \noindent and \begin{align} S_M ^ {\textrm{(int)}} \, = \, g \int d^3 x &\left[ a_a \left( \overline{\psi} \gamma^a \varphi + \overline{\varphi} \gamma^a \psi \right) -\frac{1}{2} a_a \varepsilon^{abc}\partial_b \left( \overline{\psi} \gamma_c \psi \right) \right. \\ &\left. +\overline{b} (\psi \beta) + b (\overline{\psi} \beta) + \frac{i}{2} b_a \left( \overline{\psi} \gamma^a \beta \right) + \frac{i}{2} \overline{b}_a \left( {\beta} \gamma^a \psi \right) \right] \,. \nonumber \end{align} \noindent In writing these equations, we have used that $\left(i\,\gamma^a\partial_a-M\right)_{\alpha\cdot}^{\cdot\beta}=\left(i\,\partial_{\alpha\cdot}^{\cdot\beta}-\delta_{\alpha\cdot}^{\cdot\beta}M\right)$, as well as the relation between a bispinor and a vector, $X^{\alpha\beta} = \frac{1}{2} (\gamma^a)^{\alpha\beta} X_a$ (see Appendix); latin indices run from $0$ to $2$. The field $b$ is auxiliary and can be eliminated by means of its equation of motion, and we obtain \begin{equation} S_M ^ {(1)}\,=\,-\int d^{3}x \,\left[ \frac{1}{2} \varepsilon^{abc}\overline{b}_{a}\partial_{b}b_{c} +\frac{M}{2}\,\overline{b}^{a}b_{a} - \frac{1}{2M} \left( \partial^a \overline{b}_a \right) \left( \partial^b b_b \right) \right]\,.\label{eq:8b} \end{equation} The action $S_M ^ {(1)}$ corresponds to a kind of gauge-fixed Chern-Simons theory, with a Proca mass term. There is no gauge symmetry associated to the vector field $b_{a}$; thus the \emph{complex} $b_{a}$ has four propagating degrees of freedom with mass $M$, as can be seen from its propagator in momentum space, \begin{equation} \Delta_{ab}\left(k\right)\,=\,\frac{i}{k^{2}+M^{2}-i\varepsilon} \left[i\varepsilon_{abc}k^{c}+M g_{ab}\right]\,.\label{eq:12} \end{equation} \noindent This propagator is clearly not transversal, as it should be, given the absence of gauge symmetry. From the action $S_M ^ {(1/2)}$, one obtains the usual equations of motion for spinors in three dimensions, \begin{equation} \left( i\gamma^a\partial_a - M \right) \psi=0\quad;\quad \left( i\gamma^a\partial_a - M \right)\varphi=0\,.\label{eq:10} \end{equation} \noindent Since each real spinor has one on-shell degree of freedom, the action describes the propagation of \emph{four} fermionic degrees of freedom. Notice, however, the mixing between the spinors $\psi_{\alpha}$ and $\varphi_{\alpha}$ already in the quadratic part of the action. The propagator in Eq.~(\ref{eq:12}) has components with indefinite metric, as can be seen by simple inspection.
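\par As an aside, the three-dimensional $\gamma$-matrix algebra underlying Eqs.~(\ref{eq:7}) and~(\ref{eq:10}) is easy to check explicitly. The following numerical sketch (illustrative only; the representation $\gamma^0=\sigma_2$, $\gamma^1=i\sigma_3$, $\gamma^2=i\sigma_1$ with $g_{ab}={\rm diag}(+,-,-)$ is one common choice, and not necessarily the one implicit in our conventions) verifies the Clifford algebra and the fact that the Dirac operator becomes singular precisely on the mass shell:
\begin{verbatim}
# Check of the 3d Clifford algebra and the mass-shell condition.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [s2, 1j*s3, 1j*s1]        # one common 3d representation (assumed)
g = np.diag([1.0, -1.0, -1.0])

# Clifford algebra {gamma^a, gamma^b} = 2 g^{ab} 1:
for a in range(3):
    for b in range(3):
        anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anti, 2*g[a, b]*np.eye(2))

# For k on the mass shell (k^2 = M^2), gamma.k - M is singular, so the
# equations of motion (eq:10) admit plane-wave solutions of mass M:
M, k1, k2 = 1.7, 0.3, -0.8
k = np.array([np.sqrt(M**2 + k1**2 + k2**2), k1, k2])
k_low = g @ k                     # lower the index: k_a = g_{ab} k^b
D = sum(k_low[a]*gamma[a] for a in range(3)) - M*np.eye(2)
assert abs(np.linalg.det(D)) < 1e-12
print("Clifford algebra and mass-shell condition verified.")
\end{verbatim}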
Not surprisingly, the same problem appears in the fermionic sector: if we try to disentangle the $\psi^\alpha$ and $\varphi^\alpha$ fields by diagonalizing the quadratic action in Eq.~(\ref{eq:7}), the new fermionic kinetic terms end up with opposite signs, also indicating an indefinite metric in the space of quantum states. This is not an unusual feature in quantum field theory. In fact, the presence of an indefinite metric actually permeates the quantization of any gauge theory; in those cases, the quantization has to be supplemented by selection rules to extract physically relevant results. We intend to come back to this issue in a future publication. \section{Summary} Let us summarize our results. We have studied the dual equivalence of the supersymmetric self-dual and Maxwell-Chern-Simons theories, coupled to a fermionic matter superfield. We have shown these models to be equivalent at the classical level, by looking at their equations of motion, which actually provides us with a mapping between fields and currents of both theories. At the quantum level, their equivalence follows from the equality of the effective generating functional they induce for the matter. The duality holds in the presence of matter currents $j^\alpha$ and $k^\alpha$ that are quite arbitrary, the only requirement being that $D_\alpha j^\alpha=0$, so that $j^\alpha$ can be coupled to the gauge superfield. The duality in the presence of such a non-minimal matter superfield is much simpler than with a usual scalar matter superfield, as discussed in~\cite{Ferrari:2006vy}. We have also shown how a sensible dynamics can be given to such an unusual supermultiplet, as well as how it can be coupled to the gauge superfield. As a final remark, we comment on the possible extension of our results to the noncommutative case. We remark that, for a scalar matter superfield, no such extension was possible using the gauge embedding method~\cite{Ferrari:2006vy}. In the present case, all manipulations needed to verify the duality were done without specifying the precise nature of the currents $j_{\alpha}$ and $k_{\alpha}$, which can be treated as composite fields. We recall the property of the Moyal-Groenewald $*$-product that, inside an integral, one $*$-product in a monomial of fields can be replaced by a usual product, i.e.~\cite{reviews}, \begin{equation} \int d^3 x \, \phi_1 * \phi_2 * \phi_3 * \cdots * \phi_n = \int d^3 x \, \phi_1 \left( \phi_2 * \phi_3 * \cdots * \phi_n\right)\,. \end{equation} \noindent That means we can generalize the master action, substituting all usual products by Moyal-Groenewald products, and we end up with \begin{align}\label{nc1} S_{\rm master}^{\rm (*)}=\int d^5z & \left[-\frac{m^2}{2}f^{\alpha}f_{\alpha}+m~f^{\alpha}W_{\alpha} +\frac{m}{2}A^{\alpha}W_{\alpha} \right. \nonumber\\ &\left. + k_{*}^{\alpha}f_{\alpha}+j_{*}^{\alpha}A_{\alpha}+\mathcal{L}_{M}(\Psi) \right]~, \end{align} \noindent where the $*$-product appears only inside the currents $j$ and $k$. In this way, the proof of equivalence between the SSD and SMCS theories follows as in the previous sections. Notice, however, that these are not full-fledged noncommutative SSD or SMCS theories, since the $*$-product only affects the matter currents. The difficulties in studying the equivalence between noncommutative SSD and SMCS theories cannot be solved by the methods presented in this paper; see, however, \cite{Gomes:2008pi}.
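\par As a simple illustration of the mechanism behind this trace property, note that the first correction to $\phi_1 * \phi_2$ beyond the pointwise product is proportional to the Poisson bracket $\{\phi_1,\phi_2\}$, whose integral vanishes for rapidly decaying fields. The following numerical sketch (an illustration we add here, with arbitrarily chosen Gaussian profiles) checks this at first order in the noncommutativity parameter:
\begin{verbatim}
# The integral of the Poisson bracket {f,h} (the first Moyal correction)
# vanishes for rapidly decaying f and h.
import numpy as np

L, N = 12.0, 512
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing='ij')

f = np.exp(-((X - 1.0)**2 + (Y + 0.5)**2))
h = (X + 0.3*Y)*np.exp(-(X**2 + Y**2))

# Spectral derivatives (the Gaussians vanish at the boundary, so the
# implicit periodic extension is harmless).
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing='ij')
dX = lambda u: np.fft.ifft2(1j*KX*np.fft.fft2(u)).real
dY = lambda u: np.fft.ifft2(1j*KY*np.fft.fft2(u)).real

bracket = dX(f)*dY(h) - dY(f)*dX(h)      # {f, h}
integral = bracket.sum()*dx**2
scale = np.abs(bracket).sum()*dx**2      # typical size of the integrand

print(integral, scale)
assert abs(integral) < 1e-8*scale        # zero, up to rounding error
\end{verbatim}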
\vspace{1cm} {\bf Acknowledgments.} This work was partially supported by Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq) and Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP). A.C.L. is supported by FAPESP project No. 2007/08604-1.
\section*{appendices}
\section{Introduction} \label{sec-intro} \subsection{Ratner's theorem and its effective versions} \par In 1991, Ratner \cite{ratner_2} proved the following fundamental theorem on equidistribution of unipotent orbits in homogeneous spaces: \begin{thm} \label{thm:ratner} Let $G$ be a Lie group and $\Gamma$ be a lattice in $G$, namely, the homogeneous space $X = G/\Gamma$ admits a $G$-invariant probability measure. Let $U = \{u(r): r\in {\mathbb{R}}\}$ be a one-parameter unipotent subgroup of $G$. For any $x \in X$, the closure of the $U$-orbit $U x$ of $x$ is a closed $L$-orbit $Lx$ for some Lie subgroup $L \subset G$. Moreover, $Ux$ is equidistributed in $Lx$ with respect to the unique $L$-invariant probability measure $\mu_L$, namely, for any $f \in C^\infty_c(X)$ (where $C^\infty_c(X)$ denotes the space of smooth functions on $X$ with compact support), \[ \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} f(u(r)x) {\mathrm{d}} r = \int_{Lx} f {\mathrm{d}} \mu_L .\] \end{thm} \par Since Ratner's proof relies on her measure classification theorem (cf. \cite{ratner-acta}, \cite{ratner-invent}, \cite{Ratner}) which proves Raghunathan's conjecture, it does not tell us how fast the orbit tends to equidistribution. Therefore, a natural question is whether we can make the equidistribution effective, namely, give an explicit upper bound on the difference \[ \left| \frac{1}{T} \int_{-T/2}^{T/2} f(u(r)x) {\mathrm{d}} r - \int_{Lx} f {\mathrm{d}} \mu_L \right| \] for given $T >0$. Moreover, effective versions of equidistribution of $U$-orbits have many applications to number theory; see \cite{lindenstrauss-margulis}, \cite{einsiedler-mohammadi}, \cite{LM-preprint2021}, \cite{LMW-preprint2022}, \cite{Venkatesh2010}, \cite{nelson-venkatesh}, \cite{einsiedler-margulis-venkatesh}, \cite{chow-yang}, \cite{Browning2016} and references therein for details. As a result, proving effective versions of Ratner's theorem has attracted much attention and has been a major challenge in homogeneous dynamics. \par In recent years, significant progress has been made in establishing effective equidistribution results. For nilpotent $G$, an effective version of Ratner's theorem was established by Green and Tao \cite{green_tao2012}. For $G = {\mathrm{SL}}(2, {\mathbb{R}})$, since the one-parameter unipotent subgroup is horospherical with respect to the diagonal subgroup, one can apply the thickening argument developed in Margulis's thesis (cf. \cite{margulis-thesis}, \cite{Klein-Mar-Effective-Equid}) and spectral gap results for unitary representations of semisimple groups to establish effective equidistribution. See \cite{sarnak1982}, \cite{burger1990}, \cite{Flaminio-Forni}, \cite{Sarnak-Ubis}, \cite{Strombergsson2004} and \cite{Strombergsson2013} for works in this setting. In fact, equidistribution for this case was proved before Ratner's theorem; see \cite{furstenberg1973}, \cite{dani-smillie}. Using a similar argument, one can also establish effective equidistribution results for horospherical unipotent orbits in homogeneous spaces. See \cite{Klein-Mar-Effective-Equid}, \cite{edwards-effective} for results in this setting. Using techniques from Fourier analysis, Strombergsson \cite{Strombergsson2015} established the effective equidistribution for $G/\Gamma = {\mathrm{SL}}(2, {\mathbb{R}}) \ltimes {\mathbb{R}}^2/ {\mathrm{SL}}(2,{\mathbb{Z}}) \ltimes {\mathbb{Z}}^2$, with $U$ being a one-parameter unipotent (and horospherical) subgroup in the semisimple part.
Building on Strombergsson's result, Chow and the author \cite{chow-yang} proved an effective equidistribution result for a special family of one-parameter unipotent orbits in $G/\Gamma = {\mathrm{SL}}(3,{\mathbb{R}})/{\mathrm{SL}}(3,{\mathbb{Z}})$. As an application, we proved that Gallagher's theorem in multiplicative Diophantine approximation holds for almost every point on any given planar straight line. The reader is also referred to \cite{Browning2016} for a similar effective equidistribution result which has applications to number theory. Strombergsson's result was recently generalized by Kim \cite{kim2021} to ${\mathrm{SL}}(n, {\mathbb{R}}) \ltimes {\mathbb{R}}^n/ {\mathrm{SL}}(n,{\mathbb{Z}}) \ltimes {\mathbb{Z}}^n$ with the unipotent subgroup being horospherical in the semisimple part. Recently, Lindenstrauss, Mohammadi and Wang \cite{LMW-preprint2022} established an effective version of Ratner's theorem for unipotent orbits in $G/\Gamma$ where $G = {\mathrm{SL}}(2, {\mathbb{R}}) \times {\mathrm{SL}}(2, {\mathbb{R}})$ or ${\mathrm{SL}}(2,\C)$. The reader is referred to \cite{einsiedler-margulis-venkatesh} and \cite{einsiedler-margulis-mohammadi-venkatesh} for effective equidistribution of closed orbits of maximal semisimple subgroups, which can be regarded as an effective version of a result by Mozes and Shah \cite{Mozes_Shah}. \par Concerning effective density, there are also significant results. Lindenstrauss and Margulis proved \cite{lindenstrauss-margulis} an effective density result for unipotent orbits in ${\mathrm{SL}}(3,{\mathbb{R}})/{\mathrm{SL}}(3,{\mathbb{Z}})$ and applied it to prove an effective version of Oppenheim's conjecture. Recently, Lindenstrauss and Mohammadi \cite{LM-preprint2021} established an optimal effective density result for unipotent orbits in $G/\Gamma$ for $G = {\mathrm{SL}}(2, {\mathbb{R}}) \times {\mathrm{SL}}(2, {\mathbb{R}})$ or ${\mathrm{SL}}(2, \C)$. \subsection{Notation} \label{subsec-notation} \par Throughout this paper we will fix the following notation. \par Given a normed vector space $V$ and $r >0$, let $B_V(r)$ denote the set of vectors in $V$ with norm $\leqslant r$. Given a Lie group $J$ and $r >0$, let $B^J(r)$ denote the $r$-neighborhood of the identity in $J$. Given a quantity $\mathcal A$, let $O(\mathcal A)$ denote a quantity whose absolute value is $\leqslant C \mathcal A$, where $C >0$ is an absolute constant. For an interval $I \subset {\mathbb{R}}$, let $|I|$ denote the length of $I$. For $L >0$, let $[L]$ denote the interval $[-L/2, L/2]$. \subsection{Main results} \label{subsec-main-results} \par In this paper we will focus on the following case: Let $G = {\mathrm{SL}}(3,{\mathbb{R}})$, $\Gamma = {\mathrm{SL}}(3,{\mathbb{Z}})$, $X = G/\Gamma$. Let $U = \{u(r): r\in {\mathbb{R}} \}$ be a one-parameter unipotent subgroup of $G$ defined as follows: \begin{equation} \label{eq:def-U} u(r) := \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & r \\ 0 & 0 & 1 \end{bmatrix}. \end{equation} Let $\mu_G$ denote the unique $G$-invariant probability measure on $X$. \par For $\beta > 0$, let us define \begin{equation} \label{eq:def-compact-set} X_\beta := \{ x = g\Gamma \in X: \|g \vv v\| \ge \beta \text{ for any nonzero } \vv v \in {\mathbb{Z}}^3 \text{ or } \bigwedge^2 {\mathbb{Z}}^3 \}. \end{equation} \par By Mahler's criterion, every compact subset of $X$ is contained in $X_\beta$ for some $\beta > 0$.
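\par To make the definition \eqref{eq:def-compact-set} concrete, membership in $X_\beta$ can be tested numerically for an explicit $g$. The following brute-force \texttt{numpy} sketch (an illustration we add for concreteness; searching a bounded box of integer vectors is adequate only for matrices of moderate size, and the helper names are ours) uses the fact that the induced action of $g$ on $\bigwedge^2 {\mathbb{R}}^3 \cong {\mathbb{R}}^3$ is by the cofactor matrix $\det(g)\,(g^{-1})^T$:
\begin{verbatim}
# Brute-force test of membership in X_beta for a lattice g Z^3.
import itertools
import numpy as np

def min_norms(g, radius=6):
    cof = np.linalg.inv(g).T          # action on wedge^2 R^3 (det g = 1)
    best1 = best2 = np.inf
    for c in itertools.product(range(-radius, radius + 1), repeat=3):
        if c == (0, 0, 0):
            continue
        v = np.array(c, dtype=float)
        best1 = min(best1, np.linalg.norm(g @ v))     # shortest in g Z^3
        best2 = min(best2, np.linalg.norm(cof @ v))   # shortest in wedge^2
    return best1, best2

def in_X_beta(g, beta, radius=6):
    n1, n2 = min_norms(g, radius)
    return n1 >= beta and n2 >= beta

g = np.diag([np.e, 1.0, 1.0/np.e])    # a point pushed toward the cusp
print(min_norms(np.eye(3)))           # (1.0, 1.0): Z^3 lies in X_1
print(in_X_beta(g, 0.5))              # False: g Z^3 has a short vector
\end{verbatim}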
\par For $t \in {\mathbb{R}}$, let us denote \begin{equation} \label{eq:define-a0} a_0(t) := \begin{bmatrix} e^{t/3} & & \\ & e^{-t/6} & \\ & & e^{-t/6} \end{bmatrix} \in G, \end{equation} and \begin{equation} \label{eq:define-b} a(t) := \begin{bmatrix} 1 & & \\ & e^{t/2} & \\ & & e^{-t/2} \end{bmatrix} \in G. \end{equation} We will prove the following result concerning effective equidistribution of $U$-orbits: \begin{thm} \label{thm:effective-U-orbit} There exist $C > 0$, $\eta >0$ and $t_0 >1$ such that for any $t \ge t_0$ and any $x \in X$, at least one of the following holds: \par (1) For any $f \in C^\infty_c(X)$ we have \[ \left| \int_{[1]} f(u(r e^t )x) {\mathrm{d}} r - \int f {\mathrm{d}} \mu_G \right| \leqslant C e^{-\eta t } \|f\|_S,\] where $\|\cdot\|_S$ denotes a fixed Sobolev norm; \par (2) $a(-t) a_0(\ell' + \ell) x \not\in X_{e^{-|\ell|/3}}$ for some $0 \leqslant \ell' \leqslant t$ and $ 0.9 t \leqslant |\ell| \leqslant t$. \end{thm} \par Proving Theorem \ref{thm:effective-U-orbit} is equivalent to proving the following theorem: \begin{thm} \label{thm:main-thm} Let $C$, $\eta$ and $t_0$ be as in Theorem \ref{thm:effective-U-orbit}. For any $t \ge t_0$ and any $x \in G/\Gamma$, one of the following holds: \par (1) \[ \left| \int_{[1]} f(a(t) u(r) x ) {\mathrm{d}} r - \int f {\mathrm{d}} \mu_G \right| \leqslant C e^{-\eta t} \|f\|_S;\] \par (2) $ a_0 (\ell' + \ell )x \not\in X_{e^{-|\ell|/3}}$ for some $ 0 \leqslant \ell' \leqslant t$ and $ 0.9 t \leqslant |\ell| \leqslant t$. \end{thm} \par In fact, the equivalence follows from the equality \[ \int_{[1]} f(a(t) u(r) x ) {\mathrm{d}} r = \int_{[1]} f(u(r e^t )y) {\mathrm{d}} r, \] where $y = a(t)x$. \par Using Theorem \ref{thm:effective-U-orbit}, we can easily prove the following two corollaries on effective equidistribution of expanding curves under translates of diagonal subgroups: \begin{cor} \label{cor:straight-line} Let us denote $a_1 (t) := a(t) a_0(t)$. Let $ \vv w = (w_1, w_2) \in {\mathbb{R}}^2$ be a vector with Diophantine exponent $\omega(\vv w) = \kappa \leqslant 1/2 + 1/10$, namely, for any positive integer $q \in {\mathbb{Z}}_+$, we have $$\max \{\langle q w_1 \rangle, \langle q w_2 \rangle\} \ge q^{-\kappa},$$ where $\langle \cdot \rangle$ denotes the distance to the nearest integer. For $\vv v =(v_1, v_2) \in {\mathbb{R}}^2$ let us denote \[ n(\vv v) := \begin{bmatrix} 1 & & v_2 \\ & 1 & v_1 \\ & & 1 \end{bmatrix}. \] Let us define \[\varphi: [1] \to {\mathbb{R}}^2 \] by $\varphi(r) := (r, w_1 r + w_2)$. Then there exist $C, \eta, t_0 >0$ such that for any $t \ge t_0$ and any $f \in C^\infty_c(X)$ we have \[ \left| \int_{[1]} f(a_1(t) n(\varphi(r) )\Gamma) {\mathrm{d}} r - \int f {\mathrm{d}} \mu_G \right| \leqslant C e^{-\eta t} \|f\|_S.
\] \end{cor} \begin{proof}[Proof assuming Theorem \ref{thm:main-thm}] \par Note that \[ a_1(t) n(\varphi(r)) = z(w_1) a(t) u(r) a_0(t) n^\ast (-w_1, w_2) ,\] where \[ z(w_1) := \begin{bmatrix} 1 & w_1 & \\ & 1 & \\ & & 1 \end{bmatrix}, \] and \[ n^\ast (-w_1, w_2) := \begin{bmatrix} 1 & -w_1 & w_2 \\ & 1 & \\ & & 1 \end{bmatrix}. \] Then for any $f \in C^\infty_c(X)$ with zero integral, we have that \[ \int_{[1]} f(a_1(t) n(\varphi(r))\Gamma) {\mathrm{d}} r = \int_{[1]} f_{w_1}(a(t) u(r) a_0(t) n^\ast (-w_1, w_2)\Gamma) {\mathrm{d}} r, \] where $f_{w_1}(x) = f(z(w_1)x)$. Note that since $\mu_G$ is $G$-invariant, $f_{w_1}$ also has zero integral. \par Let $x = a_0(t) n^\ast(-w_1, w_2)\Gamma$. Note that the Diophantine condition on $\vv w$ ensures that $a_0(\ell + \ell')x \in X_{e^{-|\ell|/3}}$ for any $0 \leqslant \ell' \leqslant t$ and $0.9 t \leqslant |\ell| \leqslant t$. Applying Theorem \ref{thm:main-thm} with $x = a_0(t) n^\ast(-w_1, w_2)\Gamma$ and $f_{w_1}$, we have that \[ \left| \int_{[1]} f_{w_1}(a(t) u(r) a_0(t) n^\ast (-w_1, w_2)\Gamma) {\mathrm{d}} r \right| \leqslant C e^{-\eta t} \|f_{w_1}\|_S. \] Noting that $\|f_{w_1}\|_S \ll \|f\|_S$, we complete the proof. \end{proof} \begin{remark} \label{rmk:expanding-line} An ineffective version of Corollary \ref{cor:straight-line} was proved in \cite{KNSY} for $\omega(\vv w) \leqslant 2$ using Ratner's theorem. The authors also proved that the equidistribution does not hold for $\omega(\vv w) > 2$. By analyzing the Diophantine condition in Theorem \ref{thm:main-thm} more carefully, one can easily get a better upper bound on $\omega(\vv w)$ which ensures effective equidistribution. It is an interesting problem whether one can get effective equidistribution for $\omega(\vv w) \leqslant 2$. \end{remark} \begin{cor} \label{cor:curve} Let us use the same notation as in Corollary \ref{cor:straight-line}. Let $\psi : [1] \to {\mathbb{R}}^2$ be a smooth non-degenerate curve in ${\mathbb{R}}^2$, namely, derivatives of $\psi$ span the whole space ${\mathbb{R}}^2$ at every $r \in [1]$. Then there exist $C, \eta, t_0$ such that for any $t \ge t_0$ and any $f \in C^\infty_c(X)$ we have \[ \left| \int_{[1]} f(a_1(t) n(\psi(r) )\Gamma) {\mathrm{d}} r - \int f {\mathrm{d}} \mu_G \right| \leqslant C e^{-\eta t} \|f\|_S. \] \end{cor} \begin{proof} \par Let us denote $\psi(r) = (r, \psi_2(r))$. Let us divide $[1]$ into small pieces of size $e^{-t/2}$. Let us fix a small piece $\Delta(r_0) = [r_0 - 1/2 e^{-t/2} , r_0 + 1/2 e^{-t/2}]$. Then \[\{a_1(t)n(\psi(r))\Gamma: r \in \Delta(r_0) \}\] can be approximated by \[ \{z(\psi_2'(r_0)) a(t/2) u(r) x: r \in [-1/2,1/2]\} \] where \[ z (\psi_2'(r_0)) = \begin{bmatrix} 1 & \psi_2'(r_0) & \\ & 1 & \\ & & 1 \end{bmatrix},\] as defined in Corollary \ref{cor:straight-line}, and \[ x = a_0(t/2) a_1(t/2) z(-\psi_2'(r_0)) n(\psi(r_0)) \Gamma. \] Then if we can show that \[ \{ a(t/2) u(r) x: r \in [1]\} \] is effectively equidistributed, we will get that \[\{a_1(t)n(\psi(r)): r \in \Delta(r_0) \}\] is effectively equidistributed.
By Theorem \ref{thm:main-thm}, if we can prove that for any $0 \leqslant \ell' \leqslant t/2$ and any $0.495 t \leqslant |\ell| \leqslant t/2$, \[ a_0(\ell + \ell')a_0(t/2) a_1(t/2) z(-\psi_2'(r_0)) n(\psi(r_0)) \Gamma \in X_{e^{-t/10}}, \] then we are done. Now for a fixed $\ell'' \in [-t/2 , t]$, let us estimate the measure of \[ \mathfrak m_{\ell''} := \{ r\in [1]:a_0(\ell'')a_0(t/2) a_1(t/2) z(-\psi_2'(r)) n(\psi(r)) \Gamma \not\in X_{e^{-t/10}} \}.\] By \cite[Theorem 1.4]{Bernik-Kleinbock-Margulis}, we have that $|\mathfrak m_{\ell''}| = O(e^{-\alpha_2 t})$ for some constant $\alpha_2 >0$ independent of $\ell''$. Let us remove all $\mathfrak m_{\ell''}$ from $[1]$ and get a subset $\mathfrak m \subset [1]$. Then we have that $$\left|[1]\setminus \mathfrak m\right| = O(t e^{-\alpha_2 t}),$$ and for any $r_0 \in \mathfrak m$, \[\{a_1(t)n(\psi(r))\Gamma: r \in \Delta(r_0) \}\] is effectively equidistributed in $X$. Combining these two facts, we conclude that the whole orbit \[ \{a_1(t) n(\psi(r)) \Gamma : r \in [1] \} \] is effectively equidistributed. This completes the proof. \end{proof} \begin{remark} \label{rmk:expanding-curve} The ineffective version of Corollary \ref{cor:curve} was proved in \cite{Shah_2} using Ratner's theorem. \end{remark} \subsection*{Acknowledgements} The author thanks Wen Huang, Dmitry Kleinbock, Elon Lindenstrauss, Amir Mohammadi, Nimish Shah, Ralf Spatzier, Zhiren Wang and Barak Weiss for valuable discussions and Victor Beresnevich, Sam Chow, Wen Huang, Elon Lindenstrauss and Jens Marklof for valuable comments on an earlier version of the paper. The author is supported in part by NSFC grant No. 12171338. \section{Preliminaries} \label{sec-prelim} \par In this section we recall some basic facts on ${\mathrm{SL}}(3, {\mathbb{R}})$ and its Lie algebra which will be used in the proof of our main theorem. \par Let $G = {\mathrm{SL}}(3,{\mathbb{R}})$, and $H \subset G$ be the following subgroup of $G$: \begin{equation} \label{eq:def-H} H := \left\{ \begin{bmatrix} 1 & \\ & h \end{bmatrix} \in G : h \in {\mathrm{SL}}(2,{\mathbb{R}}) \right\}. \end{equation} Clearly $H$ is isomorphic to ${\mathrm{SL}}(2,{\mathbb{R}})$. \par Let $\mathfrak g$ and $\mathfrak h$ denote the Lie algebras of $G$ and $H$, respectively. \par Considering the adjoint action of $H$ on $\mathfrak g$, we have the following decomposition: \begin{equation} \label{eq:decomp-g} \mathfrak g = \mathfrak h + \mathfrak r_0 + \mathfrak r_1 + \mathfrak r_2, \end{equation} where $\mathfrak r_0, \mathfrak r_1, \mathfrak r_2$ are invariant subspaces with respect to the adjoint action of $H$, and $\mathrm{dim} \mathfrak r_1 = \mathrm{dim} \mathfrak r_2 =2$, $\mathrm{dim} \mathfrak r_0 =1$.
In particular, $\mathfrak r_0 = {\mathbb{R}} \mathfrak a_0$ where \[ \mathfrak a_0 = \begin{bmatrix} 1/3 & 0 & 0 \\ 0 & -1/6 & 0 \\ 0 & 0 & -1/6 \end{bmatrix}; \] $\mathfrak r_1 = {\mathbb{R}} \mathbf v_1 + {\mathbb{R}} \mathbf v_2$ where \[ \mathbf v_1 = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \] and \[\mathbf v_2 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix};\] $\mathfrak r_2 = {\mathbb{R}} \mathbf w_1 + {\mathbb{R}} \mathbf w_2$ where \[ \mathbf w_1 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \] and \[\mathbf w_2 = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.\] The adjoint action of $H$ on $\mathfrak r_1$ and $\mathfrak r_2$ is given as follows: the adjoint action of $H$ on $\mathfrak r_1$ is the same as the standard action of ${\mathrm{SL}}(2,{\mathbb{R}})$ on ${\mathbb{R}}^2$ if we choose $\{\vv v_1, \vv v_2\}$ as the basis; for $h$ corresponding to $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ and $\vv w = x_1 \vv w_1 + x_2 \vv w_2$, \[{\mathrm{Ad}} (h) \vv w = (a x_1 - b x_2) \vv w_1 + (-c x_1 + d x_2) \vv w_2.\] \par Let $\{\mathfrak a , \mathfrak u, \mathfrak u^\ast \}$ denote the standard basis of $\mathfrak h$, where $\mathfrak a$ corresponds to \[\begin{bmatrix} 1/2 & 0 \\ 0 & -1/2 \end{bmatrix} \in {\mathfrak{sl}}_2({\mathbb{R}}),\] $\mathfrak u$ corresponds to \[\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \in {\mathfrak{sl}}_2({\mathbb{R}}), \] and $\mathfrak u^\ast$ corresponds to \[ \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \in {\mathfrak{sl}}_2({\mathbb{R}}).\] \par Let us denote \[ A := \left\{ a(t) = \exp(t \mathfrak a): t \in {\mathbb{R}} \right\} \subset H, \] \[U := \left\{u(r) = \exp(r \mathfrak u) : r \in {\mathbb{R}} \right\} \subset H,\] and \[U^\ast := \left\{u^\ast (r) = \exp(r \mathfrak u^\ast): r \in {\mathbb{R}} \right\} \subset H.\] Note that $a_0(t) = \exp(t \mathfrak a_0)$; the adjoint action of $a_0(t)$ on $\mathfrak g$ is as follows: \[ {\mathrm{Ad}} (a_0(t)) \mathbf h = \mathbf h, \text{ for } \mathbf h \in \mathfrak h, \] \[{\mathrm{Ad}} (a_0(t)) \mathbf v = e^{-t/2} \mathbf v, \text{ for } \mathbf v \in \mathfrak r_1, \] and \[ {\mathrm{Ad}} (a_0(t)) \mathbf w = e^{t/2} \mathbf w, \text{ for } \mathbf w \in \mathfrak r_2. \] Let us denote \begin{equation} \label{eq:def-b} b(t) := a(t)a_0(-t) = \begin{bmatrix} e^{-t/3} & & \\ & e^{2t/3} & \\ & & e^{-t/3} \end{bmatrix} \in G, \end{equation} and \begin{equation} \label{eq:def-a1} a_1(t) := a(t)a_0(t) = \begin{bmatrix} e^{t/3} & & \\ & e^{t/3} & \\ & & e^{-2t/3} \end{bmatrix} \in G. \end{equation} \par Let us denote \begin{equation} \label{eq:r+} \vv r^+ := {\mathbb{R}} \vv w_1 + {\mathbb{R}} \vv v_1, \end{equation} and \begin{equation} \label{eq:r-} \vv r^-:= {\mathbb{R}} \vv w_2 + {\mathbb{R}} \vv v_2. \end{equation} Let us denote by $p_{1,2}: \mathfrak g \to \mathfrak r_1 + \mathfrak r_2$ the projection of $\mathfrak g$ to $\mathfrak r_1 + \mathfrak r_2$. For $i=0, 1,2$, let $p_i : \mathfrak g \to \mathfrak r_i$ denote the projection from $\mathfrak g$ to $\mathfrak r_i$. Let $p_+$, $p_-$, $p_{\vv w_1}$, $p_{\vv v_1}$ and $p_{\mathfrak u^\ast}$ denote the projection to $\vv r^+$, $\vv r^-$, ${\mathbb{R}} \vv w_1$, ${\mathbb{R}} \vv v_1$ and ${\mathbb{R}} \mathfrak u^\ast$, respectively. \subsection{Outline of the proof} \label{subsec-outline} \par In this subsection we will give the outline of the proof.
\par The proof is inspired by Ratner's original proof of her measure rigidity theorem \cite{ratner-acta} and recent papers by Lindenstrauss, Mohammadi and Wang \cite{LM-preprint2021,LMW-preprint2022}. However, compared with previous works, we take a quite different approach in this paper. \par We will start with $\mathcal F_0 = a(s)u([1])x$ for $s = t - \delta_1 t$ and analyze the dimension in directions transversal to the $H$-orbit direction. \par In \S \ref{subsec-initialize}, we will show that $\mathcal F_0$ has certain dimension control in the transversal direction unless condition (2) in Theorem \ref{thm:main-thm} holds. Here we are allowed to remove an exponentially small proportion from $\mathcal F_0$. At this step, the quantitative non-divergence results we prove in \S \ref{sec-quantitative-nondivergence} are needed. This is the starting point of our proof. The argument in this part is similar to the corresponding parts in \cite{LM-preprint2021,LMW-preprint2022}. \par \S \ref{subsec-dimension-improvement} is the crucial part. In that subsection, we will construct a sequence $\{\mathcal F_i : i \in \N\}$ of $U$-orbits starting from $\mathcal F_0$, where $\mathcal F_{i+1} = a(s_i)\mathcal F_i$ for each $i \ge 0$. We will prove that $\mathcal F_{i+1}$ has a better dimension control compared with $\mathcal F_i$ (at the cost of a larger scale). To prove this, we introduce a Kakeya-type model to study the divergence of nearby $U$-orbits and calculate the weighted intersection number of the Kakeya-type model. The outcome of the calculation is the following: either $\mathcal F_{i+1}$ has a better dimension control, or the whole orbit is close to a closed orbit of a subgroup isomorphic to ${\mathrm{SL}}(2, {\mathbb{R}}) \ltimes {\mathbb{R}}^2$. If the former happens we will continue, and if the latter happens we will stop. At each step we are allowed to remove an exponentially small proportion. This part is novel compared with previous works. \par The inductive construction given as above will provide an $\mathcal F_n$ which either has nice dimension control along all directions transversal to the $H$-orbit direction, or is close to a closed ${\mathrm{SL}}(2, {\mathbb{R}}) \ltimes {\mathbb{R}}^2$-orbit and has nice dimension control along directions in the closed orbit and transversal to the $H$-orbit direction. Then we can apply Proposition \ref{prop:high-dimension-to-equidistribution-a}, \ref{prop:high-dimension-to-equidistribution-a1} or \ref{prop:high-dimension-to-equidistribution-b}, given in \S \ref{sec-high-dim-to-equidistribution}, to conclude effective equidistribution. All these propositions can be proved by following a van der Corput argument due to Venkatesh (see \cite[\S 3]{Venkatesh2010} and \cite[Proposition 4.2]{LM-preprint2021} for details).
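\par Before proceeding, we record a quick numerical sanity check (illustrative only, and not used in the proofs) of computations that appear repeatedly: the renormalization $a(t)u(r) = u(re^t)a(t)$ relating Theorem \ref{thm:effective-U-orbit} and Theorem \ref{thm:main-thm}, the decomposition used in the proof of Corollary \ref{cor:straight-line}, and the ${\mathrm{Ad}}$-weights of $a(t)$ and $a_0(t)$ on the basis directions introduced above:
\begin{verbatim}
import numpy as np

def u(r):  return np.array([[1, 0, 0], [0, 1, r], [0, 0, 1.0]])
def a(t):  return np.diag([1.0, np.exp(t/2), np.exp(-t/2)])
def a0(t): return np.diag([np.exp(t/3), np.exp(-t/6), np.exp(-t/6)])
def n(v1, v2):      # v2 sits in the (1,3) entry, v1 in the (2,3) entry
    return np.array([[1, 0, v2], [0, 1, v1], [0, 0, 1.0]])
def z(w):  return np.array([[1, w, 0], [0, 1, 0], [0, 0, 1.0]])
def nstar(p, q):
    return np.array([[1, p, q], [0, 1, 0], [0, 0, 1.0]])

t, r = 1.3, -0.7
# (i) the renormalization a(t) u(r) = u(r e^t) a(t):
assert np.allclose(a(t) @ u(r), u(r*np.exp(t)) @ a(t))

# (ii) the decomposition a_1(t) n(phi(r)) = z(w1) a(t) u(r) a0(t) n*(-w1,w2)
#      with phi(r) = (r, w1 r + w2):
w1, w2 = 0.4, 1.1
lhs = (a(t) @ a0(t)) @ n(r, w1*r + w2)
rhs = z(w1) @ a(t) @ u(r) @ a0(t) @ nstar(-w1, w2)
assert np.allclose(lhs, rhs)

# (iii) Ad-weights of a(t) and a0(t) on u, u*, v1, v2, w1, w2:
def E(i, j):
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

basis = {'u': E(1, 2), 'u*': E(2, 1), 'v1': E(1, 0),
         'v2': E(2, 0), 'w1': E(0, 2), 'w2': E(0, 1)}
wa  = {'u': 1.0, 'u*': -1.0, 'v1': 0.5, 'v2': -0.5, 'w1': 0.5, 'w2': -0.5}
wa0 = {'u': 0.0, 'u*': 0.0, 'v1': -0.5, 'v2': -0.5, 'w1': 0.5, 'w2': 0.5}
for name, X in basis.items():
    assert np.allclose(a(t) @ X @ a(-t), np.exp(wa[name]*t)*X)
    assert np.allclose(a0(t) @ X @ a0(-t), np.exp(wa0[name]*t)*X)
print("commutation relation, decomposition and Ad-weights verified")
\end{verbatim}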
\section{High transversal dimension to equidistribution} \label{sec-high-dim-to-equidistribution} \par We will need the following results to get effective equidistribution from high transversal dimension: \begin{prop} \label{prop:high-dimension-to-equidistribution-a} There exist constants $C , \theta >0$ such that the following holds: For any $\epsilon >0$, there exist $ \eta, s_0 >0$ depending on $\epsilon$ such that for any $s' \ge s_0$ with $\beta = e^{-\epsilon^2 s'}$, any Borel probability measure $\rho$ on $[\beta]^2$ with dimension larger than $2-\theta$ at scale $s'$, that is, for any square $I \subset [\beta]^2$ of side length $\beta e^{-s'}$, we have \[ \rho(I) \leqslant |I|^{2-\theta},\] any function $f \in C_c^\infty(X)$, and any $x \in X$, we have \[ \left| \int_{[1]^2} \int_{[1]} f(a(2s')u(r)\exp(\beta w_1 \vv w_1 + \beta v_1 \vv v_1) x) {\mathrm{d}} r {\mathrm{d}} \rho(w_1, v_1) - \int f {\mathrm{d}} \mu_G \right| \leqslant C e^{-\eta s'} \|f\|_S.\] \end{prop} \begin{proof} \par The proof can be completed by following the proof of \cite[Proposition 4.2]{LM-preprint2021} step by step. \end{proof} \par For $a_1(s')$ and $b(s')$, we have similar results: \begin{prop} \label{prop:high-dimension-to-equidistribution-a1} There exist constants $ \theta, C>0$ such that the following holds: For any $\epsilon >0$, there exist $\eta, s_0 >0$ depending on $\epsilon$, such that for any $s' \ge s_0$ with $\beta = e^{-\epsilon^2 s'}$, any Borel probability measure $\rho$ on $[\beta]$ with dimension larger than $1-\theta$ at scale $s'$, that is, for any interval $I \subset [\beta]$ of length $\beta e^{-s'}$, we have \[ \rho(I) \leqslant |I|^{1-\theta},\] any function $f \in C_c^\infty(X)$, and any $x \in X$, we have \[ \left| \int_{[1]} \int_{[1]} f(a_1(s')u(r)\exp(\beta y \vv w_1) x) {\mathrm{d}} r {\mathrm{d}} \rho(y) - \int f {\mathrm{d}} \mu_G \right| \leqslant C e^{-\eta s'} \|f\|_S.\] \end{prop} \begin{prop} \label{prop:high-dimension-to-equidistribution-b} There exist constants $ \theta, C>0$ such that the following holds: For any $\epsilon >0$, there exist $\eta, s_0 >0$ depending on $\epsilon$, such that for any $s' \ge s_0$ with $\beta = e^{-\epsilon^2 s'}$, any Borel probability measure $\rho$ on $[\beta]$ with dimension larger than $1-\theta$ at scale $s'$, that is, for any interval $I \subset [\beta]$ of length $\beta e^{-s'}$, we have \[ \rho(I) \leqslant |I|^{1-\theta},\] any function $f \in C_c^\infty(X)$, and any $x \in X$, we have \[ \left| \int_{[1]} \int_{[1]} f(b(s')u(r)\exp(\beta y \vv v_1) x) {\mathrm{d}} r {\mathrm{d}} \rho(y) - \int f {\mathrm{d}} \mu_G \right| \leqslant C e^{-\eta s'} \|f\|_S.\] \end{prop} \section{Quantitative non-divergence} \label{sec-quantitative-nondivergence} \par This section is devoted to proving the following quantitative non-divergence result: \begin{prop} \label{prop:quantitative-nondivergence} There exist constants $ \alpha_1, C_1 > 0$ such that the following holds: For any $\beta, s >0$, if $x \in X$ satisfies $ a_0(-s)x , a_0(s)x \in X_{e^{- s/3 }}$, then \[ |\{r \in [1]: a(s)u(r )x \not\in X_\beta \}| \leqslant C_1
\beta^{\alpha_1}. \] \end{prop} \begin{proof} \par Let us denote $x = g \Gamma$. By the Kleinbock-Margulis quantitative non-divergence theorem (cf. \cite[Theorem 2.2]{kleinbock2008}, \cite[Theorem 5.2]{Klein_Mar}), if the statement does not hold, then there exists $\vv v \in \bigwedge^i {\mathbb{Z}}^3$ (where $i = 1$ or $2$) such that \[ \max_{r \in [1]}\| a(s) u(r) g \vv v\| \leqslant 1. \] \par \textbf{Case 1.} $\vv v \in {\mathbb{Z}}^3$: Let us denote $ \vv v' = a(s) g \vv v = (v'_1, v'_2, v'_3)$. Then \[ a(s) u(r) g \vv v = u(r e^{s}) \vv v' = (v'_1 , v'_2 + r e^{s} v'_3, v'_3). \] Then $\|a(s) u(r ) g \vv v\| \leqslant 1$ implies that $|v'_1| \leqslant 1 $ and $|v'_2 + r e^{s} v'_3| \leqslant 1$ for any $r \in [1]$. The latter easily implies that $|v'_2| \leqslant 1$ and $|v'_3| \leqslant e^{-s}$. This implies that $\|a_1(-s) \vv v'\|\leqslant e^{-s/3}$. Noting that $a_1(s) = a(s) a_0(s)$, we complete the proof for \textbf{Case 1}. \par \textbf{Case 2.} $\vv v \in \bigwedge^2 {\mathbb{Z}}^3$: Let us denote $\vv v' = a(s) g \vv v = (v'_1, v'_2, v'_3 )$, where we use the coordinates with respect to the basis $\{\vv e_2 \wedge \vv e_3, \vv e_1 \wedge \vv e_3, \vv e_1 \wedge \vv e_2\}$. Then \[ a(s) u(r ) g \vv v = u(r e^s) \vv v' = (v'_1 , v'_2 , v'_3 + r e^{s} v'_2). \] By repeating the same argument as in Case 1, we have $|v'_1|, |v'_3| \leqslant 1$ and $|v'_2| \leqslant e^{-s}$, which implies that $\|b(-s) \vv v'\|\leqslant e^{-s/3}$. Noting that $b(s) = a(s) a_0(-s)$, we complete the proof for \textbf{Case 2}. \end{proof} \par Note that for any $ \beta> 0 $ small enough and any $x \in X_{\beta^{1/4}}$, $B^G(\beta)x$ embeds into $X$ injectively. \par We will need the following lemma: \begin{lemma} \label{lm:separation-lemma-a} For any $\beta >0$, any $\ell >1$ large enough (depending on $\beta$), and any $x \in X$ with $a(-\ell)x \in X_{\beta^{1/4}}$, suppose that \[ \exp( r_1 e^\ell \mathfrak u + r_2 e^{\ell/2} \vv v_1 + r_3 e^{\ell/2} \vv w_1) x = \exp(\vv v)x, \] where $ r_1 , r_2, r_3 \in [\beta]$, $\|(r_1, r_2, r_3)\| \ge \beta/4$, and $\vv v \in B_\mathfrak g (\beta)$, and write \[ \vv v = u \mathfrak u + u^\ast \mathfrak u^\ast + a \mathfrak a + a_0 \mathfrak a_0 + \sum_{i=1}^2 w_i \vv w_i + v_i \vv v_i. \] If $|u^\ast| < \beta e^{-\ell}$, then $\|(v_2, w_2)\| \ge \beta e^{-\ell/2}$. \end{lemma} \begin{proof} \par For a contradiction, let us assume that $|u^\ast|<\beta e^{-\ell}$ and $\|(v_2, w_2)\| < \beta e^{-\ell/2}$. Let us denote $x = g\Gamma$. Then \[ \exp( r_1 e^\ell \mathfrak u + r_2 e^{\ell/2} \vv v_1 + r_3 e^{\ell/2} \vv w_1) g = \exp(\vv v)g \gamma, \] for some $\gamma \in \Gamma$. For $\ell >1$ large enough, we have $\gamma \neq e$. Then we have \[ \exp(-\vv v) \exp( r_1 e^\ell \mathfrak u + r_2 e^{\ell/2} \vv v_1 + r_3 e^{\ell/2} \vv w_1) = g \gamma g^{-1}.
\] Then \begin{align*} a(-\ell) g \gamma g^{-1} a(\ell) &= a(-\ell)\exp(-\vv v) \exp( r_1 e^\ell \mathfrak u + r_2 e^{\ell/2} \vv v_1 + r_3 e^{\ell/2} \vv w_1) a(\ell) \\ &= \exp(- {\mathrm{Ad}} (a(-\ell))\vv v ) \exp (r_1 \mathfrak u + r_2 \vv v_1 + r_3 \vv w_1). \end{align*} Since $|u^\ast| < \beta e^{-\ell}$ and $\|(v_2, w_2)\| < \beta e^{-\ell/2}$, we have $\|{\mathrm{Ad}} (a(-\ell))\vv v\| < \beta$. This implies that \[ a(-\ell) g \gamma g^{-1} a(\ell) \in B^G(\beta), \] which means that $B^G(\beta) a(-\ell) x$ does not embed into $X$ injectively. This contradicts $a(-\ell)x \in X_{\beta^{1/4}}$. This completes the proof. \end{proof} \section{Dimension Control} \label{sec-dimension-improvement} \subsection{List of constants} \label{subsec-constants} \par We first list all the constants we will use later. \par Let $ \delta_1 = \delta_2 = 10^{-5}$ and $\alpha = 10^{-6}$. Let $\theta >0$ be the constant from Proposition \ref{prop:high-dimension-to-equidistribution-a}. Let $C_1, \alpha_1>0$ be the constants as in Proposition \ref{prop:quantitative-nondivergence}. \par Let $C_2 = 10 \alpha^{-1}$, $\epsilon_1 = 10^{-5}\alpha \theta$, $N_1 = 10^5 \epsilon_1^{-1}$, and $\alpha_2 = \alpha_1/4$. \par Let $\epsilon = e^{- 10^5 N_1}$, $\beta = e^{-\epsilon^3 t}$, and $\delta_3 = 10^{-5} \theta \epsilon$. \subsection{Initial construction of transversal sets} \label{subsec-initial-construction} \par We first introduce some notation. \par For $\ell >1$ and a subspace $V \subset \mathfrak g$ generated by some vectors from the canonical basis of $\mathfrak g$, namely, $\{\mathfrak u, \mathfrak a, \mathfrak u^\ast, \mathfrak a_0, \vv v_{1,2}, \vv w_{1,2}\}$, let us define \begin{equation} \label{eq:def-thin} Q(V, \ell) := \{\exp(\vv w): \vv w \in B_\mathfrak g(\beta), \|p_{V}(\vv w)\|\leqslant \beta e^{-\ell}\}, \end{equation} where $p_V$ denotes the projection to $V$ with respect to the canonical basis. \par Let us denote \begin{equation} \label{eq:def-QH} Q^H(\ell) := \{ h = \exp(\vv h) \in B^H(\beta), \| p_{\mathfrak u^\ast}(\vv h) \| \leqslant \beta e^{-\ell} \}, \end{equation} and \begin{equation} \label{eq:def-RH} R^H(\ell) := \{ h = \exp(\vv h) \in B^H(\beta), \| p_{\mathfrak u}(\vv h) \| \leqslant \beta e^{-\ell} \}. \end{equation} \par For $s_1, s_2 \ge 0$, let us denote \begin{equation} \label{eq:def-Q} Q^{s_1}_{ s_2} := \{\exp(\vv w): \vv w \in \mathfrak r_1 + \mathfrak r_2, \|p_+(\vv w)\| \leqslant \beta e^{-s_1}, \|p_- (\vv w)\| \leqslant \beta e^{-s_2} \},\end{equation} and $Q_s := Q_s^s$. \par Let $x \in X$ be such that (2) in Theorem \ref{thm:main-thm} does not hold. Let us start with the normalized measure on $\mathcal E = a(s) u([1])x$ where $s = (1 -\delta_1) t >0$. This is a $U$-orbit of length $e^{s}$.
Let us define $\mathcal F \subset \mathcal E$ as follows: \begin{defn} \label{def:initial-construction} \par $\mathcal F \subset \mathcal E$ is defined by removing from $\mathcal E$ the points $y = a ( s) u(r)x \in \mathcal E$ satisfying $ a_0(\ell') a(-\ell) y \not\in X_{\beta^{1/4}}$ for some $0 \leqslant \ell \leqslant \delta_2 t$ and $0 \leqslant \ell' \leqslant t$. Let $\mu_{\mathcal F}$ denote the normalized $U$-orbit measure on $\mathcal F$. \end{defn} \par By Proposition \ref{prop:quantitative-nondivergence} and our assumption on $x$, it is easy to show that the removed proportion is $O(s^2 \beta^{ \alpha_2})$. Therefore, \begin{equation} \label{eq:decomposition-measure} \mu_{\mathcal E} = \mu_{\mathcal F} + O(s^2 \beta^{ \alpha_2}), \end{equation} where $\mu_{\mathcal E}$ denotes the normalized $U$-invariant measure on $\mathcal E$. \subsection{Initial dimension bound} \label{subsec-initialize} \par We shall prove the following proposition: \begin{prop} \label{prop:initial-bound-dimension} For any $\ell \in [\delta_3 t, t]$ and any $x_0 \in \mathcal F$, we have \[ \mu_{\mathcal F} (Q_\ell B^{A_0 H}(\beta) x_0) \leqslant e^{- \alpha \ell}. \] \end{prop} \begin{proof} \par Let $\eta_1 = 100 \alpha$. Let us divide $a(s)u([1])x$ into pieces of length $e^{\eta_1 \ell}$. \par It suffices to show that for each piece $\tilde \mathcal E$ with $\tilde \mathcal F := \tilde \mathcal E \cap \mathcal F \neq \emptyset$ and any $x_0 \in \tilde \mathcal F$, \[ |\tilde \mathcal F \cap Q_\ell B^{A_0 H}(\beta) x_0| \leqslant e^{- \alpha \ell} e^{\eta_1 \ell}. \] \par For a contradiction, let us assume that the statement does not hold for some $x_0\in \tilde \mathcal F$. Then we can find at least $e^{\eta_2 \ell}$ (where $\eta_2 = \eta_1/4$) many $x \in \tilde \mathcal F \cap Q_\ell B^{A_0 H}(\beta) x_0$ and $r_x \in [1]$ such that \[ u(r_x e^{2\alpha \ell}) x = \exp (\vv v_x) a_0^x h_x x, \] where $a_0^x \in B^{A_0}(\beta)$, $h_x \in B^H(\beta)$, and $\vv v_x \in B_{\mathfrak r_1 + \mathfrak r_2}(\beta e^{-\ell})$. Then $x$ satisfies $x \in X_{\beta^{1/4}}$ and $a(-2\alpha \ell) x \in X_{\beta^{1/4}}$. By Lemma \ref{lm:separation-lemma-a}, if we write $h_x = \exp(\vv h_x)$, we have $\|p_{\mathfrak u^\ast}(\vv h_x)\| \ge \beta e^{-2\alpha \ell}$. \par Let us write $x = u_x x_0$ where $u_x = u(r e^{\eta_1 \ell})$ for some $r \in [1]$. Let us fix a representative $g_0 \in G$ of $x_0$; then we have \[ u(r_x e^{2\alpha \ell}) u_x g_0 = \exp (\vv v_x ) a^x_0 h_x u_x g_0 \gamma_x, \] for some $\gamma_x \in \Gamma$. The above equality is equivalent to \begin{equation} \label{eq:bound-2} g_0 \gamma_x g^{-1}_0 = (a_0^x h_x u_x)^{-1} \exp(-\vv v_x) u(r_x e^{2\alpha \ell}) u_x. \end{equation} \par Without loss of generality, we can assume that for different $x$ and $x'$ as above, $|u^{-1}_{x'} u_x| \ge e^{5 \alpha \ell}$. \par For each $x$ we get $\gamma_x \in \Gamma$. We claim that those $\gamma_x$'s are different. In fact, if there exist $x, x'$ such that $\gamma_x = \gamma_{x'}$, then we have \begin{align*} & (a_0^x h_x u_x)^{-1} \exp(-\vv v_x) u(r_x e^{2\alpha \ell}) u_x \\ = & (a_0^{x'} h_{x'} u_{x'})^{-1} \exp(-\vv v_{x'}) u(r_{x'} e^{2\alpha \ell}) u_{x'}.
\end{align*} This implies that \begin{align*} & \exp(\vv v'_x) (a_0^x)^{-1} u_x^{-1} h_x^{-1} u_x u(r_x e^{2\alpha \ell}) \\ = & \exp(\vv v'_{x'}) (a_0^{x'})^{-1} u_{x'}^{-1} h_{x'}^{-1} u_{x'} u(r_{x'} e^{2\alpha \ell}), \end{align*} where $\vv v'_x = - {\mathrm{Ad}}((a_0^x h_x u_x)^{-1})\vv v_x$ and $\vv v'_{x'}$ denotes the same expression with $x$ replaced by $x'$. By comparing the $\mathfrak r_1 + \mathfrak r_2$, $A_0$ and $H$ components of both sides, we have $\exp(\vv v'_x) = \exp(\vv v'_{x'})$, $a_0^x = a_0^{x'}$, and \[ u_x^{-1} h_x^{-1} u_x u(r_x e^{2\alpha \ell}) = u_{x'}^{-1} h_{x'}^{-1} u_{x'} u(r_{x'} e^{2\alpha \ell}).\] This implies that \begin{equation} \label{eq:bound-3} u(-L) h_x u(L) = u(r e^{2\alpha \ell}) h_{x'}, \end{equation} where $u(L) = u_x u^{-1}_{x'}$ and $r = r_x - r_{x'}$. Note that $|r|\leqslant 1$ and $|L| \ge e^{5 \alpha \ell}$. The bound $|r| \leqslant 1$ implies that the norm of the right hand side is $\leqslant e^{2\alpha \ell}$. On the other hand, the left hand side is equal to \[ \exp({\mathrm{Ad}}(u(-L))\vv h_x ) = \exp ({\mathrm{Ad}}(u(-L)) (a_0 \mathfrak a + a_1 \mathfrak u + a_2 \mathfrak u^\ast) ), \] whose $\mathfrak u$ coordinate is $a_1 + a_0 L + a_2 L^2$. Since $|a_2| \ge \beta e^{-2\alpha \ell}$ and $|L| \ge e^{5 \alpha \ell}$, we have that \[ |a_1 + a_0 L + a_2 L^2| \ge \beta e^{8\alpha \ell}. \] Therefore, the equality \eqref{eq:bound-3} cannot hold, and we conclude the claim. \par Let us consider the adjoint action of $G$ on $\bigoplus_{i=1}^8 \wedge^i \mathfrak g$ and denote $$v_H = \mathfrak a \wedge \mathfrak u \wedge \mathfrak u^{\ast}.$$ Then the stabilizer of $v_H$ is \[{\mathrm{Stab}}(v_H) = A_0 H.\] Then \eqref{eq:bound-2} implies that \[ \gamma_x g_0^{-1} v_H = \exp({\mathrm{Ad}} ( g_0^{-1})\vv v'_x) g_0^{-1} v_H. \] The norm of $g_0^{-1} v_H$ is $\leqslant \|g_0^{-1}\|^{300}$. Let us estimate the norm of $\vv v'_x$. Since $\|u_x\| \leqslant e^{\eta_1 \ell}$, the norm of $(a_0^x h_x u_x)^{-1}$ is bounded by $2 e^{\eta_1 \ell}$. Therefore, $\|\vv v'_x\| \leqslant e^{-\ell + \eta_1 \ell}$. Let us denote $v_0 = g_0^{-1} v_H/ \| g_0^{-1} v_H\|$; then we have \[\| \gamma_x v_0 - v_0\| \leqslant \|\vv v'_x\| \leqslant e^{- (1-\eta_1) \ell}. \] \par Considering the group generated by the $\gamma_x$'s, we have the following two cases: \par \textbf{Case 1}: $\langle \gamma_x \rangle$ is abelian. \par \textbf{Case 2}: $\langle \gamma_x \rangle$ is not abelian. \par For \textbf{Case 1}, we claim that there exists $\gamma_x$ whose unipotent part is not trivial. In fact, if every $\gamma_x$ were diagonalizable, then they would belong to a common maximal torus in $G$. Since $\|\gamma_x\| \leqslant e^{\eta_1 \ell}$, there would be at most $\ell^{100}$ different $\gamma_x$'s, which contradicts the fact that there are $e^{\eta_2 \ell}$ different $\gamma_x$'s. Since $\gamma_x \in {\mathrm{SL}}(3,{\mathbb{Z}})$, if $\gamma_x$ has a nontrivial unipotent part, it must be unipotent.
By repeating the same argument as in \cite{einsiedler-margulis-venkatesh}, we can find $g' \in G$ satisfying $\gamma_x g'^{-1} v_H = g'^{-1} v_H$ and \[ \|g' - g_0\| \leqslant \| \gamma_x v_0 - v_0\| \|\gamma_x\|^{10} \leqslant e^{- (1-\eta_1)\ell} e^{10 \eta_1 \ell} \leqslant e^{-\ell/2}. \] Then $g' \gamma_x g'^{-1} \in A_0 H$. Since $\gamma_x$ is unipotent, we have $g' \gamma_x g'^{-1} \in H$. We claim that the lattice $g' {\mathbb{Z}}^3$ contains a nonzero vector $\vv p $ satisfying $\|\vv p\| \leqslant e^{2 \eta_1 \ell}$ and $\vv p \in {\mathbb{R}} \vv e_2 + {\mathbb{R}} \vv e_3$. In fact, we can find a basis $\{\vv p_1, \vv p_2, \vv p_3\}$ of $g'{\mathbb{Z}}^3$ such that $\|\vv p_i\| \leqslant \|g'\| \leqslant 2 \|g_0\| \leqslant 2 \beta^{-1}$ for $i=1,2,3$. Since $g' \gamma_x g'^{-1}$ is a unipotent element in $H$, its space of fixed vectors \[ V(g' \gamma_x g'^{-1}) := \{\vv p \in {\mathbb{R}}^3 : g' \gamma_x g'^{-1} \vv p = \vv p \} \] has dimension $2$. Therefore, $g' \gamma_x g'^{-1} \vv p_i \neq \vv p_i$ for some $i \in \{1,2,3\}$. On the other hand, we have $g' \gamma_x g'^{-1} \vv p_i \in g' {\mathbb{Z}}^3$. Thus, \[ g' \gamma_x g'^{-1} \vv p_i - \vv p_i \in g' {\mathbb{Z}}^3. \] Moreover, since $g' \gamma_x g'^{-1} \in H$, it is easy to see that $\vv p := g' \gamma_x g'^{-1} \vv p_i - \vv p_i \in {\mathbb{R}} \vv e_2 + {\mathbb{R}} \vv e_3$. Its norm is bounded by \[ \|\vv p\| \leqslant \|\vv p_i\| + \|g' \gamma_x g'^{-1}\| \|\vv p_i\| \leqslant \|\vv p_i\| + \|g'\| \|\gamma_x\| \|g'^{-1}\| \|\vv p_i\| \leqslant 2\beta^{-1} + e^{\eta_1 \ell} \beta^{-10} \leqslant e^{2 \eta_1 \ell}. \] Then for any $\eta_3 >0$, \[ \| a_0( \eta_3 \ell) \vv p \| = e^{- \eta_3 \ell/6} \|\vv p\| \leqslant e^{-(\eta_3/6 - 2 \eta_1)\ell}.\] Let us choose $\eta_3 = 24 \eta_1$. Then it is easy to see that \[ \|a_0(\eta_3 \ell) g' - a_0(\eta_3 \ell) g_0\| \leqslant 1, \] and $a_0( \eta_3 \ell) g' \Gamma \not\in X_{e^{-2\eta_1 \ell}}$. This implies that $a_0(\eta_3 \ell) g_0 \Gamma \not\in X_{\beta^{1/4}}$, which contradicts $g_0\Gamma = x_0 \in \mathcal F$. \par Let us consider \textbf{Case 2}. Let us take $\gamma = \gamma_x$ and $\gamma' = \gamma_{x'}$ not commuting. Using the same argument as in \textbf{Case 1} (again using the argument in \cite{einsiedler-margulis-venkatesh}), we can find $g' \in G$ satisfying $\gamma g'^{-1} v_H = g'^{-1} v_H$, $\gamma' g'^{-1} v_H = g'^{-1} v_H$ and \[ \|g' - g_0\| \leqslant e^{-\ell/2}. \] This implies that $g' \gamma g'^{-1}, g' \gamma' g'^{-1} \in A_0 H$. Then their commutator \[ g' \gamma \gamma' \gamma^{-1} \gamma'^{-1} g'^{-1} \in H.
\] Then we can use the same argument as in \textbf{Case 1} to deduce that $a_0(\eta_3 \ell) g_0\Gamma \not\in X_{\beta^{1/4}}$, leading to the same contradiction. \par This completes the proof of Proposition \ref{prop:initial-bound-dimension}. \end{proof} \subsection{Dimension Improvement} \label{subsec-dimension-improvement} \par This subsection is devoted to dimension improvement under the action of $a(t)$. \par Let us first introduce the following definition: \begin{defn} \label{def:dimension-control} Given a probability measure $\mu$ on $X$, we say that $\mu$ is $(d, \ell)$-good if for any $x_0 \in X$, \[\mu (Q_\ell B^{A_0 H}(\beta) x_0) \leqslant e^{- d \ell}.\] We say that $\mu$ is $(d, [\ell_1, \ell_2])$-good if it is $(d, \ell)$-good for any $\ell \in [\ell_1 , \ell_2]$. Given a subspace $V$ of $\mathfrak r_1 + \mathfrak r_2$ generated by elements from the standard basis of $\mathfrak r_1 + \mathfrak r_2$, namely $\{\vv v_1 , \vv v_2, \vv w_1 , \vv w_2\}$, we say that $\mu$ is $(d, \ell)$-good along $V$ if for any $x_0 \in X$, \[\mu (Q(V, \ell)x_0) \leqslant e^{-d \ell}.\] We say that $\mu$ is $(d, [\ell_1, \ell_2])$-good along $V$ if it is $(d, \ell)$-good along $V$ for any $\ell \in [\ell_1, \ell_2]$. \end{defn} \par According to this definition, Proposition \ref{prop:initial-bound-dimension} says that $\mu_{\mathcal F}$ is $(\alpha, [\delta_3 t, t])$-good. \par Starting from $s_0 = \delta_1 t/2$ and $d_0 = \alpha$, let us define $\{s_i, d_i\}$ by $s_{i+1} = s_i/2$ and $d_{i+1} = d_i + \epsilon_1$. \par Starting from $\mathcal F_0 = \mathcal F$, we will construct a sequence $\{\mathcal F_i : i \in \N\}$ such that every $\mathcal F_{i+1}$ is obtained by removing a small proportion from $a(s_i) \mathcal F_{i}$. \par In this subsection, we will prove that we can construct a sequence $\{\mathcal F_i: i \in \N\}$ such that $\mu_{\mathcal F_{i+1}}$ has better dimension control than $\mu_{\mathcal F_i}$. Similarly to Definition \ref{def:initial-construction}, by removing an exponentially small proportion, we can assume that for any $x \in {\mathrm{supp}} \mu_{\mathcal F_i}$, any $0 \leqslant \ell \leqslant \delta_2 t$ and any $ 0 \leqslant \ell' \leqslant t$, we have $a_0(\ell')a(-\ell)x \in X_{\beta^{1/4}}$. By repeating the proof of Proposition \ref{prop:initial-bound-dimension}, we have that $\mu_{\mathcal F_i}$ is $(\alpha, [\delta_3 t, t])$-good. \par Before stating the main statement of this subsection, let us introduce some notation. \par For notational simplicity, for $s' >0$, let us denote \begin{equation} \label{eq:omega-neighborhood}\Omega_{s'} := B^{A_0}(\beta) Q^H(2s')Q^{s'}_{3s'} ,\end{equation} and \begin{equation}\label{eq:theta-neighborhood}\Theta_{s'} := B^{A_0}(\beta) Q^H(2s') Q_{s'} .\end{equation} For $s' , \ell >0$, let us denote \begin{equation} \label{eq:sigma-neighborhood} \Sigma_{s', \ell} := B^{A_0 A}(\beta) B^{U^\ast}(\beta e^{-2s'}) B_{\vv r^-}(\beta e^{-s'-\ell})B_{\vv r^+}(\beta e^{s'-\ell}) B^U(\beta), \end{equation} and $\Sigma_{s'} :=\Sigma_{s', 0} $. \par We first prove the following lemma: \begin{lemma} \label{lm:initial-dimension-under-a-t} Let $\mu = a(s_i)_\ast \mu_{\mathcal F_i}$ and $s' = s_i/2$.
For any $\delta_3 t \leqslant \ell \leqslant t$ and any $x \in {\mathrm{supp}} \mu$, \[ \mu(\Sigma_{s', \ell} x) \leqslant e^{-2s'} e^{-\alpha \ell}.\] \end{lemma} \begin{proof} \par Note that \[ \mu(\Sigma_{s', \ell} x) = \mu_{\mathcal F_i} (a(-2s')\Sigma_{s', \ell} x ),\] and \[ a(-2s')\Sigma_{s', \ell} x = Q_\ell B^{A_0}(\beta) R^H(2s') x', \] where $x' = a(-2s') x$. Since $\mu_{\mathcal F_i}$ is the normalized measure on $U$-orbits, we have \[ \mu_{\mathcal F_i}(Q_\ell B^{A_0}(\beta) R^H(2s') x') \leqslant e^{-2s'} \mu_{\mathcal F_i}(Q_\ell B^{A_0 H }(\beta) x').\] By repeating the proof of Proposition \ref{prop:initial-bound-dimension} for $\mathcal F_i$ we have \[ \mu_{\mathcal F_i}(Q_\ell B^{A_0 H }(\beta) x') \leqslant e^{-\alpha \ell}, \] which concludes the proof. \end{proof} We now introduce a Kakeya-type model to analyze the dimension change along $U$-orbits. \begin{defn} \label{def:kakeya-model} \par Let $s'>0$ and let $\mu$ be a probability measure on $X$ obtained by taking the normalized measure on a subset of a $U$-orbit consisting of a union of $U$-orbit pieces of length $e^{2s'}$. For $\Sigma(x)= \Sigma_{s'}x \subset X$, let us cut $\Sigma(x)$ into small pieces, each of which is of the form $ \Omega(y) = \Omega_{s'}y$. For each piece $\Omega(y)$, let us write \[ y =a_0(y) a(y) \exp(r^\ast(y) e^{-2s'} \mathfrak u^\ast) \exp(\vv w^-_y) \exp(\vv w^+_y) u(r (y) ) x,\] where $a_0(y) \in B^{A_0}(\beta)$, $a(y) \in B^A(\beta)$, $\vv w^-_y = v_{2, y} \vv v_2 + w_{2, y} \vv w_2 \in \vv r^- $, $\vv w^+_y = v_{1, y} \vv v_1 + w_{1, y} \vv w_1 \in \vv r^+$, and $ r^{\ast}(y), r(y) \in [\beta]$. By replacing $y$ with an element in $\Omega (y)$, we may ignore $a_0(y) a(y) \exp(r^\ast(y) e^{-2s'} \mathfrak u^\ast)$ and write \[ y = \exp(\vv w^-_y) \exp(\vv w^+_y) \exp(r (y) \mathfrak u ) x. \] Then we assign to $\Omega(y)$ a finite curve \[ \mathcal L(y) := \bigl\{ \bigl(f_y(r), f_{v, y}(r), f_{w, y}(r)\bigr) : r \in [e^{2s'}] \bigr\}, \] where $f_y$, $f_{v, y}$ and $f_{w, y}$ are given as follows. Let us write \[ u(r)\exp( \vv w^-_y ) \exp(\vv w^+_y) = \exp( \vv f_{-, y}(r) ) \exp(\vv f_{+, y}(r)) \exp (f_y(r) \mathfrak u ), \] where $\vv f_{-, y} (r) \in \vv r^-$ and $\vv f_{+, y} (r) \in \vv r^+$. Then $f_{v, y}$ and $f_{w , y}$ are the $\vv v_1$- and $\vv w_1$-coordinates, respectively, of $\vv f_{+, y}$. In fact, it is straightforward to check that \begin{equation} \label{eq:fvy} f_{v, y}(r) = \frac{2r v_{2, y}}{2 + r v_{2, y} w_{2, y}} + v_{1, y}, \end{equation} \begin{equation} \label{eq:fwy} f_{w, y}(r) = \frac{-2r w_{2, y}}{2 - r v_{2, y} w_{2, y}} + w_{1, y}, \end{equation} and \begin{equation} \label{eq:fy} f_y(r) = \frac{2r + 2r w_{1,y} v_{2, y}}{2 + r v_{2, y} w_{2, y}} + \frac{w_{1,y}v_{1,y} - f_{v, y}(r)f_{w, y}(r)}{2}. \end{equation} \par Let $\mathcal T(y)$ denote the $\beta e^{-s'}$-neighborhood of $\mathcal L(y)$. We then assign to $\mathcal T(y)$ the weight \[\mathcal W(y) = \mu(\Omega(y)).\] It is easy to see that $\Omega(y_1), \Omega(y_2)$ are from the same $\Theta(y)$ if and only if $\mathcal T(y_1)$ and $\mathcal T(y_2)$ intersect at $r = 0$. \par For $\mathcal T(y_1)$ and $\mathcal T(y_2)$, let us define $\Delta(y_1, y_2)$ as follows: by replacing $y_2$ with an element in $\Omega(y_2)$, we can write \[ y_2 = \exp(\vv w^-) \exp(\vv w^+) u(r) y_1,\] where $\vv w^- = v_2 \vv v_2 + w_2 \vv w_2 \in \vv r^-$. Then we define \begin{equation} \label{eq:delta-y_1-y_2} \Delta(y_1, y_2) = (v_2, w_2 ). \end{equation} We call the collection of tubes $\{\mathcal T(y)\}$ with weights $\{\mathcal W(y)\}$ the Kakeya-type model for $(\mu, \Sigma(x))$. \end{defn}
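As a quick consistency check on \eqref{eq:fvy}--\eqref{eq:fy} (a sanity verification only, which uses nothing beyond the formulas themselves), note that setting $r=0$ gives $u(0)=\mathrm{id}$, so we must recover $\vv f_{-, y}(0)=\vv w^-_y$, $\vv f_{+, y}(0)=\vv w^+_y$ and $f_y(0)=0$. Indeed, \[ f_{v, y}(0) = v_{1, y}, \qquad f_{w, y}(0) = w_{1, y}, \qquad f_y(0) = \frac{w_{1,y}v_{1,y} - v_{1, y} w_{1, y}}{2} = 0, \] so each curve $\mathcal L(y)$ starts at the point $(0, v_{1, y}, w_{1, y})$, which is exactly the point $\vv p_y$ appearing in Definition \ref{def:structure-randomness} below.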
The following lemma describes the connection between the dimension change along $U$-orbits and the Kakeya-type model: \begin{lemma} \label{lm:dimension-kakeya} \par Let us fix $\mu$, $s' >0$ and $\Sigma(x) = \Sigma_{s'} x $ as above. Then for any $\Omega(y_1)$ and $\Omega(y_2)$ from $\Sigma(x)$, the following two statements are equivalent: \par (1). There exist $L_1, L_2 \in [e^{2s'}]$ with $|L_1- L_2| \leqslant \beta |L_1|$ such that $u(L_1)\Omega(y_1)$ and $u(L_2) \Omega(y_2)$ are contained in the same neighborhood of the form $\Theta(z)$; \par (2). $\mathcal T(y_1)$ intersects $\mathcal T(y_2)$ at $L = f_{y_1}(L_1) = f_{y_2}(L_2)$, where $|L-L_1| \leqslant \beta |L_1|$. \par Moreover, if one of the above holds, then for any $|L'_1| \leqslant e^{s'} \|\Delta(y_1, y_2)\|^{-1}$, there exists $L'_2$ with $|L'_1 - L'_2 | \leqslant \beta |L'_1|$ such that $u(L_1+L'_1)\Omega(y_1)$ and $u(L_2+L'_2) \Omega(y_2)$ are contained in the same neighborhood of the form $\Theta_{s'}z'$. \end{lemma} \begin{proof} \par Without loss of generality, we can assume that $y_1 = x$ and denote $y_2$ by $y$. \par By replacing $y$ with an element in $\Omega(y)$, we can write \[ y = \exp(\vv w^-_y) \exp(\vv w^+_y) u(r(y)) x.\] Suppose statement (1) holds for $L_1, L_2$. Let us consider $u(L_2) y$: \begin{align*} u(L_2)y &= u(L_2) \exp(\vv w^-_y) \exp(\vv w^+_y) u(r(y)) x \\ &= \exp(\vv f_{-, y}(L_2)) \exp(\vv f_{+, y}(L_2)) u(f_y(L_2) + r(y)) x. \end{align*} Note that \[ \| \vv f_{-, y}(L_2) \| \leqslant \beta e^{-s'}.\] We then have \[ \| \vv f_{+, y}(L_2) \| \leqslant \beta e^{-s'}, \] which implies that $\mathcal T(y)$ and $\mathcal T(x)$ intersect at $L = f_y(L_2)$. This proves that statement (2) holds. \par Using a similar argument, we can prove that (2) implies (1), as well as the second part of the lemma. \end{proof} \par Let us introduce the following structure-randomness decomposition of a measure $\mu$: \begin{defn} \label{def:structure-randomness} \par Let $s'>0$ and let $\mu$ be a probability measure on $X$ obtained by taking the normalized measure on a subset of a $U$-orbit consisting of a union of $U$-orbit pieces of length $e^{2s'}$. For a neighborhood $\Sigma(x) = \Sigma_{s'} x$, let us consider its Kakeya-type model. To handle our case, let us assume that the weight of every tube $\mathcal T(y)$ satisfies $\mathcal W(y) \leqslant e^{-2 d_i s' } e^{-2s'}$. Then every neighborhood $\Theta(z) = \Theta_{s'}z \subset \Sigma(x)$ corresponds to a collection of tubes passing through $\vv p_y = (0, v_{1, y}, w_{1,y})$. Let us fix a constant $C_1 \ge 1$ which will be determined later.
For a fixed tube $\mathcal T(z)$ and a point $\vv p_y$, all tubes passing through $\vv p_y$ and intersecting $\mathcal T(z)$ determine a surface, denoted by $\mathcal P(\mathcal T(z), y)$. Moreover, there is a corresponding curve $\mathcal C(\mathcal T(z), y)$ in ${\mathbb{R}} \vv v_2 + {\mathbb{R}} \vv w_2 = {\mathbb{R}}^2$ such that if $\vv w^{-} \in \mathcal C(\mathcal T(z), y)$, then $\mathcal T(\exp(\vv w^{-}) y) \in \mathcal P(\mathcal T(z), y)$. It is easy to see that the sum of weights of tubes passing through $\vv p_y$ and intersecting $\mathcal T(z)$ is at most \[ e^{2s'} e^{-2 d_i s' } e^{-2s'} = e^{-2 d_i s'}.\] A surface $\mathcal P(\mathcal T(z), y)$ is called highly concentrated if the sum of weights of tubes passing through $\vv p_y$ and intersecting $\mathcal T(z)$ is at least $e^{(2-7C_1\epsilon_1)s'} e^{-2 d_i s' } e^{-2s'}$. Let us take all orbits corresponding to tubes contained in highly concentrated surfaces and define a measure $\mu_{\mathcal P}$. Then $\mu$ admits the following decomposition: \[ \mu = \nu_{\mathfrak r} \mu_{\mathfrak r} + \nu_{\mathcal P} \mu_{\mathcal P}. \] Let us call $\mu_{\mathfrak r}$ the random component of $\mu$ and $\mu_{\mathcal P}$ the structured component of $\mu$. \end{defn} \par Recall that starting from $\mathcal F_0 = \mathcal F$, we want to construct a sequence $\{(\mathcal F_i, s_i)\}$ such that $\mathcal F_{i+1}$ is a subset of $a(s_i) \mathcal F_{i}$ with a better dimension control. \par The following proposition is the goal of this subsection: \begin{prop} \label{prop:improve-dimension} \par Suppose that $\mathcal F_i$ is $(d_i, s_i)$-good. Then $\mu = \mu_{a(s_i)\mathcal F_i}$ admits a decomposition \[ \mu = \nu_{\mathcal R_i} \mu_{\mathcal R_i} + \nu_{\mathcal F_{i+1}} \mu_{\mathcal F_{i+1}} + \nu_{\mathcal P} \mu_{\mathcal P}, \] such that $\nu_{\mathcal R_i} = O(\beta)$, $\mu_{\mathcal P}$ is the structured part of $\mu$ as in Definition \ref{def:structure-randomness}, and $\mu_{\mathcal F_{i+1}}$ is $(d_i+\epsilon_1, s_{i+1})$-good. Recall that $s_{i+1} = s_i/2$. \end{prop} \begin{proof} \par Let us denote $s' = s_{i+1}$. We claim that our hypothesis on $\mathcal F_i$ implies that for any $\Omega(x) = \Omega_{s_i} x$, we have $\mu(\Omega(x)) \leqslant e^{-2 s'} e^{-2 d_i s'}$. In fact, \[ \mu(\Omega(x)) = \mu_{\mathcal F_i}(a(-2s') \Omega(x)). \] Note that \[ a(-2s') \Omega(x) = Q_{2s'} B^{A_0}(\beta) R^H(2s') x', \] where $x' = a(-2s') x$. Since $\mu_{\mathcal F_i}$ is a normalized measure on $U$-orbits, we have \[ \mu_{\mathcal F_i} (Q_{2s'} B^{A_0}(\beta) R^H(2s') x') \leqslant e^{-2s'} \mu_{\mathcal F_i} (Q_{2s'} B^{A_0 H }(\beta) x'). \] By our hypothesis, we have \[ \mu_{\mathcal F_i} (Q_{2s'} B^{A_0 H }(\beta) x') \leqslant e^{-2d_i s'}, \] which implies the claim. \par Let us write \[ \mu = \nu_{\mathfrak r} \mu_{\mathfrak r} + \nu_{\mathcal P} \mu_{\mathcal P},\] as in Definition \ref{def:structure-randomness}. \par For $\nu_{\mathfrak r} \mu_{\mathfrak r}$, there are two cases: \par \textbf{Case 1.} $\nu_{\mathfrak r} < \beta$; \par \textbf{Case 2.} $\nu_{\mathfrak r} \ge \beta$. \par For \textbf{Case 1}, we can put \[ \mathcal R_i = {\mathrm{supp}} \mu_{\mathfrak r} \text{ and } \mathcal F_{i+1} = \emptyset, \] and we are done. \par Now let us handle \textbf{Case 2}, namely, $\nu_{\mathfrak r} \ge \beta$.
In this case, by the definition of $\mu_{\mathfrak r}$, we have that for any neighborhood $\Sigma(x) = \Sigma_{s'}x$, its corresponding Kakeya-type model does not have any highly concentrated surfaces. \par For every neighborhood $\Theta(z) = \Theta_{s'} z$, we call it a small neighborhood if \[ \mu_{\mathfrak r} (\Theta(z)) \leqslant e^{-2s'} e^{- (d_i + \epsilon_1) s'}, \] otherwise we call it a big neighborhood. \par Let us define $\mathcal F_{i+1}$ to be the union of all orbits contained in small neighborhoods, and define $\mathcal R_i$ to be the rest. Then it suffices to show that $\mu_{\mathfrak r} (\mathcal R_i) < \beta$. \par For contradiction, we assume that $\mu_{\mathfrak r} (\mathcal R_i) \ge \beta$. \par For every $y \in \mathcal R_i$, if \[ |u([e^{2s'}])y \cap \mathcal R_i| \ge \beta^2 e^{2s'}, \] we will call it a good point, otherwise we call it a bad point. Then it is easy to see that the collection of bad points has $\mu_{\mathfrak r}$-measure $\leqslant \beta^2$. Therefore, the collection of good points, denoted by $\mathcal R'_{i}$, has $\mu_{\mathfrak r}$-measure $\ge \beta$. \par Now let us cut $X$ into pieces, each of which is of the form \[ \Sigma(x) := \Sigma_{s'} x. \] We call a piece $\Sigma(x)$ good if \[ \mu_{\mathfrak r}(\Sigma(x)) \ge \beta^{100} e^{-2s'} \] and \[ \mu_{\mathfrak r}(\mathcal R'_i \cap \Sigma(x)) \ge \beta^2 \mu_{\mathfrak r}(\Sigma(x)), \] otherwise we call it bad. \par Now let us fix a good $\Sigma(x)$. Let us consider its corresponding Kakeya-type model and calculate the following weighted intersection number: we take the sum of $\mathcal W(y_1) \mathcal W(y_2)$ over all pairs $\mathcal T(y_1)$ and $\mathcal T(y_2)$ intersecting each other with $y_1 \in \mathcal R_i$ or $y_2 \in \mathcal R_i$, and denote it by $\mathcal S(\Sigma(x))$. \par Now for a fixed $\mathcal T(y)$ with $y \in \mathcal R'_i$, we have that \[ |u([e^{2s'}])y \cap \mathcal R'_i | \ge \beta^2 e^{2s'}. \] For every $\tilde{y} \in u([e^{2s'}])y \cap {\mathrm{supp}} \mu$, we have \[\mu(\Theta(\tilde{y})) \ge e^{-2s'} e^{-(d_i +\epsilon_1)s'}.\] By Lemma \ref{lm:dimension-kakeya}, every $\Omega(\tilde{z}) \subset \Theta(\tilde{y})$ corresponds to some $\Omega(z)$ from $\Sigma(x)$ with $\mathcal T(z)$ intersecting $\mathcal T(y)$. For now let us assume that there exists an absolute constant $C_1 \ge 1$ such that \begin{equation} \label{eq:assumption-1} \| \Delta(y, z) \| \ge e^{-(1 + C_1 \epsilon_1 ) s'}. \end{equation} We will explain how to remove this assumption in Remark \ref{rmk:remove-condition-1}. Let this $C_1$ be the constant in Definition \ref{def:structure-randomness}. \par Noting that $\Omega(\tilde{z})$ and $\Omega(z)$ have the same $\mu_{\mathfrak r}$-measure, by running over all $\Omega(\tilde{z}) \subset \Theta(\tilde{y})$, we have that the contribution from $\Theta(\tilde{y})$ to $\mathcal S(\Sigma(x))$ is at least \[ \mathcal W(y) e^{-2s'} e^{-(d_i +\epsilon_1)s'}. \] By assumption \eqref{eq:assumption-1}, each such intersection can cover at most $e^{C_1\epsilon_1 s'}$ of the whole $u([e^{2s'}])y$ orbit. Therefore, from the orbit we can find at least $\beta^2 e^{(2-C_1 \epsilon_1)s'} \ge e^{(2-2 C_1 \epsilon_1)s'} $ different $\tilde{y}$'s.
Therefore, we have that the contribution from the whole orbit $u([e^{2s'}]) y$ to $\mathcal S(\Sigma(x))$ is at least \[ \mathcal W(y) e^{-2s'} e^{-(d_i +\epsilon_1)s'} e^{(2-2C_1\epsilon_1) s' } \ge \mathcal W(y) e^{-(d_i +3C_1\epsilon_1)s'}.\] By running over all $y \in \mathcal R'_i \cap \Sigma(x)$, we get \[ \mathcal S(\Sigma(x)) \ge \tilde{\mathfrak m} e^{-4 C_1 \epsilon_1 s'} e^{-d_i s'}, \] where $\tilde{\mathfrak m} = \mu_{\mathfrak r}(\Sigma(x))$. \par On the other hand, let us get an upper bound on $\mathcal S(\Sigma(x))$. \par For any $\mathcal T(z)$ and $\Theta(y)$ with $y \in \mathcal R_i$, let us consider all $\mathcal T(y')$ passing through $\vv p_y$ (cf. Definition \ref{def:structure-randomness}) and intersecting $\mathcal T(z)$. Note that all such $\mathcal T(y')$'s determine a surface $\mathcal P$ containing $\vv p_y$ and $\mathcal L(z)$. Since $\mu_{\mathfrak r}$ does not have any highly concentrated surfaces, we have that the contribution to $\mathcal S(\Sigma(x))$ from $\mathcal T(z)$ and all such $\mathcal T(y')$ is $$\leqslant \mathcal W(z) e^{(2-7C_1\epsilon_1)s'} e^{-2 d_i s'} e^{-2s'} = \mathcal W(z) e^{-7C_1\epsilon_1 s'} e^{-2d_i s'}. $$ Since $y \in \mathcal R'_i$, the $\mu_{\mathfrak r}$-measure of each $\Theta(y)$ is at least $e^{-(d_i + \epsilon_1)s'} e^{-2s'}$, and the measure of $\Sigma(x)$ is at most $e^{-2s'}$, so there are at most $e^{d_i s'} e^{\epsilon_1 s'}$ different $\Theta(y)$ from $\Sigma(x)$. Thus the total contribution to $\mathcal S(\Sigma(x))$ with a fixed $\mathcal T(z)$ and all possible $\Theta(y)$ is at most \[ \mathcal W(z) e^{-7C_1\epsilon_1 s'} e^{-2d_i s'} e^{d_i s'} e^{\epsilon_1 s'} \leqslant \mathcal W(z) e^{-6 C_1 \epsilon_1 s'} e^{-d_i s'}.\] By running over all $\mathcal T(z)$'s, we get \[ \mathcal S(\Sigma(x)) \leqslant \tilde{\mathfrak m} e^{-6 C_1 \epsilon_1 s'} e^{-d_i s'}. \] This implies that \[ \tilde{\mathfrak m} e^{-4 C_1 \epsilon_1 s'} e^{-d_i s'} \leqslant \tilde{\mathfrak m} e^{-6 C_1 \epsilon_1 s'} e^{-d_i s'}, \] which leads to a contradiction. This completes the proof. \end{proof} \begin{remark} \label{rmk:remove-condition-1} \par Now let us explain how to remove assumption \eqref{eq:assumption-1}. \par For contradiction, let us assume that the assumption does not hold. Then using the dyadic pigeonholing method, we have that there exists $\xi \ge C_1 \epsilon_1$ such that we can find a subset $\tilde{\mathcal R}_i \subset \mathcal R_i$ of proportion $\ge \beta^2$ such that for each $\Theta(y) \subset \tilde{\mathcal R}_i$, the union of the $\Omega(z) \subset \Theta(y)$ with $\|\Delta(y, z)\| \in [e^{-(1+\xi)s'}, 2e^{-(1+\xi)s'}]$ has measure $\ge e^{-2s'} e^{-(d_i + \epsilon_1)s'}$. Let us denote this union by $\Xi (y)$. \par Now in the definition of the summation $\mathcal S(\Sigma(x))$, let us add one more condition: \begin{equation} \label{eq:remove-condition-1} \|\Delta(y, z)\| \in [e^{-(1+\xi)s'}, 2e^{-(1+\xi)s'}]. \end{equation} Repeating the argument in the proof of Proposition \ref{prop:improve-dimension}, we have that \[ \mathcal S(\Sigma(x)) \ge \tilde{\mathfrak m} e^{- (\xi + \epsilon_1)s'} e^{-d_i s'}. \] \par On the other hand, let us get an upper bound on $\mathcal S(\Sigma(x))$.
In fact, for a fixed $\mathcal T(z)$, any $\mathcal T(y)$ intersecting $\mathcal T(z)$ subject to condition \eqref{eq:remove-condition-1} is from some $\Xi(y')$ which is contained in $$ \Psi(z) := \Sigma_{s' , \xi s'} z.$$ Noting that $i \leqslant N_1$, we have $$s' = 2^{-i-1} s_0 \ge 4^{-N_1} \delta_1 t \ge \epsilon t. $$ Thus, \[\xi s' \ge C_1 \epsilon_1 s' \ge C_1 \epsilon_1 \epsilon t \ge 10 \alpha^{-1} 10^{-5} \alpha \theta \epsilon t = 10^{-4} \theta \epsilon t \ge \delta_3 t . \] By Lemma \ref{lm:initial-dimension-under-a-t}, we have \[ \mu_{\mathfrak r} (\Psi(z)) \leqslant e^{-2 s'} e^{-\xi \alpha s'}.\] Noting that each $\Xi(y')$ has measure $\ge e^{-2s'} e^{-(d_i + \epsilon_1)s'}$, we get that there are at most \[ e^{-2s'} e^{- \xi \alpha s'} (e^{-2s'} e^{-(d_i + \epsilon_1)s'})^{-1} = e^{d_i s'} e^{-(\alpha \xi - \epsilon_1)s'} \] possible $\Xi(y')$'s. For each $\Xi(y')$, there are at most $e^{(2-\xi)s'}$ different $\mathcal T(y)$'s intersecting $\mathcal T(z)$. Therefore, by running over all possible $\mathcal T(z)$'s, we get \begin{align*} \mathcal S(\Sigma(x)) &\leqslant \tilde{\mathfrak m} e^{(2-\xi)s'} e^{-2 s'} e^{-2 d_i s'} e^{d_i s'} e^{-(\alpha \xi - \epsilon_1)s'} \\ & = \tilde{\mathfrak m} e^{-(\xi + \alpha \xi - \epsilon_1)s'} e^{- d_i s'}. \end{align*} The upper bound is smaller than the lower bound whenever $C_1 \ge 5 \alpha^{-1}$ (which is true since $C_1 = 10 \alpha^{-1}$), which leads to a contradiction. \end{remark} \section{Structured component} \label{sec-structured-component} \par This section is devoted to the study of the structured component $\mu_{\mathcal P}$ of $\mu$. \par First note that by repeating the argument in the proof of Proposition \ref{prop:improve-dimension}, we can easily prove that (without assuming the absence of highly concentrated surfaces) $\mu_{\mathfrak r}$ and $\mu_{\mathcal P}$ are $(d_i - \epsilon_1, s_{i+1})$-good. Therefore, if $\mu_{\mathcal F_i}$ is $(d_i + 2 \epsilon_1, s_i)$-good, then we are done. Thus, later in this paper, we can assume that for any $x \in {\mathrm{supp}} \mu_{\mathcal F_i}$, \begin{equation} \label{eq:exact-dimension-assumption} \mu_{\mathcal F_i}(Q_{s_i} B^{A_0 H}(\beta)x)\ge e^{-(d_i + 2 \epsilon_1) s_i}. \end{equation} Under this assumption, we have that for any $x \in {\mathrm{supp}} \mu$, \begin{equation} \label{eq:upper-omega} \mu(\Omega(x)) \ge e^{-2s'} e^{-2(d_i + 2 \epsilon_1)s'},\end{equation} and for any $x \in X$, \begin{equation} \label{eq:upper-theta} \mu(\Theta(x)) \leqslant e^{-2s'} e^{-(d_i - \epsilon_1)s'}. \end{equation} Thus, if $d_i < 2 - 7 C_1 \epsilon_1$, then we will not have any highly concentrated surfaces. Therefore, later in this paper, we can assume that \begin{equation} \label{eq:dim-assumption} d_i \ge 2 - 7 C_1 \epsilon_1. \end{equation} \par For $\mu_{\mathcal P}$, we have the following statement: \begin{prop} \label{prop:structured-component} \par If $d_i \ge 2 - 7 C_1 \epsilon_1$, then $\mu_{\mathcal P}$ can be decomposed as \[ \mu_{\mathcal P} = \nu_1 \mu_{\mathcal P, 1} + \nu_2 \mu_{\mathcal P, 2},\] where $\mu_{\mathcal P, 1}$ is $(d_i + \epsilon_1, s_{i+1})$-good, and $a(2s')_\ast \mu_{\mathcal P, 2}$ is effectively equidistributed in $X$.
\end{prop} \par We first define the following property for highly concentrated surfaces: \begin{defn} \label{def:integrable-surface} \par Given a tube $\mathcal T(z)$ and $\Theta(y)$, the surface $\mathcal P = \mathcal P(\mathcal T(z), y)$ is called integrable if we can find at least $e^{(4 - 120 C_1 \epsilon_1)s'}$ different tubes $\mathcal T(z'')$ contained in $\mathcal P$. \end{defn} \par We will need the following lemma about the structure of the Lie algebra of ${\mathrm{SL}}(3,{\mathbb{R}})$. It can be proved by applying \eqref{eq:fvy} and \eqref{eq:fwy}. We will omit the details. \begin{lemma} \label{lm:integrable-surface} Any integrable surface is of the form \[ \mathcal P_1 := \{ (t, 0, s): (t,s) \in {\mathbb{R}}^2 \} ,\] or \[ \mathcal P_2 := \{ (t, s, 0) : (t,s) \in {\mathbb{R}}^2 \}.\] \end{lemma} \par $\mathcal P_1$ corresponds to the subgroup \[ H_1 := \left\{ \begin{bmatrix} 1 & \\ \vv v & h \end{bmatrix}: \vv v \in {\mathbb{R}}^2, h \in {\mathrm{SL}}(2, {\mathbb{R}}) \right\} \equiv {\mathrm{SL}}(2, {\mathbb{R}}) \ltimes {\mathbb{R}}^2, \] and $\mathcal P_2$ corresponds to the subgroup \[ H_2 := \left\{ \begin{bmatrix} 1 & \vv w \\ & h \end{bmatrix}: \vv w \in {\mathbb{R}}^2, h \in {\mathrm{SL}}(2, {\mathbb{R}}) \right\} \equiv {\mathrm{SL}}(2, {\mathbb{R}}) \ltimes {\mathbb{R}}^2. \] \par We are now equipped to prove Proposition \ref{prop:structured-component}. \begin{proof}[Proof of Proposition \ref{prop:structured-component}] Let us decompose $\mu_{\mathcal P}$ as \[ \mu_{\mathcal P} = \nu_{\mathrm{int}} \mu_{\mathrm{int}} + \nu_{\mathrm{non-int}} \mu_{\mathrm{non-int}},\] where $ \mu_{\mathrm{int}}$ denotes the component supported on integrable surfaces, and $\mu_{\mathrm{non-int}}$ denotes the rest. \par For $\mu_{\mathrm{non-int}}$, let us prove that it is $(d_i + \epsilon_1, s')$-good. \par For contradiction, let us assume that there is a subset $\mathcal R \subset {\mathrm{supp}} \mu_{\mathrm{non-int}}$ with measure $\ge \beta$ such that for each $x \in \mathcal R$, \[ \mu_{\mathrm{non-int}} (\Theta(x)) \ge e^{-2s'} e^{-(d_i + \epsilon_1) s'}.\] By repeating the argument in the proof of Proposition \ref{prop:improve-dimension}, we have that, in order to avoid a contradiction, there must be at least \[ e^{ 2 d_i s'} e^{d_i s'} e^{-14C_1 \epsilon_1 s'} \] highly concentrated surfaces, counting multiplicity. Therefore, for almost every $\Theta(y)$, counting multiplicity, there are at least $e^{ 2 d_i s'} e^{-15C_1 \epsilon_1 s'}$ highly concentrated surfaces passing through $\vv p_y$. By our assumption, every highly concentrated surface can be counted at most $e^{(4 - 120C_1 \epsilon_1) s'}$ times. Therefore, there are at least \[ e^{ 2 d_i s'} e^{-15C_1 \epsilon_1 s'} e^{-4s' + 120C_1 \epsilon_1 s'} = e^{2(d_i - 2) s'} e^{105 C_1 \epsilon_1 s'}\] different highly concentrated surfaces passing through $\vv p_y$. Every highly concentrated surface gives a corresponding curve in ${\mathbb{R}} \vv w_2 + {\mathbb{R}} \vv v_2$. Note that every curve covers at least $e^{2s' - 15C_1 \epsilon_1 s'}$ different $\Omega(z)$'s. Therefore, counting multiplicity, they cover at least \[ e^{2(d_i -1 )s'} e^{90 C_1 \epsilon_1 s'} \] $\Omega(z)$'s. We claim that they cover at least \[ e^{d_i s'} e^{20 C_1 \epsilon_1 s'} \] different $\Omega(z)$'s. In fact, if every $\Omega(z)$ is covered by at most \[ e^{ (d_i-2) s'} e^{70C_1 \epsilon_1 s'} \] different highly concentrated curves, then we are done.
Now assume that some $\Omega(z)$ is covered by at least \[ e^{(d_i-2) s'} e^{70 C_1 \epsilon_1 s'} \] different curves. Note that these curves cover at least \begin{align*} e^{2 s' - 15 C_1 \epsilon_1 s'} e^{(d_i-2) s'} e^{70 C_1 \epsilon_1 s'} &= e^{ d_i s'} e^{ 55 C_1 \epsilon_1 s'} \end{align*} different $\Omega(z')$'s. This proves the claim. \par This implies that \[ \mu_{\mathrm{non-int}}(\Theta(x)) \ge e^{-2s'} e^{-(d_i - 20 C_1 \epsilon_1) s'}, \] which contradicts \eqref{eq:upper-theta}. This proves that $\mu_{\mathrm{non-int}}$ is $(d_i+\epsilon_1, s')$-good. \par For $\mu_{\mathrm{int}}$, first note that $\mu_{\mathrm{int}}$ can be further decomposed as \[ \mu_{\mathrm{int}} = \sum_j \nu_{\mathrm{int}, j} \mu_{\mathrm{int}, j}, \] where each $\mu_{\mathrm{int}, j}$ is supported either on $\mathcal P_1$ or on $\mathcal P_2$. \par Let us fix $\bar \mu_j = \mu_{\mathrm{int}, j}$. Without loss of generality, let us assume that it is supported on $\mathcal P_1$. \par Let us cut $X$ into small pieces, each of which is of the form \[\Upsilon(x) := Q^0_{s'} B^{A_0}(\beta) Q^H (2s') x .\] Then for any $x \in {\mathrm{supp}} \bar \mu_j$, the restriction of $\bar \mu_j$ to $\Upsilon(x)$, denoted by $\bar \mu_{j, x}$, is supported on \[ \Xi(x) := Q(V, s') B^{A_0}(\beta)Q^H(2s') x,\] where $V = {\mathbb{R}} \vv v_1 + {\mathbb{R}} \vv v_2 + {\mathbb{R}} \vv w_2 $. \par We call $\Xi(x)$ a good neighborhood if $\bar \mu_j (\Xi(x)) \ge e^{-(1 + 15 C_1 \epsilon_1)s'}$, otherwise we call it a bad neighborhood. Then by the definition of integrable surfaces, we have that the total measure of the union of bad neighborhoods is $\leqslant \beta$. Therefore we can remove all bad neighborhoods and assume that $\Xi(x)$ is good. Then for any $\Theta(z) \subset \Xi(x)$, \[ \bar \mu_{j, x} (\Theta(z)) \leqslant e^{-(1 - 75 C_1 \epsilon_1)s'}. \] Note that the action of $a(2s')$ on $\Xi(x)$ is $\beta$-close to the action of $a_1(s')$. Then we can apply Proposition \ref{prop:high-dimension-to-equidistribution-a1} to conclude that $a(2s')_\ast \bar \mu_{j,x}$ is effectively equidistributed in $X$. If $\bar \mu_{j,x}$ is supported on $\mathcal P_2$, then we can apply Proposition \ref{prop:high-dimension-to-equidistribution-b} with $b(s')$ to conclude the same statement. Repeating this argument for every $x$ and $j$, we conclude that $a(2s')_\ast \mu_{\mathrm{int}} $ is effectively equidistributed. \par Letting $\mu_{\mathcal P, 1} = \mu_{\mathrm{non-int}}$ and $\mu_{\mathcal P, 2} = \mu_{\mathrm{int}}$, we complete the proof. \end{proof} \section{Proof of the main theorem} \label{sec-main-proof} \par We are now equipped to prove Theorem \ref{thm:main-thm}. \begin{proof}[Proof of Theorem \ref{thm:main-thm}] \par By Proposition \ref{prop:initial-bound-dimension}, we have that $\mu_{\mathcal F_0}$ is $(d_0, s_0)$-good. Applying Propositions \ref{prop:improve-dimension} and \ref{prop:structured-component}, we get that for $d_i \leqslant 2 - 7 C_1 \epsilon_1$, $\mu_{\mathcal F_{i+1}} = a(s_i)_\ast \mu_{\mathcal F_i}$ is, modulo removing an exponentially small proportion, $(d_{i+1}, s_{i+1})$-good, where $d_{i+1} = d_i + \epsilon_1$ and $s_{i+1} = s_i/2$.
For $d_i \ge 2 - 7 C_1 \epsilon_1$, by Proposition \ref{prop:structured-component}, $\mu_{\mathcal F_{i+1}}$ might admit a structured component $\mu_{i+1, \mathrm{int}}$ such that $a(s_i)_\ast \mu_{i+1, \mathrm{int}}$ is effectively equidistributed. Noting that $\mu_{i+1, \mathrm{int}}$ is a component of $a(t- s_i) u([1])x$, we are done for this part. For the rest of $\mu_{\mathcal F_{i+1}}$, we can continue this inductive construction until $d_i \ge 4 - \theta$. Then by applying Proposition \ref{prop:high-dimension-to-equidistribution-a}, we have that $a(2 s_i)_\ast \mu_{\mathcal F_i}$ is effectively equidistributed. Noting that $\mu_{\mathcal F_i}$ is a component of $a(t - 2 s_i) u([1])x$, we get the desired result for this part. \par This completes the proof. \end{proof}
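\begin{remark} For orientation, let us record the elementary bookkeeping behind the induction above; this is only a rough count extracted from the parameters already fixed, not an additional claim. Since $d_0 = \alpha$, $d_{i+1} = d_i + \epsilon_1$, $s_0 = \delta_1 t/2$ and $s_{i+1} = s_i/2$, we have \[ d_i = \alpha + i \epsilon_1, \qquad s_i = 2^{-i-1} \delta_1 t, \] so the threshold $d_i \ge 4 - \theta$ is reached after at most $\lceil (4 - \theta - \alpha)/\epsilon_1 \rceil$ steps, a number independent of $t$. Moreover, since $i \leqslant N_1$ throughout, every scale used above satisfies $s' = 2^{-i-1} s_0 \ge 4^{-N_1} \delta_1 t \ge \epsilon t$, which is precisely the bound invoked in Remark \ref{rmk:remove-condition-1}. \end{remark}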
\section{Introduction}\label{intro} Let $\Bbbk$ be an algebraically closed field and denote by $\PP^n$ the projective space of dimension $n$ over $\Bbbk$. The set $\birn$ of birational maps $f:\PP^n\tor\PP^n$ is the so-called Cremona group of $\PP^n$. For an element $f\in\bir(\PP^n)$ there exist homogeneous polynomials of the same degree $f_0,\ldots, f_n\in \Bbbk[x_0,\ldots,x_n]$, without nontrivial common factors, such that if $\x=(x_0:\cdots:x_n)$ is not a common zero of the $f_i$'s, then $f(\x)=\bigl(f_0(\x):\cdots:f_n(\x)\bigr)$. The (algebraic) degree of $f$ is the common degree of the $f_i$'s, and is denoted by $\deg(f)$. A natural way to produce an ``algebraic family'' of birational maps is to consider a birational map $f=(f_0:\cdots:f_n)\in \birn$ and to allow the coefficients of the $f_i$'s to vary in an affine (irreducible) $\Bbbk$-variety $T$. That is, we consider polynomials $f_0,\ldots, f_n\in \Bbbk[T]\otimes \Bbbk[x_0,\ldots,x_n]$, homogeneous and of the same degree in $\x$, and we define $\varphi:T\to \birn$ by \[ \varphi(t,\x)=\bigl(f_0(t,\x):\cdots:f_n(t,\x)\bigr); \] in particular we assume that for all $t\in T$ the map $\varphi_t:=\varphi(t,\cdot):\PP^n\tor\PP^n$ is birational. As pointed out by Serre in \cite[\S 1.6]{Se}, the family of topologies on $\birn$ which make any such algebraic family a continuous function has a \emph{finest} element, designated in \emph{loc.~cit.} as the \emph{Zariski topology} of $\birn$. Moreover, we can replace $\PP^n$ with an irreducible algebraic variety $X$ of dimension $n$ and the same holds for $\bir(X)$. The aim of this work is to study the behavior of these ``morphisms'' $T\to\bir(X)$ and to obtain, as an application, some insight into the relationship between the topology and the algebraic structure of the group $\bir(X)$, where $X$ is a rational variety. More precisely, in Section 2 we present some basic results about $\bir(X)$ that show the relationship between the algebraic structure and the Zariski topology. In Section 3, the main one, we deal with the case $X=\PP^n$, or more generally the case where $X$ is a rational variety (see Lemma \ref{lem:functoriality}). We begin by stating two deep results about the connectedness and simplicity of $\birn$ proved in \cite{Bla11} and \cite{CaLa}, and extract as an easy consequence that a nontrivial normal subgroup of $\bir(\PP^2)$ has trivial centralizer (Proposition \ref{pro3.3}). Next we prove that for a morphism $\varphi:T\to \birn$, the function $t\mapsto \deg(\varphi_t)$ is lower semicontinuous (\S 3.2). This result has some nice consequences: \begin{itemize} \item[(a)] every Cremona transformation of degree $d$ is a specialization of Cremona transformations of degree $>d$ (Corollary \ref{cor:degeneration}); \item[(b)] the degree map $\deg:\birn\to \Z$ is lower semicontinuous (\S 3.3); \item[(c)] a morphism $T\to\birn$ maps constructible sets into constructible sets (\S 3.5); \item[(d)] the Zariski topology of $\birn$ is not Noetherian (\S 3.6); \item[(e)] there exist (explicit, non canonical) closed immersions of $\bir(\PP^{n-1})\hookrightarrow \birn$ (\S 3.7); \item[(f)] the subgroup consisting of the elements $f\in\birn$ which stabilize the set of lines passing through a fixed point is closed (\S 3.7).
\end{itemize} \section{Generalities} Following \cite[\S 2]{Dem} we have: \begin{defi}\label{defi_pseudo} A birational map $\varphi:T\times X\tor T\times X$, where $T$ and $X$ are $\Bbbk$-varieties and $X$ is irreducible, is said to be a \emph{pseudo-automorphism of $T\times X$, over $T$}, if there exists a dense open subset $U\subset T\times X$ such that: \begin{itemize} \item[(a)] $\varphi$ is defined on $U$; \item[(b)] $U_t:=U\cap \bigl(\{t\}\times X\bigr)$ is dense in $\{t\}\times X$ for all $t\in T$, and \item[(c)] there exists a morphism $f:U\to X$ such that $\varphi|_{_{U}}(t,x)=\bigl(t,f(t,x)\bigr)$, and $\varphi|_{_{U_t}}: U_t\to \{t\}\times X$ is a birational morphism. \end{itemize} \end{defi} In particular, a pseudo-automorphism $\varphi$ as above induces a family $T\to \bir(X)$ of birational maps $\varphi_t:X\tor X$. Following \cite{Bla11} we call this family an {\em algebraic family} in $\bir(X)$ or a {\em morphism} from $T$ to $\bir(X)$. We will identify a morphism $\varphi:T\to\bir(X)$ with its corresponding pseudo-automorphism and denote $\varphi_t=\varphi(t)$. Note that if $\varphi:T\to\bir(X)$ is a morphism, the map $\psi:T\to\bir(X)$ defined by $\psi_t=\varphi_t^{-1}$ is also a morphism, where $\varphi_t^{-1}$ denotes the inverse map of $\varphi_t$. We say $\calf\subset \bir(X)$ is {\em closed} if its pullback under every morphism $T\to\bir(X)$ is closed in $T$, for all $T$. This defines the so-called \emph{Zariski topology} on $\bir(X)$ (\cite{Mum}, \cite[\S 1.6]{Se}, \cite{Bla11}). In order to define the Zariski topology as above, it suffices to consider morphisms from an affine variety $T$. Indeed, notice that a subset $F\subset T$ is closed if and only if there exists a cover by open sets $T=\cup V_i$, with $V_i$ affine, such that $F\cap V_i$ is closed in $V_i$ for all $i$. Then we may restrict a pseudo-automorphism $\varphi:T\times X\tor T\times X$ to each $V_i\times X$ and obtain a pseudo-automorphism $\varphi_i:V_i\times X\tor V_i\times X$ for every $i$. The assertion follows easily from the previous remark. Clearly, we may also suppose $T$ is irreducible. Unless otherwise explicitly stated, in the sequel we always suppose $T$ is affine and irreducible. \begin{lem} \label{lem:functoriality} Let $F:X\dashrightarrow Y$ be a birational map between two algebraic varieties. Then the map $F^*:\bir(Y)\to \bir(X)$ defined by $F^*(f)= F^{-1}\circ f\circ F$ is a homeomorphism, with inverse $(F^{-1})^*$. \end{lem} \begin{proof} The result follows once we observe that $ \varphi:T\times Y\dashrightarrow T\times Y$ is a pseudo-automorphism if and only if $(\operatorname{id}\times F^{-1})\circ \varphi\circ (\operatorname{id}\times F): T\times X\dashrightarrow T\times X$ is a pseudo-automorphism. \end{proof} We consider $\bir(X)\times \bir(Y)\subset \bir(X\times Y)$ by sending $(f,g)\in\bir(X)\times \bir(Y)$ to the rational map $F:X\times Y\to X\times Y$ defined by $F(x,y)=\bigl(f(x), g(y)\bigr)$. \begin{lem}\label{lem1} Let $X, Y$ be algebraic varieties and $F\in \bir(X\times Y)$ a birational map; write $F(x,y)=\bigl(F_1(x,y),F_2(x,y)\bigr)$ for $(x,y)\in X\times Y$ in the domain of $F$. Then $F\in \bir(X)\times \bir(Y)\subset \bir(X\times Y)$ if and only if there exist dense open subsets $U\subset X$, $V\subset Y$ such that $F$ is defined on $U\times V$ and $F_1(x,y)=F_1(x,y')$, $F_2(x,y)=F_2(x',y)$ for $x,x'\in U$, $y,y'\in V$. \end{lem} \begin{proof} First suppose there exist $f\in\bir(X)$ and $g\in\bir(Y)$ such that $F(x,y)=\bigl(f(x),g(y)\bigr)$.
Consider nonempty open sets $U\subset X$ and $V\subset Y$ such that $f$ and $g$ are defined on $U$ and $V$ respectively. Hence, $F_1$ and $F_2$ are defined on $U\times V$ and we have that $F_1(x,y)=f(x)$ and $F_2(x,y)=g(y)$, from which the ``only if'' part follows. Conversely, suppose there exist nonempty open sets $U$ and $V$ as stated. Then $F_1$ and $F_2$ induce morphisms $f:U\to X$ and $g:V\to Y$ such that $F(x,y)=\bigl(f(x),g(y)\bigr)$ for $(x,y)\in U\times V$. Since $U\times V$ is dense in $X\times Y$, this completes the proof. \end{proof} \begin{pro}\label{pro1.1} If $X, Y$ are algebraic (irreducible) varieties, then $\bir(X)\times \bir(Y)\subset \bir(X\times Y)$ is a closed subgroup. \end{pro} \begin{proof} In view of Lemma \ref{lem:functoriality}, we can assume that $X\subset\A^n,Y\subset\A^m$ are affine varieties. Let $\varphi:T\times X\times Y\dashrightarrow T\times X\times Y$ be a pseudo-automorphism (over $T$). Then \[\varphi(t,x , y)=\bigl(t,f_1(t,x,y),\dots, f_n(t,x,y),g_1(t,x,y),\dots, g_m(t,x,y)\bigr),\] where $f_i,g_j\in \Bbbk(T\times X\times Y)$ are rational functions on $T\times X\times Y$ (of course, $f_i,g_j$ verify additional conditions). Let $A:=\varphi^{-1}\bigl(\bir(X)\times \bir(Y)\bigr)$ and denote by $\overline{A}$ the closure of $A$ in $T$. Following Lemma \ref{lem1}, it suffices to prove that the restrictions of the $f_i$'s (resp.~the $g_j$'s) to $\overline{A}\times X\times Y$ do not depend on $y$ (resp.~on $x$), which implies $A=\overline{A}$. Up to restricting $\varphi$ to each irreducible component of $\overline{A}$, we may suppose that $A$ is dense in $T$. By symmetry we only consider the case relative to the $f_i$'s and write $f=f_i$ for such a rational function. Since the poles of $f$ are contained in a proper subvariety of $T\times X\times Y$, we deduce that there exists $y_0\in Y$ such that the restriction of $f$ to $T\times X\times \{y_0\}$ induces a rational function on this subvariety. If $p:T\times X\times Y\to T\times X\times \{y_0\}$ denotes the morphism $(t,x,y)\mapsto (t,x,y_0)$, we conclude that $f\circ p$ is a rational function on $T\times X\times Y$. Our assumption implies that $f$ coincides with $f\circ p$ along $A\times X\times Y$, which is dense in $T\times X\times Y$, so $f=f\circ p$ and the result follows. \end{proof} \begin{rem}\label{rem5} Two pseudo-automorphisms $\varphi:T\times X\tor T\times X$ and $\psi:T\times Y\tor T\times Y$ induce a morphism $(\varphi,\psi): T\to \bir(X)\times\bir(Y)$, that is, an algebraic family in $\bir(X)\times\bir(Y)$. As in the proof of Proposition \ref{pro1.1}, it follows from Lemma \ref{lem1} that $\mathcal F\subset \bir(X)\times\bir(Y)$ is closed if and only if $(\varphi,\psi)^{-1}(\mathcal F) $ is closed for every pair $\varphi,\psi$. Moreover, it is easy to prove that the topology on $\bir(X)\times\bir(Y)$ induced by the Zariski topology of $\bir(X\times Y)$ is the \emph{finest} topology for which all the morphisms $(\varphi,\psi)$ are continuous. Observe that the Zariski topology of $\bir(X)\times\bir(Y)$ is finer than the product topology of the Zariski topologies of its factors, as is the case for algebraic varieties. \end{rem} \begin{pro}\label{pro2.1} If $\varphi,\psi:T\to \bir(X)$ are morphisms, then $t\mapsto \varphi_t\circ \psi_t$ defines an algebraic family in $\bir(X)$. Moreover, the product homomorphism $\bir(X)\times \bir(X)\to \bir(X)$ and the inversion map $\bir(X)\to\bir(X)$ are continuous. \end{pro} \begin{proof} To prove the first assertion it suffices to note that the family $t\mapsto \varphi_t\circ \psi_t$ corresponds to the pseudo-automorphism $\varphi\circ \psi: T\times X\tor T\times X$. Applying Remark \ref{rem5}, the first part of the second assertion follows. Indeed, if $\mathcal F\subset \bir(X)$ is a closed subset and $m$ denotes the product map, then $(\varphi,\psi)^{-1}\bigl(m^{-1}(\mathcal F)\bigr)=(\varphi\circ \psi)^{-1}(\mathcal F)$. For the rest of the proof it suffices to note that for a family $\psi$ as above the map $t\mapsto \psi_t^{-1}$ defines an algebraic family. \end{proof}
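\begin{rem} For later use, let us record a standard consequence of Proposition \ref{pro2.1}, which is used implicitly in the proofs of Corollary \ref{cor2.6} and Proposition \ref{pro2.9} below: for a fixed $f\in\bir(X)$, the translations and the conjugation \[ L_f(g)=f\circ g,\qquad R_f(g)=g\circ f,\qquad \kappa_f(g)=f\circ g\circ f^{-1}, \] are homeomorphisms of $\bir(X)$. Indeed, if $\varphi:T\to\bir(X)$ is a morphism, then $t\mapsto f\circ \varphi_t$ is again a morphism by Proposition \ref{pro2.1} (compose $\varphi$ with the constant family $t\mapsto f$); hence $L_f$ is continuous, with continuous inverse $L_{f^{-1}}$, and similarly for $R_f$ and $\kappa_f$. \end{rem}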
\begin{lem}\label{lem2.3} The Zariski topology on $\bir(X)$ is T1. In particular, if $\varphi,\psi:T\to \bir(X)$ are two morphisms, then the subset $\bigl\{t\in T; \varphi(t)=\psi(t)\bigr\}$ is closed. \end{lem} \begin{proof} It suffices to show that $id\in\bir(X)$ is a closed point. Without loss of generality we may suppose $X\subset \PP^m$ is a projective variety. Then a morphism $\varphi:T\to\bir(X)$ may be represented as \[ \varphi_t=\bigl(f_0(t,x):\cdots:f_m(t,x)\bigr), (t,x)\in U, \] where $U$ is as in Definition \ref{defi_pseudo} and $f_i\in \Bbbk[T][x_0,\ldots,x_m]$, $i=0,\ldots,m$, are homogeneous of the same degree in the variables $x_0,\ldots,x_m$. Therefore \[ \begin{split} \bigl\{t\in T; \varphi(t)=id\bigr\} & = \bigcap_{i,j=0}^m\bigl\{t\in T \mathrel{:} x_jf_i(t,x)-x_if_j(t,x)=0, \ \forall (t,x)\in U_t\bigr\}\\ & = \bigcap_{i,j=0}^m\bigl\{t\in T \mathrel{:} x_jf_i(t,x)-x_if_j(t,x)=0, \ \forall x\in X\bigr\}\\ & = \bigcap_{i,j=0}^m\bigcap_{x\in X}\bigl\{t\in T \mathrel{:} x_jf_i(t,x)-x_if_j(t,x)=0 \bigr\}. \end{split} \] Since for all $i,j$ the equations \[x_jf_i(t,x)-x_if_j(t,x)=h_1(x)=\cdots=h_\ell(x)=0\] (where $h_1,\ldots,h_\ell$ denote defining equations for $X\subset\PP^m$) define a closed set in $T\times X$, and $X$ is projective, we deduce that $\bigl\{t\in T \mathrel{:} \varphi(t)=id\bigr\}$ is closed in $T$. \end{proof} \begin{cor} Let $\psi:Y\to \bir(X)$ be a morphism, where $Y$ is a projective variety. Then $\psi(Y)$ is closed. \end{cor} \begin{proof} A morphism $\varphi:T\to \bir(X)$ induces a morphism $\phi: T\times Y\to \bir(X)$ defined by $(t,y)\mapsto \varphi(t)\circ\psi(y)^{-1}.$ Then $\phi^{-1}\bigl(\{id\}\bigr)=\bigl\{(t,y); \varphi(t)=\psi(y)\bigr\}$ is closed in $T\times Y$. The projection of this set onto the first factor is exactly $\varphi^{-1}\bigl(\psi(Y)\bigr)$, which is closed. \end{proof} \begin{cor}\label{cor2.6} The centralizer of an element $f\in\bir(X)$ is closed. In particular, the centralizer $C_{\bir(X)}(G)$ of a subgroup $G\subset \bir(X)$ is closed. \end{cor} \begin{proof} Since the commutator map $c_f:\bir(X)\to \bir(X)$, $c_f(h)=hfh^{-1}f^{-1}$, is continuous, $c_f^{-1}\bigl(\{id\}\bigr)$ is closed. \end{proof} Another consequence of Lemma \ref{lem2.3} (and Remark \ref{rem5}) is that for an arbitrary topological subspace $A\subset\bir(X)$ and a point $f\in\bir(X)$, the natural identification map $\{f\}\times A\to A$ is a homeomorphism. As in \cite[Chap.I, Thm. 3]{Sha} we obtain: \begin{cor}\label{cor2.7} If $A,B\subset\bir(X)$ are irreducible subspaces, then $A\times B$ is an irreducible subspace of $\bir(X)\times\bir(X)$. \end{cor} \begin{pro}\label{pro2.8} The irreducible components of $\bir(X)$ do not intersect. Moreover, $\bir(X)^0$, the unique irreducible component of $\bir(X)$ which contains $id$, is a normal (closed) subgroup. \end{pro} \begin{proof} Let $A,B$ be irreducible components containing $id$. Corollary \ref{cor2.7} implies that $A\cdot B$ is irreducible.
Since $id\in A\cap B$, we have $A \cup B \subset A\cdot B$, from which it follows that $A=A\cdot B=B$. This proves the uniqueness of $\bir(X)^0$. The rest of the proof works as in \cite[Chapter 3, Thm. 3.8]{FR}. \end{proof} We also have the following easy result: \begin{pro}\label{pro2.9} Let $H\subset \bir(X)$ be a subgroup. (a) The closure $\overline{H}$ of $H$ is a subgroup. Moreover, if $H$ is normal, then $\overline{H}$ is normal. (b) If $H$ contains a dense open set, then $H=\overline{H}$. \end{pro} \begin{proof} The proof of this result follows the same arguments as in the analogous case for algebraic groups (see \cite[Chapter 3, Section 3]{FR}). For example, in order to prove the second part of (a) it suffices to note that since $g\mapsto fgf^{-1}$ is a homeomorphism, we have $f\overline{H}f^{-1}=\overline{fHf^{-1}}$. \end{proof} \section{The Cremona group}\label{sec3} In this section we consider the case $X=\PP^n$; we fix homogeneous coordinates $x_0,\ldots,x_n$ in $\PP^n$. As in the introduction, if $f:\PP^n\tor\PP^n$ is a birational map, the \emph{degree} of $f$ is the minimal degree $\deg(f)$ of homogeneous polynomials in $\Bbbk[x_0,\ldots,x_n]$ defining $f$. \subsection{Connectedness and simplicity}\ In \cite[Thms. 4.2 and 5.1]{Bla11} J\'er\'emy Blanc proves the following two results: \begin{thm}[J. Blanc]\label{bla1} $\birplane$ does not admit nontrivial normal closed subgroups. \end{thm} \begin{thm}[J. Blanc]\label{bla2} If $f,g\in \birn$, then there exists a morphism $\theta:U\to\birn$, where $U$ is an open subset of $\A^1$ containing $0,1$, such that $\theta(0)=f, \theta(1)=g$. In particular $\birn$ is connected. \end{thm} In Theorem \ref{bla2} the open set $U$ is irreducible and the morphism $\theta$ is continuous. Hence we deduce that $\birn$ is irreducible. On the other hand, in \cite{CaLa} Serge Cantat and St\'ephane Lamy prove the following result: \begin{thm}[S. Cantat-S. Lamy]\label{cala} $\birtwo$ is not a simple (abstract) group, i.e., it contains a nontrivial proper normal subgroup. \end{thm} In fact they prove that for a ``very general'' birational map $f\in\birtwo$ of degree $d$, with $d\gg 0$, the minimal normal subgroup containing $f$ is a proper subgroup of $\birtwo$. From Theorems \ref{bla1} and \ref{cala} it follows that all nontrivial normal subgroups of $\birtwo$ are dense. Putting all together we obtain: \begin{pro}\label{pro3.3} Let $G\subset \bir(\PP^2)$ be a nontrivial normal subgroup. Then $C_{\bir(\PP^2)}(G)=\{id\}$. \end{pro} \begin{proof} Suppose $C_{\bir(\PP^2)}(G)\neq \{id\}$. The closure $\overline{G}$ of $G$ is a closed normal subgroup; by Theorem \ref{bla1} it coincides with the entire Cremona group. If $f\in C_{\bir(\PP^2)}(G)$, then $G$ is contained in the centralizer of $f$, which is closed. We deduce that $f$ commutes with all the elements of $\bir(\PP^2)$, that is, $C_{\bir(\PP^2)}(G)$ coincides with the center $Z(\bir(\PP^2))$ of ${\bir(\PP^2)}$. Since $Z(\bir(\PP^2))=\{id\}$, the result follows. For the convenience of the reader we give a proof of the well-known fact that $Z(\bir(\PP^2))=\{id\}$. Recall that $\bir(\PP^2)$ is generated by quadratic transformations of the form $g_1\sigma g_2 $ where $g_1,g_2\in\pgl(3,\Bbbk)$ and $\sigma=(x_1x_2:x_0x_2:x_0x_1)$ is the \emph{standard} quadratic transformation. Take $f\in Z(\bir(\PP^2))$. If $L\subset\PP^2$ is a general line, then we may construct a quadratic transformation $\sigma_L$ which contracts $L$ to a point and such that $f$ is well defined at this point. Since $f\sigma_L=\sigma_L f$ and we may suppose $f$ is well defined and injective on an open set of $L$, we deduce that $f$ transforms $L$ into a curve contracted by $\sigma_L$; that is, the strict transform of $L$ under $f$ is a line, and then $f\in\pgl(3,\Bbbk)$, so $f\in Z(\pgl(3,\Bbbk))=\{id\}$. \end{proof}
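The involution $\sigma$ also provides a convenient first illustration of the degree phenomena studied in the next subsection; the following computation is standard and recorded only for the reader's convenience. Composing $\sigma$ with itself gives \[ \sigma\circ\sigma=\bigl(x_0^2x_1x_2:x_0x_1^2x_2:x_0x_1x_2^2\bigr), \] whose components share the common factor $x_0x_1x_2$; removing it yields $\sigma\circ\sigma=id$. In particular $\deg(\sigma\circ\sigma)=1<\deg(\sigma)^2$; the drop in degree is entirely accounted for by the common factor, which is precisely the mechanism formalized by the notion of writing introduced below.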
\subsection{Writings and degree of a pseudo-automorphism}\ Let $\varphi:T\to\bir(\PP^n)$ be a morphism, where $T$ is an irreducible variety. Denote by $\pi: T\times\PP^n\to T$ the projection onto the first factor. Then the pseudo-automorphism $\varphi$ (Definition \ref{defi_pseudo}) fits into the following commutative diagram \[ \xymatrix{T\times \PP^n\ar@{-->}[rr]^\varphi\ar@{->}[rd]_\pi& &T\times \PP^n\ar@{->}[ld]^\pi\\ &T&} \] In other words, $\varphi$ induces a commutative diagram \[ \xymatrix{\Bbbk(T\times \PP^n)& &\Bbbk(T\times \PP^n)\ar@{-->}[ll]_{\varphi^*}\\ & \Bbbk(T)\ar@{^{(}->}[ru]_{\pi^*}\ar@{_{(}->}[lu]^{\pi^*}& }. \] We deduce that there exist rational functions $\varphi_0,\ldots,\varphi_n\in\Bbbk(T\times \PP^n)$ such that \[ \varphi(t,\x)=\bigl(\varphi_0(t,\x):\cdots:\varphi_n(t,\x)\bigr), \] where the formula above holds for $(t,\x)$ in an open set $U\subset T\times \PP^n$. Moreover, we may suppose $U\cap (\{t\}\times \PP^n)\neq \emptyset$ for all $t\in T$. Observe that we are assuming that $U$ is contained in the domain of definition of $\varphi_i$, for all $i$. Hence, for all $t\in T$, there exists an open set $U_{t}\subset \{t\}\times \PP^n$ where all the $\varphi_i|_{{U_t}}$ are well defined. We can also assume that there exists $i_t$ such that $\varphi_{i_t}$ does not vanish in $U_{t}$. Let $V\subset T$ be an affine nonempty open subset. From the remarks above, we deduce that there exists a (not necessarily unique) representation of $\varphi$ of the form \begin{equation} \varphi(t,\x)=\bigl(f_0(t,\x):\cdots:f_n(t,\x)\bigr), (t,\x)\in U'\subset U\cap(V\times \PP^n), \label{writing} \end{equation} where $U'\subset U\cap(V\times \PP^n) $ is an open subset and $f_0,\ldots,f_n\in \Bbbk[V\times\A^{n+1}]=\Bbbk[V]\otimes \Bbbk[x_0,\ldots,x_n]$ are homogeneous polynomials in $x_0,\ldots,x_n$ of the same degree. In particular, if $U'\cap \bigl(\{t_0\}\times \PP^n\bigr)\neq \emptyset$, then \[ \varphi_{t_0}(\x)= \bigl(f_0(t_0,\x):\cdots:f_n(t_0,\x)\bigr) \] for $\x$ in an open set $U'_{t_0}\subset \PP^n$; that is, there exist $\x_0\in U'_{t_0}$ and $i_0$ such that $ f_{i_0}(t_0,\x_0)\neq 0$. Observe that $ \{t_0\}\times U'_{t_0}\subset U_{t_0}$. \begin{defi}\label{defi:writing} With the notations above, consider the $(n+1)$-uple $(f_0,\ldots,f_n)$ satisfying (\ref{writing}) and let $\ell=\deg(f_i)$. We say that $w_V^\varphi=(f_0,\dots,f_n)$ is a \emph{writing of $\varphi$ on $V$}. The positive integer $\deg(w_V^\varphi):=\ell$ is said to be the \emph{degree} of $w_V^\varphi$. \end{defi} \begin{rem} Let $ w=w^\varphi_V=(f_0,\dots,f_n)$ be a writing of $\varphi$ on an affine open subset $V\subset T$. We introduce the ideal $I(w)\subset \Bbbk[V]\otimes \Bbbk[\x]$ generated by $f_0, \ldots, f_n$. Then $I(w)$ defines a subvariety $X^w\subset V\times \A^{n+1}$. Notice that $X^w$ is stable under the action of $\Bbbk^*$ on $V\times \A^{n+1}$ defined by $\lambda\cdot (t,x) = (t,\lambda x)$. Moreover, the projection $\pi:X^w\to V$ onto the first factor is equivariant and, by definition, surjective. The function $t\mapsto \dim\pi^{-1}(t)$ is upper-semicontinuous, from which we deduce that $V_{i}:=\{t; \dim\pi^{-1}(t)\geq i\}$ is closed in $V$ for all $i=1, \dots, n+1$.
Since $\pi^{-1}(t)=X^{w}\cap \bigl(\{t\}\times\A^{n+1}\bigr)$, it follows that $\dim\pi^{-1}(t)>n$ if and only if $\pi^{-1}(t)=\{t\}\times \A^{n+1}$. In other words, an element $t\in V$ belongs to $V_{n+1}$ if and only if $\bigl(\{t\}\times \PP^n\bigr)\cap U'=\emptyset$, where $U'\subset V\times \PP^n$ is the domain of definition of the rational map $(t,\x) \mapsto \bigl(t, \bigl(f_0(t,\x):\dots: f_n(t,\x)\bigr)\bigr)$. Observe that $V_{n+1}\subsetneq V$. \end{rem} The preceding remark motivates the following \begin{defi} Let $\varphi:T\to \birn$ be a morphism and $t\in T$. A \emph{writing passing through $t$} is a writing $w^\varphi_V$ of $\varphi$ such that $t\in V\setminus V_{n+1}$. \end{defi} \begin{lem} Let $\varphi:T\to \birn$ be a morphism and $t_0\in T$. Then there exists a writing $w^\varphi_V$ passing through $t_0$. \end{lem} \begin{proof} By definition, there exists $\x_0\in \P^n$ such that $\varphi$ is defined at $(t_0, \x_0)$. Hence, there exist $f_0,g_0,\dots, f_n,g_n\in \Bbbk[T]\otimes \Bbbk[\x]$ such that $(g_0\cdots g_n)(t_0,\x_0)\neq 0$ and $\varphi(t, \x)=\bigl(f_0/g_0(t,\x): \dots : f_n/g_n(t,\x)\bigr)$, where the equality holds in an open neighborhood $A$ of $(t_0,\x_0)$ in $ T\times \P^n$. Eliminating denominators, we deduce that \[ \varphi(t, \x)=\bigl(h_0(t,\x): \dots : h_n(t,\x)\bigr), \] where $h_i\in \Bbbk[T]\otimes \Bbbk[\x]$ and the above formula holds in an open subset $A' \subset A$ containing $(t_0,\x_0)$. If $V\subset T$ is an affine open subset such that for all $t\in V$ there exists $\x\in\P^n$ with $(t,\x)\in A'$, it is clear that $w^\varphi_V=(h_0,\dots,h_n) $ is a writing of $\varphi$ passing through $t_0$. \end{proof} \begin{defi} Let $\varphi:T\to\bir(\PP^n)$ be a morphism, where $T$ is an irreducible variety. Denote by $\calv$ the family of nonempty affine open sets in $T$ on which there exists at least one writing of $\varphi$. The \emph{degree} of $\varphi$ is the positive integer \[ \Deg(\varphi):=\min\{\deg(w_V^\varphi): V \in\calv \}. \] \end{defi} Note that two $(n+1)$-uples $(f_0,\ldots,f_n)$ and $ (f'_0,\ldots,f'_n)$, with $\deg(f_i)=\deg(f'_i)=\Deg(\varphi)$, define the same writing on an open set $V$ if and only if they coincide up to multiplication by a nonzero element in $\Bbbk(V)=\Bbbk(T)$. For $t\in T$ we denote by $\deg(\varphi_t)$ the usual algebraic degree of the map $\varphi_t:\PP^n\tor\PP^n$; it is the minimal degree of components among the $(n+1)$-uples of homogeneous polynomials defining $\varphi_t$. By applying (\ref{writing}) we obtain that if $t\in T$, then $\deg(\varphi_{t})\leq \deg(w_V^\varphi)$ for every writing $w_V^\varphi$ passing through $t$. Moreover, we have the following \begin{lem}\label{lem4.1} Let $w=w_V^\varphi$ be a writing for the morphism $\varphi:T\to \birn$ and $t\in V\setminus V_{n+1}$. Then the following assertions are equivalent: (a) $t\in V_{n}$. (b) There is a codimension 1 subvariety $X^w_t\subset\A^{n+1}$ such that $\pi^{-1}(t)=\{t\}\times X^w_t$. (c) $\deg(\varphi_t)<\deg(w)$. \end{lem} \begin{proof} The equivalence of assertions (a) and (b) is obvious. In order to prove that (b) is equivalent to (c) let $w=(f_0, \ldots,f_n)$; then for every $t\in V\setminus V_{n+1}$ the rational map \[ \x\mapsto \bigl(f_0(t,\x): \ldots:f_n(t,\x)\bigr) \] coincides with $\varphi_t$. Therefore $\deg(\varphi_t)<\deg(f_i)$ if and only if the polynomials $g_0,\ldots,g_n\in \Bbbk[\x]$ defined by $g_i(\x)=f_i(t,\x)$, where $t$ is fixed and $i=0,\ldots,n$, admit a nontrivial common factor. \end{proof}
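Before the example below, which is taken from the literature, it may help to see Lemma \ref{lem4.1} at work in the simplest possible situation; the following toy family is our own illustration and is not needed elsewhere. \begin{exa} Let $T=\A^1$ with coordinate $t$ and let $\varphi:T\to\bir(\PP^2)$ be given by \[ \varphi_t=(x_0^2:x_0x_1:tx_1^2+x_0x_2). \] In the affine chart $x_0=1$ this is $(x,y)\mapsto (x,y+tx^2)$, so every $\varphi_t$ is birational. Then $w=(x_0^2,x_0x_1,tx_1^2+x_0x_2)$ is a writing of $\varphi$ on $V=T$ with $\deg(w)=2=\Deg(\varphi)$. For $t\neq 0$ the components admit no nontrivial common factor, so $\deg(\varphi_t)=2$, while for $t=0$ they share the factor $x_0$ and $\varphi_0=id$ has degree $1$. Correspondingly, the fibre of $\pi:X^w\to V$ over $t=0$ is the plane $\{x_0=0\}$, of codimension $1$ in $\A^{3}$, while over $t\neq 0$ it is the line $\{x_0=x_1=0\}$; hence $V_n=\{0\}$, in agreement with Lemma \ref{lem4.1}, and $U_\varphi=\A^1\setminus\{0\}$ is open, as Proposition \ref{pro4.1} below predicts. \end{exa}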
\end{proof} The following example is taken from \cite[Lemma 2.13]{BlFu}. \begin{exa} Let $T\subset \PP^2$ be the projective nodal cubic curve of equation $a^3+b^3-abc=0$, with singular point $o=(0:0:1)$, and consider the morphism $\varphi:T\to \birn$ defined by \[\varphi(a:b:c)=(x_0f:x_1g:x_2f:\cdots:x_n f),\] where \[f=bx_0^2+cx_0x_2+ax_2^2,\ g=(a+b)x_0^2+(b+c)x_0x_2+ax_2^2;\] note that $\varphi_o=(x_0^2x_2:x_0x_1x_2:x_0x_2^2:\cdots:x_0x_2x_n)$ is the identity map. Set $ f'=abf$ and $ g'=abg$, that is \[f'=ab^2x_0^2+(a^3+b^3)x_0x_2+a^2bx_2^2,\ g'=ab(a+b)x_0^2+(ab^2+a^3+b^3)x_0x_2+a^2bx_2^2.\] If $V\subset T$ is the affine open set defined by $c=1$, then $w^\varphi_V=(x_0f',x_1g',x_2f',\ldots,x_n f')$ is a writing of $\varphi$ on $V$ with degree 3. Clearly $o\in V_{n+1}$, and $w^\varphi_V$ passes through every non-singular point of $T$ lying in $V$. As it follows from \emph{loc.\ cit.}, the polynomial $ax_0+bx_2$ defines (locally) a greatest common divisor of $f'$ and $g'$ in $\Bbbk[V']\otimes \Bbbk[\x]$, where $V':=V\setminus\{o\}$. Hence $V=V_n$. Dividing all components in $w^\varphi_V$ by $ax_0+bx_2$ we obtain a new writing on $V$ of degree $2$. One deduces $\Deg(\varphi)=2$. \end{exa} \begin{rem}\label{rem:DEGDESIGN} Consider a morphism $\varphi:T\to \bir(\PP^n)$ and let $U$ and $f:U\to \PP^n$ be as in Definition \ref{defi_pseudo}. If $\sigma:S\to T$ is a birational morphism, it follows that the morphism \[ (\sigma\times id)^{-1}(U)\to S\times\PP^n, (s,\x)\mapsto \bigl(s,f(\sigma(s),\x)\bigr), \] induces a morphism $\varphi\circ \sigma:S\to \birn$. If $s\in S$, then $\bigl((\sigma\times id)^{-1}(U)\bigr)_s\simeq U_{\sigma(s)}$ and up to this isomorphism the birational map $\varphi_{_{\sigma(s)}}$ coincides with $(\varphi\circ\sigma)_s$. \end{rem} \begin{lem}\label{lem:DEGDESIGN} Let $\varphi:T\to \bir(\PP^n)$ be a morphism and consider a birational morphism $\sigma:S\to T$. Then $\Deg(\varphi)=\Deg(\varphi\circ \sigma)$. \end{lem} \begin{proof} Notice that if $\varphi:T\to \birn$ is a morphism and $U \subset T$ is an open subset, then $\varphi|_U:U\to \birn$ is also a morphism, and clearly $\Deg (\varphi)= \Deg (\varphi|_{U})$. Since $\sigma$ restricts to an isomorphism between dense open subsets of $S$ and $T$, it suffices to prove the result when $\sigma$ is an isomorphism, in which case the result is trivial. \end{proof} \subsection{Degree and semicontinuity} \ \begin{pro}\label{pro4.1} Let $\varphi:T\to\bir(\PP^n)$ be a morphism. Consider the set $U_\varphi:=\{t\in T \mathrel{:} \deg(\varphi_t)=\Deg(\varphi) \}$. Then (a) $U_\varphi$ is a nonempty open subset of $T$. (b) $\deg(\varphi_{t})\leq \Deg(\varphi)$ for all $t\in T$. \end{pro} \begin{proof} We may reduce the proof to the case where $T$ is smooth. Indeed, if $T$ is singular we consider a proper birational surjective morphism $\sigma:S\to T$, where $S$ is smooth, and set $\psi:=\varphi\circ\sigma$; assume that assertions (a) and (b) hold on $S$. Then Remark \ref{rem:DEGDESIGN} implies that (b) holds on $T$, and that remark together with Lemma \ref{lem:DEGDESIGN} imply $U_{\psi}=\sigma^{-1}(U_{\varphi})$. Since $\sigma$ is proper and surjective, $\sigma(S\setminus U_\psi)=T\setminus U_\varphi$ is closed; hence $\sigma(U_{\psi})=U_{\varphi}$ is a nonempty open subset of $T$, which proves that (a) also holds on $T$. Now assume $T$ is smooth. In order to prove that $U_\varphi$ is not empty we consider a writing $w_V^\varphi$ such that $\deg (w_V^\varphi)=\Deg(\varphi)$. By Lemma \ref{lem4.1}, it suffices to prove that $V\backslash V_n\neq\emptyset$.
Assume that $V_{n}=V$ and consider the variety $X^w\subset V\times \A^{n+1}$ defined by the ideal $I(w)$ generated by the components of $w$. Since $V_{n+1}\subsetneq V$, it follows that $X^w$ has codimension 1; denote by $Z$ the union of the codimension 1 irreducible components of $X^w$ which project onto $V$. If $t_0\in V\setminus V_{n+1}$, then the ideal $I(Z)_{t_0}\subset \calo_{V,t_0}[\x]=\calo_{T,t_0}[\x]$ of elements in $\calo_{V,t_0}[\x]$ vanishing in a neighborhood of $(\{t_0\}\times \A^{n+1})\cap Z$ is principal; let $g\in \calo_{V,t_0}[\x]$ be a polynomial, homogeneous in $x_0,\ldots,x_n$, which generates $I(Z)_{t_0}$. Hence there exist a positive integer $\ell$, an index $0\leq j\leq n$ and homogeneous polynomials $h_0,\ldots,h_n \in \calo_{V,t_0}[\x]$ such that $f_i=g^{\ell}h_i$, for all $i=0,\ldots,n$, and $h_j\not\in I(Z)_{t_0}$. There exists an affine open neighborhood $V'$ of $t_0$ in $V\setminus V_{n+1}$ such that $f_i,g,h_i\in \Bbbk[V']\otimes\Bbbk[\x]$. Then $w^{\varphi}_{V'}:=(h_0,\ldots,h_n)$ defines a writing of $\varphi$ on $V'$ through $t_0$, with $\deg(w^{\varphi}_{V'})< \deg (w^\varphi_V)=\Deg(\varphi)$, and we obtain a contradiction. In order to prove that $U_\varphi$ is open, let $t_0\in U_\varphi$ and consider a writing $w'=w_U^\varphi=(f_0',\dots, f_n')$ passing through $t_0$. If $U\setminus U_n\neq \emptyset$, then $A= (V\setminus V_n)\cap (U\setminus U_n)\neq \emptyset$ and it follows from Lemma \ref{lem4.1} that for all $t\in A$ \[ \deg(w')=\deg(\varphi_t)=\deg(w)=\Deg(\varphi)=\deg(\varphi_{t_0}). \] Hence $t_0\in U\setminus U_n\subset U_\varphi$. If $U=U_n$, by arguing as in the preceding part of the proof we deduce the existence of an affine open neighborhood $U'\subset U\setminus U_{n+1}$ of $t_0$ and a writing $w^\varphi_{U'}=(h_0',\dots , h_n')$, with $f'_i={g'}^{\ell'}h'_i$ for some $g', h_i'\in \Bbbk[U']\otimes \Bbbk[\x]$. Since $h'_j$ does not belong to $I(Z')_{t_0} $ (obvious notations), Lemma \ref{lem4.1}(c) implies $\deg(w^{\varphi}_{U'})\leq\deg(\varphi_{t_0})$, and thus $\deg(w^{\varphi}_{U'})=\Deg(\varphi)$. Hence $t_0\in U'\setminus U'_{n}\subset U_\varphi$, which completes the proof of $(a)$. To prove (b) we consider a writing $w=w_V^\varphi=(g_0,\dots,g_n)$ such that $\deg(w)=\Deg(\varphi)$. Since $g_i\in\Bbbk[V]\otimes \Bbbk[\x]\subset \Bbbk(T)[\x]$ for all $i$, there exists $a\in\Bbbk[T]$ such that $ag_i\in\Bbbk[T]\otimes \Bbbk[\x]$ for $i=0,\ldots,n$. Write \[ ag_i=\sum_{I\in\cali} a^i_I {\x}^I,\quad \cali=\bigl\{I=(i_0,\ldots,i_n); i_0+\cdots+i_n=\Deg(\varphi)\bigr\},\quad a^i_I\in\Bbbk[T], \] for $i=0,\ldots,n$.
If $t\in T$ we take an irreducible smooth curve $C\subset T$ passing through $t$ such that $C\cap U_\varphi\neq \emptyset$. If $\alpha$ is a local parameter for the local ring $\calo_{C,t}$ of $C$ at $t$, there exists $m$ such that $\alpha^m$ divides the restriction of $a^i_I$ to $C$, for all $I$ and all $i$, but $\alpha^{m+1}$ does not; set \[g'_i:=\sum_{I\in\cali} b^i_I {\x}^I,\] where $b_I^i:=(a^i_I|_{C})/\alpha^m\in\calo_{C,t}$, $i=0,\ldots,n$. By construction $(g'_0,\ldots,g'_n)$ defines a writing of the morphism $\varphi|_{C}:C\to\bir(\PP^n)$ on an open neighborhood of $t$ in $C$. It follows that $\deg(\varphi_{t})\leq \deg(g'_i)=\deg(g_i)=\Deg(\varphi)$. \end{proof} As a consequence of (the proof of) Proposition \ref{pro4.1} we have the following: \begin{cor}\label{corowritngdeg} Let $\varphi:T\to \birn$ be a morphism, then: \noindent $(a)$ $\Deg(\varphi)=\max \bigl\{ \deg(\varphi_t)\mathrel{:} t\in T\bigr\}$. Moreover, a writing $w_V^\varphi$ is of minimum degree, that is $\deg(w^\varphi_V)=\Deg(\varphi)$, if and only if $V\setminus V_n\neq \emptyset$.\hfill \noindent $(b)$ If $t\in T$ is such that $\deg(\varphi_t)=\Deg (\varphi )$, then there exists a writing $w=w^\varphi_V$ through $t$, with $\deg(w)=\Deg(\varphi)$. \qed \end{cor} Clearly the function $t\mapsto\deg(\varphi_t)$ takes finitely many values, say $d_1=\Deg(\varphi)>d_2>\cdots>d_\ell\geq 1$. Consider the decomposition $T\backslash U_\varphi=X_1\cup\cdots\cup X_r$ in irreducible components. We may restrict $\varphi$ to each $X_i$ and apply Proposition \ref{pro4.1} to conclude $\deg(\varphi_t)=d_2$ for $t$ in an open set (possibly empty for some $i$) $U_i\subset X_i$ and $\deg(\varphi_t)<d_2$ on $X_i\backslash U_i$, $i=1,\ldots,r$. Repeating the argument with $d_3$, and so on, we deduce: \begin{thm}\label{thm4.2} Let $\varphi:T\to \bir(\PP^n)$ be a morphism. Then (a) There exists a stratification by locally closed sets $T=\cup_{j=1}^\ell V_j$ such that $\deg(\varphi_t)$ is constant on $V_j$, for all $j=1,\ldots,\ell$. (b) The function $\deg{\scriptstyle \circ}\varphi:T\to \N$, $t\mapsto \deg(\varphi_t)$, is lower-semicontinuous. \hfill \qed \end{thm} \begin{cor}\label{cor:degeneration} If $d, e\in\Z$ are positive integers with $d\leq e$, then every Cremona transformation of degree $d$ is a specialization of Cremona transformations of degrees $\geq e$. \end{cor} \begin{proof} Let $f$ be a Cremona transformation of degree $d$. Consider a morphism $\theta:T\to \bir(\PP^n)$, where $T$ is a dense open set in $\A^1$ containing $0$ and $1$, such that $\theta(0)=f$ and $\theta(1)$ is a Cremona transformation of degree $e$ (Theorem \ref{bla2}). The proof follows from Proposition \ref{pro4.1} applied to the morphism $\theta$. \end{proof} \begin{cor}\label{cor4.3} The degree function $\deg:\bir(\PP^n)\to\N$ is lower-semicontinuous, i.e. for all $d$ the subset $\birn_{\leq d}$ of birational maps of degree $\leq d$ is closed. In particular, a subset $\calf\subset\bir(\PP^n)$ is closed if and only if $\calf\cap \birn_{\leq d}$ is closed for all $d>0$. \end{cor} \begin{proof} The assertion relative to semicontinuity is a direct consequence of Theorem \ref{thm4.2}(b). For the last assertion we note that if $\varphi:T\to\bir(\PP^n)$ is a morphism and $e=\Deg(\varphi)$, then $\varphi^{-1}(\calf)=\varphi^{-1}\bigl(\calf\cap \birn_{\leq e}\bigr)$.
\end{proof} \begin{rem}\label{rem4.4} Note that $\bir(\PP^n)=\bigcup_{d\geq 1} \bir(\PP^n)_{\leq d}$, with $\bir(\PP^n)_{\leq d}\subsetneq \bir(\PP^n)_{\leq d+1}$ and $\bir(\PP^n)_1=\pgl(n+1,\Bbbk)$. \end{rem} \subsection{Algebraization of morphisms}\ In this paragraph we deal with the morphisms $\varphi:T\to \birn$ and their relationship with the stratification described in Theorem \ref{thm4.2}. We consider the locally closed sets $\birn_d:=\birn_{\leq d}\backslash \birn_{\leq d-1}$, where $d\geq 2$. If $\Deg(\varphi)=d$, then $U_\varphi=\varphi^{-1}(\birn_{d})$. Nguyen has shown in his doctoral thesis \cite{Ngu} that $\birn_d$ (with the induced Zariski topology) supports a structure of algebraic variety (see also \cite[Prop.2.15]{BlFu}). We give here some details on this construction, as a preliminary result for Theorem \ref{thm:chevalley}. For integers $d,n,r$, with $d,n>0$ and $r\geq 0$, we consider the vector space $V=\Bbbk[x_0,\ldots, x_n]_d^{r+1}$ of $(r+1)$-uples of $d$-forms. Notice that the projective space $\P_{(d,n,r)}=\P(V)$ consisting of dimension 1 subspaces in $V$ has dimension $N(d,n,r)={n+d\choose d}(r+1)-1$. The following lemma shows how to identify $\birn_d$ with a locally closed subset of $\P_{(d,n,n)}$. In particular, $\birn_{d}$ is a quasi-projective variety and $\birn_{\leq d}$ is a finite union of quasi-projective varieties. The reader should be aware that the topology induced by $\birn$ on $\birn_{\leq d}$ is not the one given by this union. \begin{lem}\label{birndtopo} There exists a canonical bijection between $\birn_{d}$ and a locally closed subset of $\P_{(d,n,n)}$. In particular, $\birn_{d}$ is a quasi-projective variety. \end{lem} \begin{proof} Let $e<d$ be a nonnegative integer. Consider the projective spaces $\P_{(d,n,n)}$, $\P_{(d-e,n,n)}$ and $\P_{(e,n,0)}$. Then there exists a ``Segre type'' morphism $s:\P_{(d-e,n,n)}\times \P_{(e,n,0)}\to \P_{(d,n,n)}$ which associates with a pair of elements $(g_0:\cdots:g_n)\in \P_{(d-e,n,n)}$, $(f)\in \P_{(e,n,0)}$ the element $(g_0f:\cdots :g_nf)$. We denote by $\calw_e\subset \P_{(d,n,n)}$ the image of $s$, which is a projective subvariety. Now consider the open set $\calu\subset\P_{(d,n,n)}$ consisting of points $(f_0:f_1:\cdots:f_n)$ where the Jacobian determinant $\partial (f_0,f_1,\ldots, f_n)/\partial (x_0,\ldots, x_n)$ is not identically zero. Clearly, an element $(f_0:f_1:\cdots:f_n)\in \calu$ can be identified with a dominant rational map $\P^n\to \P^n$ defined by homogeneous polynomials (without common factors) of degree $\leq d$, and any such dominant rational map can be described in this way. Under this identification, points in $\calu_d:=\left[\P_{(d,n,n)}\backslash\left(\cup_{e=1}^{d-1} \calw_e\right)\right]\cap \calu$ are in one-to-one correspondence with dominant rational maps defined by polynomials of degree exactly $d$. As it follows readily from \cite[Annexe B, Pro. B]{RPV}, the image of $\birn_d$ under the correspondence above (which is a bijection onto that image) is closed in $\calu_d$. Hence it is a quasi-projective variety. \end{proof} The topology given by the preceding construction coincides with the Zariski topology, inducing a structure of algebraic variety on $\birn_d$: \begin{thm}[Blanc and Furter]\label{thm4.4} Let $\varphi:T\to \birn$ be a morphism with $d=\Deg(\varphi)$, and let $U_\varphi$ be as in Proposition \ref{pro4.1}. Then we have: \noindent (a) the induced map $U_\varphi\to \birn_{d}$ is a morphism of algebraic varieties.
\noindent (b) the topology on $\birn_d$ induced by $\birn$ coincides with the topology of $\birn_d$ induced by $\P_{(d,n,n)}$ as in Lemma \ref{birndtopo}. \end{thm} \qed \subsection{Chevalley type Theorem} \begin{thm}\label{thm:chevalley} Let $X$ be a rational variety. If $\varphi:T\to \bir(X)$ is a morphism and $C\subset T$ is a constructible set, then $\varphi(C)$ is constructible and contains a dense open subset of $\overline{\varphi(C)}$. \end{thm} \begin{proof} By Lemma \ref{lem:functoriality} we may suppose $X=\P^n$; let $d=\Deg(\varphi)$. Hence $\overline{\varphi(T)}\subset \birn_{\leq d}$; we consider the morphism $\varphi_0:U_0=U_\varphi\to\birn_d$ induced by $\varphi$. On the other hand, Theorem \ref{thm4.2} gives a stratification $T\backslash U_0=\cup_{j=1}^{\ell} V_j$ by locally closed sets such that $d_j:=\deg\bigl(\varphi(t)\bigr)$ is constant on each $V_j$; let $\varphi_j:V_j\to \birn_{d_j}$ be the morphism induced by $\varphi$ on $V_j$. We deduce that $\varphi(C)$ is constructible by using Theorem \ref{thm4.4} and applying the standard Chevalley Theorem to the morphisms $\varphi_0,\varphi_1,\ldots, \varphi_\ell$. The last assertion of the theorem follows from a general topological fact: since $\varphi(C)$ is constructible, then $\varphi(C)=\cup_{i=1}^\ell Z_i$, where $Z_i$ is a locally closed subset for all $i=1,\dots , \ell$. Then \[ \varphi(C)\setminus \cup_i \bigl(\overline{Z_i}\setminus Z_i\bigr)= \overline{\varphi(C)} \setminus \cup_i \bigl(\overline{Z_i}\setminus Z_i\bigr) \] is a dense open subset of $\overline{\varphi(C)}$. \end{proof} \subsection{Cyclic closed subgroups} \begin{cor} Let $\{f_m\}\subset \birn$ be an infinite sequence of birational maps. Then $\{f_m\}$ is closed if and only if $\lim_{m\to\infty}\deg(f_m)=\infty$. In particular, the Zariski topology on $\bir(\PP^n)$ is not Noetherian. \end{cor} \begin{proof} Let $\varphi :T\to \birn$ be a morphism, with $\Deg(\varphi)=d$. If $\lim_{m\to\infty}\deg(f_m)=\infty$, then there exists $m_0$ such that $\deg(f_m)> d$ for all $m\geq m_0$, and thus $\varphi^{-1}\bigl(\{f_m\}\bigr)=\varphi^{-1}\bigl(\{f_1,\ldots,f_{m_0}\}\bigr)$ is the preimage of a finite set. Hence the ``if'' part follows from Corollary \ref{cor4.3} and Theorem \ref{thm4.4}. Conversely, suppose that $\liminf_{m\to\infty}\deg(f_m)=d<\infty$. Then there exist infinitely many $f_m$ whose degree is $d$. Hence, $\{f_m\}\cap \bir(\PP^n)_{d}$ is an infinite countable subset of the algebraic variety $\birn_d$ and thus it is not closed. \end{proof} \begin{cor}\label{cor:discret_subgroups} Let $f\in\birn$ be a birational map of degree $d$. The cyclic subgroup $\langle f\rangle$ generated by $f$ is closed if and only if either $f$ is of finite order or $\lim_{m\to\infty}\deg(f^m)=\infty$. \qed \end{cor} When $n=2$ the behavior of the sequence $\deg(f^m)$ is well known. Applying Corollary \ref{cor:discret_subgroups} one can thus characterize when $\langle f \rangle$ is closed in $\bir (\PP^2)$. \begin{cor} Let $f\in \bir(\PP^2)$. Then the following assertions are equivalent: \begin{enumerate} \item $\langle f\rangle$ is not closed in $\bir(\PP^2)$. \item $f$ has infinite order and $\bigl\{ \deg(f^m)\bigr\}_{m\in\N}$ is bounded. \item $f$ is conjugate to an element of infinite order of $\operatorname{PGL}(3,\C)$. \end{enumerate} \end{cor} \begin{proof} Indeed, following \cite{DiFa}, if $\langle f\rangle$ is infinite, then the sequence $\deg(f^m)$ is either bounded or grows at least linearly in $m$. Hence, the infinite cyclic group $\langle f\rangle$ fails to be closed if and only if the sequence $\deg(f^m)$ is bounded. The remaining equivalence follows from \cite[Thm. A]{BlDe13}.
\end{proof} \subsection{Some big closed subgroups}\ Let $o\in\PP^n$ be a point. Consider the subgroup $\staro\subset \birn$ of birational transformations which stabilize (birationally) the set of lines passing through $o$. If $o'$ is another point, then $\staro$ and $\star_{o'}(\PP^n)$ are conjugate by means of a linear automorphism; in the sequel we fix $o=(1:0:\cdots:0)$. In \cite{Do} the group $\staro$ is introduced in a different form and is called the \emph{de Jonqui\`eres subgroup of level $n-1$} (see also \cite{Pa}). Let $\pi:\PP^n\tor\PP^{n-1}$ be the projection of center $o$ defined by \[ (x_0:x_1:\cdots:x_n)\mapsto (x_1:\cdots:x_n). \] Then $\staro=\{f\in\birn: \exists \tau\in \bir(\PP^{n-1})\,, \pi f=\tau\pi\}$. Moreover, note that $\staro$ is a semidirect product: it fits into the exact sequence \[\xymatrix{1\ar@{->}[r]&\jono\ar@{->}[r]&\staro\ar@{->}[r]^\rho&\bir(\PP^{n-1})\ar@{->}[r]&1} \] where $\jono=\{f\in\birn: \pi f=\pi\}$ and $\rho$ is the evident homomorphism, sending $f$ to the element $\tau$ above. Indeed, the morphism $\sigma:\bir(\PP^{n-1})\to\bir(\PP^n)$ given by \[(h_1:\cdots:h_n)\mapsto (x_0h_1:x_1h_1:\cdots:x_1h_n)\] is injective and such that $\sigma\bigl( \bir(\PP^{n-1})\bigr)\subset \staro$. Clearly, $\rho{\scriptstyle \circ}\sigma=id$, so the sequence splits. Moreover, we claim that $\rho$ is continuous, and that $\sigma $ is a continuous closed immersion. Indeed, if $\varphi:T\to \birn$ is a morphism with $\varphi(T)\subset\staro$, then the composition $\rho{\scriptstyle \circ}\varphi$ defines a morphism $T\to\bir(\PP^{n-1})$; therefore $\rho$ is a continuous function. Clearly, $\sigma$ is continuous. In order to prove, among other things, that $\sigma$ is a closed immersion we need the following: \begin{lem}\label{lem*} Let $f\in \Bbbk[T]\otimes \Bbbk[x_0,\ldots,x_n]$ be a polynomial, homogeneous in $\x$; denote by $\deg_{x_0}(f)$ its degree in $x_0$. Then for every integer $m\geq 0$ and $i=0,\ldots,n$ the sets \[ R= \bigl\{t\in T\mathrel{:} x_i| f(t,\x)\bigr\}\ ,\ S_m= \bigl\{t\in T\mathrel{:} \deg_{x_0}\bigl(f(t,\x)\bigr) \leq m\bigr\}\] are closed in $T$. \end{lem} \begin{proof} Let $a_1,\ldots,a_N\in \Bbbk[T]$ be the coefficients of $f$ as a polynomial in $x_0,\ldots,x_n$. It is clear that $R$ and $S_m$ are defined as the common zeroes of suitable subsets of the polynomials $\{a_1,\ldots,a_N\}\subset \Bbbk[T]$. \end{proof} \begin{thm}\label{thmfinal} The subgroups $\jono$ and $\staro$ are closed and $\sigma\bigl(\bir (\PP^{n-1})\bigr)$ is closed in $\birn$. In particular, $\sigma$ is a closed immersion. \end{thm} \begin{proof} Let $\varphi:T\to\birn$ be a morphism, say with $\Deg(\varphi)=d$. In order to prove that $\varphi^{-1}\bigl(\jono\bigr)$ is closed it suffices to consider a net $(t_\xi)$ in $\varphi^{-1}\bigl(\jono\bigr)$, where $\xi$ varies in a directed set, and show that every limit point $t_\infty\in T$ of that net satisfies $\varphi(t_\infty)\in \jono$. Let $t_\infty$ be such a limit point and $T=\cup_{j=0}^lV_j$ be the stratification given by Theorem \ref{thm4.2}(a), where $V_0=U_\varphi$ is the open set introduced in Proposition \ref{pro4.1}. Then there exists $j$ such that the subnet of elements of $(t_\xi)$ lying in $V_j$ has $t_\infty$ as limit point. Thus, replacing $T$ by $\overline{V_j}$ if necessary, we can assume $t_\xi\in U_\varphi$ for all $\xi$, that is, $\deg(\varphi_{t_\xi})=d$. By shrinking $T$, if necessary, we may assume \[\varphi(t,\x)=\bigl(f_0(t,\x):\cdots:f_n(t,\x)\bigr),\] where $f_i\in \Bbbk[T]\otimes \Bbbk[\x]$ are homogeneous in $\x=\{x_0,\ldots,x_n\}$ of degree $d$ (see Corollary \ref{corowritngdeg}(b)).
From the description given in \cite[\S 2]{Pa} it follows that for all $\xi$ there exists a homogeneous polynomial $q_\xi\in \Bbbk[\x]$ such that: \begin{itemize} \item[(a)] $f_i(t_\xi,\x)=x_iq_\xi(\x)$, for $i>0$; \item[(b)] $f_0(t_\xi,\x)$ and $q_\xi(\x)$ have degrees $\leq 1$ in $x_0$; \item[(c)] $f_0(t_\xi,\x)q_\xi(\x)$ has degree $\geq 1$ in $x_0$. \end{itemize} By Lemma \ref{lem*}, when $t_\xi$ specializes to $t_\infty$, then $\varphi_\xi=\varphi(t_\xi)$ specializes to the birational map $\varphi_{t_\infty}=(f:x_1q:\cdots:x_nq):\PP^n\tor\PP^n$, where $f(\x)$ and $x_iq(\x)$, $i>0$, are polynomials in $\x$ of degree $d$ and with degree $\leq 1$ in $x_0$. Suppose that $f$ and $q$ admit a common factor $h\in \Bbbk[x_0,\ldots,x_n]$ of degree $\geq 1$. Since the limit map $\varphi_{t_\infty}$ is birational (of degree $\leq d$) we deduce that $h\in \Bbbk[x_1,\ldots,x_n]$: otherwise $h$ would have degree $1$ in $x_0$ and the map $\varphi_{t_\infty}$ would be defined by polynomials in $x_1,\ldots,x_n$, contradicting birationality. Hence $f':=f/h$ and $q':=q/h$ satisfy the conditions (b) and (c) above. We conclude $\varphi_{t_\infty}=(f':x_1q':\cdots:x_nq')$. Applying again the description of \cite[\S 2]{Pa}, we deduce that $\pi\varphi_{t_\infty}=\pi$, that is, $\varphi_{t_\infty}\in\jono$, which proves $\jono$ is closed. In order to prove that $\sigma\bigl(\bir(\PP^{n-1})\bigr)$ is closed, consider a net $(t_\xi)\subset \varphi^{-1}\bigl(\sigma\bigl(\bir(\PP^{n-1})\bigr)\bigr)$, with limit point $t_\infty$. As before, we can assume that $t_\xi \in U_\varphi$ for all $\xi$. With the notation introduced above we have that \begin{itemize} \item[(a)] $f_i(t_\xi,\x)=x_1h_{i,\xi}(\x)$, for $i>0$, and \item[(b)] $f_0(t_\xi,\x)=x_0h_{1,\xi}(\x)$, \end{itemize} where $\tau_\xi=(h_{1,\xi}:\cdots:h_{n,\xi}):\PP^{n-1}\tor\PP^{n-1}$ is birational. From Lemma \ref{lem*} we obtain that $h_{i,\xi}$ specializes to a polynomial $h_i\in \Bbbk[x_1,\ldots,x_n]$, $i>0$, and that $\varphi_{t_\infty}=(x_0h_1:x_1h_1:\cdots:x_1h_n)$. Since $\pi\varphi_{t_\infty}=\tau\pi$, where $\tau=(h_1:\cdots:h_n)$, we conclude that $\varphi_{t_\infty}\in\staro$ and thus $\tau\in \bir(\PP^{n-1})$ (\cite[Prop. 2.2]{Pa}). Since $\sigma(\tau)=\varphi_{t_\infty}$, it follows that $\sigma \bigl( \bir(\PP^{n-1})\bigr)$ is closed. Finally, since for elements $f\in\jono$ and $h\in \bir(\PP^{n-1})$ the product $f\rtimes h$ is the composition $f{\scriptstyle \circ}\sigma(h)$, then $\staro=\jono\,\operatorname{Im}(\sigma)$ (product in $\birn$). The fact that $\staro$ is closed then follows from the two assertions we have just proved together with the continuity of the function $\rho:\staro \to \bir\bigl(\PP^{n-1}\bigr)$, the group product and the group inversion. Indeed, let $(f_\xi\rtimes h_\xi)$ be a net in $\staro$ which specializes to $s\in\birn$. Then $\rho( f_\xi\rtimes h_\xi)=\rho(1\rtimes h_\xi)=h_\xi$ specializes to $\rho(s)=h\in \bir(\PP^{n-1})$. Since $(f_\xi\rtimes h_\xi)\cdot (1\rtimes h_\xi^{-1})=f_\xi\rtimes 1\in \jono$, the net $(f_\xi\rtimes 1)$ specializes to $s\sigma(h^{-1})\in\jono$. Thus $s\in\staro$. \end{proof} \begin{rem} More generally, for $\ell=1,\ldots, n$, the map $\sigma_\ell:\bir(\PP^{n-1})\to \birn$ defined by \[\sigma_\ell\bigl((h_1:\cdots:h_n)\bigr)= (x_0h_\ell:x_\ell h_1:\cdots:x_\ell h_n)\] is a continuous closed homomorphism whose image is contained in $\staro$ and such that $\rho\sigma_\ell=id$. In this notation, the map $\sigma$ of Theorem \ref{thmfinal} is $\sigma_1$.
Moreover, one has \[ \bigcap_{\ell=1}^n \sigma_\ell\bigl(\bir(\PP^{n-1})\bigr)=\{id\}. \] If $\calu_\ell$ is the dense open set $\birn\backslash \sigma_\ell\bigl(\bir(\PP^{n-1})\bigr)$, then $\birn\setminus\{id\}=\bigcup_{\ell=1}^n \calu_\ell$. \end{rem}
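\begin{rem} The computations of this section are easy to experiment with in a computer algebra system. The following sketch (in Python with SymPy) is an illustration added for the reader's convenience. It first verifies the common factor $ax_0+bx_2$ of $f'$ and $g'$ in the example on the nodal cubic (in fact the factorization already holds in $\Bbbk[a,b]\otimes\Bbbk[\x]$, before imposing the relation $a^3+b^3=ab$), and then illustrates the lower-semicontinuity of $t\mapsto\deg(\varphi_t)$ (Theorem \ref{thm4.2}) on a hypothetical one-parameter family of our own construction, not taken from the references, whose degree drops from $2$ to $1$ at $t=0$.
\begin{verbatim}
# Sketch (SymPy): checks on the examples discussed in this section.
from functools import reduce
import sympy as sp

x0, x1, x2, a, b, t = sp.symbols('x0 x1 x2 a b t')

# Example on the nodal cubic (chart c = 1): f' and g' share a*x0 + b*x2,
# already as polynomials, before using the curve relation a^3 + b^3 = a*b.
fp = a*b**2*x0**2 + (a**3 + b**3)*x0*x2 + a**2*b*x2**2
gp = a*b*(a + b)*x0**2 + (a*b**2 + a**3 + b**3)*x0*x2 + a**2*b*x2**2
print(sp.factor(fp))     # (a*x0 + b*x2)*(b**2*x0 + a**2*x2)
print(sp.factor(gp))     # (a*x0 + b*x2)*(b*(a + b)*x0 + a**2*x2)
print(sp.gcd(fp, gp))    # a*x0 + b*x2

# A hypothetical family phi_t = (x0*(x0+t*x2) : x1*(x0+t*x2) : x0*x2),
# birational for every t: deg(phi_t) = 2 for t != 0, while at t = 0 all
# components acquire the common factor x0 and phi_0 is the identity.
comps = [x0*(x0 + t*x2), x1*(x0 + t*x2), x0*x2]
print(reduce(sp.gcd, comps))                          # 1
print(reduce(sp.gcd, [c.subs(t, 0) for c in comps]))  # x0
\end{verbatim}
\end{rem}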
\subsubsection*{Acknowledgements} The authors would like to thank the referees for their valuable comments, which greatly improved this paper. This work was supported by National Natural Science Foundation of China (Grant Nos. 11701259, 11971140, 11461045, 11675113), the China Scholarship Council (Grant No.201806825038), Natural Science Foundation of Jiangxi Province of China (Grant No. 20202BAB201001), the Key Project of Beijing Municipal Commission of Education (Grant No. KZ201810028042), Beijing Natural Science Foundation (Grant No. Z190005), Natural Science Foundation of Zhejiang Province of China (Grant No.LY17A010027). This work was completed while Zhaoqi Wu and Lin Zhang were visiting Max-Planck-Institute for Mathematics in the Sciences in Germany. \vskip0.1in {\bf Appendix A: Proof of Lemma 1} {\bf Proof of Lemma 1.} Note that $\prod_{1\leq i<j\leq N}(\mu_i-\mu_j)$ is, up to a sign which is immaterial in what follows (only its square enters below), the classical Vandermonde determinant $$ \prod_{1\leq i<j\leq N}(\mu_i-\mu_j)= \left|\begin{array}{ccc} 1&\cdots&1\\ \mu_1&\cdots&\mu_N\\ \vdots&\ddots&\vdots\\ \mu_1^{N-1}&\cdots&\mu_N^{N-1}\\ \end{array} \right|. $$ It can be seen that if $P_0,P_1,\cdots,P_{N-1}$ are polynomials of respective degrees $0,1,\cdots,N-1$ and respective leading coefficients $a_0,a_1,\cdots,a_{N-1}$, one has $$ \prod_{1\leq i<j\leq N}(\mu_i-\mu_j)= \frac{1}{\prod_{k=0}^{N-1}a_k} \left|\begin{array}{ccc} P_0(\mu_1)&\cdots&P_0(\mu_N)\\ P_1(\mu_1)&\cdots&P_1(\mu_N)\\ \vdots&\ddots&\vdots\\ P_{N-1}(\mu_1)&\cdots&P_{N-1}(\mu_N)\\ \end{array} \right|. $$ Now choose $P_k(x)$ to be the Laguerre polynomials $L_k(x)$: $$ L_k(x)=\sum_{j=0}^k (-1)^j\tbinom{k}{k-j}\frac{x^j}{j!}. $$ Note that the $L_k(x)$ satisfy the orthogonality property \begin{equation}\label{eq22} \int_0^\infty L_k(x)L_l(x)e^{-x}dx=\delta_{kl}, \end{equation} and the leading coefficient of $L_k$ is $a_k=\frac{(-1)^k}{k!}$. We have \begin{eqnarray}\label{eq23} \prod_{1\leq i<j\leq N}(\mu_i-\mu_j)^2 &=&\frac{1}{\prod_{k=0}^{N-1}a_k^2} \left|\begin{array}{ccc} L_0(\mu_1)&\cdots&L_0(\mu_N)\\ L_1(\mu_1)&\cdots&L_1(\mu_N)\\ \vdots&\ddots&\vdots\\ L_{N-1}(\mu_1)&\cdots&L_{N-1}(\mu_N)\\ \end{array} \right|^2 \nonumber\\ &=&\prod_{k=0}^{N-1}(k!)^2 \sum_{\sigma,\tau\in S_N}\mathrm{sgn}(\sigma)\mathrm{sgn}(\tau)\prod_{k=1}^{N}L_{\sigma(k)-1}(\mu_k)L_{\tau(k)-1}(\mu_k), \end{eqnarray} which implies that \begin{eqnarray*} &&\int_{\mathbb{R}_+^N}\sqrt{\mu_1\mu_2}\mathrm{exp}\left(-\sum_{j=1}^N \mu_j\right)|\Delta(\mu)|^2\prod_{j=1}^N \mathrm{d}\mu_j \\&&=\prod_{k=0}^{N-1}(k!)^2 \sum_{\sigma,\tau\in S_N}\mathrm{sgn}(\sigma)\mathrm{sgn}(\tau)\left(\int_0^\infty\sqrt{\mu_1}e^{-\mu_1}L_{\sigma(1)-1}(\mu_1)L_{\tau(1)-1}(\mu_1)\mathrm{d}\mu_1\right) \\&&\left(\int_0^\infty\sqrt{\mu_2}e^{-\mu_2}L_{\sigma(2)-1}(\mu_2)L_{\tau(2)-1}(\mu_2)\mathrm{d}\mu_2\right)\left(\prod_{k=3}^{N}\int_{0}^{\infty}e^{-\mu_k} L_{\sigma(k)-1}(\mu_k)L_{\tau(k)-1}(\mu_k)\mathrm{d}\mu_k\right), \end{eqnarray*} where $S_N$ is the permutation group on $\{1,2,\cdots,N\}$. Denote $I_{kl}^{(q)}:=\int_0^\infty L_k(x)L_l(x)e^{-x}x^{q}dx$, where $q>-1$. It holds that \cite{JSR} \begin{equation}\label{eq24} I_{kl}^{(q)}=\sum_{r=0}^{\min(k,l)}(-1)^{k+l}\tbinom{q}{k-r}\tbinom{q}{l-r}\frac{\Gamma(q+r+1)}{r!},~~q>-1.
\end{equation} Note that $$\int_0^\infty\sqrt{\mu_i}e^{-\mu_i}L_{\sigma(i)-1}(\mu_i)L_{\tau(i)-1}(\mu_i)\mathrm{d}\mu_i=I_{\sigma(i)-1,\tau(i)-1}^{(\frac{1}{2})},~~i=1,2$$ and $$\int_0^\infty\sqrt{\mu_i}e^{-\mu_i}L_{\sigma(1)-1}(\mu_i)L_{\sigma(2)-1}(\mu_i)\mathrm{d}\mu_i=I_{\sigma(1)-1,\sigma(2)-1}^{(\frac{1}{2})},~~i=1,2.$$ We calculate the integral $\int_{\mathbb{R}_+^N}\sqrt{\mu_1\mu_2}\mathrm{exp}\left(-\sum_{j=1}^N \mu_j\right)|\Delta(\mu)|^2\prod_{j=1}^N \mathrm{d}\mu_j$ by considering the following two cases. {\bf Case I}: $\sigma=\tau$. Denote $I=\sum_{k=0}^{N-1}I_{kk}^{(\frac{1}{2})}$; we have \begin{eqnarray}\label{eq25} &&\sum_{\sigma,\tau\in S_N,\sigma=\tau}\mathrm{sgn}(\sigma)\mathrm{sgn}(\tau)\left(\int_0^\infty\sqrt{\mu_1}e^{-\mu_1}L_{\sigma(1)-1}(\mu_1)L_{\tau(1)-1}(\mu_1)\mathrm{d}\mu_1\right) \nonumber\\ &&\left(\int_0^\infty\sqrt{\mu_2}e^{-\mu_2}L_{\sigma(2)-1}(\mu_2)L_{\tau(2)-1}(\mu_2)\mathrm{d}\mu_2\right)\left(\prod_{k=3}^{N}\int_{0}^{\infty}e^{-\mu_k} L_{\sigma(k)-1}(\mu_k)L_{\tau(k)-1}(\mu_k)\mathrm{d}\mu_k\right) \nonumber\\ &&=\sum_{\sigma\in S_N}I_{\sigma(1)-1,\sigma(1)-1}^{(\frac{1}{2})}I_{\sigma(2)-1,\sigma(2)-1}^{(\frac{1}{2})} =(N-2)!\sum_{k\neq l}I_{kk}^{(\frac{1}{2})}I_{ll}^{(\frac{1}{2})} \nonumber\\ &&=(N-2)!\left[\left(\sum_{k=0}^{N-1}I_{kk}^{(\frac{1}{2})}\right)^2-\sum_{k=0}^{N-1}\left(I_{kk}^{(\frac{1}{2})}\right)^2\right]. \end{eqnarray} {\bf Case II}: $\sigma\neq \tau$. First, note that if there exists $k_0\in \{3,4,\cdots,N\}$ such that $\sigma(k_0)\neq \tau(k_0)$, then by Eq. (\ref{eq22}) we have $$\left(\prod_{k=3}^{N}\int_{0}^{\infty}e^{-\mu_k} L_{\sigma(k)-1}(\mu_k)L_{\tau(k)-1}(\mu_k)\mathrm{d}\mu_k\right)=0.$$ Thus the corresponding term in the sum over $\sigma,\tau$ vanishes. Otherwise, $\sigma(i)=\tau(i)$ for $i=3,\cdots,N$, which implies that $\sigma(1)=\tau(2)$ and $\sigma(2)=\tau(1)$, i.e., $\tau=\sigma(1 2)$. Then we have \begin{eqnarray}\label{eq26} &&\sum_{\sigma,\tau\in S_N,\sigma\neq \tau}\mathrm{sgn}(\sigma)\mathrm{sgn}(\tau)\left(\int_0^\infty\sqrt{\mu_1}e^{-\mu_1}L_{\sigma(1)-1}(\mu_1)L_{\tau(1)-1}(\mu_1)\mathrm{d}\mu_1\right) \nonumber\\ &&\left(\int_0^\infty\sqrt{\mu_2}e^{-\mu_2}L_{\sigma(2)-1}(\mu_2)L_{\tau(2)-1}(\mu_2)\mathrm{d}\mu_2\right)\left(\prod_{k=3}^{N}\int_{0}^{\infty}e^{-\mu_k} L_{\sigma(k)-1}(\mu_k)L_{\tau(k)-1}(\mu_k)\mathrm{d}\mu_k\right) \nonumber\\ &&=\sum_{\sigma\in S_N}(-1)I_{\sigma(1)-1,\sigma(2)-1}^{(\frac{1}{2})}I_{\sigma(2)-1,\sigma(1)-1}^{(\frac{1}{2})} =-(N-2)!\sum_{k\neq l}(I_{kl}^{(\frac{1}{2})})^2. \end{eqnarray} Combining Eqs. (\ref{eq25}) and (\ref{eq26}), we have \begin{eqnarray}\label{eq27} &&\int_{\mathbb{R}_+^N}\sqrt{\mu_1\mu_2}\mathrm{exp}\left(-\sum_{j=1}^N \mu_j\right)|\Delta(\mu)|^2\prod_{j=1}^N \mathrm{d}\mu_j \nonumber\\ &&=\prod_{k=0}^{N-1}(k!)^2\left[(N-2)!\left(\left(\sum_{k=0}^{N-1} I_{kk}^{(\frac{1}{2})}\right)^2-\sum_{k=0}^{N-1} \left(I_{kk}^{(\frac{1}{2})}\right)^2\right)-(N-2)!\sum_{k\neq l}\left(I_{kl}^{(\frac{1}{2})}\right)^2\right] \nonumber\\ &&=(N-2)!\prod^{N}_{j=1}\Gamma(j)^2\left[\left(\sum_{k=0}^{N-1} I_{kk}^{(\frac{1}{2})}\right)^2-\sum_{k,l=0}^{N-1} \left(I_{kl}^{(\frac{1}{2})}\right)^2\right], \end{eqnarray} where $$ I_{kl}^{(\frac{1}{2})}=\sum_{r=0}^{\min(k,l)}(-1)^{k+l}\tbinom{\frac{1}{2}}{k-r}\tbinom{\frac{1}{2}}{l-r}\frac{\Gamma(\frac{3}{2}+r)}{r!}.
$$ $\Box$ {\bf Appendix B: Proof of Theorem 4} {\bf Proof of Theorem 4.} Since $\mathrm{d\mu_{HS}}$ is a normalized Hilbert-Schmidt measure, by the definition of $C_I(\rho)$, we have \begin{eqnarray}\label{eq28} \int_{\mathrm{D}(\mathbb{C}^N)} C_I(\rho)\mathrm{d\mu_{HS}}(\rho) &=&\int_{\mathrm{D}(\mathbb{C}^N)} \left[1-\sum_{k=1}^N\langle k|\sqrt{\rho}|k\rangle ^2\right] \mathrm{d\mu_{HS}}(\rho) \nonumber\\ &=&1-\int_{\mathrm{D}(\mathbb{C}^N)} \sum_{k=1}^N\langle k^{\otimes 2}|\sqrt{\rho}^{\otimes 2}|k^{\otimes 2}\rangle \mathrm{d\mu_{HS}}(\rho) \nonumber\\ &=&1-\sum_{k=1}^N \left\langle k^{\otimes 2}\left|\int_{\mathrm{D}(\mathbb{C}^N)}\sqrt{\rho}^{\otimes 2}\mathrm{d\mu_{HS}}(\rho)\right|k^{\otimes 2}\right\rangle. \end{eqnarray} It suffices to compute the integral $ \int_{\mathrm{D}(\mathbb{C}^N)}\sqrt{\rho}^{\otimes 2}\mathrm{d\mu_{HS}}(\rho). $ In fact, by the factorization in Eq. (\ref{eq7}), it follows that \begin{eqnarray}\label{eq29} &&\int_{\mathrm{D}(\mathbb{C}^N)}\sqrt{\rho}^{\otimes 2}\mathrm{d\mu_{HS}}(\rho) \nonumber\\ &&=\int \mathrm{d\nu(\Lambda)}\int_{\mathrm{U(N)}}(U\otimes U)(\sqrt{\Lambda}\otimes \sqrt{\Lambda})(U\otimes U)^{\dag}\,\mathrm{d\mu_{Haar}}(U). \end{eqnarray} Using the following formula for integrals over the unitary group \cite{LZ7}: \begin{eqnarray}\label{eq30} &&\int_{\mathrm{U(N)}}(U\otimes U)A(U\otimes U)^{\dag}\mathrm{d\mu_{Haar}}(U) \nonumber\\ &&=\left(\frac{\mathrm{Tr}(A)}{N^2-1}-\frac{\mathrm{Tr}(AF)}{N(N^2-1)}\right)\mathbf{1}_{N^2} -\left(\frac{\mathrm{Tr}(A)}{N(N^2-1)}-\frac{\mathrm{Tr}(AF)}{N^2-1}\right)F, \end{eqnarray} where $A\in M_{N^2}(\mathbb{C})$ and $F$ is the swap operator defined by $F|ij\rangle=|ji\rangle$ for all $i,j=1,2,\cdots,N$, we have \begin{eqnarray}\label{eq31} \int_{\mathrm{U(N)}}(U\otimes U)(\sqrt{\Lambda}\otimes \sqrt{\Lambda})(U\otimes U)^{\dag}\mathrm{d\mu_{Haar}}(U) =\frac{N(\mathrm{Tr}\sqrt{\Lambda})^2-1}{N(N^2-1)}\mathbf{1}_{N^2}+\frac{N-(\mathrm{Tr}\sqrt{\Lambda})^2}{N(N^2-1)}F. \end{eqnarray} Noting that \begin{eqnarray}\label{eq32} \int(\mathrm{Tr}\sqrt{\Lambda})^2\mathrm{d\nu(\Lambda)} &=&\int \mathrm{d\nu(\Lambda)}+2\int \sum_{1\leq i<j\leq N}\sqrt{\lambda_i\lambda_j}\mathrm{d\nu(\Lambda)} \nonumber\\ &=& 1+2\int \sum_{1\leq i<j\leq N}\sqrt{\lambda_i\lambda_j}\mathrm{d\nu(\Lambda)} \nonumber\\ &=& 1+2C_{\mathrm{HS}}^N\int_{\mathbb{R}_+^N} \sum_{1\leq i<j\leq N}\sqrt{\lambda_i\lambda_j}\delta\left(1-\sum_{j=1}^N\lambda_j\right)|\Delta(\lambda)|^2\prod_{j=1}^N \mathrm{d}\lambda_j \nonumber\\ &=& 1+2C_{\mathrm{HS}}^N\tbinom{N}{2}\int_{\mathbb{R}_+^N} \sqrt{\lambda_1\lambda_2}\delta\left(1-\sum_{j=1}^N\lambda_j\right)|\Delta(\lambda)|^2\prod_{j=1}^N \mathrm{d}\lambda_j, \end{eqnarray} where $C_{\mathrm{HS}}^N$ is given in Eq. (\ref{eq9}), we only need to calculate $$ \int_{\mathbb{R}_+^N}\sqrt{\lambda_1\lambda_2}\delta\left(1-\sum_{j=1}^N\lambda_j\right)|\Delta(\lambda)|^2\prod_{j=1}^N \mathrm{d}\lambda_j.
$$ Denote $$F(t)=\int_{\mathbb{R}_+^N}\sqrt{\lambda_1\lambda_2}\delta\left(t-\sum_{j=1}^N\lambda_j\right)|\Delta(\lambda)|^2\prod_{j=1}^N \mathrm{d}\lambda_j.$$ By performing the Laplace transform $(t\rightarrow s)$ of $F(t)$, and letting $\mu_j=s\lambda_j$, $j=1,2,\cdots,N$, we get \begin{eqnarray}\label{eq33} \tilde{F}(s)&=&\int_{\mathbb{R}_+^N}\sqrt{\lambda_1\lambda_2}\mathrm{exp}\left(-s\sum_{j=1}^N \lambda_j\right)|\Delta(\lambda)|^2\prod_{j=1}^N \mathrm{d}\lambda_j \nonumber\\ &=& s^{-(N^2+1)}\int_{\mathbb{R}_+^N} \sqrt{\mu_1\mu_2}\mathrm{exp}\left(-\sum_{j=1}^N \mu_j\right)|\Delta(\mu)|^2\prod_{j=1}^N \mathrm{d}\mu_j. \end{eqnarray} Utilizing the inverse Laplace transform $(s\rightarrow t):\mathscr{L}^{-1}(s^{\alpha})=\frac{t^{-\alpha-1}}{\Gamma(-\alpha)}$, we obtain \begin{equation}\label{eq34} F(t)=\frac{t^{N^2}}{\Gamma(N^2+1)}\int_{\mathbb{R}_+^N}\sqrt{\mu_1\mu_2}\mathrm{exp}\left(-\sum_{j=1}^N \mu_j\right)|\Delta(\mu)|^2\prod_{j=1}^N \mathrm{d}\mu_j. \end{equation} Thus \begin{eqnarray}\label{eq35} &&\int_{\mathbb{R}_+^N}\sqrt{\lambda_1\lambda_2}\delta\left(1-\sum_{j=1}^N\lambda_j\right)|\Delta(\lambda)|^2\prod_{j=1}^N \mathrm{d}\lambda_j \nonumber\\ &&=\frac{1}{\Gamma(N^2+1)}\int_{\mathbb{R}_+^N}\sqrt{\mu_1\mu_2}\mathrm{exp}\left(-\sum_{j=1}^N \mu_j\right)|\Delta(\mu)|^2\prod_{j=1}^N \mathrm{d}\mu_j. \end{eqnarray} Substituting Eq. (\ref{eq18}) into Eq. (\ref{eq35}) yields \begin{eqnarray}\label{eq36} &&\int_{\mathbb{R}_+^N}\sqrt{\lambda_1\lambda_2}\delta\left(1-\sum_{j=1}^N\lambda_j\right)|\Delta(\lambda)|^2\prod_{j=1}^N \mathrm{d}\lambda_j \nonumber\\ &&=\frac{(N-2)!\prod^{N}_{j=1}\Gamma(j)^2}{\Gamma(N^2+1)}\left[\left(\sum_{k=0}^{N-1} I_{kk}^{(\frac{1}{2})}\right)^2-\sum_{k,l=0}^{N-1} \left(I_{kl}^{(\frac{1}{2})}\right)^2\right], \end{eqnarray} which by Eqs. (\ref{eq9}) and (\ref{eq32}) gives rise to \begin{eqnarray}\label{eq37} \int(\mathrm{Tr}\sqrt{\Lambda})^2\mathrm{d\nu(\Lambda)} =1+\frac{1}{N^2}\left[\left(\sum_{k=0}^{N-1} I_{kk}^{(\frac{1}{2})}\right)^2-\sum_{k,l=0}^{N-1} \left(I_{kl}^{(\frac{1}{2})}\right)^2\right]. \end{eqnarray} Combining Eqs. (\ref{eq29}), (\ref{eq31}) and (\ref{eq37}), we obtain \begin{eqnarray*} &&\int_{\mathrm{D}(\mathbb{C}^N)}\sqrt{\rho}^{\otimes 2}\mathrm{d\mu_{HS}}(\rho) \\&&=\int \left[\frac{N(\mathrm{Tr}\sqrt{\Lambda})^2-1}{N(N^2-1)}\mathbf{1}_{N^2}+\frac{N-(\mathrm{Tr}\sqrt{\Lambda})^2}{N(N^2-1)}F\right]\mathrm{d\nu(\Lambda)} \\&&=\frac{N\mathbf{1}_{N^2}-F}{N(N^2-1)}\int(\mathrm{Tr}\sqrt{\Lambda})^2\mathrm{d\nu(\Lambda)}+\frac{NF-\mathbf{1}_{N^2}}{N(N^2-1)}\int \mathrm{d\nu(\Lambda)} \\&&=\frac{N\mathbf{1}_{N^2}-F}{N(N^2-1)} \left(1+\frac{1}{N^2}\left[\left(\sum_{k=0}^{N-1} I_{kk}^{(\frac{1}{2})}\right)^2-\sum_{k,l=0}^{N-1} \left(I_{kl}^{(\frac{1}{2})}\right)^2\right]\right) +\frac{NF-\mathbf{1}_{N^2}}{N(N^2-1)}. \end{eqnarray*} Finally, by using the fact that $\sum_{k=1}^N \langle k^{\otimes 2}|F|k^{\otimes 2}\rangle=N,$ we have $$\sum_{k=1}^N \langle k^{\otimes 2}|N\mathbf{1}_{N^2}-F|k^{\otimes 2}\rangle =\sum_{k=1}^N \langle k^{\otimes 2}|NF-\mathbf{1}_{N^2}|k^{\otimes 2}\rangle=N^2-N,$$ so that $\frac{N^2-N}{N(N^2-1)}=\frac{1}{N+1}$. From Eq. (\ref{eq28}) we get (\ref{eq19}). $\Box$
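The closed form in Eq. (\ref{eq24}) and the $N=2$ case of Eq. (\ref{eq35}) can be checked numerically. The following sketch (Python with NumPy and SciPy) is an illustration we append here, not part of the proofs. For $N=2$ the left-hand side of Eq. (\ref{eq35}) reduces to the one-dimensional integral $\int_0^1\sqrt{\lambda(1-\lambda)}\,(2\lambda-1)^2\,\mathrm{d}\lambda=\pi/32$, since on the simplex $\lambda_2=1-\lambda_1$, while on the right-hand side $(N-2)!\,\Gamma(1)^2\Gamma(2)^2=1$ and $\Gamma(N^2+1)=24$.
\begin{verbatim}
# Numerical sanity check of Eq. (24) with q = 1/2, and of Eq. (35) for N = 2.
import numpy as np
from scipy import integrate, special

def I_closed(k, l, q=0.5):
    # Closed form of Eq. (24) for I_{kl}^{(q)}.
    return sum((-1) ** (k + l) * special.binom(q, k - r) * special.binom(q, l - r)
               * special.gamma(q + r + 1) / special.factorial(r)
               for r in range(min(k, l) + 1))

def I_numeric(k, l, q=0.5):
    # Direct quadrature of int_0^infty L_k(x) L_l(x) e^{-x} x^q dx.
    f = lambda x: (special.eval_laguerre(k, x) * special.eval_laguerre(l, x)
                   * np.exp(-x) * x ** q)
    return integrate.quad(f, 0, np.inf)[0]

for k, l in [(0, 0), (1, 1), (0, 1), (2, 3)]:
    print(k, l, I_closed(k, l), I_numeric(k, l))   # the two columns agree

# N = 2 case of Eq. (35): simplex integral versus the closed form.
lhs = integrate.quad(lambda l: np.sqrt(l * (1 - l)) * (2 * l - 1) ** 2, 0, 1)[0]
Ikk = [I_closed(k, k) for k in range(2)]
rhs = (sum(Ikk) ** 2 - sum(I_closed(k, l) ** 2
                           for k in range(2) for l in range(2))) / special.gamma(5)
print(lhs, rhs)   # both equal pi/32 = 0.09817...
\end{verbatim}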
\section{Introduction} Whether the rate of profit has a tendency to decline with capitalist development was of considerable interest to classical political economy. Being the income stream of the capitalist class, profit is both the source and spur for capital accumulation. If there was a tendency for the rate of profit to continuously fall over time, this naturally pointed to some deep contradiction in the capitalism system. For, through this tendency, the system seemed to undermine itself \citep[Chapter~IV]{dobb_1945}. Adam Smith had argued that capital accumulation and competition between capitalists would impart a tendency to the rate of profit to fall. David Ricardo, while disagreeing with Smith's explanation, nonetheless felt compelled to offer his own answer. Diminishing returns on land, argued Ricardo, were the ultimate source of the tendency for the rate of profit to fall. For, with capital accumulation, there is a rise in the demand for labor and therefore for food. This, in turn, necessitates the cultivation of inferior plots of land, thereby raising the price of labor and squeezing profits \citep[Chapter~IV, pp. 86--87]{dobb_1945}. Ricardo's argument shifted the locus of the declining tendency of the rate of profit to outside capitalism, to the possibilities or otherwise of technical progress in agricultural production. Marx brought the focus back to the internal dynamics of capitalism. In developing his own argument about the law of the tendential fall in the rate of profit in Volume III of \textit{Capital}, Marx argued that technical progress in capitalist production, which brings about a rise in the organic composition of capital (the ratio of material and labor costs), would manifest itself as a `tendency' of the rate of profit to fall \citep{marx_3}. It is immaterial whether there is technical progress in agriculture. As long as the organic composition of capital has a tendency to rise at the aggregate level, technical progress, the very strength of capitalism, will undermine itself by imparting a declining trend to the rate of profit. Starting with \citet{okishio_1961}, a large literature has argued that Marx's argument is flawed. If capitalists adopt new techniques of production only if they reduce the cost of production at existing prices, which seems reasonable, then the rate of profit will have a tendency to rise, rather than to fall. To be more precise, if capitalist producers choose to adopt a new technique of production only if it is cost-reducing at current prices and the real wage rate remains unchanged, then the long run rate of profit in the economy will rise \citep{okishio_1961, bowles_1981, roemer_1981, dietzenbacher_1989}. Okishio's justly celebrated result rests on the assumption that the real wage rate does \textit{not} change. This is an extremely restrictive assumption, given that the analysis is about \textit{long run} prices and profit rates. There is no theoretical or empirical reason to believe that the real wage rate remains constant over the course of technical change, i.e. the adoption of a new technique of production by an innovating capitalist and its subsequent diffusion through the rest of the economy.\footnote{Even \citet{okishio_2000} admits that the assumption of a constant real wage rate is unrealistic. ``The assumption of a constant real wage rate implies either a non-monetary economy or the instantaneous adaptation of the money-wage rate to the prices of consumption goods. Both are unrealistic.
A capitalistic economy is a monetary production economy. Labourers receive a money-wage. The money-wage rate and the prices of consumption goods change owing to competition in the consumption goods market and in the labour market. The assumption of a constant real wage rate cannot be maintained.'' \citep[pp.~493]{okishio_2000}.} In fact, technical change interacts with larger social and economic forces, including those relevant to labour market outcomes, and it is not inconceivable that the real wage rate can change - one way or the other - after technical change. Taking a Marxian view of the matter suggests that the real wage rate is an outcome of class struggle, and it is unclear why class struggle would not be able to change the real wage rate over the course of technical change. At the least, it seems plausible to argue that, since technical change increases labor productivity, workers will attempt to bargain for some part of the gains from technical change, especially in a context of labor constraints \citep[pp.~113--114]{dobb_1945}. Hence, it is eminently possible that the real wage rate will \textit{increase} with technical change, rather than remain unchanged in advanced capitalist economies marked by labor constraints. An important finding of the Marxist literature on technical change and distribution, one that is often not appreciated, is that Okishio's result will no longer hold if we allow the real wage to change over the course of technical change \citep{roemer_1981, foley_1986b, dietzenbacher_1989, laibman_1992, liang_2021}. There have been two broad approaches to providing more structure to how the real wage rate might change over the course of technical change, i.e. discovery, adoption and diffusion of new techniques of production. The first approach has worked with a \textit{constant profit-wage ratio} as a plausible description of how the real wage rate might behave over the course of technical change. An analysis of the effect of technical change on the rate of profit when the profit-wage ratio remains constant was worked out in a $2$-commodity model in \citet{roemer_1977, roemer_1981}. Two important findings in \citet{roemer_1977} are that, first, we can only define sectoral profit-wage ratios, but not the aggregate profit-wage ratio, without reference to the scale of production, and second, that sectoral profit-wage ratios can remain constant only when the real wage varies across sectors, i.e. we need to assume non-competitive labour markets. The main result in \citet{roemer_1977, roemer_1981} is that the rate of profit falls (or remains unchanged) if there is cost-reducing capital-using labour-saving (CU-LS) technical change in the capital goods (consumer goods) sector. This result has been generalized to the case of an $n$-commodity model - without distinguishing between capital and consumer goods industries - in \citet{chen_2019}, which shows that when there is cost-reducing CU-LS technical change in any sector with sectoral profit-wage ratios remaining constant, the equilibrium profit rate falls. The second approach uses a \textit{constant rate of exploitation} as a description of how the real wage might vary over the course of technical change. The idea that the rate of exploitation might remain constant before and after technical change goes back to Marx \citep{marx_3}. His analysis of the law of the tendential fall worked with the often implicit assumption of a constant rate of exploitation.
\citet{laibman_1982,laibman_1992} incorporated this assumption in a two-sector model and analysed the effect of technical change on the rate of profit. The main finding of \citet{laibman_1982} was that it is \textit{possible} for the rate of profit to fall after cost-reducing technical change if the rate of exploitation remains constant. While \citet{michl_1988} and \citet[Chapter~6]{basu_2021} present similar results in a one-sector model, \citet{liang_2021} has generalized Laibman's result to an $m$-sector, two-department model with fixed capital. This paper contributes to this literature by extending the results in \citet{laibman_1982}, \citet{michl_1988}, \citet[Chapter~6]{basu_2021} and \citet{liang_2021}. We extend the analysis of \citet{laibman_1982}, \citet{michl_1988} and \citet[Chapter~6]{basu_2021} to a general $n$-sector circulating capital model of a capitalist economy. Unlike \citet{laibman_1982}, we do not distinguish between capital and consumption goods. We extend the analysis in \citet{liang_2021} by allowing for a general change in the real wage bundle. Whereas \citet{liang_2021} only allows proportional changes in the vector of the real wage bundle, we allow for the real wage bundle to change in an arbitrary manner over the course of technical change. In this general setting, we demonstrate that under certain plausible conditions, the long run rate of profit can fall after viable technical change if the rate of exploitation remains constant (or even rises in a bounded manner). One advantage of using the constant rate of exploitation description of real wage behavior is that we do not need to assume non-competitive labor markets, as is needed in \citet{roemer_1981} and \citet{chen_2019}. The intuition for our result is straightforward. When a new technique of production becomes available in a sector, capitalists compare the cost of production associated with the new technique and the old technique using \textit{current} prices and wage rates. Capitalists do not know the direction in which class struggle will proceed and therefore do not take account of possible changes in the nominal or real wage rate - an outcome of class struggle - when arriving at their decision to adopt the new technique of production. Hence, if the technique reduces costs of production at current prices and wage rates, capitalists adopt the new technique of production. The course of class struggle can, under certain circumstances, lead to an increase in the real wage bundle in such a way that it not only becomes more expensive at current prices but also keeps the rate of exploitation unchanged. If technical change is of the capital-using labor-saving (CU-LS) type, the predominant form of technical change in capitalism \citep{foley_michl__tavani_2019}, then the labor values of all commodities will (weakly) fall \citep[Theorem~4.9]{roemer_1981}. Hence, a `larger' real wage bundle will still be compatible with a constant rate of exploitation. The `larger' real wage bundle can accommodate relatively higher magnitudes of commodities for which the labor values have fallen relatively more. If these commodities also had relatively high prices in the original situation compared to labor values after technical change, then the monetary cost of the real wage bundle will increase to such an extent that it will lead to a fall in the long run, equilibrium rate of profit.
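The mechanism just described can be illustrated numerically, anticipating the formal set-up of section~\ref{sec:setup}. The following sketch (Python with NumPy) uses a hypothetical $2$-sector economy of our own construction, not the $3$-sector example of section~\ref{sec:example}, and, for simplicity, the proportional rescaling of the real wage bundle studied by \citet{liang_2021}; the analysis below allows more general changes. A viable, CU-LS technical change in sector $1$, followed by a new real wage bundle that keeps the rate of exploitation at $25$ percent, lowers the equilibrium rate of profit from $1/9\approx 0.111$ to approximately $0.099$.
\begin{verbatim}
# Illustrative sketch; all numbers are hypothetical.
import numpy as np

def prices_and_profit(A, L, b):
    # Solve p = (1 + pi) p (A + b L), p b = 1: pi comes from the Perron root
    # of M = A + b L, and p is the left Perron eigenvector, scaled so p b = 1.
    M = A + np.outer(b, L)
    eigvals, eigvecs = np.linalg.eig(M.T)       # left eigenvectors of M
    i = np.argmax(eigvals.real)
    p = np.abs(eigvecs[:, i].real)
    return p / (p @ b), 1.0 / eigvals[i].real - 1.0

A = np.array([[0.20, 0.30], [0.30, 0.20]])
L = np.array([1.00, 1.00])
b = np.array([0.20, 0.20])
Lam = L @ np.linalg.inv(np.eye(2) - A)          # labor values: (2, 2)
e = (1 - Lam @ b) / (Lam @ b)                   # rate of exploitation: 0.25
p, pi = prices_and_profit(A, L, b)              # p = (2.5, 2.5), pi = 1/9

# Viable CU-LS technical change in sector 1: more materials, less labor.
Abar = np.array([[0.25, 0.30], [0.35, 0.20]])
Lbar = np.array([0.74, 1.00])
assert p @ Abar[:, 0] + Lbar[0] < p @ A[:, 0] + L[0]      # cost-reducing

Lambar = Lbar @ np.linalg.inv(np.eye(2) - Abar)           # values fall
bbar = b * (Lam @ b) / (Lambar @ b)     # proportional rescaling: e unchanged
assert np.isclose((1 - Lambar @ bbar) / (Lambar @ bbar), e)
assert p @ bbar > 1                     # new bundle dearer at old prices
# bounded cost reduction (the analogue of the third property below)
assert (p @ A[:, 0] + L[0]) - (p @ Abar[:, 0] + Lbar[0]) \
    < Lbar[0] * (p @ bbar - 1)

pbar, pibar = prices_and_profit(Abar, Lbar, bbar)
print(pi, pibar)   # 0.1111... -> 0.0989...: the equilibrium profit rate falls
\end{verbatim}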
One important condition that ensures the fall in the equilibrium rate of profit is that the reduction in cost afforded by the new technique of production, evaluated with the original prices and the original real wage bundle, not be too large. In fact, if the cost reduction is bounded above by the change in the nominal labor cost associated with the new technique of production, then the equilibrium rate of profit will fall - squeezed by the rise in the nominal labor cost coming from the new real wage bundle \citep[Theorem~5]{dietzenbacher_1989}. Since new techniques of production are perturbations of current techniques \citep{dumenil_levy_1995}, the assumption of an upper bound on the cost reduction associated with a new technique of production seems reasonable. Given this bounded nature of cost reduction, the rate of profit falls because capitalists are unable to fully take account of the effects of technical change on the labor market. While capitalists might be able to control wage movements at the level of their firm, technical change has larger impacts on the labor market that are beyond the control of individual capitalists. It is this inability to fully control wage movements that, under certain plausible configurations of technological change, will lead to a fall in the long run, equilibrium rate of profit. Hence, individually rational capitalist actions can lead to an overall undermining of the interest of the whole capitalist class. The rest of the paper is organized as follows. In section~\ref{sec:setup}, we describe the basic set-up and define viable technical change. In section~\ref{sec:results}, we derive sufficient conditions for the rate of profit to fall after viable technical change if the rate of exploitation remains constant (these results are presented as Theorems~\ref{thm:frp} and~\ref{thm:existence}); subsequently, we show that if we impose a minor restriction on the permissible set of real wage bundles before technical change, then starting from any configuration of technology and real wage, there will always exist viable, CU-LS technical changes that will satisfy the sufficient conditions of Theorem~\ref{thm:existence} (this existence result is presented as Theorem~\ref{thm:existence-1}). In section~\ref{sec:example}, we present an example of a $3$-sector model to illustrate our argument; finally, we conclude the paper in section~\ref{sec:conclusion}. \section{The Set-Up}\label{sec:setup} \subsection{Initial Configuration} Consider an economy with $n$ sectors of production, where the technology is given by the non-negative $n \times n$ input-output matrix, $A \geqq 0$, and the $1 \times n$ vector of direct labor inputs, $L \gg 0$, and the real wage bundle is given by the $n \times 1$ vector $b \geqq 0$.\footnote{For vectors and matrices, we will use the following notation: $x \geqq 0$, if for $i=1,2, \ldots, n$, $x_i \geq 0$ and $x \neq 0$; $x \gg 0$, if for $i=1,2, \ldots, n$, $x_i>0$.} Each sector produces one commodity with one technique of production and there is no fixed capital. The cost of producing one unit of the commodity in sector $i$ is given by $p A_{*i}+wL_i$, where $A_{*i}$ denotes the $i$-th column of $A$, and $w=pb$ is the nominal wage rate.
Using the normalization that the nominal wage rate is unity, the $1 \times n$ vector of long run equilibrium prices (prices of production), $p$, and the long run equilibrium (uniform) rate of profit, $\pi$, are given by \begin{equation}\label{eq:pop-1} p = \left( 1+\pi \right) p M, \textrm{ and } pb=1, \end{equation} where $M = A+bL$ is the augmented input matrix. We assume that the input-output matrix, $A$, is productive and indecomposable. Then, an application of the Perron-Frobenius theorem shows that $p \gg 0$ and $\pi>0$ \citep[pp. 36]{dietzenbacher_1989}. For this configuration of technology, the $1 \times n$ vector of labor values, $\Lambda$, is given by \begin{equation}\label{value-def} \Lambda = L \left( I-A\right)^{-1}. \end{equation} Standard results in linear algebra show that, since $A$ is productive and indecomposable, $(I-A)^{-1} \gg 0$ \citep[Appendix]{pasinetti_1977}. Hence, we have $\Lambda \gg 0$. Once labor values and the real wage bundle are known, we can define the rate of exploitation as \begin{equation} e = \frac{1-\Lambda b}{\Lambda b}. \end{equation} \begin{assumption}\label{def-B} The real wage bundle, $b$, is an element of the set, $ \mathbb{B} = \mathbb{B}_1 \cap \mathbb{B}_2 $, where $\mathbb{B}_1 = \left\lbrace b \in \mathbb{R}^n_{+} \textrm{ s.t. } 0 < \Lambda b \leq 1\right\rbrace$, and $\mathbb{B}_2 = \left\lbrace b \in \mathbb{R}^n_{+} \textrm{ s.t. } 1/(\Lambda b) = 1+e < \max_k (p_k/\lambda_k)\right\rbrace$, where $p_k$ and $\lambda_k$ denote the $k$-th element of $p$ and $\Lambda$, respectively. \end{assumption} According to assumption~\ref{def-B}, a permissible real wage bundle before technical change must satisfy two restrictions. First, it must belong to $ \mathbb{B}_1$. This restriction is meant to rule out negative rates of exploitation, because the latter are conceptually meaningless. Note that there does not exist a one-to-one relationship between the rate of exploitation and the real wage bundle. In fact, given a positive rate of exploitation, $e>0$, any real wage bundle, $b$, that satisfies $\Lambda b = 1/(1+e)$ will be associated with $e$. The second condition in the definition of $\mathbb{B}$ imposes some restriction on the permissible real wage bundles and requires comment. The second restriction is that the real wage bundle, $b$, must belong to $\mathbb{B}_2$, i.e. $1+e = 1/(\Lambda b) < \max_k (p_k/\lambda_k)$. This is a technical condition required for proving a result further down in the paper (Theorem~\ref{thm:existence-1}). It states that, given any positive rate of exploitation, $e>0$ (which is ensured by the first restriction), the real wage bundle must be such that the maximum price-value ratio among the $n$ commodities is strictly greater than the reciprocal of the value of the real wage bundle (which is equal to $1$ plus the rate of exploitation). This condition is less restrictive than might appear at first sight. \citet[Corollary~8.6]{roemer_1981} shows that, as long as the organic composition of capital is not identically equal in all sectors, $\max_k (p_k/\lambda_k) \geq 1+e \geq \min_k (p_k/\lambda_k)$. Hence, the condition imposed in assumption~\ref{def-B} is just ruling out the possibility of an equality for the left-hand weak inequality. We will comment on this condition below when we use it. \subsection{Viable, CU-LS Technical Change} Suppose there is a cost-reducing (viable) technical change in sector $i$, i.e.
the cost of producing one unit of output with the new technique of production is lower than with the older technique of production when both are evaluated at current prices and the wage rate. Hence, \begin{equation}\label{viability} p A_{*i} + L_i > p \bar{A}_{*i} + \bar{L}_i, \end{equation} where $A_{*i}$ and $\bar{A}_{*i}$ denote the $i$-th columns of the matrices $A$ and $\bar{A}$, respectively, $L_i$ denotes the $i$-th element of $L$, and we have used the normalization, once again, that the nominal wage rate is $1$. In addition, suppose technical change is capital-using and labor-saving (CU-LS). This means that the amount of material inputs used to produce one unit of the commodity in sector $i$ rises, while the amount of direct labor input falls. Hence, for $j = 1, 2, \ldots, n$, \begin{equation}\label{culs} a_{ji} < \bar{a}_{ji}, \textrm{ and } L_i > \bar{L}_i, \end{equation} where $a_{ji}$ and $\bar{a}_{ji}$ denote the $(j,i)$-th elements of $A$ and $\bar{A}$, respectively, and $L_i$ denotes the $i$-th element of $L$. Since the new technique of production reduces unit cost of production, evaluated at current prices, capitalist firms in sector $i$ will adopt the new technique of production \citep{okishio_1961}. All other sectors continue using the old technology - because technical change occurs only in sector $i$. Hence, the new technology in the economy is captured by the $n \times n$ input-output matrix, $\bar{A}$, and the $1 \times n$ vector of direct labor inputs, $\bar{L}$, where the columns of $A$ and $\bar{A}$ are identical other than for column $i$, and the elements of $L$ and $\bar{L}$ are identical, other than the $i$-th element. With the new technology, the $1 \times n$ vector of labor values, $\bar{\Lambda}$, is given by \begin{equation} \bar{\Lambda} = \bar{L} \left( I - \bar{A}\right)^{-1} \gg 0, \end{equation} where strict inequality follows because $\bar{A}$ is productive.\footnote{We will only need viable technical change for the results in Theorems~\ref{thm:frp} and~\ref{thm:existence} below; we will require CU-LS technical change for Theorem~\ref{thm:existence-1}.} \subsection{Class Struggle and a New Real Wage Bundle} In this paper we will study the consequences of a viable, CU-LS technical change on the equilibrium rate of profit when the real wage bundle is allowed to vary. Suppose class struggle, during and after the technical change, leads to the emergence of a new real wage bundle, $\bar{b} \geqq 0$, with the following properties. \begin{property}\label{ass:more-exp} The new real wage bundle is such that $\bar{b} \in \mathbb{B}_1$ and \begin{equation}\label{ass:bexpense} p \bar{b}>pb=1. \end{equation} \end{property} This property specifies that the new real wage bundle is more expensive at original prices. This just means that workers are able to bargain for and secure a higher nominal wage rate as the process of technical change works itself out over the long run. \begin{property}\label{ass:constexp} The new real wage bundle, $\bar{b}$, keeps the labor value of the real wage bundle unchanged, i.e. \begin{equation} \Lambda b = \bar{\Lambda} \bar{b}. \end{equation} \end{property} The rate of exploitation, before technical change, is given by $e=(1-\Lambda b)/\Lambda b$. After technical change and with the new real wage bundle, it is given by $\bar{e}=(1-\bar{\Lambda} \bar{b})/\bar{\Lambda} \bar{b}$. Hence, this property ensures that workers are able to secure a new real wage bundle that keeps the rate of exploitation unchanged even after technical change.
If the rate of exploitation captures the balance of class forces, after taking account of technical change and its impact on the labor market, then this assumption states that there is no change in the balance of class forces over the course of technical change. \begin{property}\label{ass:costred} The decline in the unit cost of production in sector $i$ (the sector that witnessed technical change) is bounded above by the change in the nominal labor cost corresponding to the new technique of production, \begin{equation}\label{cond:costred} 0 < p A_{*i} + L_i - p \bar{A}_{*i} - \bar{L}_i < \bar{L}_i \left( p \bar{b} - 1\right) . \end{equation} \end{property} Since technical change in sector $i$ is viable, as captured by (\ref{viability}), it reduces the unit cost of production at current prices and wage rates. This gives us the left hand side of the inequality in (\ref{cond:costred}). The right hand side of (\ref{cond:costred}), in addition, puts an upper bound on the decline in the unit cost of production. Note that the unit cost of production in sector $i$ before technical change is given by $p A_{*i} + L_i$; after technical change in that sector, it is given by $p \bar{A}_{*i} + \bar{L}_i$. Hence, $p A_{*i} + L_i - p \bar{A}_{*i} - \bar{L}_i$ is the decline in the unit cost of production in sector $i$. Since $pb=1$ was the nominal wage rate in the initial situation and $p \bar{b}$ is the nominal wage rate with the new real wage bundle, where both are evaluated at the original prices, $\bar{L}_i(p \bar{b} - 1)$ is the change in the nominal labor cost in sector $i$ corresponding to the direct labor input requirement associated with the new technique, $\bar{L}_i$. The right hand side of the inequality in (\ref{cond:costred}) states that the decline in the unit cost of production is bounded above by $\bar{L}_i( p \bar{b} - 1)$. This condition is reasonable because technical change involves the emergence and adoption of new techniques of production that are local perturbations of the existing techniques of production \citep{dumenil_levy_1995}. Thus, while the amounts of material and labor inputs required by the new technique differ from the old, the changes are not too large. This intuition, that the cost change associated with the new technique is not `too large', is what the bound above captures. \section{Main Results}\label{sec:results} The main results in this paper consist of three theorems. First, in Theorem~\ref{thm:frp} we show that if there exists some $\bar{b} \geqq 0$ that satisfies properties~\ref{ass:more-exp}, \ref{ass:constexp} and \ref{ass:costred}, then viable technical change keeps the rate of exploitation constant even as the equilibrium rate of profit falls. This is a straightforward application of \citet[Theorem~5]{dietzenbacher_1989}. Second, in Theorem~\ref{thm:existence} we derive sufficient conditions for some new real wage bundle $\bar{b} \in \mathbb{B}_1$ to satisfy properties~\ref{ass:constexp} and \ref{ass:costred}. The implication is that if class struggle leads to the emergence of such a real wage bundle then viable technical change will be accompanied by a fall in the uniform rate of profit even as the rate of exploitation remains unchanged. Analytically, the main challenge is to show that such a real wage bundle exists and is economically meaningful.
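Before stating the theorems, it may help to make the equilibrium objects concrete. The following Python/numpy sketch is ours and purely illustrative (the paper's own code, referenced in Section~\ref{sec:example}, is in R, and all names here are hypothetical); it computes $(p, \pi, \Lambda, e)$ from $(A, L, b)$ by treating $p$ as the left Perron eigenvector of $M$, since (\ref{eq:pop-1}) implies $pM = p/(1+\pi)$.

\begin{verbatim}
import numpy as np

def long_run_equilibrium(A, L, b):
    # Augmented input matrix M = A + bL.  The system p = (1+pi) p M
    # with p b = 1 makes p a left eigenvector of M associated with
    # the Perron root rho(M) = 1/(1+pi).
    n = len(L)
    M = A + np.outer(b, L)
    eigvals, eigvecs = np.linalg.eig(M.T)    # left eigenvectors of M
    k = np.argmax(eigvals.real)              # Perron root is the largest
    pi = 1.0 / eigvals.real[k] - 1.0         # uniform rate of profit
    p = np.abs(eigvecs[:, k].real)           # Perron vector, chosen > 0
    p = p / (p @ b)                          # normalization p b = 1
    Lam = L @ np.linalg.inv(np.eye(n) - A)   # labor values
    e = (1.0 - Lam @ b) / (Lam @ b)          # rate of exploitation
    return p, pi, Lam, e
\end{verbatim}

Since $A$ is productive and indecomposable, the Perron root of $M$ is simple and its left eigenvector can be chosen strictly positive, consistent with $p \gg 0$ above.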
While it is easy to see that an increase in the real wage bundle will lead to a fall in the equilibrium rate of profit, it is not immediately obvious that such a real wage bundle can also keep the rate of exploitation unchanged. The second theorem below provides sufficient conditions for the existence of such real wage bundles.\footnote{Note that for the first two theorems, we do not need the assumption of CU-LS technical change. We only need technical change to be viable, i.e. cost-reducing at original prices.} While Theorem~\ref{thm:existence} takes us some distance, it still leaves the question of existence unaddressed. That is, we still need to ask whether, starting from any configuration of technology and real wage (that satisfies assumption~\ref{def-B}), we can find a new viable technique of production that satisfies the sufficient condition of the second theorem. Theorem~\ref{thm:existence-1} in this paper answers this question in the affirmative for the class of CU-LS technical changes. In this theorem, we show that, starting from \textit{any} configuration of technology and the real wage bundle (that satisfies assumption~\ref{def-B}), there always exists some viable, CU-LS technological change that satisfies the sufficient condition of Theorem~\ref{thm:existence}. The three theorems together show that for any configuration of technology and real wage bundle (that satisfies assumption~\ref{def-B}), it is always \textit{possible} for a capitalist economy to witness a fall in the equilibrium rate of profit alongside a constant rate of exploitation, after a viable, CU-LS technical change. \begin{theorem}\label{thm:frp} Let $p$ and $\pi$ denote the price of production vector and the uniform rate of profit with the technology given by $A, L$ and the real wage bundle given by $b \in \mathbb{B}$. Suppose there is a viable technological change, with the new technology given by $\bar{A}, \bar{L}$ and the real wage bundle given by $\bar{b} \in \mathbb{B}_1$. Let $\bar{p}$ and $\bar{\pi}$ denote the price of production vector and the uniform rate of profit with the new technology. If the new real wage bundle $\bar{b}$ satisfies properties~\ref{ass:more-exp}, \ref{ass:constexp} and \ref{ass:costred}, then $\bar{\pi}<\pi$. \end{theorem} \begin{proof} Since $\bar{b}$ satisfies property~\ref{ass:constexp}, the rate of exploitation remains unchanged. An application of \citet[Theorem~5]{dietzenbacher_1989} shows that, since (\ref{ass:bexpense}) and (\ref{cond:costred}) hold, the uniform rate of profit declines. \end{proof} The implication of this result is interesting. It shows that if a more expensive real wage bundle satisfying properties~\ref{ass:constexp} and \ref{ass:costred} exists, then viable technical change can, at the same time, keep the rate of exploitation constant and also lead to a fall in the uniform rate of profit. Hence, this shows that Marx's claim in Volume III of \textit{Capital} can be sustained under certain conditions. Of course, to complete the argument, we must demonstrate that such a real wage bundle $\bar{b}$ actually exists. In the next theorem we provide sufficient conditions for the existence of such a real wage bundle. \begin{theorem}\label{thm:existence} Let $e$ denote the rate of exploitation before technical change, i.e.
\begin{equation}\label{def:exp} e = \frac{1-\Lambda b}{\Lambda b}>0, \end{equation} let $p$ denote the initial price of production vector, and let $g$ denote the decline in the cost of production in sector $i$ (the sector which witnessed viable technical change) as a fraction of the labor cost corresponding to the new technique of production in that sector evaluated at the old wage rate, \begin{equation}\label{defg} g = \frac{\left( p A_{*i} + L_i\right) - \left( p \bar{A}_{*i} + \bar{L}_i\right)}{\bar{L}_i}>0. \end{equation} Let $\bar{\Lambda}=[\bar{\lambda}_j]$ denote the vector of labor values after the viable technical change described in Theorem~\ref{thm:frp}. If, for some $j=1,2, \ldots, n$, \begin{equation}\label{thm:cond} \left( p_j/\bar{\lambda}_j\right) > \left( 1+e\right)\left( 1+g\right), \end{equation} then there exists some $\bar{b} \in \mathbb{B}_1$ that satisfies properties~\ref{ass:more-exp}, \ref{ass:constexp} and \ref{ass:costred}. \end{theorem} \begin{proof} Consider the $n$-dimensional space whose coordinate system is ($\bar{b}_1, \ldots, \bar{b}_n$). Any point in this space is a candidate real wage bundle. We only consider the nonnegative orthant of this space, because negative elements in the real wage bundle are not meaningful. Consider the hyperplane in this space given by the set of points $P$ defined by \begin{equation}\label{defP} P = \left\lbrace \bar{b} \geqq 0 \quad | \quad p \cdot \bar{b} - \alpha = 0 \right\rbrace, \end{equation} where $i$ denotes the sector in which technical change occurred, and \begin{equation}\label{defalpha} \alpha = ( p A_{*i} + L_i - p \bar{A}_{*i} )/\bar{L}_i>1, \end{equation} where the strict inequality in (\ref{defalpha}) comes from the fact that $\alpha = 1+g$ and $g>0$. Note that, for $j=1, 2, \ldots, n$, the hyperplane $P$ intersects the coordinate axes at points of the form $x_j e_j$, where $e_j$ is an $n$-vector with $1$ as the $j$-th element and $0$ as every other element, and $ x_j = \alpha/p_j > 0 $, where the strict inequality follows from (\ref{defalpha}) and $p_j>0$. Thus, the hyperplane $P$ intersects the coordinate axes at strictly positive points. The important point to note is that all points `above' the hyperplane $P$ satisfy properties~\ref{ass:more-exp} and \ref{ass:costred}, i.e. such real wage bundles are more expensive at the original prices, and the decline in the unit cost of production in sector $i$ is bounded above by the change in the labor cost corresponding to the new technique of production. Now consider the hyperplane, in the same $n$-dimensional space, given by the set of points $V$ defined by \begin{equation}\label{defV} V = \left\lbrace \bar{b} \geqq 0 \quad | \quad \bar{\Lambda} \cdot \bar{b} - \beta = 0 \right\rbrace, \end{equation} where \begin{equation}\label{defbeta} \beta = \Lambda b>0, \end{equation} where the strict inequality in (\ref{defbeta}) follows because $b \in \mathbb{B}$ implies $\Lambda b > 0$. Note that, for $j=1, 2, \ldots, n$, this hyperplane intersects the coordinate axes at points of the form $y_j e_j$, where $y_j = \beta/\bar{\lambda}_j > 0$, where the strict inequality follows from (\ref{defbeta}) and $\bar{\lambda}_j>0$. Thus, the hyperplane $V$ also intersects the coordinate axes at strictly positive points. For the hyperplane $V$, the important point to note is that all points on this hyperplane satisfy property~\ref{ass:constexp}, i.e. such real wage bundles ensure that the rate of exploitation remains unchanged before and after technical change.
Using the assumption of the theorem, condition (\ref{thm:cond}), we have, for at least one $j=1, 2, \ldots, n$, $ ( p_j/\bar{\lambda}_j) > ( 1+e)( 1+g)$. Using (\ref{defg}), we see that $(1+g)=(p A_{*i}+L_i-p \bar{A}_{*i})/\bar{L}_i$. Using (\ref{def:exp}), we also know that $(1+e)=(1/\Lambda b)$. Using these, we have, \begin{align*} \frac{p_j}{\bar{\lambda}_j} & > ( 1+e)( 1+g) = \frac{1}{\Lambda b}\left[ \frac{p A_{*i}+L_i-p \bar{A}_{*i}}{\bar{L}_i}\right] = \frac{ \alpha}{\beta}, \end{align*} so that $\beta/\bar{\lambda}_j>\alpha/p_j$. This means that some portion of the hyperplane, $V$, lies above the hyperplane, $P$, in the \textit{positive orthant}, as shown in Figures~\ref{fig:simplex} and ~\ref{fig:2d} for a $3$-dimensional setting. This provides us with an infinite number of points $\bar{b} \in \mathbb{B}_1$ for which properties~\ref{ass:more-exp}, \ref{ass:constexp} and \ref{ass:costred} will be satisfied. To show this more formally, we need to demonstrate that points on the hyperplane $V$ that lie in the positive orthant \textit{and} to the right of the intersection with hyperplane $P$ are `above' the hyperplane $P$. Let $j=k$ be an index for which (\ref{thm:cond}) is satisfied, i.e. $ (p_k/\bar{\lambda}_k) > ( 1+e)( 1+g)$. Consider the hyperplane given by the set of points $B_{k-1}$ defined by $B_{k-1} = \left\lbrace \bar{b} \geqq 0 | \bar{b}_{k-1}=0 \right\rbrace$. Let $x$ denote the point of intersection of $P$, $V$ and $B_{k-1}$. Let $y$ denote the point where the hyperplane $V$ intersects the $b_k$ coordinate axis, and let $z$ denote the point where the hyperplane $P$ intersects the $b_k$ coordinate axis. Let $u$ denote the unit vector given by $u = (y-x)/\|y-x\|$, and let $v$ denote the unit vector given by $v = (z-x)/\|z-x\|$, where $\|w\|$ denotes the Euclidean norm of the vector $w$. The vector ($y-x$) lies on the hyperplane $V$, and the vector ($z-x$) lies on the hyperplane $P$.\footnote{In Figure~\ref{fig:2d}, $x=OA, y=OC, z=OB$. Hence, $y-x=CA$ and $z-x=BA$. } Hence, the angle between $u$ and $v$ is the angle between $p$ and $\bar{\Lambda}$, because the vector $p$ is perpendicular to the hyperplane $P$ and the vector $\bar{\Lambda}$ is perpendicular to the hyperplane $V$ (see Figure~\ref{fig:2d}).\footnote{Recall that $P$ is given by $p \cdot \bar{b} = \alpha$, and $V$ is given by $\bar{\Lambda} \cdot \bar{b} = \beta$. This shows that the vector $p$ is perpendicular to $P$ and the vector $\bar{\Lambda}$ is perpendicular to $V$.} Hence, if $\theta$ denotes the angle between $u$ and $v$, then $\cos \theta = (p \cdot \bar{\Lambda})/\|p\|\|\bar{\Lambda}\|>0$, where the strict inequality comes from the fact that both $p$ and $\bar{\Lambda}$ are strictly positive vectors. Let $0<t<1$, and consider a point $x_0 = x + tu$, and note that $x_0$ lies on $V$. We will show that $x_0$ is `above' $P$ by showing that it lies on the other side of $P$, compared to the origin. When we plug the zero vector (the origin) into the equation for the hyperplane $P$, we get $-\alpha<0$. When we plug $x_0$ into the equation for the same hyperplane $P$, we get $p \cdot x+tp \cdot u$. Since $x$ lies on $P$, we have $p \cdot x = \alpha>0$. Additionally, $tp \cdot u = t \|p\| \cos \theta >0$, because $\cos \theta>0$ (as we saw above). Hence, $p \cdot x+tp \cdot u>0$. This shows that the point $x_0$ is `above' the hyperplane $P$, i.e. the origin and $x_0$ are on \textit{different} sides of the hyperplane $P$.
\end{proof} \textit{Discussion.} The key condition in Theorem~\ref{thm:existence} is captured in (\ref{thm:cond}). This instructs us to look at the ratio of the price of production before technical change and the labor value after technical change, sector by sector. If, for some sector, this ratio strictly exceeds the product $(1+e)(1+g)$, then the condition is satisfied. When technical change is CU-LS, this condition is not restrictive, because $p\gg \Lambda \geqq \bar{\Lambda}$, i.e. the vector of prices of production before technical change is strictly larger than the vector of values after technical change. \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{mysimplex} \caption{The hyperplanes $P$ (blue, extending outward on the $\bar{b}_1$ axis) and $V$ (red, extending outward on the $\bar{b}_3$ axis).} \label{fig:simplex} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{my2d} \caption{Projection of the hyperplanes $P$ (blue) and $V$ (red) onto the plane defined by $\bar{b}_2=0$.} \label{fig:2d} \end{subfigure} \caption{Intersecting hyperplanes given by equations (\ref{defP}) and (\ref{defV}) for the case $n=3$.} \label{fig:hyper} \end{figure} To see the first inequality, $p \gg \Lambda$, note that, from (\ref{eq:pop-1}), we have $p = (1+\pi)L[I - (1+\pi)A]^{-1} = (1+\pi)L \sum_{j=0}^{\infty}(1+\pi)^jA^j \gg (1+\pi)L \sum_{j=0}^{\infty}A^j = (1+\pi)L[I - A]^{-1}= (1+\pi)\Lambda \gg \Lambda$, where we have used the facts that $0<\pi<R$ (where $R$ is the maximal rate of profit, i.e. the rate of profit when the real wage bundle is the zero vector) and that $A$ is productive. As long as the real wage bundle, $b \in \mathbb{B}$, has at least one strictly positive element, we have $0<\pi<R$, where $1+R$ is the reciprocal of the maximal eigenvalue of $A$; this ensures the validity of the infinite series matrix expansion of $ [I - (1+\pi)A]^{-1} $. As long as $A$ is productive, we have a valid infinite series matrix expansion for $[I - A]^{-1}$.\footnote{Note that we use the convention $A^0=I$: the $0$-th power of the matrix $A$ is the identity matrix. For a discussion of this infinite series matrix expansion, see \citet[Appendix, pp. 266]{pasinetti_1977}.} The second inequality, $\Lambda \geqq \bar{\Lambda}$, follows from \citet[Theorem~4.9]{roemer_1981} because the technical change under consideration is CU-LS. Given these inequalities, i.e. $p \gg \Lambda \geqq \bar{\Lambda}$, the condition in (\ref{thm:cond}) merely states that the ratio of the price of production (before technical change) to the labor value (after technical change) in at least one sector must not only be larger than unity, but larger than the quantity appearing on the right hand side of (\ref{thm:cond}). The intuition behind this condition is that, since CU-LS technical change reduces labor values, we can increase the magnitude of commodities in the real wage bundle with relatively low labor value and yet keep the value of the wage bundle unchanged. But, if these commodities had relatively high prices in the original situation compared to their labor values after technical change, then the monetary cost of the real wage bundle increases to such an extent that it leads to a fall in the equilibrium rate of profit. While this intuitive argument might be persuasive, it still leaves open the question of a rigorous demonstration of existence.
Starting from \textit{any} configuration of technology and real wage bundle (that satisfies assumption~\ref{def-B}), can we always find some viable, CU-LS technological change that satisfies the condition in (\ref{thm:cond})? The next result shows that this question can be answered in the affirmative. \begin{theorem}\label{thm:existence-1} Let $A$ and $L$ denote any input-output matrix and direct labor input vector, respectively, and let the real wage bundle be given by $b \in \mathbb{B}$, as defined in assumption~\ref{def-B}. Then, a new input-output matrix, $\bar{A}$, and a direct labor input vector, $\bar{L}$, exist such that the new technology is viable and CU-LS, and the condition in (\ref{thm:cond}) is satisfied. \end{theorem} \begin{proof} Given $A$ and $L$, we can calculate the vector of labor values, $\Lambda$, using (\ref{value-def}). Choose a real wage bundle, $b \in \mathbb{B}$, as defined in assumption~\ref{def-B}. The definition of $\mathbb{B}$ ensures that $\max_k (p_k/\lambda_k)> (1+e) = (1/\Lambda b)$. Let $j$ be the index at which the price-value ratio $p_k/\lambda_k$ attains its maximum. Let $\phi=(\Lambda b \, p_j)/\lambda_j$, and note that $\phi>1$. Let $i$ denote the industry in which technological change occurs. Choose $\varepsilon$ such that $0<\varepsilon<L_i/\sum_k p_k$, which ensures that $L_i - \varepsilon \sum_k p_k>0$. Now construct the new technique of production for industry $i$ as follows: choose $\bar{L}_i$ such that $(L_i- \varepsilon \sum_k p_k)/\phi < \bar{L}_i<L_i-\varepsilon \sum_k p_k$ (which is always possible because $\phi > 1$\footnote{It is precisely here that the second property in assumption~\ref{def-B} is used. We need $\phi>1$ as a strict inequality because if $\phi=1$, then $(L_i- \varepsilon \sum_k p_k)/\phi = L_i-\varepsilon \sum_k p_k$ and we would not be able to choose such a $\bar{L}_i$. It is to rule out this eventuality that we must ensure the strict inequality in the second condition defining the set $\mathbb{B}$ in assumption~\ref{def-B}.}), and construct the vector $\bar{A}_{*i}$ by adding $\varepsilon$ to \textit{every} element of the vector $A_{*i}$. For all other industries, the technique of production remains unchanged. We will now show that this new technology, given by $\bar{A}, \bar{L}$, is CU-LS, viable, and satisfies (\ref{thm:cond}). Since every element of the vector $\bar{A}_{*i}$ is strictly greater than the corresponding element of the vector $A_{*i}$, and $\bar{L}_i<L_i-\varepsilon \sum_k p_k<L_i$, the new technology is CU-LS. To see viability, use the right hand inequality defining $\bar{L}_i$: $ \bar{L}_i<L_i-\varepsilon \sum_k p_k $. Since $p \bar{A}_{*i} - p A_{*i} = \varepsilon \sum_k p_k$, we have $\bar{L}_i< L_i-\varepsilon \sum_k p_k=L_i-(p \bar{A}_{*i} - p A_{*i})$, and viability is established. Now use the left hand inequality defining $\bar{L}_i$: $(L_i- \varepsilon \sum_k p_k)/\phi < \bar{L}_i$. This guarantees that $L_i - \phi \bar{L}_i < \varepsilon \sum_k p_k = p (\bar{A}_{*i}-A_{*i})$. Using the definition of $\phi$, this gives us $(p A_{*i}+L_i - p \bar{A}_{*i})/\bar{L}_i < \Lambda b \, p_j/\lambda_j$. Hence, we have, \[ \frac{1}{\Lambda b}\left[ \frac{p A_{*i}+L_i - p \bar{A}_{*i}}{\bar{L}_i}\right] < \frac{p_j}{\lambda_j}. \] Since the new technology is CU-LS, \citet[Theorem~4.9]{roemer_1981} shows that $0<\bar{\lambda}_j \leq \lambda_j$. Hence, $p_j/\lambda_j \leq p_j/\bar{\lambda}_j$.
Thus, \[ \frac{1}{\Lambda b}\left[ \frac{p A_{*i}+L_i - p \bar{A}_{*i}}{\bar{L}_i}\right] < \frac{p_j}{\lambda_j} \leq \frac{p_j}{\bar{\lambda}_j}, \] so that condition (\ref{thm:cond}) is satisfied. \end{proof} \textit{Discussion.} The result in Theorem~\ref{thm:existence-1} shows that, starting from any configuration of technology and real wage bundle (that satisfies assumption~\ref{def-B}), we can always find a new technology, given by $\bar{A}$ and $\bar{L}$, such that the technology is viable and CU-LS, and condition (\ref{thm:cond}) is satisfied. Taken together, Theorems~\ref{thm:frp}, \ref{thm:existence} and \ref{thm:existence-1} show that, starting from any configuration of technology and distribution (real wage bundle), a capitalist economy can always witness a viable, CU-LS technical change that keeps the rate of exploitation constant and leads to a fall in the uniform rate of profit. This demonstrates that Marx's claim in Volume III of \textit{Capital}, that the rate of profit can fall due to technical change if the rate of exploitation remains unchanged, can be sustained in certain plausible configurations of technology and distribution. The requirement that the rate of exploitation remain unchanged over the period of technical change, i.e. as the economy moves from the old to the new long run equilibrium, is restrictive. Moreover, it contradicts another important claim that Marx developed in Volume I of \textit{Capital}: that the rate of exploitation rises with the development of capitalism, in the form of the production of absolute or relative surplus value. The fact of a rising rate of exploitation, which is less restrictive than the assumption of a constant rate of exploitation, can be accommodated in our framework. \begin{corollary}\label{thm:exp-rise} Starting from any configuration of technology and distribution (real wage bundle), a capitalist economy can always witness a viable, CU-LS technical change that allows the rate of exploitation to increase and leads to a fall in the uniform rate of profit. \end{corollary} \begin{proof} The proof follows by noting that, as long as the sufficient condition in Theorem~\ref{thm:existence} is satisfied, there will exist real wage bundles which are below the hyperplane $V$ and above the hyperplane $P$ (e.g. points in the interior of triangle $ABC$ in Figure~\ref{fig:2d}). For such real wage bundles, the rate of exploitation will rise (because they are below $V$) and the rate of profit will fall (because they are above $P$). \end{proof} Corollary~\ref{thm:exp-rise} shows that we \textit{can} have a rise in the rate of exploitation, in the form of the production of relative surplus value, and yet viable, CU-LS technical change can lead to a fall in the rate of profit. Hence, the interaction of class struggle and technical change can allow the rate of exploitation to rise, and yet the equilibrium rate of profit might decline under the conditions laid out in Theorem~\ref{thm:existence}. In the next section, we provide an example of a $3$-sector economy where we can find an infinite number of real wage bundles that, after a viable, CU-LS technical change, keep the rate of exploitation constant and lead to a fall in the equilibrium rate of profit. But a caveat is necessary at this point. Our argument is not that class struggle will always discover a real wage bundle that satisfies properties~\ref{ass:constexp} and \ref{ass:costred}.
Rather, we have demonstrated that such a real wage bundle does exist and that it is not possible to rule it out without additional restrictions on technical change or class struggle. Hence, it is \textit{possible} that such a real wage bundle will be discovered by class struggle. In that case, the equilibrium rate of profit will fall even when capitalists have adopted cost-reducing techniques of production. \section{An Example}\label{sec:example} Consider a slight variation of the example of a $3$-sector economy discussed in \citet[pp. 39]{dietzenbacher_1989}.\footnote{R code to implement this example is given in the Appendix.} \subsection{Initial Situation} Let the initial technology be given by \[ A = \begin{bmatrix} 0.35 & 0.05 & 0.25 \\ 0.15 & 0.45 & 0.05 \\ 0.15 & 0.15 & 0.35 \end{bmatrix} \] and \[ L = \begin{bmatrix} 0.2 & 0.15 & 0.25 \end{bmatrix}. \] Let the initial real wage bundle be given by \[ b = \begin{bmatrix} 1/3 \\ 1/3 \\ 1/3 \end{bmatrix}. \] For this configuration of technology and real wage bundle, we can calculate the uniform rate of profit, $\pi=0.17647$, and the price of production vector as \[ p = \begin{bmatrix} 1 & 0.9090909 & 1.090909 \end{bmatrix}. \] The vector of values is given by \[ \Lambda = \begin{bmatrix} 0.5714286 & 0.5 & 0.6428571 \end{bmatrix}. \] Hence, $pb=1$ and $\Lambda b = 0.5714286$. Let us check that the chosen real wage bundle satisfies the two conditions specified in assumption~\ref{def-B}. Since $0 < \Lambda b = 0.5714286 \leq 1$, the first condition is satisfied. Moreover, $1/\Lambda b = 1.75$ and $\max_k (p_k/\lambda_k)=1.8182$, so the second condition is satisfied. \subsection{CU-LS, Viable Technical Change} A CU-LS, viable technical change takes place in sector 3. The new technology is given by \[ \bar{A} = \begin{bmatrix} 0.35 & 0.05 & 0.27 \\ 0.15 & 0.45 & 0.07 \\ 0.15 & 0.15 & 0.37 \end{bmatrix} \] and \[ \bar{L} = \begin{bmatrix} 0.2 & 0.15 & 0.18 \end{bmatrix}. \] Note that the new technology is \begin{itemize} \item CU-LS: because the third column of $\bar{A}$ is, element by element, greater than the third column of $A$, and the third element of $\bar{L}$ is strictly less than the third element of $L$; \item viable: because the cost of production in sector 3 falls from $0.9272727$ to $0.9172727$ (using the price vector computed above and the normalization that the nominal wage rate is $1$). \end{itemize} Hence, a capitalist producer will adopt this new technology. With this new technology, the new vector of values is given by \[ \bar{\Lambda} = \begin{bmatrix} 0.5511364 & 0.4797078 & 0.5752165 \end{bmatrix}. \] \subsection{Class Struggle and a New Real Wage Bundle} Suppose class struggle leads to the emergence of a new real wage bundle given by \[ \bar{b} = \begin{bmatrix} \bar{b}_1 \\ \bar{b}_2 \\ \bar{b}_3 \end{bmatrix}.
\] We need to ensure that the new real wage vector $\bar{b} \geq 0$ is more expensive than the original real wage bundle, \begin{equation} p_1 \bar{b}_1 + p_2 \bar{b}_2 + p_3 \bar{b}_3 > 1=pb, \end{equation} that the decline in the unit cost of production is bounded above by the change in the nominal labor cost associated with the new technique of production, \begin{equation} pA_{*3} + L_3 < p\bar{A}_{*3} + (p_1 \bar{b}_1 + p_2 \bar{b}_2 + p_3 \bar{b}_3) \bar{L}_3, \end{equation} where $A_{*3}$ and $\bar{A}_{*3}$ denote the third columns of $A$ and $\bar{A}$, respectively, and, finally, that the labor value of the real wage bundle remains unchanged, \begin{equation} \bar{\lambda}_1 \bar{b}_1 + \bar{\lambda}_2 \bar{b}_2 + \bar{\lambda}_3 \bar{b}_3 = \Lambda b, \end{equation} which ensures that the rate of exploitation remains constant. Since the new technique of production reduces the unit cost of production in sector $ 3 $, we have $(pA_{*3} + L_3 - p\bar{A}_{*3})/\bar{L}_3>1$. Hence, the above three conditions can be reduced to two conditions: \begin{align} p_1 \bar{b}_1 + p_2 \bar{b}_2 + p_3 \bar{b}_3 & > (pA_{*3} + L_3 - p\bar{A}_{*3})/\bar{L}_3, \label{cond1}\\ \bar{\lambda}_1 \bar{b}_1 + \bar{\lambda}_2 \bar{b}_2 + \bar{\lambda}_3 \bar{b}_3 & = \Lambda b. \label{cond2} \end{align} Since $(pA_{*3} + L_3 - p\bar{A}_{*3})/\bar{L}_3=1.055556$, and using the vector of old prices of production and the vectors of old and new labor values, we have the following two conditions: \begin{align} \bar{b}_1 + 0.9090\,\bar{b}_2 + 1.0909\,\bar{b}_3 & > 1.055556 ,\label{cond11}\\ 0.5511364\, \bar{b}_1 + 0.4797078\, \bar{b}_2 + 0.5752165\, \bar{b}_3 & = 0.5714286 \label{cond21}. \end{align} The condition in (\ref{cond11}) is satisfied by all points in the positive orthant of the 3-dimensional space with coordinates $(\bar{b}_1,\bar{b}_2,\bar{b}_3)$ that lie above the hyperplane $\bar{b}_1 + 0.9090\,\bar{b}_2 + 1.0909\,\bar{b}_3 = 1.055556$. This hyperplane intersects the three axes at $(1.0555556,0,0)$, $(0,1.1611111,0)$, and $(0,0,0.9675926)$. The condition in (\ref{cond21}) is satisfied by all points in 3-dimensional space with coordinates $(\bar{b}_1,\bar{b}_2,\bar{b}_3)$ that lie on the hyperplane given by (\ref{cond21}). This hyperplane intersects the three axes at $(1.0368189,0,0)$, $(0,1.1912014,0)$, and $(0,0,0.9934149)$. Hence, the two hyperplanes intersect in the positive orthant. Thus, there are an infinite number of points that lie on the hyperplane given by (\ref{cond21}) and that also satisfy (\ref{cond11}). To choose one \textit{particular} real wage bundle, $\bar{b} \geq 0$, that satisfies (\ref{cond11}) and (\ref{cond21}), let us draw from a uniform distribution with support on $(1.1611111, 1.1912014)$. The draw gives us the second element of the new real wage bundle: $\bar{b}_2=1.170977$. Let us also impose the condition $\bar{b}_1=\bar{b}_3$ to simplify the computation. Hence, using (\ref{cond21}), we get $\bar{b}_1=(0.5714286 - 1.170977\times 0.4797078)/(0.5511364+0.5752165)$, so that $\bar{b}_1=0.008613$. Hence, the new real wage bundle is given by \[ \bar{b} = \begin{bmatrix} 0.008613 \\ 1.170977 \\ 0.008613 \end{bmatrix}. \] Note that the equilibrium rate of profit now becomes $ \bar{\pi}=0.1604551 $, and the new price of production vector is given by \[ \bar{p} = \begin{bmatrix} 0.9288424 & 0.8398318 & 0.9956171 \end{bmatrix}. \] Hence, $\max_k (\bar{p}_k/\bar{\lambda}_k)=1.750715 > 1.75 = 1/(\bar{\Lambda}\bar{b})$. Thus, the second condition of assumption~\ref{def-B} is also satisfied by the new real wage bundle.
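These computations can be verified numerically. The paper's appendix provides R code for this example; the following Python/numpy sketch (ours, reusing the hypothetical \texttt{long\_run\_equilibrium} helper sketched in Section~\ref{sec:results}) checks viability, the fall in the profit rate, and the constancy of the value of the wage bundle:

\begin{verbatim}
A = np.array([[0.35, 0.05, 0.25],
              [0.15, 0.45, 0.05],
              [0.15, 0.15, 0.35]])
L = np.array([0.20, 0.15, 0.25])
b = np.full(3, 1/3)
Abar = A.copy(); Abar[:, 2] += 0.02           # capital-using in sector 3
Lbar = np.array([0.20, 0.15, 0.18])           # labor-saving in sector 3

p, pi, Lam, e = long_run_equilibrium(A, L, b)
Lambar = Lbar @ np.linalg.inv(np.eye(3) - Abar)

# viability at old prices: 0.9172727 < 0.9272727
assert p @ Abar[:, 2] + Lbar[2] < p @ A[:, 2] + L[2]

b2 = 1.170977                                 # the drawn second element
b1 = (Lam @ b - b2 * Lambar[1]) / (Lambar[0] + Lambar[2])
bbar = np.array([b1, b2, b1])                 # imposes b1 = b3

pbar, pibar, _, _ = long_run_equilibrium(Abar, Lbar, bbar)
print(pi, pibar)                              # 0.17647 -> 0.16046: fall
print(Lam @ b, Lambar @ bbar)                 # both 0.5714286: constant
\end{verbatim}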
Note, finally, that using this real wage bundle, we get a constant value of the real wage bundle, $\Lambda b = \bar{\Lambda} \bar{b}=0.5714286$. This keeps the rate of exploitation constant. Moreover, the new uniform rate of profit is $\bar{\pi}=0.1604551<0.1764706=\pi$. Hence, the equilibrium rate of profit falls after a viable, CU-LS technical change. \section{Conclusion}\label{sec:conclusion} Technical change is a characteristic feature of capitalist economies. Since the profit rate is one of the clearest indicators of the health of a capitalist economy, seen from the perspective of capital, it is of great interest to investigate the effect of technical change on the rate of profit. In Volume III of \textit{Capital}, Marx argued that technical change will impart a falling tendency to the rate of profit when the rate of exploitation remains constant. In this paper, we have demonstrated that this result \textit{can} be obtained in a multisector economy. To be more concrete, we have demonstrated that there exist plausible real wage bundles which keep the rate of exploitation constant and lead to a fall in the new equilibrium rate of profit even after viable, CU-LS technical change. The picture of technical change that Marx gave us in the volumes of \textit{Capital} remains extremely relevant. Competitive pressures force capitalists to search for new cost-reducing techniques of production. The innovator capitalist who manages to adopt such a technique is able to make super-normal profits. That creates the incentive for capitalists to constantly look for, and adopt when found, cost-reducing techniques of production. The adoption of the new technique by the innovator disrupts the prevailing equilibrium. This is because the economy is interconnected in complex ways. The output of the innovator capitalist is used as an input in other industries; the innovator's demand for the output of other industries also changes because of the technical change. When all these changes have played themselves out, a new equilibrium profit rate and a new set of prices of production emerge. \citet{okishio_1961} had shown that the new equilibrium rate of profit would be higher than the one that prevailed before technical change if the real wage rate remains unchanged. In this paper, we have shown that if the rate of exploitation remains unchanged, which implies that the real wage rate has to increase, the rate of profit can fall after cost-reducing technical change of the type analyzed by \citet{okishio_1961}, as long as the reduction in cost is bounded above by the change in the nominal labor cost associated with the new technique of production. The constancy of the rate of exploitation is one way to capture the balance of class forces. Hence, the result in this paper shows that if the balance of class forces manages to keep the division between paid and unpaid labor time unchanged, cost-reducing technical change can lead to a fall in the rate of profit, provided the cost reduction from technical change is not too large. In such cases, individually rational decisions by capitalist producers might harm the collective interest of the capitalist class. This is just one pathology of a competitive, capitalist economy.
\section{Introduction} As with the choice of an optimization algorithm, the choice of loss function is an indispensable ingredient in training neural network models. Yet, while there is extensive theoretical and empirical research into optimization and regularization methods for training deep neural networks~\cite{sun2019optimization}, far less is known about the selection of loss functions. In recent years, cross-entropy loss has been predominant in training for multi-class classification with modern neural architectures. There is surprisingly little theoretical or empirical evidence in support of this choice. To the contrary, an extensive set of experiments with neural architectures conducted in~\cite{hui2020evaluation} indicated that training with the (rescaled) square loss produces similar or better classification accuracy than cross entropy on most classification tasks. Still, the rescaled square loss proposed in that work requires additional parameters (which must be tuned) when the number of classes is large. Further, the optimization learning rate for the square loss is typically different from that of cross entropy, which precludes the use of square loss as an out-of-the-box replacement. In this work we propose the ``squentropy'' loss function for multi-class classification. Squentropy is the sum of two terms: the standard cross-entropy loss and the average square loss over the incorrect classes. Unlike the rescaled square loss, squentropy has no adjustable parameters. Moreover, in most cases, we can simply use the optimal hyperparameters for cross-entropy loss without any additional tuning, making it a true ``plug-and-play'' replacement for cross-entropy loss. To show the effectiveness of squentropy, we provide comprehensive experimental results over a broad range of benchmarks with different neural architectures and data from NLP, speech, and computer vision. In {24 out of 34} tasks, squentropy has the best (or tied for best) classification accuracy, in comparison with cross entropy and the rescaled square loss. Furthermore, squentropy has consistently improved {\it calibration}, an important measure of how the output values of the neural network match the underlying probability of the labels~\cite{guo2017calibration}. Specifically, in { 26 out of 32} tasks for which calibration results can be computed, squentropy is better calibrated than either alternative. We also show results on 121 tabular datasets from~\cite{fernandez2014we}. Compared with cross entropy, squentropy has better test accuracy on {94 out of 121} tasks, and better calibration on 83 datasets. Finally, we show that squentropy is less sensitive to the randomness of the initialization than either of the two alternative losses. Our empirical evidence suggests that in most settings, squentropy should be the first choice of loss function for multi-class classification via neural networks. \section{The squentropy loss function} \label{sec_squentropy} The problem we consider here is supervised multi-class classification. We focus on the loss functions for training neural classifiers on this task. Let $D=(\bm{x}_i, y_i)_{i=1}^n$ denote the dataset sampled from a joint distribution $\mathcal{D}(\mathcal{X}, \mathcal{Y})$. For each sample $i$, $\bm{x}_i\in \mathcal{X}$ is the input and $y_i \in \mathcal{Y}=\{1,2, \dotsc, C\}$ is the true class label. The one-hot encoding label used for training is $\bm{e}_{y_i}=[0,\ldots, \underbrace{1}_{y_i}, 0, \ldots,0]^T \in \mathbb{R}^C$. 
Let $f(\bm{x}_i)\in \mathbb{R}^C$ denote the logits (the output of the last linear layer) of a neural network on input $\bm{x}_i$, with components $f_j(\bm{x}_i)$, $j=1,2,\dotsc,C$. Let $p_{i,j}=e^{f_j(\bm{x}_i)}/\sum_{k=1}^C e^{f_k(\bm{x}_i)}$ denote the predicted probability of $\bm{x}_i$ being in class $j$. Then the squentropy loss function on a single sample $\bm{x}_i$ is defined as follows: \begin{equation} \label{mix_func} l_{\text{sqen}}(\bm{x}_i,y_i) = - \log p_{i,y_i}(\bm{x}_i) + \frac{1}{C-1}\sum_{\substack{j=1 \\ j\neq y_i}}^C f_j(\bm{x}_i)^2. \end{equation} The first term $- \log p_{i,y_i}(\bm{x}_i)$ is simply the cross-entropy loss. The second term is the square loss averaged over the incorrect ($j\neq y_i$) classes. The cross-entropy loss is minimized when $f_{y_i}(\bm{x}_i) \to \infty$ while $f_j(\bm{x}_i) \to - \infty$ or at least stays finite for $j \ne y_i$. By encouraging all incorrect logits to go to a specific point, namely $0$, squentropy may yield a more ``stable'' set of logits --- the potential for the incorrect logits to behave chaotically is taken away. In other words, the square loss term plays the role of a regularizer. We discuss this point further in Section~\ref{sec_norm}. \paragraph{Dissecting squentropy.} Cross entropy acts as an effective penalty on the prediction error made for the true class $y_i$, as it has high loss and large gradient when $p_{i,y_i}$ is close to zero, leading to effective steps in a gradient-based optimization scheme. The ``signal'' coming from the gradient for the incorrect classes is weaker, so such optimization schemes may be less effective in driving the probabilities for these classes to zero. Squentropy can be viewed as a modification of the rescaled square loss~\cite{hui2020evaluation}, in which cross entropy replaces the term $t(f_{y_i}(\bm{x}_i)-M)^2$ corresponding to the true class, a term which depends on two parameters $t$, $M$ that must be tuned. This use of cross entropy dispenses with the additional parameters yet provides an adequate ``signal'' for the gradient of the term that captures loss on the true class. The second term in \eqref{mix_func} pushes all logits $f_j(\bm{x}_i)$ corresponding to the false classes $j \ne y_i$ to $0$. Cross entropy attains a loss close to zero on term $i$ by sending $f_{y_i}(\bm{x}_i)\to \infty$ and/or $f_j(\bm{x}_i) \to -\infty$ for all $j \neq y_i$. By contrast, squentropy ``anchors'' the incorrect logits at zero (via the second term) while driving $f_{y_i}(\bm{x}_i)\to \infty$ (via the first term). Then the predicted probability of the true class $p_{i, y_i}(\bm{x}_i)$ will be close to $\frac{e^{f_{y_i}(\bm{x}_i)}}{e^{f_{y_i}(\bm{x}_i)}+C-1}$ for squentropy, which possibly approaches $1$ more slowly than for cross entropy. When the training process is terminated, the probabilities $p_{i, y_i}(\bm{x}_i)$ tend to be less clustered near $1$ for squentropy than for cross entropy. Confidence in the true class thus tends to be slightly lower for squentropy. We see the same tendency toward lower confidence on the {\em test} data, which helps calibration. In the calibration literature, various post-processing methods, such as Platt scaling \cite{platt1999probabilistic} and temperature scaling \cite{guo2017calibration}, also improve calibration by reducing $p_{i,y_i}$ below $1$, while other methods such as label smoothing \cite{muller2019does, liu2022devil} and focal loss \cite{mukhoti2020calibrating} achieve a similar reduction in the predicted probability.
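For concreteness, a minimal PyTorch-style implementation of (\ref{mix_func}) might look as follows. This sketch is ours, for illustration only (it is not our released code, and the function name is hypothetical); it takes raw logits and integer labels, exactly as \texttt{torch.nn.functional.cross\_entropy} does.

\begin{verbatim}
import torch
import torch.nn.functional as F

def squentropy(logits, targets):
    # logits: (batch, C) raw outputs of the last linear layer
    # targets: (batch,) integer class labels
    ce = F.cross_entropy(logits, targets)    # cross-entropy term
    C = logits.shape[1]
    mask = F.one_hot(targets, C).bool()      # marks the true-class logit
    # zero out the true-class logit, then sum squares over j != y_i
    sq = logits.masked_fill(mask, 0.0).pow(2).sum(dim=1)
    return ce + (sq / (C - 1)).mean()        # add the averaged square term
\end{verbatim}

Because the square term introduces no extra hyperparameters, this loss can be dropped into an existing training loop in place of \texttt{nn.CrossEntropyLoss}, with the optimizer settings left unchanged.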
While all of these post-processing and loss-modification methods require additional hyperparameters, squentropy does not. We conjecture that the calibration of squentropy can be further improved by combining it with these techniques. \paragraph{Relationship to neural collapse.} Another line of work that motivates our choice of loss function is the concept of neural collapse \cite{Pap20a}. Results and observations for neural collapse interpose a linear transformation between the outputs of the network (the transformed features $f_j(\bm{x}_i)$) and the loss function. They show broadly that the features collapse to a class average and that, under a cross-entropy loss, the final linear transformation maps them to rays that point in the direction of the corners of the simplex in $\mathbb{R}^C$. (A modified version of this claim is proved for square loss in \cite{han2021neural}.) Our model is missing the interposing linear transformation, but these observations suggest roughly that cross entropy should drive the true logits $f_{y_i}(\bm{x}_i)$ to $\infty$ while the incorrect logits $f_j(\bm{x}_i)$ for $j \ne y_i$ tend to drift toward $-\infty$, as discussed above. As noted earlier, the square loss term in our squentropy loss function encourages $f_j(\bm{x}_i)$ for $j \ne y_i$ to be driven to zero instead --- a better-defined limit, and one that may be achieved without blowing up the weights in the neural network (or by increasing them at a slower rate). In this sense, as mentioned above, the square loss term is a kind of regularizer. \paragraph{Confidence calibration.} We use the expected calibration error (ECE) \cite{naeini2015obtaining} to evaluate confidence calibration performance. It is defined as $\mathbb{E}_{p}[|\mathbb{P}(\hat{y}=y|p)-p|]$, where $p$ and $y$ correspond to the estimated probability (confidence) and true label of a test sample $\bm{x}$, and $\hat{y}$ is the predicted label given by $\argmax_j p_j$. It captures the expected difference between the accuracy $\mathbb{P}(\hat{y}=y|p)$ and the estimated model confidence $p$. Because we only have finite samples in practice, and because we do not have access to the true confidences $p_{\text{true}}$ for the test set (only the labels $y$), we replace this definition with an {\em approximate} ECE. This quantity is calculated by dividing the interval $[0,1]$ of probability predictions into $K$ equally spaced bins, with the $k$-th bin being the interval $(\frac{k-1}{K}, \frac{k}{K}]$. Let $B_k$ denote the set of test samples $(\bm{x}_i,y_i)$ for which the confidence $p_{i,\hat{y}_i}$ predicted by the model lies in bin $k$. (The probabilities $p_{i,j}$ are obtained by applying a softmax to the logits $f_j(\bm{x}_i)$.) The accuracy of this bin is defined to be $\text{acc}(B_k)=\frac{1}{|B_k|}\sum_{i\in B_k}\mathbf{1}(\hat{y}_i=y_i)$, where $y_i$ is the true label for the test sample $\bm{x}_i$ and $\hat{y}_i$ is the model prediction for this sample (the index $j$ at which $p_{i,j}$ is maximized over $j=1,2,\dotsc,C$). The confidence for bin $k$ is defined empirically as $\text{conf}(B_k) = \frac{1}{|B_k|}\sum_{i\in B_k} p_{i,\hat{y}_i}$. We then use the following definition of ECE: \begin{equation} \label{eq:ece} \text{ECE} = \sum_{k=1}^K\frac{|B_k|}{n}\left|\text{acc}(B_k)-\text{conf}(B_k)\right|. \end{equation} This quantity is small when, within each bin, the frequency of correct classification over the test set matches the confidence of the predicted label.
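The approximate ECE in (\ref{eq:ece}) is easy to compute from the softmax outputs. The following numpy sketch is ours and purely illustrative (the bin count $K$ is a free choice; $15$ bins is common in the calibration literature):

\begin{verbatim}
import numpy as np

def expected_calibration_error(probs, labels, K=15):
    # probs: (n, C) softmax probabilities; labels: (n,) true classes
    conf = probs.max(axis=1)                 # confidence of predicted label
    correct = (probs.argmax(axis=1) == labels).astype(float)
    ece = 0.0
    for k in range(K):                       # bin k is (k/K, (k+1)/K]
        in_bin = (conf > k / K) & (conf <= (k + 1) / K)
        if in_bin.any():                     # |B_k|/n * |acc - conf|
            ece += in_bin.mean() * abs(correct[in_bin].mean()
                                       - conf[in_bin].mean())
    return ece
\end{verbatim}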
\section{Experiments} In this paper we consider three loss functions: our proposed squentropy, cross entropy, and the (rescaled) square loss from \cite{hui2020evaluation}. The latter is formulated as follows: \begin{equation} \label{square_func} {l}_{s}(\bm{x}_i,y_i) = \frac{1}{C}\left(t\left(f_{y_i}(\bm{x}_i)-M\right)^2+\sum_{j=1, j\neq y_i}^Cf_j(\bm{x}_i)^2\right), \end{equation} where $t$ and $M$ are positive parameters. ($t=M=1$ yields the standard square loss.) We will point out those entries in which values $t>1$ or $M>1$ were used; for the others, we set $t=M=1$. Note that, following \cite{hui2020evaluation}, the square loss is applied directly to the logits, with no softmax layer in training. We conduct extensive experiments on various datasets. These include a wide range of well-known benchmarks across NLP, speech, and vision with different neural architectures --- more than $30$ tasks altogether. In addition, we evaluate the loss functions on 121 tabular datasets \cite{fernandez2014we}. In the majority of our experiments, training with squentropy gives the best test performance and also consistently better calibration results. \paragraph{Training scheme.} In most experiments we train with squentropy using hyperparameter settings that are optimal for cross entropy, given in \cite{hui2020evaluation}. This choice favors cross entropy. It also means that switching to squentropy requires a change of just one line of code. Additional gains in performance of squentropy might result from additional tuning, at the cost of more computation in the hyperparameter tuning process. \input{accuracy_ece} \paragraph{Datasets.} We test on a wide range of well-known benchmarks from NLP, speech, and computer vision. NLP datasets include MRPC, SST-2, QNLI, QQP, text8, enwik8, text5, and text20. Speech datasets include TIMIT, WSJ, and Librispeech. MNIST, CIFAR-10, STL-10 \cite{coates2011analysis}, CIFAR-100, SVHN \cite{netzer2011reading}, and ImageNet are vision tasks. See Appendix A of \cite{hui2020evaluation} for details of most of those datasets. (The exceptions are SVHN, STL-10, and CIFAR-100, which we describe in Appendix \ref{app_data} of this paper.) The 121 tabular datasets are from \cite{fernandez2014we}; they are mostly small datasets --- $90$ of them have $\le 5000$ samples. The feature dimension is small (mostly $<50$) and most datasets are class-imbalanced. \vspace{-1mm} \paragraph{Architectures and hyperparameter settings.} We choose various modern neural architectures, including simple fully-connected networks, convolutional networks (TCNN \cite{bai2018empirical}, Resnet-18, VGG, Resnet-50 \cite{he2016deep}, EfficientNet \cite{tan2019efficientnet}), LSTM-based networks \cite{chen2016enhanced} (LSTM+CNN, LSTM+Attention, BLSTM), and Transformers \cite{vaswani2017attention} (fine-tuned BERT, Transformer-XL, Transformer, visual transformer). See Table \ref{acc_ece} for detailed references. We follow the hyperparameter settings given in \cite{hui2020evaluation} for the cross-entropy loss and the square loss (other than for SVHN, STL-10, and CIFAR-100), and use the algorithmic parameter settings of cross entropy for squentropy in most cases. The exceptions are SVHN and STL-10, where squentropy and the square loss use a smaller learning rate ($0.1$ for cross entropy, $0.02$ for squentropy and square loss). More details about the hyperparameter settings for SVHN, STL-10, and CIFAR-100 are in Appendix \ref{app_para}.
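For completeness, before turning to metrics, the rescaled square loss (\ref{square_func}) used in these comparisons admits the same kind of PyTorch-style sketch as squentropy (again ours and purely illustrative; note that it is applied to the raw logits, with no softmax during training):

\begin{verbatim}
import torch
import torch.nn.functional as F

def rescaled_square_loss(logits, targets, t=1.0, M=1.0):
    # (1/C) * ( t * (f_{y_i} - M)^2 + sum_{j != y_i} f_j^2 )
    C = logits.shape[1]
    f_true = logits.gather(1, targets[:, None]).squeeze(1)
    onehot = F.one_hot(targets, C).float()
    false_sq = (logits * (1.0 - onehot)).pow(2).sum(dim=1)
    return ((t * (f_true - M) ** 2 + false_sq) / C).mean()
\end{verbatim}

With $t=M=1$ this reduces to the standard square loss against the one-hot label.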
\paragraph{Metrics.} For the NLP, vision, and 121 tabular datasets, we report accuracy as the metric for test performance. For the speech datasets, we conduct automatic speech recognition (ASR) tasks and report test set error rates, which are the standard metrics for ASR. Specifically, for TIMIT, we report phone error rate (PER) and character error rate (CER). For WSJ and Librispeech, we report CER and word error rate (WER). ECE is the metric used to measure calibration for all datasets. For the speech datasets, we report calibration results for the acoustic modeling part. Table \ref{acc_ece} shows the results for the NLP, speech, and vision datasets. Figure \ref{121_results} shows the results for the 121 tabular datasets. In addition, we provide reliability diagrams \cite{degroot1983comparison, niculescu2005predicting} to visualize the confidence and accuracy of each interval; see Section \ref{calibration_expr} for details. \paragraph{Remarks on Table \ref{acc_ece}.} For the results of square loss, we use the rescaled square loss with $t>1$ or $M>1$ for TIMIT (PER) ($t=1, M=15$), WSJ ($t=1, M=15$), Librispeech ($t=15, M=30$), CIFAR-10 and CIFAR-100 ($t=1, M=10$), and ImageNet ($t=15, M=30$). All others use the standard square loss. Note that WSJ (WER) and WSJ (CER) share the same ECE number as they share one acoustic model. (Similarly for Librispeech.) Additionally, since ECE numbers are not available for Top-5 accuracy, the corresponding entries (ImageNet, Top-5 acc.) are marked as ``N/A''. For the empirical results reported in Table~\ref{acc_ece}, we discuss generalization/test performance in Section~\ref{generalization_expr} and calibration results in Section~\ref{calibration_expr}. Results for the 121 tabular datasets are reported in Section~\ref{sec_121}. We report the {\em average} accuracy/error rate (for test performance) and {\em average} ECE (for model calibration) over \textit{5 runs} with different random initializations for all experiments. We report the standard deviation of this collection of runs in Section~\ref{sec_std}. \subsection{Empirical results on test performance} \label{generalization_expr} \begin{figure*}[ht] \centering \centerline{\includegraphics[width=2.4\columnwidth]{figs/c100_diagram1.pdf}} \vspace{-5mm} \caption{\textbf{Confidence histograms (top) and reliability diagrams (bottom) for a Wide Resnet on CIFAR-100.} See Table \ref{acc_ece} for its test accuracy. The confidence histogram gives the proportion of samples in each confidence interval, and the reliability diagrams show the accuracy as a function of the confidence. The ECE numbers are percentages, as in Table \ref{acc_ece}. \textit{Left:} squentropy. \textit{Middle left:} cross entropy. \textit{Middle right:} rescaled square loss. \textit{Right:} standard square loss. We see that models trained with squentropy are better calibrated, while cross entropy suffers from overconfidence and the standard square loss is highly underconfident.} \label{fig:diagram} \vspace{-3mm} \end{figure*} Our results show that squentropy has better test performance than cross entropy and the square loss in the majority of our experiments. The perf(\%) numbers in Table \ref{acc_ece} show the test accuracy for the benchmarks of the NLP and vision tasks, and the error rate for the speech tasks. Squentropy behaves the best in {\em 24 out of 34} tasks. We also report the numbers for {\em subsets} of enwik8 and CIFAR-100. Compared with the full datasets of these collections, squentropy seems to gain more when the datasets are small.
\paragraph{Applicability and significance.} Table \ref{acc_ece} shows improvements for squentropy across a wide range of distributions from the NLP, speech, and vision domains. On the other hand, the improvement on any single task is often not significant, and for some datasets, squentropy's performance is worse. One reason may be our choice to use the optimal hyperparameter values for cross entropy in squentropy. Further tuning of these hyperparameters may yield significant improvements. \subsection{Empirical results on calibration} \label{calibration_expr} In this section we show model calibration results, measured with the ECE of the models given in Table \ref{acc_ece}. The ECE numbers for the NLP, speech, and vision tasks are also shown in Table \ref{acc_ece}. \paragraph{Squentropy consistently improves calibration.} As can be seen in Table \ref{acc_ece}, in {26 out of 32} tasks, the calibration error (ECE) of models trained with squentropy is smaller than for cross entropy and the square loss, even in those cases in which squentropy had slightly worse test performance, such as WSJ, STL-10, and SVHN. Besides using ECE to measure model calibration, we also provide a popular form of visual representation of model calibration: reliability diagrams \cite{degroot1983comparison, niculescu2005predicting}, which show accuracy as a function of confidence as a bar chart. If the model is perfectly calibrated, i.e. $\mathbb{P}(\hat{y}_i=y_i|p_i)=p_i$, the diagram should show all bars aligning with the identity function. If most of the bars lie below the identity function, the model is overconfident, as the confidence is mostly larger than the corresponding accuracy. If most bars lie above the identity function, the model is underconfident, as confidence is smaller than accuracy. For a given bin $k$, the difference between $\text{acc}(B_k)$ and $\text{conf}(B_k)$ represents the calibration \textit{gap} (orange bars in the reliability diagrams, e.g. the bottom row of Figure \ref{fig:diagram}). In Figure \ref{fig:diagram} we plot the confidence histograms (top) and the reliability diagrams (bottom) of Wide Resnets on CIFAR-100, trained with four different loss functions: squentropy, cross entropy, rescaled square loss (with $t=1, M=10$), and standard square loss ($t=1, M=1$). The confidence histogram gives the percentage of samples in each confidence interval, while the reliability diagrams show the test accuracy as a function of confidence. In the reliability diagrams at the bottom of Figure \ref{fig:diagram}, the orange bars, which represent the confidence {\em gap}, start from the top of the blue (accuracy) bars. We show $\text{conf}(B_k) - \text{acc}(B_k)$ for all intervals in all reliability diagram plots. Note that for intervals where confidence is smaller than accuracy, the orange bars go down from the top of the blue bars, such as the one at the bottom right of Figure \ref{fig:diagram}. More reliability diagrams for other tasks are given in Appendix \ref{app_diag}. \paragraph{Squentropy vs. cross entropy.} If we compare the diagrams of squentropy and cross entropy, the bars for squentropy are closer to the identity function; cross entropy apparently yields more overconfident models. The gap for squentropy is also smaller than for cross entropy in most confidence intervals.
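The reliability diagrams are generated from the same binning used for the approximate ECE. The following matplotlib sketch is ours and purely illustrative:

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def reliability_diagram(probs, labels, K=15):
    # per-bin accuracy vs. confidence; identity line = perfect calibration
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    centers, accs = np.zeros(K), np.zeros(K)
    for k in range(K):
        in_bin = (conf > k / K) & (conf <= (k + 1) / K)
        centers[k] = (k + 0.5) / K
        accs[k] = correct[in_bin].mean() if in_bin.any() else 0.0
    plt.bar(centers, accs, width=1 / K, edgecolor="black")
    plt.plot([0, 1], [0, 1], linestyle="--")   # identity reference
    plt.xlabel("confidence"); plt.ylabel("accuracy")
    plt.show()
\end{verbatim}

Stacking a second bar of height $|\text{conf}(B_k)-\text{acc}(B_k)|$ on top of each accuracy bar reproduces the gap bars shown in Figure \ref{fig:diagram}.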
\begin{figure}[ht] \centering \centerline{\includegraphics[width=1\columnwidth]{figs/121_1_43.pdf}} \vspace{1mm} \resizebox{.45\textwidth}{!}{% \begin{tabular}{cccc} \hline Loss functions & Squentropy & Cross entropy & Square loss \\ \hline Avg accuracy & \textbf{85.6\%} & 85.2\% & 85.5\% \\ Avg ECE & \textbf{11.6\%} & 13.0\% & 15.7\% \\ \hline \end{tabular}} \caption{\textbf{Test accuracy and model calibration on the 121 tabular datasets from \cite{fernandez2014we}}, trained with a 3-layer (64-128-64) fully connected network. The results for each dataset are averaged over 5 runs with different random initializations. \textit{Left:} test accuracy (larger is better). \textit{Right:} calibration error ECE (smaller is better). The top figures plot the results of squentropy and cross entropy, while the bottom figures plot the results of squentropy and the (rescaled) square loss. Test accuracy/ECE for each dataset are tabulated in Appendix \ref{app_121}.} \label{121_results} \vspace{-5mm} \end{figure} \paragraph{Standard square loss leads to underconfidence.} We also plot the reliability diagrams for training with the standard square loss in the rightmost panels of Figure \ref{fig:diagram}. We see that it is highly underconfident, as the confidence is smaller than $0.1$ (the exact number is $0.017$) for all samples. Note that the square loss is applied directly to the logits $f_j(\bm{x}_i)$, and the logits are driven toward the one-hot vector $\bm{e}_{y_i}$; the {\em probabilities} $p_{i,j}$ formed from these logits are therefore not going to be close to a one-hot vector. The ``max'' probability (confidence) will instead be close to $\frac{e}{e+(C-1)}$, which is small when $C$ is large. \paragraph{Rescaling helps with calibration.} The second-from-right panel in the bottom row of Figure~\ref{fig:diagram} shows the results of training with the rescaled square loss ($t=1, M=10$) on CIFAR-100. This minimization problem drives the logits of the true class closer to $M$, making the max probability approach $\frac{e^M}{e^M+(C-1)}$, a much larger value than for the standard square loss, leading to better calibration. However, squentropy avoids extra rescaling hyperparameters while achieving even smaller values of ECE. \subsection{Additional results on 121 Tabular datasets} \label{sec_121} Additional results for the 121 small, low-dimensional, and class-imbalanced tabular datasets, obtained with 3-layer fully-connected networks, are shown in Figure~\ref{121_results}. For all these cases, we use the SGD optimizer with weight decay parameter $5\times 10^{-4}$ and run $400$ epochs with learning rate $0.01$. The ``square loss'' function used here is in fact the rescaled version with parameters $t=1$ and $M=5$. Figure~\ref{121_results} shows that for most datasets, squentropy has slightly better test accuracy and significantly smaller ECE than cross entropy or the square loss. Squentropy has the best test accuracy in 71 out of 121 tasks and the best calibration in 60 tasks. Compared with cross entropy alone, squentropy is better on accuracy in 94 tasks and better on calibration in 83 tasks. Test accuracy and ECE for each dataset in this collection are tabulated in Appendix~\ref{app_121}.
\input{variance} \begin{figure*}[ht] \centering \begin{minipage}[b]{1.5\columnwidth} \includegraphics[width=\columnwidth]{figs/sqen_margin11.pdf} \end{minipage} \hfill \begin{minipage}[b]{1.5\columnwidth} \includegraphics[width=\columnwidth]{figs/ce_margin11.pdf} \end{minipage} \hfill \begin{minipage}[b]{1.5\columnwidth} \includegraphics[width=\columnwidth]{figs/square_margin11.pdf} \end{minipage} \vspace{-3mm} \caption{\textbf{Decision boundary along different epochs for test samples.} We fix all random seeds to be the same for all cases and hence the test set is exactly the same. (Thus, we display legends only in the bottom-row figures). Color coding indicates the calculated probability of class label to be $1$, according to the scale on th eright. The white line between red and blue areas indicates the decision boundary. We train a 3-layer fully connected network with 12 units in each layer, for a 2-class spiral data set in $\mathbb{R}^2$. There are 1000 samples for training and 500 samples for test, and we train for 1000 epochs, yielding a training accuracy of $100\%$ for all loss functions. Test accuracies are squentropy: $99.9\%$, cross entropy: $99.7\%$, square loss: $99.8\%$. \textit{Top:} squentropy. \textit{Middle:} cross entropy. \textit{Bottom: }square loss. Columns show results after 100, 500, and 1000 epochs, respectively.} \label{fig_bd} \end{figure*} \subsection{Robustness to initialization} \label{sec_std} To evaluate the stability of the model trained with the loss functions considered in this paper, we report the standard deviation of the accuracy/error rate with respect to the randomness in initialization of weights for NLP, speech, and vision tasks. Standard deviation is over 5 runs with different random initializations; see Table~\ref{variance} for results. The standard derivation of squentropy is smaller in the majority of the tasks considered, so results are comparatively insensitive to model initialization. \section{Observations} As mentioned previously, we conjecture that the square term of squentropy acts as an implicit regularizer and in this section we provide some observations in support of this conjecture. We discuss the decision boundary learnt by a fully-connected network on a 2-class spiral data problem (the ``Swiss roll") in Section~\ref{db_exp}, and remark on the weight norm of the last linear layer of several networks in Section~\ref{sec_norm}. \subsection{Predicted probabilities and decision boundary} \label{db_exp} Using a simple synthetic setting, we observe that the decision boundary learned with squentropy appears to be smoother than that for cross entropy and the square loss. We illustrate this point with a 2-class classification task with spiral data and a 3-layer fully-connected network with parameter $\theta$. This setup enables visual observations. Given a sample $\bm{x}_i\in \mathbb{R}^2$ and labels $y_i \in \{1,2\}$, and the one hot encoding $\bm{y}_i=[0, 1]$ or $\bm{y}_i=[1, 0]$, we solve for weights $\theta$ to define functions $f_1(\bm{x}_i)$ and $f_2(\bm{x}_i)$ corresponding to the two classes. For any $\bm{x}_i$, we then predict a probability of $\bm{x}_i$ being classified as class $1$ as follows: $p(\bm{x}_i) := e^{f_1(\bm{x}_i)} / (e^{f_1(\bm{x}_i)} + e^{f_2(\bm{x}_i)})$. Samples are assigned to class $1$ if $f_{i,1}>f_{i,2}$ and to class 2 otherwise. The decision boundary is the set of points for which $\{\bm{x} \, | \, f_{1}(\bm{x})=f_{2}(\bm{x}) \}$ or $\{\bm{x}|p(\bm{x}_i) = 1/2\}$. 
We see from Figure~\ref{fig_bd} that the decision boundary obtained with squentropy is smoother than those learnt with both cross entropy and square loss. This appears to be true throughout the training process, on this simple example. Meanwhile, the margin (distance from training points to the decision boundary) is also larger for squentropy in many regions. Together, the large margin and smooth decision boundary imply immunity to perturbations and could be one of the reasons for the improved generalization resulting from the use of squentropy~\cite{elsayed2018large}. \begin{figure}[ht] \centerline{\includegraphics[width=0.85\columnwidth]{figs/w_norm.pdf}} \vspace{-5mm} \caption{\textbf{Weight norm along training.} We train a Resnet-18 on CIFAR-10 (calibration error, ECE: Squentropy: $8.9\%$, cross entropy: $10.0\%$) and STL-10 (ECE: Squentropy: $21.2\%$, cross entropy: $26.1\%$), a wide Resnet on CIFAR-100 (ECE: Squentropy: $10.9\%$, cross entropy: $17.9\%$), and show the norm of the last linear layer's weights. These are the same experiments as given in Table \ref{acc_ece}.} \label{w_norm} \end{figure} \vspace{-3mm} \subsection{Weight norm} \label{sec_norm} Neural classifiers trained with cross-entropy loss suffer from overconfidence, causing miscalibration of the model \cite{guo2017calibration}. Our calibration results in Figure~\ref{fig:diagram} and Section~\ref{sec_more_reliability} show evidence of this phenomenon. As can be seen in the confidence histogram of cross entropy --- the $(1,2)$ figure in Figure~\ref{fig:diagram} --- the average confidence $p_{y_i}(\bm{x}_i)$ for the predicted label in cross entropy is close to $1$. This fact suggests that the logits $f_{y_i}(\bm{x}_i)$ of true class are close to $\infty$, while the logits of the incorrect classes approach $-\infty$. Such limits are possible only when the weights of last linear layer have large norm. To quote \cite{mukhoti2020calibrating}, ``{\it cross-entropy loss thus inherently induces this tendency of weight magnification in neural network optimisation}.'' \citet{guo2017calibration} comment that weight decay, which corresponds to adding a penalty term to the loss consisting of the sum of squares of the weights, can produce appreciably better calibration while having a minimal effect on test error; see the rightmost diagram in Figure 2 of \cite{guo2017calibration}. In \cite{mukhoti2020calibrating, liu2022devil}, the authors point out how focal loss proposed in~\cite{lin2017focal} improves calibration by encouraging the predicted distribution to have higher entropy, thus implicitly regularizing the weights. Figure~C.1 of \cite{mukhoti2020calibrating} compares weight norm and final logit values between cross entropy and the focal loss, showing that the latter are significantly smaller. We perform a similar experiment, showing in Figure~\ref{w_norm} the weight norm of the final-layer weights for three examples from Table~\ref{acc_ece} as a function of training steps. We observe that the weight norm for the model trained with squentropy is much smaller than the norms for the same set of weights in the model trained with cross entropy, along the whole training process. \section{Summary, thoughts, future investigations} As with the selection of an optimization procedure, the choice of the loss function is an ineluctable aspect of training all modern neural networks. Yet the machine learning community has paid little attention to understanding the properties of loss functions. 
\section{Introduction} As with the choice of an optimization algorithm, the choice of loss function is an indispensable ingredient in training neural network models. Yet, while there is extensive theoretical and empirical research into optimization and regularization methods for training deep neural networks~\cite{sun2019optimization}, far less is known about the selection of loss functions. In recent years, cross-entropy loss has been predominant in training for multi-class classification with modern neural architectures.
There is surprisingly little theoretical or empirical evidence in support of this choice. On the contrary, an extensive set of experiments with neural architectures conducted in~\cite{hui2020evaluation} indicated that training with the (rescaled) square loss produces similar or better classification accuracy than cross entropy on most classification tasks. Still, the rescaled square loss proposed in that work requires additional parameters (which must be tuned) when the number of classes is large. Further, the optimal learning rate for the square loss is typically different from that for cross entropy, which precludes the use of the square loss as an out-of-the-box replacement. In this work we propose the ``squentropy'' loss function for multi-class classification. Squentropy is the sum of two terms: the standard cross-entropy loss and the average square loss over the incorrect classes. Unlike the rescaled square loss, squentropy has no adjustable parameters. Moreover, in most cases, we can simply use the optimal hyperparameters for cross-entropy loss without any additional tuning, making it a true ``plug-and-play'' replacement for cross-entropy loss. To show the effectiveness of squentropy, we provide comprehensive experimental results over a broad range of benchmarks with different neural architectures and data from NLP, speech, and computer vision. In {24 out of 34} tasks, squentropy has the best (or tied for best) classification accuracy, in comparison with cross entropy and the rescaled square loss. Furthermore, squentropy has consistently improved {\it calibration}, an important measure of how the output values of the neural network match the underlying probability of the labels~\cite{guo2017calibration}. Specifically, in {26 out of 32} tasks for which calibration results can be computed, squentropy is better calibrated than either alternative. We also show results on 121 tabular datasets from~\cite{fernandez2014we}. Compared with cross entropy, squentropy has better test accuracy on {94 out of 121} tasks, and better calibration on 83 datasets. Finally, we show that squentropy is less sensitive to the randomness of the initialization than either of the two alternative losses. Our empirical evidence suggests that in most settings, squentropy should be the first choice of loss function for multi-class classification via neural networks. \section{The squentropy loss function} \label{sec_squentropy} The problem we consider here is supervised multi-class classification. We focus on the loss functions for training neural classifiers on this task. Let $D=(\bm{x}_i, y_i)_{i=1}^n$ denote the dataset sampled from a joint distribution $\mathcal{D}(\mathcal{X}, \mathcal{Y})$. For each sample $i$, $\bm{x}_i\in \mathcal{X}$ is the input and $y_i \in \mathcal{Y}=\{1,2, \dotsc, C\}$ is the true class label. The one-hot encoding label used for training is $\bm{e}_{y_i}=[0,\ldots, \underbrace{1}_{y_i}, 0, \ldots,0]^T \in \mathbb{R}^C$. Let $f(\bm{x}_i)\in \mathbb{R}^C$ denote the logits (output of the last linear layer) of a neural network with input $\bm{x}_i$, with components $f_j(\bm{x}_i)$, $j=1,2,\dotsc,C$. Let $p_{i,j}=e^{f_j(\bm{x}_i)}/\sum_{k=1}^C e^{f_k(\bm{x}_i)}$ denote the predicted probability of $\bm{x}_i$ to be in class $j$. Then the squentropy loss function on a single sample $\bm{x}_i$ is defined as follows: \begin{equation} \label{mix_func} l_{\text{sqen}}(\bm{x}_i,y_i) = - \log p_{i,y_i}(\bm{x}_i) + \frac{1}{C-1}\sum_{\substack{j=1 \\ j\neq y_i}}^C f_j(\bm{x}_i)^2.
\end{equation} The first term $- \log p_{i,y_i}(\bm{x}_i)$ is simply the cross-entropy loss. The second term is the square loss averaged over the incorrect ($j\neq y_i$) classes. The cross-entropy loss is minimized when $f_{y_i}(\bm{x}_i) \to \infty$ while $f_j(\bm{x}_i) \to - \infty$ or at least stays finite for $j \ne y_i$. By encouraging all incorrect logits to go to a specific point, namely $0$, it is possible that squentropy yields a more ``stable'' set of logits --- the potential for the incorrect logits to behave chaotically is taken away. In other words, the square loss term plays the role of a regularizer. We discuss this point further in Section~\ref{sec_norm}. \paragraph{Dissecting squentropy.} Cross entropy acts as an effective penalty on the prediction error made for the true class $y_i$, as it has high loss and a large gradient when $p_{i,y_i}$ is close to zero, leading to effective steps in a gradient-based optimization scheme. The ``signal'' coming from the gradient for the incorrect classes is weaker, so such optimization schemes may be less effective in driving the probabilities for these classes to zero. Squentropy can be viewed as a modification of the rescaled square loss~\cite{hui2020evaluation}, in which cross entropy replaces the term $t(f_{y_i}(\bm{x}_i)-M)^2$ corresponding to the true class, which depends on two parameters $t$, $M$ that must be tuned (both losses are sketched in code after Eq.~\eqref{square_func} below). This use of cross entropy dispenses with the additional parameters yet provides an adequate ``signal'' for the gradient for a term that captures loss on the ``true'' class. The second term in \eqref{mix_func} pushes all logits $f_j(\bm{x}_i)$ corresponding to the false classes $j \ne y_i$ to $0$. Cross entropy attains a loss close to zero on term $i$ by sending $f_{y_i}(\bm{x}_i)\to \infty$ and/or $f_j(\bm{x}_i) \to -\infty$ for all $j \neq y_i$. By contrast, squentropy ``anchors'' the incorrect logits at zero (via the second term) while driving $f_{y_i}(\bm{x}_i)\to \infty$ (via the first term). Then the predicted probability of the true class $p_{i, y_i}(\bm{x}_i)$ will be close to $\frac{e^{f_{y_i}(\bm{x}_i)}}{e^{f_{y_i}(\bm{x}_i)}+C-1}$ for squentropy, which possibly approaches $1$ more slowly than for cross entropy. When the training process is terminated, the probabilities $p_{i, y_i}(\bm{x}_i)$ tend to be less clustered near $1$ for squentropy than for cross entropy. Confidence in the true class thus tends to be slightly lower in squentropy. We see the same tendency toward lower confidence in the {\em test} data, thus helping calibration. In the calibration literature, various post-processing methods, such as Platt scaling \cite{platt1999probabilistic} and temperature scaling \cite{guo2017calibration}, also improve calibration by reducing $p_{i,y_i}$ below $1$, while other methods such as label smoothing \cite{muller2019does, liu2022devil} and focal loss \cite{mukhoti2020calibrating} achieve a similar reduction of the predicted probability. While all these methods require additional hyperparameters, squentropy does not. We conjecture that calibration of squentropy can be further improved by combining it with these techniques. \paragraph{Relationship to neural collapse.} Another line of work that motivates our choice of loss function is the concept of neural collapse \cite{Pap20a}. Results and observations for neural collapse interpose a linear transformation between the outputs of the network (the transformed features $f_j(\bm{x}_i)$) and the loss function.
They show broadly that the features collapse to a class average and that, under a cross-entropy loss, the final linear transformation maps them to rays that point in the direction of the corners of the simplex in $\mathbb{R}^C$. (A modified version of this claim is proved for the square loss in \cite{han2021neural}.) Our model is missing the interposing linear transformation, but these observations suggest roughly that cross entropy should drive the true logits $f_{y_i}(\bm{x}_i)$ to $\infty$ while the incorrect logits $f_j(\bm{x}_i)$ for $j \ne y_i$ tend to drift toward $-\infty$, as discussed above. As noted earlier, the square loss term in our squentropy loss function encourages $f_j(\bm{x}_i)$ for $j \ne y_i$ to be driven to zero instead --- a better-defined limit, and one that may be achieved without blowing up the weights in the neural network (or by increasing them at a slower rate). In this sense, as mentioned above, the squared loss term is a kind of regularizer. \paragraph{Confidence calibration.} We use the expected calibration error (ECE) \cite{naeini2015obtaining} to evaluate confidence calibration performance. It is defined as $\mathbb{E}_{p}[|\mathbb{P}(\hat{y}=y|p)-p|]$, where $p$ and $y$ correspond to the estimated probability and true label of a test sample $\bm{x}$, and $\hat{y}$ is the predicted label given by $\argmax_j p_j$. It captures the expected difference between the accuracy $\mathbb{P}(\hat{y}=y|p)$ and the estimated model confidence $p$. Because we only have finite samples in practice, and because we do not have access to the true confidences $p_{\text{true}}$ for the test set (only the labels $y$), we need to replace this definition with an {\em approximate} ECE. This quantity is calculated by dividing the interval $[0,1]$ of probability predictions into $K$ equally-spaced bins, with the $k$-th bin being the interval $(\frac{k-1}{K}, \frac{k}{K}]$. Let $B_k$ denote the set of test samples $(\bm{x}_i,\hat{y}_i)$ for which the confidence $p_{i,\hat{y}_i}$ predicted by the model lies in bin $k$. (The probabilities $p_{i,j}$ are obtained by applying the softmax to the logits $f_j(\bm{x}_i)$.) The accuracy of this bin is defined to be $\text{acc}(B_k)=\frac{1}{|B_k|}\sum_{i\in B_k}\mathbf{1}(\hat{y}_i=y_i)$, where $y_i$ is the true label for the test sample $\bm{x}_i$ and $\hat{y}_i$ is the model prediction for this item (the one for which $p_{i,j}$ is maximized over $j=1,2,\dotsc,C$). The confidence for bin $k$ is defined empirically as $\text{conf}(B_k) = \frac{1}{|B_k|}\sum_{i\in B_k} p_{i,\hat{y}_i}$. We then use the following definition of ECE: \begin{equation} \label{eq:ece} \text{ECE} = \sum_{k=1}^K\frac{|B_k|}{n}\left|\text{acc}(B_k)-\text{conf}(B_k)\right|. \end{equation} This quantity is small when the frequency of correct classification over the test set matches the probability of the predicted label. \section{Experiments} In this paper we consider three loss functions: our proposed squentropy, cross entropy, and the (rescaled) square loss from \cite{hui2020evaluation}. The latter is formulated as follows: \begin{equation} \label{square_func} {l}_{s}(\bm{x}_i,y_i) = \frac{1}{C}\left(t(f_{y_i}(\bm{x}_i)-M)^2+\sum_{j=1, j\neq y_i}^Cf_j(\bm{x}_i)^2\right), \end{equation} where $t$ and $M$ are positive parameters. ($t=M=1$ yields the standard square loss.) We will point out those entries in which values $t>1$ or $M>1$ were used; for the others, we set $t=M=1$.
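For concreteness, the following is a minimal PyTorch sketch of squentropy (Eq.~\eqref{mix_func}) and the rescaled square loss (Eq.~\eqref{square_func}). It is an illustrative rendering of the formulas, not code from a released implementation; \texttt{logits} holds the $n \times C$ network outputs $f_j(\bm{x}_i)$ and \texttt{targets} the integer labels $y_i$.

\begin{verbatim}
import torch
import torch.nn.functional as F

def squentropy(logits, targets):
    # Eq. (1): cross entropy plus the square loss averaged over
    # the C-1 incorrect classes; no extra hyperparameters.
    C = logits.shape[1]
    ce = F.cross_entropy(logits, targets)  # batch mean of -log p_{i,y_i}
    onehot = F.one_hot(targets, num_classes=C).bool()
    sq = logits.masked_fill(onehot, 0.0).pow(2).sum(dim=1) / (C - 1)
    return ce + sq.mean()

def rescaled_square_loss(logits, targets, t=1.0, M=1.0):
    # Eq. (3): applied directly to the logits, with no softmax;
    # t = M = 1 recovers the standard square loss.
    C = logits.shape[1]
    onehot = F.one_hot(targets, num_classes=C).bool()
    true_logit = logits[torch.arange(len(targets)), targets]
    wrong_sq = logits.masked_fill(onehot, 0.0).pow(2).sum(dim=1)
    return ((t * (true_logit - M) ** 2 + wrong_sq) / C).mean()
\end{verbatim}

Switching an existing training script from cross entropy to squentropy is thus a one-line change of the loss function call.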
Note that following \cite{hui2020evaluation}, the square loss is applied directly to the logits, with no softmax layer in training. We conduct extensive experiments on various datasets. These include a wide range of well-known benchmarks across NLP, speech, and vision with different neural architectures --- more than $30$ tasks altogether. In addition, we evaluate the loss functions on 121 tabular datasets \cite{fernandez2014we}. In the majority of our experiments, training with squentropy gives the best test performance and also consistently better calibration results. \paragraph{Training scheme.} In most of our experiments we train with squentropy using the hyperparameter settings that are optimal for cross entropy, as given in \cite{hui2020evaluation}. This choice favors cross entropy. This choice also means that switching to squentropy requires a change of just one line of code. Additional gains in performance of squentropy might result from additional tuning, at the cost of more computation in the hyperparameter tuning process. \input{accuracy_ece} \paragraph{Datasets.} We test on a wide range of well-known benchmarks from NLP, speech, and computer vision. NLP datasets include MRPC, SST-2, QNLI, QQP, text8, enwik8, text5, and text20. Speech datasets include TIMIT, WSJ, and Librispeech. MNIST, CIFAR-10, STL-10 \cite{coates2011analysis}, CIFAR-100, SVHN \cite{netzer2011reading}, and ImageNet are vision tasks. See Appendix A of \cite{hui2020evaluation} for details of most of those datasets. (The exceptions are SVHN, STL-10, and CIFAR-100, which we describe in Appendix \ref{app_data} of this paper.) The 121 tabular datasets are from \cite{fernandez2014we}; they are mostly small datasets --- $90$ of them have $\le 5000$ samples. The feature dimension is small (mostly $<50$) and most datasets are class-imbalanced. \vspace{-1mm} \paragraph{Architectures and hyperparameter settings.} We choose various modern neural architectures, including simple fully-connected networks, convolutional networks (TCNN~\cite{bai2018empirical}, Resnet-18, VGG, Resnet-50 \cite{he2016deep}, EfficientNet~\cite{tan2019efficientnet}), LSTM-based networks \cite{chen2016enhanced} (LSTM+CNN, LSTM+Attention, BLSTM), and Transformers \cite{vaswani2017attention} (fine-tuned BERT, Transformer-XL, Transformer, Visual transformer). See Table \ref{acc_ece} for detailed references. We follow the hyperparameter settings given in \cite{hui2020evaluation} for the cross-entropy loss and the square loss (other than SVHN, STL-10, and CIFAR-100), and use the algorithmic parameter settings of cross entropy for squentropy in most cases. The exceptions are SVHN and STL-10, where squentropy and the square loss use a smaller learning rate (0.1 for cross entropy, 0.02 for squentropy and the square loss). More details about the hyperparameter settings for SVHN, STL-10, and CIFAR-100 are in Appendix \ref{app_para}. \paragraph{Metrics.} For the NLP, vision, and 121 tabular datasets, we report accuracy as the metric for test performance. For the speech datasets, we conduct automatic speech recognition (ASR) tasks and report test-set error rates, which are standard metrics for ASR. Precisely, for TIMIT, we report phone error rate (PER) and character error rate (CER). For WSJ and Librispeech, we report CER and word error rate (WER). ECE is the metric used to measure calibration for all datasets. For the speech datasets, we report calibration results for the acoustic modeling part.
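Since ECE is central to what follows, here is a minimal NumPy sketch of the binned estimator of Eq.~\eqref{eq:ece}; the bin count $K$ and variable names are illustrative assumptions, not fixed by our experimental setup.

\begin{verbatim}
import numpy as np

def expected_calibration_error(probs, labels, K=15):
    # probs: (n, C) softmax outputs; labels: (n,) true labels.
    conf = probs.max(axis=1)     # confidence p_{i, yhat_i}
    pred = probs.argmax(axis=1)  # predicted labels yhat_i
    correct = (pred == labels).astype(float)
    ece, n = 0.0, len(labels)
    for k in range(K):
        # bins partition (0, 1] into K equal-width intervals
        in_bin = (conf > k / K) & (conf <= (k + 1) / K)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.sum() / n * gap
    return ece
\end{verbatim}

Per-bin accuracy and confidence from the same loop are exactly what the reliability diagrams below visualize.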
Table \ref{acc_ece} shows the results for the NLP, speech, and vision datasets. Figure \ref{121_results} shows results for the 121 tabular datasets. In addition, we provide reliability diagrams \cite{degroot1983comparison, niculescu2005predicting} to visualize the confidence and accuracy of each interval; see Section \ref{calibration_expr} for details. \paragraph{Remarks on Table \ref{acc_ece}.} For the results of the square loss, we use the rescaled square loss with $t>1$ or $M>1$ for TIMIT (PER) ($t=1, M=15$), WSJ ($t=1, M=15$), Librispeech ($t=15, M=30$), CIFAR-10 and CIFAR-100 ($t=1, M=10$), and ImageNet ($t=15, M=30$). All others use the standard square loss. Note that WSJ (WER) and WSJ (CER) share the same ECE number as they share one acoustic model. (Similarly for Librispeech.) Additionally, since ECE numbers are not available for Top-5 accuracy, the corresponding entries (ImageNet, Top-5 acc.) are marked as ``N/A''. For the empirical results reported in Table~\ref{acc_ece}, we discuss generalization / test performance in Section~\ref{generalization_expr} and calibration results in Section~\ref{calibration_expr}. Results for the 121 tabular datasets are reported in Section~\ref{sec_121}. We report the {\em average} accuracy/error rate (for test performance) and {\em average} ECE (for model calibration) of \textit{5 runs} with different random initializations for all experiments. We report the standard deviation of this collection of runs in Section~\ref{sec_std}. \subsection{Empirical results on test performance} \label{generalization_expr} \begin{figure*}[ht] \centering \centerline{\includegraphics[width=2.4\columnwidth]{figs/c100_diagram1.pdf}} \vspace{-5mm} \caption{\textbf{Confidence histograms (top) and reliability diagrams (bottom) for a Wide Resnet on CIFAR-100.} See Table \ref{acc_ece} for its test accuracy. The confidence histogram gives the portion of samples in each confidence interval, and the reliability diagrams show the accuracy as a function of the confidence. The ECE numbers are percentages, as in Table \ref{acc_ece}. \textit{Left: }Squentropy, \textit{Middle left: } cross entropy, \textit{Middle right: } rescaled square loss, \textit{Right: } standard square loss. We see that models trained with squentropy are better calibrated, while cross entropy suffers from overconfidence and the standard square loss is highly underconfident.} \label{fig:diagram} \vspace{-3mm} \end{figure*} Our results show that squentropy has better test performance than cross entropy and the square loss in the majority of our experiments. The perf(\%) numbers in Table \ref{acc_ece} show the test accuracy on the benchmarks for the NLP and vision tasks, and the error rate for the speech tasks. Squentropy performs best in {\em 24 out of 34} tasks. We also report the numbers for {\em subsets} of enwik8 and CIFAR-100. Compared with the full versions of these datasets, squentropy seems to gain more when the datasets are small. \paragraph{Applicability and significance.} Table \ref{acc_ece} shows improvements for squentropy across a wide range of distributions from the NLP, speech, and vision domains. On the other hand, the improvement on any single task is often not significant, and for some datasets, squentropy's performance is worse. One reason may be our choice to use the optimal hyperparameter values for cross entropy in squentropy. Further tuning of these hyperparameters may yield significant improvements.
\subsection{Empirical results on calibration} \label{calibration_expr} In this section we show model calibration results, measured with the ECE, for the models given in Table \ref{acc_ece}. The ECE numbers for the NLP, speech, and vision tasks are also shown in Table \ref{acc_ece}. \paragraph{Squentropy consistently improves calibration.} As can be seen in Table \ref{acc_ece}, in {26 out of 32} tasks, the calibration error (ECE) of models trained with squentropy is smaller than for cross entropy and the square loss, even in those cases in which squentropy had slightly worse test performance, such as WSJ, STL-10, and SVHN. Besides using ECE to measure model calibration, we also provide a popular form of visual representation of model calibration: reliability diagrams \cite{degroot1983comparison, niculescu2005predicting}, which show accuracy as a function of confidence as a bar chart. If the model is perfectly calibrated, i.e., $\mathbb{P}(\hat{y}_i=y_i|p_i)=p_i$, the diagram should show all bars aligning with the identity function. If most of the bars lie below the identity function, the model is overconfident, as the confidence is mostly larger than the corresponding accuracy. If most bars lie above the identity function, the model is underconfident, as the confidence is smaller than the accuracy. For a given bin $k$, the difference between $\text{acc}(B_k)$ and $\text{conf}(B_k)$ represents the calibration \textit{gap} (orange bars in the reliability diagrams, e.g., the bottom row of Figure \ref{fig:diagram}). In Figure \ref{fig:diagram} we plot the confidence histogram (top) and the reliability diagrams (bottom) of Wide Resnets on CIFAR-100, trained with four different loss functions: squentropy, cross entropy, the rescaled square loss (with $t=1, M=10$), and the standard square loss ($t=1, M=1$). The confidence histogram gives the percentage of samples in each confidence interval, while the reliability diagrams show the test accuracy as a function of confidence. In the reliability diagrams at the bottom of Figure \ref{fig:diagram}, the orange bars, which represent the confidence {\em gap}, start from the top of the blue (accuracy) bars. We show $\text{conf}(B_k) - \text{acc}(B_k)$ for all intervals in all reliability diagram plots. Note that for intervals where confidence is smaller than accuracy, the orange bars go down from the top of the blue bars, such as the one in the bottom right of Figure \ref{fig:diagram}. More reliability diagrams for other tasks are given in Appendix \ref{app_diag}. \paragraph{Squentropy vs. cross entropy.} If we compare the diagrams of squentropy and cross entropy, the bars for squentropy are closer to the identity function; cross entropy apparently yields more overconfident models. The gap for squentropy is also smaller than for cross entropy in most confidence intervals. \begin{figure}[ht] \centering \centerline{\includegraphics[width=1\columnwidth]{figs/121_1_43.pdf}} \vspace{1mm} \resizebox{.45\textwidth}{!}{% \begin{tabular}{cccc} \hline Loss functions & Squentropy & Cross entropy & Square loss \\ \hline Avg accuracy & \textbf{85.6\%} & 85.2\% & 85.5\% \\ Avg ECE & \textbf{11.6\%} & 13.0\% & 15.7\% \\ \hline \end{tabular}} \caption{\textbf{Test accuracy and model calibration of 121 tabular datasets from \cite{fernandez2014we}} trained with a 3-layer (64-128-64) fully-connected network. The results for each dataset are averaged over 5 runs with different random initializations. \textit{Left:} Test accuracy (larger is better). \textit{Right:} Calibration error ECE (smaller is better).
The top figures plot the results of squentropy and cross entropy, while the bottom figures plot the results of squentropy and the (rescaled) square loss. Test accuracy/ECE for each dataset are tabulated in Appendix \ref{app_121}.} \label{121_results} \vspace{-5mm} \end{figure} \paragraph{Standard square loss leads to underconfidence.} We also plot the reliability diagrams for training with the standard square loss in the rightmost panels of Figure \ref{fig:diagram}. We see that it is highly underconfident, as the confidence is smaller than $0.1$ (the exact number is $0.017$) for all samples. Note that the square loss is applied directly to the logits $f_j(\bm{x}_i)$; since the logits are driven toward the one-hot vector $\bm{e}_{y_i}$, the {\em probabilities} $p_{i,j}$ formed from these logits are not going to be close to the one-hot vector. The ``max'' probability (confidence) will instead be close to $\frac{e}{e+(C-1)}$, which is small when $C$ is large (for $C=100$, $e/(e+99) \approx 0.027$). \paragraph{Rescaling helps with calibration.} The second-from-right bottom diagram in Figure~\ref{fig:diagram} shows the results of training with the rescaled square loss ($t=1, M=10$) on CIFAR-100. This minimization problem drives the logits of the true class closer to $M$, making the max probability approach $\frac{e^M}{e^M+(C-1)}$ (for $M=10$ and $C=100$, about $0.996$), a much larger value than for the standard square loss, leading to better calibration. However, squentropy avoids extra rescaling hyperparameters while achieving even smaller values of ECE. \subsection{Additional results on 121 tabular datasets} \label{sec_121} Additional results for 121 small, low-dimensional, and class-imbalanced tabular datasets, obtained with 3-layer fully-connected networks, are shown in Figure~\ref{121_results}. For all these cases, we use the SGD optimizer with weight decay parameter $5\times 10^{-4}$ and run $400$ epochs with learning rate $0.01$. The ``square loss'' function used here is in fact the rescaled version with parameters $t=1$ and $M=5$. Figure~\ref{121_results} shows that for most datasets, squentropy has slightly better test accuracy and significantly smaller ECE than cross entropy or the square loss. Squentropy has the best test accuracy in 71 out of 121 tasks and the best calibration in 60 tasks. Compared with cross entropy alone, squentropy is better on accuracy in 94 tasks and better on calibration in 83 tasks. Test accuracy and ECE for each dataset in this collection are tabulated in Appendix~\ref{app_121}. \input{variance} \begin{figure*}[ht] \centering \begin{minipage}[b]{1.5\columnwidth} \includegraphics[width=\columnwidth]{figs/sqen_margin11.pdf} \end{minipage} \hfill \begin{minipage}[b]{1.5\columnwidth} \includegraphics[width=\columnwidth]{figs/ce_margin11.pdf} \end{minipage} \hfill \begin{minipage}[b]{1.5\columnwidth} \includegraphics[width=\columnwidth]{figs/square_margin11.pdf} \end{minipage} \vspace{-3mm} \caption{\textbf{Decision boundary along different epochs for test samples.} We fix all random seeds to be the same for all cases and hence the test set is exactly the same. (Thus, we display legends only in the bottom-row figures.) Color coding indicates the calculated probability that the class label is $1$, according to the scale on the right. The white line between red and blue areas indicates the decision boundary. We train a 3-layer fully-connected network with 12 units in each layer, for a 2-class spiral data set in $\mathbb{R}^2$. There are 1000 samples for training and 500 samples for test, and we train for 1000 epochs, yielding a training accuracy of $100\%$ for all loss functions.
Test accuracies are squentropy: $99.9\%$, cross entropy: $99.7\%$, square loss: $99.8\%$. \textit{Top:} squentropy. \textit{Middle:} cross entropy. \textit{Bottom:} square loss. Columns show results after 100, 500, and 1000 epochs, respectively.} \label{fig_bd} \end{figure*} \subsection{Robustness to initialization} \label{sec_std} To evaluate the stability of the models trained with the loss functions considered in this paper, we report the standard deviation of the accuracy/error rate with respect to the randomness in the initialization of the weights for the NLP, speech, and vision tasks. The standard deviation is over 5 runs with different random initializations; see Table~\ref{variance} for results. The standard deviation of squentropy is smaller in the majority of the tasks considered, so results are comparatively insensitive to model initialization. \section{Observations} As mentioned previously, we conjecture that the square term of squentropy acts as an implicit regularizer, and in this section we provide some observations in support of this conjecture. We discuss the decision boundary learnt by a fully-connected network on a 2-class spiral data problem (the ``Swiss roll'') in Section~\ref{db_exp}, and remark on the weight norm of the last linear layer of several networks in Section~\ref{sec_norm}. \subsection{Predicted probabilities and decision boundary} \label{db_exp} Using a simple synthetic setting, we observe that the decision boundary learned with squentropy appears to be smoother than that for cross entropy and the square loss. We illustrate this point with a 2-class classification task with spiral data and a 3-layer fully-connected network with parameters $\theta$. This setup enables visual observations. Given a sample $\bm{x}_i\in \mathbb{R}^2$ with label $y_i \in \{1,2\}$ and one-hot encoding $\bm{y}_i=[1, 0]$ or $\bm{y}_i=[0, 1]$, we solve for weights $\theta$ to define functions $f_1(\bm{x}_i)$ and $f_2(\bm{x}_i)$ corresponding to the two classes. For any $\bm{x}_i$, we then predict the probability of $\bm{x}_i$ being classified as class $1$ as follows: $p(\bm{x}_i) := e^{f_1(\bm{x}_i)} / (e^{f_1(\bm{x}_i)} + e^{f_2(\bm{x}_i)})$. Samples are assigned to class $1$ if $f_{1}(\bm{x}_i)>f_{2}(\bm{x}_i)$ and to class 2 otherwise. The decision boundary is the set of points $\{\bm{x} \, | \, f_{1}(\bm{x})=f_{2}(\bm{x}) \}$, equivalently $\{\bm{x} \, | \, p(\bm{x}) = 1/2\}$. We see from Figure~\ref{fig_bd} that the decision boundary obtained with squentropy is smoother than those learnt with both cross entropy and the square loss. This appears to be true throughout the training process, on this simple example. Meanwhile, the margin (distance from training points to the decision boundary) is also larger for squentropy in many regions. Together, the large margin and smooth decision boundary suggest robustness to perturbations and could be one of the reasons for the improved generalization resulting from the use of squentropy~\cite{elsayed2018large}. \begin{figure}[ht] \centerline{\includegraphics[width=0.85\columnwidth]{figs/w_norm.pdf}} \vspace{-5mm} \caption{\textbf{Weight norm along training.} We train a Resnet-18 on CIFAR-10 (calibration error, ECE: squentropy: $8.9\%$, cross entropy: $10.0\%$) and STL-10 (ECE: squentropy: $21.2\%$, cross entropy: $26.1\%$), and a wide Resnet on CIFAR-100 (ECE: squentropy: $10.9\%$, cross entropy: $17.9\%$), and show the norm of the last linear layer's weights.
These are the same experiments as given in Table \ref{acc_ece}.} \label{w_norm} \end{figure} \vspace{-3mm} \subsection{Weight norm} \label{sec_norm} Neural classifiers trained with cross-entropy loss suffer from overconfidence, causing miscalibration of the model \cite{guo2017calibration}. Our calibration results in Figure~\ref{fig:diagram} and Section~\ref{sec_more_reliability} show evidence of this phenomenon. As can be seen in the confidence histogram of cross entropy --- the $(1,2)$ panel in Figure~\ref{fig:diagram} --- the average confidence $p_{y_i}(\bm{x}_i)$ for the predicted label under cross entropy is close to $1$. This fact suggests that the logits $f_{y_i}(\bm{x}_i)$ of the true class are close to $\infty$, while the logits of the incorrect classes approach $-\infty$. Such limits are possible only when the weights of the last linear layer have large norm. To quote \cite{mukhoti2020calibrating}, ``{\it cross-entropy loss thus inherently induces this tendency of weight magnification in neural network optimisation}.'' \citet{guo2017calibration} comment that weight decay, which corresponds to adding a penalty term to the loss consisting of the sum of squares of the weights, can produce appreciably better calibration while having a minimal effect on test error; see the rightmost diagram in Figure 2 of \cite{guo2017calibration}. In \cite{mukhoti2020calibrating, liu2022devil}, the authors point out how the focal loss proposed in~\cite{lin2017focal} improves calibration by encouraging the predicted distribution to have higher entropy, thus implicitly regularizing the weights. Figure~C.1 of \cite{mukhoti2020calibrating} compares weight norms and final logit values between cross entropy and the focal loss, showing that the latter are significantly smaller. We perform a similar experiment, showing in Figure~\ref{w_norm} the norm of the final-layer weights for three examples from Table~\ref{acc_ece} as a function of training steps. We observe that the weight norm for the model trained with squentropy is much smaller than the norm of the same set of weights in the model trained with cross entropy, throughout the training process. \section{Summary, thoughts, future investigations} As with the selection of an optimization procedure, the choice of the loss function is an ineluctable aspect of training all modern neural networks. Yet the machine learning community has paid little attention to understanding the properties of loss functions. There is little justification, theoretical or empirical, for the predominance of cross-entropy loss in practice. Recent work~\citep{hui2020evaluation} showed that the square loss, which is universally used in regression, can perform at least as well as cross entropy in classification. Other works have made similar observations: \cite{rifkin2002everything, sangari2015convergence, que2016back, demirkaya2020exploring}. While several alternative loss functions, such as the focal loss~\cite{lin2017focal}, have been considered in the literature with good results, none has been adopted widely. Even the hinge loss, the former leader in the popularity contest for classification losses, is barely used outside the context of Support Vector Machines. In this work we demonstrate that a simple hybrid loss function can achieve better accuracy and better calibration than the standard cross entropy on a significant majority of a broad range of classification tasks. Our squentropy loss function has no tunable parameters.
Moreover, most of our experiments were conducted in a true ``plug-and-play'' setting, using the same algorithmic parameters in the optimization process as for training with the standard cross-entropy loss. The performance of squentropy can undoubtedly be further improved by tuning the optimization parameters. Furthermore, various calibration techniques can potentially be applied with squentropy in the same way they are used with cross entropy. Thus, from a practical point of view, squentropy currently appears to be the natural first choice to train neural models. By no means do we claim fundamental reasons or compelling intuition indicating that squentropy is the last word on the choice of loss functions for classification. One of the main goals of this work is to encourage both practitioners and theoreticians to investigate the properties of loss functions, an important but largely overlooked aspect of modern machine learning. \section*{Acknowledgements} We acknowledge support from the National Science Foundation (NSF) and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning\footnote{\url{https://deepfoundations.ai/}} through awards DMS-2031883 and \#814639, as well as the TILOS institute (NSF CCF-2112665). This work was also supported by an NSF TRIPODS grant to the Institute for Foundations of Data Science (NSF DMS-2023239), NSF grant CCF-222421, and AFOSR via subcontract UTA20-001224 from UT-Austin. LH thanks Chaoyue Liu and Parthe Pandit for reading the draft and giving useful comments on the writing. We thank Nvidia for the donation of GPUs and Google for providing access to cloud TPUs. This work uses CPU/GPU nodes (allocated with TG-CIS220009) provided by the San Diego Supercomputer Center, with the Extreme Science and Engineering Discovery Environment (XSEDE) \cite{towns2014xsede}, which is supported by NSF grant number ACI-1548562.
\section{Introduction}\label{sec:introduction} Retrieving and aggregating subsets of event data of a particular characteristic is a recurring activity in process analysis and process mining~\cite{van2016process}. Each \emph{event} is thereby defined by an \emph{event classifier} such as the \emph{activity} or state that was recorded, a \emph{case identifier} referring to the object or entity where the activity was carried out, and a \emph{timestamp} or \emph{ordering} attribute defining the order of events. If all events use the same, single case identifier attribute, the event data is \emph{single-dimensional} and can be stored in an \emph{event log} as one \emph{sequence of events} per case according to the data model of the XES-Standard~\cite{ieee_xes_standard}, see Fig.~\ref{fig:event_data_models}(a). Such sequences can be easily queried for \emph{behavioral properties} such as \emph{event (sub-)sequences} or \emph{temporal relations} such as ``directly/eventually-follows'' in combination with other data attributes~\cite{DBLP:journals/eswa/BottrighiCLMT16,DBLP:conf/icdt/DeutchM09,Liu:2009:SEC:1944968.1944974,DBLP:conf/otm/RaimCMMM14,DBLP:conf/edoc/SongWWWTK11,DBLP:journals/information/TangMS18}. \emph{Aggregating} directly/eventually-follows relations between events is fundamental for discovering process models from event logs~\cite{van2016process,DBLP:journals/tkde/AugustoCDRMMMS19,DBLP:journals/is/WeerdtBVB12}. Most processes in practice, however, involve multiple inter-related entities, which results in \emph{multi-dimensional} event data in which each event is directly or indirectly linked to multiple different case identifiers; sequential event logs cannot represent such multi-dimensional event data~\cite{DBLP:journals/tsc/LuNWF15}. Relational databases (RDBs) can store \emph{1:n and n:m relations between events and case identifiers} and among different case identifiers\,---\,but the explicit behavioral information of sequences (of arbitrary length) is lost (Fig.~\ref{fig:event_data_models}(e)). \paragraph{State of the art.} From existing literature~\cite{Gonzalez_2019_pm_db_phd,DBLP:journals/tsc/LuNWF15,DBLP:conf/bpm/MurillasRA16}, we identified \emph{requirements for modeling (R1-R4), querying (R5-R11), and aggregating (R12-R15) multi-dimensional event data}, see Sect.~\ref{sec:background}. Several data models for multi-dimensional event data have been researched. Extracting sequential paths of events over multiple entities requires large, non-intuitive queries~\cite{DBLP:conf/bpm/JansS17,DBLP:journals/sosym/MurillasRA19} and introduces false information (e.g., $D \to D$ and $E\to E$ in Fig.~\ref{fig:event_data_models}(a) have no corresponding Offer)~\cite{DBLP:journals/tsc/LuNWF15}. Behavioral queries on RDBs are limited to pairs of directly-following events~\cite{DBLP:journals/dpd/DijkmanGSDGH20} or pre-defined patterns~\cite{DBLP:conf/caise/SchonigRCJM16} for a single entity. Multi-identifier event tables~\cite{DBLP:conf/sefm/Aalst19,DBLP:conf/caise/LiMCA18,DBLP:journals/ijcis/PopovaFD15} correlate each event to multiple entities, but provide no sequential order for each entity dimension, see Fig.~\ref{fig:event_data_models}(d).
\emph{Entity-specific event logs} describe sequential information per entity~\cite{DBLP:journals/sosym/MurillasRA19,DBLP:journals/ijcis/PopovaFD15}, while \emph{relation-specific event logs} allow reifying relations between entities~\cite{DBLP:journals/tsc/LuNWF15} into a composite entity describing their interaction~\cite{DBLP:conf/apn/Fahland19}, see Fig.~\ref{fig:event_data_models}(b); neither describes the correlation of an event to multiple entities. A path such as the sequence $D \to E \to F \to G$ over 3 different entities shown in Fig.~\ref{fig:event_data_models}(f) cannot be queried in any of these models, as they keep multi-entity correlation and sequential information per entity separated. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/overview_event_data_models.pdf} \caption{Event Data Models} \label{fig:event_data_models} \end{figure} In a prior exploratory case study~\cite{esser2019storing}, we showed an integrated data model based on \emph{labeled property graphs} using edges to correlate events to entities, and to model ``directly-following'' events per entity as shown in Fig.~\ref{fig:event_data_models}(c). We used \emph{graph query languages} of existing \emph{graph database systems}~\cite{robinson2013graph} to answer behavioral multi-entity queries (Fig.~\ref{fig:event_data_models}(f)) and to aggregate directly-follows relations per entity. In parallel, Berti et al.~\cite{DBLP:conf/simpda/BertiA19} demonstrated that aggregating directly-follows edges per entity in a graph of events allows for simpler discovery of models of behavior over multiple entities, though their model assumes that relations between entities have already been reified and does not support querying, as we discuss in Sect.~\ref{sec:background}. However, our data model~\cite{esser2019storing} was based on a single real-life data set and did not provide a generic data model and queries, specifically for aggregation. \paragraph{Research problem.} In this paper, we approach the problem of identifying a generally applicable model of event data in a multi-dimensional setting. The specific problem is to \emph{identify a minimal set of core concepts for a data model of multi-dimensional process event data with clearly defined semantics} to fully (1) model, (2) query, and (3) aggregate all kinds of real-life process event data suitable for process analysis, addressing requirements R1-R15. From a collection of public real-life event logs\footnote{\url{https://data.4tu.nl/repository/collection:event\_logs\_real}} we identified 5 publicly available data sets with unique multi-dimensional characteristics that can serve as a benchmark: multiple entities interact via a shared common entity (BPIC14~\cite{BPIC2014}); multiple entities interact asynchronously, based on click-stream data (BPIC16~\cite{BPIC2016}), based on a case management system (BPIC17~\cite{BPIC2017}\footnote{BPIC18~\cite{BPIC2018} originates from a similar kind of system and has the same characteristics as BPIC17 and hence was excluded}), based on ERP system data (BPIC19~\cite{BPIC2019}); and multiple event logs of the same processes executed in different organizations (BPIC15~\cite{BPIC2015}). A data model has to allow modeling, querying, and aggregating at least these 5 data sets in their multi-dimensional nature. \paragraph{Method.} First, we determined the process event data concepts any data model had to support, based on the literature (see Sect.~\ref{sec:background:event_log_concepts}).
All recent works that succeed in modeling (some) aspects of multi-dimensional event data employ graph properties. We therefore took the most complete proposal~\cite{esser2019storing} based on labeled property graphs (LPGs, see Sect.~\ref{sec:background:lpg}) as a starting point. We then iteratively developed a solution that could support all 5 benchmark data sets using the following approach with the existing graph database (GDB) system Neo4j (\url{neo4j.com}); Neo4j was chosen for LPG storage and querying due to off-the-shelf availability and suitable performance. \begin{enumerate} \item We transformed the event data into a standardized input format of an event table where each record describes one event and its properties, including references to all entities involved. This format is readily available or easily obtainable from the XES event log standard~\cite{ieee_xes_standard}. \item We imported the events into the GDB as an LPG of generic, unrelated nodes only. \item We then identified node and relationship types as semantic concepts for the data model and corresponding data transformations in terms of queries on LPGs, so as to model all input data sets through the same semantic concepts and the same data transformation queries. \item The semantic concepts for nodes and relations thereby had to serve as adequate abstractions so that all event data could be queried and analyzed through these semantic abstractions only. \item In case a suitable solution (types, relations, queries) was found for one data set, we applied it to all other data sets. If the solution could not be applied to one data set, we identified the cause, generalized the concepts and queries, and repeated steps 3-5 for all data sets. \end{enumerate} We conducted over 100 iterations of the above process over all 5 data sets until reaching a fixed point. \paragraph{Contribution and Results.} We contribute a generally applicable, minimal, integrated data model for multi-dimensional event data with semantic definitions for correlation and behavioral ordering using labeled property graphs. Our model allows querying and aggregating the modeled multi-dimensional event data and thereby subsumes and exceeds several prior works. We identified \emph{4 semantic node types} for multi-dimensional event data in LPGs: events, entities, logs, and event classes; \emph{3 semantic structural relations} for relating each event to one or more entities, to exactly one log, and to one or more event classes; and \emph{2 semantic behavioral relations} describing directly-follows between two events (along a chosen entity), and its congruent directly-follows relation between event classes (summarizing event-level directly-follows on the class level); see Sect.~\ref{sec:represent} for the concepts and Sect.~\ref{sec:semantics} for their semantic definition in terms of LPGs. We identified queries to extract entities from input event data, for both explicitly recorded entities and composite entities obtained by reifying relations, to correlate events to entities, and to derive directly-follows relations for all events of a specific entity (explicit or reified). All the available multi-dimensional event data could be transformed into our model using a standard set of queries satisfying (R1-R4); see Sect.~\ref{sec:storing}. We show that all query requirements (R5-R11) on multi-dimensional event data are satisfied by our model using the standard query language Cypher. We evaluated the query results to be correct against a manually constructed ground truth.
Query execution times are practically feasible and in complex cases outperform hand-written algorithms on sequential event logs for the same task; see Sect.~\ref{sec:querying}. We identified queries to aggregate events and directly-follows relations between events to event classes and directly-follows relations between event classes for a specific entity; the queries allow considering structural and behavioral properties during aggregation, addressing all aggregation requirements (R12-R15). Specifically, aggregating directly-follows relations on event classes allows realizing process discovery for models of processes over multiple entities directly through standard queries in existing graph database systems. In Sect.~\ref{sec:mining} we demonstrate discovery of artifact-centric models of multiple entities with asynchronous and synchronous interactions~\cite{DBLP:journals/tsc/LuNWF15}, also called multi-viewpoint models~\cite{DBLP:conf/simpda/BertiA19}. All queries realizing the transformations into our data model, querying the data, and discovering process models through graph databases are available on GitHub\footnote{\url{https://github.com/multi-dimensional-process-mining/graphdb-eventlogs} and at~\cite{graphdataset}}. We discuss limitations and avenues for future work in Sect.~\ref{sec:conclusion}. \section{Background}\label{sec:background} We first recall the foundational concepts of single-dimensional process event logs in Sect.~\ref{sec:background:event_log_concepts}. After discussing challenges and requirements for analyzing multi-dimensional event data in Sect.~\ref{sec:background:multi-dim-req}, we discuss the state of the art on modeling, querying, and analyzing multi-dimensional event data in Sect.~\ref{sec:background:multi-dim-literature}, before we recall the data model of labeled property graphs and the query language Cypher. \subsection{Modeling Single-Dimensional Event Logs} \label{ssec:processEventData} \label{sec:background:event_log_concepts} Information Systems (IS) create and update information records in structured transactions or \emph{activities}. Each update is linked to one or more \emph{entities} with unique \emph{entity identifiers}, for example a specific order and related invoices. Each update can be recorded as an \emph{event} with attributes for the activity carried out, the entity identifiers, and the \emph{timestamp} (or \emph{ordering} of updates). Events are implicitly related to each other via the structural 1:1, 1:n, and n:m relations between the entities on which the updates occurred~\cite{DBLP:journals/tsc/LuNWF15}. A \emph{process event log} is a collection of recorded events $E$ structured into a specific \emph{view} on an IS from the perspective of handling \emph{one} specific entity, e.g., handling a credit application. Table~\ref{tab:2_BPIC17ExampleCaseTable} shows a simplified event log taken from the BPIC17~\cite{BPIC2017} data set describing the handling of a credit application (identified by \emph{Appl}.). \begin{table}[] \begin{center} \begin{tabular}{l|l|l|l|l|l|l} \hline \textbf{Appl.} & \textbf{Activity} & \textbf{Timestamp}& \textbf{Resource} & \textbf{Amount} & \textbf{oID} & \textbf{Origin} \\\hline 1 & Create Appl. & 29.08.19 10:30 & 9 & 1000 & & A \\ 1 & Appl.
Ready & 29.08.19 10:35 & 10 & 1000 & & A \\ 1 & Handle Leads & 29.08.19 10:40 & 42 & 1000 & & W \\ 1 & Create Offer & 29.08.19 13:14 & 11 & 1000 & 1 & O \\ 1 & Create Offer & 29.08.19 13:49 & 11 & 1000 & 2 & O \\ 1 & Send Offer & 29.08.19 18:00 & 12 & 1000 & 2 & O \\ 1 & Send Offer & 29.08.19 18:00 & 12 & 1000 & 1 & O \\ 1 & Call Offers & 30.08.19 10:15 & 44 & 1000 & & W \\ 1 & Offer Returned & 30.08.19 13:49 & 16 & 1000 & 2 & O \\ 1 & Appl. Complete & 30.08.19 13:59 & 44 & 1000 & & A \end{tabular} \end{center} \caption{Simplified BPIC'17 Example} \label{tab:2_BPIC17ExampleCaseTable} \end{table} The following process-specific concepts are part of every event log~\cite{van2016process}. \textbf{(E1)} Each event $e \in E$ in an event log records an \emph{atomic} observation using 3 attributes: an entity identifier $e.\mathit{entityid}$ to which the event is related, an \emph{event class} $e.\mathit{class}$ (usually the activity name $e.\mathit{activity}$), and an \emph{ordering attribute} (usually the event's timestamp $e.\mathit{time}$). \textbf{(E2)} Optional attribute $e.\mathit{lifecycle}$ records states of long-running behavior, e.g., an activity \emph{started} or \emph{completed}. \textbf{(E3)} Optional attribute $e.\mathit{resource}$ records which actor or resource was involved in the event. \textbf{(E4)} Each entity identifier $id = e.\mathit{entityid}$, e.g., a specific application, defines a \emph{case} (or execution) of the process; the set of events \emph{correlated} to this entity is $\{e_1,\ldots,e_n\} = \{ e \in E \mid e.\mathit{entityid} = id \}$. \textbf{(E5)} The sequence $\langle e_1,\ldots,e_n\rangle$ of all events correlated to an entity, ordered by the ordering attribute, is called a \emph{trace} (of this entity). The IEEE XES-Standard~\cite{ieee_xes_standard} materializes these concepts in a tree structure that specifically pre-determines a unique case identifier. Process mining relies on querying and aggregating the \emph{directly-follows} relation over events $E$. \textbf{(E6)} Event $e_b$ \emph{directly follows} $e_a$, $e_a \to e_b$, iff there is a trace $\langle \ldots,e_a,e_b,\ldots \rangle$. Directly-follows is aggregated over event classes: event class $b$ directly follows event class $a$, $a \to b$, iff there is a trace $\langle \ldots,e_a,e_b,\ldots \rangle$ with $e_a.\mathit{class} = a$ and $e_b.\mathit{class} = b$. Nearly all process discovery algorithms create one activity node for each event class~\cite{van2016process}; dependencies between activities are derived from the directly-follows relation between activities~\cite{DBLP:journals/tkde/AugustoCDRMMMS19,DBLP:journals/is/WeerdtBVB12}. Model quality improves when the class of an event is determined by behavioral properties such as the set of preceding activities~\cite{DBLP:conf/bpm/LuFBA16}. \subsection{Requirements for Analyzing Multi-Dimensional Event Data} \label{sec:background:multi-dim-req} An IS usually hosts multiple uniquely identifiable entities, e.g., credit applications and offers.
For example, the BPIC'17 data (shown in Tab.~\ref{tab:2_BPIC17ExampleCaseTable}) identifies four entities: credit applications (identified by \emph{Appl.}, events with $\mathit{Origin}=A$); credit offers (identified by \emph{oID}, events with $\mathit{Origin}=O$) with a 1:n relation to Applications; the Workflow (identified by \emph{Appl.}, events with $\mathit{Origin}=W$); and the actors working on the case (\emph{Resource}) with an n:m relation to Application, Workflow, and Offers; Fig.~\ref{fig:event_data_models}(e) illustrates (part of) the data in a relational database (RDB). Extracting a single-dimensional event log (Fig.~\ref{fig:event_data_models}(a)) groups all events under a single entity (case identifier), e.g., the Application or the Offer document, and \emph{flattens} the data accordingly~\cite{DBLP:conf/bpm/JansS17}, leading to false behavioral information called \emph{convergence} and \emph{divergence}~\cite{DBLP:conf/sefm/Aalst19,DBLP:journals/tsc/LuNWF15}. Flattening the data in Fig.~\ref{fig:event_data_models}(e) under \emph{Application} de-normalizes the 1:n relation to \emph{Offer} and results in the event log of Tab.~\ref{tab:2_BPIC17ExampleCaseTable} and Fig.~\ref{fig:event_data_models}(a): \emph{Create Offer} $\to$ \emph{Create Offer} occurs in the log, whereas in reality this never happens for any entity (\emph{convergence}). Flattening the data in Fig.~\ref{fig:event_data_models}(e) under \emph{Offer} via the n:1 relation replicates events on the 1-side for each entity on the n-side (\emph{divergence}), see Fig.~\ref{fig:event_data_models}(a). A recent literature survey of 95 studies~\cite{Gonzalez_2019_pm_db_phd,DBLP:conf/bpm/MurillasRA16} established requirements for \emph{querying} event data. Focusing on \emph{querying for structure and behavior in multi-dimensional event data}, we identified from~\cite[pp.133]{Gonzalez_2019_pm_db_phd} the requirements to \textbf{(R1)} query and analyze events (E1 of Sect.~\ref{sec:background:event_log_concepts}), and to \textbf{(R2)} consider relations between multiple data entities (as in RDBs). The technique shall support \textbf{(R3)} storing and querying business process-oriented concepts (E2-E3) and \textbf{(R4)} capturing information about how events are correlated to different entities to avoid convergence and divergence (generalizing E4-E6). According to~\cite{Gonzalez_2019_pm_db_phd,DBLP:conf/bpm/MurillasRA16}, queries should \textbf{(R5)} be expressed as graphs to specify the behavior of interest in a natural way, \textbf{(R6)} allow querying paths (or sequences) of events (connected by some relation), \textbf{(R7)} allow selecting individual cases based on partial patterns, \textbf{(R8)} allow querying temporal properties (such as directly/eventually-follows), \textbf{(R9)} correlate events related to the same entity, \textbf{(R10)} allow querying aspects related to several entities or processes at the same time on the same data set, and \textbf{(R11)} allow querying multiple event logs and combining the results. Prior work on analyzing multi-dimensional event data~\cite{DBLP:journals/tsc/LuNWF15} identified four major aggregation operations for discovering so-called \emph{artifact-centric process models}.
The technique has to support \textbf{(R12)} aggregating events into \emph{user-defined} event classes, e.g., activities, based on data properties, \textbf{(R13)} aggregating (reifying) records of a relation between two entities into a new \emph{composite} entity to model, query, and aggregate \emph{interactions} between different entities, \textbf{(R14)} aggregating behavioral relations from the event level to the \emph{event class} level \emph{per (inferred) entity type}, and \textbf{(R15)} relating or synchronizing aggregated behavior of different entity types. Altogether, \emph{a user shall be able to query and aggregate for individual events (and their properties), for different entities/case notions, for behavioral and structural relations, and for patterns of multiple events (within and across entities)}. \subsection{Related work} \label{sec:background:multi-dim-literature} We review 5 existing types of data models for event data against the requirements of Sect.~\ref{sec:background:multi-dim-req}, showing that \emph{no existing data model or query language on sequential event logs or RDBs satisfies R1-R15.} \paragraph{\#1. Single event log for a single, selected entity.} Event logs as described in Sect.~\ref{sec:background:event_log_concepts} and illustrated in Fig.~\ref{fig:event_data_models}(a) cannot correctly model or aggregate behavior of events related to multiple entities due to convergence and divergence, as discussed in Sect.~\ref{sec:background:multi-dim-req}. Sequential event logs can be stored and queried using files~\cite{ieee_xes_standard} or through RDBs~\cite{DBLP:conf/simpda/SyamsiyahDA16b}. Among the 95 works surveyed in~\cite[pp.133]{Gonzalez_2019_pm_db_phd}, several approaches exist to retrieve cases from event logs for temporal properties~\cite{Liu:2009:SEC:1944968.1944974,DBLP:conf/edoc/SongWWWTK11}, for most frequent behavior~\cite{DBLP:conf/icdt/DeutchM09}, for sequences of activities~\cite{DBLP:journals/eswa/BottrighiCLMT16} or algebraic expressions of sequence, choice, and parallelism over activities~\cite{DBLP:journals/information/TangMS18}, or to check whether a temporal-logic property holds~\cite{DBLP:conf/otm/RaimCMMM14}. Several techniques support graph-based queries~\cite{DBLP:conf/sc/Cuevas-VicenttinDWSL12,DBLP:conf/icde/HuangBDMY15,Liu:2009:SEC:1944968.1944974}. These techniques satisfy R7 and R8. However, they only support a single fixed case notion and thus fail R2, R10, R11. \paragraph{\#2. Event table with multiple entity identifiers.} The model defines a single table; each record is an event with multi-valued entity identifier attributes, first introduced by Popova et al.~\cite{DBLP:journals/ijcis/PopovaFD15} and later formalized by van der Aalst as \emph{object-centric logs}~\cite{DBLP:conf/sefm/Aalst19}. \emph{Redo} event logs~\cite{DBLP:conf/bpm/MurillasHR17} include database operations and \emph{XOC} logs~\cite{DBLP:conf/caise/LiMCA18} even include database snapshots. The BPAF~\cite{bpaf_standard} format is a precursor that allows querying event data of different processes~\cite{DBLP:conf/ic3k/BaqueroM12a} but not based on properties of specific events (violates R7). All these models only describe correlation (and data operations) of an event to multiple entities, but leave the sequential ordering per entity implicit, see Fig.~\ref{fig:event_data_models}(d), which prevents R6 and R8. They are usually transformed to other formats for analysis~\cite{DBLP:conf/simpda/BertiA19,DBLP:journals/ijcis/PopovaFD15}. \paragraph{\#3.
Event data in a relational database.} Events are stored as time-stamped attributes and can be related to various entities through primary and foreign keys as shown in Fig.~\ref{fig:event_data_models}(e). Dijkman et al.~\cite{DBLP:journals/dpd/DijkmanGSDGH20} show a native, efficient relational algebra operator to query directly-following events. Pre-defined behavioral patterns can also be queried efficiently~\cite{DBLP:conf/caise/SchonigRCJM16}. However, these operators are fixed to one entity identifier, and querying paths requires unbounded joins (violates R4,R6,R10). \paragraph{\#4. Multiple logs, one per entity and per relation.} Convergence and divergence can be avoided by extracting one log per entity, providing multiple views~\cite{DBLP:conf/sefm/Aalst19,DBLP:journals/tsc/LuNWF15,DBLP:journals/sosym/MurillasRA19,DBLP:journals/ijcis/PopovaFD15}, as shown in Fig.~\ref{fig:event_data_models}(b). Non-overlapping logs can be extracted automatically by partitioning the relational schema~\cite{DBLP:journals/tsc/LuNWF15,DBLP:journals/ijcis/PopovaFD15}. Interactions between entities can be modeled by extracting a sequential log per relation: per record in the relation, include all events of one entity preceded or succeeded by an event of another entity~\cite{DBLP:journals/tsc/LuNWF15}, as shown in Fig.~\ref{fig:event_data_models}(b). Extraction of an interaction log corresponds to reifying the relation into an entity which overlaps and synchronizes with other entities~\cite{DBLP:conf/apn/Fahland19}. The approach of~\cite{DBLP:journals/sosym/MurillasRA19} provides a meta-model to extract event logs of different perspectives from user-defined, composite entities that also may overlap. However, the separation into multiple event logs violates R9 and R10, and, if logs do not overlap, also R15. DAPOQ~\cite[Ch.7]{Gonzalez_2019_pm_db_phd} generalizes various prior query languages to query and extract events in the context of their relational data model for behavioral properties, but does not support retrieving individual cases (R7) or specifying behavioral and structural patterns (R8). \paragraph{\#5. Graph-based, events as nodes related to multiple entities.} The technique in~\cite{DBLP:conf/bpm/BeheshtiBNS11} supports graph-based SPARQL queries over event data in RDF format from multiple entities, but does not allow selecting individual cases or querying for behavioral properties (violates R7,R8). Werner et al.~\cite{DBLP:journals/tsc/WernerG15} modeled behavior over two entity types in financial auditing as a \emph{graph over events} describing ``directly-follows'' per entity or relation, but their model was not generalized or used for querying (violates R5-R11). Our graph-based model~\cite{esser2019storing} shown in Fig.~\ref{fig:event_data_models}(c) generalizes the model of Werner et al.~\cite{DBLP:journals/tsc/WernerG15} to standard process concepts (Sect.~\ref{sec:background:event_log_concepts}) using labeled property graphs and Cypher~\cite{francis2018cypher}. Berti et al.~\cite{DBLP:conf/simpda/BertiA19} convert object-centric logs (format \#2) into two separate graphs: one describes correlation of events to entities, and one describes the directly-follows relation between any two events per entity, similar to~\cite{esser2019storing,DBLP:journals/tsc/WernerG15}. Assuming that all relations between entities have been reified into entities (see \#4), they aggregate the directly-follows relations per entity and satisfy R14,R15.
However, in most event data in practice~\cite{BPIC2017,DBLP:journals/tsc/LuNWF15}, such as Fig.~\ref{fig:event_data_models}, relations are not reified yet, limiting the applicability of~\cite{DBLP:conf/simpda/BertiA19} as it does not support R13. Further, the model does not support querying the event data prior to aggregation (violates R6, R7, R8, R10) because correlation and directly-follows relations are stored in separate graphs. In this paper, we generalize our previously explored integrated, graph-based model~\cite{esser2019storing} shown in Fig.~\ref{fig:event_data_models}(c) to be applicable to all kinds of real-life data sets while satisfying R1-R11 and additionally R12-R15, thereby subsuming prior works. \subsection{Labeled property graphs and querying} \label{ssec:LPGandGraphDB} \label{sec:background:lpg} \emph{Labeled Property Graphs} (LPGs) are a data structure used in graph databases (GDBs)~\cite{robinson2013graph}. An LPG $G = (N,R,\mathit{label},\mathit{prop})$ consists of nodes $N$ (vertices) and relationships $R$ (edges) where each relationship $r\in R$ defines a directed edge $\overrightarrow{r} = (n_1,n_2) \in N \times N$ between two nodes. The labeling function $\mathit{label} : N \cup R \to 2^\mathit{Label}$ assigns each node and each relationship a non-empty set of labels designating their type. Function $\mathit{prop} : (N \cup R) \times K \to V$ assigns each node or relationship an arbitrary number of key-value pairs, called \emph{properties}. We write $n.k = v$ for $\mathit{prop}(n,k) = v$, and $n.k = \perp$ if $k$ is undefined for $n$. \begin{wrapfigure}{r}{.55\linewidth} \includegraphics[scale=.5]{figures/example3.png} \caption{Labeled Property Graph} \label{fig:Example} \end{wrapfigure} The example LPG in Fig.~\ref{fig:Example} models the relationships between a professor and two students. The example contains nodes with the \emph{labels} \textit{:Person}, \textit{:Professor}, \textit{:Student} and \textit{:Document}. The document you are currently reading is authored by Stefan, a student supervised by Dirk, who co-authors this document; say Miro is another student contributing to this paper. The ``Name'' of each person is a property of the \textit{:Person} nodes; ``Type'' is a property of \textit{:Document} nodes. The described relationships between the nodes can also hold properties, like the starting date of a supervision. Neo4j supports multiple labels for nodes while relationships have exactly one label. \emph{Cypher} is a language for querying LPGs~\cite{francis2018cypher} and is supported by Neo4j. Cypher queries use pattern matching to select sub-graphs of interest. The pattern $(\mathit{n : \mathit{Label}}\ \{\mathit{Prop}: \mathit{Value}\})$ matches any node labeled $\mathit{\mathord{:}Label}$ that has property $\mathit{Prop}$ set to $\mathit{Value}$. Pattern $(n)-[r\mathord{:}\mathit{Label}]\mathord{-}\mathord{>}(m)$ matches any relationship labeled $\mathit{\mathord{:}Label}$ from a node $n$ to a node $m$. Any combination of nodes and relationships $(n,r,m)$ that matches the pattern is included in the result set; if any variable $n,r,m$ is already bound, then only combinations including the bound nodes/relationships will be returned. We explain the Cypher query concepts used in this paper by a single (albeit inefficient) example query. For the graph in Fig.~\ref{fig:Example}, we query for the longest path between ``Dirk'' and a student, other than ``Stefan'', who also works on a document that ``Dirk'' co-authors.
\begin{lstlisting}[style=smallStyle]
MATCH path = (s:Student)-[*]-(p:Professor {Name: "Dirk"})
WHERE NOT s.Name = "Stefan"
WITH s AS student, p AS professor, path AS paths
MATCH (d:Document)<-[:IS_COAUTHOR_OF]-(professor:Professor)
WHERE (student:Student)--(d)
RETURN student, d, paths, length(paths) AS pLength
ORDER BY pLength DESC
LIMIT 1
\end{lstlisting} The $\mathit{MATCH}$ clause retrieves pairs of nodes $s$ and $p$ and a $path$ between $s$ and $p$ that match the pattern in line 1: a Student $s$ related to Professor $p$ by a path $-[*]-$ of arbitrary relationships, direction, and length. The $\mathit{WHERE}$ clause in line 2 restricts the pattern such that the student's name cannot be ``Stefan''. By defining the professor's name property to be ``Dirk'' in line 1, we also restrict the pattern. $\mathit{WITH}$ in line 3 formats and renames the result set (e.g., $s$ renamed to $\mathit{student}$) that is passed to the next query from line 4 on: variable $\mathit{student}$ in lines 4-6 may only take values retrieved for variable $s$ in lines 1-2, e.g., ``Miro'' but not ``Stefan.'' Line 4 matches the documents Dirk co-authors and line 5 restricts the results to documents that have a direct relationship to a student. The $\mathit{RETURN}$ statement in line 6 formats the result set of lines 4-5 as output. For the example graph, the $student$ is ``Miro'' and the document $d$ is the ``paper''; variable $\mathit{paths}$ contains the 2 possible paths between Miro and Dirk: one walks over Stefan and one does not. Cypher's ``length()'' function returns the number of hops needed to walk a path. Lines 7-8 sort the results by path length (\emph{ORDER\ BY} clause) in descending order ($\mathit{DESC}$) and return only the first path of this ordered list (\emph{LIMIT} 1). Instead of $\mathit{RETURN}$, a Cypher query may also end with a statement $\mathit{CREATE} (student) \mathord{<}\mathord{-}[r\mathord{:}\mathit{FOUND}]\mathord{-}(d)$ to add a new relationship of label \emph{:FOUND} from node $d$ to node $student$; statement $\mathit{MERGE}$ only creates the specified node/relationship if it does not exist yet; see~\cite{Esser2019cs_tue} for more details. \section{Representing Multi-Dimensional Event Data in Labeled Property Graphs}\label{sec:represent} Labeled property graphs, introduced in Sect.~\ref{ssec:LPGandGraphDB}, allow versatile data modeling of various concepts and relations between concepts. In this section, we propose how to model the central concepts and relations of process event data of Section~\ref{sec:background:event_log_concepts} in labeled property graphs. Figure~\ref{fig:complete_schema} summarizes our proposal, which we explain in detail below. In Section~\ref{sec:semantics}, we \emph{constrain} the way the concepts and relations may occur in a labeled property graph describing event data, thereby defining the \emph{semantics} for process concepts in terms of LPGs. In that section, we also discuss how to \emph{refine} the proposed node and relationship types to aid in the analysis. \subsection{Modeling Events Related to Multiple Entities or Logs} We introduce node and relation types for event, entity, and event log; together they describe the \emph{instance-level} concepts of Fig.~\ref{fig:complete_schema}, i.e., concrete entities or recorded events.
\begin{figure}[tb] \centering \includegraphics[width=\linewidth]{figures/schema_complete.pdf} \caption{Schema of node and relationship types for modeling multi-dimensional event data in labeled property graphs} \label{fig:complete_schema} \label{fig:3event_node} \label{fig:3entityNodeType} \label{fig:3timestamp} \label{fig:3logNodeType} \label{fig:3attributeEvent} \end{figure} \par\vspace{.5em}\noindent\textbf{Event.} We represent the core element of each event log, the \emph{event}, as a node with label $\mathit{\mathord{:}Event}$ as shown in Fig.~\ref{fig:3event_node}. Of the three mandatory event attributes (cf. Sect.~\ref{sec:background:event_log_concepts}), we only model \emph{activity} and \emph{timestamp} as properties of event nodes, having datatypes \emph{STRING} and \emph{DATETIME}, respectively. We describe correlation to multiple case identifiers as well as the event classifier through the graph structure, as we explain next. The graph in Fig.~\ref{fig:example_entity_events_df} models 5 events. \par\vspace{.5em}\noindent\textbf{Entity.} Single-dimensional event logs fix a single entity identifier (called case identifier, cf. Sect.~\ref{sec:background:event_log_concepts}) to which each event is correlated. Our model abandons the notion of a case identifier in favor of the more general \emph{Entity} concept. We model each entity as a node with the label $\mathit{\mathord{:}Entity}$ as shown in Fig.~\ref{fig:3entityNodeType}. Its property \emph{EntityType} describes the type of the entity. Property \emph{ID} is the entity identifier. We require the combination of \emph{EntityType} and \emph{ID}, stored as property \emph{uID}, to be unique in the entire graph, similar to a primary key value in a relational database (indicated by \emph{uID} being underlined in Fig.~\ref{fig:3entityNodeType}). We keep our model limited to describing the existence of entities and defer modeling of more specific entity types and structural relations between entities and types to existing proposals~\cite{robinson2013graph}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/example_bpic17_lpg_event_entity_df.pdf} \caption{Graph describing the events of Application 1 and Offer 1 of Tab.~\ref{tab:2_BPIC17ExampleCaseTable}} \label{fig:example_entity_events_df} \end{figure} The graph in Fig.~\ref{fig:example_entity_events_df} models 4 entities of 4 different entity types. Each \emph{:Entity} node represents a concrete entity related to the process, such as \emph{Application} $1$ and \emph{Offer} $1$ of our running example of Tab.~\ref{tab:2_BPIC17ExampleCaseTable}. Our model also allows entities that are not classically considered part of the process execution, such as \emph{Resource} 11. Finally, an entity node can describe a reified relation between two existing entities, such as the entity $(1,1)$ of type \emph{A+O} describing the relation between \emph{Application} $1$ and \emph{Offer} $1$. We model \emph{correlation} of an event to an entity by an event-to-entity relationship type: $(:Event)-[\mathord{:}\mathit{E\_EN}]\mathord{-}\mathord{>}(:Entity)$. Through this relationship, we can correlate any event to any number of entities of different types, allowing for multi-dimensional correlation of events to entities as shown in Fig.~\ref{fig:example_entity_events_df}. For example, event $e4$ is correlated to \emph{Resource 11}, \emph{Offer 1}, and reified relation \emph{A+O (1,1)}, whereas event $e2$ is correlated to \emph{Application 1} and \emph{A+O (1,1)}.
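Such multi-dimensional correlation can already be inspected with a small Cypher pattern. The following is a minimal, hedged sketch (not one of the queries of our approach; the activity value and property names follow the running example) that lists, per \emph{Create Offer} event, all entities the event is correlated to:
\begin{lstlisting}[style=smallStyle]
// Sketch: list all entities each "Create Offer" event is correlated to
MATCH (e:Event {Activity: "Create Offer"}) -[:E_EN]-> (n:Entity)
RETURN e.Timestamp, collect(n.uID) AS correlatedEntities
\end{lstlisting}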
\par\vspace{.5em}\noindent\textbf{Event Log.} Some event data sets, such as BPIC'15~\cite{BPIC2015}, consist of multiple event logs. To support multiple logs in one graph event log instance, we introduce a separate node type for logs with the label $\mathit{\mathord{:}Log}$ as shown in Fig.~\ref{fig:3logNodeType}. Similar to the entities, we model which event belongs to which log with a relationship type: $(:Log)-[\mathord{:}\mathit{L\_E}]\mathord{-}\mathord{>}(:Event)$. \par\vspace{.5em}\noindent\textbf{Attributes.} Figure~\ref{fig:3attributeEvent} shows all properties our graph data model expects to be present. Additionally, any event, entity, or log node can carry any other property. \subsection{Modeling Behavior as Paths}\label{ssec:DFgeneral} \par\vspace{.5em}\noindent\textbf{Directly-Follows.} Events are ordered by time from the viewpoint of an entity they are correlated to (cf. Sect.~\ref{sec:background:event_log_concepts}). As our model allows events to be correlated to multiple entities, each event may have multiple ``next'' events, depending on the entity. We model temporal ordering of events through a \emph{:DF} relationship between any two events $x$ and $y$ that directly follow each other from the perspective of one or more entities: $(x:Event)-[\mathord{:}\mathit{DF}]\mathord{-}\mathord{>}(y:Event)$. Each \emph{:DF} relationship has as property \emph{EntityTypes} the list of entity types for which this relationship holds. In Fig.~\ref{fig:example_entity_events_df}, events $e1$, $e2$, and $e9$ follow each other for entity type \emph{Application}, i.e., \emph{Application 1}; events $e2$, $e4$, $e7$, $e9$ follow each other for entity type \emph{A+O}; and $e7$ also follows $e4$ for \emph{Offer}. Note that in this way, all directly-follows relations of the events of Tab.~\ref{tab:2_BPIC17ExampleCaseTable} are modeled correctly and in a single data structure, in contrast to the other models discussed in Fig.~\ref{fig:event_data_models}. \subsection{Modeling Aggregations of Events and Behavior}\label{sec:represent:class} \par\vspace{.5em}\noindent\textbf{Event classes.} While we assume each event to have the mandatory attribute \emph{Activity}, events can be classified in other ways as well (cf. Sect.~\ref{sec:background:event_log_concepts}). In our model of Fig.~\ref{fig:complete_schema}, each event class is described by a node \emph{:Class} defined by a unique \emph{ID} and a \emph{Type} shared by all event classes defined in the same way. Each event can be associated to zero or more event classes by relationship $(:Event)-[\mathord{:}\mathit{E\_C}]\mathord{-}\mathord{>}(:Class)$. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figures/example_class_dfc_how.pdf} \caption{Graph with multiple event classes and aggregated directly-follows} \label{fig:example_class_dfc} \end{figure} Figure~\ref{fig:example_class_dfc} shows non-standard event classes. Two event classes of type ``Resource'' are defined by the Resource entities occurring in the data, e.g., $Res1$ and $Res2$. Events $e5$, $e6$, and $e8$ belong to class $Res1$. Five event classes of type \emph{Last 2 Activities} are defined by the sequence of the current and previous activity for this event, e.g., distinguishing ``first $A$'', $(-,A)$, ``repeated $A$'' $(A,A)$, and ``$A$ after $B$'' $(B,A)$. Events $e2$, $e4$, $e7$ belong to class $(A,B)$. \par\vspace{.5em}\noindent\textbf{Directly-follows on event classes.} Event data analysis aggregates directly-follows relations between events to directly-follows relations between event classes (cf. Sect.~\ref{sec:background:event_log_concepts}).
Our model of Fig.~\ref{fig:complete_schema} provides relationship $(:Class)-[\mathord{:}\mathit{DF\_C}]\mathord{-}\mathord{>}(:Class)$. As for \emph{:DF}, each \emph{:DF\_C} relationship lists in property \emph{EntityTypes} the entity types for which the aggregated directly-follows relationship holds. This allows describing aggregated behavior per entity type. The graph in Fig.~\ref{fig:example_class_dfc} shows the aggregation of \emph{:DF} to \emph{:DF\_C}, assuming a single entity type. The two sub-graphs induced by \emph{:DF\_C} describe a hand-over-of-work social network (bottom)~\cite{DBLP:journals/cscw/AalstRS05} and a transition system~\cite{DBLP:journals/sosym/AalstRVDKG10} (top) of the event data in the same model. \section{Semantics of Entities and Events in Labeled Property Graphs}\label{sec:semantics} The node and relationship types introduced in Sect.~\ref{sec:represent} allow us to model multi-dimensional event data in LPGs. However, LPGs allow unrestricted use of any node and relationship types, so one could create LPGs that do not capture the semantics of event data. For example, Figure~\ref{fig:wrongGraph} only uses the node and relationship types of Sect.~\ref{sec:represent}, but the graph violates the semantics the types shall encode: \emph{:DF} does not order events $e2$ and $e3$ according to their timestamp, events $e1$ and $e2$ are ordered by \emph{:DF} but belong to different entities, and event $e3$ even directly-follows itself. \begin{figure}[t] \centering \includegraphics[scale=0.6]{figures/wrongGraph.pdf} \caption{Incorrect semantic pattern of :E\_EN and :DF relationships} \label{fig:wrongGraph} \end{figure} In this section, we formulate constraints on how the nodes and edges over the types of Sect.~\ref{sec:represent} may be related, thereby giving them semantics. We state these constraints for any labeled property graph $G = (N,R,\mathit{label},\mathit{prop})$ (see Sect.~\ref{sec:background:lpg}). \subsection{Strictly Typed Nodes} We formalize the semantics of the node labels $\mathit{Entity}$, $\mathit{Event}$, $\mathit{Log}$, $\mathit{Class}$ and of the relationship labels $\mathit{E\_EN}$ (event to entity), $\mathit{L\_E}$ (log to event), $\mathit{DF}$ (directly-follows on events), $\mathit{E\_C}$ (event to event class), and $\mathit{DF\_C}$ (directly-follows on event classes). Each node/relationship may have at most one of these types (i.e., no node or relationship may have two different semantic roles). Formally, $\mathit{Label}_N = \{ \mathit{Entity}, \mathit{Event}, \mathit{Log}, \mathit{Class}\}$, $\mathit{Label}_R = \{ \mathit{E\_EN}, \mathit{L\_E}, \mathit{DF}, \mathit{E\_C}, \mathit{DF\_C}\}$, and for each $n \in N$, $|\mathit{label}(n) \cap \mathit{Label}_N| \leq 1$ and $|\mathit{label}(n) \cap \mathit{Label}_R| = 0$, and for each $r \in R$, $|\mathit{label}(r) \cap \mathit{Label}_N| = 0$ and $|\mathit{label}(r) \cap \mathit{Label}_R| \leq 1$. As all nodes and relationships of interest carry exactly one label in the disjoint sets $\mathit{Label}_N$ and $\mathit{Label}_R$, we write $n.\mathit{label}$ and $r.\mathit{label}$ for their labels in the following. Further, we write $L$ for the set $\{ n \in N \mid n.\mathit{label} = L \}$ of nodes or the set $\{ r \in R \mid r.\mathit{label} = L \}$ of relationships carrying label $L$, respectively. For example, $n \in \textit{Entity}$ and $(e_1,e_2) \in \textit{DF}$.
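Such typing constraints can be checked directly on the graph. The following hedged sketch (for illustration only; it is not part of our transformation queries) returns all nodes violating strict typing, i.e., nodes carrying two or more of the semantic labels defined above:
\begin{lstlisting}[style=smallStyle]
// Sketch: find nodes that carry more than one semantic label
MATCH (n)
WITH n, [l IN labels(n) WHERE l IN ['Entity','Event','Log','Class']] AS semLabels
WHERE size(semLabels) > 1
RETURN n, semLabels
\end{lstlisting}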
\subsection{Semantics of Event-Entity Relations}\label{sec:semantics:e_en} The Event-Entity relationship $E\_EN$ correlates an event to its process entities. While each event $e$ can be related to multiple different entities, for example an Application and a Resource, there must not be two $E\_EN$ relationships from the same event to the same entity. Furthermore, each event is correlated to some entity, and vice versa, as shown in Fig.~\ref{fig:pattern:e_en}. \begin{figure}\centering \includegraphics[scale=0.6]{figures/pattern_propertyDF.png} \hfill \includegraphics[scale=0.58]{figures/4_LogToEvent.png} \caption{Correct semantic pattern of :E\_EN and :DF relationships (left) and :L\_E relationship (right)}\label{fig:pattern:e_en}\label{fig:pattern:l_e}\label{fig:pattern:df} \end{figure} Formally, the following properties have to hold: \begin{enumerate} \item Between any pair of $e \in \textit{Event}$ and $n \in \textit{Entity}$ there is at most one relationship $r \in \textit{E\_EN}$ with $\overrightarrow{r} = (e,n)$. As a shorthand, we write $\textit{E\_EN} \subseteq \textit{Event} \times \textit{Entity}$, and $(e,n) \in \textit{E\_EN}$. \item Each event $e \in \textit{Event}$ is correlated to at least one entity: there exists $(e,n) \in \textit{E\_EN}$. \item Each entity $n \in \textit{Entity}$ is correlated to at least one event: there exists $(e,n) \in \textit{E\_EN}$. \end{enumerate} \subsection{Semantics of Log-Event Relations}\label{sec:semantics:l_e} Log-Event relationships $L\_E$ explicitly encode which event belongs to which log. Every event must be in exactly one log, and each log must have at least one event, as shown in Fig.~\ref{fig:pattern:l_e}. Formally, the following properties have to hold: \begin{enumerate} \item $\textit{L\_E} \subseteq \textit{Log} \times \textit{Event}$ \item Each event $e \in \textit{Event}$ is in exactly one event log: there exists exactly one $r \in \textit{L\_E}$ with $\overrightarrow{r} = (l,e)$. \item Each event log $l \in \textit{Log}$ has at least one event: there exists at least one $r \in \textit{L\_E}$ with $\overrightarrow{r} = (l,e)$. \end{enumerate} \subsection{Semantics of Directly-Follows Relation} \label{sec:semantics:df} We model temporal relations as paths of \emph{:DF} relationships over \emph{:Event} nodes. Each \emph{:DF} relationship must go forward in time from the point of view of an \emph{:Entity} node correlated to \emph{both} events involved, as shown in Fig.~\ref{fig:pattern:df}. Overall, all \emph{:DF} relationships induce a \emph{partial order}. Formally, the following properties have to hold: \begin{enumerate} \item $\textit{DF} \subseteq \textit{Event} \times \textit{Event}$ \item For every $(e_1,e_2) \in \textit{DF}$, there exists a log $l \in \textit{Log}$ with $(l,e_1)$ and $(l,e_2) \in \textit{L\_E}$. \item For every $(e_1,e_2) \in \textit{DF}$, $e_1.\textit{Timestamp} \leq e_2.\textit{Timestamp}$ holds, i.e., events are ordered by time. \item For every $(e_1,e_2) \in \textit{DF}$, there exists an entity $n \in \textit{Entity}$ with $(e_1,n)$ and $(e_2,n) \in \textit{E\_EN}$ such that $n.\mathit{EntityType} \in (e_1,e_2).\mathit{EntityTypes}$, and there exists no event $e_x$ correlated to $n$, $(e_x,n) \in \textit{E\_EN}$, such that $e_1.\textit{Timestamp} < e_x.\textit{Timestamp} < e_2.\textit{Timestamp}$ holds, i.e., $e_2$ directly-follows $e_1$ from the perspective of entity $n$. \item For all events $e_1$, $(e_1,e_1) \not\in \textit{DF}$, i.e., \emph{:DF} is irreflexive. \item For all events $e_0,\ldots,e_n \in \textit{Event}$ there exists no cycle
$(e_0,e_1),(e_1,e_2),\ldots,(e_{n-1},e_n),(e_n,e_0) \in \textit{DF}$, i.e., \emph{:DF} is acyclic and hence the transitive closure of \emph{:DF} is a partial order. \end{enumerate} \subsection{Semantics of Event-Class relation}\label{sec:semantics:e_c} Events relate to \emph{:Class} nodes in a similar way as to \emph{:Entity} nodes: relationship \emph{:E\_C} relates each event to at least one class, but to at most one class per class type. Formally, the following properties have to hold: \begin{enumerate} \item $\textit{E\_C} \subseteq \textit{Event} \times \textit{Class}$ \item Each event $e \in \textit{Event}$ has at least one event class: there exists $(e,c) \in \textit{E\_C}$, and there are no two classes $(e,c_1),(e,c_2) \in \textit{E\_C}$ with $c_1\neq c_2$ and $c_1.\mathit{Type} = c_2.\mathit{Type}$. \end{enumerate} \subsection{Semantics of Class-level Directly-Follows Relation}\label{sec:semantics:df_c} The class-level directly-follows relationship \emph{:DF\_C} is only defined between \emph{:Class} nodes of the same type, and may only aggregate \emph{:DF} relationships between events correlated to the same entity, as shown in Fig.~\ref{fig:pattern:dfc}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/semantics_class_dfc.pdf} \caption{Correct Semantic Pattern of :E\_C and :DF\_C Relationship} \label{fig:pattern:dfc} \end{figure} Formally, the following properties have to hold: \begin{enumerate} \item $\textit{DF\_C} \subseteq \textit{Class} \times \textit{Class}$ \item Any two related classes $(c_1,c_2) \in \textit{DF\_C}$ are of the same type: $c_1.\mathit{Type} = c_2.\mathit{Type}$. \item For any two related classes $(c_1,c_2) \in \textit{DF\_C}$ there exist events $e_1,e_2$ of these classes, $(e_1,c_1),(e_2,c_2) \in \textit{E\_C}$, ordered in the same way as $c_1$ and $c_2$, $(e_1,e_2) \in \textit{DF}$, for corresponding entity types, $(e_1,e_2).\mathit{EntityTypes} \subseteq (c_1,c_2).\mathit{EntityTypes}$. \end{enumerate} \subsection{Refined Directly-Follows Relation} \label{sec:semantics:df_refined} The \emph{:DF} relationship defined in Sect.~\ref{sec:semantics:df} ensures that there is a single \emph{:DF} relationship between any two ordered events, allowing for writing simple queries. However, queries over paths cannot easily restrict \emph{:DF} relationships to specific entity types, as these are stored as lists. This renders queries over multiple entities very complex or infeasible, such as \emph{Q6} in Sect.~\ref{sec:querying}. Here, we discuss two options to refine \emph{:DF} relationships. We can refine the relationship label :DF into a \emph{set} of relationship labels with one :DF\_$\textit{type}$ for each value $\textit{type}$ of an \textit{EntityType} property occurring in the graph. In the example of Fig.~\ref{fig:pattern:labelDF}, :DF is refined into three labels :DF\_Application, :DF\_Offer, and :DF\_A+O. All relationships of :DF\_$\textit{type}$ have to satisfy the constraints of Sect.~\ref{sec:semantics:df} and, additionally, if $(e_1,e_2) \in \textit{DF\_type}$, then both events are correlated, $(e_1,n),(e_2,n) \in \textit{E\_EN}$, to an entity $n$ of this type, $n.\textit{EntityType} = \textit{type}$. \begin{figure}[t]\centering \includegraphics[scale=0.7]{figures/example_bpic17_lpg_event_entity_df_label.pdf} \caption{Graph of Fig.~\ref{fig:example_entity_events_df} with refined directly-follows relationship}\label{fig:pattern:labelDF} \end{figure} In the resulting model, there can be as many dedicated :DF\textit{\_type} relations between two events as there are entity types in the data; a hedged sketch of such a refinement query is shown below.
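The refinement can be realized with one query per entity type. The following is a minimal sketch under the assumption that :DF relationships carry the \emph{EntityTypes} list property of Sect.~\ref{ssec:DFgeneral}; it instantiates :DF\_$\textit{type}$ for entity type \emph{Application} of the running example:
\begin{lstlisting}[style=smallStyle]
// Sketch: materialize the refined label :DF_Application from all
// :DF relationships that hold for entity type "Application"
MATCH (e1:Event) -[df:DF]-> (e2:Event)
WHERE "Application" IN df.EntityTypes
MERGE (e1) -[:DF_Application]-> (e2)
\end{lstlisting}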
As a consequence, the required disk space can grow significantly, depending on the log size and the number of entity types. The analysis in turn becomes more efficient because queries can directly match :DF\_$\textit{type}$ labels, which allowed us to define a query for Q6 (see Sect.~\ref{sec:querying}). We can refine the :DF\_C relation to :DF\_$\textit{type}$ relations in the same way. Next to encoding the entity type in the label, we can also add a property \emph{EntityType} to the :DF relationships, leading to a similar data model with the same number of relationships but all using the same :DF label. Formally, we now may have more than one :DF relationship between two events. This encoding has advantages when writing queries for aggregating :DF relations. \section{Translating Event Logs to Labeled Property Graphs}\label{sec:storing} We now present a semi-automatic procedure for translating event tables with multiple entity identifiers in CSV format (cf. Sect.~\ref{sec:background:multi-dim-literature}) into the graph data structure introduced in Sect.~\ref{sec:represent}, satisfying the semantic constraints of Sect.~\ref{sec:semantics}. In a nutshell, our method has the following steps. (\ref{sec:storing:source_format}) We assume the event data to be given in the form of an event table where each record describes one event. (\ref{sec:storing:import_events}) We translate each record with all its attributes to an \emph{:Event} node in the LPG with corresponding properties, obtaining a graph of unrelated \emph{:Event} nodes. (\ref{sec:storing:log_nodes}) We create \emph{:Log} nodes for each log in the source data set and correlate them with the respective \emph{:Event} nodes. (\ref{sec:storing:correlate}) We provide query templates to extract \emph{:Entity} nodes from \emph{:Event} properties (e.g., identifiers) and to correlate \emph{:Event} nodes to all their \emph{:Entity} nodes. (\ref{sec:storing:df}) A generic query derives the entity-specific directly-follows \emph{:DF} relationships between events. (\ref{sec:storing:reify}) Finally, we provide queries to reify relations between entities into new composite entities, allowing us to derive \emph{:DF} relationships of interactions between entities. We explain the queries on the running example of Tab.~\ref{tab:2_BPIC17ExampleCaseTable}. We demonstrate the types of graphs obtained on the full BPIC17 dataset~\cite{BPIC2017} in Sect.~\ref{sec:storing:demo_bpic17} and report on a quantitative evaluation of all datasets in Sect.~\ref{sec:storing:evaluation}. \subsection{Source Event Data Format}\label{sec:storing:source_format} We expect the event data of the source log to be in event table format (see Sect.~\ref{sec:background:multi-dim-literature}), defining columns \emph{Activity} and \emph{Timestamp} and multiple columns \emph{Attribute1},\ldots,\emph{AttributeN} that also contain entity identifiers. In case the data comes from multiple logs, a \emph{LogID} column is also required. \subsection{Import the Events}\label{sec:storing:import_events} The following Cypher query imports the entire event table into the graph such that each row of the table translates to one :Event node with one property per attribute (column) in the table.
Importing the first four rows of Tab.~\ref{tab:2_BPIC17ExampleCaseTable} results in the graph shown in Fig.~\ref{fig:4_3_improted_events}. \begin{lstlisting}[escapechar=\#,style=smallStyle]
LOAD CSV WITH HEADERS FROM "file:///#\textit{filename}#.csv" AS line
CREATE (e:Event {LogID: line.LogID, Activity: line.Activity,
  Timestamp: datetime(line.Timestamp),
  #\textit{\$Attribute1}#: line.#\textit{\$Attribute1}#, ...,
  #\textit{\$AttributeN}#: line.#\textit{\$AttributeN}# })
\end{lstlisting} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/import_example_events.pdf} \caption{Graph after step 1: event nodes with properties} \label{fig:4_3_improted_events} \end{figure} \subsection{Create Logs}\label{sec:storing:log_nodes} Next we create the log nodes. With the MERGE command, we generate exactly one new :Log node per distinct e.LogID in the data set. \begin{lstlisting}[style=smallStyle]
MATCH (e:Event) MERGE (:Log {ID: e.LogID})
\end{lstlisting} Then we correlate all events to the respective log nodes, resulting in the structure shown at the top of Fig.~\ref{fig:4_4_logs}(left). The resulting graph conforms to the constraints of Sect.~\ref{sec:semantics:l_e}. \begin{lstlisting}[style=smallStyle]
MATCH (e:Event) MATCH (l:Log) WHERE e.LogID = l.ID
CREATE (l)-[:L_E]->(e)
\end{lstlisting} \subsection{Create Entity Nodes and Correlate Events (using Domain Knowledge)}\label{sec:storing:correlate} We create entities and correlate events to entities in two steps. First, we identify and create the set of all entities that occur in the data by creating \emph{:Entity} nodes. Then, we correlate each event to all its entities by creating \emph{:E\_EN} relationships. Creating entities from events requires \emph{domain knowledge} about how entity types and identifiers are stored in the event data. The user decides whether the presence of an identifier allows correlating the event to an entity. In our running example, events can have two different entity identifiers: \emph{Appl} and \emph{oID} (see Tab.~\ref{tab:2_BPIC17ExampleCaseTable} or Fig.~\ref{fig:4_3_improted_events}). The property \textit{Origin} designates their ``owning'' entity \emph{Application}, \emph{Workflow}, or \emph{Offer}. For example, only events with \textit{e.Origin = ``A''} belong to the \emph{Application} entity with id \textit{e.Appl}. The following query template provides 3 parameters. \textit{Type} sets the entity type to which an event shall be correlated; \textit{EntityID} is the event property that defines the entity identifier to which the event shall be correlated; \emph{EntityProperty} allows correlating only those events having a specific property. Calling the template with \textit{EntityProperty} $\equiv$ ``\textit{e.Origin = `A'}'', \textit{EntityID} $\equiv$ ``\textit{Appl}'', and \textit{Type} $\equiv$ ``\textit{Application}'' creates one \emph{:Entity} node of type ``Application'' for each value of \emph{e.Appl} found in all event nodes where \textit{e.Origin = ``A''}, i.e., $e1$ and $e2$ in Fig.~\ref{fig:4_3_improted_events}. Prefixing the entity id with its type ensures a unique \emph{:Entity} node id (uID). \begin{lstlisting}[escapechar=\#,style=smallStyle]
MATCH (e:Event) WHERE #\textit{EntityProperty}#
WITH e.#\textit{EntityID}# AS id, '#\textit{Type}#' AS name
MERGE (en:Entity {ID:id, uID:(name+id), EntityType: name})
\end{lstlisting} The following query template with the same parameters correlates each matching event to its entity node.
\begin{lstlisting}[escapechar=\#,style=smallStyle]
MATCH (e:Event) WHERE #\textit{EntityProperty}#
MATCH (n:Entity {EntityType: '#\textit{Type}#'}) WHERE e.#\textit{EntityID}# = n.ID
CREATE (e)-[:E_EN]->(n)
\end{lstlisting} The above query template has to be executed for each entity type in the data. Assuming each event contains correlation information for at least one entity, the resulting graph conforms to the semantic requirements of Sect.~\ref{sec:semantics:e_en}. The event graph after creating entities for Application, Workflow, and Offer is shown in Fig.~\ref{fig:4_5_entities}(left). \begin{figure}[tb] \centering \includegraphics[scale=0.5]{figures/import_example_entities.pdf} \includegraphics[scale=0.5]{figures/import_example_df.pdf} \caption{Graph after creating log and entity nodes (left) and after adding directly-follows relationships between Application events (right)} \label{fig:4_4_logs}\label{fig:4_5_entities}\label{fig:4_5_df} \end{figure} \subsection{Create Entity-specific Directly-Follows Relation}\label{sec:storing:df} We derive the directly-follows relation per entity from the \emph{Timestamp} attribute of each event. We collect all \emph{:Event} nodes correlated to the same \emph{:Entity} node $n$ via \emph{:E\_EN} (lines 1-2 in the query below), order all events by their \emph{Timestamp} attribute, and collect them in an $\textit{eventList}$ of length $k$ (lines 3-4). We then iterate over the 0-indexed $\textit{eventList} = \langle e_0,\ldots,e_{k-1} \rangle$ (lines 5-6) and create a \emph{:DF} relationship from $e_i$ to $e_{i+1}$ for each $i=0,\ldots,k-2$ (line 7). \begin{lstlisting}[style=smallStyle]
MATCH (n:Entity)
MATCH (n) <-[:E_EN]- (ev)
WITH n, ev AS events ORDER BY ev.Timestamp, ID(ev)
WITH n, collect(events) AS eventList
UNWIND range(0,size(eventList)-2) AS i
WITH n, eventList[i] AS e1, eventList[i+1] AS e2
MERGE (e1) -[df:DF {EntityType:n.EntityType}]-> (e2)
\end{lstlisting} The data may contain events with identical timestamps, typically due to coarse-grained or imprecise recording~\cite{DBLP:conf/bpm/LuFA14,DBLP:conf/bpm/PegoraroUA19}. To ensure that all directly-follows relations form a directed acyclic graph (see Sect.~\ref{sec:semantics:df}), we need to provide a globally consistent ordering for events with identical timestamps. We do so using the internal unique $ID(ev)$ of the \emph{:Event} nodes in line 3 to order events by $ID(ev)$ in case their timestamps are identical. As we import the events in the same order as in the source data, $ID(ev)$ is consistent with the implicit ordering in the source data. The query creates \emph{:DF} relationships for events \emph{per entity} node in the graph; through \emph{MERGE} in line 7, we ensure that we add at most one \emph{:DF} relationship per \emph{EntityType} between two events, as discussed in Sect.~\ref{sec:semantics:df_refined}. We may also create entity type-specific \emph{:DF\_type} relationships when using \emph{type} as parameter: we add \emph{WHERE n.EntityType = type} in line 1 and use \emph{MERGE ( e1 ) -[df:DF\_type]-$>$( e2 )} instead in line 7. Figure~\ref{fig:4_5_df}(right) shows a \emph{:DF\_Application} relationship created between $e1$ and $e2$ of the running example using this adapted query; the fully instantiated query is sketched below. Creating \emph{:DF} relationships in this way conforms to the constraints of Sect.~\ref{sec:semantics:df}.
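For concreteness, the adapted query instantiated for \emph{type} $\equiv$ \emph{Application} reads as follows; this is a sketch that merely combines the two adaptations described above:
\begin{lstlisting}[style=smallStyle]
// Sketch: derive :DF_Application directly, instantiating the
// adaptations of lines 1 and 7 for EntityType "Application"
MATCH (n:Entity) WHERE n.EntityType = "Application"
MATCH (n) <-[:E_EN]- (ev)
WITH n, ev AS events ORDER BY ev.Timestamp, ID(ev)
WITH n, collect(events) AS eventList
UNWIND range(0,size(eventList)-2) AS i
WITH n, eventList[i] AS e1, eventList[i+1] AS e2
MERGE (e1) -[df:DF_Application]-> (e2)
\end{lstlisting}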
\subsection{Reify relations between entities into composite entities for describing interactions}\label{sec:storing:reify} Entity creation and correlation may leave events of different entities unrelated if an event is not explicitly related to more than one entity. In our running example of BPIC17~\cite{BPIC2017}, events are tightly correlated to either an Application, Offer, or Workflow entity as shown in Fig.~\ref{fig:4_5_entities}(left). Deriving directly-follows relations per entity as in Sect.~\ref{sec:storing:df} leaves these entities disconnected. We cannot connect \emph{Offer} entities by further correlating \emph{Offer} events to other existing entities such as \emph{Application}. If we correlated $e3$ and $e4$ directly to \emph{Application 1} by entity identifier \emph{Appl.}, we would ``pollute'' the directly-follows relation of \emph{Application 1} with events that are only remotely related to it, resulting in convergence errors (see Sect.~\ref{sec:background:multi-dim-req}). Instead, we have to model the \emph{interactions} between two entities $n1$ and $n2$ by \emph{reifying} the relation between $n1$ and $n2$ into a composite entity $r$\,---\,and then derive \emph{:DF} relationships for $r$~\cite{DBLP:conf/apn/Fahland19,DBLP:journals/tsc/LuNWF15}. This also requires domain knowledge. As our data model starts from recorded events, we have to infer relations between entities from event attributes. Assume two events \emph{(e1:Event) -[:E\_EN]-$\mathord{>}$ (n1:Entity)} and \emph{(e2:Event) -[:E\_EN]-$\mathord{>}$ (n2:Entity)} are correlated to different entities \emph{n1 $\mathord{<>}$ n2}. If \emph{e2} contains some property \emph{refto1} referencing the entity identifier \emph{ID} of \emph{n1}, i.e., a foreign key, we observe that \emph{n2} is related to \emph{n1} through event \emph{e2}. In our running example, we observe that \emph{Offer 1} is related to \emph{Application 1} through event \emph{e4} of \emph{Offer 1} via property \emph{Appl.}, see Fig.~\ref{fig:4_5_entities_composite}. We lift this observation to entity types. The relation $R$ from entity \emph{type1} to entity \emph{type2} is the set of all pairs $(n1.\mathit{ID},n2.\mathit{ID})$ where (1) there is an entity $n1$ of \emph{type1}, (2) some event \emph{(e2:Event) -[:E\_EN]-$\mathord{>}$ (n2:Entity)} is correlated to entity $n2$ of \emph{type2}, (3) with $e2.\textit{refto1} = n1.\textit{ID}$. Lines 1-5 of the query template below construct this relation $R$ for chosen parameters \emph{type1}, \emph{type2}, and \emph{refto1}. Line 6 reifies relation $R$ by creating a new \emph{composite} entity $r$ of \emph{typeR} for each pair $(n1.\textit{ID},n2.\textit{ID})$. \begin{lstlisting}[escapechar=\#,style=smallStyle]
MATCH (e1:Event) -[:E_EN]-> (n1:Entity) WHERE n1.EntityType='#\emph{type1}#'
MATCH (e2:Event) -[:E_EN]-> (n2:Entity) WHERE n2.EntityType='#\emph{type2}#'
  AND n1 <> n2 AND e2.#\emph{refto1}# = n1.ID
WITH DISTINCT n1.ID AS n1_id, n2.ID AS n2_id
WHERE n1_id <> 'Unknown' AND n2_id <> 'Unknown'
CREATE (r:Entity {ID:n1_id+'_'+n2_id, EntityType:'#\emph{typeR}#', #\emph{type1}#ID: n1_id, #\emph{type2}#ID: n2_id, uID:'#\emph{typeR}#_'+n1_id+'_'+n2_id})
\end{lstlisting} Applying the above queries for \emph{type1} $\equiv$ \emph{Application}, \emph{type2} $\equiv$ \emph{Offer}, \emph{refto1} $\equiv$ \emph{Appl}, and \emph{typeR} $\equiv$ \emph{Case\_AO} on our running example results in the new entity with ID $(1,1)$ in the graph of Fig.~\ref{fig:4_5_entities_composite}.
The new entity $r$ refers to $n1$ and $n2$ by properties \emph{type1ID} and \emph{type2ID}, e.g., \emph{ApplicationID} and \emph{OfferID}; $r$'s own \emph{ID} is the combination of $n1.\textit{ID}$ and $n2.\textit{ID}$. Any event $e$ correlated to an entity $n$ to which the composite entity $r$ refers (by $r.\textit{type1ID}$ or $r.\textit{type2ID}$) can now be correlated to $r$ using the following generic query template. \begin{lstlisting}[escapechar=\#,style=smallStyle]
MATCH (e:Event) -[:E_EN]-> (n:Entity) WHERE n.EntityType='#\emph{type}#'
MATCH (r:Entity) WHERE r.EntityType = '#\emph{typeR}#' AND n.ID = r.#\emph{type}#ID
CREATE (e) -[:E_EN]-> (r)
\end{lstlisting} \begin{figure} \centering \includegraphics[scale=0.5]{figures/import_example_entities_composite.pdf} \caption{Graph after reifying relation between \emph{Application 1} and \emph{Offer 1} into composite entity \emph{Case\_AO (1,1)} } \label{fig:4_5_entities_composite} \end{figure} Applying the above query twice, once for \emph{type} $\equiv$ \emph{Application} and once for \emph{type} $\equiv$ \emph{Offer}, both with \emph{typeR} $\equiv$ \emph{Case\_AO}, adds the \emph{:E\_EN} relationships from $e2$ and $e4$ to \emph{Case\_AO} $(1,1)$ shown in Fig.~\ref{fig:4_5_entities_composite}. Depending on the available domain knowledge, the correlation query can be made more specific by adding \emph{WHERE} clauses to only correlate events that satisfy specific properties. We may now derive \emph{:DF} relationships for the composite entity \emph{typeR} using the queries of Sect.~\ref{sec:storing:df}; e.g., Fig.~\ref{fig:4_5_entities_composite} also shows relationship \emph{:DF\_Case\_AO} derived for the composite entity of type \emph{Case\_AO}. By correlating events of related entities \emph{Application 1} and \emph{Offer 1} to their own reified entity \emph{Case\_AO (1,1)}, we could construct a new directly-follows relation \emph{:DF\_Case\_AO} describing the interaction between \emph{Application 1} and \emph{Offer 1}. The original directly-follows relations \emph{:DF\_Application} and \emph{:DF\_Offer} remain as before and ``unpolluted''. \subsection{Demonstration on BPIC17}\label{sec:storing:demo_bpic17} We applied the above queries on the events of the full BPIC17 dataset~\cite{BPIC2017}. After importing all events\footnote{We filtered out events with lifecycle attribute \emph{suspend} or \emph{resume} to reduce the size of the figures.}, we derived entities for 3 types: \emph{Application}, \emph{Workflow}, and \emph{Offer}. We reified the binary relations between these three entity types into \emph{Case\_AO}, \emph{Case\_AW}, \emph{Case\_WO} and derived entity-specific \emph{:DF\_type} relationships. Figure~\ref{fig:bpic17_singlecase} shows the graph of handling loan application 681547497 involving one Application entity (dark blue), one Workflow entity (light blue), and two Offer entities (orange). Interactions are shown through the grey \emph{:DF}-relationships of \emph{Case\_AO}, \emph{Case\_AW}, and \emph{Case\_WO}.\footnote{To simplify the visualization, the graph does not contain \emph{:DF\_Case\_AO}, \emph{:DF\_Case\_AW}, \emph{:DF\_Case\_WO} relationships which are in parallel to a \emph{DF\_Application}, \emph{DF\_Workflow}, \emph{DF\_Offer} relationship.} The graph shows how both offers are created and handled concurrently to the application entity.
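Such case graphs can be retrieved through standard Cypher. The following hedged sketch (it assumes the original BPIC17 case attribute is stored as event property \emph{case}, cf. Sect.~\ref{sec:querying}, and the refined \emph{:DF\_type} labels derived above) returns all events and entity-specific directly-follows relationships of one process execution:
\begin{lstlisting}[style=smallStyle]
// Sketch: retrieve all events of one process execution together
// with their entity-specific directly-follows relationships;
// assumes the BPIC17 case attribute is stored as property "case"
MATCH (e1:Event {case: "Application_681547497"})
      -[df]-> (e2:Event {case: "Application_681547497"})
WHERE type(df) STARTS WITH "DF_"
RETURN e1, df, e2
\end{lstlisting}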
\begin{figure} \centering \includegraphics[width=\linewidth]{figures/bpic17_graph_single_case_681547497_parallel_offers.pdf} \includegraphics[width=\linewidth]{figures/bpic17_graph_single_case_681547497_parallel_offers_cutout.pdf} \caption{Graph of handling loan application 681547497 in BPIC17~\cite{BPIC2017} (top) with detail of two parallel offers (bottom)} \label{fig:bpic17_singlecase} \end{figure} Figure~\ref{fig:bpic17_multiple_cases} visualizes 7 randomly selected process executions: the 1st and 4th involve only one Offer whereas all others involve two offer entities; some executions in BPIC17 involve 5 or more offer entities. Offers may be created in parallel (2nd, 7th) or with Application events in between (3rd, 5th, 6th). Offers may conclude in parallel (2nd, 3rd, 4th) or with Application events in between (6th, 7th). \begin{figure} \centering \includegraphics[width=\linewidth]{figures/bpic17_graph_multiple_cases_single_parallel_offers.pdf} \caption{Graph of 7 randomly selected process executions of BPIC17~\cite{BPIC2017}} \label{fig:bpic17_multiple_cases} \end{figure} We then derived \emph{Resource} as an additional entity from the \emph{e.resource} property of events. While Application, Workflow, and Offer are local to a process execution, the \emph{Resource} entities describe workers who persist in the system and work on many entities. We derived the \emph{:DF\_Resource} relationships. Querying the data for the events of the 7 process executions of Fig.~\ref{fig:bpic17_multiple_cases} and the \emph{:DF} relationships of all entities results in the graph in Fig.~\ref{fig:bpic17_resources_across_cases}, where \emph{:DF\_Resource} relationships are shown in red. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/bpic17_graph_resource_connects_all.pdf} \includegraphics[width=\linewidth]{figures/bpic17_graph_resource_connects_all_zoom_in.pdf} \caption{Behavior of resources overlaid on the 7 process executions of Fig.~\ref{fig:bpic17_multiple_cases} (top) and some details (bottom).} \label{fig:bpic17_resources_across_cases} \end{figure} We can clearly see that all process executions and entities are tightly connected through the resources. Each resource is always involved in a sequence of several events of the same or related entities, and then moves to another entity in another process execution while handing the previous entity over to another resource. Overlaying \emph{:DF\_Resource} relationships also allows us to see that interactions between related Application, Workflow, and Offer entities of the same process execution are not explained by Resource entities. In the graph in Fig.~\ref{fig:bpic17_resource_vs_derived}, the first event of \emph{Offer\_1647347263} correlated to \emph{User\_85} follows an Application event (via \emph{Case\_AO}) correlated to \emph{User\_7}, i.e., there is no resource explaining the ordering of Application and Offer. This confirms the importance of reifying relations between entities into composite entities\,---\,otherwise the graph would state that \emph{Offer\_1647347263} starts concurrently to all preceding events.
\begin{figure} \centering \includegraphics[width=\linewidth]{figures/bpic17_graph_two_cases_681547497_2014483796_resource_dervied_df_cutout.pdf} \caption{Resource behavior does not explain entity interactions} \label{fig:bpic17_resource_vs_derived} \end{figure} \subsection{Evaluation}\label{sec:storing:evaluation} We applied the above steps for importing and transforming the event data into our proposed graph-based data model on 5 real-life datasets~\cite{BPIC2014,BPIC2015,BPIC2016,BPIC2017,BPIC2019}\footnote{For BPIC2016, we omitted all click events without a session identifier as these could not be correlated.} using a Neo4j instance with 20GB of main memory allocated.\footnote{The Cypher queries are available at \url{https://github.com/multi-dimensional-process-mining/graphdb-eventlogs} and at~\cite{graphdataset}.} We measured the time required for the conversion and the space requirements for storing the data in Neo4j. All translations succeeded within several minutes, even for the largest datasets; however, explicitly encoding the structural information requires significant space. The results are shown in Table~\ref{tab:logStatsAll}. We observed execution time and space to grow linearly with the number of entities to derive and relations to reify: per event and entity, we derive one \emph{:E\_EN} relationship and one \emph{:DF} relationship. In general, the size of the source log is not a solid indicator of the size of a graph event log. For example, the BPIC'14 log is small in size but defines several related entities, resulting in a large graph. More importantly, any graph can be adapted to the needs of a particular research question, e.g., by deriving only a limited number of (composite) entities. \begin{table}[]\small \begin{tabular}{lrrrrrrr} \hline &&&&&\multicolumn{2}{c}{size (GB)} & time\\ Data Set & Events & Entities & Nodes & Rel'ships & DB & Source & (mins) \\ \hline BPIC'14 & 690,622 & 225,932 & 916,905 & 3,917,800 & 2.05 & 0.08 & 2 \\ BPIC'15 & 262,628 & 5,649 & 268,638 & 1,553,448 & 5.24 & 0.11 & $<1$ \\ BPIC'16 & 7,360,146 & 26,647 & 7,387,414 & 29,413,937 & 17.0 & 1.06 & 16 \\ BPIC'17 & 1,202,267 & 255,170 & 2,614,200 & 17,459,752 & 7.92 & 0.29 & 20 \\ BPIC'19 & 1,595,923 & 328,083 & 1,924,049 & 9,247,455 & 6.02 & 0.52 & 6 \\ \hline \end{tabular} \caption{Statistics on converting 5 real-life event datasets into labeled property graphs} \label{tab:logStatsAll} \end{table} \section{Querying Multi-Dimensional Event Data}\label{sec:querying} In the following we present 6 classes of analysis questions that we formulated to evaluate requirements R5-R11 of Sect.~\ref{sec:background:multi-dim-req} for querying multi-dimensional event data on the LPGs of Sect.~\ref{sec:storing}. For each question we provide a Cypher query and report results and the query processing times (measured on an Intel i7 CPU @ 2.6 GHz machine with 32 GB of memory, using Neo4j Browser). We conducted the experiment on the BPIC17 dataset, for which we additionally derived the \emph{Case\_AWO} entity type, which corresponds to the original case notion, i.e., all events sharing the same \emph{e.case} attribute. We did this in order to be able to verify the correctness of our results against classical process mining software, which works with the original case notion only. \par\vspace{.5em}\noindent\textbf{Q1. Query Attributes of Events/Cases}. We want to query for the first-class concepts of event logs: a case and an event, based on event/case attributes, by using partial patterns to satisfy R7.
The following query returns the event attribute ``end'' and the case attribute ``LoanGoal'' of Case ``Application\_681547497''. Note that all (event and entity) attributes are encoded as properties of event nodes. \begin{lstlisting}[style=smallStyle] MATCH (c:Entity {EntityType: 'Case_AWO'}) <-[:E_EN]- (e:Event) WHERE c.ID = "Application_681547497" AND e.Activity = "A_Submitted" RETURN e.end, e.LoanGoal \end{lstlisting} The query was processed in 0.061 seconds. After modifying the query to consider all cases, i.e., removing the condition for a specific case in line 2, the query completed in 0.582 seconds. \par\vspace{.5em}\noindent\textbf{Q2. Query Directly-Follows Relations}. Q2 is focused on temporal aspects. Here we show a query that satisfies R8 by considering 2 consecutive events. Directly-follows relations of events in a case are an important characteristic of event logs as they represent the case-internal temporal order of events, and many of today's process mining techniques rely on these relations. The query below returns the event directly following the node with the activity property ``O\_Created'' of a given offer entity by matching the \textit{:DF\_Offer} relationship. \begin{lstlisting}[style=smallStyle] MATCH (o:Entity {EntityType: 'Offer'}) <-[:E_EN]- (e1:Event) -[:DF_Offer]-> (e2:Event) WHERE o.ID = "Offer_716078829" AND e1.Activity = "O_Created" RETURN e1,e2 \end{lstlisting} The query execution time for one specific offer was 0.064 seconds, whereas querying the \textit{:DF\_Offer} relations with source node ``O\_Created'' for all 42,995 offers took 11.932 seconds. Directly-follows relations of other entities (Application and Workflow) or across entities (Case\_AWO) can be queried by adjusting the query in the \textit{MATCH} and \textit{WHERE} clauses accordingly. \par\vspace{.5em}\noindent\textbf{Q3. Query Eventually-Follows Relations}. We want a query that satisfies R8 by considering the temporal relationship of any 2 events of a case. Eventually-follows relations are also related to the case-internal order of events. Event $y$ \emph{eventually follows} event $x$ if $y$ occurs after $x$ in the same case, that is, if $x$ and $y$ are connected through a path of directly-follows relations of arbitrary length. We query the offer-specific eventually-follows relationship between ``O\_Created'' and ``O\_Cancelled'' for a given offer as follows: \begin{lstlisting}[style=smallStyle] MATCH (o:Entity {EntityType: 'Offer'}) <-[:E_EN]- (e1:Event) -[:DF_Offer*]-> (e2:Event) WHERE o.ID = "Offer_716078829" AND e1.Activity = "O_Created" AND e2.Activity = "O_Cancelled" RETURN e1,e2 \end{lstlisting} Even though the \textit{MATCH} clause looks similar to the one of the directly-follows query, the $*$ operator changes the pattern from a direct relationship to a path of arbitrary length. Since we want to find the eventually-follows relationship of two specific activities, we also added condition $\mathit{e2.Activity} = \textit{``O\_Cancelled''}$ to the \textit{WHERE} clause to define the endpoint of the paths we want to match in the graph. For the given offer the query took 0.068 seconds. To query all 20,898 offers where ``O\_Created'' is eventually followed by ``O\_Cancelled'', we removed the condition for ``Offer\_716078829'' from the query, which then took 4.264 seconds. \par\vspace{.5em}\noindent\textbf{Q4. Case Variants}. We want a query to return a case variant as a path in the graph to satisfy R6. A \emph{case variant} is the sequence of activities of a case.
Case variants are, for example, used to detect frequent behavior of a process. We can query the graph to retrieve the path of events of a case by walking over all of its directly-follows relationships from the first to the last event. For a given case (Case\_AWO) this can be done as follows: \begin{lstlisting}[style=smallStyle] MATCH (c:Entity {EntityType: 'Case_AWO'}) <-[:E_EN]- (e1:Event) -[:DF_Case_AWO*]-> (e2:Event) WHERE NOT ()-[:DF_Case_AWO]->(e1) AND NOT (e2)-[:DF_Case_AWO]->() AND c.ID = 'Application_681547497' RETURN (e1) -[:DF_Case_AWO*]-> (e2) AS paths \end{lstlisting} The pattern of the match clause follows the same logic as the eventually-follows match pattern. For variants we limit the output to the first and last event of a case, i.e., the events that have no incoming or no outgoing ``:DF\_Case\_AWO'' relationship. The query completed in 0.079 seconds. Similarly, we can query the graph for variants of another entity such as Offer. The paths of events returned by the above query can be turned into a list of activity sequences by Cypher's list operators: \textit{UNWIND} processes each path in the \textit{paths} variable iteratively, function \textit{nodes()} translates the path into a list of nodes, and list comprehension maps each event node to its activity property (a sketch of this post-processing is given below). The resulting list of activities can be compared for equality with other lists, etc. \par\vspace{.5em}\noindent\textbf{Q5. Query Duration/Distance between two specific Activities}. The information on how much time or how many activities were needed to get an Offer from ``O\_Created'' to ``O\_Accepted'', for example, can be used to measure process performance. For Q5 we want to query temporal relations in the form of durations and path lengths to satisfy R8. Say we are interested in the offer entity that took the longest time to get accepted. We can query the eventually-follows relation of two given activities and use their timestamps to calculate the elapsed time between them: \begin{lstlisting}[style=smallStyle] MATCH (e1:Event) -[:E_EN]-> (o:Entity {EntityType: 'Offer'}) <-[:E_EN]- (e2:Event) WHERE e1.Activity = "O_Created" and e2.Activity = "O_Accepted" WITH e1, e2, duration.between(e1.timestamp, e2.timestamp) AS time, o ORDER BY time DESC LIMIT 1 RETURN e1, e2, time.days AS days, (toFloat(time.minutes)/60) AS hours, o \end{lstlisting} The query matches all \textit{:E\_EN} relationships, filters for the given activities and then uses Cypher's duration function to calculate the time spans. Only the result with the longest duration is returned. In case we want to retrieve the distance with respect to the number of activities, we can aggregate over the nodes along the path between the two events with an eventually-follows relation and count the hops with the ``Length()'' function as shown in~\cite{Esser2019cs_tue}. The query for the elapsed time completed in 0.585 seconds. Querying for the longest path took 0.755 seconds. \par\vspace{.5em}\noindent\textbf{Q6. Query for Behavior across Multi-Instance Relations}. Event logs such as BPIC'17 can contain multiple case identifiers. A case identifier may be a single entity, e.g., Offer, or any combination of entities such as the Case notion of BPIC'17 combining Application, Workflow and Offer entities. Querying the behavior across different instances of these entities typically requires multiple steps with traditional event logs, such as custom scripts, to select, project, aggregate, and combine the results accordingly.
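As an aside to Q4 above: a minimal sketch of the described post-processing, assuming the result of the Q4 query is bound to a variable \textit{paths} (the variable names are ours):
\begin{lstlisting}[style=smallStyle]
// Sketch: map each path to its sequence of activity names
UNWIND paths AS p
WITH [e IN nodes(p) | e.Activity] AS variant
RETURN variant, count(*) AS frequency
\end{lstlisting}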
With Q6 we want to satisfy R9 by querying for events correlated to the same entity, R10 by combining data from different entities in the same query, and to satisfy R11 by querying 2 (sub)processes in a single query. We defined a query that returns all paths from ``A\_Create Application'' to ``O\_Cancelled'' of the BPIC'17 Cases for Offers that have ``O\_Created'' directly followed by ``O\_Cancelled'' on the entity level, but only for those Cases that have more than one Offer with ``O\_Created'' directly followed by ``O\_Cancelled''. \begin{lstlisting}[style=smallStyle] MATCH (o:Entity {EntityType: "Offer"})<-[:E_EN]-(e1:Event {Activity: "O_Created"}) -[df:DF_Offer]-> (e2:Event {Activity: "O_Cancelled"})-[:E_EN]->(o) MATCH (e2)-[:E_EN]->(c:Entity {EntityType: "Case_AWO"})<-[:E_EN]-(e1)-[:E_EN]->(o) WITH c, count(o) AS ct WHERE ct > 1 MATCH (o:Entity {EntityType: "Offer"})<-[:E_EN]-(e1:Event {Activity: "O_Created"}) -[df:DF_Offer]-> (e2:Event {Activity: "O_Cancelled"})-[:E_EN]->(o) MATCH (e2)-[:E_EN]->(c)<-[:E_EN]-(e1)-[:E_EN]->(o) WITH e2 AS O_Cancelled,c MATCH (A_Created:Event {Activity: "A_Create Application"})-[:E_EN]->(c)<-[:E_EN]-(O_Cancelled:Event {Activity: "O_Cancelled"}) MATCH p = (A_Created) -[:DF_Case_AWO*]-> (O_Cancelled) RETURN p \end{lstlisting} The query demonstrates several central properties of querying multi-dimensional event data in labeled property graphs. The first \textit{MATCH} clauses (lines 1-4) return all case nodes of Cases structurally related to more than one Offer (via \textit{:E\_EN}) where ``O\_Created'' is directly followed by ``O\_Cancelled'' in this offer (via \textit{:DF\_Offer}). Note that the case (Case\_AWO) typically has several other events not related to the offer in between the two events, i.e., they \emph{only} directly follow each other according to \textit{:DF\_Offer} but not according to \textit{:DF\_Case\_AWO}. The second pair of \textit{MATCH} clauses (lines 5-7) returns all ``O\_Cancelled'' events that directly succeed ``O\_Created'' (via \textit{:DF\_Offer}) in an Offer that is correlated to one of the cases with multiple offers (found in lines 1-4). The returned ``O\_Cancelled'' events are used in the last pair of \textit{MATCH} clauses (lines 8-9) to return paths from some ``A\_Create Application'' event to one of the ``O\_Cancelled'' events. This way we get a unique path for every Offer that meets the criteria. The query's execution time was 0.569 seconds in Neo4j Browser. Figure~\ref{fig:Q6} shows 2 of the 218 paths of the query's output in Neo4j's graphical representation; in both cases the ``O\_Created'' and ``O\_Cancelled'' events of one offer are interleaved with events from the Application or other offers. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{q7.pdf} \caption{Q6 Output} \label{fig:Q6} \end{figure} \par\vspace{.5em}\noindent\textbf{Discussion.} We validated the correctness of our queries against an independent baseline implementation. The results of Q1-Q5 were obtained by processing the event log with manual filtering in Disco. Q6 required a manual procedural algorithm using a single-pass search over the data, as evaluation with existing tools was not possible. Our Cypher queries obtained the same results as the baseline implementations~\cite{Esser2019cs_tue}. The graph analysis for Q1-Q6 required only Cypher queries with clauses and functions as described in~\cite{francis2018cypher} (except for the typecasts, which are not part of Cypher but provided by Neo4j).
Notably, the single-pass baseline algorithm for Q6 required 15 mins compared to the 0.453 secs needed to obtain the same results using Neo4j. Further details on the evaluation of Q1-Q6 regarding time and baselines can be found in~\cite{Esser2019cs_tue}. \section{Constructing Simple Models for Multiple Entities}\label{sec:mining} We now show that our data model of Sect.~\ref{sec:represent} allows \emph{aggregating} the directly-follows relations between events to directly-follows relations between event classes\,---\,taking the notion of entities and entity types into account. We provide queries that satisfy R12-R15 of Sect.~\ref{sec:background:multi-dim-req}. \subsection{Aggregating events into user-defined event classes} \label{sec:mining:class} An event \emph{:Class} is a node describing a \emph{set} of events with the same characteristics, e.g., having the same \emph{Activity} or some other combination of data attributes. We can aggregate events into user-defined event classes using the same principles as deriving and correlating \emph{:Entity} nodes: we query for all distinct values of a particular (combination of) event attributes and create a new \emph{:Class} node per retrieved value(s). We illustrate the concept for two kinds of event classes: the combination of activity name and life-cycle attribute, and the resource. \begin{lstlisting}[style=smallStyle] MATCH (e:Event) WITH distinct e.Activity AS actName,e.lifecycle AS lifecycle MERGE ( c : Class { Name:actName, Lifecycle:lifecycle, Type:"Activity+Lifecycle", ID: actName+"+"+lifecycle}) \end{lstlisting} \begin{lstlisting}[style=smallStyle] MATCH ( e : Event ) WITH distinct e.resource AS name MERGE ( c : Class { Type:"Resource", ID: name}) \end{lstlisting} We then link each \emph{:Class} node to all events of this class when they match on the defining attributes, as for correlating events to entities. We show the query for \emph{Activity+Lifecycle}. \begin{lstlisting}[style=smallStyle] MATCH ( c : Class ) WHERE c.Type = "Activity+Lifecycle" MATCH ( e : Event ) where e.Activity = c.Name AND e.lifecycle = c.Lifecycle CREATE ( e ) -[:E_C]-> ( c ) \end{lstlisting} We may also derive event classes based on behavioral properties of events, e.g., based on an event \emph{(e2:Event) -[:DF]-$\mathord{>}$ (e)} preceding $e$. The above queries satisfy the semantic constraints for \emph{:E\_C} of Sect.~\ref{sec:semantics:e_c}. \subsection{Aggregating directly-follows relations} \label{sec:mining:df_c} The \emph{(c1:Class) -[:DF\_C]-$\mathord{>}$ (c2:Class)} directly-follows relation between class $c1$ and class $c2$ aggregates all directly-follows relations \emph{(e1:Event) -[:DF]-$\mathord{>}$ (e2:Event)} between events $e1$ of $c1$ and $e2$ of $c2$, see Sect.~\ref{sec:semantics:df_c}. We provide a query for the refined \emph{:DF} relationships carrying the \emph{EntityType} property to distinguish the type for which they are defined (see Sect.~\ref{sec:semantics:df_refined}). We may only aggregate \emph{:DF} relationships between events correlated to the same entity $n$ (line 2) for which the \emph{:DF} relationship was also defined (line 3). Further, classes $c1$ and $c2$ must be of the same \emph{Type} (line 3). This ensures that we satisfy R15. We aggregate by counting how many \emph{:DF} relationships (df) exist between $c1$ and $c2$ (line 4) and create a \emph{:DF\_C} relationship for this entity type between $c1$ and $c2$.
\begin{lstlisting}[style=smallStyle] MATCH (c1:Class) <-[:E_C]- (e1:Event) -[df:DF]-> (e2:Event) -[:E_C]-> (c2:Class) MATCH (e1) -[:E_EN] -> (n) <-[:E_EN]- (e2) WHERE n.EntityType = df.EntityType AND c1.Type = c2.Type WITH n.EntityType as Type,c1,count(df) AS df_freq,c2 MERGE ( c1 ) -[rel2:DF_C {EntityType:Type}]-> ( c2 ) ON CREATE SET rel2.count=df_freq \end{lstlisting} Note that $c1$ and $c2$ can refer to the same node and thus self-loops are also included in the network. The query creates ``refined'' \emph{:DF\_C} relationships carrying the \emph{EntityType} property as discussed in Sect.~\ref{sec:semantics:df_refined}. Aggregating \emph{:DF\_type} relations requires a parameterized query to specify the label of the \emph{:DF\_type} relation to aggregate. Observe that the aggregation query builds only on concepts of our data model, \emph{Event}, \emph{Class}, and \emph{Entity} nodes and the relationships \emph{DF}, \emph{E\_EN}, \emph{E\_C}, and requires no further domain knowledge of the underlying event data. In other words, the aggregation query demonstrates that the data model provides the right abstraction concepts for event data over multiple entities. Because of this abstraction, the above query can be applied on ``normal'' entities as well as composite entities of reified relations, satisfying R14. \subsection{Demonstration on BPIC17} \label{sec:mining:demonstration_bpic17} We demonstrate the aggregation queries on the BPIC17 dataset for which we have already derived entities and \emph{:DF} relationships for \emph{Application, Workflow, Offer, Case\_AO, Case\_WO, Case\_AW} (see Sect.~\ref{sec:storing:demo_bpic17}), Resource, and the original case notion \emph{Case\_AWO} (see Sect.~\ref{sec:querying}). We consider two use cases: computing the handover-of-work social network between resources, and computing an ``artifact-centric'' directly-follows graph which distinguishes between different entity types. \paragraph{Handover of Work} We derived event \emph{:Class}es for \emph{Resource} as shown in Sect.~\ref{sec:mining:class}. Note that the \emph{:Class} nodes of type \emph{Resource} are semantically different from the \emph{:Entity} nodes of type \emph{Resource} created in Sect.~\ref{sec:storing:demo_bpic17}, although we have one node of each per \emph{e.resource} value. We restricted the aggregation query of Sect.~\ref{sec:mining:df_c} to aggregate only the \emph{:DF} relationships of \emph{Case\_AWO} for \emph{:Class}es of type \emph{Resource} (the restricted query is sketched below). We thereby obtain the Handover-of-Work social network; the \emph{count} property of the \emph{:DF\_C} relationship describes how often a resource handed work (of a specific \emph{Case\_AWO}) to another resource. The aggregation query had an execution time of 8.954 seconds. \begin{wrapfigure}{r}{.5\linewidth} \centering \includegraphics[width=\linewidth]{how.pdf} \caption{Handover of Work Network} \label{fig:HoW} \end{wrapfigure} We can retrieve this network with the query \emph{MATCH (c1) -[dfc:DF\_C]-$\mathord{>}$ (c2) WHERE c1.Type = ``Resource'' AND c2.Type = ``Resource'' AND dfc.EntityType = ``Case\_AWO''}. We verified the correctness of the query using the social network mining plugin of ProM (\url{www.promtools.org}), see~\cite{Esser2019cs_tue} for details. Figure~\ref{fig:HoW} shows the Neo4j graph output of the query above on a sample of 20 cases. With traditional event logs, creating a handover-of-work network typically requires the use of a tool or programming language, whereas Neo4j is capable of creating it by in-DB processing alone.
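For reference, the restriction described above amounts to fixing the class type and the entity type in the aggregation query of Sect.~\ref{sec:mining:df_c}; a minimal sketch:
\begin{lstlisting}[style=smallStyle]
// Sketch: aggregate only :DF of Case_AWO between Resource classes
MATCH (c1:Class) <-[:E_C]- (e1:Event) -[df:DF]-> (e2:Event) -[:E_C]-> (c2:Class)
MATCH (e1) -[:E_EN]-> (n) <-[:E_EN]- (e2)
WHERE n.EntityType = df.EntityType AND n.EntityType = 'Case_AWO'
  AND c1.Type = 'Resource' AND c2.Type = 'Resource'
WITH c1, count(df) AS df_freq, c2
MERGE (c1) -[rel:DF_C {EntityType:'Case_AWO'}]-> (c2)
ON CREATE SET rel.count = df_freq
\end{lstlisting}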
\paragraph{Mining behavioral models over multiple entities} In the same way, an aggregated directly-follows graph can be obtained in-DB by aggregating \textit{:DF} relations. We aggregated the \emph{:DF} relationships of Application, Workflow, Offer, Case\_AO, Case\_WO, Case\_AW, Case\_AWO for event class \emph{Activity+Lifecycle}. Figure~\ref{fig:mining:dfg}(left) shows the classical directly-follows graph that can be obtained by aggregating all \emph{:DF} relationships for the global case notion of the original log (entity type \emph{Case\_AWO}). Each node is a \emph{:Class} node and each edge is a \emph{:DF\_C} relationship of type \emph{Case\_AWO}; we only queried relationships with $\mathit{count} \geq 500$. Figure~\ref{fig:mining:split}(right) shows a process model discovered by the Split Miner~\cite{DBLP:journals/kais/AugustoCDRP19} from the same event log; the model has a fitness of 95\%, i.e., it cannot explain 5\% of the data. \begin{figure} \centering \includegraphics[height=8cm]{figures/bpic17_query_aggregated_dfg.pdf} \includegraphics[height=8cm]{figures/bpic17_sm.pdf} \caption{Classical directly-follows graph (left) and process model discovered by Split Miner on classical event log of BPIC17~\cite{BPIC2017}.} \label{fig:mining:dfg}\label{fig:mining:split} \end{figure} However, both describe the behavior as a complex interleaving of steps of three different entities, while the underlying log suffers from convergence and divergence, see Sect.~\ref{sec:background:multi-dim-req}. Figure~\ref{fig:mining:mvp} shows the graph we obtained by querying for the aggregated \emph{:DF\_C} relationships of types \emph{Application} (dark blue), \emph{Workflow} (light blue), \emph{Offer} (orange), and of the reified relations \emph{Case\_AO, Case\_WO, Case\_AW} (grey) occurring with $\mathit{count} \geq 500$, i.e., in $>98\%$ of all process executions. This graph describes directly-follows relations per entity and thus is similar to an artifact-centric model~\cite{DBLP:journals/tsc/LuNWF15} or a multiple viewpoint model~\cite{DBLP:conf/simpda/BertiA19}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/bpic17_query_aggregated_mvp_dfg.pdf} \caption{Entity-centric directly-follows graph for Application, Workflow, Offer, and their interactions.} \label{fig:mining:mvp} \end{figure} Compared to Fig.~\ref{fig:mining:dfg}, the graph of Fig.~\ref{fig:mining:mvp} explicitly describes the directly-follows behavior of each entity; the behavior of each entity is concurrent to the behavior of the other entities up to the few explicit interactions. In contrast, Fig.~\ref{fig:mining:dfg} shows few edges between event classes associated with the same entity (Application, Workflow, or Offer) and mostly edges in between them, because the classical event log interleaved all events. The graph of Fig.~\ref{fig:mining:mvp} is significantly easier to understand and more precise as it was derived from data without convergence and divergence. \section{Conclusion}\label{sec:conclusion} We introduced a new data model for event data based on labeled property graphs. Our data model provides node types and relationship types (see Sect.~\ref{sec:represent}) with semantic constraints (see Sect.~\ref{sec:semantics}) for all first-class concepts of event logs: events, entities (generalizing the case notion), event classes (generalizing the activity and the resource attribute), and the directly-follows relation between events, satisfying requirements R1 and R3 of Sect.~\ref{sec:background:multi-dim-req}.
The semi-structured nature of graphs allowed us to represent multiple different, related entities (R2) and the relations between entities and events (R4) through dedicated correlation relationships. Thus, the data model can be seen as a \emph{multi-dimensional event log}, where events of each entity are ordered by ``their'' directly-follows relation, leading to a \emph{partial order} of events. Our data model avoids all shortcomings of existing event data models including event tables, event logs, and relational databases, see Sect.~\ref{sec:background:multi-dim-literature}, while building on a standard data storage format. We provided a succinct set of queries to efficiently convert data in event table format into our data model (see Sect.~\ref{sec:storing}). The queries are parameterized where user-provided domain knowledge is required. We specifically provide queries to reify relations between entities into composite entities, allowing us to derive directly-follows relations describing interactions between entities (R13). The data model and queries allowed us to represent 5 different real-life datasets in our data model. We demonstrated that the query language Cypher allows querying event data in our data model, see Sect.~\ref{sec:querying}. Queries and results are given as graphs, satisfying R5. Queries Q4 and Q6 retrieve entire paths of events (R6), allowing us to analyze the sequences. Q1-Q3 and Q6 select individual cases based on partial patterns (R7), allowing us to ``query by example''. Q2, Q3, Q5 and Q6 query for temporal properties (R8), where Q5 specifically considers time; all queries correlate events related to a common entity (R9); Q6 queries aspects of multiple entities in the same query (R10) and allows querying the behavior of multiple entities and combining the results (R11). Altogether, we could demonstrate that queries over labeled property graphs satisfy R5-R11, which no existing query language on event data offers, see Sect.~\ref{sec:background:multi-dim-literature}. Finally, we demonstrated that our data model and Cypher allow aggregating events to event classes (R12) and directly-follows relations to event classes per entity (R14, R15). The resulting graphs are simpler and describe the behavior more accurately than techniques using other data models, see Sect.~\ref{sec:mining}. \emph{The queries and data sets are publicly available for further research}~\cite{graphdataset}. The model has several limitations and requires further research. Our data model does not model properties of entities and relations between entities; practical applications require a more complete data model of the entities as well. When one event is correlated to multiple entities of the same type, then the current modeling of the directly-follows relation does not distinguish the different entities, rendering queries more complex; the model has to be generalized further. Our aggregation queries aggregate behavior on the event type level, thereby hiding multiplicities of entities of the same type involved in a behavior; further research is required to aggregate behavior in a way that preserves multiplicities of entities in interactions. Within the scope of this work, we only consider converting event tables to our data model, whereas most event data is stored in relational databases; an automated technique for conversion is desirable for practical adoption.
Cypher is highly expressive but not specifically designed for querying event data\,---\,it takes expertise and patience to write the right queries; query patterns and best practices have to be established. While we demonstrated feasibility and obtained performance that allows for usage in practice, existing graph database systems are still significantly slower than relational databases or dedicated algorithms, specifically due to deficiencies in query optimization, which may easily render queries practically infeasible. Further improvements in the performance of graph databases are required, possibly taking the partially ordered nature of our data specifically into account. Our model also enables new lines of research. Providing a more general standard event data model allows for the development of new event data analysis and process mining techniques that explicitly consider the presence of multiple entities. The data format enables the adoption of graph mining techniques for event data. \bibliographystyle{spmpsci}
\section{INTRODUCTION} The production of elements heavier than those of the Fe peak occurs via neutron-capture ($n$-capture) processes that are either slow ($s$) or rapid ($r$) compared to the timescale for $\beta$ decay (e.g., Sneden et al.~2008). The main $s$-process operates in low- and intermediate-mass asymptotic giant branch (AGB) stars (e.g., Gallino et al.~1998; Lugaro et al.~2003; Karakas et al.~2009), whose stellar winds enrich the interstellar medium (ISM) with the products of $s$-process nucleosynthesis. A weak $s$-process occurs in the He- and C-burning shells of massive stars (e.g., Raiteri et al.~1993; The et al.~2007), which distribute their products into the ISM via supernova explosions. The astrophysical site of the $r$-process is still debated, but the most likely sites include core collapse supernovae and binary neutron star mergers (e.g., Argast et al.~2004; Beniamini et al.~2016). The latter production sites have received considerable attention recently due to the detection of the electromagnetic counterpart to the gravitational wave source GW170817 (e.g., Abbott et al.~2017; Tanvir et al.~2017). When considering the effects of $n$-capture nucleosynthesis on the chemical evolution of the Galaxy, the important distinction is between the processes associated with long-lived low- and intermediate-mass stars (i.e., the main $s$-process) and those attributed to massive stars with short evolutionary lifetimes (i.e., the weak $s$-process and the $r$-process). Detailed studies of the interstellar abundances of $n$-capture ($Z>30$) elements provide a means to probe the enrichment of the ISM in the products of $s$- and $r$-process nucleosynthesis. Unfortunately, the low cosmic abundances of most heavy elements preclude their detection via absorption from resonance lines in the UV and visible, even those arising from the dominant ionization stages of the elements in interstellar clouds. A further complication in using the interstellar abundances of heavy elements to study nucleosynthetic processes is the unknown amount of depletion onto grains exhibited by a given element in a given line of sight. The most well-studied of the $n$-capture elements thus far detected in the ISM is krypton ($Z=36$), which, as a noble gas, would generally not be expected to be depleted along any lines of sight. Cartledge et al.~(2001, 2003, 2008) examined gas-phase Kr abundances in approximately 50 sight lines, finding that the average Kr abundance is independent of such external sight line properties as the average line-of-sight hydrogen density and the molecular hydrogen fraction. In absolute terms, however, the average interstellar abundance of Kr, which has changed little from one determination to another (e.g., Cardelli \& Meyer 1997; Cartledge et al.~2008), is only $\sim$50\% of the theoretical solar abundance. In a comprehensive examination of interstellar depletions, Jenkins (2009, hereafter J09) found that the depletion of Kr may actually increase (slightly) as the overall strength of depletions increases from one sight line to another. Still, the deficit in the interstellar abundance of Kr relative to the solar system, which is significant even in low depletion sight lines, remains unexplained. In a study of rubidium ($Z=37$) isotope ratios in diffuse interstellar clouds, Walker et al.~(2009) found that the gas-phase Rb/K ratio is (on average) $\sim$40\% of the solar value. 
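Schematically, such trace-neutral ratios rest on photoionization equilibrium: for an element X, $n(\mathrm{X\,I})\,\Gamma_{\mathrm{X}} = n(\mathrm{X\,II})\,n_e\,\alpha_{\mathrm{X}}(T)$, where $\Gamma$ is the photoionization rate, $\alpha(T)$ is the radiative recombination coefficient, and $n_e$ is the electron density. With nearly all of the Rb and K singly ionized, the elemental ratio follows from the observed trace-neutral ratio as \begin{displaymath} \frac{N(\mathrm{Rb})}{N(\mathrm{K})} \approx \frac{N(\mathrm{Rb\,I})}{N(\mathrm{K\,I})} \times \frac{\Gamma(\mathrm{Rb})/\alpha_{\mathrm{Rb}}(T)}{\Gamma(\mathrm{K})/\alpha_{\mathrm{K}}(T)}, \end{displaymath} the electron density canceling in the ratio. (This is only a schematic version of the corrections discussed below, not the detailed treatment of the papers cited there.)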
The actual interstellar abundance of Rb (relative to H) is uncertain because the observed line of Rb~{\sc i} at 7800.3~\AA{} arises from a trace neutral species (most of the interstellar Rb being singly-ionized). Thus, the observed abundance of Rb~{\sc i} is compared to the abundance of K~{\sc i}, and the ratio is converted to the elemental ratio by making appropriate corrections for the differences in the photoionization and radiative recombination rates (see Federman et al.~2004; Walker et al.~2009). While such corrections inevitably introduce uncertainties, the low Rb/K ratios are puzzling; they do not seem to be related to enhanced Rb depletion since the condensation temperature of K is higher than that of Rb (Lodders 2003). Given that the interstellar Na/K and Li/K ratios are generally found to be consistent with the solar values (Welty \& Hobbs 2001; Knauth et al.~2003), the low Rb/K ratios suggest a deficit in the abundance of Rb in interstellar clouds. Both Kr and Rb are produced primarily by massive stars, through a combination of the weak $s$-process and the $r$-process (Heil et al.~2008). Thus, the unexpectedly low interstellar abundances of Kr and Rb could potentially indicate a deficiency in the contribution from massive stars to the synthesis of heavy elements in the current epoch (Walker et al.~2009). Existing studies of the interstellar abundances of other $n$-capture elements seem to support this conjecture. Sofia et al.~(1999) examined the gas-phase abundances of cadmium ($Z=48$) and tin ($Z=50$) in 5 and 14 sight lines, respectively, using data obtained with the Goddard High-Resolution Spectrograph (GHRS) onboard the \emph{Hubble Space Telescope} (\emph{HST}). They found that Cd shows no evidence of depletion onto grains. The Cd abundances in their small sample appear to be indistinguishable from the solar abundance regardless of the amount of molecular material in the line of sight. In contrast, the gas-phase Sn abundances were found to decrease with increasing molecular fraction, a clear indication of the incorporation of Sn atoms into dust grains in higher density sight lines. In sight lines with low molecular fractions, Sofia et al.~(1999) found that Sn is entirely undepleted, and may even be enhanced in its abundance relative to the solar system. Much, if not most, of the synthesis of Cd and Sn occurs via the main $s$-process in low- and intermediate-mass AGB stars (Arlandini et al.~1999; Bisterzo et al.~2014), and the interstellar abundances of these elements do not appear to be deficient. The present-day interstellar abundance of Sn may even be supersolar (Sofia et al.~1999). The only other $n$-capture elements that have been studied in a large number of diverse sight lines are gallium ($Z=31$) and germanium ($Z=32$). These elements are mainly produced by the weak $s$-process in massive stars (e.g., The et al.~2007; Pignatari et al.~2010), and both are depleted in the ISM relative to the solar system. However, both elements exhibit density-dependent depletion in a way consistent with the depletion behaviors of other more abundant elements (Cartledge et al.~2006; Ritchey et al.~2011, hereafter R11). Additional $n$-capture elements that have been detected in the ISM include arsenic ($Z=33$), which has been observed in three lines of sight (Cardelli et al.~1993; Federman et al.~2003), and lead ($Z=82$), which has been detected in just two individual sight lines (Cardelli 1994; Welty et al.~1995). 
With so few detections, the depletion behaviors of these elements are largely unknown, making it difficult to interpret their gas-phase abundances. In this investigation, we seek to conduct a comprehensive examination of the gas-phase abundances and depletion behaviors of $n$-capture elements in the ISM. To this end, we carry out an extensive analysis of UV absorption lines arising from the dominant ions Ga~{\sc ii}, Ge~{\sc ii}, As~{\sc ii}, Kr~{\sc i}, Cd~{\sc ii}, Sn~{\sc ii}, and Pb~{\sc ii}, using archival data obtained with the Space Telescope Imaging Spectrograph (STIS) onboard \emph{HST}. The seven species that are the focus of our investigation represent the only dominant ions of $n$-capture elements with UV resonance lines accessible to STIS that have been detected in multiple lines of sight. The elements Ga, Ge, As, Kr, Cd, Sn, and Pb are among the only heavy elements with both relatively high cosmic abundances and relatively low condensation temperatures, which explains why their UV absorption lines are more readily detectable than those of other $n$-capture elements. While extensive studies using STIS data have already been published for Ga~{\sc ii}, Ge~{\sc ii}, and Kr~{\sc i} (Cartledge et al.~2006, 2008; R11), no such studies exist for the rarer species As~{\sc ii}, Cd~{\sc ii}, Sn~{\sc ii}, and Pb~{\sc ii}. (All of the currently published abundances for these latter species were derived from GHRS observations.) Moreover, it is desirable to examine the abundances of all seven $n$-capture elements in the same sample of interstellar sight lines (if possible), rather than compare studies that examined different sets of sight lines for different elements. A thorough examination of the UV spectroscopic data available in the \emph{HST}/STIS archive allows us to discern a significant number of new detections of As~{\sc ii}, Cd~{\sc ii}, Sn~{\sc ii}, and Pb~{\sc ii}, and to expand on previously published results for Ga~{\sc ii}, Ge~{\sc ii}, and Kr~{\sc i}. This effort enables us to better constrain the depletion characteristics of these elements so that we can address claims of abundance deficiencies and enhancements in the ISM due to $s$- and $r$-process nucleosynthesis. The remainder of this paper is organized as follows. An overview of the archival survey is provided in Section 2, which also includes a description of the sample of sight lines and the steps involved in processing the data sets obtained from the STIS archive. Our column density determinations are presented in Section 3, where we also discuss recent updates to the oscillator strengths ($f$-values) relevant to our investigation. In Section 4, we derive the elemental abundances (Section 4.1), and determine depletion parameters for each element, adopting the methodology of J09 (Section 4.2). Since many sight lines yield only upper limits for some species, we also perform a survival analysis on the gas-phase abundance data (Section 4.3). The results of our analyses, and the implications for $s$- and $r$-process nucleosynthesis, are discussed in Section 5. We summarize our main conclusions in Section 6. Two appendices present a compilation of column density measurements from the literature (Appendix A) and an application of the J09 methodology to the rare light element boron (Appendix B). 
\section{OBSERVATIONS AND DATA PROCESSING} \subsection{Overview of the Archival Survey} The primary aim of our extensive search of the \emph{HST}/STIS archive was the identification of sight lines showing significant absorption from As~{\sc ii}~$\lambda1263$, Cd~{\sc ii}~$\lambda2145$, Sn~{\sc ii}~$\lambda1400$, and Pb~{\sc ii}~$\lambda1433$. While these lines are the principal transitions used to study these species in diffuse interstellar clouds, they are only rarely seen in UV spectra (although Sn~{\sc ii}~$\lambda1400$ is somewhat more common than the others). To construct our sample, we started by examining the sight lines analyzed by R11 in their survey of B~{\sc ii} absorption in the diffuse ISM. The absorption features sought in the present survey are expected to have similar strengths compared to B~{\sc ii}~$\lambda1362$. Moreover, for each of the directions studied by R11, we have detailed knowledge of the line-of-sight component structure from moderately strong absorption lines of dominant ions such as O~{\sc i}~$\lambda1355$. This facilitates the search for weak features from other dominant ions that are expected to exhibit similar absorption profiles. We also expanded our search by examining the available STIS data for all other sight lines in the \emph{HST} archive with the necessary wavelength coverage, considering observations acquired using either the high-resolution gratings (E140H and E230H) or the medium-resolution gratings (E140M and E230M) of the STIS FUV and NUV Multi-Anode Microchannel Array (MAMA) detectors. We noted any sight lines that showed apparent absorption from one or more of the four principal lines of interest at a velocity similar to that of O~{\sc i}~$\lambda1355$. This process resulted in 14 sight lines being added to our initial list of 55 from the B~{\sc ii} survey, yielding our primary sample of 69 interstellar sight lines. (We included all 55 sight lines from R11 in our primary sample, even those that were not suspected of having absorption from As~{\sc ii}, Cd~{\sc ii}, Sn~{\sc ii}, or Pb~{\sc ii}, because the STIS data for these sight lines had already been processed, and thus it was straightforward to calculate upper limits for any non-detections.) Each preliminary detection of As~{\sc ii}~$\lambda1263$, Cd~{\sc ii}~$\lambda2145$, Sn~{\sc ii}~$\lambda1400$, and Pb~{\sc ii}~$\lambda1433$ was scrutinized in detail, and, in some cases, we found that either the absorption was not significant enough (i.e., the equivalent width was smaller than twice the estimated uncertainty), or the velocity deviated too severely from the expected velocity (i.e., by more than 2$-$4 km s$^{-1}$ depending on the spectral resolution). In such cases, it may be that the apparent absorption feature is simply an artifact of noise or is of instrumental origin. In the final analysis (see Section 3.2), we found that 32 sight lines from our primary sample did not exhibit compelling evidence for absorption from at least one of the four main species of interest. We chose not to eliminate these sight lines from our sample, however, since upper limits on column densities can still prove useful, particularly for interpreting the abundances of elements with relatively few confirmed detections (as we demonstrate in Section 4.3). 
Moreover, while these sight lines do not yield detections for any of the rare $n$-capture species, they each provide information on at least one of the more common species (i.e., Ga~{\sc ii}, Ge~{\sc ii}, and/or Kr~{\sc i}), which constitute an important part of our overall survey. Ultimately, through our extensive examination of STIS archival data, we identified 10 sight lines with secure detections of As~{\sc ii}~$\lambda1263$, 7 sight lines with detections of Cd~{\sc ii}~$\lambda2145$, 27 sight lines with detections of Sn~{\sc ii}~$\lambda1400$, and 8 sight lines with detections of Pb~{\sc ii}~$\lambda1433$. Heidarian et al.~(2015) recently reported the first detection in the ISM of the Pb~{\sc ii} transition at 1203.6~\AA{} from a composite spectrum obtained by co-adding high-resolution \emph{HST}/STIS spectra for over 100 sight lines. Their analysis indicated that the 1203.6~\AA{} line is about a factor of two stronger intrinsically than the line at 1433.9~\AA{}. However, in a typical interstellar sight line, where log~$N$(H~{\sc i})~$\sim$~21.2, the 1203.6~\AA{} line is positioned in a region of the spectrum where the flux is depressed by the damping wing of the nearby Ly$\alpha$ line, limiting the potential usefulness of this feature. Nevertheless, following the discovery of the Pb~{\sc ii}~$\lambda1203$ line in a composite spectrum by Heidarian et al.~(2015), we re-examined the archival STIS data to try to identify individual sight lines displaying this feature. Here, we report the detection of Pb~{\sc ii}~$\lambda1203$ in 4 individual sight lines, bringing our total number of Pb~{\sc ii} detections to 12. We also searched for the weaker line of the Cd~{\sc ii} doublet at 2265.7~\AA{}, and found this line in each of the 7 directions where Cd~{\sc ii}~$\lambda2145$ was detected. In addition to searching for absorption from As~{\sc ii}, Cd~{\sc ii}, Sn~{\sc ii}, and Pb~{\sc ii}, we sought to incorporate available data on Ga~{\sc ii}, Ge~{\sc ii}, and Kr~{\sc i} into our analysis so that the abundances of all seven $n$-capture elements could be analyzed in a consistent manner. Column densities for Ga~{\sc ii} were previously reported by R11 for many of the sight lines in their B~{\sc ii} survey. Of the 14 additional sight lines added to our current sample, 5 have archival STIS data covering the Ga~{\sc ii} transition at 1414.4~\AA{}, and all 5 exhibit significant Ga~{\sc ii} absorption. Published column densities for Ge~{\sc ii} and Kr~{\sc i} are also available for many of the sight lines in our sample. Twenty-five of our sight lines have Ge~{\sc ii} column densities listed in Cartledge et al.~(2006), while a somewhat different subset of 25 sight lines has Kr~{\sc i} column densities given in Cartledge et al.~(2003, 2004, 2008). All of these Ge~{\sc ii} and Kr~{\sc i} measurements were deduced from STIS observations. One of the sight lines in our sample ($\zeta$~Per) has a published Kr~{\sc i} column density from GHRS observations (Cardelli \& Meyer 1997). To maintain consistency in how each sight line in our primary sample was analyzed across all of the elements of interest, we re-examined the STIS data on Ge~{\sc ii}~$\lambda1237$ and Kr~{\sc i}~$\lambda1235$ for each direction with a previously reported Ge~{\sc ii} or Kr~{\sc i} column density. We also sought to obtain Ge~{\sc ii} and Kr~{\sc i} column densities for any other sight lines in our sample without existing determinations in the literature. 
For one of our primary targets (X~Per), the available STIS data do not cover the Ge~{\sc ii}~$\lambda1237$ or Kr~{\sc i}~$\lambda1235$ lines, but other archival data are available for these species. To derive the Ge~{\sc ii} column density toward X~Per, we examined GHRS data on Ge~{\sc ii}~$\lambda1237$, along with STIS data for the weaker Ge~{\sc ii} line at 1602.5~\AA{}. The Kr~{\sc i} column density that we derive for X~Per is based on an analysis of the Kr~{\sc i}~$\lambda1164$ line, which is available from observations made with the \emph{Far Ultraviolet Spectroscopic Explorer} (\emph{FUSE}) satellite. (While our intention in carrying out our archival survey was to focus exclusively on STIS data, the inclusion of the GHRS and \emph{FUSE} data sets toward X~Per meant that we could derive reliable abundances for all seven $n$-capture elements in this direction.) For another one of our sight lines (HD~99890), R11 reported column densities based on a single, relatively short STIS exposure obtained with the medium-resolution (E140M) grating. In the years since R11 published their study, new high-resolution STIS spectra of HD~99890 have been obtained under two separate observing programs, one of which (GO program 12191) was specifically designed to search for rare heavy elements in the ISM. Since the new high-resolution data are far superior to the spectrum analyzed by R11 (in terms of total exposure time and overall data quality), we redetermined column densities for this sight line using only the high-resolution data. There are 30 additional sight lines not included in our primary sample --- because they showed no sign of absorption from As~{\sc ii}, Cd~{\sc ii}, Sn~{\sc ii}, or Pb~{\sc ii} and had not been analyzed previously by R11 --- that have published Ge~{\sc ii} and/or Kr~{\sc i} column densities from STIS observations (Cartledge et al.~2001, 2003, 2006, 2008; Welty 2007). There are another 19 sight lines with published column densities for Ga~{\sc ii}, Ge~{\sc ii}, As~{\sc ii}, Kr~{\sc i}, Cd~{\sc ii}, Sn~{\sc ii}, and/or Pb~{\sc ii} from GHRS observations (Savage et al.~1992; Cardelli et al.~1993; Hobbs et al.~1993; Cardelli 1994; Welty et al.~1995, 1999; Cardelli \& Meyer 1997; Sofia et al.~1999; Federman et al.~2003; Cartledge et al.~2008). All of these sight lines are included in the analysis described in Section 4, where we evaluate the depletion characteristics of the various elements, though we do not rederive any column densities in these directions. We do apply corrections to the column densities to account for any differences in the assumed $f$-values between the literature studies and our investigation (see Section 3.1 and Appendix A). When combined with our primary sample, this extended sample of sight lines with column density measurements from the literature brings the total number of sight lines considered in this investigation to 128. (This includes 10 additional sight lines with previously published column densities of O~{\sc i} only. These directions were added to our survey so that our sample for oxygen would be as complete as possible, just as for the other elements.) \subsection{Characteristics of the Sight Lines} Table 1 provides the relevant data for the 69 O and B-type stars that served as background targets for our primary sample of interstellar sight lines. Many of these stars were included in the investigation of element depletions by J09, who carefully compiled spectral types for the stars in his survey (and gives references to the original sources). 
From those spectral types, J09 determined distances and reddenings to his targets following a rigorous process based on spectroscopic parallax that was originally implemented by Bowen et al.~(2008, hereafter B08) in their survey of O~{\sc vi} absorption in the Galactic disk. In general, for any of the stars in our sample that were analyzed by J09 or B08, we adopt the same spectral types, distances, and reddenings given in those references. For stars not included in either investigation, we compute the distances and reddenings using the same set of procedures that those studies invoked, adopting intrinsic colors from Wegner (1994) and absolute visual magnitudes from B08 (see Appendix B of B08 for a detailed description of the method and its associated caveats). References to the sources of the chosen spectral types for these stars are given in Table 1. An exception to this general approach to obtaining distances to our targets is that if a star has an \emph{Hipparcos} parallax measurement that is significant at the 4$\sigma$ level or greater (based on the second reduction of \emph{Hipparcos} data; van Leeuwen 2007), we adopt the \emph{Hipparcos} distance to that star. (J09 had an acceptance threshold for \emph{Hipparcos} results of 10$\sigma$, while B08 had a threshold of 5$\sigma$.) The Galactic coordinates and $V$ magnitudes listed in Table 1 were obtained from the SIMBAD database. We note, however, that these $V$ magnitudes are not necessarily the same as those used to determine the distances and reddenings for some stars since we follow the recommendation of B08 in using magnitudes from the \emph{Tycho} Starmapper catalog (ESA 1997)\footnote{Vizier Online Data Catalog I/239.} to obtain values for $B$ and $V$, after applying the appropriate transformation to the \emph{Tycho} magnitudes $B_T$ and $V_T$ (see Appendix B1 of B08). We did use the SIMBAD values of $B$ and $V$ for one of our stars (CPD$-$30 5410) as this star is not listed in the \emph{Tycho} catalog. Another one of our stars (HD~203338) has a composite spectrum, with both a hot (B1 V) component and a cool (M1 Ibep) component (Simonson 1968), making a determination of the distance and reddening for this sight line challenging. Fortunately, the \emph{Hipparcos} catalog lists values of $B_T$ and $V_T$ separately for the two components,\footnote{The individual magnitudes may be found in the \emph{Hipparcos Double and Multiples: Component Solutions} catalog (ESA 1997).} allowing us to derive an appropriate distance and reddening by focusing solely on the hot component. The results we obtain for this star, $E(\bv)=0.49$ and $d=1.4$~kpc, are similar to the results for other stars that (like HD~203338) are members of the Cep OB2 association, such as HD~207198, HD~207308, and HD~207538 (see Table 1). The distances and Galactic latitudes compiled for the stars in our sample were used to calculate the heights of the stars above or below the midplane of the Galaxy (i.e., $z = d\,\sin b$ for a star at distance $d$ and Galactic latitude $b$). These $z$ distances are given in the last column of Table 1. Figure 1 presents a series of histograms that illustrates some of the properties of the 128 sight lines considered in this investigation (where we have included those sight lines with column density measurements from the literature). The three panels show distributions for the total hydrogen column densities, $N($H$_{\mathrm{tot}})=N($H~{\sc i}$)+2N($H$_2)$, the distances to the background stars, and the heights of the stars above or below the Galactic plane.
(The sources of the adopted values of $N$(H~{\sc i}) and $N$(H$_2$) are discussed in Section 4.1.) The total hydrogen column densities range from log~$N$(H$_{\mathrm{tot}}$)~$\sim$~20.2 to 21.7, with a peak at approximately 21.3. The distances show a bimodal distribution reflecting the tendency for stars to be found either in the local arm of the Galaxy or in one of the two nearest spiral arms (the Sagittarius-Carina spiral arm or the Perseus arm; see also Figure 1 of R11). The $z$ distances (or rather their absolute values) have an asymmetric distribution, peaking at approximately log~$|z|$~$\sim$~1.8 and tapering off to about 3.5. These distributions are similar to the ones presented in J09 for the 243 sight lines included in that investigation (see Figures 2 and 3 of J09). While this similarity is not surprising, since there is a great deal of overlap between our sample and that of J09, it does suggest that our sample is large enough and diverse enough to be representative of the local Galactic ISM. As a result, the depletion parameters that we derive in Section 4 for the elements under consideration should be broadly applicable. \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f1.eps} \caption{Characteristics of the sample of sight lines included in this investigation. Left panel: Distribution of total hydrogen column densities along the lines of sight. Middle panel: Distribution of distances to the target stars. Right panel: Distribution of heights above or below the Galactic plane.} \end{figure} \subsection{Processing of the STIS Data} The relevant STIS data for our primary sample of interstellar sight lines were processed in a manner analogous to that described in Section 2.2 of R11. All high-resolution (E140H and E230H) and medium-resolution (E140M and E230M) echelle observations of our targets covering lines of interest to our survey were downloaded from the Mikulski Archive for Space Telescopes (MAST). The STIS data sets used in this investigation are listed in Table 2, which also provides exposure times and other details concerning the observations. The high-resolution gratings of the STIS FUV and NUV Multi-Anode Microchannel Array (MAMA) detectors provide spectra at resolving powers that range from $R=82,000$ to 143,000 depending on the aperture. With the medium-resolution gratings, resolving powers between 38,000 and 46,000 may be achieved (see, e.g., Table 2 of R11). Multiple exposures of a target acquired with the same echelle grating were co-added to improve the signal-to-noise (S/N) ratio in the final spectrum. When exposures that employed different apertures were co-added, the data were resampled at the dispersion of the lowest-resolution spectrum contributing to the sum. If a feature of interest appeared in adjacent echelle orders with sufficient continua on both sides of the line, then the overlapping portions of the two orders were averaged together (yielding an increase in the S/N ratio of approximately 40\% in most cases). Typical S/N ratios for this sample range from 20 to 100 (per pixel), although some sight lines have significantly higher values. (The highest S/N ratios are seen in the STIS spectra of X~Per, for which S/N~$\approx$~300 near 1400~\AA{} in the final combined spectrum.) Essential information regarding the interstellar lines of interest to this investigation is given in Table 3.
(We analyze the O~{\sc i}~$\lambda1355$ line in addition to those from $n$-capture elements because this feature is useful for defining the overall component structure along a particular line of sight. We further note that the Ge~{\sc ii}~$\lambda1602$ and Kr~{\sc i}~$\lambda1164$ lines were analyzed only for the line of sight to X~Per.) Small spectral segments centered on these lines were cut from the final co-added spectra. These segments were typically 2 \AA{} wide, although it was occasionally necessary to choose a smaller portion of the spectrum to analyze so as to avoid a strong stellar absorption line or an unrelated interstellar feature. (In cases where adjacent echelle orders were averaged together, only the overlapping portions of the spectra were retained.) Each of the individual segments was then normalized to the continuum by fitting a low-order polynomial to regions free of interstellar absorption. We normalized all of the absorption profiles for a given line of sight concurrently so that the profiles from stronger lines, such as O~{\sc i}~$\lambda1355$ and Ge~{\sc ii}~$\lambda1237$, could serve as guides for properly fitting the continua around the weaker absorption features. In most instances, the process of normalizing the continuum proceeded in a relatively straightforward manner. However, in the case of Sn~{\sc ii}~$\lambda1400$ toward HD~137595, there are upward deviations in the intensities on either side of an apparent absorption feature, which is at a velocity close to that expected for Sn~{\sc ii}. An unusually high-order polynomial would be required to ``smooth out'' the continuum surrounding this absorption feature. Upon further examination of the overlapping echelle orders that cover the Sn~{\sc ii} transition toward this star, we find that one of these upward deviations is quite strong in one order and not seen in the other, indicating that the feature may be a cosmic ray hit or of instrumental origin. Since we cannot be certain that the absorption feature is unaffected by these upward deviations, we report only an upper limit on the column density of Sn~{\sc ii} toward HD~137595. In the case of As~{\sc ii}~$\lambda1263$ toward CPD$-$59~2603, a high negative velocity component of Si~{\sc ii}*~$\lambda1264$ overlaps with the expected position of the As~{\sc ii} line, preventing us from deriving even an upper limit on the column density of As~{\sc ii} in this direction. Finally, while there are medium-resolution (E230M) data that cover the Cd~{\sc ii}~$\lambda\lambda2145,2265$ lines toward 40~Per, strong and relatively narrow stellar features prevent us from obtaining a reliable Cd~{\sc ii} column density (or upper limit) for this sight line. \section{RESULTS ON COLUMN DENSITIES} \subsection{Updated Oscillator Strengths} Oscillator strengths for the transitions relevant to our investigation were generally obtained from the compilations of Morton (2000, 2003), with a few notable exceptions (see Table 3). Heidarian et al.~(2015) recently reported the first experimentally determined oscillator strengths for the Pb~{\sc ii} transitions at 1203.6~\AA{} and 1433.9~\AA{}. Their results, from beam-foil spectroscopy, indicate $f$-values of $0.75\pm0.03$ and $0.321\pm0.034$ for Pb~{\sc ii}~$\lambda1203$ and $\lambda1433$, respectively.
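For a weak, optically thin line, the column density inferred from a measured equivalent width $W_\lambda$ scales as $N \propto W_\lambda/(f\lambda^2)$. A revision of the adopted oscillator strength therefore shifts the derived column density by \begin{displaymath} \Delta \log N = \log\left(\frac{f_{\mathrm{old}}}{f_{\mathrm{new}}}\right), \end{displaymath} which sets the scale of the changes discussed below.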
Previous studies of interstellar Pb~{\sc ii} abundances (e.g., Welty et al.~1995) relied on theoretical determinations of the oscillator strength of the Pb~{\sc ii}~$\lambda1433$ transition (Migdalek 1976; Cardelli et al.~1993), which Morton (2000) gives as 0.869. The new experimental $f$-value for Pb~{\sc ii}~$\lambda1433$ provided by Heidarian et al.~(2015), which we adopt here, is lower than the theoretical value by almost a factor of 3, yielding a corresponding increase in the Pb~{\sc ii} column density derived from the $\lambda1433$ line of log~(0.869/0.321)~=~0.43 dex. The oscillator strength for the Ge~{\sc ii} transition at 1237.1~\AA{} had likewise not been determined experimentally until recently. Heidarian et al.~(2017), again using beam-foil spectroscopy, report an $f$-value for Ge~{\sc ii}~$\lambda1237$ of $0.872\pm0.113$. Early studies of gas-phase Ge~{\sc ii} abundances in the ISM (e.g., Savage et al.~1992; Hobbs et al.~1993) relied on the oscillator strength given in Morton (1991) for the Ge~{\sc ii}~$\lambda1237$ transition.\footnote{A typographical error in Morton (1991) was subsequently corrected by Savage et al.~(1992). The corrected $f$-value from Morton (1991) for Ge~{\sc ii}~$\lambda1237$ is 0.8756.} Later studies of interstellar Ge~{\sc ii} (e.g., Welty et al.~1999; Cartledge et al.~2006) adopted the theoretical $f$-value for the $\lambda1237$ transition calculated by Bi{\'e}mont et al.~(1998) of 1.23. This is also the value listed by Morton (2000) for this transition. The new experimental $f$-value for Ge~{\sc ii}~$\lambda1237$ reported by Heidarian et al.~(2017), and adopted in this work, is in better agreement with the earlier theoretical value given in Morton (1991). (While the $f$-value for the Ge~{\sc ii} transition at 1602.5~\AA{} has not been determined experimentally, the column density we obtain from this line toward X~Per agrees with the value we derive from the $\lambda1237$ line at the 1$\sigma$ level.) The only other $n$-capture element transition listed in Table 3 that does not have an experimentally determined $f$-value is As~{\sc ii}~$\lambda1263$. However, for this transition, most theoretical determinations agree with the value calculated by Bi{\'e}mont et al.~(1998), and adopted by Morton (2000), at the 10--20\% level (e.g., Bieron et al.~1991; Cardelli et al.~1993; Brage \& Leckrone 1995). Nevertheless, an experimental confirmation of the oscillator strength for As~{\sc ii}~$\lambda1263$ would be useful. \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f2.eps} \caption{Profile synthesis fits to the O~{\sc i}~$\lambda1355$, Ga~{\sc ii}~$\lambda1414$, Ge~{\sc ii}~$\lambda1237$, As~{\sc ii}~$\lambda1263$, Kr~{\sc i}~$\lambda1235$, and Pb~{\sc ii}~$\lambda1203$ lines toward HD~24190. Wavelengths are plotted in the local standard of rest (LSR) frame. The same range in velocity is displayed in each panel. The synthetic profiles are shown as solid lines passing through data points that represent the observed spectra. The fit residuals are plotted above each spectrum.
A profile template based on O~{\sc i} was adopted in fitting the As~{\sc ii} and Pb~{\sc ii} lines.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f3.eps} \caption{Same as Figure 2 except for the O~{\sc i}~$\lambda1355$, Ga~{\sc ii}~$\lambda1414$, Sn~{\sc ii}~$\lambda1400$, and Pb~{\sc ii}~$\lambda1433$ lines toward $\zeta$~Per.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f4.eps} \caption{Same as Figure 2 except for the O~{\sc i}~$\lambda1355$, Ga~{\sc ii}~$\lambda1414$, As~{\sc ii}~$\lambda1263$, Cd~{\sc ii}~$\lambda2145$, Sn~{\sc ii}~$\lambda1400$, and Pb~{\sc ii}~$\lambda1433$ lines toward X~Per.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f5.eps} \caption{Same as Figure 2 except for the O~{\sc i}~$\lambda1355$, Ge~{\sc ii}~$\lambda1237$, Kr~{\sc i}~$\lambda1235$, and Cd~{\sc ii}~$\lambda2145$ lines toward HD~27778. A template based on O~{\sc i} was adopted in fitting the Cd~{\sc ii} line.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f6.eps} \caption{Same as Figure 2 except for the Ge~{\sc ii}~$\lambda1237$, As~{\sc ii}~$\lambda1263$, Kr~{\sc i}~$\lambda1235$, and Cd~{\sc ii}~$\lambda2145$ lines toward HD~37061.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f7.eps} \caption{Same as Figure 2 except for the Ga~{\sc ii}~$\lambda1414$, Ge~{\sc ii}~$\lambda1237$, Kr~{\sc i}~$\lambda1235$, and Sn~{\sc ii}~$\lambda1400$ lines toward HD~99890. A template based on O~{\sc i}~$\lambda1355$ (not shown) was adopted in fitting the Kr~{\sc i} and Sn~{\sc ii} lines.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f8.eps} \caption{Same as Figure 2 except for the Ga~{\sc ii}~$\lambda1414$, Ge~{\sc ii}~$\lambda1237$, Kr~{\sc i}~$\lambda1235$, and Sn~{\sc ii}~$\lambda1400$ lines toward HD~108639. The two absorption complexes apparent in the Sn~{\sc ii} profile were fit independently adopting templates based on O~{\sc i}~$\lambda1355$ (not shown).} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f9.eps} \caption{Same as Figure 2 except for the Ga~{\sc ii}~$\lambda1414$, Ge~{\sc ii}~$\lambda1237$, Kr~{\sc i}~$\lambda1235$, and Sn~{\sc ii}~$\lambda1400$ lines toward HD~122879. The two absorption complexes apparent in the Sn~{\sc ii} profile were fit independently adopting templates based on O~{\sc i}~$\lambda1355$ (not shown).} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f10.eps} \caption{Same as Figure 2 except for the Ge~{\sc ii}~$\lambda1237$, As~{\sc ii}~$\lambda1263$, Kr~{\sc i}~$\lambda1235$, and Cd~{\sc ii}~$\lambda2145$ lines toward $\rho$ Oph D.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f11.eps} \caption{Same as Figure 2 except for the Ga~{\sc ii}~$\lambda1414$, Ge~{\sc ii}~$\lambda1237$, Sn~{\sc ii}~$\lambda1400$, and Pb~{\sc ii}~$\lambda1433$ lines toward HD~147889. These data were acquired at medium resolution. A profile template based on O~{\sc i}~$\lambda1355$ (not shown) was adopted in each case.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f12.eps} \caption{Same as Figure 2 except for the O~{\sc i}~$\lambda1355$, Ge~{\sc ii}~$\lambda1237$, Kr~{\sc i}~$\lambda1235$, and Pb~{\sc ii}~$\lambda1203$ lines toward HD~165246. 
A template based on O~{\sc i} was adopted in fitting the Pb~{\sc ii} line.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f13.eps} \caption{Same as Figure 2 except for the Ga~{\sc ii}~$\lambda1414$, Ge~{\sc ii}~$\lambda1237$, Kr~{\sc i}~$\lambda1235$, and Sn~{\sc ii}~$\lambda1400$ lines toward HD~177989. A template based on O~{\sc i}~$\lambda1355$ (not shown) was adopted in fitting the Sn~{\sc ii} line.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f14.eps} \caption{Same as Figure 2 except for the O~{\sc i}~$\lambda1355$, Ge~{\sc ii}~$\lambda1237$, Kr~{\sc i}~$\lambda1235$, and Cd~{\sc ii}~$\lambda2145$ lines toward HD~207198. A template based on O~{\sc i} was adopted in fitting the Kr~{\sc i} and Cd~{\sc ii} lines.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f15.eps} \caption{Same as Figure 2 except for the Ga~{\sc ii}~$\lambda1414$, Ge~{\sc ii}~$\lambda1237$, As~{\sc ii}~$\lambda1263$, Kr~{\sc i}~$\lambda1235$, Sn~{\sc ii}~$\lambda1400$, and Pb~{\sc ii}~$\lambda1433$ lines toward HD~207538. These data were acquired at medium resolution. A template based on O~{\sc i}~$\lambda1355$ (not shown) was adopted in fitting the As~{\sc ii}, Kr~{\sc i}, Sn~{\sc ii}, and Pb~{\sc ii} lines.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f16.eps} \caption{Same as Figure 2 except for the O~{\sc i}~$\lambda1355$, Ga~{\sc ii}~$\lambda1414$, Ge~{\sc ii}~$\lambda1237$, As~{\sc ii}~$\lambda1263$, Kr~{\sc i}~$\lambda1235$, and Sn~{\sc ii}~$\lambda1400$ lines toward HD~209339. A template based on O~{\sc i} was adopted in fitting the As~{\sc ii} and Kr~{\sc i} lines.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{f17.eps} \caption{Same as Figure 2 except for the Ga~{\sc ii}~$\lambda1414$, Ge~{\sc ii}~$\lambda1237$, Kr~{\sc i}~$\lambda1235$, and Sn~{\sc ii}~$\lambda1400$ lines toward HD~232522. A template based on O~{\sc i}~$\lambda1355$ (not shown) was adopted in fitting the Kr~{\sc i} and Sn~{\sc ii} lines.} \end{figure} \subsection{Profile Synthesis Fits} Column densities were derived through multi-component profile synthesis fits to the normalized spectra, using the code ISMOD (see Sheffer et al.~2008). The profile fitting routine treats the column densities, $b$-values, and velocities of the individual components as free parameters, while minimizing the rms of the fit residuals. For a given sight line, we started by analyzing the O~{\sc i}~$\lambda1355$ profile so that we could develop a basic understanding of the component structure in that direction. Many of the sight lines in our primary sample already had detailed O~{\sc i} (and Ga~{\sc ii}) component structures from R11. For the others, we sought to produce similarly high quality component decompositions following the same basic procedures as in R11. In most cases, direct fits to the O~{\sc i}~$\lambda1355$ profiles yielded good constraints on the velocities and $b$-values of the individual components, particularly when high-resolution (E140H) data were being analyzed. 
When analyzing medium-resolution (E140M) spectra, it was occasionally necessary to impose certain restrictions on the fits, such as requiring the $b$-values to fall within a particular range as determined either directly from the data or from other species observed at higher resolution.\footnote{A significant fraction of the sight lines in R11 have auxiliary ground-based data on Ca~{\sc ii} and K~{\sc i} obtained at high resolution ($R>150,000$). When available, those species offered an additional means of constraining the line-of-sight component structure.} Once a suitable component decomposition of the O~{\sc i}~$\lambda1355$ profile was in hand, we proceeded to fit the profiles of the other species of interest that were available for that particular sight line. We began by adopting the O~{\sc i} component structure (i.e., the fractional column densities, relative velocities, and $b$-values of the O~{\sc i} components) as input parameters to the profile fitting routine, allowing all of the parameters to vary freely. However, due to the weakness of many of the lines of interest (particularly those of As~{\sc ii}, Sn~{\sc ii}, and Pb~{\sc ii}), it was often necessary to constrain the fits in some way so as to achieve realistic results while mitigating the effects of noise in the profiles. In these cases, depending on the quality of the data, we would either fix the relative velocities of the components, allowing the individual column densities and $b$-values to vary, or we would fix the fractional column densities, relative velocities, and $b$-values, allowing only the total column density and the absolute velocity of the profile template to vary freely. When multiple absorption complexes were evident in the profile, a template was created for each complex and fitted to that portion of the profile independently. A similar approach was adopted by R11, who demonstrated the reliability and effectiveness of using profile templates to fit weak interstellar features. For the present results, one can judge the degree to which the templates match the observed spectra by examining the examples presented in Figures~2 through 17. (In Table 4, we present the final component structures obtained for O~{\sc i}, Ga~{\sc ii}, and Ge~{\sc ii} for sight lines not analyzed by R11. The table also includes component results for HD~99890 derived from high-resolution spectra newly analyzed in this work.) Table 5 presents the total (line-of-sight) column densities of O~{\sc i}, Ga~{\sc ii}, Ge~{\sc ii}, As~{\sc ii}, Kr~{\sc i}, Cd~{\sc ii}, Sn~{\sc ii}, and Pb~{\sc ii} for the 69 sight lines in our primary sample. (A compilation of column density measurements from the literature for an additional 59 sight lines with STIS or GHRS data is presented in Appendix A.) Uncertainties in total column density for the sight lines in our primary sample were calculated as the quadrature sums of the uncertainties in the column densities of the individual components contributing to the absorption profiles. For the individual components, the column density uncertainties are proportional to the corresponding equivalent width uncertainties, which are based on the widths of the components (i.e., the full widths at half maximum) and the rms variations in the continua surrounding the profiles. This approach of basing the column density uncertainties on equivalent width uncertainties is justified since the majority of absorption lines in our survey fall on the linear portion of the curve of growth. 
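To make the template-fitting scheme concrete, the Python sketch below fits a weak line using a component structure held fixed from the O~{\sc i}~$\lambda1355$ decomposition, leaving only the total column density and an overall velocity offset free. This is a schematic illustration under simplifying assumptions, not the ISMOD code itself: the component parameters shown are invented for the example, each component is treated as an optically thin Gaussian in velocity, and instrumental smearing is ignored.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def make_template(fracs, vels, bs, f_osc, wave):
    """Two-parameter model built from a fixed component template:
    fracs = fractional column densities, vels = relative velocities
    (km/s), bs = b-values (km/s), all held fixed from the O I fit."""
    def model(v, log_ntot, v_off):
        tau = np.zeros_like(v, dtype=float)
        for frac, dv, b in zip(fracs, vels, bs):
            # Central optical depth of a Doppler-broadened component:
            # tau0 = 1.497e-15 N f lambda / b, with N in cm^-2,
            # lambda in Angstroms, and b in km/s.
            tau0 = 1.497e-15 * frac * 10.0**log_ntot * f_osc * wave / b
            tau += tau0 * np.exp(-((v - v_off - dv) / b)**2)
        return np.exp(-tau)  # normalized intensity
    return model

# Hypothetical two-component template applied to Pb II 1433.9
# (f = 0.321); v, flux, err hold the normalized spectrum.
model = make_template(fracs=[0.7, 0.3], vels=[0.0, 8.5],
                      bs=[2.1, 3.0], f_osc=0.321, wave=1433.9)
popt, pcov = curve_fit(model, v, flux, p0=[11.5, 0.0], sigma=err)
log_ntot, v_off = popt
\end{verbatim}
Because each component in this sketch is optically thin, the fitted column density responds linearly to the absorbed flux, which is consistent with tying the column density uncertainties to the equivalent width uncertainties as described above.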
The Ge~{\sc ii}~$\lambda1237$ lines, however, are stronger than any of the other lines we examine, and may be somewhat saturated in many cases. We therefore added (in quadrature) an extra amount of uncertainty to the error for Ge~{\sc ii} based on an estimate of the degree of saturation in the line profile. To estimate the degree of saturation, we considered the difference between the fitted column density and the column density one would obtain under the assumption that the line is optically thin. Our method is equivalent to varying the effective $b$-value by about 10\% using a traditional curve-of-growth analysis. (This process increased the uncertainty in the Ge~{\sc ii} column density by only 0.01 dex, on average. However, for the more heavily saturated sight lines, the increase in uncertainty was as high as 0.04 dex.) For all non-detections, and cases where the derived column density was less than twice the calculated uncertainty, we determined 3$\sigma$ upper limits by calculating what the error in column density would be if the undetected line had a component structure identical to that of O~{\sc i}~$\lambda1355$. In situations where we had both high-resolution and medium-resolution spectra covering a particular absorption feature, we analyzed the two absorption profiles separately as a check on consistency. In most of these cases, the column densities derived from the high and medium-resolution profiles agree with each other at the 1$\sigma$ level or better. For HD~1383 and HD~210809, the Ge~{\sc ii} column densities from high and medium-resolution data covering the $\lambda1237$ line agree at about the 2$\sigma$ level. These small discrepancies are likely the result of differences in continuum placement since, in both cases, the absorption profiles have multiple complexes that span a large range in velocity (55 to 75 km~s$^{-1}$), and the medium-resolution profiles have rather low S/N values (S/N~$\sim$~25), making it difficult to judge the proper location of the continuum. That, even in these more difficult cases, the results agree at the 2$\sigma$ level suggests that our error estimates adequately account for uncertainties due to both noise and continuum placement. We also analyzed the Cd~{\sc ii}~$\lambda2145$ and $\lambda2265$ profiles separately (adopting the same component structure for both lines, determined either from the stronger Cd~{\sc ii} line or from O~{\sc i}~$\lambda1355$). For all seven sight lines with Cd~{\sc ii} detections, the column densities derived from the $\lambda2145$ and $\lambda2265$ lines agree at the 1$\sigma$ level or better. Final column densities in cases where both high and medium-resolution spectra yielded reliable determinations were obtained by taking the weighted mean of the two results. The same approach was used to derive final Cd~{\sc ii} column densities from our independent measurements of the $\lambda2145$ and $\lambda2265$ lines. \subsection{Comparison with Previous Studies} As noted earlier, many of the sight lines in our primary sample have been analyzed previously for the purposes of deriving column densities of Ge~{\sc ii}, Kr~{\sc i}, or both (e.g., Cartledge et al.~2003, 2006, 2008). A comparison between our determinations for Ge~{\sc ii} and Kr~{\sc i} for these sight lines and those from the literature is presented in Table 6. Overall, we find good agreement between the column densities we derive and those found in previous investigations, most of which relied on the same STIS data that we examine here. 
If we consider all of the Ge~{\sc ii} and Kr~{\sc i} comparisons together, we find that in 62\% of the cases the column densities agree at the 1$\sigma$ level or better, while in 96\% of the cases the determinations agree within 2$\sigma$. In only two cases do the column densities disagree by more than 2$\sigma$. One of these is the Ge~{\sc ii} column density toward HD~210809, which we find to be 0.24 dex lower than that given in Cartledge et al.~(2006).\footnote{The Ge~{\sc ii} column densities listed in Cartledge et al.~(2006) have been revised in Table 6 to reflect the new experimental $f$-value we adopt for Ge~{\sc ii}~$\lambda1237$ from Heidarian et al.~(2017).} The difference in this case does not appear to be related to any optical depth effects since the equivalent widths disagree by approximately the same amount as the column densities. Rather, the discrepancy seems to be related to data quality issues. In the time since Cartledge et al.~(2006) published their value, new high-resolution STIS spectra have been obtained for HD~210809 on two separate occasions (under GO programs 11737 and 12192). The new data, when co-added with the original spectrum, yield a factor of nine increase in exposure time for the region near Ge~{\sc ii}~$\lambda1237$, corresponding to a factor of three decrease in the uncertainty of the Ge~{\sc ii} equivalent width. The other case in which our column density determination disagrees with a previously published value by more than 2$\sigma$ is the Kr~{\sc i} column density toward HD~152590. Our result is 0.15 dex lower than the value listed in Cartledge et al.~(2008). Here, again, the equivalent widths disagree by approximately the same amount as the column densities, indicating that the issue lies in the placement of the continuum. Indeed, as Cartledge et al.~(2008) also point out, the underlying stellar spectrum of HD~152590 exhibits an unusually steep curvature in the immediate vicinity of the interstellar Kr~{\sc i}~$\lambda1235$ feature. HD~152590 is one of two sight lines (along with HD~116852) that were previously identified by Cartledge et al.~(2003) as potentially having elevated Kr abundances. That conclusion for HD~152590 was based on the Kr~{\sc i} column density originally published by Cartledge et al.~(2001). In their subsequent work, Cartledge et al.~(2008) revised their column density determination for this sight line downward by 0.17 dex after acquiring additional STIS observations of HD~152590 that substantially increased the exposure time near Kr~{\sc i}~$\lambda1235$. With this revision, HD~152590 no longer appeared to exhibit a Kr abundance that deviated significantly from the mean interstellar value of log~(Kr/H)~$\approx$~$-$9.0 (Cartledge et al.~2008).\footnote{Cartledge et al.~(2008) acquired additional STIS spectra of HD~116852 as well, and used those observations to derive a Kr~{\sc i} column density that was 0.23 dex lower than the value given in Cartledge et al.~(2003). As with HD~152590, this revision eliminated the appearance of a Kr enhancement toward HD~116852.} Our determination for Kr~{\sc i} toward HD~152590 further indicates that the Kr abundance is likely not enhanced in this direction. Two other comparisons between our results and those of Cartledge et al.~(2008) bear mentioning. The Kr~{\sc i} column densities we find toward HD~104705 and HDE~303308 are larger than the Cartledge et al.~(2008) values by 0.18 dex and 0.12 dex, respectively. 
While these are only $\sim$1$\sigma$ discrepancies (and thus not especially concerning), the reason for the discrepancies illustrates an important aspect of our approach to profile fitting. Both HD~104705 and HDE~303308 are extended sight lines (with path lengths of 5.0 and 3.8 kpc, respectively) that cross the Sagittarius-Carina spiral arm. The interstellar absorption profiles of moderately strong lines like O~{\sc i}~$\lambda1355$ and Ge~{\sc ii}~$\lambda1237$ in these directions exhibit a strong complex of absorption components near $v_{\mathrm{LSR}}=0$~km~s$^{-1}$ (presumably tracing gas in the local ISM) along with weaker components at more negative velocities (which likely probe gas in the Sagittarius-Carina spiral arm). In Kr~{\sc i}~$\lambda1235$, however, only the local ISM components are readily detectable. It appears that Cartledge et al.~(2008), in their analysis of Kr~{\sc i} toward HD~104705 and HDE~303308, included only these readily detectable components in their profile fits. (We find essentially the same column densities if we include only these more obvious features in our fits.) However, such an approach would tend to underestimate the Kr abundance since the total hydrogen column density applies to the entire line of sight. With our approach of fitting weak interstellar features with profile templates (constructed from the components seen in O~{\sc i}~$\lambda1355$ in these cases), we are able to include all of the relevant line-of-sight components in our fits even when some of those components would not be detected significantly on their own. \section{DETERMINATION OF ELEMENT DEPLETION PARAMETERS} \subsection{Elemental Abundances} Since the ionization potentials for all of the neutral and singly-ionized species we examine are greater than (or approximately equal to) that of neutral hydrogen, we can be reasonably well assured that the column densities we obtain for these species represent the total column densities of the elements for the lines of sight included in our survey. As demonstrated in Figure 1, none of the sight lines we examine have total hydrogen column densities below log~$N$(H$_{\mathrm{tot}}$)~$\approx$~20.0, where ionization effects begin to become important. We can thus derive the elemental abundances for these sight lines in the usual way. That is, we define the (logarithmic) abundance of element $X$ as log~($X$/H)~$\equiv$~log~$N$($X$)~$-$~log~$N$(H$_{\mathrm{tot}}$), where $N$($X$) refers to the column density of the element in its preferred stage of ionization for neutral diffuse gas. For most of the sight lines, we were able to calculate total hydrogen column densities, $N$(H$_{\mathrm{tot}}$)~=~$N$(H~{\sc i})~+~2$N$(H$_2$), from the values of $N$(H~{\sc i}) and $N$(H$_2$) listed in Table 2 of J09 (where references to the original sources of those values may also be found). For HD~148937, which was not included in J09, we obtained $N$(H~{\sc i}) from Diplas \& Savage (1994) and $N$(H$_2$) from Sheffer et al.~(2007). We were unable to determine values of $N$(H$_{\mathrm{tot}}$) for 17 of the 69 sight lines in our primary sample, however, due to a lack of available information on H~{\sc i} and/or H$_2$. For the other 52 sight lines, we list in Table 7 the total hydrogen column densities along with the corresponding elemental abundances based on the column density measurements presented in Table 5. Uncertainties in these quantities were determined through standard error propagation techniques.
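In practice, this calculation and its error budget amount to only a few lines. The sketch below (our own illustration, with hypothetical input values) combines $N$(H~{\sc i}) and $N$(H$_2$) into $N$(H$_{\mathrm{tot}}$) and propagates the dex uncertainties through to the abundance:
\begin{verbatim}
import numpy as np

def total_hydrogen(log_nhi, sig_nhi, log_nh2, sig_nh2):
    """log N(H_tot) = log[N(H I) + 2 N(H2)] with propagated errors."""
    nhi, nh2 = 10.0**log_nhi, 10.0**log_nh2
    ntot = nhi + 2.0 * nh2
    # Convert dex errors to linear space, add in quadrature,
    # then convert the summed error back to dex.
    var = (nhi * np.log(10.0) * sig_nhi)**2 \
        + (2.0 * nh2 * np.log(10.0) * sig_nh2)**2
    sig_tot = np.sqrt(var) / (ntot * np.log(10.0))
    return np.log10(ntot), sig_tot

def abundance(log_nx, sig_nx, log_ntot, sig_ntot):
    """log (X/H) = log N(X) - log N(H_tot); errors in quadrature."""
    return log_nx - log_ntot, np.hypot(sig_nx, sig_ntot)

# Hypothetical sight line: log N(H I) = 21.20 +/- 0.10,
# log N(H2) = 20.30 +/- 0.10, and log N(X) = 12.50 +/- 0.05.
log_ntot, sig_ntot = total_hydrogen(21.20, 0.10, 20.30, 0.10)
log_xh, sig_xh = abundance(12.50, 0.05, log_ntot, sig_ntot)
\end{verbatim}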
(Total hydrogen column densities for sight lines with column density measurements from the literature are provided in Appendix A.) In compiling values of $N$(H~{\sc i}) from the literature for the sight lines in his survey, J09 was careful to make corrections for stellar Ly$\alpha$ absorption in cases where the background stars had spectral types B1 or cooler. In two cases relevant to our investigation, however, those corrections carried large uncertainties, and the nominal corrections appear to have been too large. For HD~27778 and HD~203532, J09 obtained interstellar H~{\sc i} column densities of log~$N$(H~{\sc i})$_{\mathrm{ISM}}$~=~20.35 and 20.22, respectively. In both cases, however, the error associated with the determination of $N$(H~{\sc i})$_{\mathrm{ISM}}$ was larger than the value itself. The observed H~{\sc i} column densities toward HD~27778 and HD~203532 are log~$N$(H~{\sc i})$_{\mathrm{obs.}}$~=~$21.10\pm0.12$ and $21.27\pm0.09$ (Cartledge et al.~2004). These uncorrected H~{\sc i} column densities, when combined with the corresponding H$_2$ column densities from Cartledge et al.~(2004), yield total hydrogen column densities of log~$N$(H$_{\mathrm{tot}}$)~=~21.36$^{+0.08}_{-0.09}$ for HD~27778 and 21.44$^{+0.07}_{-0.08}$ for HD~203532. If the J09 corrections were adopted instead, the total hydrogen column densities would be log~$N$(H$_{\mathrm{tot}}$)~=~21.10 and 21.02. Using the results of his depletion analysis, J09 also derived ``synthetic'' values of $N$(H$_{\mathrm{tot}}$) for his sight lines based on the relative gas-phase abundances of the elements observed (and the pattern of depletions expected for those elements). For HD~27778 and HD~203532, the synthetic values of $N$(H$_{\mathrm{tot}}$) were found to be log~$N$(H$_{\mathrm{tot}}$)$_{\mathrm{syn.}}$~=~$21.38\pm0.07$ and $21.37\pm0.08$. The good agreement between these synthetic values and the values obtained when the uncorrected H~{\sc i} column densities are adopted suggests that the corrections for stellar Ly$\alpha$ absorption derived by J09 may be too large for these particular stars. We therefore use the uncorrected values when calculating total hydrogen column densities and abundances for these sight lines. In two other instances, we were able to obtain values for $N$(H$_{\mathrm{tot}}$) despite having incomplete information. No molecular hydrogen column densities are available for HD~37021 or HD~37061 (primarily because these stars were never observed by the \emph{FUSE} satellite). However, as first pointed out by Cartledge et al.~(2001), these sight lines show very little, if any, Cl~{\sc i} absorption in the relatively strong line at 1347.2~\AA, suggesting that little H$_2$ is present, given the close correspondence between neutral chlorine and molecular hydrogen (Jura \& York 1978; Moomey et al.~2012). To derive more quantitative estimates for $N$(H$_2$) in these directions, we examined the available STIS spectra of the two stars in the vicinity of Cl~{\sc i}~$\lambda1347$. Toward HD~37021, there is no discernible absorption feature at the expected position of the Cl~{\sc i} line, and we derive a 3$\sigma$ upper limit on the Cl~{\sc i} column density of log~$N$(Cl~{\sc i})~$<$~11.6. Weak absorption from Cl~{\sc i}~$\lambda1347$ is detected toward HD~37061 with an equivalent width of $9.1\pm0.8$~m\AA{} and a column density of log~$N$(Cl~{\sc i})~=~$12.60\pm0.04$.
These Cl~{\sc i} column densities indicate that the H$_2$ column densities are not likely to be greater than log~$N$(H$_2$)~$\approx$~19.0 (e.g., Moomey et al.~2012). Since the H~{\sc i} column densities toward HD~37021 and HD~37061 are log~$N$(H~{\sc i})~=~$21.65\pm0.13$ and $21.73\pm0.09$ (J09), such small contributions from H$_2$ would not alter the total hydrogen column densities in any significant way. We therefore accept the H~{\sc i} column densities as being representative of the total amounts of hydrogen along the lines of sight. \subsection{Depletion Parameters} One of our primary objectives in investigating the abundances of $n$-capture elements in the ISM is to uncover the extent to which the elements are depleted onto interstellar dust grains and also to examine how the depletions change with changing environmental conditions from one sight line to another. Without this knowledge, it would be difficult to draw any definitive conclusions regarding the ways in which $s$- and $r$-process nucleosynthesis might be affecting interstellar abundances in the current epoch. In a landmark study of interstellar depletions, J09 examined the depletion characteristics of 17 different elements (including Ge and Kr) in a sample of 243 sight lines probing the local Galactic ISM. We seek to perform a similar analysis for the elements studied in this investigation so that their depletion properties may be directly compared with those of more abundant elements. Presumably, this will allow us to ascertain whether the $n$-capture elements follow ``normal'' depletion patterns, or whether their abundances have been affected in some way by nucleosynthetic processes. The unified framework that J09 developed was predicated on the empirical observation that, while different elements exhibit different degrees of depletion, the depletions of most elements tend to increase in a systematic way as the overall strength of depletions increases from one sight line to the next. Sight lines with stronger depletions are thought to contain higher proportions of denser and/or colder gas. This phenomenon of ``density-dependent'' depletions has been studied by others (e.g., Cartledge et al.~2006; R11), and is generally taken as evidence of grain growth in the diffuse ISM. \subsubsection{Definitions and Methodology} As a first step toward developing a unified framework within which to characterize the depletions of different elements along different lines of sight, J09 defined a line-of-sight depletion strength factor, denoted $F_*$, which indicates for a given direction the extent to which depletion processes have succeeded in removing atoms from the gas phase. The value of $F_*$ for a specific sight line is based on a weighted average of the available observed depletions for that direction (see Equation (4) in J09). Sight lines showing strong depletions, such as those seen in the low velocity ($v_{\mathrm{LSR}}=-1$~km~s$^{-1}$) component toward $\zeta$~Oph, have depletion factors near $F_*=1$, while sight lines showing only very modest depletions have values of $F_*$ closer to zero. In an idealized situation, the depletion of element $X$, defined in logarithmic terms as [$X$/H]~=~log~($X$/H)~$-$~log~($X$/H)$_{\sun}$, is related to the sight-line depletion factor $F_*$ according to: \begin{equation} [X/\mathrm{H}]=B_X+A_X(F_*-z_X), \end{equation} \noindent where the depletion parameters, $A_X$, $B_X$, and $z_X$, are unique to element $X$. 
Among these element-specific parameters, the depletion slope $A_X$ is the most fundamental. It indicates how much the depletion of a particular element changes as the growth of dust grains progresses within interstellar clouds. (Note that the value of the $A_X$ parameter does not depend on the choice of the solar reference abundance to which the measured gas-phase abundances are compared, and that it is largely insensitive to the adopted $f$-values.) The intercept parameter $B_X$ indicates the expected depletion of element $X$ at $F_*=z_X$, where $z_X$ represents a weighted mean value of $F_*$ for the particular set of depletion measurements under consideration. Values of the coefficients $A_X$ and $B_X$ may be obtained for each element through the evaluation of a least-squares fit, with [$X$/H] as the dependent variable and $F_*$ the independent variable. (The reason for the additional term involving $z_X$ in Equation (1) is that for a particular choice of $z_X$ there is a near zero covariance between the formal fitting errors for the solutions of $A_X$ and $B_X$; see J09.) Since neither the element-specific depletion parameters nor the sight-line depletion factors are known \emph{a priori}, J09 used an iterative procedure to evaluate these parameters and determine fiducial values of $F_*$ for the sight lines in his survey. We do not re-evaluate the depletion factors for the sight lines in common with our survey (even though new depletion measurements are now available for those sight lines for elements not considered by J09) since any changes to those $F_*$ values would likely be small. Rather, we simply adopt the values (and their associated uncertainties) already calculated by J09 (accepting only those values determined directly from the observed depletions, and not the so-called ``synthetic'' values derived from least-squares fits to the relative gas-phase abundances, even in cases where an observed value is not available). This choice allows us to more easily compare the depletion trends exhibited by the $n$-capture elements examined here with those found for the more abundant elements studied by J09. The adopted values of $F_*$ for the sight lines in our primary sample are listed along with the total hydrogen column densities in Table 7. ($F_*$ values for sight lines with column density measurements from the literature are given in Appendix A.) \subsubsection{Solar Reference Abundances} Having established suitable values for the total hydrogen column densities and sight-line depletion factors for as many lines of sight as possible from our survey, the only remaining component required for our analysis is the set of solar reference abundances against which to measure the interstellar depletions.\footnote{While the abundances in young F and G-type stars of the solar neighborhood might serve as a better set of reference abundances for the present-day ISM, these abundances are not very much different from those of the Sun (e.g., Lodders et al.~2009). Moreover, the $n$-capture elements that are the focus of our investigation have generally not been observed in such stars.} Following J09, we adopt the recommended solar system abundances from Lodders (2003), which we list in Table 8 for the elements of interest to our survey. 
While there are more modern references for the chemical composition of the Sun and solar system (e.g., Asplund et al.~2009; Lodders et al.~2009; Grevesse et al.~2015), use of the Lodders (2003) abundances allows us to place the element depletion parameters that we derive on the same scale as those in J09. The differences between the abundances in Lodders (2003) and those in more recent compilations for the $n$-capture elements in our investigation are minimal regardless. Solar abundances for the elements Ga, Ge, As, Cd, Sn, and Pb tend to be based on meteoritic abundances since these are generally more accurate than the corresponding photospheric abundances. Lodders et al.~(2009) compile a set of updated meteoritic abundances, yet these differ from the ones in Lodders (2003) by at most 0.02 dex for the elements of interest. (The solar Kr abundance from Lodders (2003) is based on theoretical $s$-process production rates; the value has not changed significantly in more recent compilations.) A potentially more significant systematic effect involves the correction applied to the solar photospheric (and meteoritic) abundances to account for the gravitational settling that is thought to have occurred between the formation of the protosolar nebula and the present-day Sun. In deriving her recommended solar system (i.e., protosolar) abundances, Lodders (2003) applied a correction of +0.074~dex to the photospheric abundances of all elements heavier than He. In Lodders et al.~(2009), that correction was reduced to +0.053~dex. Asplund et al.~(2009) recommend a correction of +0.04~dex. While we adopt the protosolar abundances from Lodders (2003), for consistency with J09, we acknowledge that the upward correction of 0.074 dex applied to the present-day photospheric abundances may be too large. Of course, this is a systematic correction that affects all depletion measurements in the same way, and thus can be easily adjusted if a different overall correction proves to be more appropriate. \subsubsection{Least-Squares Fits} Depletion parameters for the elements O, Ga, Ge, As, Kr, Cd, Sn, and Pb were determined through linear least-squares fits to trends of [$X$/H] versus $F_*$, adopting the functional form expressed in Equation (1). The least-squares fits were performed using the Interactive Data Language (IDL) procedure FITEXY, which properly accounts for errors in both the $x$ and $y$ coordinates (see Press et al. 2007). The resulting values of $A_X$, $B_X$, and $z_X$ for each element are presented in Table 8, while the fits themselves are shown in Figures 18 through 20.\footnote{Note that the ordinate of the plots in Figures 18$-$20 is log~($X$/H) rather than [$X$/H]. This was done so that the raw abundance measurements could be displayed, making it easy to see how the depletions would change if, for example, a different solar reference abundance were adopted.} (The least-squares fits described in this section are depicted by the solid orange lines in the plots for the different elements.) Since the values we obtain for $B_X$ depend on our choice of reference abundances, the errors associated with the adopted solar system abundances were added in quadrature to the formal fitting errors for $B_X$ to derive the $B_X$ uncertainties given in Table 8. (The same is not true of the $A_X$ uncertainties since the derived values of $A_X$ are entirely independent of the adopted reference abundances.) 
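FITEXY is part of the IDL astronomy ecosystem; readers working in other environments can perform a comparable errors-in-both-variables fit of Equation (1) with orthogonal distance regression, as in the sketch below. We emphasize that this is an illustrative alternative rather than the routine we used (ODR and FITEXY are related but not identical estimators), and the variable names and starting values are ours.
\begin{verbatim}
import numpy as np
from scipy import odr

Z_X = 0.5  # placeholder; in practice z_X is chosen per element so
           # that the fitting errors on A_X and B_X decouple (J09)

def depletion_model(beta, f_star):
    """[X/H] = B_X + A_X (F_* - z_X), with beta = (A_X, B_X)."""
    a_x, b_x = beta
    return b_x + a_x * (f_star - Z_X)

# f_star, x_h and their 1-sigma uncertainties are the measured
# depletion factors and depletions (hypothetical arrays).
data = odr.RealData(f_star, x_h, sx=sig_fstar, sy=sig_xh)
fit = odr.ODR(data, odr.Model(depletion_model),
              beta0=[-1.0, -0.5]).run()
a_x, b_x = fit.beta
sig_a, sig_b = fit.sd_beta
\end{verbatim}
The fitted slope and intercept then yield the depletions at the endpoints of the $F_*$ scale through the relations given below.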
\begin{figure} \centering \includegraphics[width=0.99\textwidth]{f18.eps} \caption{Gas-phase abundances as a function of the sight line depletion factor ($F_*$) from J09 for the elements O, Ga, and Ge. Solid symbols represent abundances derived in this work; open symbols are used for results obtained from the literature (squares: STIS; diamonds: GHRS). Grey symbols denote sight lines where $\sigma(F_*)\ge0.30$. The solid orange line shows the linear fit based on the methodology of J09, with parameters given in Table 8. The solid black line shows the fit based on a survival analysis, with parameters given in Table 9. The horizontal dotted line in each panel gives the adopted solar system abundance from Lodders (2003).} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\textwidth]{f19.eps} \caption{Same as Figure~18 except for the elements As, Kr, and Cd. All upper limits are our determinations.} \end{figure} \begin{figure} \centering \includegraphics[width=0.66\textwidth]{f20.eps} \caption{Same as Figure~18 except for the elements Sn and Pb. All upper limits are our determinations.} \end{figure} The $B_X$ parameters are not fundamental quantities intrinsic to the elements since they are sensitive to the specific set of depletion measurements available in each case. If different sets of observations were considered, particularly if the sight lines had much different distributions of $F_*$ values, then the derived values of $B_X$ (and $z_X$) could be much different from the values listed in Table 8 (even if the underlying depletion trends were the same). It is therefore important to evaluate two additional depletion parameters, which are insensitive to the particular sample of observations under consideration, so that the depletions of many different elements may be compared in a consistent manner. Following J09, we evaluate for each element the parameters: \begin{equation} [X/\mathrm{H}]_0=B_X-A_Xz_X \end{equation} \noindent and \begin{equation} [X/\mathrm{H}]_1=B_X+A_X(1-z_X), \end{equation} \noindent which yield the expected depletions at $F_*=0$ and $F_*=1$, respectively. The former indicates the initial amount of depletion present in the diffuse ISM before significant grain growth has occurred (or after the outer portions of the grains have been destroyed by the passage of an interstellar shock, for example), while the latter represents the depletion seen in a relatively dense and/or cold interstellar cloud, such as the $v_{\mathrm{LSR}}=-1$~km~s$^{-1}$ cloud toward $\zeta$ Oph. The values we obtain for [$X$/H]$_0$ and [$X$/H]$_1$ for the elements in our survey, calculated from the corresponding values of $A_X$, $B_X$, and $z_X$, are given in Table 8. (The uncertainties in these quantities were determined according to the relations given in J09.) The last three columns of Table 8 list for each element the $\chi^2$ value associated with the least-squares fit, the degrees of freedom in the fit (i.e., the number of observations minus two), and the probability of obtaining a worse fit than that which we obtain. Exceptionally low probabilities of obtaining a worse fit are found for the elements Ga, Sn, and Pb, which could indicate that the model developed by J09 does not provide a complete description of the variations seen in the gas-phase abundances of these elements. (This could be the case if, for example, there are true abundance variations that are superimposed onto the general trend due to depletion.)
Alternatively, the low probabilities could be an indication that the uncertainties in the measured column densities have been underestimated. The opposite situation is seen in our results for the elements O and Kr, where the relatively low $\chi^2$ values yield high probabilities of a worse fit. That we obtain such high values for these probabilities may indicate that the column density uncertainties have been overestimated in these cases. \subsection{Survival Analysis} In deriving column densities for the sight lines in our primary sample, considerable effort was made to determine upper limits for any species covered by the observations that were not detected at the 2$\sigma$ level or greater. Moreover, a number of sight lines were included in our sample even though they did not show any evidence of absorption from As~{\sc ii}, Cd~{\sc ii}, Sn~{\sc ii}, or Pb~{\sc ii}, which were the primary objectives of our archival search. This was done in recognition of the fact that upper limits on column densities would still be useful, particularly in cases where few confirmed detections are available. An illustration of this fact is provided by the plot showing the depletion trend for Pb in Figure 20. The least-squares fit for Pb (represented by the solid orange line in the plot) indicates that the depletion slope $A_{\mathrm{Pb}}$ is quite steep, and that the initial value of the depletion [Pb/H]$_0$ is \emph{positive}, suggesting a gas-phase abundance considerably higher than the adopted solar abundance. However, there is reason to suspect that this fit does not truly represent the depletion properties of Pb in the ISM. The sight lines yielding Pb~{\sc ii} detections all fall within a very narrow range in $F_*$ (between about 0.6 and 1.0), meaning that there is very little leverage for determining the slope of the depletion trend. Indeed, the $A_{\mathrm{Pb}}$ parameter has the highest uncertainty among all of the $A_X$ values in Table 8. Furthermore, there are a number of upper limits at small values of $F_*$ that lie below the best-fit line representing the least-squares fit. This is a clear indication that the depletion slope for Pb is most likely not as steep as the least-squares solution would indicate. A similar situation may be seen in the case of As (Figure 19), where there are a handful of upper limits at low $F_*$ that appear to be inconsistent with the least-squares fit. The methods typically employed for analyzing observational data containing nondetections (or ``censored'' data points) are collectively known as survival analysis (e.g., Feigelson \& Nelson 1985; Isobe et al.~1986). For our purposes, we wish to perform a linear regression between the values of [$X$/H] and $F_*$ that does not completely ignore the presence of the upper limits that were derived in cases where the depletions could not be measured. To accomplish this, we employ Schmitt's binned method (Schmitt 1985) to derive the regression coefficients (i.e., the slope and intercept of the regression line), using the task SCHMITTBIN, which is part of the STATISTICS package within the Space Telescope Science Data Analysis System (STSDAS). Along with estimates for the slope and intercept, the SCHMITTBIN task outputs the standard deviations of these estimates using Schmitt's bootstrap error analysis (Schmitt 1985). The linear regression coefficients resulting from our application of this method are presented in Table 9, which also lists for each element the number of detections and nondetections considered in the analysis.
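Schmitt's binned method is one of several ways to incorporate censored points into a regression. As an independent check on how the upper limits constrain the fit, one can also maximize a censored-data (Tobit-style) likelihood directly, as in the sketch below; this is our own illustration, not the SCHMITTBIN algorithm, and it replaces the individual measurement errors with a single scatter parameter. Detections contribute Gaussian residual terms, while each upper limit contributes the probability that the true depletion lies below the quoted limit.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_like(params, f_star, x_h, is_limit):
    """Censored-regression likelihood: Gaussian residuals for
    detections; Gaussian CDF terms for the upper limits.
    is_limit is a boolean array flagging the nondetections."""
    slope, intercept, scatter = params
    if scatter <= 0.0:
        return np.inf
    resid = (x_h - (intercept + slope * f_star)) / scatter
    ll = norm.logpdf(resid[~is_limit]).sum() \
         - np.count_nonzero(~is_limit) * np.log(scatter)
    ll += norm.logcdf(resid[is_limit]).sum()  # value below limit
    return -ll

# f_star, x_h: depletion factors and depletions (or 3-sigma
# limits); hypothetical arrays.
res = minimize(neg_log_like, x0=[-1.0, -0.5, 0.2],
               args=(f_star, x_h, is_limit), method='Nelder-Mead')
slope, intercept, scatter = res.x
\end{verbatim}
Like Schmitt's method, this treatment lets the nondetections at low $F_*$ pull the regression line downward, but it likewise ignores the errors on the individual detections.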
(The corresponding regression lines are shown on the depletion plots presented in Figures 18$-$20 as solid black lines.) While our motivation in performing this analysis was to better understand the depletion trends for elements like As and Pb, where the observational data are heavily censored, we applied this method to all elements so that the results for elements like O and Ge, which have zero nondetections, could be compared with the results of the least-squares fits described above to check for consistency. In making these comparisons, we note that the survival analysis does not account for errors in either the independent or dependent variable. Thus, differences in the regression coefficients derived using the two methods might be expected, even in cases where all data points represent detections. Still, by comparing the $A_X$ values from Table 8 with the corresponding slope parameters in Table 9, we find that, in most cases, the results agree within their mutual uncertainties (at about the 2$\sigma$ level or better). The largest differences are seen for the elements As, Sn, and Pb. In all three cases, the survival analysis regression suggests that the depletion slope may not be as steep as indicated by the least-squares fit. Furthermore, the intercept parameters derived through survival analysis for As, Sn, and Pb (Table 9) suggest that the initial depletions of these elements are all approximately zero. These findings contrast with the high \emph{positive} values of [As/H]$_0$, [Sn/H]$_0$, and [Pb/H]$_0$ derived from the least-squares fits (Table 8), which seem to indicate that the gas-phase abundances of As, Sn, and Pb are supersolar in low-depletion sight lines. To further investigate the differences in slope between the least-squares linear fits and the survival analysis regressions for As, Sn, and Pb, we reevaluated the survival analysis fits after removing the upper limits from consideration. We did this to test whether the differences in slope are driven by the inclusion of the upper limits or are simply a result of the fact that the survival analysis regressions do not account for errors in the measured quantities. For As, it seems that the difference in slope between the orange and black lines in Figure 19 is entirely driven by the inclusion of the upper limits since the survival analysis regression is nearly identical to the least-squares linear fit when the upper limits are not considered. For Sn and Pb, however, much of the difference in the slopes of the lines presented in Figure 20 is due to the fact that the survival analysis regressions do not account for errors in the abundances or depletion factors. Ultimately, more precise abundance measurements are needed, particularly at small values of $F_*$, to better constrain the slopes of the depletion trends for these elements. \section{DISCUSSION} \subsection{Extending the Analysis of Interstellar Depletions to Rare $n$-Capture Elements} In the preceding sections, we have described our efforts to examine the gas-phase abundances and depletion behaviors of $n$-capture elements in a comprehensive and consistent manner. Such efforts are necessary for developing a more complete picture of element depletions in the local Galactic ISM. In his analysis of the depletion patterns of 17 different elements, J09 demonstrated that the depletions of most elements are highly correlated, meaning that the depletion of any individual element tends to increase as the overall strength of depletions increases from one sight line to another. 
(Only for the element N did J09 find that the observed depletions remain constant even as the value of $F_*$ increases.) Our investigation, and in particular our derivation of the element depletion parameters in Section 4.2, allows us to extend the analysis of J09 to include the elements Ga, As, Cd, Sn, and Pb. (In Appendix B, we present depletion parameters derived using the same methods for the element B based on the observational data reported by R11.) These results allow us to better understand the depletion behaviors of elements with low-to-moderate condensation temperatures, specifically those with $T_C$ in the range 600 to 1100~K, where the transition from relatively mild to relatively strong depletions begins to occur (Savage \& Sembach 1996; J09). A central question to ask is whether the elements studied in this investigation participate in the same collective depletion behavior exhibited by the other more abundant elements studied by J09. Before addressing that question specifically, we first compare our results to those reported in J09 for the elements O, Ge, and Kr since these elements were considered in both investigations. We focus mainly on the derived values of the parameters $A_X$ and [$X$/H]$_0$ since together these two parameters fully characterize the depletion trend for a given element. By comparing the $A_X$ and [$X$/H]$_0$ values from Table 8 for O, Ge, and Kr with the corresponding values from Table 4 of J09, we find that in each case the results agree at the 1$\sigma$ level or better,\footnote{There is a systematic offset between our values for $B_{\mathrm{Ge}}$, [Ge/H]$_0$, and [Ge/H]$_1$ and those of J09 due to our use of a new experimental $f$-value for the Ge~{\sc ii}~$\lambda1237$ transition (Heidarian et al.~2017). The J09 values must be increased by +0.149 for comparison with our values.} although our analysis has led to a reduction in the uncertainties associated with each of these parameters. The smaller uncertainties reflect the fact that our samples are larger\footnote{Our O, Ge, and Kr samples are larger than those in J09 both because we have measurements for more sight lines and because we do not place any restrictions on the samples in terms of which sight lines we include in the analysis. Among the restrictions imposed in his analysis, J09 considered only those stars with Galactocentric distances in the range $7<R_{\mathrm{GC}}<10$~kpc out of concern that an abundance gradient could distort the results for stars outside the solar circle. Later, he found that no abundance gradient was evident even when all sight lines were considered.} and may also be due to our having redetermined the O~{\sc i}, Ge~{\sc ii}, and Kr~{\sc i} column densities in a consistent way for all of the sight lines in our primary sample. (Recall the discussion in Section 3.3 concerning Kr~{\sc i} toward HD~104705 and HDE~303308. If we had simply adopted the Kr~{\sc i} column densities from Cartledge et al.~(2008) for these sight lines, the Kr depletions would have seemed unusually large compared to their respective values of $F_*$.) The largest reductions in uncertainty are seen for the depletion slope parameters $A_{\mathrm{Ge}}$ and $A_{\mathrm{Kr}}$. The errors we find for these quantities are lower than their counterparts in J09 by $\sim$40\%. This is particularly significant in the case of Kr, which as a noble gas would generally not be expected to participate in the collective depletion behavior exhibited by many other elements. 
While J09 found that Kr does seem to show progressively stronger depletions as $F_*$ increases (i.e., $A_{\mathrm{Kr}}=-0.166\pm0.103$; J09), the significance of that result was only 1.6$\sigma$. Our analysis yields the same negative slope for Kr ($A_{\mathrm{Kr}}=-0.166\pm0.059$) but at a significance level of 2.8$\sigma$. We now turn to the issue of how the depletion properties of the elements examined in this investigation (particularly those not considered by J09) compare to the properties of the other more abundant elements that have been studied previously. In Figure 21, we plot the [$X$/H]$_0$ and [$X$/H]$_1$ values we derive (Table 8), along with the values derived by J09 for elements not considered here, against the condensation temperatures of the elements. The adopted values of $T_C$ correspond to the 50\% condensation temperatures computed by Lodders (2003) for a solar system (i.e., protosolar) composition at 10$^{-4}$ bar total pressure. While calculations like these strictly apply only in situations involving chemical equilibrium, the resulting condensation temperatures may still be useful indicators of the tendencies of different elements to form solid compounds within interstellar dust grains. In Figure 22, we present a similar plot of the depletion slopes $A_X$, derived here (Table 8) and in J09, as a function of $T_C$. (Note that our results for B, described in Appendix B, are included among the results shown in Figures 21 and 22.) \begin{figure} \centering \includegraphics[width=0.8\textwidth]{f21.eps} \caption{Element depletions as a function of the condensation temperature ($T_C$) from Lodders (2003). Orange symbols are used for the results obtained in this work; black symbols show results for additional elements examined by J09. Upper panel: Depletion at $F_*=0$ (denoted [$X$/H]$_0$ in Table 8) versus $T_C$. Lower panel: Depletion at $F_*=1$ (denoted [$X$/H]$_1$ in Table 8) versus $T_C$.} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{f22.eps} \caption{Depletion slope ($A_X$) as a function of the condensation temperature of the element. Orange symbols are used for the results obtained in this work; black symbols show results for additional elements examined by J09.} \end{figure} The [$X$/H]$_0$ values (plotted in the upper panel of Figure 21) are indicators of the depletions seen along sight lines showing the lowest levels of depletion overall (among the 243 sight lines analyzed by J09). Presumably, these sight lines contain gas at low enough densities and/or at high enough temperatures that significant grain growth (or mantling) has not yet commenced. Or, perhaps these sight lines probe regions where the grains have been disrupted by shocks, which strip the grains of any mantles they may initially have had. In either case, these initial depletions should provide a fair representation of the composition of dust grains (or, more specifically, the resilient cores of dust grains) that emerge from various stellar sources (e.g., evolved low- and high-mass stars and Type II supernovae). It can be seen from the figure that most elements with $T_C<800$~K remain undepleted at this stage (i.e., that their [$X$/H]$_0$ values are consistent with zero). However, Kr appears to be depleted below the solar system value (at the 2.6$\sigma$ level), while Sn seems overabundant (at the 3.2$\sigma$ level). 
(The initial depletion level for Pb should be considered highly uncertain since the slope of the depletion trend for this element is not very well constrained.) As for elements with $T_C>800$~K, more than half show a fairly regular trend of increasing initial depletion with increasing $T_C$. The [$X$/H]$_0$ values that we derive for Ga and Ge, for example, seem to be natural extensions of the trend seen in the initial depletions for elements from Cu to Ti. If this trend represents a dust condensation sequence, then the temperature at which the sequence seems to terminate (between 800 and 900 K) could be related to the dust formation temperature. Indeed, dust shells around AGB stars are observed to have temperatures in the range 800 to 1100~K (e.g., Gail et al.~2013). A significant number of elements with fairly high condensation temperatures show much weaker depletions at $F_*=0$ than would be expected from the above trend, however. Indeed, the initial gas-phase abundances of Cl, As, and P appear to be supersolar (at significance levels of 4.3$\sigma$, 2.4$\sigma$, and 6.0$\sigma$, respectively). In the case of Cl, at least, the high \emph{positive} value for the initial depletion most likely reflects the fact that J09 considered only Cl~{\sc ii} measurements in deriving gas-phase Cl abundances, neglecting any contributions from Cl~{\sc i}. In fact, there are a number of sight lines where $N$(Cl~{\sc i})~$>$~$N$(Cl~{\sc ii}), and these sight lines tend to have high molecular fractions due to the close correspondence between Cl~{\sc i} and H$_2$ (A.~M.~Ritchey et al., in preparation; see also Moomey et al.~2012). Since these sight lines preferentially have high values of $F_*$ as well, the depletion slope for Cl derived by J09 is most likely too steep (and the initial depletion value too high). We mentioned already that the depletion slope for As may be too steep since the survival analysis suggests a somewhat shallower trend. Most of the As depletion measurements were made along sight lines with relatively high values of $F_*$. Only one sight line with an As~{\sc ii} detection has $F_*<0.5$ (HD~104705), and that one measurement exerts considerable influence over the slope of the depletion trend. Still, the depletion slope for As does not seem to be unusual compared to those of elements with similar condensation temperatures (Figure 22). Likewise, the depletion slope for P does not seem to be unusual, falling intermediate between the slopes for Mn and Si. Moreover, the P depletion trend appears to be quite well defined, with little ambiguity in the derived parameters (see Figure 6 in J09). If the depletion slopes for As and P are not in error, then the fact that the initial abundances of these elements appear to be higher than the adopted solar reference abundances could indicate either that the solar abundances are incorrect (or are not appropriate as reference abundances), or that the oscillator strengths of the As~{\sc ii} and P~{\sc ii} transitions used to derive the interstellar abundances are not accurate. (Recall that Lodders (2003) applied a correction of +0.074 dex to the photospheric and meteoritic abundances of all elements heavier than He to arrive at her recommended solar system abundances. If such a correction had not been applied, the discrepancies with the initial interstellar abundances for As and P would be even greater.) 
Lodders (2003) adopts the average of the photospheric P abundance\footnote{The abundances discussed in this paragraph are given on the standard logarithmic scale used in stellar astronomy where the abundance is equal to log~$\epsilon$($X$)~=~log~($X$/H)~+~12.} ($5.49\pm0.04$; Berzinsh et al.~1997) and the meteoritic P abundance ($5.43\pm0.05$; Wolf \& Palme 2001) to arrive at her value of log~$\epsilon$(P)~=~$5.46\pm0.04$ for the solar photosphere. A more modern reference for the solar photospheric abundance of P gives $5.41\pm0.03$ (Scott et al.~2015), which would only exacerbate the disagreement with the initial interstellar abundance implied by the depletion trend for P. (The interstellar P abundance at $F_*=0$ from the fit presented in J09 is equivalent to log~$\epsilon$(P)~=~$5.84\pm0.05$.) The solar abundance of As, which is based solely on the meteoritic abundance since there are no usable As lines in the solar spectrum, has not changed between Lodders (2003) and Lodders et al.~(2009). The latter report a value of log~$\epsilon$(As)~=~$2.32\pm0.04$ from an average of 20 analyses. (For comparison, the interstellar abundance of As at $F_*=0$ from our least-squares fit is equivalent to log~$\epsilon$(As)~=~$2.87\pm0.20$.) If we assume that the solar As and P abundances are accurate, then the next question to ask is whether they are appropriate as reference abundances for the present-day ISM. In the case of P, at least, the answer seems to be that the solar abundance is an appropriate benchmark for local gas. Recent abundance determinations for local stars using near-infrared lines (Caffau et al.~2011) and ultraviolet lines (Roederer et al.~2014a) give [P/Fe]~$\approx$~0 and [P/S]~$\approx$~0 for stars with metallicities close to that of the Sun. The situation for As is less clear as this element has been detected in only seven metal-poor stars (Roederer 2012; Roederer et al.~2014b). The [As/Fe] ratios in these stars are generally supersolar (with a mean value of +0.28 dex; Roederer et al.~2014b) and show little evolution across the entire metallicity range probed. Still, the implications of these results for the As abundance in the present-day ISM are unclear because there are no determinations for As in stars with metallicities approaching that of the Sun. (The most metal-rich star in the samples shown in Roederer (2012) and Roederer et al.~(2014b) has [Fe/H]~$\approx$~$-$0.8.) As discussed in Section 3.1, the oscillator strength for the As~{\sc ii} transition at 1263.8~\AA{} has not been experimentally verified. Various theoretical determinations of the $f$-value range from 0.18 to 0.35 (e.g., Cardelli et al.~1993; Ganas 2000), and have a mean not very different from the value of 0.259 adopted by Morton (2000) and used in this investigation. The oscillator strength would need to be a factor of three larger than the Morton (2000) value to account for the 0.47 dex discrepancy between the initial interstellar abundance of As (implied by the least-squares fit) and the solar system abundance. The situation is somewhat more complicated for P because different investigators have used a variety of P~{\sc ii} lines (e.g., $\lambda1152$, $\lambda1301$, and $\lambda1532$) to derive interstellar P abundances. The oscillator strength of the $\lambda1152$ transition is the most secure as a result of the beam-foil measurements performed by Federman et al.~(2007).
While those experimental results were published after most studies of interstellar P~{\sc ii} abundances, the Federman et al.~(2007) $f$-value of $0.272\pm0.029$ for P~{\sc ii}~$\lambda1152$ essentially confirmed the value of 0.245 listed by Morton (2003), which was based on earlier experimental and theoretical results. Since J09 adopted the Morton (2003) $f$-values, the P abundances he uses in his analysis (at least those derived from the $\lambda1152$ line) should be secure. (Other P~{\sc ii} transitions such as $\lambda1301$ show larger discrepancies in their theoretical $f$-values. It remains to be seen whether future experiments can resolve such discrepancies.) The [$X$/H]$_1$ values (plotted in the lower panel of Figure 21) are representative of the depletions seen in relatively cold, diffuse clouds, where substantial grain growth has already occurred. The trend with condensation temperature, which is evident in the figure, is reminiscent of the cold-cloud depletion pattern seen toward $\zeta$ Oph (e.g., Savage \& Sembach 1996). (This is not at all surprising, of course, since $\zeta$ Oph was used to define the depletion scale at $F_*=1$.) Most elements with $T_C<800$~K show relatively mild depletions at $F_*=1$ ($-$0.3 dex on average), while elements with higher condensation temperatures tend to show progressively stronger depletions (culminating in a depletion of $-$3.1 dex for Ti). However, just like at $F_*=0$, there are certain elements (i.e., As, P, Si, and Mg) that exhibit depletions at $F_*=1$ that are weaker than expected given the trend seen for the other elements. The depletion slopes for Si and Mg, like those for As and P, do not appear to be unusual (although there is a considerable spread in the values of the slopes for elements with condensation temperatures near that of Fe; Figure 22). Rather, the absolute depletions of As, P, Si, and Mg seem to be offset from those of other elements at all values of $F_*$. This seems to suggest that these elements are either prevented from being incorporated into the initial grains that condense (in the outer atmospheres of late-type stars, for example), or are incorporated primarily into grain mantles that are subsequently destroyed or disrupted by shocks after the grains are deposited into the ISM.\footnote{Jones (2000) argues that sputtering due to ion-grain collisions in supernova-generated shock waves will preferentially remove Si and Mg atoms over the heavier Fe atoms. If most grains deposited into the ISM are processed by shocks, then this phenomenon would seem to explain why only about 40\% of the available Si and Mg atoms are locked up in grains in low depletion sight lines, while nearly 90\% of the Fe atoms remain in the dust phase at $F_*=0$ (see Figure 21).} In either case, the fact that the depletion slopes seem to be normal suggests that these elements participate as expected in the mantling process that presumably takes place within interstellar clouds. Joseph (1988) suggested that phosphorus may be chemically blocked from depleting in the outer atmospheres of late-type stars because it can form the stable molecule PN. In making this suggestion, Joseph (1988) cited the work of Gail \& Sedlmayr (1986), who had argued that nitrogen would be blocked from depleting because it forms N$_2$, a highly stable molecule with saturated valences that result in a high activation energy barrier for further gas-phase reactions. 
Since N and P have the same number of valence electrons, the idea was that PN would perform the same function for P as N$_2$ does for N. Upon entering the ISM, the PN molecules would be subject to the interstellar radiation field and would dissociate, allowing the P atoms to deplete along with the other refractory elements in interstellar clouds. We now have evidence that the depletion behavior of As in the ISM is similar to that of P. Both elements exhibit less depletion than expected at all values of $F_*$, but otherwise seem to behave normally, showing progressively stronger depletions as $F_*$ increases. Since As is in the same periodic group as P and N, perhaps it too is blocked from being incorporated into those initial dust grains that condense in the extended atmospheres of evolved stars. (While AsN, the As-bearing analog of N$_2$ and PN, has never been observed in interstellar or circumstellar environments, its predicted abundance would likely be below current detection limits given the low cosmic abundance of As.) The chemistry of phosphorus in circumstellar envelopes remains uncertain despite recent progress on both the observational and theoretical fronts. A number of P-bearing molecules, including PN, HCP, CP, PO, and PH$_3$, have been detected in the circumstellar shells around carbon-rich and oxygen-rich evolved stars (e.g., Tenenbaum et al.~2007; Milam et al.~2008; Tenenbaum \& Ziurys 2008). However, there is still some debate regarding which molecules are the dominant reservoirs of P in these environments. Models that assume thermochemical equilibrium typically predict that HCP will be the dominant P-bearing species in C-rich envelopes, while PH$_3$, PS, or PO will dominate in O-rich environments (e.g., Ag{\'u}ndez et al.~2007; Milam et al.~2008). However, such models have struggled to explain the high abundance of PN observed in O-rich circumstellar envelopes (e.g., Milam et al.~2008; De Beck et al.~2013). De Beck et al.~(2013) find that the abundance of PN in the circumstellar shell of the O-rich AGB star IK Tau (PN/H$_2$~$\approx$~$3\times10^{-7}$) is comparable to that of PO, and conclude that these two species are the main gas-phase reservoirs of P in the circumstellar envelopes of O-rich stars. Gobrecht et al.~(2016) modelled the dust formation process in IK Tau, paying particular attention to the effects of non-equilibrium chemistry induced by periodic shocks related to stellar pulsations. They found that PN is formed very efficiently in the post-shock gas and maintains its high abundance throughout the dust formation region. Since their predicted abundance for PN matches the observed abundance (within a factor of two), and since the observed abundance is comparable to the solar P abundance, their results seem to validate the idea that P does not participate in the dust formation process because it is locked up in stable gas-phase molecules. (However, even if this idea turns out to be correct, and assuming it applies to As as well, it would not explain the apparent supersolar abundances of As and P in low depletion sight lines, a problem for which there is no obvious solution at present.) While the [$X$/H]$_0$ and [$X$/H]$_1$ values are sensitive to the adopted reference abundances and to the oscillator strengths used to derive column densities from the interstellar absorption lines, the $A_X$ parameters do not depend on these factors. 
They reflect the changes in depletion between $F_*=0$ and $F_*=1$, and so depend only on the \emph{relative} abundances obtained for different sight lines. From a physical perspective, the depletion slopes are related to the rates at which different elements are incorporated into grain mantles under changing environmental conditions in interstellar clouds. As discussed in more detail in J09, the consumption rate for a given element depends on both the depletion slope and the gas-phase abundance of the element at a particular value of $F_*$ (see also Jenkins 2013). From Figure 22, it is clear that the $A_X$ parameters are correlated to some extent with the condensation temperatures of the elements. Elements with higher condensation temperatures tend to have steeper slopes, and therefore tend to be incorporated more readily into the grains, than elements with lower values of $T_C$. However, while this is generally true, there are a handful of elements that appear to exhibit unusual slopes compared to elements with similar condensation temperatures. Both B and Cl exhibit much steeper depletion slopes than would be expected for elements with $T_C\sim900$~K. We have already discussed a likely explanation for this in the case of Cl (namely, that the J09 analysis neglected to include Cl~{\sc i} column densities when deriving gas-phase Cl abundances). However, the unusually steep slope of the B depletion trend remains unexplained. The B~{\sc ii} line at 1362.5~\AA{} is detected in a variety of different sight lines (R11), and the depletion slope for this element appears to be well constrained (see Appendix B). It is particularly unusual that the slope for B is so much steeper than that for Ga, which is chemically similar to B yet has a somewhat higher condensation temperature (Lodders 2003). It could be that the condensation temperature for B is larger than that calculated by Lodders (2003), but then the initial depletion of B would appear to be at odds with the trend seen for many other elements in the upper panel of Figure 21. (The nominal slopes derived for S and Pb also appear to be too steep in Figure 22. However, the slopes for these elements are not very well constrained and as a result have large associated uncertainties.) At the other extreme, the depletion slope for Cd appears to be much shallower than expected compared to other elements with $T_C\sim700$~K. Indeed, the value we obtain for the slope ($A_{\mathrm{Cd}}=-0.028\pm0.221$) is consistent with zero, suggesting that Cd follows N in exhibiting no differential depletion in interstellar clouds. This conclusion is consistent with that of Sofia et al.~(1999), who found no changes in Cd depletion with increasing molecular fraction. Many trace elements condense by forming solid solutions with host phases composed of more abundant elements. According to Lodders (2003), the host phases for Cd condensation are enstatite (MgSiO$_3$) and troilite (FeS), with Cd replacing Mg and Fe in these compounds. Enstatite, along with forsterite (Mg$_2$SiO$_4$), is one of the host phases for Zn, and troilite is responsible for removing S from the gas phase (Lodders 2003). 
Since both S and Zn seem to show differential depletion (J09), it is not clear why Cd would not participate in this process.\footnote{In actuality, there are still considerable uncertainties regarding the depletion behavior of S in the diffuse ISM, primarily because the only available S~{\sc ii} lines are in most cases moderately, if not heavily, saturated, meaning that there are relatively few reliable abundance determinations and the measurements that do exist may be biased toward lower gas-phase abundances (see the discussion in J09).} Another unusual result is that Kr exhibits a \emph{nonzero} depletion slope, contrary to expectations based on most earlier studies (e.g., Cardelli \& Meyer 1997; Cartledge et al.~2008) that found no changes in the mean gas-phase Kr abundance when the abundances were plotted against the molecular hydrogen fractions or the average line-of-sight hydrogen densities. We find the same negative value for $A_{\mathrm{Kr}}$ as J09 found. However, by re-examining some of the Kr measurements from the literature and by adding new measurements, we have reduced the uncertainty associated with that value by $\sim$40\%, strengthening the case that Kr participates in the depletion process occurring in interstellar clouds. Lodders (2003) argues that heavy noble gases like Kr can be sequestered by the formation of clathrate hydrates (e.g., Kr$\cdot$6H$_2$O), and that 50\% of the available Kr will be sequestered in this way at a temperature of 52~K. While this process may account for the change in Kr depletion between $F_*=0$ and $F_*=1$, it probably cannot explain the significant offset between the interstellar abundances and the solar system abundance, which even at $F_*=0$ is 0.23 dex. In low depletion sight lines, where $F_*$ approaches zero, the gas temperatures are likely not cold enough for significant Kr condensation in clathrate compounds. By examining the depletion behaviors of rarely-studied elements, and evaluating the results alongside those for more abundant elements, we gain a deeper understanding of the processes that govern the gas-phase abundances of the elements in diffuse clouds. While many questions remain concerning the dust condensation and dust destruction processes, the results presented here, when combined with those in J09, should help to constrain future modelling efforts that seek to understand these processes. In particular, the depletion slopes, which are related to the rates at which the elements are incorporated into grain mantles, can serve as a guide to the changes that occur in the composition of the grains between low and high depletion sight lines. While the rare elements considered in this investigation contribute little to the total mass of interstellar dust grains, the depletion patterns they exhibit can still provide constraints on dust compositions. Typically, for trace elements to condense into solid compounds, certain host minerals (composed of major elements) must already be present in the grains (Lodders 2003). Thus, a detailed accounting of the depletions of many different elements can be useful for determining the most likely grain compositions (e.g., Savage \& Sembach 1996; Draine 2004). The initial depletions, represented by the [$X$/H]$_0$ values, should be of particular interest to those who model the dust formation processes in AGB stars and Type II supernovae (e.g., Sarangi \& Cherchneff 2015; Gobrecht et al.~2016). 
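As a practical aside, the depletion parameters discussed throughout this section derive from weighted linear fits of the observed depletions against $F_*$. The sketch below illustrates such a fit for the two-parameter form of J09 (Equation (1)), using synthetic data; the actual fits reported in Table 8 also propagate the uncertainties in $F_*$ (see Appendix B of J09), so this should be read as a simplified stand-in rather than the full procedure.
\begin{verbatim}
import numpy as np

# Simplified sketch of a weighted least-squares fit of the depletion law
# [X/H] = B_X + A_X * (F* - z_X) to synthetic data. The published fits
# also propagate the errors in F* (see Appendix B of J09).
rng = np.random.default_rng(0)
F = rng.uniform(0.0, 1.0, 40)        # sight-line depletion factors F*
sig = np.full(F.size, 0.08)          # measurement errors (dex)
zX = 0.5                             # reference point (chosen in J09 to
                                     # decorrelate the A_X and B_X errors)
y = -0.3 - 0.6 * (F - zX) + rng.normal(0.0, sig)

M = np.column_stack([np.ones_like(F), F - zX])  # columns: 1, (F* - z_X)
w = 1.0 / sig**2                                # statistical weights
cov = np.linalg.inv(M.T @ (w[:, None] * M))     # parameter covariance
B_X, A_X = cov @ (M.T @ (w * y))
sig_A = np.sqrt(cov[1, 1])
print(f"B_X = {B_X:.3f}; A_X = {A_X:.3f} +/- {sig_A:.3f}"
      f" ({abs(A_X)/sig_A:.1f} sigma)")
\end{verbatim}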
\subsection{Implications for $s$- and $r$-Process Nucleosynthesis in the Present Epoch} Having evaluated the depletion characteristics of Ga, Ge, As, Kr, Cd, Sn, and Pb, and having considered how the results compare to those of other more abundant elements, we are in a position to address claims regarding enhancements and deficiencies in the abundances of $n$-capture elements in the ISM. Our investigation was premised on the suggestion (e.g., Walker et al.~2009) that heavy elements synthesized primarily by massive stars are underabundant in the present-day ISM relative to the solar system, while those produced mainly by low- and intermediate-mass stars are not deficient, and may even show supersolar abundances (e.g., Sofia et al.~1999). Walker et al.~(2009) based their suggestion on the relatively limited data available at the time concerning the interstellar abundances of $n$-capture elements. While Ge, Kr, Rb, Cd, and Sn had been studied in a number of different sight lines, much less was known about the abundances of interstellar Ga, As, and Pb. This was also before J09 published his study, which considered the depletions of many different elements (including Ge and Kr) within a unified framework. Having extended that framework in this investigation to include Ga, As, Cd, Sn, and Pb, we are better positioned to critically evaluate the original suggestion by Walker et al.~(2009) of a deficit in the contribution from massive stars to the production of $n$-capture elements. First, we should establish the dominant production routes for the elements of interest. Using an updated Galactic chemical evolution (GCE) model, which considers $s$-process yields from AGB stars of low and intermediate mass ($M=1.3$$-$$8M_{\sun}$), Bisterzo et al.~(2014) predict the main $s$ contributions to the solar abundances of the elements from Kr to Bi. For the elements relevant to our investigation, the predicted main $s$ contributions are 13.9\% for Kr, 18.1\% for Rb, 46.1\% for Cd, 52.5\% for Sn, and 87.2\% for Pb. This prediction for Pb includes, in addition to the main $s$-process, a major contribution to the synthesis of $^{208}$Pb by the so-called strong $s$-process in low-mass stars of low metallicity (Gallino et al.~1998). While Bisterzo et al.~(2014) do not give $s$-process predictions for Ga, Ge, or As, an earlier study (not based on GCE) does (Bisterzo et al.~2011). Following Arlandini et al.~(1999), Bisterzo et al.~(2011) derive main $s$ contributions to the solar abundances by averaging the yields of AGB models with $M=1.5$ and $3M_{\sun}$ at $Z=0.5Z_{\sun}$. They find main $s$ contributions of 4.4\% for Ga, 7.1\% for Ge, and 6.2\% for As. The remaining fractions for all of these elements are presumably contributed by massive stars through a combination of the weak $s$-process and the $r$-process. If it were true that heavy elements produced primarily by massive stars were underabundant in the ISM relative to the solar system, then we would expect this to apply to the elements Kr, Rb, Ga, Ge, and As, which have massive star contributions in the range 82\% to 96\%. We would not expect the elements Cd, Sn, or Pb to be as affected since massive stars contribute much less to their production. From the depletion results presented in Section 4.2, and discussed above (Section 5.1), it is clear that Kr is underabundant in the ISM at all values of $F_*$, a conclusion consistent with the long-established deficit in the interstellar Kr abundance (e.g., Cardelli \& Meyer 1997; Cartledge et al.~2008). 
(Naturally, this conclusion regarding Kr rests on the assumption that the solar Kr abundance, which relies on theoretical $s$-process production rates, is accurate.) While the trapping of Kr atoms within clathrate compounds may explain the slight increase in the depletion of Kr with $F_*$, it probably cannot account for the overall deficit in the interstellar abundance since at least some sight lines should be characterized by temperatures that are too high for significant Kr condensation. Walker et al.~(2009) found evidence that the interstellar abundance of Rb is also lower than expected. That conclusion was based on their finding of subsolar Rb/K ratios since the dominant ionization stage of Rb is not observed and hence the total gas-phase Rb abundance is not known directly. For the other three elements produced primarily by massive stars (Ga, Ge, and As) evidence of unusual abundance deficiencies relative to the solar system is less forthcoming. Both Ga and Ge are depleted in the ISM, even in low depletion sight lines; at $F_*=0$, Ga is depleted by 0.43 dex and Ge by 0.22 dex (Table 8). However, these initial depletions seem to follow the general trend in [$X$/H]$_0$ values exhibited by many other elements (i.e., Cu, Mn, Cr, Fe, Ni, and Ti; see Figure 21), a trend which may in fact correspond to a condensation sequence. Arsenic shows no sign of being deficient in the ISM, and may instead be enhanced in its abundance relative to the solar system. Recall that both As and P exhibit much less depletion at all values of $F_*$ compared to elements with similar condensation temperatures, and that, at $F_*=0$, the gas-phase As and P abundances appear to be supersolar (Section 5.1 and Figure 21). (These conclusions necessarily assume that the least-squares fits for As and P are truly representative and that the $f$-values of the relevant transitions are accurate.) Since Ga and Ge seem to exhibit normal depletion patterns, and As seems to be enhanced rather than deficient, we conclude that a simple dichotomy between the production of heavy elements by low-mass and high-mass stars cannot account for the unexpectedly low interstellar abundances of Kr and Rb. While a comparison between the absolute gas-phase abundances of $n$-capture elements and their expected depletions can yield insight into the possible effects of $s$- and $r$-process nucleosynthesis, another way to explore those effects is to search for variations in the abundances among different lines of sight that are unrelated to differences in depletion. Having delineated the general trends due to the changes in depletion with $F_*$, we can search for any significant scatter in the abundances superimposed onto those trends. Such scatter could be an indication of nucleosynthetic enrichment and/or inefficient mixing in the ISM, for example. To investigate these possibilities, we take the least-squares linear fits (described in Section 4.2) as a basis, and calculate the root mean square (rms) deviations about those fits for all of the elements in our survey. Following J09, we define a residual equal to [$X$/H]$_{\mathrm{obs}}$~$-$~[$X$/H]$_{\mathrm{fit}}$, where [$X$/H]$_{\mathrm{obs}}$ is the observed depletion of element $X$ and [$X$/H]$_{\mathrm{fit}}$ is the depletion expected based on the linear fit defined by Equation (1). 
The errors in the residuals are calculated by adding in quadrature the errors in [$X$/H]$_{\mathrm{obs}}$ and [$X$/H]$_{\mathrm{fit}}$, where the latter account for errors in the best-fit coefficients $A_X$ and $B_X$ and the sight-line depletion factors $F_*$ (see Appendix B of J09). The rms deviations in the depletion residuals with respect to the least-squares linear fits are presented in Table 10 for the elements O, Ga, Ge, As, Kr, Cd, Sn, and Pb. (Similar information is presented in Appendix B for the element B.) We also list separately in Table 10 the mean errors in the observed depletions [$X$/H]$_{\mathrm{obs}}$, in the expected depletions [$X$/H]$_{\mathrm{fit}}$, and in the residuals themselves, so that one can easily judge whether the uncertainties in the residuals are dominated by observational errors, uncertain fit parameters, or some combination of both. As an example, we can see that for the elements O and Kr, the fit parameters are well determined and the errors in the residuals are dominated by the observational uncertainties associated with the column density measurements. Only in the case of Pb is the mean error in the expected depletions larger than that in the observed depletions. This reflects the fact that the fit parameters for Pb are poorly constrained, a consequence of the Pb depletion measurements being clustered around only relatively high values of $F_*$. For most of the elements examined here, we find that the scatter in the abundances with respect to the depletion trends is nearly equal to, if not somewhat less than, the mean error in the residuals, when one accounts for both the observational errors and the uncertainties in the fit parameters (Table 10). This is not the case for Sn, however, where the rms deviations about the least-squares fit are approximately 0.18 dex, while the mean error in the residuals is only 0.12 dex. This result is consistent with our having found an extremely low probability of obtaining a worse fit for Sn (as indicated in the last column of Table 8). While the scatter in the Pb abundances (which is approximately 0.21 dex) is even greater than the scatter for Sn, the mean error in the residuals for Pb is also much greater (due to the poorly constrained fit and the relatively large observational uncertainties). None of these conclusions regarding the scatter in the depletion data would be significantly altered were the survival analysis fits (described in Section 4.3) adopted as the basis for the residuals rather than the least-squares fits. Another way to examine the scatter in the measured depletions with respect to the depletion trends is to search for significant deviations in the residuals for individual lines of sight. If we divide the depletion residuals by their respective errors, we can identify sight lines whose measured depletions deviate from expected values by some specified factor. Applying this procedure to the O depletion data, for example, we find that in only 5 out of 91 sight lines ($\sim$5\%) do the measured depletions deviate from the expected ones by 2$\sigma$ or more (using the least-squares fit as the basis). Similar results are obtained for many of the $n$-capture elements. We find that 2$\sigma$ or greater deviations occur in only 3 out of the 43 sight lines with Ga depletion measurements, 4 out of the 66 sight lines with Ge depletions, and 1 out of the 12 sight lines with depletion measurements for Pb. For As, Kr, and Cd, there are no sight lines that deviate from the depletion trends by 2$\sigma$ or more.
In the case of Sn, however, we find that 9 out of 37 sight lines ($\sim$24\%) exhibit depletions that deviate from expectations (based on the least-squares fit) by more than 2$\sigma$. While differences in ionization along different lines of sight could potentially lead to scatter in the abundances (if the ionization potential of the ion is significantly greater than that of H~{\sc i}), the ionization potential of Sn~{\sc ii} (14.63 eV) is among the lowest of the $n$-capture species examined here. It thus remains an intriguing possibility that the significant scatter in the gas-phase Sn abundances has a nucleosynthetic origin. More than half of the solar system abundance of Sn is contributed by the main $s$-process occurring in low- and intermediate-mass AGB stars (Bisterzo et al.~2011, 2014). The long evolutionary lifetimes of such stars mean that their contributions tend to increase with time relative to those from massive stars. As a result, some evolution in the interstellar abundance of Sn could conceivably have taken place in the 4.6 Gyr since the formation of the solar system (Sofia et al.~1999). If the enrichment process is relatively localized and/or if the timescale for enrichment is significantly shorter than the timescale on which the interstellar gas becomes well mixed, then the variations we find in the gas-phase Sn abundances might be expected. However, if $s$-process enrichment combined with inefficient mixing is the cause of the observed variations for Sn, then similar variations might be expected for Cd and Pb, both of which, like Sn, owe much of their production to the main $s$-process. There are significantly fewer interstellar detections of Cd~{\sc ii} and Pb~{\sc ii} than there are of Sn~{\sc ii}. However, the gas-phase Cd abundances are remarkably consistent from one line of sight to another. The Pb abundances seem to show a high degree of scatter (despite the fact that most sight lines with Pb~{\sc ii} detections have similar values of $F_*$), but that scatter is matched by large uncertainties in the depletion residuals (Table 10). Support for our finding that the local Galactic ISM shows evidence of enrichment due to $s$-process nucleosynthesis is provided by abundance determinations of $n$-capture elements in Galactic planetary nebulae (Sterling \& Dinerstein 2008; Sterling et al.~2015, 2016). Planetary nebulae are particularly interesting in this context as they can be used to study the products of the final stages of AGB star evolution. Many planetary nebulae are found to be enriched in $n$-capture elements produced by the main $s$-process during the thermally pulsing AGB stage of their progenitor stars (Sterling \& Dinerstein 2008; Sterling et al.~2015). Sterling et al.~(2015) find that nearly 40\% of planetary nebulae with determined Se and/or Kr abundances are $s$-process enriched, with the degree of enrichment varying significantly from one object to the next. Other heavy elements found to be enhanced in planetary nebulae include Rb, Cd, and Xe (e.g., Sterling et al.~2016). While Sn has yet to be detected in such objects, its abundance in the envelopes of low-mass stars at the end of the AGB phase is predicted to be enhanced considerably (e.g., Cristallo et al.~2015). The substantial $s$-process enhancements that have already been detected in planetary nebulae indicate that enrichment of the ISM in the products of $s$-process nucleosynthesis must be occurring at some level. 
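The census of outliers described above amounts to a simple operation on the depletion residuals. A minimal sketch is given below, with placeholder arrays standing in for the actual measurements behind Table 10.
\begin{verbatim}
import numpy as np

# Minimal sketch of the residual analysis: residuals about the fit,
# errors combined in quadrature, rms scatter, and a census of >= 2 sigma
# deviations. The arrays are placeholders for the real measurements.
rng = np.random.default_rng(1)
dep_obs = rng.normal(-1.0, 0.18, 37)   # observed depletions [X/H]_obs
dep_fit = np.full(37, -1.0)            # expected depletions [X/H]_fit
err_obs = np.full(37, 0.10)            # errors in [X/H]_obs
err_fit = np.full(37, 0.06)            # errors in [X/H]_fit

resid = dep_obs - dep_fit
err_resid = np.hypot(err_obs, err_fit)       # quadrature sum
rms = np.sqrt(np.mean(resid**2))
n_dev = int(np.sum(np.abs(resid / err_resid) >= 2.0))
print(f"rms = {rms:.2f} dex; {n_dev}/37 sight lines deviate by >= 2 sigma")
\end{verbatim}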
\section{SUMMARY AND CONCLUSIONS} In this investigation, we have examined the gas-phase abundances and depletion behaviors of the $n$-capture elements Ga, Ge, As, Kr, Cd, Sn, and Pb, which are the only elements heavier than Zn that have been detected (in their dominant ionization stage) in multiple interstellar sight lines. Our study was motivated by previous findings that were suggestive of unusual deficiencies and potential enhancements in the interstellar abundances of elements produced through $s$- and $r$-process nucleosynthesis. We carried out our survey by analyzing the interstellar absorption profiles for a sample of 69 sight lines with high- and/or medium-resolution STIS archival spectra covering the species of interest. Column densities (or upper limits to the column densities) of Ga~{\sc ii}, Ge~{\sc ii}, Kr~{\sc i}, As~{\sc ii}, Cd~{\sc ii}, Sn~{\sc ii}, and Pb~{\sc ii} were derived for our sample through detailed profile synthesis fits, often using the O~{\sc i}~$\lambda1355$ line as a template for the line-of-sight component structure. We note that our survey has produced the first reported detections of the Pb~{\sc ii}~$\lambda1203$ transition in individual sight lines (after the discovery of this feature in a composite spectrum by Heidarian et al.~2015). Column density measurements for an additional 59 sight lines with abundance determinations in the literature, from STIS and GHRS observations, were included in our analysis (as described in Appendix A). Having collected as much information as possible on the gas-phase abundances of $n$-capture elements, either through our own analysis of archival STIS spectra or by adopting measurements from the literature, we determined depletion parameters for each element following the methodology of J09, who developed a unified framework for describing the depletion behaviors of 17 different elements (including O, Ge, and Kr). Our work extends the analysis of J09 to B, Ga, As, Cd, Sn, and Pb, and provides updated results for O, Ge, and Kr. The two key parameters needed to characterize the depletion trends for different elements are the initial depletions [$X$/H]$_0$ and the depletion slopes $A_X$. Values of these parameters, derived through least-squares linear fits, are provided in Table 8 for O, Ga, Ge, As, Kr, Cd, Sn, and Pb (and in Appendix B for B). For the hard-to-detect elements As and Pb, there are more nondetections than actual measurements, and the detections tend to be clustered at relatively high values of the sight-line depletion factor $F_*$. This leads to depletion parameters that are poorly constrained. To address this situation, we performed a survival analysis on the gas-phase abundance data for each element, the results of which are presented in Table 9. While the survival analysis regression fits consider both the actual measurements and any upper limits, they do not account for errors in the independent and dependent variables like the least-squares fits do. Nevertheless, the survival analysis regressions suggest that the depletion slopes for As and Pb may not be as steep as indicated by the least-squares fits. More precise measurements are needed, particularly in low depletion sight lines, to better constrain the depletion slopes in these cases. The derived depletion parameters for the elements studied in this investigation were compared with those for elements considered by J09 and trends were sought between the ensemble of depletion results and the condensation temperatures of the elements. 
These efforts have provided us with a better understanding of the depletion behaviors of elements with low-to-moderate condensation temperatures, specifically those with $T_C$ between 600~K and 1100~K, where the transition from relatively mild to relatively strong depletions is seen to occur. By examining the depletion properties of many different elements at once, we are better able to identify peculiarities in specific cases. For the $n$-capture elements that were the focus of our investigation, such peculiarities could have implications not only for the dust-grain depletion processes but also for the nucleosynthetic processes responsible for the production of heavy elements in the current epoch of Galactic evolution. We summarize our main findings for specific elements below: \begin{enumerate} \item The two most readily detectable $n$-capture elements, Ga and Ge, exhibit normal depletion patterns, meaning that their initial depletions and depletion slopes are in line with expectations based on the depletion results for many other elements with a range of condensation temperatures. Indeed, there seems to be a fairly well-defined trend of increasing initial depletion with increasing $T_C$ for the elements Ge, Ga, Cu, Mn, Cr, Fe, Ni, and Ti. This could represent a condensation sequence resulting from dust formation in the outer atmospheres of late-type stars or supernovae. (Note, however, that B, Cl, As, P, Si, and Mg do not seem to follow this trend.) \item Arsenic appears to exhibit less depletion than expected at all values of $F_*$, a trait it shares with the chemically-similar element P. Moreover, the gas-phase abundances of As and P appear to be supersolar in low-depletion sight lines (if the slope indicated by the least-squares fit for As is accurate). While As and P may be chemically blocked from depleting in the circumstellar envelopes of evolved stars, because these elements can form stable molecules with N, such a phenomenon would not explain their enhanced abundances at $F_*=0$. In the case of As, it would be helpful if the oscillator strength of the As~{\sc ii} transition at 1263.8 \AA{} were experimentally verified. \item Our analysis of the depletion trend for Kr strengthens the finding of J09 that this noble gas participates in the collective depletion behavior exhibited by most other elements. We find the same negative value for the slope parameter $A_{\mathrm{Kr}}$ as J09 did, yet our analysis has reduced the error in this quantity by $\sim$40\%. The small increase in Kr depletion with $F_*$ may be due to the sequestering of Kr atoms within clathrate hydrates. However, it seems unlikely that such a process could account for the significant offset between the interstellar Kr abundances and the solar abundance since the temperature required for Kr condensation is well below that which characterizes an average interstellar sight line. \item Among the elements not previously examined by J09, only Cd shows no evidence of differential depletion, making it only the second element (after N) to exhibit that trait. Since the host minerals necessary for Cd condensation appear to be the same as those responsible for removing S and Zn from the gas phase, and since both of the latter elements do seem to exhibit differential depletion, it is not clear why Cd would not also participate in that process. It may be that the interstellar measurements are not precise enough to detect the small changes in Cd depletion that would generally be expected. 
\item The interstellar gas-phase abundances of Sn and Pb show a large amount of scatter at any given value of $F_*$. In the case of Pb, this may simply reflect the relatively large observational uncertainties. For Sn, however, the scatter appears to be real and may be evidence of intrinsic variations in the abundance of Sn from one location to another. Such variations could conceivably result from $s$-process enrichment by low- and intermediate-mass AGB stars if the enrichment timescale is sufficiently short compared to the timescale on which the ISM becomes well mixed. \end{enumerate} Significant progress has been made in understanding the interstellar abundances of some heavy elements. For the most part, that progress was made possible by the ability of \emph{HST} to acquire high-resolution UV spectra at relatively high signal to noise for targets probing the local part of the Milky Way Galaxy. Further advances, however, will likely require use of a next-generation UV space telescope with a significantly larger aperture compared to \emph{HST}. A large aperture UV space telescope would yield more detections of rare species in low density sight lines (which could help to better define the depletion behaviors of As and Pb, for example). Such an instrument might also yield the first detections of as-yet-undetected elements (particularly those with low intrinsic abundances but low condensation temperatures, like Xe and Hg, or those with somewhat higher abundances but high condensation temperatures, like Sr and Ba). Finally, a large aperture space telescope with UV and optical capabilities would enable a broader study of the abundances of $n$-capture elements in the ISM of nearby galaxies and high-redshift absorption systems. \acknowledgments This research has made use of the SIMBAD database operated at CDS, Strasbourg, France. A.M.R.~acknowledges support from the Kenilworth Fund of the New York Community Trust. D.L.L.~thanks the Robert A.~Welch Foundation of Houston, TX, for support through grant F-634. Additional support for this work was provided by the Space Telescope Science Institute through grant HST-AR-12123.001-A. Observations were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. \facility{\emph{HST} (STIS)} \software{ISMOD (see Sheffer et al.~2008), STSDAS}
\section{Introduction} \label{sect:introduction} European options can be exercised only at expiry while American options can be exercised at any time until expiry. Due to this additional flexibility, American options can be more valuable. In order to avoid arbitrage, the price of an American option must always be at least as large as the payoff function. A put option gives the right to sell the underlying asset for a specified strike price while a call option gives the right to buy the asset for a strike price. The seminal paper \cite{Black73} by Black and Scholes employs a geometric Brownian motion with a constant volatility as a model for the price of the underlying asset. The market prices of options show that the volatility varies depending on the strike price and expiry of the option. Several more general models for the asset price have been developed which are more consistent with market prices. Merton proposed adding log-normally distributed jumps to this model \cite{Merton76}. Heston \cite{Heston93} modeled the volatility as a mean-reverting stochastic process. Bates \cite{Bates96} combined the Heston stochastic volatility model and the Merton jump-diffusion model. There are many methods for pricing options. The Monte Carlo method simulates asset price paths to compute the option price. This is an intuitive and flexible method, but it can be slow when high precision is required, and it is more complicated and less efficient for American options. Another approach is based on numerical integration techniques. In this paper, the pricing is instead based on partial(-integro) differential equation (P(I)DE) formulations. One benefit of these formulations is that for many options they can provide a highly accurate price much faster than the Monte Carlo method. Here the European options are priced by solving a P(I)DE and the American options by solving a linear complementarity problem (LCP) with the same operator. These operators are two-dimensional under a stochastic volatility model and one-dimensional otherwise. The integral part of the operator, when present, results from the jumps. The most common way to discretize the differential operators is the finite difference method. For European and American options the discretization leads to a system of linear equations and an LCP, respectively, at each time step. Under stochastic volatility models, efficient PDE-based methods for American options have been considered in \cite{Clarke99,Haentjens15,Ikonen08,Zvan98}, for example. A penalty approximation is employed for the resulting LCPs in \cite{Zvan98} and an operator splitting method in \cite{Haentjens15,Ikonen08}. An alternating direction implicit (ADI) method is used in \cite{Haentjens15} while iterative methods are used for the resulting linear systems in \cite{Clarke99,Ikonen08,Zvan98}. Under jump-diffusion models, PIDE methods lead to a system with a full matrix at each time step, and their efficient solution for American options has been considered in \cite{dHalluin04,Kwon11a,Salmi11,Salmi12,Salmi14a}, for example. A penalty method together with an FFT-based fast method for evaluating the jump integral was used in \cite{dHalluin04}. An iterative method was proposed for LCPs with full matrices in \cite{Salmi11}. An implicit-explicit (IMEX) method was proposed in \cite{Kwon11a} to treat the integral term explicitly, and the same approach was studied in \cite{Salmi14a}. Generalizations of the above methods for the combined Bates model have been developed and studied in \cite{Toivanen10,Ballestra10,Salmi13,Salmi14b,vonSydow15}.
Unfortunately, such high-fidelity simulations are still too expensive for many practical applications, and reduced order modeling (ROM) is a promising tool for significantly alleviating computational costs~\cite{antoulas2005,benner2015}. Most existing ROM approaches are based on projection. In projection-based reduced order modeling, the state variables are approximated in a low-dimensional subspace. Bases for this subspace are typically constructed by Proper Orthogonal Decomposition (POD)~\cite{sirovich87} of a set of high-fidelity solution snapshots. While many approaches have already been developed for the efficient reduction of linear computational models, three main strategies have been explored so far for efficiently reducing nonlinear computational models. The first one is based on linearization techniques~\cite{Rewienski_LAA_2006,Gu_IEEE_2008}. The second one is based on the notion of precomputations~\cite{Barbic_ACM_2005,Balajewicz_JFM_2013, Balajewicz_JCP_2016,Balajewicz_ND_2012,Cordier_EF_2013}, but is limited to polynomial nonlinearities. The third strategy relies on the concept of hyper-reduction --- that is, the approximation of the reduced operators underlying a nonlinear reduced-order model (ROM) by a scalable numerical technique based on a reduced computational domain~\cite{Ryckelynck_JCP_2005,An_TOG_2008,chaturantabut2010, Carlberg_IJNME_2011,Amsallem_IJNME_2012,Farhat_IJNME_2014,Amsallem_SMO_2014,Farhat_IJNME_2015}. In the case where the governing equations include a constraint equation, it is often beneficial to construct a basis that satisfies these constraints a priori~\cite{Burkovska15}. For example, in the case of non-negativity constraints, a non-negative basis can be constructed via non-negative matrix factorization (NNMF)~\cite{Balajewicz15}. This approach was employed for option pricing in \cite{Balajewicz16}. For pricing European options, ROMs have been developed in \cite{Cont11,Sachs13}. Only recently have ROMs been applied to pricing American options \cite{Burkovska15,Haasdonk13}. A common problem associated with option pricing is the calibration of model parameters to the market prices of options. This is typically formulated as a least-squares-type optimization problem. The calibration is computationally expensive as it requires pricing a large number of options with varying parameters. The use of ROMs to reduce this computational cost has been studied in \cite{Pironneau09,Sachs10,Sachs14}. The main contribution of the present work is the development of a cheap and accurate hyper-reduction approach for the early exercise constraint of American options. Our proposed approach is based on the fact that accurate price predictions do not necessarily require accurate approximations of the Lagrange multipliers. This has been observed in practice for the reduction of structural contact problems~\cite{Balajewicz15}. Our numerical experiments summarized in this paper suggest that using the binary matrix as the basis for the Lagrange multipliers performs remarkably well for all reproductive and predictive simulations considered. This approach is simpler, faster, and comparable in accuracy to previous approaches based on the NNMF~\cite{Balajewicz16}. This paper is organized as follows. In Section~\ref{sect:FOMs}, the full order models considered in this work are overviewed. In Section~\ref{sect:ROMs}, the proposed new ROM approach is laid out. In Section~\ref{sect:Numerics}, the proposed approach is applied to several problems.
Finally, in Section~\ref{sec:conclusions}, conclusions are offered and prospects for future work are summarized. \section{Full Order Models} \label{sect:FOMs} Merton \cite{Merton76} proposed that the price $s \ge 0$ of an underlying asset follows the stochastic differential equation \begin{equation} ds = (g - \mu \xi) s dt + \sigma_s s dw_s + s dJ, \end{equation} where $t$ is the time, $g$ is the growth rate of the asset price, $\sigma_s$ is its volatility, $w_s$ is a Wiener process, and $J$ is a compound Poisson process with the jump intensity $\mu$ and the log-normal jump distribution \begin{equation} p(y) = \frac{1}{y \delta \sqrt{2\pi}} \exp \left( - \frac{(\log y - \gamma)^2}{2 \delta^2} \right). \end{equation} The relative expected jump is $\xi = \exp \left( \gamma + \tfrac{1}{2} \delta^2 \right) - 1$. The Black--Scholes model is obtained by setting the jump intensity $\mu$ to zero. Under the Merton model the price $u(s,\tau)$ of a European option can be obtained by solving the one-dimensional PIDE \begin{equation}\label{Merton} \frac{\partial u}{\partial \tau} = \frac{1}{2} \sigma_s^2 s^2 \frac{\partial^2 u}{\partial s^2} + (r - \mu \xi) s \frac{\partial u}{\partial s} - (r + \mu) u + \mu \int_{0}^\infty u(sy,\tau) p(y) dy =: L^M u, \end{equation} where $\tau = T - t$ is the time until expiry, $T$ is the expiry time, and $r$ is the interest rate. Bates \cite{Bates96} proposed that the price $s$ and its instantaneous variance $v \ge 0$ follow the stochastic differential equations \begin{equation} \begin{split} ds &= (g - \mu \xi) s dt + \sqrt{v} s dw_s + s dJ \\ dv &= \kappa (\theta - v) dt + \sigma_v \sqrt{v} dw_v, \\ \end{split} \end{equation} where $\theta$ is the mean level of $v$, $\kappa$ is the rate of mean reversion, $\sigma_v$ is the volatility of $\sqrt{v}$, and $w_v$ is a Wiener process. The Wiener processes $w_s$ and $w_v$ have the correlation $\rho$. Under the Bates model the price $u(s,v,\tau)$ of a European option can be obtained by solving the two-dimensional PIDE \begin{equation}\label{Bates} \begin{split} \frac{\partial u}{\partial \tau} &{} = \frac{1}{2} v s^2 \frac{\partial^2 u}{\partial s^2} + \rho \sigma_v v s \frac{\partial^2 u}{\partial s \partial v} + \frac{1}{2} \sigma_v^2 v \frac{\partial^2 u}{\partial v^2} + (r - \mu \xi) s \frac{\partial u}{\partial s} + \kappa ( \theta - v ) \frac{\partial u}{\partial v} \\ &{} - (r + \mu) u + \mu \int_{0}^\infty u(sy,v,\tau) p(y) dy =: L^B u. \end{split} \end{equation} The Heston model is obtained by setting the jump intensity $\mu$ to zero. In the following, put options are considered. Their price at the expiry is given by the pay-off function $g(s) = \max\{K - s,\, 0\}$, where $K$ is the strike price. As the equations are solved backward in time, this leads to the initial condition \begin{equation}\label{initial} u(s,0) = g(s) \qquad\text{and}\qquad u(s,v,0) = g(s) \end{equation} for one-dimensional and two-dimensional models, respectively. For computing an approximate solution, the infinite domain is truncated at $s = s_{\max}$ and $v = v_{\max}$, where $s_{\max}$ and $v_{\max}$ are sufficiently large so that the error due to truncation is negligible. The price $u$ of a European put option satisfies the Dirichlet boundary conditions \begin{equation} u = K e^{-r \tau}\;\;\text{at}\;\; s = 0 \qquad\text{and}\qquad u = 0\;\;\text{at}\;\; s = s_{\max}.
\end{equation} For a non-negative interest rate $r \ge 0$, the price $u$ of an American put option satisfies the Dirichlet boundary conditions \begin{equation} u = K\;\;\text{at}\;\; s = 0 \qquad\text{and}\qquad u = 0\;\;\text{at}\;\; s = s_{\max}. \end{equation} Under the stochastic volatility models, the Neumann boundary condition $\tfrac{\partial u}{\partial v} = 0$ is posed at $v = v_{\max}$. The second derivatives in \eqref{Bates} vanish on the boundary $v = 0$. It is shown in \cite{Ekstrom10} that this degenerate form defines an appropriate boundary condition at $v = 0$. Due to the early exercise possibility, the price $u$ of an American option satisfies the LCP \begin{equation}\label{American} \tfrac{\partial u}{\partial \tau} - L u = \lambda, \quad u \ge g, \quad \lambda \ge 0, \quad \lambda (u - g) = 0, \end{equation} where the operator $L$ is either $L^M$ or $L^B$ depending on the model and $\lambda$ is a Lagrange multiplier; see \cite{Ito09}, for example. For an easier numerical solution, the P(I)DE for European options and the LCP for American options are reformulated for $w$, which satisfies the homogeneous Dirichlet boundary condition $w = 0$ at $s = 0$. Furthermore, $w$ for American options is chosen so that it satisfies the positivity constraint $w \ge 0$ instead of the more complicated constraint $u \ge g$. For European options $w$ is chosen to be $w = u - e^{-r \tau} g$ while for American options it is chosen to be $w = u - g$. For European options the choice $w = u - e^{-r \tau} g$ leads to the P(I)DE \begin{equation}\label{formwe} \tfrac{\partial w}{\partial \tau} - L w = e^{-r \tau} (L + r) g. \end{equation} For American options the choice $w = u - g$ leads to the LCP \begin{equation}\label{formw} \tfrac{\partial w}{\partial \tau} - L w = \lambda + L g, \quad w \ge 0, \quad \lambda \ge 0, \quad \lambda w = 0. \end{equation} For American options a quadratic penalty formulation is obtained by choosing the Lagrange multiplier to be \begin{equation}\label{lambdadef} \lambda = -\tfrac{1}{\varepsilon} \max \left\{-w,0\right\} w. \end{equation} This leads to the nonlinear P(I)DE \begin{equation} \tfrac{\partial w}{\partial \tau} - L w + \tfrac{1}{\varepsilon} \max \left\{-w,\, 0\right\} w = L g. \end{equation} For the finite difference discretization, a grid $s_i$, $i = 0, 1, 2, \ldots, N_s$, is defined for the interval $[0, s_{\max}]$. The spatial partial derivatives with respect to $s$ are discretized using the central finite differences \begin{equation} \frac{\partial w}{\partial s} (s_i) \approx \tfrac{1}{\Delta s_{i-1} + \Delta s_i} \left[ - \tfrac{\Delta s_i}{\Delta s_{i-1}} w_{i-1} + \left( \tfrac{\Delta s_i}{\Delta s_{i-1}} - \tfrac{\Delta s_{i-1}}{\Delta s_i} \right) w_i + \tfrac{\Delta s_{i-1}}{\Delta s_i} w_{i+1} \right] \end{equation} and \begin{equation} \frac{\partial^2 w}{\partial s^2} (s_i) \approx \tfrac{2}{\Delta s_{i-1} + \Delta s_i} \left[ \tfrac{1}{\Delta s_{i-1}} w_{i-1} - \left( \tfrac{1}{\Delta s_{i-1}} + \tfrac{1}{\Delta s_i} \right) w_i + \tfrac{1}{\Delta s_i} w_{i+1} \right], \end{equation} where $\Delta s_i = s_{i+1} - s_i$. Similarly, a grid $v_j$, $j = 0, 1, 2, \ldots, N_v$, is defined for the interval $[0, v_{\max}]$. The spatial partial derivatives with respect to $v$ are discretized using the above central finite differences. A nine-point finite difference stencil for $\tfrac{\partial^2 w}{\partial s \partial v}$ is obtained by employing the central finite differences in both directions.
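To make the preceding discretization concrete, the following sketch assembles the first- and second-derivative matrices implied by the central differences above on a nonuniform grid. It is a minimal illustration under simplifying assumptions: the boundary rows are left empty (in practice they are determined by the boundary conditions) and the jump integral is not included.
\begin{verbatim}
import numpy as np

# Sketch: central finite-difference matrices on a nonuniform grid
# s_0 < s_1 < ... < s_N, following the formulas above. Boundary rows
# are left zero; in practice they are set by the boundary conditions.
def difference_matrices(s):
    N = len(s)
    D1 = np.zeros((N, N))   # first derivative d/ds
    D2 = np.zeros((N, N))   # second derivative d^2/ds^2
    for i in range(1, N - 1):
        dm = s[i] - s[i - 1]        # Delta s_{i-1}
        dp = s[i + 1] - s[i]        # Delta s_i
        D1[i, i - 1] = -dp / (dm * (dm + dp))
        D1[i, i] = (dp / dm - dm / dp) / (dm + dp)
        D1[i, i + 1] = dm / (dp * (dm + dp))
        D2[i, i - 1] = 2.0 / (dm * (dm + dp))
        D2[i, i] = -2.0 / (dm * dp)
        D2[i, i + 1] = 2.0 / (dp * (dm + dp))
    return D1, D2
\end{verbatim}
For the two-dimensional Bates operator, the nine-point cross-derivative stencil can then be obtained by combining the one-dimensional first-derivative matrices in the $s$ and $v$ directions (e.g., via a Kronecker product on a tensor-product grid).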
While the nine-point approximation of the cross derivative can be unstable for high correlations $\rho$, it is stable for the numerical experiments presented in Section \ref{sect:Numerics}. On the boundary $v = 0$, a one-sided finite difference approximation is used for $\tfrac{\partial w}{\partial v}$. The integrals can be discretized using a second-order accurate quadrature formula. Here linear interpolation of $w$ between grid points is combined with exact integration; see \cite{Salmi11} for details. Under the Merton model the discretization of the integral leads to a full matrix while under the Bates model it leads to full diagonal blocks. Under models without jumps, the time discretization is performed by taking the first time steps with the implicit Euler method and thereafter using the second-order accurate BDF2 method. Under jump models the integral is treated explicitly: in the first time step using the explicit Euler method, and in the following time steps using linear extrapolation based on the two previous time steps. This IMEX-BDF2 method is described in \cite{Salmi14a}. With the explicit treatment of the integral it is not necessary to solve systems with dense matrices. At the time $(k+1) \Delta \tau$, the grid point values contained in the vector ${\bf w}^{k+1}$ are obtained by solving the system \begin{equation}\label{BDF2stepe} \left( {\bf I} + \tfrac{2}{3} \Delta \tau {\bf D} \right) {\bf w}^{k+1} = \left( \tfrac{4}{3} {\bf w}^k - \tfrac{1}{3} {\bf w}^{k-1} \right) + \Delta \tau {\bf J} \left( \tfrac{4}{3} {\bf w}^k - \tfrac{2}{3} {\bf w}^{k-1} \right) + \tfrac{2}{3} \Delta \tau {\bf f} \end{equation} for European options and \begin{equation}\label{BDF2step} \begin{split} & \left( {\bf I} + \tfrac{2}{3} \Delta \tau {\bf D} + \tfrac{1}{\varepsilon} \diag \left( \max \left\{ -{\bf w}^{k+1},\, 0\right\} \right) \right) {\bf w}^{k+1} \\ &{} = \left( \tfrac{4}{3} {\bf w}^k - \tfrac{1}{3} {\bf w}^{k-1} \right) + \Delta \tau {\bf J} \left( \tfrac{4}{3} {\bf w}^k - \tfrac{2}{3} {\bf w}^{k-1} \right) + \tfrac{2}{3} \Delta \tau {\bf f} \end{split} \end{equation} for American options, where the matrices ${\bf J}$ and ${\bf D}$ correspond to the terms due to the jumps and to the remaining terms, respectively. The vector ${\bf f}$ contains the grid point values of $e^{-r \tau} (L + r) g$ for European options and of $Lg$ for American options. The operator $\diag ( \cdot )$ gives a diagonal matrix with the diagonal entries defined by the argument vector. The maximum is taken componentwise. The systems \eqref{BDF2stepe} and \eqref{BDF2step} can be expressed more compactly as \begin{equation} {\bf A} {\bf w}^{k+1} = {\bf r}^{k+1} \end{equation} and \begin{equation} \left( {\bf A} + \tfrac{1}{\varepsilon} \diag \left( \max \left\{ -{\bf w}^{k+1},\, 0\right\} \right) \right) {\bf w}^{k+1} = {\bf r}^{k+1} \end{equation} with suitably defined ${\bf A}$ and ${\bf r}^{k+1}$. The discrete counterpart of the Lagrange multiplier $\lambda$ in \eqref{lambdadef} reads \begin{equation} {\boldsymbol \lambda}^{k+1} = -\tfrac{1}{\varepsilon} \diag \left( \max \left\{ -{\bf w}^{k+1},\, 0\right\} \right) {\bf w}^{k+1}. \end{equation} \section{Reduced Order Models} \label{sect:ROMs} Let ${\bf U} \in {\mathbb R}^{N \times n}$, with $n \ll N$, be a basis for ${\bf w}$. This basis is constructed by applying POD to a collection of solution snapshots. A solution snapshot, or simply a snapshot, is defined as a state vector ${\bf w}^k$ computed as the solution of~\eqref{BDF2step} for some instance of its parameters. A solution matrix is defined as a matrix whose columns are individual snapshots.
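As a minimal illustration of how such a snapshot matrix might be assembled (with a hypothetical routine \texttt{fom\_step} standing in for one solve of~\eqref{BDF2step}; the routine name and interface are ours, not part of the method itself), consider the following sketch.
\begin{verbatim}
import numpy as np

# Sketch of snapshot collection; fom_step is a hypothetical stand-in
# for one IMEX-BDF2 step, i.e., a solve of the system above.
def collect_snapshots(fom_step, w0, w1, n_steps, params):
    cols = []
    for p in params:                 # loop over parameter instances
        w_prev, w = w0.copy(), w1.copy()
        for _ in range(n_steps):
            w_prev, w = w, fom_step(w, w_prev, p)
            cols.append(w.copy())    # each state vector is a snapshot
    return np.column_stack(cols)     # snapshot matrix X
\end{verbatim}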
To construct ${\bf U}$, the following optimization problem is solved \begin{equation}\label{eq:SVDpb} \underset{{\bf U}\in\mathbb{R}^{N \times n},\,{\bf V} \in\mathbb{R}^{n \times K}} {\text{minimize}} \displaystyle \| {\bf X} - {\bf U} {\bf V} \|_F^2, \end{equation} where $K$ is the number of solution snapshots. Hence, the basis ${\bf U}$ consists of the first $n$ left singular vectors of the snapshot matrix ${\bf X}$ and ${\bf V} = {\bf \Sigma} {\bf W}^T$, where ${\bf \Sigma}$ is the diagonal matrix of the first $n$ singular values of ${\bf X}$, and ${\bf W}$ is the matrix of its first $n$ right singular vectors. For European options the reduced solution ${\bf w} = {\bf U} {\bf w}_r$ is governed by \begin{equation}\label{rom_e} {\bf U}^T {\bf A} {\bf U} {\bf w}_r^{k+1} = {\bf U}^T {\bf r}^{k+1}. \end{equation} This ROM has the form \begin{equation}\label{rom_fe} {\bf A}_r {\bf w}_r^{k+1} = {\bf r}_r^{k+1}, \end{equation} where ${\bf A}_r = {\bf U}^T {\bf A} {\bf U}$ is precomputed offline, while the right-hand side ${\bf r}_r^{k+1} = {\bf U}^T {\bf r}$ can be computed efficiently online with the number of operations depending on $n$. Thus, the online computational cost of forming and solving the problems \eqref{rom_fe} scales with the size $n$ of the reduced basis and does not depend on the size $N$ of the FOM. For American options the reduced solution ${\bf w} = {\bf U} {\bf w}_r$ is governed by \begin{equation}\label{rom_1} \left( {\bf U}^T {\bf A} {\bf U} + \tfrac{1}{\varepsilon} {\bf U}^T \diag \left( \max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\} \right) {\bf U} \right) {\bf w}_r^{k+1} = {\bf U}^T {\bf r}^{k+1}. \end{equation} The product ${\bf U}^T \diag \left( \max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\} \right) {\bf U}$ is the only product in~\eqref{rom_1} that cannot be precomputed offline. Since the cost of evaluating this product scales with the size of the full order model, Eq.~\eqref{rom_1} does not offer major computational savings. To attain computational savings, the traditional approach involves including a second layer of approximation, sometimes called ``hyper-reduction''. One of the most popular hyper-reduction approaches is the Discrete Empirical Interpolation Method (DEIM)~\cite{chaturantabut2010}. We recapitulate the traditional DEIM algorithm as a starting point for our innovation. Let ${\bf U}_\lambda \in {\mathbb R}^{N \times n_{\lambda}}$ be a basis for $\max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\}$, thus \begin{equation}\label{assumption} {\bf U}_\lambda {\bf h}_r \approx \max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\}, \end{equation} where ${\bf h}_r$ is the corresponding coefficient vector. The vector ${\bf h}_r$ can be determined by selecting $n_\lambda$ unique rows from the overdetermined system ${\bf U}_\lambda {\bf h}_r \approx \max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\}$. Specifically, consider a binary matrix ${\bf P} \in \{0,1\}^{N \times n_\lambda}$ satisfying ${\bf P}^T {\bf P} = {\bf I}_{n_\lambda}$.
Assuming ${\bf P}^T{\bf U}_{\lambda}$ is nonsingular, the coefficient vector ${\bf h}_r$ can be determined uniquely from \begin{equation} {\bf P}^T \max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\} = ({\bf P}^T{\bf U}_{\lambda}){\bf h}_r \end{equation} and the final approximation is \begin{align} \max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\} \approx {\bf U}_\lambda {\bf h}_r &= {\bf U}_\lambda ({\bf P}^T {\bf U}_\lambda)^{-1} {\bf P}^T \max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\}\\ &=\widetilde{{\bf U}}_\lambda \max \left\{ - {\bf C} {\bf w}_r^{k+1},\,0\right\}, \end{align} where $\widetilde{{\bf U}}_\lambda = {\bf U}_\lambda ({\bf P}^T {\bf U}_\lambda)^{-1}$, and ${\bf C} = {\bf P}^T {\bf U}$. Thus, the product ${\bf U}^T \diag \left(\max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\} \right) {\bf U}$ in Eq.~\eqref{rom_1} is approximated by ${\bf U}^T \diag \left(\widetilde{{\bf U}}_\lambda \max \left\{ - {\bf C} {\bf w}_r^{k+1},\, 0\right\} \right) {\bf U}$, which, unlike its predecessor, can be computed efficiently online. In particular \begin{equation} {\bf U}^T \diag \left(\widetilde{{\bf U}}_\lambda \max \left\{ - {\bf C} {\bf w}_r^{k+1},\, 0\right\} \right) {\bf U} = \sum_{i=1}^{n_\lambda} {\bf K}_i \max \left\{ -\left[{\bf C}{\bf w}_r^{k+1}\right]_i,\, 0 \right\}, \end{equation} where ${\bf K}_i = {\bf U}^T \diag \left( \left[\widetilde{{\bf U}}_\lambda\right]_{:,i} \right) {\bf U}$ and $\left[\widetilde{{\bf U}}_\lambda\right]_{:,i}$ refers to the $i^{\text{th}}$ column of $\widetilde{{\bf U}}_\lambda$. The matrices ${\bf K}_i$ can be computed offline, once and for all, while $\max \left\{ -\left[{\bf C} {\bf w}_r^{k+1}\right]_i,\, 0 \right\}$ can be computed efficiently online since ${\bf C} \in \mathbb{R}^{n_\lambda \times n}$ does not scale with the size of the full order model. Although this straightforward implementation of DEIM succeeds in reducing the computational complexity of the ROM, this approach cannot be expected to yield accurate price predictions because DEIM does not enforce non-negativity. Even if the basis ${\bf U}_\lambda$ is constructed to be non-negative a priori using, for example, non-negative matrix factorization (NNMF), non-negativity is still not guaranteed because $\widetilde{{\bf U}}_\lambda = {\bf U}_\lambda ({\bf P}^T {\bf U}_\lambda)^{-1}$ is not guaranteed to be non-negative. One possible remedy is to use instead a non-negative variation of the DEIM, called NNDEIM~\cite{amsallem2016}. Yet another remedy involves an angle-greedy procedure for constructing the non-negative bases~\cite{Burkovska15}. In this work, we introduce an alternative approach that does not require computation of a non-negative basis for the Lagrange multipliers. Our proposed approach is based on the fact that accurate price predictions do not necessarily require accurate approximations of the Lagrange multipliers. In particular, requiring that ${\bf U}_\lambda {\bf h}_r \approx \max \left\{-{\bf U}{\bf w}_r^{k+1},0 \right\}$ may not be necessary. This has been observed in practice for the reduction of structural contact problems~\cite{Balajewicz15}. Our numerical experiments summarized in this paper suggest that using the binary matrix ${\bf P}$ as the basis for the Lagrange multipliers performs remarkably well for all reproductive and predictive simulations considered. With this approximation, the reduced order model simplifies considerably.
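Before specializing ${\bf U}_\lambda$, the offline/online split above can be sketched as follows (a minimal sketch, not from this paper, assuming Python with NumPy; the function names are illustrative):
\begin{verbatim}
# Minimal sketch of the offline precomputation of the matrices K_i and
# the online evaluation of the hyper-reduced penalty term.
import numpy as np

def offline_penalty_matrices(U, U_lam_tilde):
    """K_i = U^T diag([U_lam_tilde]_{:,i}) U for each column i."""
    return [U.T @ (U_lam_tilde[:, i:i + 1] * U)
            for i in range(U_lam_tilde.shape[1])]

def online_penalty_term(K_list, C, w_r):
    """Assemble sum_i K_i max{-[C w_r]_i, 0}; cost independent of N."""
    h = np.maximum(-(C @ w_r), 0.0)
    return sum(K_i * h_i for K_i, h_i in zip(K_list, h))
\end{verbatim}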
With the choice ${\bf U}_\lambda = {\bf P}$, one has $\widetilde{{\bf U}}_\lambda = {\bf P}$, and thus the product ${\bf U}^T \diag \left(\max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\} \right) {\bf U}$ in Eq.~\eqref{rom_1} is approximated by the relatively simple product ${\bf C}^T \diag \left(\max \left\{ - {\bf C} {\bf w}_r^{k+1},\, 0\right\} \right) {\bf C}$. Thus, the final form of the ROM is as follows \begin{equation}\label{rom_2} \left( {\bf A}_r + \tfrac{1}{\varepsilon} {\bf C}^T \diag \left( \max \left\{ - {\bf C} {\bf w}_r^{k+1},\, 0\right\} \right) {\bf C} \right) {\bf w}_r^{k+1} = {\bf r}^{k+1}_r, \end{equation} where ${\bf A}_r = {\bf U}^T {\bf A} {\bf U}$, and ${\bf r}_r = {\bf U}^T {\bf r}$. All components in Equation~\eqref{rom_2} scale with the size of the reduced order model. Finally, to construct the selection matrix ${\bf P}$, the standard DEIM algorithm for selecting the interpolation indices is utilized~\cite{chaturantabut2010}. However, in our proposed approach, the DEIM algorithm is applied to ${\bf m}\odot{\bf U}_{:,i}$, for $i =1,2,\ldots,n$, where ${\bf m} \in \{0,1\}^{N \times 1}$ is a binary mask vector. The non-zero elements of the mask vector ${\bf m}$ correspond to elements in the snapshots where early exercise has occurred at least once, that is, elements $j$ such that ${\bf U}_{j,i} \le 0$ for any $i$. The binary mask vector ensures consistency with the nonlinear function that is being approximated, i.e. $\max \left\{ - {\bf U} {\bf w}_r^{k+1},\, 0\right\}$. \section{Numerical Experiments} \label{sect:Numerics} All numerical examples considered here price a European put option and an American put option with the strike price $K = 100$ and the expiry $T = 0.5$. Only at-the-money options are considered, that is, the value of $u$ at $s = K$ is sought. Under the stochastic volatility models the value of $u$ is computed at the instantaneous variance $v = \theta$. The full order models are discretized using quadratically refined spatial grids similar to the ones employed by the FD-NU method in \cite{vonSydow15b}. The $s$-grid is defined by $s_i = \left[ \left(\tfrac{i}{\alpha N_s} - 1 \right) \left|\tfrac{i}{\alpha N_s} - 1 \right| + 1 \right] K$, $i = 0, 1, \ldots, N_s$ with $\alpha = \tfrac{3}{8}$. For the stochastic volatility models the variance grid is defined by $v_j = \left(\tfrac{j}{N_v}\right)^2 v_{\max}$ with $v_{\max} = 1$. The uniform time steps are given by $\Delta \tau = \tfrac{1}{N_\tau} T$. In the experiments the numbers of spatial and temporal steps are chosen to be $N_s = 128$, $N_v = 64$, and $N_\tau = 32$. With this choice and the employed parameter ranges the absolute discretization error is about $10^{-2}$ or less. In the case of the American option, an iteration reduces the penalty parameter $\varepsilon$ over the five values $10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, and $10^{-5}$. This is the main reason for higher run times with the American option. The snapshot matrix ${\bf X}$ is given by all vectors ${\bf w}^k$, $k = 1, 2, \ldots, N_\tau$, in all training runs. For these training runs each model parameter is sampled at its extreme values and at the midpoint between them. Thus, with two, five, and eight model parameters there are $3^2 = 9$, $3^5 = 243$, and $3^8 = 6561$ training runs, respectively. In the predictive ROM simulations, each parameter has two values, which are the midpoint values between the values used in the training.
Thus, with two, five, and eight model parameters there are $2^2 = 4$, $2^5 = 32$, and $2^8 = 256$ prediction runs, respectively. The sizes of the two reduced bases, given by $n$ and $n_{\lambda}$, are chosen to be the same. The measured error is the absolute difference between the prices given by the reduced order model and the full order model. All errors shown in Figures~\ref{fig:BlackScholes}--\ref{fig:Bates} are computed for the predictive simulations, that is, for simulations with parameters not included in the training simulations used to generate the ROMs. \subsection{Black--Scholes Model} The model parameters for the Black--Scholes model are varied in the range: \begin{equation} (r,\, \sigma_s) \in [0.025,\, 0.035] \times [0.35,\, 0.45]. \end{equation} The prices of the European and American options vary roughly in the ranges $[8.91, 11.94]$ and $[9.06, 12.04]$, respectively. Figure \ref{fig:BlackScholes} shows the reduction of the maximum and mean errors of the prices of these options with the growth of the reduced basis sizes $n = n_{\lambda}$. \begin{figure}[tb] \begin{centering} \includegraphics[width=0.8\textwidth]{BlackScholesErrors} \caption{Under the Black--Scholes model the error with respect to the reduced basis size $n = n_{\lambda}$} \label{fig:BlackScholes} \end{centering} \end{figure} \subsection{Merton Model} The model parameters for the Merton model are varied in the range: \begin{equation} (r,\, \sigma_s,\, \mu,\, \delta,\, \gamma) \in [0.025,\, 0.035] \times [0.35,\, 0.45] \times [0.15,\, 0.25] \times [0.3,\, 0.5] \times [-0.7,\, -0.3]. \end{equation} The prices of the European and American options vary roughly in the ranges $[9.50, 13.97]$ and $[9.65, 14.08]$, respectively. Figure \ref{fig:Merton} shows the reduction of the maximum and mean errors of the prices of these options with the growth of the reduced basis sizes $n = n_{\lambda}$. \begin{figure}[tb] \begin{centering} \includegraphics[width=0.8\textwidth]{MertonErrors} \caption{Under the Merton model the error with respect to the reduced basis size $n = n_{\lambda}$} \label{fig:Merton} \end{centering} \end{figure} \subsection{Heston Model} The model parameters for the Heston model are varied in the range: \begin{equation} (r,\, \kappa,\, \theta,\, \sigma_v,\, \rho) \in [0.025,\, 0.035] \times [3,\, 5] \times [0.35^2,\, 0.45^2] \times [0.35,\, 0.45] \times [-0.75,\, -0.25]. \end{equation} The prices of the European and American options vary roughly in the ranges $[8.72, 11.88]$ and $[8.87, 11.98]$, respectively. Figure \ref{fig:Heston} shows the reduction of the maximum and mean errors of the prices of these options with the growth of the reduced basis sizes $n = n_{\lambda}$. \begin{figure}[tb] \begin{centering} \includegraphics[width=0.8\textwidth]{HestonErrors} \caption{Under the Heston model the error with respect to the reduced basis size $n = n_{\lambda}$} \label{fig:Heston} \end{centering} \end{figure} \subsection{Bates Model} The model parameters for the Bates model are varied in the range: \begin{equation} \begin{split} (r,\, \kappa,\, \theta,\, \sigma_v,\, \rho,\, \mu,\, \delta,\, \gamma) \in {} & [0.025,\, 0.035] \times [3,\, 5] \times [0.35^2,\, 0.45^2] \times [0.35,\, 0.45] \times \\ & [-0.75,\, -0.25] \times [0.15,\, 0.25] \times [0.3,\, 0.5] \times [-0.7,\, -0.3]. \end{split} \end{equation} The prices of the European and American options vary roughly in the ranges $[9.38, 13.95]$ and $[9.53, 14.07]$, respectively.
Figure \ref{fig:Bates} shows the reduction of the maximum and mean errors of the prices of these options with the growth of the reduced basis sizes $n = n_{\lambda}$. We note that for this model essentially the same errors can be obtained based only on $2^8 = 256$ training runs sampling the extreme values of the model parameters. \begin{figure}[tb] \begin{centering} \includegraphics[width=0.8\textwidth]{BatesErrors} \caption{Under the Bates model the error with respect to the reduced basis size $n = n_{\lambda}$} \label{fig:Bates} \end{centering} \end{figure} \subsection{Computational Speed-up} For each problem considered, the speed-up factor delivered by its ROM for the online computations is reported in Table~\ref{tab:speedupe} for the European option and in Table~\ref{tab:speedup} for the American option. All models are solved in MATLAB on an Intel Xeon 2.6GHz CPU, and all CPU times were measured using the \verb=tic-toc= function on a single computational thread via the \verb=-singleCompThread= start-up option. A ROM is integrated in time using the same scheme and time-step used to solve its corresponding FOM; see Section~\ref{sect:FOMs} for details. The online speed-up is calculated by evaluating the ratio between the time-integration of the FOM and the time-integration of the ROM. \begin{table} \caption{For the European option, CPU times in seconds for the online computations.} \centering \begin{tabular}{l|c c|c c|c} & \multicolumn{2}{c|}{FOM} & \multicolumn{2}{c|}{ROM} \\ Model & unknowns & CPU time & unknowns & CPU time & speed-up \\ \hline Black--Scholes &\phantom{8}127 & 0.0011 & 16 & 0.00064 & 1.7 \\ Merton &\phantom{8}127 & 0.0022 & 16 & 0.00084 & 2.6 \\ Heston & 8255 & 0.16\phantom{99}& 40 & 0.0011\phantom{9} & 145 \\ Bates & 8255 & 0.36\phantom{99}& 40 & 0.0015\phantom{9} & 240 \\ \end{tabular} \label{tab:speedupe} \end{table} \begin{table} \caption{For the American option, CPU times in seconds for the online computations.} \centering \begin{tabular}{l|c c|c c|c} & \multicolumn{2}{c|}{FOM} & \multicolumn{2}{c|}{ROM} \\ Model & unknowns & CPU time & unknowns & CPU time & speed-up \\ \hline Black--Scholes &\phantom{8}127 & 0.026 & 16 & 0.025 & 1.0 \\ Merton &\phantom{8}127 & 0.027 & 16 & 0.026 & 1.0 \\ Heston & 8255 & 7.9\phantom{99}& 40 & 0.034 & 232 \\ Bates & 8255 & 8.0\phantom{99}& 40 & 0.034 & 235 \\ \end{tabular} \label{tab:speedup} \end{table} \section{Conclusions} \label{sec:conclusions} Reduced order models (ROMs) were constructed for pricing European and American options under jump-diffusion and stochastic volatility models. For American options they are based on a penalty formulation of the linear complementarity problem. The finite difference discretized differential operator is projected using a basis resulting from a proper orthogonal decomposition. The grid points for the penalty term are chosen using the discrete empirical interpolation method. In numerical experiments, from two to eight model parameters are varied in a given range. For the one-dimensional Black--Scholes and Merton models about 16 ROM basis vectors were enough to reach 0.1\% accuracy for the considered American option. For the European option about 8 basis vectors lead to this accuracy. For the two-dimensional Heston and Bates models about 40 basis vectors were needed to reach the same accuracy for the American option. Slightly fewer basis vectors lead to the same accuracy for the European option.
For these two-dimensional models the computational speed-up was over 200 when the full order model (FOM) and the ROM have roughly the same 0.1\% accuracy level for the American option. For the European option the solution of the FOM and the ROM under the Bates model required about 0.36 and 0.0015 seconds, respectively. For the American option the solution of the FOM and the ROM for the two-dimensional models required about 8 and 0.034 seconds, respectively. With the one-dimensional models the speed-up was negligible. The results with the Bates model, with eight varying parameters, are particularly impressive. For one-dimensional models, FOMs are probably sufficiently fast for most applications. For two-dimensional models, FOMs are often computationally too expensive, and in such cases the proposed ROMs can enable the use of these models. The performance of the proposed ROM approach is quite similar to that of previous approaches based on the NNMF. For example, the maximum ROM price error using 40 basis vectors under the Heston model with the proposed approach and with the previous approach based on NNMF is $2.9 \times 10^{-3}$ and $4.2 \times 10^{-3}$, respectively. For the Bates model, the maximum ROM price error using 40 basis vectors with the proposed approach and with the previous approach based on NNMF is $6.8 \times 10^{-3}$ and $4.0 \times 10^{-3}$, respectively. A potential application for these ROMs is the calibration of the model parameters based on market data. With a least squares calibration formulation, option prices and their sensitivities can be computed quickly and accurately for varying parameters by employing ROMs.
\section{Introduction} Our world consists of objects with collections of atoms and molecules which are bound together by ionic or covalent bonds. On a larger length scale these objects are collected in planetary systems in galaxies, which are held together by gravitational forces. The ionic and covalent bonds are established by electromagnetic forces, whereas the planetary systems and the galaxies are held together by gravitational forces. Although the two forces differ enormously in strength, by a factor of $\approx 10^{36}$, they have, however, some common features. The radial strengths of both forces are proportional to the inverse square (ISF), $r^{-2}$, of the distances between mass centers, and both forces are believed to extend to infinity. The two forces can also result in regular closed orbits for the dynamics of a collection of force centres, as is demonstrated by our solar system and the orbitals of the bound electrons of an atomic nucleus. The two other fundamental forces are the strong and weak nuclear forces, and they are both short ranged. All other forces are ``derived forces'' such as the harmonic forces or the attractive induced dipole-dipole forces. Isaac Newton formulated classical mechanics in his book PHILOSOPHI\AE \ NATURALIS PRINCIPIA MATHEMATICA ($Principia$) \cite{Newton1687}, where he also proposed the law of gravity and solved Kepler's equation for a planet's motion. According to Newton, gravity varies with the inverse square of the distance $r$ between two celestial objects, and a planet exposed to the gravitational force from the Sun moves in an elliptical orbit. The Moon exhibits, however, periodic ``revolving orbits'', and Newton shows in $Principia$ that this behaviour, which is caused by the daily rotation of the Earth, could be taken into account by an additional inverse cubic force proportional to $r^{-3}$ (ICF). But it raises the question: for which forces can a system of objects have regular orbits? It is only possible to solve the classical mechanics differential equations analytically for two objects. The classical second-order differential equation for the dynamics of two objects with a central force proportional to $r^n$ can be solved for a series of values of the power $n$. An important result was obtained by Bertrand \cite{Bertrand}, who proved that the inverse square and harmonic forces are the only central forces for which all bound orbits are closed. Later investigations have proved the existence of regular orbits for a series of values of the power $n$ of the central force, including the ICF \cite{Whittaker,Broucke1980,Mahomed2000}. For a system consisting of many objects, the dynamics of coupled harmonic oscillators demonstrates that it is indeed possible to have stable regular dynamics for systems with forces other than the gravitational forces, but otherwise there are no theoretical proofs, nor any other examples showing that it is possible. Here it is, however, demonstrated by Molecular Dynamics (MD) simulations of planetary systems \cite{Toxvaerd2022} that a planetary system can also have planets with stable regular orbits for attractive forces which vary as\\ $-r^{-1}$\\ $-r^{-2} \pm \alpha \times r^{-1}$\\ $-r^{-2} \pm \alpha \times r^{-3}$ for $\alpha \in [-100,10]$.\\ But it has not been possible to obtain stable regular orbits for $r^{-3}$.
\section{ The force between two spherically symmetrical objects}\label{sec 2} Newton was aware that the extension of an object can affect the gravitational force between two objects, and in $\textit{Theorem XXXI}$ in $Principia$ \cite{Newtonshell} he also solved this problem for ISF between spherically symmetrical objects. Newton's $\textit{Theorem XXXI}$ states that: 1. A spherically symmetrical body affects external objects gravitationally as though all of its mass were concentrated at a point at its center. 2. If the body is a spherically symmetric shell, no net gravitational force is exerted by the shell on any object inside, regardless of the object's location within the shell. Newton's theorem is, however, only valid for ISF. The forces between spherically symmetrical objects with forces proportional to $r^{-1}$ (inverse forces, IF) or ICF depend on the objects' extension. Newton's derivation of the theorem is by the use of Euclidean geometry, but the forces between two spherically symmetrical objects can also be derived by the use of algebra. Let the objects No. $i$ and $j$ be spherically symmetrical with masses $m_i$ and $m_j$ and with a uniform density within the balls with the radii $\sigma_i$ and $\sigma_j$. The attraction, IF, ISF or ICF, on a mass $\delta m_i$ at $\textbf{s}_i$ in object $i$ and at the distance $s_{ij}$ from a mass $\delta m_j$ at $\textbf{s}_j$ in $j$ is \begin{equation} \delta \mathbf{F}_{ij}=-\beta \delta m_i \delta m_j s_{ij}^n \hat{\mathbf{s}}_{ij}, \end{equation} with $n$=-1, -2 and -3, respectively, and the total force $\textbf{F}_{ij}$ is obtained by a quadruple integration, first between $\delta m_i$ at $\textbf{s}_i$ and mass elements $\delta m_j$ in a sphere in $j$ with radius $\sigma_j^{'} \le\sigma_j$, then over spheres centred at $\textbf{r}_j$ with radius $\sigma_j^{'}$, and then correspondingly between the mass $m_j$ located at the center $\textbf{r}_j$ and mass elements $\delta m_i$. Consider mass elements $\delta m_j(\textbf{s}_j)=4 \pi \sigma_j^{'2} m_j d\sigma'_j/(4 \pi/3 \sigma_j^3)$ at $\textbf{s}_{j}$ in a thin shell $[\sigma'_j,\sigma'_j+d \sigma'_j]$ with center at $\textbf{r}_j$ and a distance $r'_{ij}=\mid \textbf{s}_i- \textbf{r}_j \mid > \sigma_i+\sigma_j \ge \sigma_i+\sigma'_j$ to $\textbf{s}_i$. The force $\delta \textbf{F}_{ij}=-\beta \delta m_i m_j s_{ij}^n\hat{\textbf{r}}'_{ij}$ on $\delta m_i$ from object $j$ is \cite{WikipediaNewtonshell} \begin{equation} \delta \mathbf{F}_{ij} =-\beta \frac{\delta m_i}{4r_{ij}^{'2}} \int_0^{\sigma_j} \frac{\delta m_j}{\sigma'_j}\int^{r'_{ij}+ \sigma'_j}_{r'_{ij}-\sigma'_j}s_{ij}^n[s_{ij}^2+r_{ij}^{'2}-\sigma_j^{'2}] ds_{ij}\hat{\mathbf{r}}'_{ij}. \end{equation} The integrals are very simple for ISF since \begin{eqnarray} -\beta \frac{\delta m_i}{4r_{ij}^{'2}} \int_0^{\sigma_j} \frac{\delta m_j}{\sigma'_j}\int^{r'_{ij}+ \sigma'_j}_{r'_{ij}-\sigma'_j}s_{ij}^{-2}[s_{ij}^2+r_{ij}^{'2}-\sigma_j^{'2}] ds_{ij}\\ \nonumber = -\beta \frac{\delta m_i}{4r_{ij}^{'2}} \int_0^{\sigma_j} \frac{\delta m_j}{\sigma'_j} 4\sigma'_j=-\beta \frac{\delta m_i m_j}{r_{ij}^{'2}}, \end{eqnarray} and the integration over shells centred at $\textbf{r}_i$ with mass elements $\delta m_i$ leads to $Theorem$ $XXXI$. The integrations are more complex for $n \ne -2$. The simplest way to proceed is to expand the first integral in powers of $\sigma'_j/r'_{ij}$. The first terms in the final expressions for the force between $i$ and $j$ are given below.
For the IF function $s^{-1}$: \begin{equation} \mathbf{F}_{ij}(r_{ij}) \simeq -\frac{\beta_1 m_im_j}{r_{ij}}(1 -\frac{\sigma_i^2+\sigma_j^2}{5r_{ij}^2})\hat{\mathbf{r}}_{ij}+ \mathcal{O}(r_{ij}^{-4}) \end{equation} For $s^{-2}$ one obtains the usual expression for the gravitational ISF force ($\beta_2 = G$), which does not depend on the extensions of the two spherically symmetrical objects \begin{equation} \mathbf{F}_{ij}(r_{ij}) = -\frac{G m_im_j}{r_{ij}^2}\hat{\mathbf{r}}_{ij}. \end{equation} For $s^{-3}$ the ICF radial force is \begin{equation} \mathbf{F}_{ij}(r_{ij}) = -\frac{\beta_3 m_im_j}{r_{ij}^3}(1 +\frac{2\sigma_i^2+2\sigma_j^2}{5r_{ij}^2})\hat{\mathbf{r}}_{ij}+ \mathcal{O}(r_{ij}^{-6}). \end{equation} The dynamics of planetary systems with the different kinds of gravitational attractions is given in the next section. \begin{figure} \begin{center} \includegraphics[width=7cm,angle=-90]{Figure1.eps} \caption{A loop of the innermost planet from a position at time $t=2.5\times 10^6$, marked by a big black sphere, to a position at $t=2.5007325\times 10^6$ (293000 discrete time steps), marked by a small black sphere. The position of the ``Sun'' is marked with an enlarged red sphere. Some simultaneous loops of two other planets in the planetary system are shown in the next figure.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7cm,angle=-90]{Figure2.eps} \caption{ The simultaneous orbits with bows for two other planets in the planetary system. The orbits are obtained for one million discrete time steps in the time interval $t \in [2.5\times 10^6, 2.5025\times 10^6]$.} \end{center} \end{figure} \section{Dynamics of planetary systems with different gravitational forces} A discrete and exact algorithm for obtaining planetary systems is derived in a recent article \cite{Toxvaerd2022}. The algorithm is symplectic and time reversible and has the same invariances as Newton's analytic dynamics. For Kepler's solution of the two body system of a Sun and a planet one can compare the two dynamics \cite{Toxvaerd2020}, which lead to the same orbits. The discrete dynamics is absolutely stable and without any adjustments for conservation of energy, momentum and angular momentum for billions of time steps. The algorithm and the procedure for obtaining planetary systems are given in the Appendix. Here the algorithm is used to obtain planetary systems with forces different from the Newtonian inverse square gravitational forces. \begin{figure} \begin{center} \includegraphics[width=7cm,angle=-90]{Figure3.eps} \caption{ Bows in a loop, in green and light blue, for the outermost planet. The start position at $t=2.5\times 10^6$ is marked with a big black sphere and the first three bows are shown in green. The position at $t=2.5025\times 10^6$ after the first three bows is shown by a smaller black sphere, and the succeeding five bows are shown in light blue. Several consecutive loops of the planet are shown in Figure 5.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7cm,angle=-90]{Figure4.eps} \caption{ A planet which after 13 orbits almost returns to its start position. The start position at $t=2.5\times 10^6$ is shown with a big blue sphere, and the position at $t=2.5051975\times 10^6$ after thirteen orbits by a smaller black sphere.
The next figure shows several orbits of the planet together with the orbits of the outermost planet.} \end{center} \end{figure} \subsection{ Planetary systems for objects with inverse forces} The IF between two objects $i$ and $j$ is given by Eq. (4), which gives the first-order size correction for the forces between spherically symmetrical uniform mass objects. The investigation is conducted in two ways by MD simulations. $\textbf{A}$: One can simply create planetary systems in the same way as described in \cite{Toxvaerd2022} and in the Appendix, or alternatively $\textbf{B}$: one can replace the Newtonian ISF forces between objects in an ordinary planetary system by the corresponding inverse IF forces. $\textbf{A}$: The results of obtaining planetary systems spontaneously by merging of objects as in \cite{Toxvaerd2022} are shown in the following figures. Planetary systems with strength $\beta_1=1$ were created spontaneously at time $t=0$ from different configurations and distributions of velocities of objects with masses $m_i(0)=1$. (For units of length, time and strength of the attractions in the MD systems see the Appendix.) Ten different planetary systems were formed, and the overall conclusion from the ten systems is that it is easy to obtain planetary systems with IF forces. The regular orbits deviate, however, qualitatively from the elliptical orbits in an ordinary planetary system. A typical regular orbit is shown in Figure 1. Figure 1 shows a loop of the innermost planet in one of the ten planetary systems, which was simulated with IF. The planetary system was started with a thousand objects, and the planetary system with IF contained 38 planets after $10^9$ MD time steps, corresponding to an MD time $t=2.5\times 10^6$, at which the inner planets had performed several thousand bound rotations. The planet in Figure 1 performs a loop, but with a change of its elliptical major axis by $\approx \pi/3$ at the passage of the ``Sun'', by which the total regular orbit appears with three consecutive bows with an angle of $\approx2\pi/3$. The total angular momentum for the system is conserved by Newton's exact discrete algorithm \cite{Toxvaerd2022}, but also the angular momentum of the individual planets in the system is conserved to a high degree, so the three bows are in the same plane. Most of the planets exhibit this regular dynamics. Figure 2 shows the simultaneous orbits for two other planets in the same planetary system. The planetary system with the objects shown in Figure 1 and Figure 2 consists of 38 objects in bound orbits around a central heavy object (the ``Sun'' with $m_{\textrm{Sun}}$=867). In \cite{Broucke1980} Broucke has obtained the orbit for one planet (Figure 2 in \cite{Broucke1980}). There are, however, only some similarities between the present orbits for a many-body three dimensional planetary system and the 2D orbit of a single planet. \begin{figure} \begin{center} \includegraphics[width=7cm,angle=-90]{Figure5.eps} \caption{ The outermost planet (green) together with the planet from Figure 4 (red) with its orbits in bands. The outermost planet changes its major orbit axis by $\approx \pi/4$ at every passage of the Sun. The central Sun is shown in red.} \end{center} \end{figure} All the planets in the planetary systems with inverse forces show what Newton probably would have called revolving orbits, but not all of the planets have orbits of the form shown in Figure 1 and Figure 2.
The next figure, Figure 3, shows the consecutive bows of the outermost planet in the same planetary system. The planet changes its principal axis by $\approx \pi/4$, by which it performs eight bows in its regular orbit. Figure 5 shows in green 3-4 loops of this planet together with another planet in the same planetary system. It has not been possible to obtain simple elliptical regular orbits, but there are examples of planets with a smaller change of their principal axis at the passage of the Sun. Figure 4 gives such an example of a planet, which after thirteen loops returns to its start position, and a collection of consecutive loops for this planet, shown in red in Figure 5, demonstrates that this regular pattern is maintained over many consecutive loops. $\textbf{B}$: Planetary systems with IF forces were obtained in another way by replacing the Newtonian ISF forces in an ordinary planetary system with the IF forces. The discrete dynamics with IF was started with the end positions of the planets in the planetary system \cite{Toxvaerd2022}. A replacement with $\beta_1=\beta_2=G$ results in a collapse of the planetary systems, with only two planets remaining in revolving orbits similar to the orbits shown in Figure 2. The other planets were engulfed by the Sun. This is because the inverse forces with $\beta_1=\beta_2=G$ acting on a planet are about a thousand times stronger than the Newtonian gravitational forces. Planets in the Newtonian planetary systems in \cite{Toxvaerd2022} are located at mean distances to their Suns in the range $<r_{i,Sun}> \approx [100,30000]$. For an ordinary planet with a Newtonian ISF force field and at a position $r_{i,Sun}= 1000$ the corresponding IF force is of the order of a thousand times stronger than the Newtonian ISF force. So in order to establish whether it is possible to obtain simple elliptical orbits without revolving orbits, the forces in the Newtonian planetary systems in \cite{Toxvaerd2022} were replaced with IF forces with $\beta_1 \approx G/1000$. The replacement was performed in the following way: A planet $i$ with a rather circular orbit and at a mean distance $<r_{i,Sun}> \approx 1000$ was selected, and the strength $\beta_1=0.00105$ was determined so that the planet follows the same elliptical orbit shortly after the replacement. The result of this replacement of the forces in the planetary system on the orbit of this planet is shown in Figure 6, which shows the orbit of an ordinary planet before (red) and after (green) the replacement. The replacement is for $\beta_1=0.00105$, for which the planet followed the gravitational orbit (red) over a long period of time before it deviated and exhibited the revolving orbits shown in the figure, but with a small change of its principal axis at the passage of the Sun and with elliptical-like orbits. The other planets in the Newtonian planetary system changed their orbits to the revolving orbits (blue orbit in Figure 6) also shown in the previous figures. It has not been possible to obtain the simple elliptical orbits which spontaneously appear in an ordinary Newtonian planetary system. The simulations were performed with the first-order IF expression, Eq. 4, but simulations with and without the first-order correction $\mid\delta\textbf{F}_{\textrm{IF}}\mid= \beta_1 m_i m_j(\sigma_i^2+\sigma_j^2)/5r_{ij}^3$ showed that the first-order correction only has a minor quantitative effect, and that the exclusion of this term does not change the overall qualitative result.
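The first-order size correction in Eq. (4) can also be checked directly. The following is a minimal sketch (not from this article, assuming Python with NumPy) that estimates the IF attraction between two uniform balls by Monte Carlo integration and compares it with Eq. (4):
\begin{verbatim}
# Minimal sketch: Monte Carlo check of the first-order size correction
# in Eq. (4) for the inverse force (IF) between two uniform balls.
import numpy as np
rng = np.random.default_rng(1)

def sample_ball(radius, n):
    """Uniform points in a ball: isotropic direction, r ~ radius*u^(1/3)."""
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return radius * rng.random(n)[:, None] ** (1.0 / 3.0) * x

beta1, mi, mj = 1.0, 1.0, 1.0
sig_i, sig_j, r = 1.0, 1.0, 5.0
n = 1_000_000
pi = sample_ball(sig_i, n)                            # ball i at the origin
pj = sample_ball(sig_j, n) + np.array([r, 0.0, 0.0])  # ball j at distance r
sv = pj - pi                                          # s_ij vectors
s = np.linalg.norm(sv, axis=1)
# Attraction along the line of centers: <s^{-1} * (s_x/s)> = <s_x/s^2>
F_mc = beta1 * mi * mj * np.mean(sv[:, 0] / s**2)
F_eq4 = beta1 * mi * mj / r * (1.0 - (sig_i**2 + sig_j**2) / (5.0 * r**2))
# F_mc agrees with F_eq4 to the Monte Carlo sampling accuracy.
\end{verbatim}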
\subsection{Simulation of systems with inverse cubic forces} \begin{figure} \begin{center} \includegraphics[width=7cm,angle=-90]{Figure6.eps} \caption{ The red elliptical orbits are for a planet with Newtonian ISF forces, and the green orbits are after the forces at the position marked with a black sphere are replaced with IF forces with $\beta_1$=0.00105, by which the planet for a short time follows the elliptical path before the revolving behaviour sets in. The orbit (blue) of a planet at a mean distance slightly bigger than that of the planet shown in red spontaneously changed its elliptical orbit to the bows also shown in the previous figures.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7cm,angle=-90]{Figure7.eps} \caption{ The orbits for a planet in a planetary system with ICF. The planetary system is obtained from an ordinary planetary system with elliptical orbits (red) by replacing the ISF forces by ICF with a strength $\beta_3 \approx 1230\times G$. The black circles are the positions of the planet at the time where the replacement took place; the green curve is for $\beta_3=1228.75$ and the blue curve is for $\beta_3=1228.5$.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7cm,angle=-90]{Figure8.eps} \caption{Top view of the orbits of the planet also shown in detail in the previous figure. The blue curve is for ICF with Eq. (6) and the magenta curve is with the zero-order ICF $-1228.5m_im_j/r_{ij}^3$.} \end{center} \end{figure} A system of objects with masses $m_i(0)=1$ and pure ICF given by Eq. (6) does not self-assemble into a planetary system. The objects either fuse together or expand as free objects. This observation is valid for different values of the gravitational constant $\beta_3$ in Eq. (6), and it was not possible to create a planetary system with ICF. Another way to demonstrate the instability of planetary systems with pure ICF attractions is to replace the Newtonian gravitational ISF in a planetary system by ICF, as described in the previous subsection. Thus it is possible to determine values $\beta_3 \pm \delta$ for which a given planet in a Newtonian planetary system is either engulfed by the Sun when the forces are changed from $-G/r^2$ to $-(\beta_3+\delta)/r^3(1+(2\sigma_i^2+2\sigma_j^2)/5r_{ij}^2)$, or leaves the Sun as a free object for $\beta_3-\delta$. Figure 7 shows this ``tipping point'' for the same planetary system and planet as shown in Figure 6 with red for ISF and green for IF. The planet with pure ICF is engulfed by the Sun for $\beta_3+\delta=1228.75$ (green curve), but escapes the Sun for $\beta_3-\delta=1228.5$ (blue curve). Eq. (6) includes the first asymptotic correction in a rapidly converging expansion for the extension of the spherically symmetrical objects with a uniform density. The zero-order expression for the ICF system, $-\beta_3 m_im_j/r_{ij}^3$, gives the same qualitative result, as shown in Figure 8. The tipping point is the same whether one includes the first-order correction or not. \section{ Newton's Propositions for the Moon's revolving orbits} \begin{figure} \begin{center} \includegraphics[width=7cm,angle=-90]{Figure9.eps} \caption{The planet also shown in Figure 7 in red, but now with IF or ICF included in the ISF attraction. The orbit in red is with pure ISF; the orbit in green is with the ICF $\alpha_3/r^3=-100/r^3$ included from the position marked by a black sphere, and the orbit in blue is with the IF $\alpha_1/r=0.01/r$ included.
The planet with ISF + ICF (green) has revolved $\approx$ 23-24 times before the principal axis of the elliptical orbit has changed by $2\pi$.} \end{center} \end{figure} The Moon exhibits apsidal precession, which is related to the Saros cycle that has been known since ancient times. Newton shows in Propositions 43-45 in $Principia$ that the added force on a single object from a fixed mass center which can cause its apsidal precession must be a central force between the planet and a mass point fixed in space (the Sun). In Proposition 44 he shows that an inverse-cube force (ICF) might cause the revolving orbits, and in Proposition 45 Newton extended his theorem to arbitrary central forces by assuming that the particle moved in a nearly circular orbit \cite{Chandrasekhar1995}. The Moon's apsidal precession is explained by the flattening of the rotating Earth with tide waves, which causes an ICF on the Moon. For Newton's analysis of the Moon's apsidal precession see \cite{Aoki1992}. New investigations of isotopes from the Moon reveal that it was created $\approx$ 4.51 billion years ago, $\approx$ 50 to 60 million years after the emergence of the Earth and our solar system \cite{Barboni2017,Thiemens2019}, and the Earth contained the Hadean ocean(s) with tide waves shortly after the creation of the Moon \cite{Harrison}, so an ICF has not affected the overall stability of the Moon's regular orbit. The rotation of the Earth and the Moon's orbit around the Earth result in an ICF which has accelerated the Moon out to its present position with its apsidal precession. The early orbit of the Moon may have had a high eccentricity \cite{Zuber2006}, but it is difficult to determine the evolution of the Moon's orbit due to the many factors which influence its evolution \cite{Green2017}. One can, however, conclude that the presence of an additional force on the Moon due to the tide waves has not affected the overall stability of the Moon's regular orbit. The planetary system and the orbit shown in red in Figure 7 are simulated with either ICF or IF included in the attractions. The planetary system is affected by including an $\alpha_3 r^{-3}$ ICF, and the systems are destroyed for $\alpha_3 \ge 100$. The ISF planetary system with the planet shown in red in Figure 7 contains twenty-one planets, and only three survived when $100\,r^{-3}$ was included in the attraction, whereas all twenty-one planets remained in regular orbits for an ICF term $\alpha_3 r^{-3}$ with $\alpha_3 \le 10$. The orbits in a planetary system with ISF+ICF forces exhibit the revolving behaviour predicted by Newton: Figure 9 shows the orbit of the planet also shown in Figure 7, in red without additional attractions, in green with ISF+ICF and $\alpha_3=-100$, and in blue with ISF+IF and $\alpha_1=0.01$. The behaviour of ISF+ICF and ISF+IF is in agreement with Newton's $Proposition$ $45$. Inclusion of IF in the gravitational attractions enhances, however, the revolving behaviour and stabilizes the planetary system, whereas inclusion of the ICF also results in revolving orbits, but it destabilizes the planetary system. The planetary ISF+ICF system is not stable for $\alpha_3 > 100$ nor for pure ICF attractions. \section{Conclusion} The discrete algorithm (Appendix A) derived in \cite{Toxvaerd2022} is used to obtain planetary systems with forces other than the gravitational forces. The main conclusion is that it is easy to obtain planetary systems with inverse gravitational forces.
However, it is not possible to obtain planetary systems with inverse cubic gravitational forces, even if one smoothly replaces the inverse square gravitational forces in a stable planetary system with inverse cubic forces. A detailed investigation of the planetary system after the replacement of the forces shows that one can determine a strength of the gravitational constant $\beta_3$ for inverse cubic forces for which a planet either detaches itself from the planetary system for $\beta_3-\delta$, or is engulfed by the ``Sun'' for $\beta_3+\delta$ (Figure 7 and Figure 8). Thus the inverse square force, which governs both the gravitational attraction between masses and the Coulomb attraction between charges in our universe, is the limiting case for regular orbits. A system of objects with inverse attractions $\propto r^{-n}$ with $n \ge 3$ will have the well known thermodynamic behaviour with gas-liquid-solid phases, but without regular orbits between units in the system. The planets in planetary systems with pure inverse forces have ``revolving orbits''. The regular orbits deviate, however, significantly from the slightly perturbed elliptic orbits in an ordinary planetary system with additional weak non-gravitational attractions. The principal axis changes by $\approx \pi/3$ at every loop (Figure 1, Figure 2, Figure 6) for the main part of the regular orbits in a planetary system with inverse forces. But changes by $\pi/4$ are also observed (Figure 3 and Figure 5), together with other smaller, but rather constant, changes (Figure 4 and Figure 5). Newton stated in Propositions 43-45 in $Principia$ that the Moon's revolving orbits could be explained by an additional attraction, $r^{-n}$, to the gravitational attraction with $n \ne 2$. The present simulations of planetary systems with gravitational attractions and an additional attraction with either $n=1$ or $n=3$ confirm Newton's Propositions; but whereas additional inverse attractions stabilize the planetary systems, the inclusion of a weak inverse cubic attraction also gives ``revolving orbits'' (Figure 9), while sufficiently strong inverse cubic attractions added to the inverse square gravitational forces destabilize the planetary system. \acknowledgments This work was supported by the VILLUM Foundation’s Matter project, grant No. 16515. \\ $\textbf{Data Availability Statement}$ Data will be available on request. \section{Appendix} The gravitational force, $\textbf{F}_i(\textbf{r}_i)$, on a planet $i$ at $\textbf{r}_i$ in a planetary system with $N$ celestial objects is \begin{equation} \mathbf{F}_i(\textbf{r}_i)= \sum_{j \ne i}^N \mathbf{F}_{ij}(r_{ij}) \end{equation} where the forces $\textbf{F}_{ij}(r_{ij})$ are given by one of Eqs. (4)-(6). Newton derived the discrete central difference algorithm when he obtained his second law \cite{Toxvaerd2020}.
In Newton's classical discrete dynamics \cite{Newton1687,Toxvaerd2020} a new position $\textbf{r}_k(t+\delta t)$ at time $t+\delta t$ of an object $k$ with the mass $m_k$ is determined by the force $\textbf{f}_k(t)$ acting on the object at the discrete position $\textbf{r}_k(t)$ at time $t$, and the position $\textbf{r}_k(t-\delta t)$ at $t - \delta t$ as \begin{equation} m_k\frac{\textbf{r}_k(t+\delta t)-\textbf{r}_k(t)}{\delta t} =m_k\frac{\textbf{r}_k(t)-\textbf{r}_k(t-\delta t)}{\delta t} +\delta t \textbf{f}_k(t), \end{equation} where the momenta $\textbf{p}_k(t+\delta t/2) = m_k (\textbf{r}_k(t+\delta t)-\textbf{r}_k(t))/\delta t$ and $\textbf{p}_k(t-\delta t/2)= m_k(\textbf{r}_k(t)-\textbf{r}_k(t-\delta t))/\delta t$ are constant in the time intervals in between the discrete positions. Newton postulated Eq. (A2) and obtained his second law, and the analytic dynamics, in the limit $\lim_{\delta t \rightarrow 0}$. The algorithm, Eq. (A2), is usually presented as the ``leap-frog'' algorithm for the velocities \begin{equation} \textbf{v}_k(t+\delta t/2)= \textbf{v}_k(t-\delta t/2)+ \delta t/m_k \textbf{f}_k(t). \end{equation} The positions are determined from the discrete values of the momenta/velocities as \begin{equation} \textbf{r}_k(t+\delta t)= \textbf{r}_k(t)+ \delta t \textbf{v}_k(t+\delta t/2). \end{equation} Let all the spherically symmetrical objects have the same (reduced) number density $\rho= (\pi/6)^{-1}$, by which the diameter $\sigma_i$ of the spherical object $i$ is \begin{equation} \sigma_i= m_i^{1/3} \end{equation} and the collision diameter \begin{equation} \sigma_{ij}= \frac{\sigma_{i}+\sigma_{j}}{2}. \end{equation} If the distance $r_{ij}(t)$ at time $t$ between two objects is less than $\sigma_{ij}$, the two objects merge into one spherically symmetrical object with mass \begin{equation} m_{\alpha}= m_i + m_j, \end{equation} and diameter \begin{equation} \sigma_{\alpha}= (m_{\alpha})^{1/3}, \end{equation} and with the new object $\alpha$ at the position \begin{equation} \textbf{r}_{\alpha}(t)= \frac{m_i}{m_{\alpha}}\textbf{r}_i(t)+\frac{m_j}{m_{\alpha}}\textbf{r}_j(t), \end{equation} at the center of mass of the two objects before the fusion. (The object $\alpha$ at the center of mass of the two merged objects $i$ and $j$ might occasionally be near another object $k$, by which more objects merge, following the same laws.) The momenta of the objects in the discrete dynamics just before the fusion are $\textbf{p}^N(t-\delta t/2)$, and the total momentum of the system is conserved at the fusion if \begin{equation} \textbf{v}_{\alpha}(t-\delta t/2)= \frac{m_i}{m_{\alpha}}\textbf{v}_i(t-\delta t/2)+ \frac{m_j}{m_{\alpha}}\textbf{v}_j(t-\delta t/2), \end{equation} which determines the velocity $\textbf{v}_{\alpha}(t-\delta t/2)$ of the merged object. The algorithm for planetary systems consists of Eqs. (A3) and (A4) for time steps without merging of objects, and the fusion of objects is given by Eqs. (A6)-(A10). Newton's discrete algorithm (A3), which is used in almost all MD simulations, is usually called the Verlet or leap-frog algorithm, and it has the same invariances as his exact analytic dynamics \cite{Toxvaerd2022,Toxvaerd1994,Toxvaerd2012}. The invariances are maintained by the extension to planetary systems, Eqs. (A6)-(A10) \cite{Toxvaerd2022}. The gravitational strengths in the article are in units of $\beta_i^*=G=1$, and the masses $m_i(0)=1$ and diameters $\sigma_i(0)=1$ of the planets are given at the start time $t=0$.
For units and set-up of the systems see also \cite{Toxvaerd2022}. The planetary systems in the articles are obtained for a thousand objects, which at $t=0$ are separated with a mean distance $<r_{ij}> \approx 1000$ and with Maxwell-Boltzmann distributed velocities with mean velocity $<v_i> \approx 1$. The systems are followed for at least $10^9$ MD time steps, i.e. $t=2.5\times 10^6$ time-units, which corresponds to $\approx 10^3$ to $10^4$ orbits for a planet.
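The discrete dynamics with fusion can be summarized by the following minimal sketch (not the authors' code), assuming Python with NumPy and a simple $O(N^2)$ force evaluation; the routine performs one leap-frog step, Eqs. (A3) and (A4), and merges at most one pair per step according to Eqs. (A6)-(A10).
\begin{verbatim}
# Minimal sketch of one leap-frog step, Eqs. (A3)-(A4), with the
# fusion rules of Eqs. (A6)-(A10); O(N^2), not the authors' code.
import numpy as np

def forces(r, m, beta=1.0, n=-2):
    """Pairwise attractions -beta*m_i*m_j*r_ij^n along the unit vector."""
    d = r[:, None, :] - r[None, :, :]          # r_i - r_j
    dist = np.linalg.norm(d, axis=2)
    np.fill_diagonal(dist, np.inf)             # no self-force
    w = beta * m[:, None] * m[None, :] * dist ** (n - 1)
    return -(w[:, :, None] * d).sum(axis=1)

def step(r, v, m, dt):
    v = v + dt * forces(r, m) / m[:, None]     # Eq. (A3)
    r = r + dt * v                             # Eq. (A4)
    sig = m ** (1.0 / 3.0)                     # Eq. (A5)
    dist = np.linalg.norm(r[:, None] - r[None, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    if dist[i, j] < 0.5 * (sig[i] + sig[j]):   # fusion, Eqs. (A6)-(A10)
        M = m[i] + m[j]
        r[i] = (m[i] * r[i] + m[j] * r[j]) / M # center of mass, Eq. (A9)
        v[i] = (m[i] * v[i] + m[j] * v[j]) / M # momentum conservation, (A10)
        m[i] = M
        r, v, m = np.delete(r, j, 0), np.delete(v, j, 0), np.delete(m, j)
    return r, v, m
\end{verbatim}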
\section{introduction} There are two main purposes for studying numerical relativity: One is to investigate the mathematical issues of the geometry of the Einstein manifold. These issues include cosmic censorship, the hoop conjecture, the Penrose inequality, and critical phenomena \cite{berger02}. The other, more practical one, is to study the dynamics of astrophysically compact objects. To meet the needs of existing (e.g., LIGO \cite{LIGO}, VIRGO \cite{VIRGO}, GEO600 \cite{GEO}, and TAMA \cite{TAMA}) and planned (e.g. LISA \cite{LISA}) gravitational wave detectors, the theoretical prediction of the gravitational waveform for realistic sources has become urgent. In both aspects of numerical relativity, the most important targets for study are black holes and neutron stars, especially those in binary systems. After decades of exploration by many researchers, recently there have been exciting breakthroughs in the simulation of the evolution of binary black holes (BBHs) \cite{bbhsuccess,moving_punctur1,moving_punctur2}. Soon after these breakthroughs, the moving puncture method based on the BSSN formalism \cite{modified_3+1} was widely used by numerical relativity groups to deal with black hole systems \cite{vaishnav07,SpeU07,brugmann08} and neutron star systems \cite{binary_netron_star}. Much interesting physics related to BBHs has been explored, including the waveform of the gravitational radiation \cite{waveform,baker07c}, the spin-orbit coupling effect \cite{spin_orbit_coupling}, and the recoil velocity \cite{recoil,campanelli07c}. It has been emphasized in almost all of these studies that the treatment of the singularity problem, i.e., the moving puncture method, and the gauge conditions are both key factors in this series of successes. Not needing any inner boundary condition makes the moving puncture method much simpler to handle compared with the excision method \cite{mpbhana}. This simplicity has made the moving puncture method popular in the numerical relativity community. The Bona-Masso type slicing gauge conditions \cite{bona95} for the lapse function and many driver gauge conditions (e.g., the $\Gamma$-driver) for the shift vector \cite{balakrishna96,meter06} have been shown to be very important in making the moving puncture method work. However, in \cite{meter06} it was shown that, although the details of the gauge conditions used in the punctured BBH evolutions are different, only certain gauge choices allow one to evolve a single moving puncture black hole. It is desirable to better understand the effect of the gauge choices on black hole evolutions. In order to study numerical relativity and also investigate the specific aforementioned topics, we develop, from scratch, a code based on the moving puncture method. We adopt a fourth-order finite differencing scheme for the spatial derivatives and the Crank-Nicholson scheme for the time integrations. We use the GrACE package \cite{GrACE} to implement both the mesh refinement and the parallelization in our code. In this paper we present our first results on the evolution of a single static black hole, of a single moving black hole, of the head-on collision of a BBH, and of the head-on collision perturbed by a scalar field. Our results confirm many results obtained in previous works by other groups. On the other hand, we find a new gauge condition, which has not been tried by other researchers, that can also give stable and accurate black hole evolution calculations.
We also observe the effect of a massless scalar field in delaying the head-on collision, depending on the initial configuration of the scalar field. All of these results enhance our confidence in this code, and thus we will apply the code to more realistic astrophysical calculations in the near future. The remainder of the paper is organized as follows: In Section \ref{secii}, we first summarize the BSSN formulation, the conventional modifications and adjustments, and the gauge conditions. Then we describe the numerical methods used in this code in section \ref{seciii}, including the details of the FMR/AMR algorithm, the finite-differencing stencils, and the Kreiss-Oliger dissipation. In Section \ref{seciv}, the initial data is outlined. In Section \ref{secv}, we present our numerical results on the evolutions of a single static black hole, of a single moving black hole with and without spin, and of the head-on collision of a BBH, with and without a massless scalar field. We summarize and discuss the implications of our findings in Sec. \ref{secvi}. \section{formulation} \label{secii} \subsection{Review of the Basic Equations} The code is based on the BSSN formalism \cite{modified_3+1}, which is a conformal-traceless ``3+1'' formulation of the Einstein equations. In this formalism, the spacetime is decomposed into three-dimensional spacelike slices, described by a three-metric $\gamma_{ij}$; its embedding in the four-dimensional spacetime is specified by the extrinsic curvature $K_{ij}$ and the variables, the lapse $\alpha$ and shift vector $\beta^i$, that specify a coordinate system. Our conventions are that Latin indices run over 1, 2, 3, whereas Greek indices run over 0, 1, 2, 3. Throughout the paper we adopt geometrical units with $G=c=1$. In this paper we follow the notations of \cite{baumgarte03}. The metric $\gamma_{ij}$ is conformally transformed via \begin{equation} \phi\equiv\frac{1}{12} \ln \gamma, \,\,\, \tilde{\gamma}_{ij}\equiv e^{-4\phi} \gamma_{ij}, \end{equation} where $\gamma$ denotes the determinant of the metric $\gamma_{ij}$. The conformal exponent $\phi$ is evolved as an independent variable, whereas $\tilde{\gamma}_{ij}$ is subject to the constraint that its determinant is unimodular, i.e., $\tilde{\gamma}=1$. The extrinsic curvature is subject to the same conformal transformation, and its trace $K$ is evolved as an independent variable. That is, in place of $K_{ij}$ we evolve: \begin{equation} K\equiv\gamma^{ij}K_{ij}, \,\,\, \tilde{A}_{ij}\equiv e^{-4\phi} K_{ij}-\frac{1}{3}\tilde\gamma_{ij} K. \label{defineA} \end{equation} Similar to the conformal metric, $\tilde{A}_{ij}$ is subject to the constraint that it is traceless, i.e., tr$\tilde{A}_{ij}=0$. New evolution variables, i.e., the conformal connections, \begin{equation} \tilde{\Gamma}^i\equiv-\tilde{\gamma}^{ij}{}_{,j}, \end{equation} are introduced, defined in terms of the contraction of the spatial derivative of the inverse conformal three-metric ${\tilde\gamma}^{ij}$.
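For concreteness, the following is a minimal sketch (not this paper's code, assuming Python with NumPy) of forming these pointwise variables from the ADM pair $(\gamma_{ij}, K_{ij})$; the conformal connections $\tilde\Gamma^i$ require spatial derivatives and are omitted here.
\begin{verbatim}
# Minimal sketch of the pointwise BSSN variables from the ADM pair
# (gamma_ij, K_ij), given as 3x3 arrays at one grid point.
import numpy as np

def bssn_variables(gamma, K_ij):
    g = np.linalg.det(gamma)
    phi = np.log(g) / 12.0
    gamma_t = np.exp(-4.0 * phi) * gamma       # unit determinant
    K = np.trace(np.linalg.inv(gamma) @ K_ij)  # K = gamma^{ij} K_{ij}
    A_t = np.exp(-4.0 * phi) * K_ij - gamma_t * K / 3.0
    return phi, gamma_t, K, A_t

# Flat-slice check: gives phi = 0, gamma_t = I, K = 0, A_t = 0.
phi, gamma_t, K, A_t = bssn_variables(np.eye(3), np.zeros((3, 3)))
\end{verbatim}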
With these dynamical variables the evolution equations read \begin{eqnarray} \partial_t \phi &=& \beta^i \phi_{,i} - \frac{1}{6} \alpha K + \frac{1}{6} \beta^i{}_{,i},\label{eq4}\\ \partial_t\tilde\gamma_{ij} &=&\beta^k\tilde{\gamma}_{ij,k} - 2 \alpha \tilde A_{ij}+2\tilde\gamma_{k(i} \beta^k{}_{,j)} - \frac{2}{3} \tilde \gamma_{ij}\beta^k{}_{,k},\label{dtgij}\\ \partial_t K&=&\beta^i K_{,i}-D^2\alpha + \alpha[\tilde A_{ij} \tilde A^{ij}+\frac{1}{3} K^2+4\pi(\rho + s)], \nonumber\\ &&\label{dtK}\\ \partial_t\tilde A_{ij}& = &\beta^k \tilde A_{ij,k} + e^{- 4 \phi} [\alpha(R_{ij}-8\pi s_{ij} )-D_iD_j\alpha]^{TF} \nonumber\\ &+&\alpha (K \tilde A_{ij} - 2\tilde A_{ik}\tilde A^k_j) +2\tilde A_{k(i} \beta^k{}_{,j)}-\frac{2}{3}\tilde A_{ij}\beta^k{}_{,k}, \nonumber\\ &&\label{dtAij}\\ \partial_t \tilde{\Gamma}^i &=& \beta^j \tilde \Gamma^i{}_{,j} - 2 \tilde A^{ij} \alpha_{,j} \nonumber\\ &+& 2 \alpha \Big( \tilde \Gamma^i_{jk} \tilde A^{kj} - \frac{2}{3} \tilde \gamma^{ij} K_{,j} - 8 \pi \tilde \gamma^{ij} s_j + 6 \tilde A^{ij} \phi_{,j} \Big) \nonumber\\ & -& \tilde \Gamma^j \beta^i{}_{,j} + \frac{2}{3} \tilde \Gamma^i \beta^j{}_{,j} + \frac{1}{3} \tilde \gamma^{ki} \beta^j_{,jk} + \tilde \gamma^{kj} \beta^i_{,kj}.\label{dtGamma} \end{eqnarray} Here $\rho$, $s$, $s_i$, $s_{ij}$ are source terms which come from matter. For a vacuum spacetime $\rho=s=s_i=s_{ij}=0$. In the above evolution equations $D_i$ is the covariant derivative associated with the three-metric $\gamma_{ij}$, and ``TF" indicates the trace-free part of tensor objects. The Ricci tensor $R_{ij}$ is given as \begin{eqnarray} R_{ij}&=&\tilde{R}_{ij}+R_{ij}^\phi,\\ \tilde{R}_{ij}&=&-\frac{1}{2}\tilde\gamma^{mn}\tilde\gamma_{ij,mn} +\tilde\gamma_{k(i}\tilde\Gamma^k{}_{,j)} +\tilde\Gamma^k\tilde\Gamma_{(ij)k}\nonumber\\ &&+\tilde\gamma^{mn}(2\tilde\Gamma^k{}_{m(i}\tilde\Gamma_{j)kn} +\tilde\Gamma^k{}_{in}\tilde\Gamma_{kmj}),\label{confricci}\\ R^\phi_{ij}&=&-2\tilde D_i\tilde D_j\phi-2\tilde\gamma_{ij} \tilde D^k\tilde D_k\phi\nonumber\\ &&+4\tilde D_i\phi\tilde D_j\phi-4\tilde\gamma_{ij} \tilde D^k\phi\tilde D_k\phi .\label{ricciphi} \end{eqnarray} The Einstein equations also lead to a set of physical constraint equations that are satisfied within each spacelike slice: \begin{eqnarray} e^{-4\phi}(\tilde{R}-8\tilde{D}^i\tilde{D}_i\phi- 8\tilde D^k\phi\tilde D_k\phi)\qquad\quad&&\nonumber\\ +\frac{2}{3}K^2-\tilde{A}_{ij}\tilde{A}^{ij}-16\pi\rho&=&0,\\ \tilde{D}^i\tilde{A}_{ij}+6\tilde{A}_{ij}\tilde{D}^i\phi- \frac{2}{3}\tilde{D}_jK-8\pi s_j&=&0, \end{eqnarray} which are usually referred to as the Hamiltonian and the momentum constraints. Here $\tilde R=\tilde R^i{}_i$ is the conformal Ricci scalar on a three-dimensional time slice, and $\tilde{D}_i$ is the covariant derivative associated with the conformal three-metric ${\tilde\gamma}_{ij}$. 
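As a diagnostic, the violation of the Hamiltonian constraint can be monitored on each slice. The following is a minimal sketch (not this paper's code, assuming Python with NumPy and, for brevity, a conformally flat vacuum slice $\tilde\gamma_{ij}=\delta_{ij}$, so that $\tilde R = 0$ and the conformal covariant derivatives reduce to partial derivatives):
\begin{verbatim}
# Minimal sketch: Hamiltonian constraint residual on a conformally flat
# vacuum slice (tilde gamma_ij = delta_ij, so tilde R = 0).
import numpy as np

def hamiltonian_residual(phi, K, A_t, dx):
    grads = np.gradient(phi, dx)                    # d_i phi
    lap = sum(np.gradient(g, dx, axis=a) for a, g in enumerate(grads))
    grad2 = sum(g * g for g in grads)               # D^k phi D_k phi
    A2 = np.einsum('ij...,ij...->...', A_t, A_t)    # A_ij A^ij (flat)
    return (np.exp(-4.0 * phi) * (-8.0 * lap - 8.0 * grad2)
            + (2.0 / 3.0) * K**2 - A2)

# Flat-space check: zero fields give a vanishing residual.
nx = 16
res = hamiltonian_residual(np.zeros((nx, nx, nx)), 0.0,
                           np.zeros((3, 3, nx, nx, nx)), 0.1)
\end{verbatim}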
Besides being used to obtain the evolution equations (\ref{dtK}) and (\ref{dtGamma}) in the BSSN formulation, the Hamiltonian and the momentum constraints also provide volume-integral expressions for the ADM mass and the angular momentum, respectively \cite{YHBS02}: \begin{eqnarray} M&=&\frac{1}{16\pi}\oint_{\partial\Omega}({\tilde\Gamma}^i -8{\tilde\gamma}^{ij}\partial_je^\phi){\rm d}{\tilde\Sigma}_i\\ &=&\frac{1}{16\pi}\int_\Omega{\rm d}^3x[e^{5\phi} (16\pi\rho + \tilde{A}_{ij}\tilde{A}^{ij} -\frac{2}{3}K^2)\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\quad +\tilde{\Gamma}^k{}_{,k} - e^\phi\tilde{R}],\label{volmass}\\ J_i &=&\frac{1}{8\pi} \epsilon_{ij}{}^k \oint_{\partial\Omega} e^{6\phi}x^j \tilde{A}^\ell{}_k{\rm d}\tilde{\Sigma}_\ell\\ &=&\frac{1}{8\pi}\epsilon_{ij}{}^k\int_\Omega{\rm d}^3x [e^{6\phi}(\tilde{A}^j{}_k+\frac{2}{3}x^jK_{,k}\nonumber\\ &&\qquad\qquad\quad- \frac{1}{2} x^j\tilde{A}_{\ell m} \tilde{\gamma}^{\ell m}{}_{,k}+8\pi x^js_k)], \end{eqnarray} where ${\rm d}\tilde{\Sigma}_i=(1/2)\epsilon_{ijk}{\rm d}x^j{\rm d}x^k$. These two global quantities are useful diagnostics for validating the calculations. The volume integral (\ref{volmass}) is slightly different from the one in \cite{YHBS02} due to the further application of the unimodular determinant of the conformal metric (\ref{detg1}). Refer to Appendix \ref{unimod} for the details. \subsection{Equation Adjustments}\label{seciib} The specific choice of evolution variables introduces five additional constraints, \begin{eqnarray} \tilde{\gamma}-1&=&0,\label{detg1}\\ \text{tr}\tilde{A}_{ij}&=&0,\label{trA0}\\ \tilde{\Gamma}^i+\tilde{\gamma}^{ij}{}_{,j}&=&0.\label{Gofg} \end{eqnarray} Our code actively enforces the algebraic constraints (\ref{detg1}) and (\ref{trA0}) by replacing $\tilde\gamma_{ij}$ and $\tilde A_{ij}$ with the following: \begin{eqnarray} \tilde\gamma_{ij}&\rightarrow&\tilde\gamma^{-1/3}\tilde\gamma_{ij}, \label{gamave} \\ \tilde A_{ij}&\rightarrow&\tilde A_{ij}-\frac{1}{3}\tilde\gamma_{ij}{\rm tr} \tilde A_{ij}.\label{Aave} \end{eqnarray} To enforce Eq.~(\ref{Gofg}) all the undifferentiated $\tilde\Gamma^i$ in the evolution equations are substituted with $-\tilde\gamma^{ij}{}_{,j}$. As to the variable choice for the conformal factor, the alternative $\chi$, first proposed in \cite{moving_punctur1}, has been widely adopted. In the $\chi$ method the conformal exponent $\phi$ [which is $O(\ln r)$ near the puncture] is replaced with a new variable $\chi\equiv e^{-4\phi}$ [which is $O(r^4)$ near the puncture]. $\chi$ grows linearly near the puncture during the time evolution; such linear behavior leads to a more accurate evolution near the puncture. In the $\chi$ method, equation (\ref{eq4}) is replaced by \begin{equation} \partial_t\chi=\frac{2}{3}\chi(\alpha K-\beta^{i}{}_{,i})+\beta^{i}\chi_{,i}. \end{equation} Note that $\phi_{,i}=-\chi_{,i}/4\chi$ and $\phi_{,ij} =\chi_{,i}\chi_{,j}/4\chi^2-\chi_{,ij}/4\chi$ are applied to the evolution equations (\ref{dtK}), (\ref{dtAij}) [via Eq.~(\ref{ricciphi})], and (\ref{dtGamma}). In these substitutions, the divisions by $\chi$ need to be handled carefully in the numerical implementation to avoid division by zero or unphysically negative values of $\chi$. In \cite{confexp2} a small $\epsilon$ is substituted for $\chi$ in divisions whenever $\chi$ is less than $\epsilon$.
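The algebraic constraint enforcement (\ref{gamave})--(\ref{Aave}) and the $\epsilon$-floor on $\chi$ amount to a few lines in practice. A hedged pointwise sketch of ours (the floor value is an illustrative choice, not necessarily the one used in \cite{confexp2}):
\begin{verbatim}
import numpy as np

def enforce_algebraic_constraints(gamma_t, A_t):
    """Rescale gamma_t so that det(gamma_t) = 1 and remove the trace of
    A_t, per Eqs. (gamave)-(Aave); inputs are 3x3 arrays at one point."""
    gamma_t = gamma_t / np.linalg.det(gamma_t)**(1.0 / 3.0)
    trA = np.tensordot(np.linalg.inv(gamma_t), A_t)  # gamma_t^{ij} A_ij
    A_t = A_t - gamma_t * trA / 3.0
    return gamma_t, A_t

def safe_chi(chi, eps=1.0e-6):
    """Floor chi before divisions, in the spirit of [confexp2]."""
    return np.maximum(chi, eps)
\end{verbatim}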
In \cite{confexp1} $W\equiv e^{-2\phi}$ is chosen to be the conformal factor variable instead of $\chi$, to avoid the effect of unphysical negative values of $\chi$ on the evolution of the other variables \footnote{It is also pointed out in \cite{confexp1} that $W$ can make the numerical computation near the black hole more accurate than both $\chi$ and $\phi$. It is shown in \cite{shibata08} that $W$ is more convenient than $\phi$ to compute the Ricci tensor since $R_{ij}=\tilde{R}_{ij}+R^W_{ij}$ with $R^W_{ij}=\tilde{D}_i\tilde{D}_j W/W+\tilde{\gamma}_{ij}(\tilde{D}_k\tilde{D}^k W/W-2\tilde{D}_k W \tilde{D}^k W/W^2)$, which is formally simpler than Eq.~(\ref{ricciphi}).}. However, we did not encounter any such difficulty in the work for this paper; therefore, it was not necessary to apply the aforementioned modifications, although we anticipate encountering this difficulty in some complicated scenarios in future work. \subsection{Gauge Conditions} As mentioned in the introduction, the gauge conditions are important for the numerical simulations of dynamical spacetime, and this is especially true for the moving puncture method. The Bona-Masso type slicing conditions \cite{bona95} for the lapse function and the driver gauge conditions (e.g., the $\Gamma$-driver) for the shift vector \cite{balakrishna96,meter06} are currently the main types of gauge conditions used in punctured black hole calculations. In this work, we will only focus on these types of gauge conditions, which can be written as \begin{eqnarray} \partial_t\alpha&=&-2\alpha K+\lambda_1\beta^i\alpha_{,i},\label{lapse_eq}\\ \partial_t\beta^i&=&\frac{3}{4}f(\alpha)B^i+\lambda_2\beta^j\beta^i{}_{,j}, \label{beta_eq}\\ \partial_t B^i&=&\partial_t\tilde{\Gamma}^i-\eta B^i+\lambda_3\beta^jB^i{}_{,j} -\lambda_4\beta^j \tilde{\Gamma}^i{}_{,j},\label{b_eq} \end{eqnarray} where $\eta$ and the four $\lambda$'s are the parameters to be chosen, and $f(\alpha)$ is a function of $\alpha$. Here each $\lambda$ can only be set to 0 or 1. We set $f(\alpha)=1$ in all the cases except in Sec.~\ref{secva}, where $f(\alpha)=\alpha$. The gauge conditions used for moving black holes in the literature include: (1) $\lambda_1=1,\lambda_2=\lambda_3=\lambda_4=0$ (e.g., see \cite{campanelli07c}); and (2) $\lambda_1=\lambda_2=\lambda_3=\lambda_4=1$ (e.g., see \cite{baker07c,brugmann08}), with the proper $\eta$'s. In \cite{meter06}, the authors investigated several cases of the above gauge equations. In \cite{gundlach06a}, the authors discussed the above case (2) analytically. In this work, we will explore this problem more thoroughly (see the following sections for details). In particular, we are concerned with the effect of the advection terms on the stability and accuracy of the evolution, and we try to find the viable gauges for moving and/or spinning black holes. Throughout this paper we will fix $\eta=2$. \section{numerical method}\label{seciii} In this section, we briefly describe the key numerical techniques used in this work. For the discretization, our code uses the cell-centered method, which takes the data to be defined at the centers of the spatial grid cells. We also use a centered finite-differencing method with fourth-order accuracy to approximate the spatial derivatives, which closely follows \cite{zlochower05}. For the time integration, we use the iterative Crank-Nicholson method, which gives second-order accuracy \cite{teukolsky00}.
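The main numerical ingredients of this section can be illustrated with a one-dimensional model advection equation $\partial_t F = \beta\,\partial_x F$. The following Python sketch is ours, not an excerpt from the code; it combines the two-iteration iterative Crank-Nicholson update just described with the fourth-order lop-sided advection stencils and the Kreiss-Oliger dissipation given below. The grid size, dissipation coefficient, and periodic boundaries are illustrative assumptions:
\begin{verbatim}
import numpy as np

def lopsided_dx(F, beta, h):
    """Fourth-order lop-sided (upwind) derivative for advection terms,
    matching the stencils below; periodic boundaries via np.roll."""
    dm = (-np.roll(F, 3) + 6*np.roll(F, 2) - 18*np.roll(F, 1)
          + 10*F + 3*np.roll(F, -1)) / (12*h)            # beta < 0
    dp = (np.roll(F, -3) - 6*np.roll(F, -2) + 18*np.roll(F, -1)
          - 10*F - 3*np.roll(F, 1)) / (12*h)             # beta > 0
    return np.where(beta > 0, dp, dm)

def ko_dissipation(F, h, eps):
    """Kreiss-Oliger term for a fourth-order scheme (n = 4):
    eps * h^5 * D+^3 D-^3 F = (eps/h) * (undivided 6th difference)."""
    d6 = (np.roll(F, 3) - 6*np.roll(F, 2) + 15*np.roll(F, 1) - 20*F
          + 15*np.roll(F, -1) - 6*np.roll(F, -2) + np.roll(F, -3))
    return eps * d6 / h

def icn_step(F, dt, rhs):
    """Two-iteration iterative Crank-Nicholson step (second order):
    each iteration averages the new guess with the old time level."""
    Fh = F + 0.5*dt*rhs(F)
    Fh = F + 0.5*dt*rhs(Fh)
    return F + dt*rhs(Fh)

N, h, beta, eps = 200, 0.1, 1.0, 0.01
x = h*np.arange(N)
F = np.exp(-(x - 10.0)**2)                 # initial Gaussian pulse
rhs = lambda G: beta*lopsided_dx(G, beta, h) + ko_dissipation(G, h, eps)
dt = 0.25*h                                # Courant factor 1/4
for _ in range(400):
    F = icn_step(F, dt, rhs)
\end{verbatim}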
We take d$t$=(Courant factor)$\times$d$x$, and the Courant factor is set to be 1/4. We apply the standard centered finite differencing approximation to all spatial derivatives except the advection terms (i.e., the terms of the form $\beta^{j}\partial_j F$). For these advection terms we use the following fourth-order lop-sided stencils: \begin{eqnarray} \partial_x F_{i,j,k}&=&\frac{1}{12{\rm d}x}(-F_{i-3,j,k}+6F_{i-2,j,k} -18F_{i-1,j,k}\nonumber \\ &&+10F_{i,j,k}+3F_{i+1,j,k})\quad\mbox{for}\quad\beta^x<0,\\ \partial_x F_{i,j,k}&=&\frac{1}{12{\rm d}x}(F_{i+3,j,k}-6F_{i+2,j,k} +18F_{i+1,j,k}\nonumber \\ &&-10F_{i,j,k}-3F_{i-1,j,k})\quad\mbox{for}\quad\beta^x>0, \end{eqnarray} along the $x$-direction. The stencils are similar along the $y$- and $z$-directions. We also install in the code a Kreiss-Oliger dissipation \cite{KO} of the form \begin{equation} \partial_tF\rightarrow{\rm RHS}+\epsilon(-1)^{n/2}\sum h_i^{n+1}D_{i+}^{n/2+1} D_{i-}^{n/2+1}F, \end{equation} where RHS represents the corresponding evolution equation for $F$, $h_i$ is the grid spacing in the $i$th direction, $D_{i+}$ and $D_{i-}$ are the forward and backward differencing operators in the $i$th-direction, $n$ is the order of the finite difference used to evaluate the RHS, and $\epsilon$ is the dissipation coefficient to be chosen in various cases. In order to increase the numerical resolution without greatly increasing the computational cost, the mesh refinement method is used in the numerical simulations. We use the GrACE package \cite{GrACE} to implement both the mesh refinement and the parallelization in our code. This package handles adaptive grid systems, providing both partitioning and load balancing for distributed adaptive mesh refinement applications. However, we only use fixed mesh refinement (FMR) in this work. The computational domain is represented by a hierarchy of nested Cartesian grids; we adopt the cell-centered scheme, and the grid hierarchy follows the Berger and Oliger algorithm \cite{BO}. The hierarchy consists of $L$ levels of refinement indexed by $\ell=0$, ... , $L-1$, for which $\ell=0$ is the coarsest level and $\ell=L-1$ is the finest one. A refinement level consists of one or more Cartesian grids with constant grid spacing $h_\ell$ on level $\ell$. All grid blocks have the same logical structure, and refined cells are bisected in each coordinate direction. A refinement factor of $2$ is used such that $h_\ell=h_0/2^\ell$. The grids are properly nested such that the coordinate extent of any grid at level $\ell$, $\ell>0$, is completely covered by the grids at level $\ell-1$. We do not refine the time step, so the time step for every level is set as d$t$ = (Courant factor) $\times$ $h_{\rm min}$, where $h_{\rm min}$ is the grid spacing of the finest level. Therefore, no interpolation of data between different time slices is needed. \begin{figure}[t] \begin{tabular}{rl} \includegraphics[width=0.233\textwidth]{guard1.eps}& \includegraphics[width=0.233\textwidth]{guard2.eps} \end{tabular} \caption{(closely follows Fig.1 of \cite{imbiriba04}) Two-dimensional diagrams for the guard cell filling. The thick vertical lines in both panels represent the refinement boundaries separating fine and coarse grid regions. The left panel shows the first step, in which one of the coarse grid cells (red circle with black rim) is filled using a quartic interpolation across 25 interior fine grid cells (green diamonds).
The right panel shows the second step, in which two fine grid guard cells (red diamonds with black rims) are filled using quartic interpolations across 25 coarse grid values (circles). These coarse grid values include two layers of guard cells (green circles), obtained from the coarse grid region to the right of the interface, and three layers of interior cells (green circles with black rims). The final step (not shown in the figure) is to use ``derivative matching'' to fill the guard cells for the coarse grid.} \label{guard} \end{figure} Another issue in the mesh refinement process is how to treat the refinement boundary so as to avoid possible numerical noise. Here we follow the guard cell scheme described in \cite{imbiriba04}. This method has three steps, shown in Fig.~\ref{guard}. In the first step, interior fine grid cells are used to fill the interior grid cells of the next lower level. This restriction operation is depicted for the case of two spatial dimensions in the left panel of Fig.~\ref{guard}. The restriction is basically a three-dimensional interpolation in this work, and is accurate to fourth order in the grid spacing to match the accuracy of the finite-differencing scheme. As in the left panel of Fig.~\ref{guard}, the values of the coarser grid cells (red circles with black rims) located within the finer grid cells (green diamonds) are filled with the finer values by quartic interpolations. The stencil over the finer cells must be chosen with care to ensure that only interior fine grid points, and no fine grid guard cells, are used in this first step. Secondly, the fine grid guard cells (red diamonds with black rims) are filled by prolongation from the grid of the next lower level. Before the prolongation, the coarser grid updates its own cells (green circles with black rims in the right panel of Fig.~\ref{guard}) from the finer grids in the first step. The stencil used in the prolongation operation is shown in the right panel of Fig.~\ref{guard}. The prolongation is a three-dimensional interpolation, and is also accurate to fourth order, like the restriction in the first step. In this case, the coarser grid stencil includes two layers of guard cells (green circles), as well as its updated interior grid points (green circles with black rims). In the last step, the coarser guard cells close to the interface (bold line) are filled by using derivative matching: the difference between the finer cell and the neighboring finer guard cell across the interface is matched to the difference between the coarser cell and the neighboring coarser cell across the interface \footnote{Sometimes we find that the result is less accurate if the last step is performed, especially in the higher-order finite-differencing scheme. We then simply omit the last step in the interface treatment for better accuracy.}. \section{Initial Data for punctured black holes}\label{seciv} For the initial data of punctured black holes, we consider the Bowen-York type initial data, in which maximal slicing and conformal flatness are adopted \cite{brandt97}. Let $\psi$ be the conformal factor, $\psi\equiv e^\phi$.
The conformal extrinsic curvature reads \begin{eqnarray} \tilde{A}_{ij}&=&\psi^{-6}\hat{K}_{ij}=\frac{3}{2} \sum_I\frac{\psi^{-6}}{r^{2}_I}[2P^I_{(i}n^I_{j)}\nonumber\\ &&\quad-(f_{ij}-n^I_{i}n^I_{j})P_I^{k}n^I_{k} +\frac{4}{r_I}n^I_{(i}\epsilon_{j)k\ell}S_I^{k}n_I^{\ell}],\label{YorkA} \end{eqnarray} where $f_{ij}$ is the flat three-metric; $P_I^i$ and $S_I^i$ are constant vectors, standing for the linear momentum and the spin angular momentum of the $I$-th black hole, respectively; $n_I^i$ is the radial unit vector with respect to $f_{ij}$, pointing from the position of the $I$-th black hole to the field point. In the puncture method described in \cite{brandt97}, $\psi=1+\sum_I\frac{m_I}{2r_I}+u$ with mass parameter $m_I$ for the $I$-th black hole, and $u$ is determined by \begin{equation} (\partial^2_x+\partial^2_y+\partial^2_z)u=-\frac{1}{8}\hat{K}^{ij}\hat{K}_{ij} (1+\sum_I\frac{m_I}{2r_I}+u)^{-7}.\label{by_eq} \end{equation} If all black holes are at rest and spinless, (\ref{YorkA}) implies $\hat{K}_{ij}=0$, so $u=0$. In the head-on collision case, we will implement this kind of initial data with $m_1=m_2=0.5$. For a single black hole, when the linear momentum and the spin are small, we can solve the above equation approximately as (see Appendix \ref{spinID} for more detail) \cite{small_p} \begin{eqnarray} u=\frac{{\vec P}^2}{m^2}[u_1+u_2(3\mu_P^2-1)]&+&6\frac{u_3}{m^4}{\vec S}^2 (1+\mu_S^2)\nonumber\\ &+&\frac{u_4}{m^3}{\vec P}\times{\vec S}\cdot{\vec{n}},\label{IDsoln} \end{eqnarray} with \begin{eqnarray} u_1&=&\frac{5\ell}{8}(1-2\ell+2\ell^2-\ell^3+\frac{1}{5}\ell^4),\nonumber\\ u_2&=&\frac{1}{40b^2}(15+117\ell-79\ell^2+43\ell^3\nonumber\\ &&\qquad\qquad-14\ell^4+2\ell^5+84\ln\ell/b),\\ u_3&=&\frac{\ell}{20}(1+\ell+\ell^2-4\ell^3+2\ell^4),\nonumber\\ u_4&=&\frac{\ell^2}{10}(10-25\ell+21\ell^2-6\ell^3),\nonumber \end{eqnarray} where $b=2r/m$, $\ell=1/(1+b)$, $\mu_P={\vec{P}}\cdot{\vec{n}}/P$ and $\mu_S={\vec{S}}\cdot{\vec{n}}/S$. For the approximate solution (\ref{IDsoln}) of a moving black hole with spin, the ADM mass, the linear momentum, and the angular momentum are \begin{eqnarray} M_{\rm ADM}&=&m+\frac{5}{8}\frac{{\vec P}^2}{m} +\frac{2}{5}\frac{{\vec S}^2}{m^3},\label{admmass}\\ {\vec P}_{\rm ADM}&=&{\vec P},\\ {\vec S}_{\rm ADM}&=&{\vec S}. \end{eqnarray} On the other hand, when $\vec{P}=0$ while $\vec{S}$ is very large, the conformal factor can be approximated as (see Appendix \ref{spinID} for more detail) \cite{lovelace08} \begin{eqnarray} \psi=\frac{(6S^2)^{1/8}}{\sqrt{r}}.\label{high_spin_solution} \end{eqnarray} In this paper we will fix $m=1$ for all single black hole simulations. \section{Numerical results}\label{secv} In this section we report the numerical results for: (1) a single moving black hole without and with spin: The motion and the spin of a single black hole are fundamental elements of BBH simulations. As our code aims at simulating BBH coalescence, evolving a single moving and spinning black hole becomes an essential test. In addition, since the gauge choice is critical for the moving puncture method, we would like to study whether there are any other gauge conditions which can also support the moving puncture technique, besides the known ones. Our main achievement in this part of the work is the discovery of one new gauge condition, besides the known ones, that can support moving puncture black hole simulations.
The results of the gauge condition tests are listed in Table \ref{shifttype}, where Gauge VII is the aforementioned new set of gauge conditions. The successes of the gauge choices used in other groups' work, e.g., \cite{moving_punctur1,moving_punctur2,brugmann08,vaishnav07,SpeU07,meter06}, are also reconfirmed in Table \ref{shifttype}. (2) the head-on collisions of BBHs: The head-on collision of a BBH system is the simplest dynamical spacetime in which a complete gravitational waveform from the merger of two black holes can be produced. Therefore, we use this scenario to examine the performance of the code. Besides, in order to go beyond the cases of a head-on collision in vacuum, the case of a head-on collision perturbed by a massless scalar field is also studied. With such cases we try to understand qualitatively the effect of the presence of neutral matter on the collision of a BBH, especially on its gravitational radiation. It is shown that the waveform can be affected significantly by the scalar field. \subsection{Static Black Hole}\label{secva} \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=0.47\textwidth]{fig1.eps} \end{tabular} \caption{The root mean square of the change in the trace of extrinsic curvature between consecutive time steps as a function of time in the static case with equatorial symmetry. The solid (red) line is the result with the conventional setting. The dashed line is the result with the modifications suggested in \cite{YHBS02}. This shows that both settings give stable and convergent results.} \label{fig0} \end{figure} Although the BSSN formulation with the ``1+log'' lapse condition and the $\Gamma$-driver shift condition has been shown to be well-posed and hyperbolic \cite{gundlach06a,BeHS04}, it is still useful to confirm the stability and convergence of the formulation and the applied modifications before we move forward to the moving/spinning black hole cases in the following subsections. Therefore, for the equation adjustments, we enforce the constraints (\ref{detg1})--(\ref{Gofg}) by using Eqs.~(\ref{gamave}) and (\ref{Aave}), and the substitution of the conformal connection ${\tilde\Gamma}^i$ with $-\tilde\gamma^{ij}{}_{,j}$, as described in Sec. \ref{seciib}. For the gauge condition, Eqs.~(\ref{lapse_eq})--(\ref{b_eq}) are applied with $f(\alpha)=\alpha$ and the parameter choice $\lambda_3=0$, $\lambda_1=\lambda_2=\lambda_4=1$, which is close to Gauge VII in Table \ref{table1}, i.e., the newly viable gauge condition (see Section \ref{secvb}). The grid width is $h=0.2$, and the outer boundaries are at $\pm16$, $\pm16$, and $16$ in the $x$-, $y$-, and $z$-directions, respectively, assuming equatorial symmetry. In this simple case we only consider a unigrid for the computational domain.
\begin{table}[htbp] \caption{The gauge choices tested in this work correspond to Eqs.~(\ref{lapse_eq}), (\ref{beta_eq}) and (\ref{b_eq}); ``$\vee$" indicates that the corresponding advection term is included, while ``$\times$" indicates that it is omitted.} \label{shifttype} \begin{ruledtabular} \begin{tabular}{cccccc} Gauge No.&$\beta^i\partial_i\alpha$&$\beta^j\partial_j\beta^i$& $\beta^j\partial_jB^i$&$\beta^j\partial_j\tilde{\Gamma}^i$&Tests\\ \hline I &$\times$&$\times$&$\times$&$\times$&FAIL\\ \hline II &$\vee$ &$\times$&$\times$&$\times$&PASS\\ \hline III &$\vee$ &$\vee$ &$\times$&$\times$&FAIL\\ \hline IV &$\vee$ &$\times$&$\vee$ &$\times$&FAIL\\ \hline V &$\vee$ &$\times$&$\times$&$\vee$ &FAIL\\ \hline VI &$\vee$ &$\vee$ &$\vee$ &$\times$&FAIL\\ \hline VII &$\vee$ &$\vee$ &$\times$&$\vee$ &PASS\\ \hline VIII&$\vee$ &$\times$&$\vee$ &$\vee$ &FAIL\\ \hline IX &$\vee$ &$\vee$ &$\vee$ &$\vee$ &PASS \label{table1} \end{tabular} \end{ruledtabular} \end{table} The result for such a conventional setting is shown in Fig.~\ref{fig0}. This figure shows a log plot for the root mean square (r.m.s.) of the changes in the trace of extrinsic curvature $K$ (the solid red line) between consecutive time steps. In the plot, the curve of the change in $K$ rises around $t=200$ during the settlement period. The change in $K$ decreases exponentially afterwards, with no sign of growth through the end of the run. This indicates that the conventional settings give a stable evolution for a single static black hole. The result is also consistent with the analytic understanding of the BSSN formulation and of the gauge choice. Meanwhile, some modifications \cite{YHBS02}, especially the enforcement of the constraints (\ref{detg1})--(\ref{Gofg}), have been shown to be at least as good as the conventional ones in numerical results \cite{SpeU07}. We are therefore interested in understanding the performance of the code with these modifications, and in the comparison between the two sets. Briefly, the modifications in \cite{YHBS02} are summarized as follows: instead of treating all components of ${\tilde\gamma}_{ij}$ equally, only five of the six components of ${\tilde\gamma}_{ij}$ are evolved dynamically, and the $zz$-component is computed using Eq.~(\ref{detg1}), \begin{equation} {\tilde\gamma}_{zz}=\frac{1+{\tilde\gamma}_{yy}{\tilde\gamma}_{xz}^2 -2{\tilde\gamma}_{xy}{\tilde\gamma}_{yz}{\tilde\gamma}_{xz} +{\tilde\gamma}_{xx}{\tilde\gamma}_{yz}^2}{{\tilde\gamma}_{xx} {\tilde\gamma}_{yy}-{\tilde\gamma}_{xy}^2}. \end{equation} Similarly, ${\tilde A}_{zz}$ is determined from the other five components of ${\tilde A}_{ij}$ using Eq.~(\ref{trA0}), \begin{equation} {\tilde A}_{zz}=-\frac{{\tilde A}_x{}^x+{\tilde A}_y{}^y +{\tilde A}_{xz}{\tilde\gamma}^{xz}+{\tilde A}_{yz}{\tilde\gamma}^{yz}} {{\tilde\gamma}^{zz}}. \end{equation} Instead of substituting $-\tilde\gamma^{ij}{}_{,j}$ for the undifferentiated conformal connection ${\tilde\Gamma}^i$ according to the constraint (\ref{Gofg}), the constraint is added to the evolution equation (\ref{dtGamma}) of ${\tilde\Gamma}^i$ via \begin{equation} \partial_t{\tilde\Gamma}^i=\mbox{rhs of }(\ref{dtGamma})-(\xi+2/3) ({\tilde\Gamma}^i+\tilde\gamma^{ij}{}_{,j})\beta^k{}_{,k}, \end{equation} where $\xi$ is usually chosen to be $2/3$. With otherwise the same settings as in the conventional case, the result with the modifications in \cite{YHBS02} is indicated by the dashed line in Fig.~\ref{fig0}.
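The reconstruction of ${\tilde\gamma}_{zz}$ from the unimodular-determinant constraint is easy to verify numerically; a small sketch of ours (not taken from the code):
\begin{verbatim}
import numpy as np

def gamma_t_zz(gxx, gxy, gxz, gyy, gyz):
    """zz-component from det(gamma_t) = 1, given the other five."""
    return (1.0 + gyy*gxz**2 - 2.0*gxy*gyz*gxz + gxx*gyz**2) \
           / (gxx*gyy - gxy**2)

# Quick check against a randomly generated unimodular metric.
g = np.random.rand(3, 3); g = g @ g.T + np.eye(3)   # positive definite
g /= np.linalg.det(g)**(1.0/3.0)                    # rescale so det = 1
assert np.isclose(gamma_t_zz(g[0,0], g[0,1], g[0,2],
                             g[1,1], g[1,2]), g[2,2])
\end{verbatim}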
After a relatively short settlement period, the change in $K$ decreases exponentially to the end of the run. A comparison of these two sets of modifications shows that they are equally good at converging the evolution to a numerically stable state, although the modifications in \cite{YHBS02} give a better settlement at the early stage of the evolution. This indicates that there is still room for modifying the BSSN formulation to achieve better stability and convergence. The major purpose of this work is to build a reliable code for BBH simulations; therefore, we will stick to the conventional modifications in the rest of this paper, although the modifications in \cite{YHBS02} might show some subtle advantage in stabilization over the conventional ones. \subsection{Moving Black Hole without Spin}\label{secvb} \begin{figure*}[thbp] \begin{tabular}{c} \includegraphics[width=0.93\textwidth]{fig2.eps} \end{tabular} \caption{The gauge tests for a moving black hole without spin (velocity $v\approx0.615$). The profiles of several dynamical variables at $t=30$ are shown, except for case VI, for which the time is $t=12$. The horizontal axis is the $z$-axis, the moving direction of the black hole. The vertical axis is the corresponding value for different variables: the solid (red) line is $\tilde{\Gamma}^z$; the dashed line is $\alpha-1$; the (magenta) dot-dashed line is $\phi$; the (blue) dot-dot-dashed line is $\beta^z$. The different panels correspond to the different cases; the gauge number is marked on the upper-right corner of each panel, and the cases are also listed in Table \ref{table1}. For Gauges III, V, and VIII, the results show some explicit ill behavior, while Gauges I and IV have tails of noise behind the black hole. The remaining three gauge choices, II, VII, and IX, give almost the same well-behaved results. The results of Gauges II and IX are consistent with other research groups' work. Gauge VII is found to work well with the moving puncture method in this work.} \label{fig1} \end{figure*} We now study the cases of a moving black hole without spin, which are similar to the ones in \cite{meter06}. Three levels of grids are used in this and the next two subsections. The outer boundary of the coarsest level is set at $\pm16$, and the boundaries of the finer levels are located at $\pm8$ and $\pm4$, respectively. The grid width on the finest level is $1/8$. The black hole is located at $(0,0,-3)$ initially with the linear momentum vector $\vec P=(0,0,1)$. The gauge choices tested in this work are listed in Table \ref{shifttype}. Gauges I, II, IV, V, VI and IX (Figs.~3, 5, 7, 8, 6, 10 in \cite{meter06}, respectively) have been tested in \cite{meter06}. Differing from the numerical initial data used in \cite{meter06}, the approximate analytic initial data described in Sec.~\ref{seciv} is used in these tests. Nevertheless, our results are consistent with the results in \cite{meter06}. We summarize our results obtained for the nine tested gauges in Fig.~\ref{fig1}. In each panel of Fig.~\ref{fig1}, the number on the upper-right corner indicates the case with the same gauge number in Table \ref{table1}, and the result of that case is plotted in the panel.
In each case, the black hole moves along the $z$-axis, and the conformal connection $z$-component ${\tilde\Gamma}^z$ (the red solid line), the lapse function $\alpha$ (the dashed line), the conformal exponent $\phi$ (the magenta dot-dashed line), and the shift vector $z$-component $\beta^z$ (the blue dot-dot-dashed line) are chosen as the monitors for the stability of each run. The profiles of these variables are recorded at time $t=30$ in each panel, except for Panel VI, where the recorded time is $t=12$. In Panel I, it can be seen that $\alpha$, $\phi$, and $\beta^z$ all behave well, but ${\tilde\Gamma}^z$ has a tail of ripples behind the black hole. The ripples imply a rising instability in the evolution. It is known that adding the $\alpha$ advection term in Eq.~(\ref{lapse_eq}), i.e., $\lambda_1=1$, suppresses this unstable mode. We verify this by comparing the profiles of the variables in Panels I and II. In Panel II, all variables behave well and there is no ripple of noise in the curve of ${\tilde\Gamma}^z$. This result is consistent with the one in \cite{lousto08}. We then set $\lambda_1=1$ in Eq.~(\ref{lapse_eq}) in the following cases, since it is necessary for the stability of a moving black hole. In Panel III, it is shown that adding the advection term of $\vec\beta$ in Eq.~(\ref{beta_eq}), i.e., $\lambda_1=\lambda_2=1$, results in a big bump behind the black hole. This instability is strong enough to spoil the behavior of $\alpha$ and $\phi$. Adding the advection term of $\vec B$ in Eq.~(\ref{b_eq}), i.e., $\lambda_1=\lambda_3=1$, brings in a small tail, as seen in Panel IV, although this instability does not appear to affect $\alpha$, $\phi$, and $\beta^z$, as they behave well in the plot. The subtraction of the advection term of $\tilde{\Gamma}^i$ in Eq.~(\ref{b_eq}), i.e., $\lambda_1=\lambda_4=1$, results in a ``distorted'' profile for $\tilde{\Gamma}^z$ in Panel V. (A gauge choice close to Gauge V has been used to simulate the inspiral of a BBH system with a small initial separation in \cite{moving_punctur2}.) From the above three cases, we see that solely adding the advection term of $\vec\beta$ or $\vec B$, or subtracting the advection term of $\tilde{\Gamma}^i$, generally introduces an instability. This understanding leads us to try combinations of these three cases. \begin{figure*}[ht] \includegraphics[totalheight=0.5\textheight,width=\textwidth]{fig3.eps} \caption{The gauge tests for a moving black hole with spin parallel (upper panel) and perpendicular (lower panel) to the moving direction (velocity $v\approx0.5$, specific angular momentum $a\approx0.5102$). The profiles of several dynamical variables at $t=30$ are shown as in Fig.~\ref{fig1}. From left to right, the panels correspond to the results with Gauges II, VII and IX listed in Table \ref{table1}, respectively. These three ``good'' gauges give almost the same results.} \label{fig2} \end{figure*} Panel VI shows that the combination of the $\vec\beta$ and $\vec B$ advection term additions, i.e., $\lambda_1=\lambda_2=\lambda_3=1$, results in a profile with high-frequency noise in $\tilde{\Gamma}^z$ around the black hole and an even stronger instability. The code crashes soon after time $t=12$, when the black hole has only moved slightly.
However, in Panel VII, it is shown that the combination of the $\vec\beta$ advection term addition and the $\tilde{\Gamma}^i$ advection term subtraction, i.e., $\lambda_1=\lambda_2=\lambda_4=1$, can suppress the unstable modes introduced by the sole usage of these two terms (see Gauges III and V). We can see that the curve profiles of all the variables in Panel VII look almost the same as those in Panel II. It is interesting to note that, according to our literature search, Gauge VII has never previously been used in any BBH simulations. Therefore, it deserves further study. In Panel VIII, we consider the combination of adding the $\vec B$ advection term and subtracting the $\tilde{\Gamma}^i$ advection term, i.e., $\lambda_1=\lambda_3=\lambda_4=1$. The performance in Panel VIII shows a set of curve profiles similar to those obtained with Gauge V. Finally, we consider in Panel IX the combination of all three advection terms in the shift equation, i.e., $\lambda_1=\lambda_2=\lambda_3=\lambda_4=1$. The combination of the BSSN equations with Gauge IX has been proven to be strongly hyperbolic in the sense of first-order in time, second-order in space systems \cite{gundlach06a}, and thus yields a well-posed initial-value problem. Panel IX shows that all the variables behave very well and smoothly. The curve profiles in Panels II, VII, and IX look almost the same. The linearized analysis of the BSSN formulation with the gauge conditions in \cite{meter06} shows that both Gauges II and VII have zero-speed modes. However, from the results described in this section, we cannot distinguish between Gauges II, VII and IX. Furthermore, Fig.~5 of \cite{meter06}, which corresponds to Gauge II, shows no zero-speed modes at all. We conjecture that the nonlinearity of the full theory could eliminate the zero-speed modes for Gauges II and VII in general. In fact, Gauge II \cite{campanelli07c,lousto08}, as well as Gauge IX \cite{brugmann08,vaishnav07,SpeU07}, has been successfully used in simulations of black hole evolution. Moreover, Gauge VII is as good as Gauges II and IX for black hole simulations, at least in the cases tested in this work. Therefore, we can also expect that Gauge VII is very likely to be viable in generic cases of BBH evolution. One might worry that the moving velocity is not large enough to excite the zero-speed modes for the newly found gauge condition. In fact, this is not the case, at least for the approximate initial data. Considering the ADM mass (\ref{admmass}), the moving velocity $\vec{v}=\vec{P}_{\rm ADM}/M_{\rm ADM}$ of a spinless black hole is maximal when $P=2\sqrt{10}/5$. The moving velocity ($v\approx0.615$) of the tested moving black hole with linear momentum $P=1$ almost equals this maximal velocity ($v\approx0.63$). We have also tested this maximal moving velocity; the result leads to the same conclusion as above. One might also worry that the zero-speed modes for Gauge VII mentioned in \cite{meter06} might simply not be excited in this test scenario. Therefore, we test it further, with both the case of a moving black hole with spin and the case of a high-spin black hole, in the following subsections. Our results will show that Gauge VII, as well as Gauges II and IX, passes these two tests.
\subsection{Moving Black Hole with Spin}\label{secvc} \begin{figure*}[ht] \includegraphics[totalheight=0.5\textheight,width=\textwidth]{fig4.eps} \caption{The gauge tests for a single rapidly rotating black hole (spin parameter $a\approx0.9$). The comparison of profiles of the three ``good'' gauges at $t=30$ is shown. From top to bottom, and from left to right, the panels show the conformal exponent $\phi$, the lapse $\alpha-1$, the conformal connection $\tilde{\Gamma}^z$ and the shift $\beta^z$, respectively. These three gauge choices give almost the same results.} \label{fig3} \end{figure*} In this subsection, we investigate the effect of spin on a moving black hole. Firstly, we set the spin direction of the black hole to be parallel to its moving direction, specifically, $\vec P=(0,0,1)$ and $\vec S=(0,0,1)$. Therefore, the only difference between the setting here and in the previous subsection is that, in this subsection, the black hole has a spin of amplitude $1$ along the moving direction. Here we only test the ``good'' gauge choices, i.e., Gauges II, VII and IX. The curve profiles at $t=30$ are presented in the upper panels of Fig.~\ref{fig2}. We find that, with any of these three gauges, the code can stably simulate a spinning black hole. The curves in the three upper panels look the same, showing no sign of instability. Compared with Fig.~\ref{fig1}, we find that the black holes in this case move more slowly ($v\approx 0.5$) than the spinless ones ($v\approx 0.615$) with the same gauge choices described in the previous subsection. This is consistent with the theoretical prediction: from Eq.~(\ref{admmass}) we can see that a moving black hole with nonvanishing spin has a larger ADM mass than a spinless one, due to the rotational energy. Since the moving velocity is $\vec{v}=\vec{P}_{\rm ADM}/M_{\rm ADM}$, a spinning black hole moves more slowly than a spinless one with the same linear momentum. Our conclusion from this test is that Gauges II, VII and IX all handle this scenario well, since these three gauges give almost the same result. Secondly, we set the spin direction of the black hole to be perpendicular to the moving direction, specifically, $\vec P=(0,0,1)$ and $\vec S=(1,0,0)$. Similarly, we only test Gauges II, VII and IX. The curve profiles at $t=30$ are presented in the lower panels of Fig.~\ref{fig2}. The results are similar to those obtained in the parallel-spin case. From Eq.~(\ref{admmass}), the moving velocity of the black hole in this case is the same as in the parallel-spin case, i.e., $v\approx 0.5$. We can read this from the results presented in Fig.~\ref{fig2}. In this case, the peak amplitude of the variable $\phi$ becomes smaller than in the parallel-spin case. However, this difference simply comes from the spin orientation; it does not introduce any instability into the results with the three ``good'' gauges II, VII and IX. These three ``good'' gauges give essentially the same stable behavior. Similar to the moving velocity being bounded, the magnitude of the specific angular momentum, $\vec a=\vec{S}/M^2_{\rm ADM}$, has a maximal value for the approximate initial data (\ref{IDsoln}). For $S=\sqrt{5/6}$, the black hole attains the maximal magnitude $a=3\sqrt{30}/32\approx0.5135$. The magnitude of $\vec a$ in both the parallel-spin and perpendicular-spin cases is $a\approx0.5102$. Therefore, the magnitude of $\vec a$ in these two cases is very close to the maximal magnitude of the specific angular momentum that the initial data can have.
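Both bounds quoted above, the maximal velocity $v\approx0.63$ and the maximal specific angular momentum $a\approx0.5135$, follow directly from the ADM mass formula (\ref{admmass}) with $m=1$; a quick numerical check (our own sketch, not part of the code):
\begin{verbatim}
import numpy as np

v = lambda P: P / (1.0 + 5.0*P**2/8.0)       # spinless: v = P_ADM/M_ADM
a = lambda S: S / (1.0 + 2.0*S**2/5.0)**2    # P = 0:    a = S/M_ADM^2

print(v(1.0))                  # 0.6154  (the tested run with P = 1)
print(v(2.0*np.sqrt(10)/5))    # 0.6325  (maximum, at P = 2*sqrt(10)/5)
print(a(1.0))                  # 0.5102  (the tested runs with S = 1)
print(a(np.sqrt(5.0/6.0)))     # 0.5135  (maximum = 3*sqrt(30)/32)
\end{verbatim}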
In the next subsection we would like to test these three gauge choices with an even higher spin magnitude of $\vec a$. \subsection{Rapidly Rotating Black Hole} We have tested Gauge VII in the spinless and the spinning moving black hole cases. We find that, with this gauge choice, the code can simulate these dynamical spacetimes well. Since experience tells us that a higher spin is more likely to trigger a quicker instability in numerical simulations, one might doubt whether the newly found gauge condition can handle a high-spin case well. In this subsection, we test the three ``good'' gauge choices with a rapidly rotating black hole. The initial data used is another approximate analytic solution for the Bowen-York initial data. We describe the details in Appendix \ref{spinID}. This initial data is similar to that used in Sec.~\ref{secvc}, except that the conformal factor $\psi$ is given by Eq.~(\ref{high_spin_solution}) rather than Eq.~(\ref{IDsoln}) \cite{dain08,lovelace08} and $\vec P=0$. Here we set the angular momentum vector to be $\vec S=(0,0,10000)$, which results in the specific angular momentum $a\approx0.9$ \cite{lovelace08}. In this case, we again consider only the runs with the three ``good'' gauge choices. The outer boundary is set at $r=128$, and six levels of grids for FMR are used in the runs. The curve profiles at $t=30$ are presented in Fig.~\ref{fig3}. These three ``good'' gauge choices can all give a stable and accurate simulation of this highly spinning single black hole. Looking at the plots in this figure, it is difficult to distinguish between the results obtained with these three gauges. The curves overlap each other well. The profiles of $\tilde{\Gamma}^z$ and $\beta^z$ are consistent with previous experience in that the two have almost the same shape. Due to limits on computational resources, the runs are stopped at $t=30$. One might doubt whether this runtime is long enough to excite an unstable mode. Since the spin of the black hole in this case is close to the maximal one ($a_{\rm max}\approx 0.9282$) that a punctured black hole can have in Bowen-York type initial data \cite{lovelace08}, we expect that some difference between these runs, caused by instability, would appear within this period of time. However, we find that the results with these three gauge choices are almost the same. This should indicate that Gauge VII is as good as the other two in this case, unless all three of these gauge choices cause the same instability, which is highly unlikely. Meanwhile, the results in Sec.~\ref{secva} can be regarded as complementary to the ones in this subsection (regarding the long-term stability of Gauge VII). Therefore we conclude that the new gauge choice, Gauge VII, survives this high-spin test. \subsection{Head-on Collision of Two Equal-Mass Black Holes} In the previous subsections, we have presented the numerical simulations for a single black hole with/without spin with the different gauge choices. In order to further test our code, as well as the three gauge choices discussed above, we present in this and the next subsections the numerical results on the head-on collisions of a BBH system, which is the simplest case for BBHs. We use time-symmetric initial data for the two black holes, i.e., the so-called Brill-Lindquist initial data \cite{brill64}.
Specifically, the initial data takes the form \begin{eqnarray} e^\phi&=&1+\frac{m_1}{2|\vec{r}-\vec{c}_1|} +\frac{m_2}{2|\vec{r}-\vec{c}_2|},\\ \tilde{\gamma}_{ij}&=&f_{ij},\qquad K=0,\qquad\tilde{A}_{ij}=0, \end{eqnarray} where $m_1$ and $m_2$ are the mass parameters of the two black holes, $\vec{c}_1$ and $\vec{c}_2$ are the positions of the black holes, and $f_{ij}$ stands for the flat three-metric. In this case, we set the mass parameters to be $m_1=m_2=0.5$ and the positions of the two black holes at $(0,0,\pm1.1515)$, respectively. This value has been used in \cite{cook94,SpeU07,alcubierre03} and corresponds to an initial separation of the black holes equal to that of an approximate ISCO configuration. In other words, our initial data corresponds to two identical black holes which have no spin and no linear momentum, attracting each other from rest at the ISCO. \begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig5.eps} \caption{The ($\ell$=2, $m$=0) mode of $r\Psi_4$ extracted from a head-on collision of the Brill-Lindquist initial data starting from the approximate ISCO separation $d=2.303$ at four different radii $r$=20, 30, 40 and 50. The curves have been shifted to account for the propagation time delay of the gravitational wave.} \label{waveR_chi_headon} \end{figure} In this and the next subsections, the computational domain is $\pm 64\times\pm 64\times 64$, and $64\times64\times32$ grid points are used on every level. To reduce the computational load, equatorial symmetry is assumed. Six levels of grids are used for the mesh refinement. The refinement boundaries are placed at 32, 16, 8, 4, and 2. As mentioned in Sec.~\ref{seciib}, the $\chi$-version of the evolution equations is also available in our code. In this and the next subsections, we test both the $\phi$-version and the $\chi$-version of the code in the head-on collision cases. The results obtained from these two versions are consistent in these scenarios \footnote{In this work, we mainly stick to the $\phi$-version of the code. However, in our experience, the $\chi$-version of the code sometimes gives better stability and convergence, although it also suffers from a possible problem during the evolution, due to $\chi$ turning negative near the punctures.}. \begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig6.eps} \caption{The differences between the ($\ell$=2,$m$=0) modes of the waveform $r\Psi_4$ for different gauge choices, extracted at $r=30$. The solid line is the difference between Gauges II and IX, the (red) dashed line is the difference between Gauges VII and IX, and the (olive) dot-dot-dashed line is the difference between Gauges II and VII. The largest difference is roughly $0.2\%$ of the amplitude of $r\Psi_4$.} \label{waveR_chi_gauge} \end{figure} Although conceptually simple, the head-on collision of a BBH system is still a highly dynamical spacetime process. During the collision, the system emits a complete gravitational waveform from the merger of the two black holes. To quantify the gravitational waveform, we use the Newman-Penrose scalar $\Psi_{4}$. The method of computing this quantity is described in Appendix \ref{appendix_psi4}. Due to the symmetry of this BBH system, the gravitational wave has only the ($\ell$=2,$m$=0) mode. First, we consider Gauge IX, which has been widely adopted in simulations of BBH systems. We extract the waveform at $r$=20, 30, 40, and 50.
Since the leading order of $\Psi_4$ is $1/r$ asymptotically, $r\Psi_4$ should be independent of the extraction position, except for the time delay due to the gravitational wave propagation. We take the velocity of the gravitational wave to be the speed of light and thus subtract the time delay accordingly. The waveforms are plotted in Fig.~\ref{waveR_chi_headon}. The result is quantitatively consistent \footnote{Note that our tetrad differs by a factor of two from the setting in \cite{fiske05}. This results in a factor-of-two difference in the magnitude of the waveform.} with the results reported in \cite{fiske05,SpeU07}. Initially there seem to be some small-amplitude oscillations before the larger oscillations for the monitors at larger radii $r$, but not for the one at smaller $r$. This noise mainly comes from the reduced accuracy of the evolution on the coarser grids. We next study the effect of the gauge conditions discussed in the previous subsections on the head-on collision. The differences between the waveforms for the various gauge choices are shown in Fig.~\ref{waveR_chi_gauge}. As expected, the codes with these three ``good'' gauge conditions can all evolve the head-on collision process stably. However, the gauge conditions indeed affect the waveform \cite{brown08}. The highest peak of the difference between Gauges II and IX, $\Delta(r\Psi_4)_{\rm II-IX}$, is about $4\times10^{-5}$. From the plot, we can see that the pattern and the amplitude of $\Delta(r\Psi_4)_{\rm VII-IX}$, the (red) dashed line, are close to those of $\Delta(r\Psi_4)_{\rm II-IX}$, the solid line. The amplitude of the wave is about $0.02$, so the relative difference is about $0.2\%$ of the amplitude of $r\Psi_4$. But it is interesting to note that the amplitude of $\Delta(r\Psi_4)_{\rm II-VII}$, the (olive) dot-dot-dashed line, is smaller, only about $0.05\%$. We note that these differences are smaller than the difference in the waveform resulting from different initial lapse profiles (for example $\alpha=1$, $1/\psi^2$, $1/\psi^4$, etc.), which is typically $1\%$ of the amplitude of $r\Psi_4$. \begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig7.eps} \caption{The ($\ell$=2,$m$=0) mode of the waveform $r\Psi_4$ for a massless scalar field perturbing the head-on collision of two identical black holes without spin or linear momentum. $A$ is the strength of the perturbation; $A=0.0$ means no perturbation. The extraction radius is $r=30$. The time delay effect of the perturbation is clear. The perturbation efficiently amplifies the waveform.} \label{waveR_chi_scalarP} \end{figure} \subsection{Head-on Collision Perturbed by a Massless Scalar Field} In the previous case, we studied the head-on collision of a BBH system without matter. However, matter is attracted into strongly gravitating systems, and thus it also plays an important role in binary compact objects. The evolution of a dynamical spacetime generally needs to include matter. On the other hand, most astrophysical systems, including BBH mergers, are usually surrounded by an accretion disk or some other kind of matter. Therefore, it is interesting to ask how this matter will affect the gravitational wave signal, which is expected to be detected in the near future. In order to check the consistency of the matter coupling in the code and to investigate the effect of matter on the gravitational waveform, we study the head-on collision ``perturbed'' by a scalar field in this subsection.
Here ``perturbed'' means that the amplitude of the scalar field is small, so that we do not need to solve the constraint equations to obtain exact initial data. Instead, we set the dynamical variables of the geometry to be identical to those in the previous subsection. Furthermore, the matter part of the dynamical variables is set independently. However, when we evolve this initial data, we make no approximation; that is, we solve the fully coupled Einstein-Klein-Gordon equations numerically. The evolution equation of a scalar field is described in Appendix \ref{appendix_scalar}. For each simulation, we provide the following initial data for $\Phi$ and $\partial_t\Phi$: the initial profile of $\Phi$ is a spherical scalar field at rest, centered between the two black holes, \begin{equation} \Phi(t=0)=Ae^{-r^2}, \qquad\partial_t\Phi(t=0)=0, \end{equation} where $A$ is the amplitude of the scalar field. We test the scalar field with different amplitudes to see its effect on the waveform. The amplitudes are set to be $A$=0.02, 0.04, 0.06, and 0.08. The results are plotted in Fig.~\ref{waveR_chi_scalarP}. It is clear that the waveform is delayed by the presence of the scalar field, and the larger the amplitude of the scalar field, the longer the delay. In the meantime, the amplitude of the waveform also becomes larger. This phenomenon can be understood as follows: The scalar field is initially located midway between the two black holes. Part of the scalar field escapes outwards as it evolves. This escaping part of the scalar field delays the motion of the two black holes toward each other. Meanwhile, part of the scalar field is absorbed by the black holes. Thus, the black holes become larger, and this results in a larger amplitude of the gravitational waveform. \section{summary}\label{secvi} In summary, we have constructed from scratch a new numerical code based on the BSSN formalism and the moving puncture technique. In the code, an FMR/AMR algorithm is implemented via the GrACE package, and a fourth-order spatial finite-differencing scheme together with an iterative Crank-Nicholson scheme for time integration is applied in solving the Einstein equations. Some adjustments of the BSSN formulation for the constraint equations from \cite{YHBS02} are also examined in this work. We have compared the alternative adjustments with the conventional ones for a static Schwarzschild black hole and found that with both of them the black hole can be evolved stably and accurately. We then investigated the viability of several gauge choices through the simulation of a single moving black hole with and without spin. In addition to obtaining results consistent with those of other researchers, we found a new gauge choice with which one can also simulate a moving punctured black hole well. We next tested our code with the head-on collisions of a BBH system in a vacuum and with the perturbation of a massless scalar field. The gravitational waveform obtained with this code from the collision in vacuum is quantitatively the same as that obtained in the work of other groups. The purpose of the head-on collision perturbed by a scalar field is to understand qualitatively the effect of matter on the evolution of binary black holes, as well as to test the code further. The result shows that, with a specific configuration, the existence of a scalar field can delay the merger of binary black holes, as expected.
The strength of the scalar field significantly affects the gravitational waveform. The main goal of this work is the construction of a new code for the study of numerical relativity. However, re-investigating conventional methods and exploring alternatives to them have also been emphasized during the development of this code. As one can see in Sec.~\ref{secva}, both adjustments for the constraint equations give stable and convergent results. However, Fig.~\ref{fig0} also gives the impression that the alternative adjustment performs better than the conventional one. This simply indicates that the question of the optimal choice of constraint additions in the Einstein equations remains open. For the gauge conditions, our study shows that a new gauge choice, i.e., Gauge VII, is able to pass all of our tests. Therefore, this gauge could be a possible choice for BBH simulations, although more investigation is needed. The waveform obtained from the head-on collisions shows that one can simulate the evolution of a BBH system with this code. This enables us not only to continue studying numerical relativity in more complicated scenarios, like the inspiral of binary black holes, the recoil problem, and the final spin problem, but also to verify existing results and to go even further. We believe there can never be too much double-checking of existing results, nor too many further investigations built upon them. The result of the head-on collision perturbed by a scalar field gives us some insight into the possible distortion of a gravitational wave, since matter is abundant in most gravitating systems. All of these results verify that the code is reliable and ready to be engaged in the study of more realistic, astrophysical scenarios and of numerical relativity. We plan to use this new code to work on the inspiral of BBH systems in the very near future. A single black hole is completely determined by only two parameters, i.e., its mass and spin. Thus it is interesting to ask how to determine the final black hole produced by a BBH system from the information of the two initial black holes. The spin expansion method in \cite{boyle08} gives us some hints as to how to solve this problem, and we plan to study this problem with the new code next. \section*{Acknowledgments} We are grateful to Dr.~Ronald Taam for helpful discussions and encouragement and also to Dr.~Manish Parashar for offering us the GrACE library. This work was supported in part by the National Science Council under the grants NSC95-2112-M-006-017-MY2 and NSC97-2112-M-006-008. Z.~Cao is supported in part by the NSFC (Nos.~10671196 and 10731080). This work was also supported in part by the National Center of Theoretical Sciences. We are grateful to the National Center for High-performance Computing for the use of computer time and facilities.
\section{Introduction} The purpose of this document is to study a family of auto-equivalences of the derived category of the principal block of the BGG-category $\mathcal{O}$. In the geometric setting (i.e., perverse sheaves or $D$-modules on the flag variety) it is well known that much of the information of interest to representation theory is encoded in the convolution structure on the relevant categories of sheaves/$D$-modules. This is the theory of the geometric Hecke algebra and `Hecke patterns', see \cite{B}, \cite{BBM}, \cite{BD}, \cite{BeGi}, \cite{L}, \cite{LV}, \cite{T}, \cite{So10}. The equivalences studied in this note correspond to the `standard generators' of the Hecke algebra. One of the goals is to show that many of the results regarding category $\mathcal{O}$ in the literature are very natural from this point of view: namely that of category $\mathcal{O}$ as a `reasonably faithful module' for the Hecke algebra (see \cite{So10}). Our approach is algebraic - perverse sheaves and the geometry of the flag variety are notably absent in our arguments. In the conclusion we do explain how stronger results can be achieved using an additional assumption (Assumption \ref{koszul}). However, as far as I am aware, the only known proof of this assumption is geometric. Let me now describe the contents of this document and indicate the main results. In \S\ref{s:not}-\S\ref{s:complexesoffunctors} we set up some homological algebra that culminates in \S\ref{s:genconst} in the form of Thm.\ \ref{mainthm}, which is originally due to Rickard \cite[Thm.\ 2.1]{Ri} (also see \cite[\S 2.2.3]{Ro}, \cite[Lemma 4.1.1.]{ABG}, \cite[Thm.\ 7.3.16]{Vo}). In \S\ref{s:catO} we introduce the BGG category $\mathcal{O}$ and, following \cite[\S2.10]{Ja}, consider translation and wall-crossing functors. Thm.\ \ref{mainthm} is exploited to construct the aforementioned derived auto-equivalences of the principal block of $\mathcal{O}$ (Prop.\ \ref{dequivO}). Using these we give a quick proof of `Bott's Theorem' \cite[Thm.\ 15]{Bott} in Thm.\ \ref{bottsthm}. The constructed derived equivalences satisfy the braid relations; in our setting this is due to Rouquier \cite[Thm.\ 4.4]{Ro}. In \S\ref{s:lastnongraded} we exploit the braid relations to show that there is a derived auto-equivalence that switches tilting modules with projective modules (Thm.\ \ref{switchtilting}). Our proof is formally the same as that of \cite[Prop.\ 2.3]{BBM} (also see \cite[Thm.\ 8]{StM}). In fact, the auto-equivalences considered in this document are Koszul dual (in the sense of \cite{BGS}) to the Radon transforms of \cite{BBM}. In Cor.\ \ref{tiltingcharformula} and Cor.\ \ref{ringelselfduality} we recover Soergel's character formula for tilting modules \cite[Thm.\ 6.7]{So98} and the Ringel self duality of the principal block (implicit in \cite{So98}). It should be pointed out that although Soergel doesn't explicitly construct a derived equivalence in \cite{So98} (he works with categories of modules with Verma/dual Verma flags), the derived functor of Arkhipov's twisting functor considered by him is a derived equivalence. In fact, (derived) twisting functors correspond to the Radon transforms of \cite{BBM} and so our approach is essentially Koszul dual to Soergel's. In \S\ref{s:gO}, following Soergel and Stroppel, we consider graded category $\mathcal{O}$. This section makes heavy use of \cite{St}.
Proceeding as in the non-graded case we construct derived auto-equivalences in this setting and prove graded analogues of the results in the previous sections. In particular, we direct the reader to Thm.\ \ref{dequivOgraded} and \S\ref{s:gtilting}. Finally, in \S\ref{s:kl}, we explain the connection between our auto-equivalences and Kazhdan-Lusztig theory. The main results are Thm.\ \ref{klequivtilt} and Thm.\ \ref{klconj}. Assumption \ref{koszul} and Thm.\ \ref{klconj} are the only results in this note that depend on geometric results. \subsection*{Acknowledgments} I am grateful to W.\ Soergel for some extremely helpful correspondence. I also thank A.\ Ram for convincing me that this note needed to be written, without his encouragement this document would have never seen the light of day. Part of this document was written while I was a graduate student at the University of Wisconsin-Madison and I thank the department there for its support. This work is partially supported by the NSF grant DMS-0652641. \section{Notations and conventions}\label{s:not} \subsection{}Functors between additive categories will be assumed to be additive. \subsection{}The terms `functorial', `natural' and `canonical' will be used as synonyms for `a morphism of functors'. \subsection{}If $\mathcal{A}$ is an additive category, we write $\mathrm{Kom}(\mathcal{A})$ for the category of complexes in $\mathcal{A}$. If $\mathcal{A}$ is abelian, we write $\mathrm{D^b}(\mathcal{A})$ for the bounded derived category of $\mathcal{A}$. \subsection{}When working with triangulated categories we denote the shift functor by $[1]$. Distinguished triangles $X\to Y \to Z\to X[1]$ will often be written as $X\to Y\to Z\leadsto$. \subsection{}Let $\mathcal{T}$ be a triangulated category. We say that an object $X\in\mathcal{T}$ is filtered by objects $Y_1,\ldots, Y_n$ if there exists a sequence of objects $0=X_0, X_1, \ldots, X_n=X$ and distinguished triangles $X_{i-1}\to X_i \to Y_i \leadsto$. We will often use this notion in the following situation: let $H$ be a cohomological functor on $\mathcal{T}$. Let $X, X_i, Y_i$ be as above. Assume that $H(Y_i[m])=0$ for all $m\in \ZZ$ and all $i$. Then, proceeding by induction it follows that $H(X[m])=0$ for all $m\in \ZZ$. \subsection{}If $\mathcal{A}$ is an abelian or triangulated category, we write $K_0(\mathcal{A})$ for the Grothendieck group of $\mathcal{A}$. If $\mathcal{A}$ is abelian, then $K_0(\mathcal{A})$ and $K_0(\mathrm{D^b}(\mathcal{A}))$ are canonically isomorphic and we take the liberty of identifying them with each other. \section{Reminders on adjoint functors}\label{s:1} \subsection{}\label{s:adjunctions}Let $f_*\colon\mathcal{A}\to\mathcal{B}$ and $f^*\colon\mathcal{B}\to\mathcal{A}$ be functors. An adjunction $(f^*,f_*)$ between $f^*$ and $f_*$ is the data of two natural transformations $\varepsilon\colon f^*f_*\to\mathrm{id}_{\mathcal{A}}$ and $\eta:\mathrm{id}_{\mathcal{B}}\to f_*f^*$ such that the compositions \begin{equation}\label{eq:unitcounit} f_*\mapright{\eta \mathbbm{1}_{f_*}} f_*f^*f_*\mapright{\mathbbm{1}_{f_*}\varepsilon}f_* \qquad \mbox{and} \qquad f^*\mapright{\mathbbm{1}_{f^*} \eta}f^*f_*f^*\mapright{\varepsilon \mathbbm{1}_{f^*}} f^*\end{equation} are equal to the identity on $f_*$ and $f^*$, respectively. The morphisms $\eta$ and $\varepsilon$ are the unit and counit of the adjunction respectively. 
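For orientation, here is a standard example of this definition (included purely as an illustration; it plays no role in the sequel). Let $R\to S$ be a homomorphism of rings, let $\mathcal{A}=S\text{-Mod}$, $\mathcal{B}=R\text{-Mod}$, let $f_*\colon\mathcal{A}\to\mathcal{B}$ be the restriction of scalars and $f^*=S\otimes_R(-)\colon\mathcal{B}\to\mathcal{A}$ the extension of scalars. Then \[\eta_M\colon M\to S\otimes_R M,\ m\mapsto 1\otimes m,\qquad \varepsilon_N\colon S\otimes_R N\to N,\ s\otimes n\mapsto sn,\] define an adjunction $(f^*,f_*)$: both compositions in \eqref{eq:unitcounit} reduce to the identities $n\mapsto 1\otimes n\mapsto n$ and $s\otimes m\mapsto s\otimes 1\otimes m\mapsto s\otimes m$.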
An adjunction gives an isomorphism, functorial in $A\in\mathcal{A}$ and $B\in\mathcal{B}$: \[\alpha_{A,B}\colon \mathrm{Hom}_{\mathcal{A}}(f^*B, A){\mapright \sim} \mathrm{Hom}_{\mathcal{B}}(B, f_*A), \qquad \phi\mapsto \mathbbm{1}_{f_*}\phi\circ\eta_B. \] The inverse is given by $\psi\mapsto \varepsilon_A\circ \mathbbm{1}_{f^*}\psi$. Conversely, a functorial isomorphism $\alpha_{A,B}$ as above provides an adjunction $(f^*,f_*)$. Namely, set $\varepsilon_A=\alpha^{-1}_{A,f_*A}(\mathrm{id}_{f_*A})$ and $\eta_B=\alpha_{f^*B,B}(\mathrm{id}_{f^*B})$. If $(f^*,f_*)$ is an adjunction, then the functor $f^*$ is left adjoint to $f_*$ and the functor $f_*$ is right adjoint to $f^*$. \begin{lemma}\label{keylem} Let $\mathcal{A}$ and $\mathcal{B}$ be additive categories. Suppose $(f^*, f_*)$ is an adjunction between functors $f^*\colon\mathcal{A}\to\mathcal{B}$ and $f_*\colon\mathcal{B}\to \mathcal{A}$. Let $X\in \mathcal{A}$, $Y\in\mathcal{B}$. \begin{enumerate} \item If $f^*X \neq 0$, then the unit map $\eta_X\colon X \to f_*f^*X$ is non-zero. \item If $f_*Y \neq 0$, then the counit map $\varepsilon_Y\colon f^*f_*Y \to Y$ is non-zero. \end{enumerate} \end{lemma} \begin{proof} As the composition $f^* X \mapright{f^*(\eta_X)} f^*f_*f^*X \mapright{\varepsilon_{f^*X}} f^*X$ is the identity on $f^*X$ (see \eqref{eq:unitcounit}), we infer that if $f^*X\neq 0$, then $\eta_X\neq 0$. The proof of (ii) is similar. \end{proof} \subsection{}\label{s:transpose}Let $f^*, g^*\colon\mathcal{B}\to\mathcal{A}$, $f_*,g_*\colon\mathcal{A}\to\mathcal{B}$ be functors and let $(f^*,f_*)$, $(g^*,g_*)$ be adjunctions. Let $\eta$ and $\varepsilon$ denote the unit and counit of the adjunction $(f^*,f_*)$, and let $\eta'$ and $\varepsilon'$ denote the unit and counit of the adjunction $(g^*,g_*)$. Let $\phi\colon f_*\to g_*$ be a natural transformation. The \emph{transpose} $\phi^{\vee}\colon g^*\to f^*$ is the composition \begin{equation}\label{eq:transpose} g^*\mapright{\mathbbm{1}_{g^*} \eta}g^*f_*f^*\mapright{\mathbbm{1}_{g^*}\phi\mathbbm{1}_{f^*}}g^*g_*f^*\mapright{\varepsilon'\mathbbm{1}_{f^*}}f^*. \end{equation} The following is a reformulation of \cite[Ch.\ 4 \S7, Thm.\ 2]{MacL}. \begin{prop}\label{transposeunique} Suppose $(f^*, f_*)$ and $(g^*, g_*)$ are adjunctions between functors $f^*,g^*\colon\mathcal{B}\to\mathcal{A}$ and $f_*,g_*\colon \mathcal{A}\to\mathcal{B}$. Let \[ \alpha\colon \mathrm{Hom}_{\mathcal{A}}(f^* - , -)\mapright{\sim}\mathrm{Hom}_{\mathcal{B}}(-,f_* -), \quad \alpha'\colon\mathrm{Hom}_{\mathcal{A}}(g^*-,-)\mapright{\sim}\mathrm{Hom}_{\mathcal{B}}(-,g_*-), \] be the canonical isomorphisms obtained from this data. Let $\phi\colon f_*\to g_*$ be a natural transformation. Then $\phi^{\vee}\colon g^*\to f^*$ is the unique natural transformation such that the following diagram commutes: \[\xymatrixrowsep{1.5pc}\xymatrixcolsep{1.5pc}\xymatrix{ \mathrm{Hom}_{\mathcal{A}}(f^* -, -)\ar[r]^-{\circ \phi^{\vee}}\ar[d]_{\alpha}^{\sim}&\mathrm{Hom}_{\mathcal{A}}(g^* - , -)\ar[d]_{\sim}^{\alpha'} \\ \mathrm{Hom}_{\mathcal{B}}(-, f_* -)\ar[r]_-{\phi\circ}& \mathrm{Hom}_{\mathcal{B}}(-, g_* -) } \] \end{prop} \begin{proof}By definition, $\alpha'^{-1}(\phi\circ\alpha(?))=\varepsilon'\circ \mathbbm{1}_{g^*}\phi\circ \mathbbm{1}_{g^*f_*}? \circ \mathbbm{1}_{g^*}\eta$. Since all morphisms involved are natural transformations, \begin{align*} \varepsilon'\circ \mathbbm{1}_{g^*}\phi\circ \mathbbm{1}_{g^*f_*}?
\circ \mathbbm{1}_{g^*}\eta &=\varepsilon'\circ\mathbbm{1}_{g^*g_*}?\circ\mathbbm{1}_{g^*}\phi\mathbbm{1}_{f^*}\circ\mathbbm{1}_{g^*}\eta \\ &=?\circ \varepsilon'\mathbbm{1}_{f^*}\circ\mathbbm{1}_{g^*}\phi\mathbbm{1}_{f^*}\circ\mathbbm{1}_{g^*}\eta \\ &=?\circ\phi^{\vee}. \end{align*} So $\alpha'^{-1}(\phi\circ\alpha(?))=?\circ\phi^{\vee}$ which gives the commutativity of the diagram. As $\alpha$ and $\alpha'$ are isomorphisms, the natural transformation $\circ\phi^{\vee}\colon \mathrm{Hom}_{\mathcal{A}}(f^*-,-)\to\mathrm{Hom}_{\mathcal{A}}(g^*-,-)$ is unique. Hence, $\phi^{\vee}$ is unique by the Yoneda Lemma. \end{proof} \begin{prop}\label{transposecommute}Suppose $(f^*, f_*)$ and $(g^*, g_*)$ are adjunctions between functors $f^*, g^*\colon\mathcal{B}\to\mathcal{A}$ and $f_*,g_*\colon\mathcal{A}\to\mathcal{B}$. Let $\phi\colon f_*\to g_*$ be a natural transformation. \begin{enumerate} \item Let $\eta,\varepsilon$ denote the unit and counit of $(f^*,f_*)$ and let $\eta',\varepsilon'$ be the unit and counit of $(g^*, g_*)$. Then the following diagrams commute: \[\xymatrixcolsep{1.5pc}\xymatrixrowsep{1.5pc}\xymatrix{ f^*f_*\ar[r]^{\varepsilon}&\mathrm{id} \\ g^*f_*\ar[u]^{\phi^{\vee}\mathbbm{1}_{f_*}}\ar[r]_{\mathbbm{1}_{g^*}\phi}&g^*g_*\ar[u]_{\varepsilon'}} \qquad \xymatrixcolsep{1.5pc}\xymatrixrowsep{1.5pc}\xymatrix{ f_*f^*\ar[r]^{\phi\mathbbm{1}_{f^*}}&g_*f^* \\ \mathrm{id}\ar[u]^{\eta}\ar[r]_{\eta'}&g_*g^*\ar[u]_{\mathbbm{1}_{g_*}\phi^{\vee}} } \] \item Assume $\mathcal{A}$ and $\mathcal{B}$ are additive. Let $\psi\colon f_*\to g_*$ be a natural transformation, then $(\phi+\psi)^{\vee}=\phi^{\vee}+\psi^{\vee}$. \item Let $(h^*, h_*)$ be an adjunction between functors $h^*\colon\mathcal{B}\to\mathcal{A}$ and $h_*\colon\mathcal{A}\to\mathcal{B}$. Further, let $\psi\colon g_*\to h_*$ be a natural transformation. Then $(\psi\circ\phi)^{\vee} = \phi^{\vee}\circ \psi^{\vee}$. \end{enumerate} \end{prop} \begin{proof} (i) follows from the commutativity of the diagram in Prop.\ \ref{transposeunique}. (ii) follows from our standing assumption that functors between additive categories are additive, i.e., the induced maps on $\mathrm{Hom}$ groups are homomorphisms. (iii) follows from the uniqueness part of Prop.\ \ref{transposeunique}. \end{proof} \begin{prop}\label{transposeiso}Let $f^*\colon\mathcal{B}\to\mathcal{A}$, $f_*\colon\mathcal{A}\to\mathcal{B}$ be functors and let $(f^*,f_*)$ be an adjunction. \begin{enumerate} \item $\mathbbm{1}_{f_*}^{\vee}=\mathbbm{1}_{f^*}$. \item Assume $\mathcal{A}$ and $\mathcal{B}$ are additive. Then $0^{\vee}=0$. \item If $e\colon f_*\to f_*$ is idempotent, then $e^{\vee}\colon f^*\to f^*$ is also idempotent. \end{enumerate} \end{prop} \begin{proof}Each of the equalities follows from the uniqueness part of Prop.\ \ref{transposeunique}. Details are left to the reader out of sheer laziness. \end{proof} \subsection{} Let $(f^*,f_*)$ and $(g^*,g_*)$ be adjunctions between functors $g^*\colon\mathcal{A}\to\mathcal{B}$, $g_*\colon\mathcal{B}\to\mathcal{A}$, $f^*\colon\mathcal{B}\to\mathcal{C}$ and $f_*\colon\mathcal{C}\to\mathcal{B}$. Then we have the data of four morphisms (units and counits): $\eta\colon \mathrm{id}_{\mathcal{B}} \to f_*f^*$, $\varepsilon\colon f^*f_* \to \mathrm{id}_{\mathcal{C}}$, $\eta'\colon \mathrm{id}_{\mathcal{A}} \to g_*g^*$ and $\varepsilon'\colon g^*g_* \to \mathrm{id}_{\mathcal{B}}$. It is well known that $f^*g^*$ is left adjoint to $g_*f_*$.
It is sometimes useful to have a precise version of this: let $\overline{\eta}$ and $\overline{\varepsilon}$ be the compositions \[\mathrm{id}_{\mathcal{A}}\mapright{\eta'}g_*g^*\mapright{\mathbbm{1}_{g_*}\eta\mathbbm{1}_{g^*}}g_*f_*f^*g^* \quad\mbox{and}\quad f^*g^*g_*f_*\mapright{\mathbbm{1}_{f^*}\varepsilon'\mathbbm{1}_{f_*}}f^*f_*\mapright{\varepsilon}\mathrm{id}_{\mathcal{C}},\] respectively. \begin{lemma}\label{transposelemma}The natural transformations $\overline{\eta}$ and $\overline{\varepsilon}$ define an adjunction $(f^*g^*, g_*f_*)$. Further, if $g^*=f_*$ (so that, in particular, $\mathcal{C}=\mathcal{A}$), then, computing transposes with respect to the adjunctions $(\mathrm{id},\mathrm{id})$ and $(f^*g^*, g_*f_*)$, we have $\varepsilon^{\vee}=\eta'$ and $(\eta')^{\vee}=\varepsilon$. \end{lemma} \begin{proof}We have \begin{align*} \mathbbm{1}_{g_*f_*}\overline{\varepsilon}\circ\overline{\eta}\mathbbm{1}_{g_*f_*} &= \mathbbm{1}_{g_*f_*}\varepsilon\circ\mathbbm{1}_{g_*f_*f^*}\varepsilon'\mathbbm{1}_{f_*}\circ\mathbbm{1}_{g_*}\eta\mathbbm{1}_{g^*g_*f_*}\circ\eta'\mathbbm{1}_{g_*f_*} \\ &= \mathbbm{1}_{g_*f_*}\varepsilon\circ\mathbbm{1}_{g_*}\eta\mathbbm{1}_{f_*}\circ\mathbbm{1}_{g_*}\varepsilon'\mathbbm{1}_{f_*}\circ\eta'\mathbbm{1}_{g_*f_*} \\ &=\mathbbm{1}_{g_*f_*}, \end{align*} where the first equality is the definition of $\overline{\varepsilon}$ and $\overline{\eta}$, the second equality holds due to $\eta$ and $\varepsilon'$ being natural transformations and the last equality follows from the definition of unit/counit \eqref{eq:unitcounit}. The proof that $\overline{\varepsilon}\mathbbm{1}_{f^*g^*}\circ\mathbbm{1}_{f^*g^*}\overline{\eta}=\mathbbm{1}_{f^*g^*}$ is similar. Thus, $\overline{\eta}$ and $\overline{\varepsilon}$ define an adjunction $(f^*g^*, g_*f_*)$. Further, \[\varepsilon^{\vee}=\mathbbm{1}_{g_*f_*}\varepsilon\circ\overline{\eta} =\mathbbm{1}_{g_*f_*}\varepsilon\circ \mathbbm{1}_{g_*}\eta \mathbbm{1}_{f_*} \circ \eta' =\eta',\] where the first equality is the definition of transpose \eqref{eq:transpose}, the second equality is the definition of $\overline{\eta}$ and the last equality follows from the definition of the unit/counit \eqref{eq:unitcounit}. Similarly, \[(\eta')^{\vee}=\varepsilon\circ \mathbbm{1}_{f^*}\varepsilon'\mathbbm{1}_{f_*}\circ \mathbbm{1}_{f^*f_*}\eta'=\varepsilon.\qedhere\] \end{proof} \subsection{}Let $(h^*, h_*)$ be another adjunction, between functors $h^*\colon \mathcal{Z}\to\mathcal{A}$, $h_*\colon \mathcal{A}\to\mathcal{Z}$. Using the procedure above there are, \emph{a priori}, two different ways to define an adjunction $(f^*g^*h^*, h_*g_*f_*)$: either first construct an adjunction $(g^*h^*, h_*g_*)$ and then an adjunction $(f^*(g^*h^*), (h_*g_*)f_*)$ or first construct an adjunction $(f^*g^*, g_*f_*)$ and then an adjunction $((f^*g^*)h^*, h_*(g_*f_*))$. Let \[ \mathrm{Hom}_{\mathcal{C}}(f^*g^*h^*X, Y)\mapright{\alpha} \mathrm{Hom}_{\mathcal{B}}(g^*h^*X, f_*Y) \mapright{\alpha'}\mathrm{Hom}_{\mathcal{Z}}(X, h_*g_*f_* Y),\] \[ \mathrm{Hom}_{\mathcal{C}}(f^*g^*h^*X, Y)\mapright{\alpha''}\mathrm{Hom}_{\mathcal{A}}(h^*X, g_*f_*Y) \mapright{\alpha'''}\mathrm{Hom}_{\mathcal{Z}}(X, h_*g_*f_* Y),\] $X\in \mathcal{Z}$, $Y\in\mathcal{C}$, be the sequences of canonical isomorphisms obtained this way. \begin{prop}The following diagram commutes.
\[\xymatrix{ \mathrm{Hom}_{\mathcal{C}}(f^*g^*h^*X, Y) \ar[r]^{\alpha''}\ar[d]_{\alpha}& \mathrm{Hom}_{\mathcal{A}}(h^*X, g_*f_*Y) \ar[d]^{\alpha'''} \\ \mathrm{Hom}_{\mathcal{B}}(g^*h^*X, f_*Y) \ar[r]^{\alpha'} & \mathrm{Hom}_{\mathcal{Z}}(X, h_*g_*f_*Y) }\] \end{prop} \begin{proof} Both $\alpha'\circ \alpha$ and $\alpha'''\circ \alpha''$ are equal to the composite canonical isomorphism \begin{align*} &\mathrm{Hom}_{\mathcal{C}}(f^*g^*h^*X, Y)\mapright{\sim}\mathrm{Hom}_{\mathcal{B}}(g^*h^*X, f_*Y)\mapright{\sim} \mathrm{Hom}_{\mathcal{A}}(h^*X, g_*f_*Y)\\ &\mapright{\sim} \mathrm{Hom}_{\mathcal{Z}}(X, h_*g_*f_*Y).\qedhere \end{align*} \end{proof} \subsection{} Let $f_!,g_!\colon\mathcal{A}\to\mathcal{B}$, $f^!,g^!\colon\mathcal{B}\to\mathcal{A}$ be functors and let $(f_!,f^!)$, $(g_!, g^!)$ be adjunctions. Write $\eta$ and $\varepsilon$ for the unit and counit of $(f_!, f^!)$, and write $\eta'$ and $\varepsilon'$ for the unit and counit of $(g_!, g^!)$. Suppose $\psi\colon g_!\to f_!$ is a natural transformation. Then the \emph{right transpose} $\vphantom{\psi}^{\vee}\psi\colon f^!\to g^!$ is the composition \begin{equation}\label{eq:righttranspose} f^!\mapright{\eta'\mathbbm{1}_{f^!}}g^!g_!f^!\mapright{\mathbbm{1}_{g^!}\psi\mathbbm{1}_{f^!}}g^!f_!f^!\mapright{\mathbbm{1}_{g^!}\varepsilon}g^!. \end{equation} The next result allows us to transport all the statements for transposes to right transposes. \begin{prop}Let $(f_!, f^!)$ and $(g_!, g^!)$ be adjunctions between functors $f_!,g_!\colon\mathcal{A}\to\mathcal{B}$ and $f^!,g^!\colon\mathcal{B}\to\mathcal{A}$. Let $\phi\colon f^!\to g^!$ be a natural transformation. Then $\vphantom{\phi}^{\vee}(\phi^{\vee})=\phi$. Similarly, if $\psi\colon g_! \to f_!$ is a natural transformation, then $(\vphantom{\psi}^{\vee}\psi)^{\vee} = \psi$. \end{prop} \begin{proof}Let $\eta,\varepsilon$ be the unit and counit of $(f_!, f^!)$ and let $\eta',\varepsilon'$ be the unit and counit of $(g_!,g^!)$. Then \begin{align*} \vphantom{\phi}^{\vee}(\phi^\vee)&=\mathbbm{1}_{g^!}\varepsilon\circ \mathbbm{1}_{g^!}\varepsilon'\mathbbm{1}_{f_!f^!}\circ\mathbbm{1}_{g^!g_!}\phi\mathbbm{1}_{f_!f^!}\circ \mathbbm{1}_{g^!g_!}\eta\mathbbm{1}_{f^!}\circ \eta'\mathbbm{1}_{f^!} \\ &=\mathbbm{1}_{g^!}\varepsilon\circ \mathbbm{1}_{g^!}\varepsilon'\mathbbm{1}_{f_!f^!}\circ\mathbbm{1}_{g^!g_!}\phi\mathbbm{1}_{f_!f^!}\circ \eta'\mathbbm{1}_{f^!f_!f^!}\circ \eta\mathbbm{1}_{f^!} \\ &=\mathbbm{1}_{g^!}\varepsilon\circ \mathbbm{1}_{g^!}\varepsilon'\mathbbm{1}_{f_!f^!}\circ \eta'\mathbbm{1}_{g^!f_!f^!}\circ\phi\mathbbm{1}_{f_!f^!}\circ\eta\mathbbm{1}_{f^!} \\ &= \mathbbm{1}_{g^!}\varepsilon\circ \phi \mathbbm{1}_{f_!f^!}\circ\eta\mathbbm{1}_{f^!} \\ &= \phi \circ \mathbbm{1}_{f^!}\varepsilon\circ \eta\mathbbm{1}_{f^!} \\ &= \phi. \end{align*} The first equality is by the definition of transpose \eqref{eq:transpose} and right transpose \eqref{eq:righttranspose}, the second, third and fifth equalities are due to the fact that all morphisms involved are natural transformations. The fourth and last equalities follow from the definition of the unit/counit \eqref{eq:unitcounit}. The proof that $(\vphantom{\psi}^{\vee}\psi)^{\vee}=\psi$ is similar. \end{proof} \section{Complexes of functors}\label{s:complexesoffunctors} \subsection{}Let $\mathcal{A},\mathcal{B}$ be additive categories. Write $\mathscr{H}\!om(\mathcal{A},\mathcal{B})$ for the additive category of functors $\mathcal{A}\to \mathcal{B}$ with morphisms given by natural transformations.
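\begin{example}The following standard illustration is only meant to give some intuition for the formalism below; it is not needed in the sequel. Let $R$ and $S$ be rings, let $\mathcal{A}$ be the category of right $R$-modules and let $\mathcal{B}$ be the category of right $S$-modules. Any $(R,S)$-bimodule $M$ defines an object $-\otimes_RM$ of $\mathscr{H}\!om(\mathcal{A},\mathcal{B})$, and any morphism of $(R,S)$-bimodules defines a morphism of functors. Consequently, a complex of $(R,S)$-bimodules defines an object of $\mathrm{Kom}(\mathscr{H}\!om(\mathcal{A},\mathcal{B}))$. The two-term complexes $\Theta_s^*$ and $\Theta_s^!$ of \S\ref{s:definetheta} should be kept in mind as the examples of interest in this note.\end{example}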
Let $\mathcal{C}$ be another additive category. Let $F\in\mathrm{Kom}(\mathscr{H}\!om(\mathcal{B},\mathcal{C}))$, $G\in\mathrm{Kom}(\mathscr{H}\!om(\mathcal{A},\mathcal{B}))$. Define the object $FG$ in $\mathrm{Kom}(\mathscr{H}\!om(\mathcal{A},\mathcal{C}))$ to be the complex whose degree $n$ component is $\bigoplus_{i+j=n}F^iG^j$ with differential \[ d_{FG}\colon F^iG^j \to F^{i+1}G^j \oplus F^iG^{j+1}, \qquad d_{FG}=d_F\mathbbm{1}_{G^j}+(-1)^i\mathbbm{1}_{F^i}d_G. \] \begin{remark} $FG$ is the total complex of the double complex $\{F^iG^j\}_{i,j}$. \end{remark} \begin{prop}\label{2catassoc} Let $\mathcal{A},\mathcal{B},\mathcal{C},\mathcal{D}$ be additive categories. Let $F\in\mathrm{Kom}(\mathscr{H}\!om(\mathcal{C},\mathcal{D}))$, $G\in\mathrm{Kom}(\mathscr{H}\!om(\mathcal{B},\mathcal{C}))$, $H\in\mathrm{Kom}(\mathscr{H}\!om(\mathcal{A},\mathcal{B}))$. Then $(FG)H = F(GH)$. \end{prop} \begin{proof} The degree $n$ component of both $(FG)H$ and $F(GH)$ is $\bigoplus_{i+j+k=n}F^iG^jH^k$. It remains to check that the differentials on both sides coincide. The differential for $(FG)H$, $d_{(FG)H}\colon F^iG^jH^k \to F^{i+1}G^jH^k \oplus F^iG^{j+1}H^k \oplus F^iG^jH^{k+1}$ is \begin{align*} d_{(FG)H} &= d_{FG}\mathbbm{1}_{H^k}+(-1)^{i+j}\mathbbm{1}_{F^iG^j}d_H \\ &=d_F\mathbbm{1}_{G^jH^k}+(-1)^i\mathbbm{1}_{F^i}d_G\mathbbm{1}_{H^k} + (-1)^{i+j}\mathbbm{1}_{F^iG^j}d_H. \end{align*} The differential for $F(GH)$, $d_{F(GH)}\colon F^iG^jH^k \to F^{i+1}G^jH^k \oplus F^iG^{j+1}H^k \oplus F^iG^jH^{k+1}$ is \begin{align*} d_{F(GH)}&=d_{F}\mathbbm{1}_{G^jH^k}+(-1)^i\mathbbm{1}_{F^i}d_{GH} \\ &= d_{F}\mathbbm{1}_{G^jH^k} + (-1)^i \mathbbm{1}_{F^i}d_G\mathbbm{1}_{H^k} + (-1)^{i+j}\mathbbm{1}_{F^iG^j}d_H.\qedhere \end{align*} \end{proof} \subsection{}Let $\mathcal{A}$ and $\mathcal{B}$ be additive categories. Let $(f^*_i, f_{i*})$, $i\in\ZZ$, be adjunctions between functors $f_{i*}\colon\mathcal{A}\to\mathcal{B}$ and $f_{i}^*\colon\mathcal{B}\to\mathcal{A}$. Suppose we have a complex of functors \[ F_* = \cdots\mapright{d_{-2}}f_{-1*}\mapright{d_{-1}} f_{0*} \mapright{d_0} f_{1*} \mapright{d_{1}} \cdots, \] with $f_{0*}$ in degree $0$. Set \[ F^* = \cdots \mapright{d_1^{\vee}}f_1^*\mapright{d_{0}^{\vee}} f_0^* \mapright{d_{-1}^{\vee}} f_{-1}^*\mapright{d_{-2}^{\vee}}\cdots, \] with $f^*_0$ in degree $0$. Then Prop.\ \ref{transposecommute} (iii) and Prop.\ \ref{transposeiso} (ii) imply that $F^*$ is also a complex. The degree $0$ term of $F^*F_*$ is $\bigoplus_{i\in\ZZ} f_i^*f_{i*}$. View the identity functor as a complex concentrated in degree $0$. Define $\mathrm{ev}\colon F^*F_*\to\mathrm{id}$ by \[ \left(\begin{smallmatrix}\cdots & -\varepsilon_{-2}&-\varepsilon_{-1}&\varepsilon_0&\varepsilon_1&-\varepsilon_2&-\varepsilon_3&\varepsilon_4&\varepsilon_5&\cdots\end{smallmatrix}\right)\colon\bigoplus_{i\in\ZZ} f_i^*f_{i*}\to \mathrm{id},\] where $\varepsilon_i$ is the counit of the adjunction $(f_i^*, f_{i*})$. The differential on the degree $-1$ term of $F^*F_*$ is given by \[\left(\begin{smallmatrix} d_i^{\vee}\mathbbm{1}_{f_{i*}} \\ (-1)^{i+1}\mathbbm{1}_{f^*_{i+1}}d_i \end{smallmatrix}\right) \colon f^*_{i+1}f_{i*}\to f^*_if_{i*} \oplus f^*_{i+1}f_{i+1*}. \] This combined with Prop.\ \ref{transposecommute} (i) implies that $\mathrm{ev}$ is a chain map. Similarly, the degree $0$ term of $F_*F^*$ is $\bigoplus_{i\in\ZZ}f_{i*}f_i^*$.
Define $\mathrm{coev}\colon\mathrm{id}\to F_*F^*$ by \[ \left(\begin{smallmatrix} \vdots \\ -\eta_{-2} \\ -\eta_{-1} \\ \eta_0 \\ \eta_1 \\ -\eta_2 \\ -\eta_3\\ \eta_4\\ \eta_5 \\ \vdots \end{smallmatrix}\right)\colon \mathrm{id}\to \bigoplus_{i\in\ZZ}f_{i*}f_i^*,\] where $\eta_i$ is the unit of the adjunction $(f^*_i, f_{i*})$. The differential on the degree $0$ term is given by \[ \left(\begin{smallmatrix} d_i\mathbbm{1}_{f_i^*}\\ (-1)^i\mathbbm{1}_{f_{i*}}d_{i-1}^{\vee} \end{smallmatrix}\right)\colon f_{i*}f_i^*\to f_{i+1*}f_i^*\oplus f_{i*}f_{i-1}^*. \] This combined with Prop.\ \ref{transposecommute} (i) gives that $\mathrm{coev}$ is a chain map. \begin{prop}\label{complexesadjoint}The compositions \[\xymatrix{F_* \ar[r]^-{\mathrm{coev}\mathbbm{1}_{F_*}}& F_*F^*F_*\ar[r]^-{\mathbbm{1}_{F_*}\mathrm{ev}}&F_* }\quad \mbox{and}\quad\xymatrix{F^*\ar[r]^-{\mathbbm{1}_{F^*}\mathrm{coev}}&F^*F_*F^*\ar[r]^-{\mathrm{ev}\mathbbm{1}_{F^*}}&F^*} \] are equal to the identity on $F_*$ and $F^*$, respectively. \end{prop} \begin{proof}This follows from the corresponding properties of $\eta_i$ and $\varepsilon_i$ (cf.\ Example \ref{prooffromscratch}). \end{proof} \section{A general construction}\label{s:genconst} \subsection{}\label{s:trianggen}Let $\mathcal{T}$ be a triangulated category. Let $\mathcal{A},\mathcal{B}\subseteq \mathcal{T}$ be subcategories. For $X\in \mathcal{T}$ write $[X]\in\mathcal{A}$ (resp.\ $\mathcal{B}$) if there exists an object in $\mathcal{A}$ (resp.\ $\mathcal{B}$) isomorphic to $X$. Define \begin{align*} \mathcal{A} * \mathcal{B} = \{Y\in\mathcal{T}\,|\,&\mbox{there is a distinguished triangle $X\to Y\to Z\leadsto$}\\ &\mbox{with $[X]\in\mathcal{A}$ and $[Z]\in\mathcal{B}$}\}. \end{align*} The operation $*$ is associative (see \cite[Lemma 1.3.10]{BBD}). Inductively define $\mathcal{A}^{*i}$, $i\in\ZZ_{\geq 0}$, by $\mathcal{A}^{*0}=0$ and $\mathcal{A}^{*i+1} = \mathcal{A} *\mathcal{A}^{*i}$. Set $\mathcal{A}^{*\infty}=\bigcup_{i\in\ZZ_{\geq 0}} \mathcal{A}^{*i}$. It is evident that $X\in\mathcal{A}^{*n}$ if and only if $X$ is filtered by some $Y_1,\ldots, Y_n\in\mathcal{A}$. \begin{lemma}\label{devissagetriang}Let $\mathcal{T}$ and $\mathcal{T}'$ be triangulated categories. Let $\mathcal{L}\subset \mathcal{T}$ be a subcategory (not necessarily triangulated). Suppose that $\mathcal{L}^{*\infty}=\mathcal{T}$. Let $f,g\colon \mathcal{T}\to\mathcal{T}'$ be exact functors and let $\varepsilon\colon f\to g$ be a morphism of exact functors. If $\varepsilon_L\colon fL \to gL$ is an isomorphism for each $L\in\mathcal{L}$, then $\varepsilon\colon f\to g$ is an isomorphism. \end{lemma} \begin{proof}Proceed by induction: assume that if $i<n$, then $\varepsilon_L\colon fL\to gL$ is an isomorphism for each $L\in \mathcal{L}^{*i}$. Let $M\in\mathcal{L}^{*n}$, then we have a distinguished triangle $N \to M \to L\leadsto$ with $N\in\mathcal{L}^{*n-1}$ and $L\in\mathcal{L}$. So we obtain a commutative diagram \[ \xymatrix{ N\ar[r]\ar[d]_{\varepsilon_N}^{\sim}& M\ar[r]\ar[d]_{\varepsilon_M} & L\ar@{~>}[r]\ar[d]_{\varepsilon_L}^{\sim}& \\ N\ar[r]& M\ar[r] & L\ar@{~>}[r]& }\] The outer vertical arrows are isomorphisms by hypothesis. This forces the middle arrow to also be an isomorphism. \end{proof} \subsection{}Let $\mathcal{A}$ and $\mathcal{B}$ be abelian categories. Let $F\in\mathrm{Kom}(\mathscr{H}\!om(\mathcal{A},\mathcal{B}))$. Assume that each component of $F$ is an exact functor. For further simplicity assume that $F$ is bounded.
Then $F$ defines a functor $\mathrm{Kom}(\mathcal{A}) \to\mathrm{Kom}(\mathcal{B})$ (it is defined exactly as the `composition' in \S\ref{s:complexesoffunctors}). Since each component of $F$ is exact, this gives an exact functor $\mathrm{D^b}(\mathcal{A})\to\mathrm{D^b}(\mathcal{B})$. The following is originally due to Rickard \cite[Thm.\ 2.1]{Ri} (also see \cite[\S2.2.3]{Ro}, \cite[Lemma 4.1.1]{ABG}, \cite[Thm.\ 7.3.16]{Vo}). \begin{thm}\label{mainthm} Let $\mathcal{A}$ and $\mathcal{B}$ be abelian categories. Assume each object in $\mathcal{A}$ has finite length. Let $(\pi^*,\pi_*)$ and $(\pi_*, \pi^!)$ be adjunctions between exact functors $\pi_*\colon\mathcal{A}\to\mathcal{B}$ and $\pi^*,\pi^!\colon\mathcal{B}\to\mathcal{A}$. Then we have the data of four morphisms (units and counits): \[ \eta\colon \mathrm{id}_{\mathcal{B}} \to \pi_*\pi^*, \quad \varepsilon\colon \pi^*\pi_*\to \mathrm{id}_{\mathcal{A}}, \quad \eta'\colon\mathrm{id}_{\mathcal{A}} \to \pi^!\pi_*, \quad \varepsilon'\colon \pi_*\pi^! \to\mathrm{id}_{\mathcal{B}}. \] Define complexes of functors $\Theta^*$ and $\Theta^!$: \[ \Theta^*=0\to \pi^*\pi_* \mapright{\varepsilon} \mathrm{id}_{\mathcal{A}} \to 0 \qquad\mbox{and}\qquad \Theta^!= 0 \to \mathrm{id}_{\mathcal{A}} \mapright{\eta'} \pi^!\pi_* \to 0, \] with $\pi^*\pi_*$ and $\pi^!\pi_*$ in degree $0$. By Lemma \ref{transposelemma} and Prop.\ \ref{complexesadjoint}, $\Theta^*$ is left adjoint to $\Theta^!$. Fix an adjunction $(\Theta^*,\Theta^!)$ and denote the unit by $\mathrm{coev}$ and the counit by $\mathrm{ev}$. \begin{enumerate} \item If $[\pi^*\pi_*\pi^!\pi_* X] = [\pi^*\pi_* X] + [\pi^!\pi_*X]$ in $K_0(\mathcal{A})$ for each $X\in\mathcal{A}$, then $\mathrm{ev}\colon\Theta^*\Theta^!\to\mathrm{id}$ is an isomorphism of functors on $\mathrm{D^b}(\mathcal{A})$. \item If $[\pi^!\pi_*\pi^*\pi_* X] = [\pi^*\pi_* X] + [\pi^!\pi_*X]$ in $K_0(\mathcal{A})$ for each $X\in\mathcal{A}$, then $\mathrm{coev}\colon\mathrm{id}\to\Theta^!\Theta^*$ is an isomorphism of functors on $\mathrm{D^b}(\mathcal{A})$. \end{enumerate} \end{thm} \begin{proof} By definition, the functor $\Theta^*\Theta^!$ is given by the complex \[\xymatrixcolsep{3.5pc}\xymatrix{ 0\ar[r]&\pi^*\pi_*\ar[r]^-{\left(\begin{smallmatrix}\varepsilon\\\mathbbm{1}_{\pi^*\pi_*}\eta'\end{smallmatrix}\right)}&\mathrm{id}_{\mathcal{A}}\oplus\pi^*\pi_*\pi^!\pi_*\ar[r]^-{\left(\begin{smallmatrix}-\eta'&\varepsilon\mathbbm{1}_{\pi^!\pi_*}\end{smallmatrix}\right)}&\pi^!\pi_*\ar[r]& 0.}\] By definition of the unit $\eta'$ and the counit $\varepsilon'$, the composition \[\xymatrixcolsep{3.5pc}\xymatrix{\pi^*\pi_* \ar[r]^-{\mathbbm{1}_{\pi^*\pi_*}\eta'} & \pi^*\pi_*\pi^!\pi_* \ar[r]^-{\mathbbm{1}_{\pi^*}\varepsilon'\mathbbm{1}_{\pi_*}} &\pi^*\pi_*}\] is the identity on $\pi^*\pi_*$. Thus, $\xymatrixcolsep{3.5pc}\xymatrix{ \pi^*\pi_*\ar[r]^-{\left(\begin{smallmatrix}\varepsilon\\\mathbbm{1}_{\pi^*\pi_*}\eta'\end{smallmatrix}\right)}&\mathrm{id}_{\mathcal{A}}\oplus\pi^*\pi_*\pi^!\pi_* }$ is a monomorphism. A similar argument shows that $\xymatrixcolsep{3.5pc}\xymatrix{ \mathrm{id}_{\mathcal{A}}\oplus\pi^*\pi_*\pi^!\pi_*\ar[r]^-{\left(\begin{smallmatrix}-\eta'&\varepsilon\mathbbm{1}_{\pi^!\pi_*}\end{smallmatrix}\right)}&\pi^!\pi_* }$ is an epimorphism. Hence, if $X\in\mathcal{A}$, then $\Theta^*\Theta^!X$ is isomorphic (in $\mathrm{D^b}(\mathcal{A})$) to an object in $\mathcal{A}$. Let $L\in \mathcal{A}$ be simple. Then, under the hypothesis of (i): \[[\Theta^*\Theta^!
L] = [\pi^*\pi_*\pi^!\pi_*L]+[L] - [\pi^*\pi_*L]-[\pi^!\pi_*L] = [L] \quad \mbox{in $K_0(\mathcal{A})$}.\] This forces $\Theta^*\Theta^!L\simeq L$. In particular, $\Theta^!L\neq 0$, so Lemma \ref{keylem} (ii) gives that $\mathrm{ev}\colon\Theta^*\Theta^!L\to L$ is non-zero. Since $L$ is simple, this implies that $\mathrm{ev}\colon\Theta^*\Theta^!L\to L$ is an isomorphism. As every object in $\mathcal{A}$ is of finite length, every object in $\mathcal{A}$ is filtered by simple objects. Further, every object in $\mathrm{D^b}(\mathcal{A})$ is filtered by shifts of objects in $\mathcal{A}$. Thus, every object in $\mathrm{D^b}(\mathcal{A})$ is filtered by shifts of the simple objects in $\mathcal{A}$. Applying Lemma \ref{devissagetriang} now gives (i). The functor $\Theta^!\Theta^*$ is given by the complex \[ \xymatrixcolsep{3.5pc}\xymatrix{0\ar[r]& \pi^*\pi_*\ar[r]^-{\left(\begin{smallmatrix}\eta'\mathbbm{1}_{\pi^*\pi_*} \\ -\varepsilon\end{smallmatrix}\right)}&\pi^!\pi_*\pi^*\pi_*\oplus\mathrm{id}_{\mathcal{A}}\ar[r]^-{\left(\begin{smallmatrix}\mathbbm{1}_{\pi^!\pi_*}\varepsilon & \eta'\end{smallmatrix}\right)} &\pi^!\pi_*\ar[r]&0}. \] Now an argument similar to the one for (i) gives (ii). \end{proof} \begin{example}\label{prooffromscratch}We will now work out a `proof from scratch' of Thm.\ \ref{mainthm} in the special case $\pi^!=\pi^*$. Let $\mathcal{A}$ and $\mathcal{B}$ be abelian categories. Assume each object in $\mathcal{A}$ has finite length. Let $(\pi^*,\pi_*)$ and $(\pi_*,\pi^*)$ be adjunctions between exact functors $\pi_*\colon\mathcal{A}\to\mathcal{B}$ and $\pi^*\colon\mathcal{B}\to\mathcal{A}$. Then we have the data of four morphisms (units and counits): \[ \eta\colon\mathrm{id}_{\mathcal{B}}\to\pi_*\pi^*, \quad \varepsilon\colon\pi^*\pi_*\to\mathrm{id}_{\mathcal{A}}, \quad \eta'\colon\mathrm{id}_{\mathcal{A}}\to\pi^*\pi_*, \quad \varepsilon'\colon\pi_*\pi^*\to\mathrm{id}_{\mathcal{B}}.\] Let $\Theta^*=0\to\pi^*\pi_*\mapright{\varepsilon}\mathrm{id}_{\mathcal{A}}\to 0$ and $\Theta^!=0\to\mathrm{id}_{\mathcal{A}}\mapright{\eta'}\pi^*\pi_*\to 0$ with $\pi^*\pi_*$ in degree $0$ in both cases. Let's show that $\Theta^*$ is left adjoint to $\Theta^!$. It is helpful to keep track of terms in this computation `in color' (I apologize to the reader trying to read this in monochrome). The functor $\color{blue}\Theta^*\color{red}\Theta^!$ is given by the complex \[\xymatrixcolsep{3.5pc}\xymatrix{ \color{blue}\pi^*\pi_*\color{red}\mathrm{id}_{\mathcal{A}}\ar[r]^-{\left(\begin{smallmatrix}\varepsilon\\\mathbbm{1}_{\color{blue}\pi^*\pi_*}\eta'\end{smallmatrix}\right)}&\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\mathrm{id}_{\mathcal{A}}\color{black}\oplus\color{blue}\pi^*\pi_*\color{red}\pi^*\pi_*\ar[r]^-{\left(\begin{smallmatrix}-\eta'&\varepsilon\mathbbm{1}_{\color{red}\pi^*\pi_*}\end{smallmatrix}\right)}&\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\pi^*\pi_*}\] with $\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\mathrm{id}_{\mathcal{A}}\color{black}\oplus\color{blue}\pi^*\pi_*\color{red}\pi^*\pi_*$ in degree $0$.
Define $\mathrm{ev}\colon\color{blue}\Theta^*\color{red}\Theta^!\color{black}\to\mathrm{id}_{\mathcal{A}}$ by \[\xymatrixcolsep{3.5pc}\xymatrix{ \color{blue}\pi^*\pi_*\color{red}\mathrm{id}_{\mathcal{A}}\ar[d]\ar[r]^-{\left(\begin{smallmatrix}\varepsilon\\\mathbbm{1}_{\color{blue}\pi^*\pi_*}\eta'\end{smallmatrix}\right)}&\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\mathrm{id}_{\mathcal{A}}\color{black}\oplus\color{blue}\pi^*\pi_*\color{red}\pi^*\pi_* \ar[d]_{\left(\begin{smallmatrix} -\mathrm{id} & \varepsilon\circ\color{blue}\mathbbm{1}_{\pi^*}\color{black}\varepsilon'\color{red}\mathbbm{1}_{\pi_*} \end{smallmatrix}\right)} \ar[r]^-{\left(\begin{smallmatrix}-\eta'&\varepsilon\mathbbm{1}_{\color{red}\pi^*\pi_*}\end{smallmatrix}\right)}&\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\pi^*\pi_*\ar[d]\\ 0\ar[r]&\mathrm{id}_{\mathcal{A}}\ar[r]&0 }\] We have \[ \left(\begin{smallmatrix} -\mathrm{id} & \varepsilon\circ\mathbbm{1}_{\pi^*}\varepsilon'\mathbbm{1}_{\pi_*} \end{smallmatrix}\right) \circ \left(\begin{smallmatrix}\varepsilon\\\mathbbm{1}_{\pi^*\pi_*}\eta'\end{smallmatrix}\right) = -\varepsilon + \varepsilon\circ\mathbbm{1}_{\pi^*}\varepsilon'\mathbbm{1}_{\pi_*}\circ\mathbbm{1}_{\pi^*\pi_*}\eta'=0, \] where the last equality is by the definition of the unit $\eta'$ and the counit $\varepsilon'$. Thus, $\mathrm{ev}$ is a chain map. The functor $\color{green}\Theta^!\color{blue}\Theta^*$ is given by the complex \[ \xymatrixcolsep{3.5pc}\xymatrix{\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\pi^*\pi_*\ar[r]^-{\left(\begin{smallmatrix}\eta'\mathbbm{1}_{\color{blue}\pi^*\pi_*} \\ -\varepsilon\end{smallmatrix}\right)}&\color{green}\pi^*\pi_*\color{blue}\pi^*\pi_*\color{black}\oplus\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\mathrm{id}_{\mathcal{A}}\ar[r]^-{\left(\begin{smallmatrix}\mathbbm{1}_{\color{green}\pi^*\pi_*}\varepsilon & \eta'\end{smallmatrix}\right)} &\color{green}\pi^*\pi_*\color{blue}\mathrm{id}_{\mathcal{A}}} \] with $\color{green}\pi^*\pi_*\color{blue}\pi^*\pi_*\color{black}\oplus\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\mathrm{id}_{\mathcal{A}}$ in degree $0$. Define $\mathrm{coev}\colon\mathrm{id}_{\mathcal{A}}\to\color{green}\Theta^!\color{blue}\Theta^*$ by \[ \xymatrixrowsep{4.5pc}\xymatrixcolsep{3.5pc}\xymatrix{ 0\ar[r]\ar[d]&\mathrm{id}_{\mathcal{A}} \ar[d]_-{\left(\begin{smallmatrix} \mathbbm{1}_{\pi^*}\eta\mathbbm{1}_{\pi_*}\circ\eta' \\ -\mathrm{id} \end{smallmatrix}\right)} \ar[r]&0\ar[d]\\ \color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\pi^*\pi_*\ar[r]^-{\left(\begin{smallmatrix}\eta'\mathbbm{1}_{\color{blue}\pi^*\pi_*} \\ -\varepsilon\end{smallmatrix}\right)}&\color{green}\pi^*\pi_*\color{blue}\pi^*\pi_*\color{black}\oplus\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\mathrm{id}_{\mathcal{A}}\ar[r]^-{\left(\begin{smallmatrix}\mathbbm{1}_{\color{green}\pi^*\pi_*}\varepsilon & \eta'\end{smallmatrix}\right)} &\color{green}\pi^*\pi_*\color{blue}\mathrm{id}_{\mathcal{A}} } \] We have \[\left(\begin{smallmatrix}\mathbbm{1}_{\pi^*\pi_*}\varepsilon & \eta'\end{smallmatrix}\right) \circ \left(\begin{smallmatrix} \mathbbm{1}_{\pi^*}\eta\mathbbm{1}_{\pi_*}\circ\eta' \\ -\mathrm{id} \end{smallmatrix}\right) = \mathbbm{1}_{\pi^*\pi_*}\varepsilon\circ\mathbbm{1}_{\pi^*}\eta\mathbbm{1}_{\pi_*}\circ\eta' - \eta' = 0, \] where the last equality is by the definition of the unit $\eta$ and the counit $\varepsilon$. Thus, $\mathrm{coev}$ is also a chain map.
The functor $\color{green}\Theta^!\color{blue}\Theta^*\color{red}\Theta^!$ is given by the complex (we omit the differential since it is no longer relevant to the discussion) \begin{align*} 0\to &\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\pi^*\pi_*\color{red}\mathrm{id}_{\mathcal{A}}\color{black}\to\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\pi^*\pi_*\color{red}\pi^*\pi_*\color{black}\oplus\color{green}\pi^*\pi_*\color{blue}\pi^*\pi_*\color{red}\mathrm{id}_{\mathcal{A}}\color{black}\oplus\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\mathrm{id}_{\mathcal{A}}\color{black} \to \\ &\to \color{green}\pi^*\pi_*\color{blue}\pi^*\pi_*\color{red}\pi^*\pi_*\color{black}\oplus\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\pi^*\pi_*\color{black}\oplus\color{green}\pi^*\pi_*\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\mathrm{id}_{\mathcal{A}}\color{black}\to \color{green}\pi^*\pi_*\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\pi^*\pi_*\color{black}\to 0 \end{align*} with $\color{green}\pi^*\pi_*\color{blue}\pi^*\pi_*\color{red}\pi^*\pi_*\color{black}\oplus\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\pi^*\pi_*\color{black}\oplus\color{green}\pi^*\pi_*\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\mathrm{id}_{\mathcal{A}}$ in degree $0$. The composition $\xymatrix{\color{red}\Theta^!\ar[r]^-{\mathrm{coev}\mathbbm{1}_{\color{red}\Theta^!}}&\color{green}\Theta^!\color{blue}\Theta^*\color{red}\Theta^!\ar[r]^-{\mathbbm{1}_{\color{green}\Theta^!}\mathrm{ev}}&\color{green}\Theta^!}$ is given by \[\xymatrix{ \color{red}\mathrm{id}_{\mathcal{A}}\ar[r]\ar[d]_{ \left(\begin{smallmatrix} 0 \\ \mathbbm{1}_{\color{green}\pi^*}\color{black}\eta\mathbbm{1}_{\color{blue}\pi_*}\circ\eta' \\ -\mathrm{id} \end{smallmatrix}\right) } &\color{red}\pi^*\pi_*\ar[d]_{ \left(\begin{smallmatrix} \mathbbm{1}_{\color{green}\pi^*}\eta\mathbbm{1}_{\color{blue}\pi_*\color{red}\pi^*\pi_*}\circ\eta'\mathbbm{1}_{\color{red}\pi^*\pi_*} \\ -\mathbbm{1}_{\color{red}\pi^*\pi_*} \\ 0 \end{smallmatrix}\right) }\\ \color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\pi^*\pi_*\color{red}\pi^*\pi_*\color{black}\oplus\color{green}\pi^*\pi_*\color{blue}\pi^*\pi_*\color{red}\mathrm{id}_{\mathcal{A}}\color{black}\oplus\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\mathrm{id}_{\mathcal{A}}\color{black}\ar[r]\ar[d]_{ \left(\begin{smallmatrix} \varepsilon\circ\mathbbm{1}_{\color{blue}\pi^*}\varepsilon'\mathbbm{1}_{\color{red}\pi_*} &0 &-\mathrm{id} \end{smallmatrix}\right)} &\color{green}\pi^*\pi_*\color{blue}\pi^*\pi_*\color{red}\pi^*\pi_*\color{black}\oplus\color{green}\mathrm{id}_{\mathcal{A}}\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\pi^*\pi_*\color{black}\oplus\color{green}\pi^*\pi_*\color{blue}\mathrm{id}_{\mathcal{A}}\color{red}\mathrm{id}_{\mathcal{A}}\color{black}\ar[d]_{ \left(\begin{smallmatrix} \mathbbm{1}_{\color{green}\pi^*\pi_*}\varepsilon\circ\mathbbm{1}_{\color{green}\pi^*\pi_*\color{blue}\pi^*}\varepsilon'\mathbbm{1}_{\color{red}\pi_*} &0 &-\mathbbm{1}_{\color{green}\pi^*\pi_*} \end{smallmatrix}\right)}\\ \color{green}\mathrm{id}_{\mathcal{A}}\ar[r]&\color{green}\pi^*\pi_* }\] It is evident that the vertical composition on the left is the identity.
Furthermore, \begin{align*} &\left(\begin{smallmatrix}\mathbbm{1}_{\pi^*\pi_*}\varepsilon\circ\mathbbm{1}_{\pi^*\pi_*\pi^*}\varepsilon'\mathbbm{1}_{\pi_*} & 0 & -\mathbbm{1}_{\pi^*\pi_*}\end{smallmatrix}\right)\circ \left(\begin{smallmatrix}\mathbbm{1}_{\pi^*}\eta\mathbbm{1}_{\pi_*\pi^*\pi_*}\circ\eta'\mathbbm{1}_{\pi^*\pi_*} \\ -\mathbbm{1}_{\pi^*\pi_*} \\ 0 \end{smallmatrix}\right) \\ &=\mathbbm{1}_{\pi^*\pi_*}\varepsilon\circ\mathbbm{1}_{\pi^*\pi_*\pi^*}\varepsilon'\mathbbm{1}_{\pi_*} \circ \mathbbm{1}_{\pi^*}\eta\mathbbm{1}_{\pi_*\pi^*\pi_*}\circ\eta'\mathbbm{1}_{\pi^*\pi_*} \\ &=\mathbbm{1}_{\pi^*\pi_*}\varepsilon\circ\mathbbm{1}_{\pi^*}\eta\mathbbm{1}_{\pi_*}\circ\mathbbm{1}_{\pi^*}\varepsilon'\mathbbm{1}_{\pi_*}\circ \eta'\mathbbm{1}_{\pi^*\pi_*} \\ &= \mathbbm{1}_{\pi^*\pi_*}. \end{align*} The second equality is due to $\eta$ and $\varepsilon'$ being natural transformations. The third equality is by the definition of the units $\eta,\eta'$ and the counits $\varepsilon,\varepsilon'$. So the vertical composition on the right is also the identity. Thus, the composition $\xymatrix{\Theta^!\ar[r]^-{\mathrm{coev}\mathbbm{1}_{\Theta^!}}&\Theta^!\Theta^*\Theta^!\ar[r]^-{\mathbbm{1}_{\Theta^!}\mathrm{ev}}&\Theta^!}$ is the identity on $\Theta^!$. A similar computation shows that the composition $\xymatrix{\Theta^*\ar[r]^-{\mathbbm{1}_{\Theta^*}\mathrm{coev}}&\Theta^*\Theta^!\Theta^*\ar[r]^-{\mathrm{ev}\mathbbm{1}_{\Theta^*}}&\Theta^*}$ is the identity on $\Theta^*$. Hence, $\Theta^*$ is left adjoint to $\Theta^!$. Now let $L\in\mathcal{A}$ be simple. Then $\Theta^*\Theta^!L$ is the complex \[\xymatrixcolsep{3.5pc}\xymatrix{ 0\ar[r]&\pi^*\pi_*L\ar[r]^-{\left(\begin{smallmatrix}\varepsilon_L\\ \pi^*\pi_*(\eta'_L)\end{smallmatrix}\right)}&L\oplus\pi^*\pi_*\pi^*\pi_*L\ar[r]^-{\left(\begin{smallmatrix}-\eta'_L&\varepsilon_{\pi^*\pi_*L}\end{smallmatrix}\right)}&\pi^*\pi_*L\ar[r]& 0}\] with $L\oplus\pi^*\pi_*\pi^*\pi_*L$ in degree $0$. By definition of the unit $\eta'$ and the counit $\varepsilon'$, the composition \[\xymatrixcolsep{3.5pc}\xymatrix{\pi^*\pi_*L \ar[r]^-{\pi^*\pi_*(\eta'_L)} & \pi^*\pi_*\pi^*\pi_*L \ar[r]^-{\pi^*(\varepsilon'_{\pi_*L})} &\pi^*\pi_*L}\] is the identity on $\pi^*\pi_*L$. Thus, $\xymatrixcolsep{3.5pc}\xymatrix{ \pi^*\pi_*L\ar[r]^-{\left(\begin{smallmatrix}\varepsilon_L\\ \pi^*\pi_*(\eta'_L)\end{smallmatrix}\right)}&L\oplus\pi^*\pi_*\pi^*\pi_*L }$ is a monomorphism. Similarly, $\xymatrixcolsep{4pc}\xymatrix{ L\oplus\pi^*\pi_*\pi^*\pi_*L\ar[r]^-{\left(\begin{smallmatrix}-\eta'_L&\varepsilon_{\pi^*\pi_*L}\end{smallmatrix}\right)}&\pi^*\pi_*L }$ is an epimorphism. Thus, $\Theta^*\Theta^!L$ is isomorphic (in $\mathrm{D^b}(\mathcal{A})$) to its zeroth cohomology $H^0(\Theta^*\Theta^!L)$. Assume that $[\pi^*\pi_*\pi^*\pi_*L]=2[\pi^*\pi_*L]$ in $K_0(\mathcal{A})$ for each simple $L\in\mathcal{A}$, then \[[H^0(\Theta^*\Theta^!L)]=[\Theta^*\Theta^! L] = [\pi^*\pi_*\pi^*\pi_*L]+[L] - 2[\pi^*\pi_*L] = [L].\] This forces $H^0(\Theta^*\Theta^!L)$ and hence $\Theta^*\Theta^!L$ to be isomorphic to $L$. In particular, $\Theta^!L\neq 0$, so Lemma \ref{keylem} (ii) gives that $\mathrm{ev}\colon\Theta^*\Theta^!L\to L$ is non-zero. Since $L$ is simple, this implies that $\mathrm{ev}\colon\Theta^*\Theta^!L\to L$ is an isomorphism. As every object in $\mathcal{A}$ is of finite length, every object in $\mathcal{A}$ is filtered by simple objects. Thus, every object in $\mathrm{D^b}(\mathcal{A})$ is filtered by shifts of the simple objects in $\mathcal{A}$.
Applying Lemma \ref{devissagetriang} now gives that $\mathrm{ev}\colon\Theta^*\Theta^!\to\mathrm{id}_{\mathcal{A}}$ is an isomorphism. A similar argument shows that $\mathrm{coev}\colon\mathrm{id}_{\mathcal{A}}\to\Theta^!\Theta^*$ is an isomorphism. Hence, $\Theta^*$ and $\Theta^!$ are mutually inverse derived equivalences. \end{example} \section{Category $\mathcal{O}$ and translation functors}\label{s:catO} \subsection{} Let $\fg\supset \fb\supset \fh$ be a complex semisimple Lie algebra, a Borel subalgebra and a Cartan subalgebra contained in it, respectively. Let $U(\fg)$ denote the universal enveloping algebra of $\fg$ and let $\mathfrak{z}\subset U(\fg)$ denote the center. Let $\mathcal{O}$ be the BGG-category $\mathcal{O}$. That is, $\mathcal{O}$ consists of all finitely generated $U(\fg)$-modules which are locally finite over $\fb$ and semisimple over $\fh$. For $\lambda\in\fh^*$ let $\Verma{\lambda}= U(\fg)\otimes_{\fb}\CC_{\lambda}$ be the Verma module; here $\CC_{\lambda}$ is the one dimensional $\fh$-module given by $\lambda$, extended to $\fb$ by letting the nilradical act by zero. Let $\simple{\lambda}$ denote the unique simple quotient of $\Verma{\lambda}$. It is well known (see \cite{BGG}) that every object in $\mathcal{O}$ has finite length and that if $L\in\mathcal{O}$ is simple, then $L\simeq \simple{\lambda}$ for some $\lambda\in\fh^*$. Let $\cdot^{\vee}\colon \mathcal{O} \to \mathcal{O}$ denote the contravariant duality on $\mathcal{O}$. Namely, if $M\in\mathcal{O}$, then $M^{\vee}$ is the vector space of those linear functions $M\to \CC$ that vanish on all but finitely many weight spaces. The $\fg$-action on $M^{\vee}$ is given by the $\fg$-action on $M$ twisted by the Chevalley anti-automorphism. If $L\in\mathcal{O}$ is simple, then $L^{\vee}\simeq L$. Furthermore, $\cdot^{\vee\vee}\simeq \mathrm{id}$. The modules $\coVerma{\lambda}=\Verma{\lambda}^{\vee}$ will be referred to as dual Verma modules. \subsection{}\label{ss:weyl} Let $W$ be the Weyl group of $\fg \supset \fb$, let $\ell \colon W \to \ZZ_{\geq 0}$ denote the length function and let $\leq$ denote the Bruhat order on $W$. In particular, $x<y$ means $x\leq y$ and $x\neq y$. The identity element in $W$ is denoted by $e$. Let $\rho\in\fh^*$ be the half sum of positive roots and let $w_0\in W$ be the longest element of the Weyl group. For $w\in W$ and $\lambda\in \fh^*$ put $w\cdot \lambda = w(\lambda+\rho)-\rho$. \subsection{}Let $\lambda\in\fh^*$ be integral dominant but perhaps singular. In other words, $\lambda$ is integral and $\lambda+\rho$ lies in the closure of the dominant Weyl chamber. Let $\mathcal{O}_{\lambda}\subset \mathcal{O}$ be the full subcategory consisting of those objects in $\mathcal{O}$ whose (generalized) infinitesimal character coincides with that of $\simple{\lambda}$. That is, those objects annihilated by some power of the annihilator in $\mathfrak{z}$ of the module $\simple{\lambda}$. For instance, the so-called principal block $\mathcal{O}_0$ consists of objects with trivial (generalized) infinitesimal character. \subsection{}Let $s\in W$ be a simple reflection. Let $\lambda\in \fh^*$ be an integral dominant weight such that the stabilizer of $\lambda$ under the `dot-action' of $W$ (see \S\ref{ss:weyl}) is $\{e, s\}$. Let $\pi_{s*}\colon \mathcal{O}_0 \to \mathcal{O}_{\lambda}$ be the functor of translation onto the $s$-wall and let $\pi_s^*\colon \mathcal{O}_{\lambda}\to \mathcal{O}_0$ be the functor of translation off the $s$-wall. The functor $\pi_s^*$ is both left and right adjoint to $\pi_{s*}$. \subsection{}Let $x\in W$.
To lighten notation we set \[ \Verma{x} = \Verma{w_0x^{-1}\cdot 0} \qquad\mbox{and} \qquad \simple{x} = \simple{w_0x^{-1}\cdot 0}.\] The following is well known: \begin{prop}[{\cite[Satz 2.10(i), Thm.\ 2.11, Satz 2.17]{Ja}}]\label{translationeffect} Let $s\in W$ be a simple reflection and let $x\in W$. Then \begin{enumerate} \item $\pi_s^*\pi_{s*}\Verma{x}$ has a filtration with subquotients isomorphic to $\Verma{x}$ and $\Verma{sx}$, each occurring with multiplicity one. \item $\pi_s^*\pi_{s*}\coVerma{x}$ has a filtration with subquotients isomorphic to $\coVerma{x}$ and $\coVerma{sx}$, each occurring with multiplicity one. \item If $sx < x$, then $\pi_s^*\pi_{s*}\simple{x} = 0$. \end{enumerate} \end{prop} \subsection{}\label{s:definetheta} Fix adjunctions $(\pi_s^*, \pi_{s*})$ and $(\pi_{s*}, \pi^*_s)$. Write $\varepsilon$ for the counit of the pair $(\pi_s^*, \pi_{s*})$ and $\eta'$ for the unit of the pair $(\pi_{s*},\pi_s^*)$. Following \cite[\S4.1.5]{Ro} and \cite{Ri} set \[ \Theta_s^* = 0\to \pi_s^*\pi_{s*} \mapright{\varepsilon} \mathrm{id} \to 0 \quad \mbox{and}\quad \Theta_s^!= 0\to\mathrm{id}\mapright{\eta'}\pi_s^*\pi_{s*}\to 0, \] with $\pi_s^*\pi_{s*}$ in degree $0$ in both cases. \begin{prop}\label{dequivO}The functors $\Theta_s^*$ and $\Theta_s^!$ are mutually inverse self-equivalences of $\mathrm{D^b}(\mathcal{O}_0)$. \end{prop} \begin{proof} Prop.\ \ref{translationeffect} (i) implies that at the level of $K_0(\mathcal{O}_0)$, $[\pi_s^*\pi_{s*}\Verma w] = [\Verma w] + [\Verma{sw}]$ for all $w\in W$. As the classes of Verma modules give a basis of $K_0(\mathcal{O}_0)$, we deduce that $[\pi_s^*\pi_{s*}\pi_s^*\pi_{s*}X]=2[\pi_s^*\pi_{s*}X]$ for all $X\in\mathcal{O}_0$. Applying Thm.\ \ref{mainthm} gives the desired result. \end{proof} \begin{lemma}\label{adjinjective}Let $s\in W$ be a simple reflection and let $x\in W$ be arbitrary. \begin{enumerate} \item The morphism $\eta'\colon \Verma x\to \pi_s^*\pi_{s*}\Verma x$ is injective. \item The morphism $\varepsilon\colon \pi^*_s\pi_{s*}\coVerma x \to \coVerma x$ is surjective. \end{enumerate} \end{lemma} \begin{proof}We will only show (i), the proof of (ii) is similar. Prop.\ \ref{translationeffect} (i) implies that $\pi_s^*\pi_{s*}\Verma x$ is non-zero and has a filtration with subquotients isomorphic to Verma modules. According to \cite[Thm.\ 7.6.6]{Dix} any morphism between Verma modules is either $0$ or injective. Arguing by induction along the Verma filtration of the target, we infer that $\eta'\colon \Verma x\to \pi_s^*\pi_{s*}\Verma x$ is either zero or injective. Lemma \ref{keylem} implies that the map is non-zero (note that $\pi_{s*}\Verma x\neq 0$ since $\pi_s^*\pi_{s*}\Verma x\neq 0$), hence injective. \end{proof} \begin{prop}\label{equivspreservevermas}Let $s\in W$ be a simple reflection and let $M\in \mathcal{O}_0$. \begin{enumerate} \item If $M$ admits a filtration with subquotients isomorphic to Verma modules, then $\Theta^!_s M$ is in $\mathcal{O}_0$, i.e., the complex $\Theta^!_s M$ has cohomology concentrated in degree $0$. \item If $M$ admits a filtration with subquotients isomorphic to dual Verma modules, then $\Theta^*_s M$ is in $\mathcal{O}_0$. \end{enumerate} \end{prop} \begin{proof}Follows from Lemma \ref{adjinjective}. \end{proof} \begin{prop}\label{behaviourofequivs}Let $s\in W$ be a simple reflection and let $x\in W$. \begin{enumerate} \item If $x<sx$, then $\Theta^*_s\Verma{x} \simeq \Verma{sx}$. \item If $x<sx$, then $\Theta^!_s \coVerma x \simeq \coVerma{sx}$. \item If $sx<x$, then $\Theta^!_s \simple{x} \simeq \simple{x}[1]$ (or equivalently $\Theta^*_s \simple{x} \simeq \simple{x}[-1]$).
\end{enumerate} \end{prop} \begin{proof} If $x<sx$, then Prop.\ \ref{translationeffect} (i) implies that $\pi_s^*\pi_{s*}\Verma{sx}$ represents a class in $\mathrm{Ext}^1(\Verma{x}, \Verma{sx})$. Using Lemma \ref{adjinjective} we deduce that $\Theta^!_s\Verma{sx}\simeq \Verma{x}$. This gives (i). The proof of (ii) is similar. For (iii), we observe that if $sx<x$, then Prop.\ \ref{translationeffect} (iii) implies $\pi_s^*\pi_{s*}\simple{x} =0$. So $\Theta^!_s \simple{x}\simeq \simple{x}[1]$. \end{proof} \begin{thm}[{Bott's Theorem, \cite[Thm.\ 15]{Bott}}]\label{bottsthm}Let $x\in W$ and let $w_0$ be the longest element in $W$. Then \[\mathrm{Ext}^i(\Verma{x}, \simple{w_0}) = \begin{cases} \CC & \mbox{if $i=\ell(xw_0)$}, \\ 0 & \mbox{otherwise}. \end{cases}\] \end{thm} \begin{proof} Let $s_1,\ldots, s_m$ be a sequence of simple reflections such that $s_1\cdots s_mx=w_0$ and such that each step increases length, i.e., $\ell(s_is_{i+1}\cdots s_mx)>\ell(s_{i+1}\cdots s_mx)$ for each $1\leq i\leq m$ (for $i=m$ the product $s_{i+1}\cdots s_mx$ is to be read as $x$). That such a sequence exists follows from $w_0$ being the longest element in $W$. Note that $m=\ell(w_0)-\ell(x)=\ell(xw_0)$. So \begin{align*} \mathrm{Ext}^i(\Verma{x}, \simple{w_0}) &= \mathrm{Ext}^i(\Theta^!_{s_m}\cdots \Theta_{s_1}^!\Verma{w_0}, \simple{w_0}) \\ &= \mathrm{Ext}^i(\Verma{w_0}, \Theta_{s_1}^*\cdots \Theta_{s_m}^* \simple{w_0}) \\ &= \mathrm{Ext}^{i-\ell(xw_0)}(\Verma{w_0}, \simple{w_0}) \\ &=\begin{cases} \CC& \mbox{if $i=\ell(xw_0)$}; \\ 0 & \mbox{otherwise}. \end{cases} \end{align*} The first equality is given by Prop.\ \ref{behaviourofequivs} (i), the second equality is by adjointness and Prop.\ \ref{dequivO}, the third equality is by Prop.\ \ref{behaviourofequivs} (iii) and the final equality is due to the fact that the Verma module $\Verma{w_0}$ (the dominant Verma module) is the projective cover of $\simple{w_0}$ in $\mathcal{O}_0$. \end{proof} \begin{remark}As noted by Bott (see the remarks at the end of \cite{Bott}), the result above gives a realization of the Weyl character formula in $K_0(\mathcal{O}_0)$: $[\simple{w_0}]=\sum_{x\in W}(-1)^{\ell(xw_0)}[\Verma{x}]$. \end{remark} \section{Tilting modules and Soergel's character formula}\label{s:lastnongraded} \subsection{}For each $w\in W$ fix a reduced word $w=s\cdots t$. Set \[ \Theta_w^* = \Theta_s^*\cdots \Theta_t^* \quad \mbox{and} \quad \Theta_w^! = \Theta_s^!\cdots\Theta^!_t.\] Up to natural isomorphism, the $\Theta_w^*$, $\Theta^!_w$ are independent of the choice of reduced word: \begin{thm}[{\cite[Thm.\ 4.4]{Ro}}]\label{braidrelsO} Let $w,w'\in W$. If $\ell(ww') = \ell(w) + \ell(w')$, then \[\Theta^*_w\Theta_{w'}^*\simeq \Theta^*_{ww'}.\] \end{thm} \begin{prop}\label{lem1}Let $w\in W$. \begin{enumerate} \item $\Theta_w^* \Verma{e} \simeq \Verma{w}$. \item $\Theta^!_w \Verma{e} \simeq \coVerma{w}$. \end{enumerate} \end{prop} \begin{proof} Let $w=s\cdots t$ be a reduced word. Then $\Theta^*_w\simeq\Theta^*_s\cdots \Theta^*_t$ by Thm.\ \ref{braidrelsO}. Hence, by Prop.\ \ref{behaviourofequivs} (i), \[ \Theta^*_w \Verma{e} \simeq \Theta^*_s \cdots \Theta^*_t \Verma{e} \simeq \Verma{s\cdots t}=\Verma{w}. \] This proves (i). The proof of (ii) is analogous (note that $\Verma{e}=\coVerma{e}$). \end{proof} \begin{lemma}\label{longelementlemma}Let $x\in W$ and let $w_0$ be the longest element in $W$. \begin{enumerate} \item $\Theta_{w_0}^*\coVerma{x}\simeq \Verma{w_0x}$. \item $\Theta^!_{w_0}\Verma{x}\simeq \coVerma{w_0x}$.
\end{enumerate} \end{lemma} \begin{proof}We have \[ \Theta_{w_0}^*\coVerma x \simeq \Theta_{w_0}^*\Theta^!_x \Verma e \simeq \Theta^*_{w_0x}\Theta^*_{x^{-1}}\Theta^!_x \Verma e \simeq \Theta_{w_0x}^*\Verma e \simeq \Verma{w_0x}. \] The first isomorphism is Prop.\ \ref{lem1} (ii), the second isomorphism follows from Thm.\ \ref{braidrelsO}, the third isomorphism follows from Prop.\ \ref{dequivO} and the last isomorphism is Prop.\ \ref{lem1} (i). This proves (i). Using Prop.\ \ref{dequivO} we deduce that $(\Theta^*_{w_0})^{-1}=\Theta^!_{w_0}$. Thus, (ii) follows from (i). \end{proof} \begin{prop}\label{triangdeltafiltr}Let $X\in\mathcal{O}_0$, then, as an object of $\mathrm{D^b}(\mathcal{O}_0)$, $X$ is filtered by objects of the form $\Verma{x}[i]$, $i\geq 0$, $x\in W$. \end{prop} \begin{proof} Let $\mathcal{O}_{\Delta}$ be the subcategory of $\mathrm{D^b}(\mathcal{O}_0)$ consisting of objects $\Verma{x}[i]$, $i\in\ZZ_{\geq 0}$, $x\in W$. We will use the notation introduced in \S\ref{s:trianggen}. If $M\in\mathcal{O}_{\Delta}^{*\infty}$, then $M[i]\in\mathcal{O}_{\Delta}^{*\infty}$ for all $i\in\ZZ_{\geq 0}$. It suffices to show that $\mathcal{O}_0 \subset \mathcal{O}_{\Delta}^{*\infty}$. Since every object in $\mathcal{O}_0$ has finite length, this reduces to showing that each $\simple{x}$, $x\in W$, is in $\mathcal{O}_{\Delta}^{*\infty}$. Proceed by induction on the length of $x$. If $\ell(x)=0$, then $x=e$ and $\simple{x}=\simple{e}=\Verma{e}$ which is clearly in $\mathcal{O}_{\Delta}^{*\infty}$. Now let $x\in W$ and assume that if $\ell(x')<\ell(x)$, then $\simple{x'}\in\mathcal{O}_{\Delta}^{*\infty}$. Let $N_x$ be the kernel of the map $\Verma{x}\twoheadrightarrow \simple{x}$. Then the exact sequence $0 \to N_x \to \Verma{x} \to \simple{x} \to 0$ gives a distinguished triangle $\Verma{x} \to \simple{x} \to N_x[1] \leadsto$ in $\mathrm{D^b}(\mathcal{O}_0)$. By the induction hypothesis, $N_x[1]\in \mathcal{O}_{\Delta}^{*\infty}$ (the composition factors of $N_x$ are of the form $\simple{x'}$ with $x'<x$, so that $\ell(x')<\ell(x)$). Consequently, $\simple{x}\in \mathcal{O}_{\Delta}^{*\infty}$. \end{proof} \subsection{} For each $x\in W$ there exists a unique (up to isomorphism) indecomposable object, denoted $D_x$, characterized by the following properties: \begin{enumerate} \item $D_x$ admits a filtration $0=V_0 \subset V_1 \subset\cdots \subset V_k=D_x$ such that each $V_i/V_{i-1}$ is isomorphic to a dual Verma module and $V_k/V_{k-1}\simeq \coVerma{x}$. \item $\mathrm{Ext}^i(D_x, \coVerma{y}) = 0$ for all $i\neq 0$ and $y\in W$. \end{enumerate} The $D_x$ are the so-called indecomposable tilting modules. They are self-dual, i.e., $D_x^{\vee}\simeq D_x$. See \cite[\S5]{So98} for a streamlined treatment of tilting modules. \subsection{}It is well known (see \cite[\S4]{BGG}) that category $\mathcal{O}$ has enough projectives. For $\lambda\in\fh^*$ let $P_{\lambda}$ denote the indecomposable projective cover of $\simple{\lambda}$. Further, for $x\in W$ let $P_x$ denote the indecomposable projective cover of $L_x$ and set $I_x = P_x^{\vee}$. The following result is the category $\mathcal{O}$ analogue of \cite[Thm.\ 6.10]{BeGi} ($D$-modules) and \cite[\S2.3]{BBM} (perverse sheaves). The proof presented here is formally the same as that of \cite[Prop.\ 2.3]{BBM}, also see \cite[Thm.\ 8]{StM}. Actually, the Radon transforms of \cite{BBM} are Koszul dual (in the sense of \cite{BGS}) to the $\Theta^*_w$. \begin{thm}\label{switchtilting}Let $x\in W$ and let $w_0$ be the longest element in $W$. Then \begin{enumerate} \item $\Theta^*_{w_0}D_x \simeq P_{w_0x}$; \item $\Theta^*_{w_0}I_x \simeq D_{w_0x}$.
\end{enumerate} \end{thm} \begin{proof} We will only prove (i), the proof of (ii) is similar. Since $D_x$ has a dual Verma filtration, Prop.\ \ref{equivspreservevermas} (ii) implies that $\Theta^*_{w_0}D_x$ lies in $\mathcal{O}_0$. Let $y\in W$ and let $i>0$. Then \[\mathrm{Ext}^i_{\mathcal{O}_0}(\Theta_{w_0}^* D_x, \Verma{y}) = \mathrm{Ext}_{\mathcal{O}_0}^i(D_x, \Theta_{w_0}^! \Verma{y} ) = \mathrm{Ext}_{\mathcal{O}_0}^i(D_x, \coVerma{w_0y}) = 0. \] The first equality is given by Prop.\ \ref{dequivO} and Thm.\ \ref{braidrelsO}. The second equality is Lemma \ref{longelementlemma} (ii) and the last equality is by the definition of $D_x$. Combining this with Prop.\ \ref{triangdeltafiltr} we deduce that if $i>0$, then $\mathrm{Ext}^i_{\mathcal{O}_0}(\Theta_{w_0}^* D_x, X)=0$ for all $X\in\mathcal{O}_0$. Thus $\Theta_{w_0}^*D_x$ is projective. Since $D_x$ is indecomposable and $\Theta_{w_0}^*$ is an equivalence, we deduce that $\Theta_{w_0}^* D_x$ is indecomposable. It remains to show that $\Theta_{w_0}^* D_x$ surjects onto $\simple{w_0x}$. As $D_x$ is self-dual, Lemma \ref{longelementlemma} (i) implies that $\Theta_{w_0}^*D_x$ surjects onto $\Verma{w_0x}$. Thus, $\Theta_{w_0}^*D_x$ surjects onto $\simple{w_0x}$. \end{proof} \begin{cor}[{\cite[Thm.\ 6.7]{So98}}]\label{tiltingcharformula}Let $x,y\in W$ and let $w_0$ be the longest element in $W$. Then, at the level of the Grothendieck group $K_0(\mathcal{O}_0)$: \[ [ D_x : \Verma{y} ] = [P_{w_0x} : \Verma{w_0y}] = [\Verma{w_0y} : \simple{w_0x}]. \] \end{cor} \begin{proof}Working in $K_0(\mathcal{O}_0)$, we have \[ [D_x : \Verma{y} ] = [\Theta^*_{w_0} D_x : \Theta^*_{w_0}\Verma{y}] = [P_{w_0x} : \coVerma{w_0y}] = [\Verma{w_0y} : \simple{w_0x}]. \] The first equality is a consequence of Prop.\ \ref{dequivO}. The second equality is obtained by combining Thm.\ \ref{switchtilting} (i) and Lemma \ref{longelementlemma} (ii) with the fact that at the level of $K_0(\mathcal{O}_0)$, $[\Theta^*_sX]= [\Theta^!_s X]$ for all $X\in\mathcal{O}_0$ and each simple reflection $s\in W$. The last equality is BGG reciprocity (see \cite[\S6 Prop.\ 2]{BGG}). \end{proof} \begin{cor}[\cite{So98}]\label{ringelselfduality}$\bigoplus_{x\in W} \mathrm{End}(P_x) \simeq \bigoplus_{x\in W}\mathrm{End}(D_x)$. \end{cor} \begin{proof}Let $w_0$ be the longest element in $W$. Then $w_0^{-1}=w_0$. Thus, Prop.\ \ref{dequivO} gives that $(\Theta^*_{w_0})^{-1}=\Theta^!_{w_0}$. So, by Thm.\ \ref{switchtilting} (i), we have \[ \bigoplus_{x\in W} \mathrm{End}(P_x)\simeq\bigoplus_{x\in W} \mathrm{End}(\Theta_{w_0}^!P_x)\simeq\bigoplus_{x\in W}\mathrm{End}(D_{w_0x})=\bigoplus_{x\in W}\mathrm{End}(D_x). \qedhere \] \end{proof} \section{Complements on graded category $\mathcal{O}$}\label{s:gO} We start by reviewing some ideas of Soergel and Stroppel. \subsection{}In the following, `graded' will always mean $\ZZ$-graded. Modules over an algebra will mean right modules. Let $A$ be a finite dimensional graded $\CC$-algebra. Let $A\mathrm{-mof}$ be the category of all finite dimensional $A$-modules and let $A\mathrm{-gmof}$ be the category of all graded finite dimensional $A$-modules. Denote by $\mathrm{Hom}_A(-,-)$ (resp.\ $\mathrm{Hom}_{A^{\mathrm{gr}}}(-,-)$) the morphisms in $A\mathrm{-mof}$ (resp.\ $A\mathrm{-gmof}$). Let $\nu\colon A\mathrm{-gmof} \to A\mathrm{-mof}$ be the functor of forgetting the grading. This is a faithful functor. Let $M=\bigoplus_{i\in\ZZ} M_i$ be a graded $A$-module with $M_i$ the component of degree $i$. For $n\in\ZZ$, define $M\langle n \rangle$ by $M\langle n \rangle_i = M_{i-n}$.
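(For example, if $M$ is concentrated in degree $0$, then $M\langle n\rangle$ is concentrated in degree $n$.)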
Thus, $\nu M\langle n \rangle = \nu M$ and $\mathrm{Hom}_A(\nu M, \nu N) = \bigoplus_{n\in\ZZ}\mathrm{Hom}_{A^{\mathrm{gr}}}(M\langle n \rangle, N)$, $M,N\in A\mathrm{-gmof}$. Let $M\in A\mathrm{-mof}$. If there is a $\tilde M \in A\mathrm{-gmof}$ such that $\nu \tilde M = M$, then we say that $\tilde M$ is a lift of $M$.\begin{lemma}Any two lifts of an indecomposable module $M\in A\mathrm{-mof}$ are isomorphic up to grading shift. \end{lemma} \begin{proof}Let $M', M''$ be two lifts of $M$. Then the identity map $M\to M$ in \[ \mathrm{Hom}_A(M, M) = \bigoplus_{n\in\ZZ}\mathrm{Hom}_{A^{\mathrm{gr}}}(M'\langle n \rangle, M'')\] decomposes into homogeneous components $\mathrm{id} = \sum_n \mathrm{id}_n$. By the Fitting Lemma, $\mathrm{Hom}_A(M, M)$ is a local ring. Thus, $\mathrm{id}_j$ must be invertible for some $j$, i.e., $M'\langle j \rangle \simeq M''$. \end{proof} \begin{prop}\label{projprop}Let $P\in A\mathrm{-mof}$ be an indecomposable projective. Then any lift of $P$ is an indecomposable projective in $A\mathrm{-gmof}$. \end{prop} \begin{proof}Let $\tilde P$ be a lift of $P$. Let $0\to M\to N\mapright{f} \tilde P \to 0$ be an exact sequence in $A\mathrm{-gmof}$. As $\nu \tilde P = P$ is projective, there exists $g\in \mathrm{Hom}_A (P, \nu N)$ such that $fg = \mathrm{id}_P$. Let $g = \sum_i g_i$ be the decomposition of $g$ into homogeneous components corresponding to the decomposition $\mathrm{Hom}_A(P, \nu N) = \bigoplus_{n\in\ZZ}\mathrm{Hom}_{A^{\mathrm{gr}}}(\tilde P\langle n \rangle, N)$. By the Fitting Lemma, $\mathrm{End}_A(P)$ is a local ring. Hence, $fg_j$ is invertible for some $j$. Let $h\in \mathrm{Hom}_{A^{\mathrm{gr}}}(\tilde P\langle -j\rangle, \tilde P )$ denote the inverse of $fg_j$, then $g_jh$ is homogeneous of degree $0$ and $fg_jh=\mathrm{id}_{\tilde P}$. Thus, $\tilde P$ is projective. That it is indecomposable is clear. \end{proof} \subsection{} Let $S=S(\fh)$ denote the algebra of regular functions on $\fh^*$. We consider $S$ as an evenly graded algebra with linear functions in degree $2$. Let $S_+\subset S$ denote the maximal ideal consisting of functions that vanish at $0$. Let $S_+^W\subset S_+$ denote the ideal of $S$ generated by the $W$-invariant (for the regular action) functions in $S_+$. Set $C=S/S_+^W$; this is the so-called coinvariant algebra of $W$. Let $\lambda\in\fh^*$ be integral dominant. Let $W_{\lambda}\subseteq W$ denote the stabilizer of $\lambda$ under the dot action (see \S\ref{ss:weyl}). \begin{thm}[{\cite[Endomorphismensatz 7]{So90}}] There is an isomorphism of algebras \[ \mathrm{End}_{\fg}(\projective{w_0\cdot \lambda}) \simeq C^{\lambda}, \] where $C^{\lambda}$ denotes the subalgebra of $W_{\lambda}$-invariants in $C$. \end{thm} \subsection{} Define \[ \VV\colon \mathcal{O}_{\lambda} \to C^{\lambda}\mathrm{-mof}, \quad M\mapsto \mathrm{Hom}_{\fg}(\projective{w_0\cdot \lambda}, M).\] \begin{thm}[{\cite[Struktursatz 9]{So90}}]The functor $\VV$ is full and faithful on projective objects. \end{thm} \subsection{}\label{s:gradedrings}Certainly $C$ and $C^{\lambda}$ inherit a grading from $S$. According to \cite[Thm.\ 2.1]{St}, if $P\in \mathcal{O}_{\lambda}$ is projective, then $\VV P$ admits a lift. Let $[W/W_{\lambda}]$ denote the set of minimal length coset representatives of $W/W_{\lambda}$. For each $x\in [W/W_{\lambda}]$, let $\widetilde{\VV \projective{x\cdot \lambda}}$ be a fixed lift of $\VV \projective{x\cdot\lambda}$ with highest non-zero component in degree $\ell(x)$.
Set \begin{align*} A_{\lambda} &= \mathrm{End}_{\fg}(\bigoplus_{x\in [W/W_{\lambda}]} \projective{x\cdot\lambda}) \\ &= \mathrm{End}_{C^{\lambda}}(\bigoplus_{x\in [W/W_{\lambda}]}\VV \projective{x\cdot \lambda}) \\ &= \bigoplus_{n\in\ZZ}\mathrm{Hom}_{(C^{\lambda})^{\mathrm{gr}}}(\bigoplus_{x\in [W/W_{\lambda}]}\widetilde{\VV P_{x\cdot \lambda}}\langle n\rangle,\bigoplus_{x\in [W/W_{\lambda}]}\widetilde{\VV P_{x\cdot \lambda}}).\end{align*} In particular, $A_{\lambda}$ is a graded ring. Furthermore, as $\bigoplus_{x\in [W/W_{\lambda}]} \projective{x\cdot \lambda}$ is a minimal projective generator of $\mathcal{O}_{\lambda}$, there is an equivalence of categories \[ \mathcal{O}_{\lambda} \mapright{\sim} A_{\lambda}\mathrm{-mof}, \quad M\mapsto \mathrm{Hom}_{\fg}(\bigoplus_{x\in [W/W_{\lambda}]} \projective{x\cdot \lambda}, M).\] We will not distinguish between $\mathcal{O}_{\lambda}$ and $A_{\lambda}\mathrm{-mof}$. If $\lambda=0$, we simply write $A$ instead of $A_0$. Set $\mathcal{O}_{\lambda}^{\ZZ} = A_{\lambda}\mathrm{-gmof}$. \begin{thm}[{\cite[Thm.\ 8.1, Thm.\ 8.2]{St}}]\label{stadjoints}Let $s\in W$ be a simple reflection. The translation functors $\pi_{s*}$ and $\pi_s^*$ are gradable. More precisely, there exist functors $\theta_0^{\lambda}\colon \mathcal{O}_0^{\ZZ} \to \mathcal{O}_{\lambda}^{\ZZ}$ and $\theta^0_{\lambda}\colon \mathcal{O}_{\lambda}^{\ZZ}\to \mathcal{O}_0^{\ZZ}$ that commute with grading shifts and are such that the following diagrams commute \[\xymatrix{ \mathcal{O}_0^{\ZZ}\ar[r]^-{\theta_0^{\lambda}}\ar[d]_-{\nu} & \mathcal{O}_{\lambda}^{\ZZ}\ar[d]^-{\nu} \\ \mathcal{O}_0\ar[r]^-{\pi_{s*}} & \mathcal{O}_{\lambda} }\qquad \xymatrix{ \mathcal{O}_{\lambda}^{\ZZ}\ar[r]^-{\theta_{\lambda}^0}\ar[d]_-{\nu} & \mathcal{O}_{0}^{\ZZ}\ar[d]^-{\nu} \\ \mathcal{O}_{\lambda}\ar[r]^-{\pi_{s}^*} & \mathcal{O}_{0} } \] (here $\lambda$ is an integral dominant weight with stabilizer $\{e,s\}$). \end{thm} \begin{thm}[{\cite[Thm.\ 8.4]{St}}] The functor $\theta^0_{\lambda}$ is left adjoint to $\theta_0^{\lambda}\langle -1\rangle$ and the functor $\theta^0_{\lambda}\langle -1\rangle$ is right adjoint to $\theta_{0}^{\lambda}$. \end{thm} \begin{warning}There is a misprint in \cite[Thm.\ 8.4]{St}. The result therein states that $\theta_{\lambda}^0$ is left adjoint to $\theta_0^{\lambda}\langle 1 \rangle$. However, examining its proof, we have $\mathrm{Hom}_{\mathcal{O}_0^{\ZZ}}(\theta_{\lambda}^0M, N) \simeq \mathrm{Hom}_{\mathcal{O}_{\lambda}^{\ZZ}}(M, N\otimes W^{\circledast})$, where, in the notation of \cite{St}, $W^{\circledast}= \mathrm{Hom}_{C^{\lambda}}(\VV P_{\lambda}, \mathrm{res}\, \VV P)\langle -1\rangle$ (see two lines above \cite[Cor.\ 8.5]{St}). Further, $\theta_0^{\lambda}=-\otimes\mathrm{Hom}_{C^{\lambda}}(\VV P_{\lambda}, \mathrm{res}\, \VV P)$ in \cite{St} (see \cite[Thm.\ 8.1]{St}). \end{warning} \subsection{}We now work mainly with the principal block, i.e., the categories $\mathcal{O}_0$ and $\mathcal{O}_0^{\ZZ}$. For each $x\in W$, set \[ \gprojective{x} = \bigoplus_{n\in\ZZ}\mathrm{Hom}_{C^{\mathrm{gr}}}(\bigoplus_{y\in W} \widetilde{\VV\projective{y}}\langle n\rangle, \widetilde{\VV \projective{x}}).\] By definition, $\gprojective{x}\in\mathcal{O}^{\ZZ}_0$ is a lift of $\projective{x}$; by Prop.\ \ref{projprop}, each $\gprojective{x}$ is an indecomposable projective in $\mathcal{O}^{\ZZ}_0$. Let $\gsimple{x}$ denote the unique irreducible quotient of $\gprojective{x}$. Certainly $\nu \gsimple{x}$ is irreducible, and we deduce that $\gsimple{x}$ is a lift of $\simple{x}$.
By \cite[Thm.\ 2.1]{St}, the $\gsimple{x}$ are concentrated in degree $0$. Finally, according to \cite[\S3.3]{St}, Verma modules admit lifts. We let $\gVerma{x}$ denote the lift of $\Verma{x}$ that has $\gsimple{x}$ as its unique simple quotient. \begin{warning}Not all objects of $\mathcal{O}$ lift, see \cite[\S4]{St}. \end{warning} \subsection{}Let $s$ be a simple reflection and let $\theta_0^{\lambda}$ and $\theta_{\lambda}^0$ be as in Thm.\ \ref{stadjoints}. Let $\theta_s = \theta_{\lambda}^0\theta_0^{\lambda}$. \begin{thm}[{\cite[Thm.\ 3.6, Thm.\ 5.3]{St}}]\label{ses}Let $x\in W$. \begin{enumerate} \item If $sx<x$, then there is a short exact sequence \[ 0 \to \gVerma{x}\langle 1 \rangle \to \theta_s \gVerma{x} \to \gVerma{sx} \to 0.\] \item If $x<sx$, then there is a short exact sequence \[ 0 \to \gVerma{sx} \to\theta_s\gVerma{x} \to \gVerma{x}\langle -1\rangle \to 0.\] \end{enumerate} \end{thm} \subsection{}Set \[ \pi_{s*} = \theta_0^{\lambda}, \quad \pi_s^*=\theta_{\lambda}^0\langle 1 \rangle, \quad \pi_s^! = \theta_{\lambda}^0\langle -1\rangle. \] Then we have adjunctions $(\pi_s^*, \pi_{s*})$ and $(\pi_{s*}, \pi_s^!)$. Note that $\pi_s^! = \pi_s^*\langle -2\rangle$. Let $\eta'$ be the unit of $(\pi_{s*}, \pi_s^!)$ and let $\varepsilon$ be the counit of $(\pi_s^*, \pi_{s*})$. Define complexes of functors \[ \TT_s = 0 \to \pi_s^*\pi_{s*}\mapright{\varepsilon} \mathrm{id} \to 0 \quad \mbox{and} \quad \TT_s^{-1} = 0 \to \mathrm{id} \mapright{\eta'} \pi_s^!\pi_{s*} \to 0, \] with $\pi_s^*\pi_{s*}$ (resp.\ $\pi_s^!\pi_{s*}$) in cohomological degree $0$. It is straightforward to verify that there are natural isomorphisms $\nu \TT_s \simeq \Theta_s^*\nu$ and $\nu\TT_s^{-1} \simeq \Theta_s^! \nu$ (see \cite[Prop.\ 2.2]{Bass}). \begin{thm}\label{dequivOgraded}The functors $\TT_s$ and $\TT_s^{-1}$ are mutually inverse equivalences of $\mathrm{D^b}(\mathcal{O}_0^{\ZZ})$. \end{thm} \begin{proof}Let $x\in W$. Using Thm.\ \ref{ses} we compute in $K_0(\mathcal{O}^{\ZZ}_0)$: if $sx<x$, then $[\pi^*_s\pi_{s*}\gVerma{x}] = [\gVerma{x}\langle 2\rangle] + [\gVerma{sx}\langle 1\rangle]$ \[ [\pi^!_s\pi_{s*}\pi_s^*\pi_{s*}\gVerma{x}] = [\gVerma{x}\langle 2 \rangle]+ [\gVerma{sx}\langle 1 \rangle]+ [\gVerma{x}] + [\gVerma{sx}\langle -1 \rangle] = [\pi^*_s\pi_{s*}\gVerma{x}] + [\pi^!_s\pi_{s*}\gVerma{x}]. \] If $sx>x$, then $[\pi^*_s\pi_{s*}\gVerma{x}]= [\gVerma{sx}\langle 1\rangle] + [\gVerma{x}]$. \[ [\pi^!_s\pi_{s*}\pi_s^*\pi_{s*}\gVerma{x}] = [\gVerma{sx}\langle 1\rangle] + [\gVerma{x}] + [\gVerma{sx}\langle -1\rangle] + [\gVerma{x}\langle -2\rangle] = [\pi_s^*\pi_{s*}\gVerma{x}] + [\pi_s^!\pi_{s*}\gVerma{x}]. \] Further, $\pi^!_s\pi_{s*}\pi_s^*\pi_{s*}=\pi_s^*\pi_{s*}\pi_s^!\pi_{s*}$. As the graded Verma modules $\gVerma{x}\langle n\rangle$, $x\in W$, $n\in\ZZ$, constitute a basis of $K_0(\mathcal{O}_0^{\ZZ})$ we deduce that we are in the situation of Thm.\ \ref{mainthm}. Consequently, $\TT_s$ and $\TT_s^{-1}$ are mutually inverse equivalences. \end{proof} \subsection{}By \cite[\S6]{St} there is a `graded duality' $\DD\colon \mathcal{O}_0^{\ZZ} \to \mathcal{O}_0^{\ZZ}$. The functor $\DD$ is contravariant, commutes with reflection across the wall (i.e., $\DD\theta_s \simeq \theta_s\DD$) and satisfies the following: \[ \DD^2 \simeq \mathrm{id}, \quad \DD(M\langle n \rangle) \simeq (\DD M) \langle -n \rangle, \quad \nu(\DD M) \simeq (\nu M)^{\vee}, \quad \DD\gsimple{x} \simeq \gsimple{x}, \] for all $M\in \mathcal{O}_0^{\ZZ}$, $n\in \ZZ$ and $x\in W$. We set $\gcoVerma{x} = \DD \gVerma{x}$. 
It is clear that $\gcoVerma{x}$ is a lift of $\coVerma{x}$. \begin{lemma}\label{gradedadjinjective}Let $s\in W$ be a simple reflection and let $x\in W$ be arbitrary. \begin{enumerate} \item The morphism $\eta'\colon \gVerma{x} \to \pi_s^!\pi_{s*}\gVerma{x}$ is injective. \item The morphism $\varepsilon\colon \pi_s^*\pi_{s*}\gcoVerma{x}\to \gcoVerma{x}$ is surjective. \end{enumerate} \end{lemma} \begin{proof}Left to the reader (see Lemma \ref{adjinjective}). \end{proof} \begin{prop}\label{gbehaviourofequivs}Let $s\in W$ be a simple reflection and let $x\in W$. \begin{enumerate} \item If $x<sx$, then $\TT_s\gVerma{x}\langle -1\rangle \simeq \gVerma{sx}$. \item If $x<sx$, then $\TT_s^{-1}\gcoVerma{x}\langle 1 \rangle \simeq \gcoVerma{sx}$. \item If $sx<x$, then $\TT_{s}^{-1}\gsimple{x} \simeq \gsimple{x}[1]$ (or equivalently $\TT_s \gsimple{x} \simeq \gsimple{x}[-1]$). \end{enumerate} \end{prop} \begin{proof}This is proved in exactly the same way as Prop.\ \ref{behaviourofequivs}. If $x<sx$, then by Thm.\ \ref{ses}(i), the object $\pi_s^!\pi_{s*}\gVerma{sx}=\theta_s \gVerma{sx}\langle -1 \rangle$ represents a class in $\mathrm{Ext}^1(\gVerma{x}\langle -1\rangle,\gVerma{sx})$. Using Lemma \ref{gradedadjinjective} (i) we deduce that $\TT_s^{-1}\gVerma{sx}\simeq \gVerma{x}\langle -1 \rangle$. This shows (i). For (ii), we have \[ \pi_{s}^*\pi_{s*}\gcoVerma{sx}=\theta_s\gcoVerma{sx}\langle 1 \rangle = \theta_s\DD(\gVerma{sx}\langle -1\rangle)=\DD(\theta_s\gVerma{sx}\langle -1\rangle). \] So, applying $\DD$ to Thm.\ \ref{ses} (i), we deduce that $\pi_{s}^*\pi_{s*}\gcoVerma{sx}$ represents a class in $\mathrm{Ext}^1(\gcoVerma{sx}, \gcoVerma{x}\langle 1\rangle)$. Using Lemma \ref{gradedadjinjective} (ii) we obtain that $\TT_s\gcoVerma{sx} \simeq \gcoVerma{x}\langle 1\rangle$. This proves (ii). If $sx<x$, then by Thm.\ \ref{stadjoints} and Prop.\ \ref{translationeffect} (iii), we have that $\nu\pi_s^*\pi_{s*}\gsimple{x} =0$. Thus, $\pi_{s}^*\pi_{s*}\gsimple{x} =0$. This implies (iii). \end{proof} \begin{prop}[{cf.\ Thm.\ \ref{bottsthm}}] Let $n\in \ZZ$, $w\in W$ and let $w_0$ be the longest element in $W$. Then \[ \mathrm{Ext}^i(\gVerma{w}\langle n \rangle ,\gsimple{w_0}) = \begin{cases} \CC &\mbox{if $i=\ell(ww_0)$ and $n=-\ell(ww_0)$;} \\ 0 &\mbox{otherwise}. \end{cases}\] \end{prop} \begin{proof} This is proved in exactly the same way as Thm.\ \ref{bottsthm}. Let $s_1,\ldots, s_m$ be a sequence of simple reflections such that $s_1\cdots s_mw=w_0$ and $\ell(s_i\cdots s_mw)< \ell(s_{i-1}\cdots s_m w)$ for each $1< i \leq m$. Note that $m=\ell(w_0)-\ell(w) = \ell(ww_0)$. We have \begin{align*} \mathrm{Ext}^i(\gVerma{w}\langle n \rangle, \gsimple{w_0}) & = \mathrm{Ext}^i(\TT_{s_m}^{-1} \cdots \TT_{s_1}^{-1}\gVerma{w_0}\langle m+n\rangle, \gsimple{w_0}) \\ &= \mathrm{Ext}^i(\gprojective{w_0}\langle m+n\rangle, \TT_{s_1}\cdots \TT_{s_m}\gsimple{w_0}) \\ &= \mathrm{Ext}^{i-\ell(ww_0)}(\gprojective{w_0}\langle m+n \rangle, \gsimple{w_0}) \\ &=\begin{cases} \CC &\mbox{if $i=\ell(ww_0)$ and $n=-\ell(ww_0)$;} \\ 0 &\mbox{otherwise}. \end{cases}\qedhere \end{align*} \end{proof} \subsection{}For each $w\in W$ fix a reduced word $w=s\cdots t$. Set \[ \TT_{w} = \TT_s \cdots \TT_t \quad \mbox{and} \quad \TT_w^{-1}= \TT_t^{-1} \cdots \TT_s^{-1}.\] \begin{thm}[\cite{Ro}]\label{gradedbraid}Let $w,w'\in W$. If $\ell(ww')=\ell(w)+\ell(w')$, then \[ \TT_w\TT_{w'} \simeq \TT_{ww'}.\] \end{thm} \begin{proof}This follows from \cite[Prop.\ 3.2]{Ro}, since all the isomorphisms in \emph{loc.
cit.} are of complexes of \emph{graded} bimodules. \end{proof} \subsection{}\label{s:gtilting}Let $w_0$ be the longest element in $W$. For each $x\in W$, set \begin{equation}\label{definegtilt} \gtilting{x} = \TT_{w_0}^{-1}\gprojective{w_0x}\langle \ell(w_0)\rangle.\end{equation} Then $\gtilting{x}$ is a lift of the tilting module $D_x$. Further, using Prop.\ \ref{gbehaviourofequivs} we deduce that \begin{align*} \TT_{w_0}\gcoVerma{x} &\simeq \TT_{w_0}\TT^{-1}_{x^{-1}}\gVerma{e}\langle \ell(x)\rangle\\ &\simeq \TT_{w_0x}\TT_{x^{-1}}\TT^{-1}_{x^{-1}}\gVerma{e}\langle \ell(x) \rangle\\ &\simeq \TT_{w_0x}\gVerma{e}\langle \ell(x) \rangle \\ &\simeq \gVerma{w_0x} \langle \ell(x) + \ell(w_0x)\rangle \\ &= \gVerma{w_0x} \langle \ell(w_0)\rangle. \end{align*} Thus, $\gtilting{x}$ is the unique (up to isomorphism) indecomposable object in $\mathcal{O}_0^{\ZZ}$ satisfying the following properties: \begin{enumerate} \item $\gtilting{x}$ admits a filtration $0 = V_0 \subset V_1 \subset \cdots \subset V_k = \gtilting{x}$ such that $V_i/V_{i-1}$ is isomorphic to the shift of a graded dual Verma module and $V_k/V_{k-1}\simeq \gcoVerma{x}$. \item $\mathrm{Ext}^i(\gtilting{x}, \gcoVerma{y}\langle n \rangle)=0$ for all $i\neq 0$, $n\in \ZZ$ and $y\in W$. \end{enumerate} \begin{prop}\label{tiltselfdual}The modules $\gtilting{x}$ are self-dual, i.e., $\DD\gtilting{x}\simeq \gtilting{x}$. \end{prop} \begin{proof} As $\nu\DD \gtilting{x} \simeq (\nu\gtilting{x})^{\vee} = D_x^{\vee} \simeq D_x$, we must have $\DD\gtilting{x} \simeq \gtilting{x}\langle n \rangle$ for some $n\in\ZZ$. Now $\gsimple{x}\langle m\rangle$ occurs as a subquotient of $\gtilting{x}\langle n\rangle$ if and only if $m=n$. On the other hand $\DD \gcoVerma{x} \simeq \gVerma{x}$ occurs as a submodule of $\gtilting{x}$. The result follows. \end{proof} \section{Complements on Kazhdan-Lusztig theory}\label{s:kl} \subsection{}The Hecke algebra $\mathcal{H}$ is the free $\ZZ[v,v^{-1}]$-module $\bigoplus_{x\in W}\ZZ[v,v^{-1}]H_x$ with $\ZZ[v,v^{-1}]$-algebra structure given by \begin{align} H_xH_y &= H_{xy} && \mbox{if $\ell(xy)=\ell(x) + \ell(y)$}, \label{braidh}\\ (H_s+v)(H_s-v^{-1}) &= 0 && \mbox{if $s$ is a simple reflection}. \label{quadh} \end{align} \subsection{} There is a unique ring automorphism $d\colon \mathcal{H}\to\mathcal{H}$ defined by \[ d(v)=v^{-1}, \quad d(H_x)=H_{x^{-1}}^{-1}. \] An element $C\in\mathcal{H}$ is called self dual if $d(C)=C$. For each $x\in W$ there exists a unique self-dual element $C_x$ such that $C_x\in H_x + \sum_{y}v\ZZ[v]H_y$ (see \cite{KL}). \subsection{} Let $b\colon \mathcal{H}\to\mathcal{H}$ be the ring automorphism defined by \[ b(v) = -v^{-1}, \quad b(H_x)=H_x.\] Then $b$ commutes with $d$. Thus, $C'_x=b(C_x)$ is the unique self-dual element such that $C'_x\in H_x + \sum_y v^{-1}\ZZ[v^{-1}]H_y$. \subsection{}Consider the Grothendieck group $K_0(\mathcal{O}_0^{\ZZ})$. For $[X]\in K_0(\mathcal{O}_0^{\ZZ})$, set \[ v^n[X]=[X\langle -n \rangle], \quad H_x[X] = [\TT_xX\langle -\ell(x) \rangle].\] This defines an action of $\mathcal{H}$ on $K_0(\mathcal{O}_0^{\ZZ})$. The relations \eqref{braidh} follow from Thm.\ \ref{gradedbraid}. 
To see \eqref{quadh}, let $s$ be a simple reflection; then \begin{align*} [(\TT_s\langle -1\rangle + \mathrm{id}\langle -1\rangle)(\TT_s\langle -1 \rangle - \mathrm{id}\langle 1 \rangle)X] &=[(\pi_s^*\pi_{s*}\langle -1\rangle)(\TT_s\langle -1 \rangle - \mathrm{id}\langle 1 \rangle)X] \\ &=[(\pi_s^!\pi_{s*}\langle 1 \rangle)(\TT_s\langle -1 \rangle - \mathrm{id}\langle 1 \rangle)X] \\ &=[(\TT_s^{-1}\langle 1 \rangle + \mathrm{id}\langle 1 \rangle)(\TT_s\langle -1 \rangle - \mathrm{id}\langle 1 \rangle)X] \\ &=[(\mathrm{id} - \TT_{s}^{-1}\langle 2 \rangle + \TT_s - \mathrm{id}\langle 2\rangle)X]\\ &=[(\mathrm{id} - \pi_s^*\pi_{s*} + \mathrm{id}\langle 2 \rangle + \pi_s^*\pi_{s*}-\mathrm{id} - \mathrm{id}\langle 2\rangle)X]\\ &=0. \end{align*} \subsection{}The map $\phi\colon \mathcal{H} \to K_0(\mathcal{O}_0^{\ZZ})$, $H\mapsto H[\gVerma{e}]$ defines an isomorphism of left $\mathcal{H}$-modules. By Prop.\ \ref{gbehaviourofequivs} we have that $\phi(H_x) = [\gVerma{x}]$. Further, the map $\phi$ intertwines the automorphism $d$ and the contravariant duality, i.e., $\phi(d(H)) = \DD\phi(H)$. Thus, self-dual elements in $\mathcal{H}$ map to elements $[L]\in K_0(\mathcal{O}_0^{\ZZ})$ such that $[\DD L]=[L]$. Certainly, $[\DD \gsimple{x}] = [\gsimple{x}]$. The Kazhdan-Lusztig conjecture (a theorem for some 30 years now), concerning multiplicities of simple modules in Verma modules \cite{KL}, can be formulated as \begin{equation}\label{kleq}\tag{*}\phi^{-1}([\gsimple{x}]) = b(C_x).\end{equation} Unfortunately (but not surprisingly), the work we have done so far does not give enough information to prove this. The problem is that although the $\phi^{-1}([\gsimple{x}])$ are self-dual, we do not have enough information to infer \begin{equation}\label{uppertriang}\tag{**}\phi^{-1}([\gsimple{x}])\in H_x + \sum_y v^{-1}\ZZ[v^{-1}]H_y.\end{equation} However, let's at least get the following out of the way. \begin{thm}[{cf.\ \cite[Thm.\ 4.4]{So08}}]\label{klequivtilt}The following two statements are equivalent: \begin{enumerate} \item $\phi^{-1}([\gsimple{x}]) = b(C_x)$. \item $\phi^{-1}([\gtilting{x}])=C_x$. \end{enumerate} \end{thm} \begin{proof} The Grothendieck group $K_0(\mathcal{O}_0^{\ZZ})$ comes with a symmetric $\ZZ[v,v^{-1}]$-bilinear form given by \[ \langle [M], [N] \rangle = \sum_{i,n} (-1)^i \dim\,\mathrm{Ext}^i(M, \DD N \langle n \rangle)v^{-n}, \] for $M,N\in\mathcal{O}_0^{\ZZ}$. With respect to this form the $[\gprojective{x}]$ and the $[\gsimple{x}]$ are dual bases, whereas the $[\gVerma{x}]$ form an orthonormal basis. Via $\phi$ this descends to the $\ZZ[v,v^{-1}]$-bilinear form on $\mathcal{H}$ defined by $\langle H_x ,H_y\rangle =\delta_{x,y}$. Let $\{P_x\}_{x\in W}$ be the basis dual to $\{b(C_x)\}_{x\in W}$ in $\mathcal{H}$. In \cite[\S3]{Virkh}, the basis dual to $\{C_x\}_{x\in W}$ is constructed combinatorially; denote this basis by $\{P'_x\}_{x\in W}$. Then in \cite[Thm.\ 4.3]{Virkh} it is shown that $b(C_x)H_{w_0} = P'_{xw_0}$ for all $x\in W$. Let $i\colon \mathcal{H}\to\mathcal{H}$ denote the ring anti-automorphism given by $i(v)=v$ and $i(H_x)=H_{x^{-1}}$. The morphisms $b,d$ and $i$ pairwise commute. Consequently, applying $i$ to $b(C_x)H_{w_0} = P'_{xw_0}$ we infer that $H_{w_0}b(C_{x^{-1}}) = P'_{w_0x^{-1}}$, or equivalently $H_{w_0}b(C_{x}) = P'_{w_0x}$ for all $x\in W$. On the other hand, it is clear that $P'_{x} = b(P_x)$ for all $x\in W$. Thus, applying $b$ to the above, we deduce that \[ H_{w_0}C_x = P_{w_0x} \] for all $x\in W$.
Combining this with \eqref{definegtilt} gives the result. \end{proof} \begin{assumption}\label{koszul} The ring $A$ is positively graded, i.e., $A=\bigoplus_{i\geq 0}A_i$, where $A_i$ is the homogeneous component of degree $i$. Further, the ring $A_0$ is semisimple. \end{assumption} \begin{remark}The above assumption is known to be true \cite[Lemma 19, Erweiterungssatz 17]{So90}; see also \cite{BGS}. However, as far as I am aware, all known proofs of this require geometric arguments. \end{remark} \begin{thm}\label{klconj}If Assumption \ref{koszul} holds, then \eqref{kleq} holds. \end{thm} \begin{proof}Let $x\in W$. As $A$ is positively graded and the unique simple quotient of $\gVerma{x}$ (namely $\gsimple{x}$) is concentrated in degree $0$, we infer that $\gVerma{x}$ is concentrated in degrees $\geq 0$. Since $A_0$ is semisimple, the degree $0$ component of $\gVerma{x}$ is also semisimple. This forces the degree $0$ component of $\gVerma{x}$ to be $\gsimple{x}$. Thus, at the level of $K_0(\mathcal{O}_0^{\ZZ})$ we have \[ [\gVerma{x}] = [\gsimple{x}]+ \sum_{y < x,\, n>0} m_{y,x,n}[\gsimple{y}\langle n\rangle], \] for some $m_{y,x,n}\in \ZZ_{\geq 0}$. By induction on the Bruhat order this implies \[ [\gsimple{x}] = [\gVerma{x}] + \sum_{y<x,\, n>0} m'_{y,x,n}[\gVerma{y}\langle n\rangle] \] for some $m'_{y,x,n}\in \ZZ$. This gives \eqref{uppertriang}, which immediately yields \eqref{kleq}. \end{proof}
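Before leaving this circle of ideas, let us spell out \eqref{kleq} in the smallest case; the following is only a sanity check, using nothing beyond the conventions fixed above (together with Assumption~\ref{koszul}, which is classical for $\mathfrak{sl}_2$). For $\fg=\mathfrak{sl}_2$ we have $W=\{e,s\}$, the Verma module $\gVerma{e}=\gsimple{e}$ is simple, and the argument in the proof of Thm.\ \ref{klconj} gives $[\gVerma{s}] = [\gsimple{s}] + [\gsimple{e}\langle n\rangle]$ for a single $n>0$, so that $\phi^{-1}([\gsimple{s}])=H_s-v^{-n}H_e$. Self-duality pins down $n$: from \eqref{quadh} we get $d(H_s)=H_s^{-1}=H_s+(v-v^{-1})H_e$, hence \[ d\bigl(H_s - v^{-n}H_e\bigr) = H_s + (v - v^{-1} - v^{n})H_e, \] which equals $H_s - v^{-n}H_e$ precisely when $n=1$. Therefore \[ \phi^{-1}([\gsimple{s}]) = H_s - v^{-1}H_e = b(C_s), \] since $C_s=H_s+vH_e$ and $b(v)=-v^{-1}$, in accordance with \eqref{kleq}.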
\section{Introduction} A class of graphs is {\em nowhere dense} if for every $r \geq 1$, there exists $t\geq 1$ such that no graph in the class contains a subdivision of the complete graph $K_t$ where each edge is subdivided at most $r$ times as a subgraph. Examples of nowhere dense classes include most sparse graph classes studied in the literature, such as planar graphs, graphs with bounded treewidth, graphs excluding a fixed (topological) minor, graphs with bounded maximum degree, graphs that can be drawn in the plane with a bounded number of crossings per edge, and more generally graph classes with bounded expansion. At first sight, being nowhere dense might seem a weak requirement for a graph class to satisfy. Yet, this notion captures just enough structure to allow solving a wide range of algorithmic problems efficiently: In their landmark paper, Grohe, Kreutzer, and Siebertz~\cite{GKS17} proved for instance that every first-order property can be decided in almost linear time on graphs belonging to a fixed nowhere dense class. One reason nowhere dense classes attracted much attention in recent years is the realization that they can be characterized in several, seemingly different ways. Algorithmic applications in turn typically build on the `right' characterization for the problem at hand and sometimes rely on multiple ones, such as in the proof of Grohe {\it et al.}~\cite{GKS17}. Nowhere dense classes were characterized in terms of shallow minor densities~\cite{NOdM-nowhere-dense} and consequently in terms of generalized coloring numbers (by results from \cite{Zhu09}), low tree-depth colorings~\cite{NOdM-nowhere-dense} (by results from \cite{NOdM-decomp}), and subgraph densities in shallow minors \cite{Taxi_hom}; they were also characterized in terms of quasi-uniform wideness~\cite{NOdM10, KRS17, PST17}, the so-called splitter game~\cite{GKS17}, sparse neighborhood covers~\cite{GKS17}, neighborhood complexity~\cite{EGKKPRS16}, the model-theoretic notion of stability~\cite{AA14}, as well as existence of particular analytic limit objects~\cite{NodM-modelings}. The reader is referred to the survey on nowhere dense classes by Grohe, Kreutzer, and Siebertz~\cite{GKS13} for an overview of the different characterizations, and to the textbook by Ne{\v{s}}et{\v{r}}il and Ossona de Mendez~\cite{NOdM-book} for a more general overview of the various notions of sparsity for graphs (see also \cite{SurveyND}). The main contribution of this paper is a new characterization of nowhere dense classes that brings together graph structure theory and the combinatorics of partially ordered sets (posets). Informally, we show that the property of being nowhere dense can be captured by looking at the dimension of posets whose order diagrams are in the class when seen as graphs. Recall that the {\em dimension $\dim(P)$} of a poset $P$ is the least integer $d$ such that the elements of $P$ can be embedded into $\mathbb{R}^d$ in such a way that $x<y$ in $P$ if and only if the point of $x$ is below the point of $y$ with respect to the product order of $\mathbb{R}^d$. Dimension is a key measure of a poset's complexity. The standard way of representing a poset is to draw its \emph{diagram}: First, we draw each element as a point in the plane, in such a way that if $a<b$ in the poset then $a$ is drawn below $b$. Then, for each relation $a<b$ in the poset not implied by transitivity (these are called \emph{cover relations}), we draw a $y$-monotone curve going from $a$ up to $b$.
The diagram implicitly defines a corresponding undirected graph, where edges correspond to pairs of elements in a cover relation. This is the \emph{cover graph} of the poset. Let us also recall that the {\em height} of a poset is the maximum size of a chain in the poset (a set of pairwise comparable elements). Recall that a {\em monotone} class means a class closed under taking subgraphs. Our main result is the following theorem. \begin{theorem} \label{thm:main} Let $\mathcal{C}$ be a monotone class of graphs. Then $\mathcal{C}$ is nowhere dense if and only if for every integer $h\geq 1$ and real number $\epsilon>0$, $n$-element posets of height at most $h$ whose cover graphs are in $\mathcal{C}$ have dimension $\mathcal{O}(n^\epsilon)$. \end{theorem} This result is the latest step in a series of recent works connecting poset dimension with graph structure theory. This line of research began with the following result of Streib and Trotter~\cite{ST14}: For every fixed $h\geq 1$, posets of height $h$ with a planar cover graph have bounded dimension. That is, the dimension of posets with planar cover graphs is bounded from above by a function of their height. This is a remarkable theorem, because in general bounding the height of a poset does not bound its dimension, as shown for instance by the height-$2$ posets called {\em standard examples}, depicted in Figure~\ref{fig:kelly} (left). Requiring the cover graph to be planar does not guarantee any bound on the dimension either, as shown by Kelly's construction~\cite{Kel81} of posets with planar cover graphs containing large standard examples as induced subposets (Figure~\ref{fig:kelly}, right). Thus, it is the combination of the two ingredients, bounded height and planarity, that implies that the dimension is bounded. \begin{figure}[t] \centering \includegraphics[scale=1.0]{kelly} \caption{\label{fig:kelly} The standard example $S_4$ (left) and Kelly's construction containing $S_4$ (right). The {\em standard example} $S_m$ ($m\geq 2$) is the height-$2$ poset consisting of $m$ minimal elements $a_1, \dots, a_m$ and $m$ maximal elements $b_1, \dots, b_m$ and the relations $a_i < b_j$ for all $i, j \in [m]$ with $i\neq j$. It has dimension $m$.} \end{figure} Soon afterwards, it was shown in a sequence of papers that requiring the cover graph to be planar in the Streib-Trotter result could be relaxed: Posets have dimension upper bounded by a function of their height if their cover graphs \begin{itemize} \item have bounded treewidth, bounded genus, or more generally exclude an apex-graph as minor~\cite{JMMTWW}; \item exclude a fixed graph as a (topological) minor~\cite{Walczak17, MW15}; \item belong to a fixed class with bounded expansion~\cite{JMW17+}. \end{itemize} A class of graphs has {\em bounded expansion} if for every $r \geq 1$, there exists $c\geq 0$ such that no graph in the class contains a subdivision of a graph with average degree at least $c$ where each edge is subdivided at most $r$ times as a subgraph. This is a particular case of nowhere dense classes. Zhu~\cite{Zhu09} characterized bounded expansion classes as follows: A class has bounded expansion if and only if for every $r \geq 0$, there exists $c \geq 1$ such that every graph in the class has weak $r$-coloring number at most $c$. Weak coloring numbers were originally introduced by Kierstead and Yang~\cite{KY03} as a generalization of the degeneracy of a graph (also known as the \emph{coloring number}). They are defined as follows. 
Let $G$ be a graph and consider some linear order $\pi$ on its vertices (it will be convenient to see $\pi$ as ordering the vertices of $G$ from left to right). Given a path $Q$ in $G$, we denote by $\operatorname{left}(Q)$ the leftmost vertex of $Q$ w.r.t.\ $\pi$. Given a vertex $v$ in $G$ and an integer $r\geq 0$, we say that $u\in V(G)$ is \emph{weakly $r$-reachable from} $v$ w.r.t.\ $\pi$ if there exists a path $Q$ of length at most $r$ from $v$ to $u$ in $G$ such that $\operatorname{left}(Q)=u$. We let $\WR_r^\pi[v]$ denote the set of weakly $r$-reachable vertices from $v$ w.r.t.\ $\pi$ (note that this set contains $v$ for all $r\geq 0$). The {\em weak $r$-coloring number} $\wcol_r(G)$ of $G$ is defined as \[ \wcol_r(G):=\min_{\pi} \max_{v\in V(G)} |\WR_r^\pi[v]|. \] The novelty of our approach in this paper is that we bound the dimension of a poset using weak coloring numbers of its cover graph. Indeed, the general message of the paper is that dimension works surprisingly well with weak coloring numbers. We give a first illustration of this principle with the following theorem: \begin{theorem} \label{thm:dim-wcol} Let $P$ be a poset of height at most $h$, let $G$ denote its cover graph, and let $c := \wcol_{3h-3}(G)$. Then \[ \dim(P)\leq 4^c. \] \end{theorem} To prove this, we first make the following observation about weak reachability. \begin{obs}\label{obs-weak-reachability} Let $G$ be a graph and let $\pi$ be a linear order on its vertices. If $w, x, y, z$ are vertices of $G$ such that $w$ is weakly $k$-reachable from $x$ (w.r.t.\ $\pi$), $y$ is weakly $\ell$-reachable from $z$, and $Q$ is a path from $x$ to $z$ in $G$ of length at most $m$ such that $\operatorname{left}(\set{w,y})\leq_{\pi} \operatorname{left}(Q)$, then \begin{center} \text{one of $w,y$ is weakly $(k+\ell+m)$-reachable from the other.} \end{center} In particular, this holds if $m=0$, i.e.\ $x=z$. \end{obs} \begin{proof} Consider a path $Q^{(1)}$ from $w$ to $x$ witnessing that $w$ is weakly $k$-reachable from $x$, and a path $Q^{(2)}$ from $z$ to $y$ witnessing that $y$ is weakly $\ell$-reachable from $z$. The union of $Q$, $Q^{(1)}$ and $Q^{(2)}$ contains a path $Q^{(3)}$ connecting $w$ to $y$ of length at most $k+\ell+m$. Since $w$ is the leftmost vertex of $Q^{(1)}$ in $\pi$ and $y$ is the leftmost vertex of $Q^{(2)}$ in $\pi$, and $\operatorname{left}(\set{w,y})\leq_{\pi} \operatorname{left}(Q)$ we have that one of $w$ and $y$ is the leftmost vertex of $Q^{(3)}$ in $\pi$. This proves that one of $w,y$ is weakly $(k+\ell+m)$-reachable from the other. \end{proof} Before continuing with the proof, let us introduce some necessary definitions regarding posets. Let $P$ be a poset. An element $y$ {\em covers} an element $x$ if $x < y$ in $P$ and there is no element $z$ such that $x < z < y$ in $P$. A chain $X$ of $P$ is said to be a {\em covering chain} if the elements of $X$ can be enumerated as $x_1, x_2, \dots, x_k$ in such a way that $x_{i+1}$ covers $x_i$ in $P$ for each $i\in [k-1]$. (We use the notation $[n] := \{1, \dots, n\}$.) The {\em upset $\Up[x]$} of an element $x\in P$ is the set of all elements $y\in P$ such that $x \leq y$ in $P$. Similarly, the {\em downset $\D[x]$} of an element $x\in P$ is the set of all elements $y\in P$ such that $y \leq x$ in $P$. Note that $x\in \Up[x]$ and $x\in \D[x]$. Given a subset $S$ of elements of $P$, we write $\Up[S]$ for the set $\bigcup_{x\in S}\Up[x]$, and define $\D[S]$ similarly. 
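Although nothing later depends on it, the definition of weak reachability is easy to experiment with on small examples, which the reader may find helpful. The short Python sketch below is an illustration only (the function names are ours, and the brute force over all orderings is feasible only for tiny graphs); it rests on the observation that $u\in\WR_r^{\pi}[v]$ if and only if $u$ lies within distance $r$ of $v$ in the subgraph of $G$ induced by the vertices that are not to the left of $u$ in $\pi$. For instance, it confirms that the path on $5$ vertices has $\wcol_2=3$; ordering a path from left to right shows $\wcol_r\leq r+1$ for paths in general.
\begin{verbatim}
from itertools import permutations
from collections import deque

def weakly_reachable(adj, pi, v, r):
    # u lies in WR_r[v] iff u is within distance r of v in the
    # subgraph induced by {w : position of w in pi >= position of u}.
    pos = {u: i for i, u in enumerate(pi)}
    reach = {v}
    for u in adj:
        if pos[u] >= pos[v]:
            continue  # weakly reachable vertices are never right of v
        allowed = {w for w in adj if pos[w] >= pos[u]}
        dist, queue = {v: 0}, deque([v])
        while queue:  # breadth-first search up to depth r in `allowed`
            w = queue.popleft()
            if dist[w] == r:
                continue
            for x in adj[w]:
                if x in allowed and x not in dist:
                    dist[x] = dist[w] + 1
                    queue.append(x)
        if u in dist:
            reach.add(u)
    return reach

def wcol(adj, r):
    # Brute force over all vertex orderings; for tiny graphs only.
    return min(
        max(len(weakly_reachable(adj, pi, v, r)) for v in adj)
        for pi in permutations(adj)
    )

# The path on 5 vertices: wcol_2 = 3, matching the r + 1 bound.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
assert wcol(path, 2) == 3
\end{verbatim}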
An {\em incomparable pair} of $P$ is an ordered pair $(x,y)$ of elements of $P$ that are incomparable in $P$. We denote by $\Inc(P)$ the set of incomparable pairs of $P$. Let $I \subseteq \Inc(P)$ be a non-empty set of incomparable pairs of $P$. We say that $I$ is \emph{reversible} if there is a linear extension $L$ of $P$ \emph{reversing} each pair of $I$, that is, we have $x>y$ in $L$ for every $(x,y)\in I$. We denote by $\dim(I)$ the least integer $d$ such that $I$ can be partitioned into $d$ reversible sets. We will use the convention that $\dim(I)=1$ when $I$ is an empty set. As is well known, the dimension $\dim(P)$ of $P$ can equivalently be defined as $\dim(\Inc(P))$, that is, the least integer $d$ such that the set of all incomparable pairs of $P$ can be partitioned into $d$ reversible sets. This is the definition that we will use in the proofs. A sequence $(x_1,y_1), \dots, (x_k,y_k)$ of incomparable pairs of $P$ with $k \geq 2$ is said to be an \emph{alternating cycle of size $k$} if $x_i\leq y_{i+1}$ in $P$ for all $i\in\set{1,\ldots,k}$ (cyclically, so $x_k\le y_1$ in $P$ is required). (We remark that possibly $x_i=y_{i+1}$ for some $i$'s.) Observe that if $(x_1,y_1), \dots, (x_k,y_k)$ is an alternating cycle in $P$, then this set of incomparable pairs cannot be reversed by a linear extension $L$ of $P$. Indeed, otherwise we would have $y_i < x_i \leq y_{i+1}$ in $L$ for each $i \in \{1,2, \dots, k\}$ cyclically, which cannot hold. Hence, alternating cycles are not reversible. The converse is also true, as is well known: A set $I$ of incomparable pairs of a poset $P$ is reversible if and only if $I$ contains no alternating cycles. We may now turn to the proof of Theorem~\ref{thm:dim-wcol}. \begin{proof}[Proof of Theorem~\ref{thm:dim-wcol}] Let $\pi$ be a linear order on the elements of $P$ such that $|\WR_{3h-3}^\pi[x]|\leq c$ for each $x\in P$. Here and in the rest of the proof, weak reachability is to be interpreted w.r.t.\ the cover graph $G$ of $P$ and the ordering $\pi$. First, we greedily color the elements of $P$ using the ordering $\pi$ from left to right. When element $x$ is about to be colored, we give $x$ the smallest color $\phi(x)$ in $[c]$ that is not used for elements of $\WR_{3h-3}^\pi[x] -\{x\}$. Let $x\in P$ and $y,z\in \WR_{h-1}^{\pi}[x]$. By Observation~\ref{obs-weak-reachability}, either $y\in\WR_{2h-2}^{\pi}[z]$ or $z\in\WR_{2h-2}^{\pi}[y]$. Therefore, \begin{equation} \label{eq:unique-color} \text{$\phi(y)\neq \phi(z)$ for every $x\in P$ and $y,z\in \WR_{h-1}^{\pi}[x]$ with $y\neq z$.} \end{equation} In the proof, we will focus on elements $y$ in $\WR_{h-1}^\pi[x]$ that are weakly reachable from $x$ via covering chains that either start at $x$ and end in $y$, or the other way round. This leads us to introduce the \emph{weakly reachable upset $\operatorname{WU}[x]$} and the \emph{weakly reachable downset $\operatorname{WD}[x]$} of $x$: \begin{align*} \operatorname{WU}[x]&:=\{y\in \Up[x]: \exists\text{ covering chain $Q$ from $x$ to $y$ such that }\operatorname{left}(Q)=y\},\\ \operatorname{WD}[x]&:=\{y\in \D[x]: \exists\text{ covering chain $Q$ from $y$ to $x$ such that }\operatorname{left}(Q)=y\}. \end{align*} Then $\operatorname{WD}[x], \operatorname{WU}[x] \subseteq \WR_{h-1}^{\pi}[x]$. If $X$ is a set of elements of $P$, we write $\phi(X)$ for the set of colors $\{\phi(x) : x\in X\}$.
Given an element $x$ of $P$ and a color $i\in\phi(\operatorname{WU}[x])$, by~\eqref{eq:unique-color} there is a unique element in $\operatorname{WU}[x]$ with color $i$; let us denote it by $\operatorname{wu}_i(x)$. Similarly, given $i\in\phi(\operatorname{WD}[x])$, we let $\operatorname{wd}_i(x)$ denote the unique element in $\operatorname{WD}[x]$ with color $i$. For each $(x,y)\in\Inc(P)$, define the \emph{signature} $(A,B,C)$ of $(x,y)$, where \[ A=\phi(\operatorname{WU}[x]),\ B=A\cap\phi(\operatorname{WD}[y]),\ C=\set{i\in B\mid \operatorname{wu}_i(x)<_{\pi} \operatorname{wd}_i(y)}. \] As $[c]\supseteq A \supseteq B \supseteq C$, the number of possible signatures is at most $4^c$. It remains to show that the set of incomparable pairs with a given signature is reversible. This will show that $P$ has dimension at most $4^c$, as desired. Arguing by contradiction, suppose that there is a signature $(A,B,C)$ such that the set of incomparable pairs with signature $(A,B,C)$ is not reversible. Then these incomparable pairs contain an alternating cycle $(x_1,y_1),\ldots,(x_k,y_k)$. For each $j\in [k]$, consider all covering chains witnessing the comparability $x_j\leq y_{j+1}$ in $P$ (indices are taken cyclically) and choose one such covering chain $Q_j$ such that $q_j=\operatorname{left}(Q_j)$ is as far to the left as possible w.r.t.\ $\pi$. Without loss of generality we may assume that $q_1$ is leftmost w.r.t.\ $\pi$ among the $q_j$'s. Let $t:=\phi(q_1)$. Clearly, $x_1\leq q_1\leq y_{2}$ in $P$, and $q_1\in\operatorname{WU}[x_1]\cap \operatorname{WD}[y_{2}]$. Thus, $t \in\phi(\operatorname{WU}[x_1])=A=\phi(\operatorname{WU}[x_{2}])$ and $t\in\phi(\operatorname{WD}[y_{2}])$, and hence $t\in \phi(\operatorname{WU}[x_{2}]) \cap \phi(\operatorname{WD}[y_{2}]) = B$. It follows that $\operatorname{wu}_t(x_j)$ and $\operatorname{wd}_t(y_j)$ are both defined for each $j\in [k]$. First suppose that $t\in C$. In particular, $\operatorname{wu}_t(x_2) <_{\pi} \operatorname{wd}_t(y_2)$. Thus \[ \operatorname{wu}_t(x_2) <_{\pi} \operatorname{wd}_t(y_2) = q_1 \leq_{\pi} q_2 = \operatorname{left}(Q_2). \] Since $\operatorname{wu}_t(x_2)$ is $(h-1)$-weakly reachable from $x_2$, $\operatorname{wd}_t(y_3)$ is $(h-1)$-weakly reachable from $y_3$, and $Q_2$ is a path in $G$ connecting $x_2$ and $y_3$ of length at most $h-1$ such that $\operatorname{wu}_t(x_2)<_{\pi}\operatorname{left}(Q_2)$, by Observation~\ref{obs-weak-reachability} one of $\operatorname{wu}_t(x_2)$, $\operatorname{wd}_t(y_3)$ is weakly $(3h-3)$-reachable from the other. Since $\phi(\operatorname{wu}_t(x_2))=t=\phi(\operatorname{wd}_t(y_3))$, we must have $\operatorname{wd}_t(y_3) = \operatorname{wu}_t(x_2)=:q^*$ by~\eqref{eq:unique-color}. Hence, $x_2 \leq q^* \leq y_3$ in $P$, but $q^*<_{\pi} q_1\leq_\pi q_2$, which contradicts the way $q_2$ and $q_1$ were chosen. Next, suppose that $t\notin C$. Then, $\operatorname{wd}_t(y_1) \leq_{\pi} \operatorname{wu}_t(x_1)$. Note that $\operatorname{wd}_t(y_1) \neq \operatorname{wu}_t(x_1)$, since otherwise we would have $x_1 \leq y_1$ in $P$. Thus, $\operatorname{wd}_t(y_1) <_{\pi} \operatorname{wu}_t(x_1)$, and \[ \operatorname{wd}_t(y_1) <_{\pi} \operatorname{wu}_t(x_1) = q_1 \leq_{\pi} q_k = \operatorname{left}(Q_k). 
\] Since $\operatorname{wd}_t(y_1)$ is $(h-1)$-weakly reachable from $y_1$, $\operatorname{wu}_t(x_k)$ is $(h-1)$-weakly reachable from $x_k$, and $Q_k$ is a path in $G$ connecting $x_k$ and $y_1$ of length at most $h-1$ such that $\operatorname{wd}_t(y_1)<_{\pi}\operatorname{left}(Q_k)$, by Observation~\ref{obs-weak-reachability} one of $\operatorname{wd}_t(y_1)$, $\operatorname{wu}_t(x_k)$ is weakly $(3h-3)$-reachable from the other. Since $\phi(\operatorname{wu}_t(x_k))=t=\phi(\operatorname{wd}_t(y_1))$, we must have $\operatorname{wu}_t(x_k) = \operatorname{wd}_t(y_1)=:q^*$ by~\eqref{eq:unique-color}. Hence, $x_k \leq q^* \leq y_1$ in $P$, but $q^*<_{\pi} q_1\leq_\pi q_k$, which contradicts the way $q_k$ and $q_1$ were chosen. \end{proof} By Zhu's theorem, if we restrict ourselves to posets with cover graphs $G$ in a fixed class $\mathcal{C}$ with bounded expansion, then $\wcol_{3h-3}(G)$ is bounded by a function of $h$. Thus Theorem~\ref{thm:dim-wcol} implies the theorem from~\cite{JMW17+} for classes with bounded expansion. However, the above proof is much simpler and implies better bounds on the dimension than those following from previous works (see the discussion in Section~\ref{sec:applications}). We see this as a first sign that weak coloring numbers are the right tool to use in this context. In~\cite{JMW17+}, it is conjectured that bounded expansion captures exactly situations where dimension is bounded by a function of the height: \begin{conjecture}[\cite{JMW17+}] \label{conj:bounded_exp} A monotone class of graphs $\mathcal{C}$ has bounded expansion if and only if for every fixed $h\geq 1$, posets of height at most $h$ whose cover graphs are in $\mathcal{C}$ have bounded dimension. \end{conjecture} While the result of~\cite{JMW17+} (reproved above) shows the forward direction of the conjecture, the backward direction remains surprisingly (and frustratingly) open. By contrast, showing the backward direction of Theorem~\ref{thm:main} for nowhere dense classes is a straightforward matter, as we now explain. We prove the contrapositive. Thus let $\mathcal{C}$ be a monotone graph class which is {\em not} nowhere dense (such a class is said to be {\em somewhere dense}). Our aim is to prove that there exist $h \geq 1$ and $\epsilon > 0$ such that there are $n$-element posets of height at most $h$ with dimension $\Omega(n^\epsilon)$ whose cover graphs are in $\mathcal{C}$. Since $\mathcal{C}$ is somewhere dense, there exists an integer $r\geq 0$ (depending on $\mathcal{C}$) such that for every $t \geq 1$ there is a graph $G \in \mathcal{C}$ containing an $\leq\!r$-subdivision of $K_t$ as a subgraph. (An {\em $\leq\!k$-subdivision} of a graph is a subdivision such that each edge is subdivided at most $k$ times.) Since $\mathcal{C}$ is closed under taking subgraphs, this means that for every $m \geq 2$, the class $\mathcal{C}$ contains a graph $G_m$ that is an $\leq\!r$-subdivision of the cover graph of the standard example $S_m$. Notice that $G_m$ has at most $rm^2+2m$ vertices. Now it is easy to see that $G_m$ is also the cover graph of a poset $P_m$ of height at most $r+2$ containing $S_m$ as an induced subposet (simply perform the edge subdivisions on the diagram of $S_m$ in the obvious way). Let $n$ be the number of elements of $P_m$. The poset $P_m$ has dimension at least $m$, and thus its dimension is $\Omega(\sqrt{n})$ since $n \leq rm^2+2m$. Hence, we obtain the desired conclusion with $h := r+2$ and $\epsilon := 1/2$. 
This completes the proof of the backward direction of Theorem~\ref{thm:main}. The non-trivial part of Theorem~\ref{thm:main} is that $n$-element posets of bounded height with cover graphs in a nowhere dense class have dimension $\mathcal{O}(n^\epsilon)$ for all $\epsilon > 0$. To prove this, we use the following characterization of nowhere dense classes in terms of weak coloring numbers~\cite{NOdM-nowhere-dense}: A class is nowhere dense if and only if for every $r \geq 0$ and every $\epsilon >0$, every $n$-vertex graph in the class has weak $r$-coloring number $\mathcal{O}(n^{\epsilon})$. (Note that this characterization does not require that the class is monotone, as deleting edges cannot increase the weak coloring numbers.) We remark that it is a common feature of several characterizations in the literature that bounded expansion and nowhere dense classes can be characterized using the same graph invariants, but requiring $\mathcal{O}(1)$ bounds and, for all $\epsilon >0$, $\mathcal{O}(n^{\epsilon})$ bounds on the invariants, respectively. Thus, it is natural to conjecture the statement of Theorem~\ref{thm:main}, and indeed it appears as a conjecture in~\cite{JMW17+}. (We note that it was originally Dan Kr\'a{\v l} who suggested to the first author to try and show Theorem~\ref{thm:main} right after the result in~\cite{JMW17+} was obtained.) The $4^c$ bound in Theorem~\ref{thm:dim-wcol} unfortunately falls short of implying the forward direction of Theorem~\ref{thm:main}. Indeed, if the cover graph $G$ has $n$ vertices and belongs to a nowhere dense class, we only know that $\wcol_{3h-3}(G)\in \mathcal{O}(n^{\epsilon})$ for every $\epsilon > 0$. Thus from the theorem we only deduce that $\dim(P) \leq 4^{\mathcal{O}(n^{\epsilon})}$ for every $\epsilon > 0$, which is a vacuous statement since $\dim(P) \leq n$ always holds. In order to address this shortcoming, we developed a second upper bound on the dimension of a height-$h$ poset in terms of the weak $w(h)$-coloring number of its cover graph $G$ (for some function $w$) and another invariant of $G$. This extra invariant is the smallest integer $t$ such that $G$ does not contain an $\leq\!s(h)$-subdivision of $K_t$ as a subgraph, for some function $s$. The key aspect of our bound is that, for fixed $h$ and $t$, it depends polynomially on the weak $w(h)$-coloring number that is being considered. Its precise statement is as follows. (Let us remark that the particular values $w(h):=4h-4$ and $s(h):=2h-3$ used in the theorem are not important for our purposes; any functions $w$ and $s$ would do.) \begin{theorem} \label{thm:nowhere-dense-upper-bound} There exists a function $f:\mathbb{N} \times \mathbb{N} \to \mathbb{N}$ such that for every $h\geq 1$ and $t\geq 1$, every poset $P$ of height at most $h$ whose cover graph $G$ contains no $\leq\!(2h-3)$-subdivision of $K_t$ as a subgraph satisfies \[ \dim(P) \leq (3c)^{f(h,t)}, \] where $c:=\wcol_{4h-4}(G)$. \end{theorem} Recall that for every nowhere dense graph class $\mathcal{C}$ and every $r \geq 1$, there exists $t\geq 1$ such that no graph in $\mathcal{C}$ contains an $\leq\!r$-subdivision of $K_t$ as a subgraph. Hence, Theorem~\ref{thm:nowhere-dense-upper-bound} implies the following corollary.
\begin{corollary} \label{cor:nowhere-dense-upper-bound} For every nowhere dense class of graphs $\mathcal{C}$, there exists a function $g:\mathbb{N} \to \mathbb{N}$ such that every poset $P$ of height at most $h$ whose cover graph $G$ is in $\mathcal{C}$ satisfies \[ \dim(P) \leq (3c)^{g(h)}, \] where $c:=\wcol_{4h-4}(G)$. \end{corollary} For every integer $h\geq 1$ and real number $\epsilon >0$, this in turn gives a bound of $\mathcal{O}(n^\epsilon)$ on the dimension of $n$-element posets of height at most $h$ whose cover graphs $G$ are in $\mathcal{C}$. Indeed, if we take $\epsilon':= \epsilon / g(h)$, then $\wcol_{4h-4}(G) \in \mathcal{O}(n^{\epsilon'})$ by the aforementioned characterization of nowhere dense classes~\cite{NOdM-nowhere-dense}, and hence $\dim(P)\in \mathcal{O}(n^{g(h)\epsilon'})=\mathcal{O}(n^{\epsilon})$ by the corollary. Therefore, this establishes the forward direction of Theorem~\ref{thm:main}. Let us also point out that Corollary~\ref{cor:nowhere-dense-upper-bound} provides another proof of the theorem from~\cite{JMW17+} for classes with bounded expansion, since $\wcol_{4h-4}(G)$ is bounded by a function of $h$ only when $\mathcal{C}$ has bounded expansion. However, the proof is more involved than that of Theorem~\ref{thm:dim-wcol} and the resulting bound on the dimension is typically larger. Indeed, the bound in Theorem~\ref{thm:nowhere-dense-upper-bound} becomes interesting when the weak coloring number under consideration grows with the number of vertices. Our proof of Theorem~\ref{thm:nowhere-dense-upper-bound} has its roots in the alternative proof due to Micek and Wiechert~\cite{MW15} of Walczak's theorem~\cite{Walczak17}, that bounded-height posets whose cover graphs exclude $K_t$ as a topological minor have bounded dimension. This proof is essentially an iterative algorithm which, if the dimension is large enough (as a function of the height), explicitly builds a subdivision of $K_t$, one branch vertex at a time. This is very similar in appearance to what we would like to show, namely that if the dimension is too big, then the cover graph contains a subdivision of $K_t$ where each edge is subdivided a bounded number of times (by a function of the height). The heart of our proof is a new technique based on weak coloring numbers, Lemma~\ref{lemma:q-support}, which we use to bound the number of subdivision vertices. The paper is organized as follows. We prove Theorem~\ref{thm:nowhere-dense-upper-bound} in Section~\ref{sec:proof_nowhere_dense}. Next, we discuss in Section~\ref{sec:applications} improved bounds implied by Theorem~\ref{thm:dim-wcol} for special cases that were studied in the literature, such as for posets with planar cover graphs and posets with cover graphs of bounded treewidth. Finally, we close the paper in Section~\ref{sec:open_problems} with a couple of open problems. \section{Nowhere Dense Classes} \label{sec:proof_nowhere_dense} As discussed in the introduction, the forward direction of Theorem~\ref{thm:main} follows from Theorem~\ref{thm:nowhere-dense-upper-bound} combined with the characterization of nowhere dense classes in terms of weak coloring numbers. In this section we prove Theorem~\ref{thm:nowhere-dense-upper-bound}. We begin with our key lemma. \begin{lemma}\label{lemma:q-support} Let $P$ be a poset of height $h$ with cover graph $G$, let $I \subseteq \Inc(P)$, and let $c:= \wcol_{4h-4}(G)$. Then there exists an element $q\in P$ such that the set $I':= \{(x,y)\in I: q \leq y \text{ in } P\}$ satisfies $$ \dim(I')\geq \dim(I) / c - 2.
$$ \end{lemma} \begin{proof} Fix a linear order $\pi$ of the vertices of $G$ witnessing $\wcol_{4h-4}(G)\leq c$. Here and in the rest of the proof, weak reachability is to be interpreted w.r.t.\ the cover graph $G$ and the ordering $\pi$. Let $\phi$ be a greedy vertex coloring of $G$ obtained by considering the vertices one by one according to $\pi$, and assigning to each vertex $z$ a color $\phi(z)\in [c]$ different from all the colors used on vertices in $\WR_{4h-4}^\pi[z]-\{z\}$. Note that for any two distinct vertices $x,y \in \WR_{2h-2}^\pi[z]$, we know from Observation~\ref{obs-weak-reachability} that one of $x,y$ is weakly $(4h-4)$-reachable from the other, and thus $\phi(x)\neq \phi(y)$. For each $z\in P$, let $\tau(z)\in[c]$ denote the color of $\operatorname{left}(\D[z])$. Given a color $i\in[c]$, let $\operatorname{w}_i(z)$ denote the unique element of $\WR_{2h-2}^\pi[z]$ colored $i$ if there is one, and leave $\operatorname{w}_i(z)$ undefined otherwise. Observe that $\operatorname{w}_{\tau(z)}(z) = \operatorname{left}(\D[z])$. In particular, $\operatorname{w}_{\tau(z)}(z)\leq z$ in $P$. Let $x,y\in P$ with $x\leq y$ in $P$. We claim that $\operatorname{w}_{\tau(y)}(x)=\operatorname{w}_{\tau(y)}(y)$. (Note first that $\operatorname{w}_{\tau(y)}(x)$ is defined: concatenating a covering chain from $x$ to $y$ with one from $\operatorname{left}(\D[y])$ to $y$ yields a path of length at most $2h-2$ all of whose vertices lie in $\D[y]$, witnessing $\operatorname{left}(\D[y])\in\WR_{2h-2}^{\pi}[x]$.) Indeed, a covering chain $Q$ from $x$ to $y$ is a path of length at most $h-1$ in $G$ all of whose vertices lie in $\D[y]$, the element $\operatorname{w}_{\tau(y)}(x)$ is weakly $(2h-2)$-reachable from $x$, and $\operatorname{w}_{\tau(y)}(y)=\operatorname{left}(\D[y])$ is weakly $(h-1)$-reachable from $y$ and satisfies $\operatorname{left}(\D[y])\leq_{\pi}\operatorname{left}(Q)$. Hence, one of $\operatorname{w}_{\tau(y)}(x),\operatorname{w}_{\tau(y)}(y)$ is weakly $(4h-4)$-reachable from the other by Observation~\ref{obs-weak-reachability}, and $\operatorname{w}_{\tau(y)}(x)=\operatorname{w}_{\tau(y)}(y)$. Define the \emph{signature} $\sigma(x,y)$ of a pair $(x,y) \in I$ to be the pair $(\tau(y),\alpha(x,y))$, where \[ \alpha(x,y):=\begin{cases} 1 & \quad \text{if } \operatorname{w}_{\tau(y)}(x)=\operatorname{w}_{\tau(y)}(y)\\ 2 & \quad \text{if } \operatorname{w}_{\tau(y)}(x)<_\pi \operatorname{w}_{\tau(y)}(y)\\ 3 & \quad \text{if } \operatorname{w}_{\tau(y)}(x)>_\pi \operatorname{w}_{\tau(y)}(y) \text{ or } \operatorname{w}_{\tau(y)}(x) \text{ is not defined}.\\ \end{cases} \] For each color $\tau\in[c]$ and value $\alpha\in[3]$, let $J_{\tau,\alpha}$ be the set of incomparable pairs $(x,y)\in I$ such that $\sigma(x,y)=(\tau,\alpha)$. Note that the sets $J_{\tau,\alpha}$ form a partition of $I$. \begin{claim} For each color $\tau\in[c]$, the sets $J_{\tau,2}$ and $J_{\tau,3}$ are reversible. \end{claim} \begin{proof} Let $\alpha\in \{2,3\}$. Arguing by contradiction, suppose that $J_{\tau,\alpha}$ is not reversible, and let $(x_1,y_1),\ldots,(x_k,y_k)$ denote an alternating cycle. Since $x_1 \leq y_2$ in $P$, we have that $\operatorname{w}_{\tau(y_2)}(x_1) = \operatorname{w}_{\tau(y_2)}(y_2)$. Since $\tau(y_2)=\tau=\tau(y_1)$, it follows that $\operatorname{w}_{\tau(y_1)}(x_1)$ is defined. Since for every $i\in[k]$ we have $x_i\leq y_{i+1}$ in $P$ (cyclically), we obtain that $\operatorname{w}_{\tau}(x_i)=\operatorname{w}_{\tau}(y_{i+1})$. However, by our signature function this implies $\operatorname{w}_{\tau}(y_{i+1})= \operatorname{w}_{\tau}(x_i)<_\pi \operatorname{w}_{\tau}(y_i)$ for all $i\in[k]$ if $\alpha=2$, or $\operatorname{w}_{\tau}(y_{i+1})= \operatorname{w}_{\tau}(x_i)>_\pi \operatorname{w}_{\tau}(y_i)$ for all $i\in[k]$ if $\alpha=3$, which cannot hold cyclically.
\end{proof} Since \[ I = \bigcup_{\tau\in [c], \alpha \in [3]} J_{\tau,\alpha} \] the previous claims imply that \begin{align*} \dim(I) & \leq \sum_{\tau\in[c]}\dim(J_{\tau,1})+\sum_{\tau\in[c]}\dim(J_{\tau,2})+\sum_{\tau\in[c]}\dim(J_{\tau,3})\\ & \leq \sum_{\tau\in[c]}\dim(J_{\tau,1}) + 2c. \end{align*} It follows that there exists a color $\tau\in[c]$ such that $ \dim(J_{\tau,1})\geq \dim(I)/c-2$. In the rest of the proof we focus on the set $J_{\tau,1}$. Thus, denoting this set by $I_\tau$, we have \begin{equation*} \dim(I_\tau)\geq \dim(I)/c-2. \end{equation*} Given an element $p\in P$, we denote by $I_{\tau,p}$ the set of incomparable pairs $(x,y)\in I_\tau$ such that $p=\operatorname{w}_{\tau}(x)=\operatorname{w}_{\tau}(y)$. Note that the sets $I_{\tau,p}$ ($p\in P$) partition $I_\tau$. \begin{claim}\label{claim-max} $\displaystyle{\dim(I_\tau)=\max_{p\in P} \ \dim(I_{\tau,p})}$. \end{claim} \begin{proof} Let $d := \max_{p\in P} \ \dim(I_{\tau,p})$. Note that $\dim(I_\tau) \geq d$ since $I_{\tau,p} \subseteq I_\tau$ for each $p\in P$. Thus it remains to show that $\dim(I_\tau) \leq d$. For each $p\in P$, there exists a partition of $I_{\tau,p}$ into at most $d$ reversible sets. Let $I^1_{\tau,p}, \dots, I^d_{\tau,p}$ be disjoint reversible sets such that \[ I_{\tau,p}=\bigcup_{j\in [d]}I^j_{\tau,p}, \] some sets being possibly empty. We claim that the set $\bigcup_{p\in P}I^j_{\tau,p}$ is reversible for each $j\in[d]$. Arguing by contradiction, suppose that for some $j\in[d]$ this set is not reversible. Then it contains an alternating cycle $(x_1,y_1),\ldots,(x_k,y_k)$. As $x_i\leq y_{i+1}$ in $P$ for $i\in[k]$, we have $\operatorname{w}_{\tau}(x_i)=\operatorname{w}_{\tau}(y_{i+1})$, which by the signatures of these pairs implies that $\operatorname{w}_{\tau}(y_i)=\operatorname{w}_{\tau}(y_{i+1})$. As this holds cyclically, there is $p\in P$ such that $p=\operatorname{w}_{\tau}(y_i)$ for every $i\in [k]$. However, this implies that $(x_1,y_1),\ldots,(x_k,y_k)$ is an alternating cycle in $I^j_{\tau,p}$, which is a contradiction since this set is reversible by assumption. Thus, $\bigcup_{p\in P}I^j_{\tau,p}$ is reversible for each $j\in[d]$. Since $I_\tau= \bigcup_{j\in [d]}\bigcup_{p\in P}I^j_{\tau,p}$, it follows that $\dim(I_\tau) \leq d$, as desired. \end{proof} Now we can complete the proof of the lemma. Let $q\in P$ be an element witnessing the maximum value in the right-hand side of the equation in Claim~\ref{claim-max}. Clearly, $I_{\tau,q}\subset\set{(x,y)\in I: q\leq y\text{ in $P$}}$. Since \[ \dim(I_{\tau,q})= \dim(I_{\tau})\geq\dim(I)/c-2, \] this completes the proof of the lemma. \end{proof} We are now ready to prove Theorem~\ref{thm:nowhere-dense-upper-bound}. \begin{proof}[Proof of Theorem~\ref{thm:nowhere-dense-upper-bound}] Let $h \geq 1$ and $t \geq 1$. We prove the theorem with the following value for $f(h,t)$: \[ f(h,t):=\binom{m+h}{h}, \quad\text{ where } m:=\binom{t}{2}^{h^t}. \] Let thus $P$ be a poset of height at most $h$, let $G$ denote its cover graph, and let $c:=\wcol_{4h-4}(G)$. We prove the contrapositive. That is, we assume that \[ \dim(P)> (3c)^{f(h,t)}, \] and our goal is to show that $G$ contains a $\leq\!(2h-3)$-subdivision of $K_t$ as a subgraph. For technical reasons, we will need to suppose also that $c > t$. 
This can be assumed without loss of generality, because if not then $\wcol_{3h-3}(G) \leq \wcol_{4h-4}(G) \leq t$, and hence $\dim(P) \leq 4^{t} \leq (3c)^{f(h,t)}$ by Theorem~\ref{thm:dim-wcol}.\footnote{The reader might object that this makes the proof dependent on Theorem~\ref{thm:dim-wcol}, while we claimed in the introduction that it was not. In order to address this perfectly valid point, let us mention that one could choose instead to add the extra assumption that $c > t$ in the statement of Theorem~\ref{thm:nowhere-dense-upper-bound}; this does not change the fact that it implies the forward direction of Theorem~\ref{thm:main} (in combination with Zhu's theorem). However, it seemed rather artificial to do so, since the theorem remains true without this technical assumption.} \begin{claim} \label{claim:small-dim} There exists an antichain $S$ of size $m$ in $P$ such that, letting $I:=\{(x,y) \in \Inc(P): s \leq y \; \forall s\in S\}$, we have $\dim(I) \geq 2$, and \[ \dim(\set{(x,y)\in I: q\leq y \text{ in $P$}}) < \dim(I)/3c \] for every element $q$ such that there exists $s\in S$ with $s < q$ in $P$. \end{claim} \begin{proof} We define the \emph{height vector} of an antichain $S$ of size at most $m$ in $P$ to be the vector of heights of elements in $S$ ordered in non-increasing order and padded at the end with $0$-entries so that the vector is of size exactly $m$. Note that $\binom{m+h}{h}$ is the number of size-$m$ vectors with entries in $\{0,1, \dots, h\}$ ordered in non-increasing order. We enumerate these vectors in lexicographic order with numbers from $0$ to $\binom{m+h}{h}-1 = f(h,t)-1$. Let $\val(S)$ denote the index of the height vector of $S$ in this enumeration. Notice that $\val(\emptyset)=0$. To prove the claim, choose an antichain $S$ with $|S| \leq m$ such that \[ \dim(I)>d:=(3c)^{f(h,t)-\val(S)}, \] where $I:=\{(x,y) \in \Inc(P): s \leq y \; \forall s\in S\}$, and with $\val(S)$ maximum. Note that $S$ is well defined as $S=\emptyset$ is a candidate. Note also that $\dim(I) > d \geq 3c \geq 2$. We will show that $S$ and $I$ satisfy the claim. First assume that $q \in P$ is such that there exists $s\in S$ with $s < q$ in $P$ and $\dim(I') \geq \dim(I)/3c$, where $I':= \set{(x,y)\in I: q\leq y \text{ in $P$}}$. Let $S' := (S - \D[q])\cup\set{q}$. Observe that $\val(S') > \val(S)$ and $|S'| \leq |S| \leq m$, since $\D[q] \cap S \neq \emptyset$ and the height of $q$ is strictly larger than the heights of all the elements in $\D[q] \cap S$. Moreover, \[ \dim(I') \geq \dim(I)/3c > d/3c = (3c)^{f(h,t)-\val(S)-1} \geq (3c)^{f(h,t)-\val(S')}, \] showing that $S'$ was a better choice than $S$, a contradiction. Hence, there is no such element $q$, and it only remains to show that $|S|=m$. Arguing by contradiction, suppose $|S| < m$. Consider the poset $Q:= P - \D[S]$, and let $I_Q := I \cap \Inc(Q)$. First, we claim that $I - I_Q$ is reversible in $P$. Arguing by contradiction, suppose that this set contains an alternating cycle $(x_1,y_1), \dots, (x_k, y_k)$. Since $Q$ is an induced subposet of $P$, for each $i\in [k]$, at least one of $x_i$ and $y_i$ must be in $\D[S]$ (otherwise, $(x_i,y_i)$ would be an incomparable pair of $Q$). We cannot have $x_i \in \D[S]$, because otherwise $x_i \leq s$ in $P$ for some $s \in S$, and since $s \leq y_i$ in $P$ this would contradict the fact that $x_i$ and $y_i$ are incomparable. Thus, $y_i \in \D[S]$. However, since $x_{i-1} \leq y_i$ in $P$ (taking indices cyclically), it follows that $x_{i-1} \in \D[S]$, a contradiction.
Hence, $I-I_Q$ is reversible, as claimed. It follows that \begin{equation} \dim(I_Q) \geq \dim(I) - 1. \end{equation} Applying Lemma~\ref{lemma:q-support} to the poset $Q$ and the set $I_Q$, we obtain an element $q\in Q$ such that $\dim_Q\left( \{(x,y)\in I_Q: q \leq y \text{ in } Q\}\right) \geq \dim_Q(I_Q) / c - 2$. (The subscript $Q$ indicates that dimension is computed w.r.t.\ $Q$.) Here we use that $Q$ has height at most that of $P$, and thus at most $h$, and that the cover graph $G_Q$ of $Q$ is an (induced) subgraph of $G$ (since $Q$ is an upset of $P$), and thus $\wcol_{4h-4}(G_Q) \leq \wcol_{4h-4}(G)\leq c$. It only remains to point out that $\dim_Q(I_Q)=\dim(I_Q)$ because $Q$ is an induced subposet of $P$ (that is, a subset of $I_Q$ is an alternating cycle in $Q$ if and only if it is one in $P$), and similarly $\dim_Q\left( \{(x,y)\in I_Q: q \leq y \text{ in } Q\}\right) = \dim\left( \{(x,y)\in I_Q: q \leq y \text{ in } P\}\right)$. Putting everything together, we obtain \begin{align*} \dim\left( \{(x,y)\in I_Q: q \leq y \text{ in } P\}\right) &\geq \dim(I_Q) / c - 2 \\ &\geq (\dim(I) - 1)/c - 2 \\ &\geq \dim(I)/c - 3. \end{align*} Now, let $S' := (S - \D[q])\cup\set{q}$ and $I':= \set{(x,y)\in I: q\leq y \text{ in $P$}}$. Observe that $\val(S') > \val(S)$ and $|S'| \leq |S| +1 \leq m$. Moreover, \begin{align*} \dim(I') &\geq \dim(\set{(x,y)\in I_Q : q\leq y \text{ in $P$}}) \\ &\geq \dim(I)/c - 3 \\ &> (3c)^{f(h,t)-\val(S)}/c-3\\ &\geq 3\cdot (3c)^{f(h,t)-\val(S)-1}-3\\ &\geq (3c)^{f(h,t)-\val(S')}. \end{align*} (For the last inequality we use that $\val(S) < \val(S') < f(h,t)$.) This shows that $S'$ is a better choice than $S$, a contradiction. \end{proof} Let $S$ denote an antichain given by Claim~\ref{claim:small-dim}, and let $I$ denote the corresponding set of incomparable pairs. The next claim will be used to build the desired subdivision of $K_t$; the set $V$ will be the set of branch vertices. \begin{claim} There exist disjoint sets $V\subset P$ and $R\subset S$ such that \begin{enumerate2} \item\label{inv:V-and-R-sizes} $\norm{V}=t$ and $|R|\geq m^{h^{-t}} = \binom{t}{2}$, \item\label{inv:clean-branching-for-V} for all $(v,r)\in V \times R$, there is $p\in P$ such that $v$ covers $p$ in $P$ and $\D[p]\cap R=\set{r}$. \end{enumerate2} \end{claim} \begin{proof} Choose disjoint sets $V\subset P$ and $R\subset S$ satisfying \ref{inv:clean-branching-for-V} and \[ \norm{V}=j \quad \textrm{ and } \quad |R|\geq m^{h^{-j}} = \binom{t}{2}^{h^{t-j}}, \] with $j \leq t$ as large as possible. Note that $V=\emptyset$ and $R=S$ is a candidate, hence this choice is possible. We claim that $j=t$, which proves the claim. Arguing by contradiction, assume $j<t$. For every $v\in V$, there is $r\in R\subseteq S$ such that $r<v$ in $P$ by~\ref{inv:clean-branching-for-V}. Hence, by Claim~\ref{claim:small-dim} \begin{align*} \dim\left(\left\{(x,y)\in I: y \not\in \Up[V]\right\} \right) &\geq \dim(I) - \sum_{v\in V} \dim(\set{(x,y)\in I: v\leq y \text{ in $P$}})\\ &> \dim(I) - \norm{V}\cdot \dim(I)/(3c)\\ &\geq 2 (1- t/(3c))\\ &> 1. \end{align*} (This is the place in the proof where we use our assumption that $c > t$.) It follows that the left-hand side is at least $2$. Therefore, $\set{y: (x,y)\in I} - \Up[V]$ is not empty. (Recall that the dimension of an empty set of incomparable pairs is $1$.) Choose some element $y$ in this set. Now, starting from the element $y$, we go down along cover relations in the poset $P$.
Initially we set $v:=y$, and as long as there is an element $x\in P$ such that $v$ covers $x$ in $P$ and \begin{equation} \norm{\D[x]\cap R} > \norm{\D[v]\cap R}/m^{h^{-(j+1)}},\quad \textrm{we update $v:=x$.}\label{eq:v-update} \end{equation} Note that the process must stop, as the height of $v$ decreases with every move. We claim that $v$ never goes down to a minimal element or to an element of $R$. Indeed, if in the above procedure we are considering an element $x$ covered by $v$, then at most $h-2$ update steps have been performed so far, and hence \[ \norm{\D[v]\cap R} > \frac{\norm{\D[y]\cap R}}{m^{(h-2)h^{-(j+1)}}} = \frac{\norm{R}}{m^{(h-2)h^{-(j+1)}} } \geq m^{h^{-j} - (h-2)h^{-(j+1)}} \geq m^{h^{-(j+1)}}. \] (Note that $\norm{\D[y]\cap R}=\norm{R}$ since $s \leq y$ in $P$ for all $s\in S$, by our choice of $y$.) Now, if $x$ is a minimal element or $x\in R$, then $\norm{\D[x]\cap R} \leq 1$ (note that $\norm{\D[x]\cap R} = 1$ when $x\in R$ because $R\subseteq S$ and $S$ is an antichain). Hence, the inequality in \eqref{eq:v-update} cannot hold. Therefore, at the end of the process $v$ is neither a minimal element of $P$ nor an element of $R$, as claimed. Consider now the set $Z$ consisting of all elements that are covered by $v$ in $P$. Since $v\not\in R$, we have $\D[v]\cap R \subseteq \D[Z]$. Let $Z'$ be an inclusion-wise minimal subset of $Z$ such that $\D[v]\cap R \subseteq \D[Z']$. The minimality of $Z'$ allows us to fix for every $z\in Z'$ an element $r_z\in \D[v]\cap R$ such that $r_z\in \D[z]$ and $r_z\not\in \D[z']$ for every $z'\in Z' - \{z\}$. Let \[ V' := V\cup\set{v}\quad \textrm{ and }\quad R' := \set{r_z: z\in Z'}. \] Recall that we have chosen $y$ such that $w\not\leq y$ in $P$ for every $w\in V$. On the other hand, by our procedure we have $v \leq y$ in $P$. Thus $v\not\in V$, and $\norm{V'}=j+1$. We will show that the pair $(V', R')$ was a candidate for our choice at the beginning of the proof, and hence a better choice than the pair $(V, R)$. Since the elements $r_z$ and $r_{z'}$ are distinct for distinct $z,z'\in Z'$, we have $\norm{R'}=\norm{Z'}$. Moreover, \begin{align*} \norm{\D[v]\cap R} = \norm{\D[Z']\cap R} \leq \sum_{z\in Z'}\norm{\D[z]\cap R} \leq \norm{Z'}\cdot \frac{\norm{\D[v]\cap R}}{m^{h^{-(j+1)}}}. \end{align*} We deduce that \[ \norm{R'}=\norm{Z'}\geq m^{h^{-(j+1)}}. \] Note that $V'$ and $R'$ are disjoint since $V$ is disjoint from $R$ and since $v$ is not contained in $R$, and thus not in $R'$ either. It remains to verify~\ref{inv:clean-branching-for-V} for $(V',R')$. Since $R'\subseteq R$, we only need to check this property for the new vertex $v$. Consider an element $r\in R'$. By the definition of $R'$ there is $z\in Z'$ such that $r=r_z$. Recalling the way we defined $r_z$, we obtain $\D[z]\cap R'=\set{r_z}$. This shows that the pair $(V', R')$ was a better choice than $(V, R)$, a contradiction. \end{proof} For the rest of the proof let $(V,R)$ be a pair given by the above claim. As mentioned, the vertices of $V$ will serve as the branch vertices of the subdivision of $K_t$ we are building. Next, we connect these vertices pairwise with internally vertex-disjoint paths. By \ref{inv:V-and-R-sizes} we have \[ \norm{V}=t \quad\textrm{ and }\quad \norm{R} \geq \binom{t}{2}. \] Thus, for each unordered pair $\{v_1,v_2\} \subseteq V$ we can choose a corresponding element $r_{\{v_1,v_2\}}\in R$ in such a way that all chosen elements are distinct.
Furthermore, by Invariant~\ref{inv:clean-branching-for-V}, there are elements $p_1$ and $p_2$ covered respectively by $v_1$ and $v_2$ in $P$ such that $\D[p_1]\cap R=\D[p_2]\cap R=\set{r_{\{v_1,v_2\}}}$. Let \[ r_{\{v_1,v_2\}}=u_1<u_2<\cdots<u_{k}<p_{1}<v_1\ \text{ and }\ r_{\{v_1,v_2\}}=w_1<w_2<\cdots<w_{\ell}<p_{2}<v_2 \] be covering chains in $P$. Clearly, the union of the two covering chains contains a path connecting the vertices $v_1$ and $v_2$ in $G$; fix such a path $Q_{\{v_1,v_2\}}$ for the unordered pair $\{v_1,v_2\}$. Observe that the path $Q_{\{v_1,v_2\}}$ has length at most $2h-2$. Connecting all other pairs of vertices in $V$ in a similar way, we claim that the union of these paths forms a subdivision of $K_t$. All we need to prove is that whenever there is $z\in Q_{\{v_1,v_2\}}\cap Q_{\{v_1',v_2'\}}$ for distinct sets $\set{v_1,v_2},\set{v_1',v_2'}\subseteq V$, then $z$ is an endpoint of both paths. Suppose to the contrary that $z$ is an internal vertex of one path, say of $Q_{\{v_1,v_2\}}$. By our construction, there are elements $p_1$ and $p_2$ covered respectively by $v_1$ and $v_2$ in $P$ with $\D[p_1]\cap R=\D[p_2]\cap R=\set{r_{\{v_1,v_2\}}}$. Furthermore, we have $z\leq p_1$ or $z\leq p_2$ in $P$. Say $z\leq p_1$ without loss of generality. From $z\in Q_{\{v_1',v_2'\}}$ we deduce that $r_{\{v_1',v_2'\}}\leq z$ in $P$, which implies that $r_{\{v_1',v_2'\}}\leq p_1$ in $P$. However, it follows that $r_{\{v_1',v_2'\}}\in \D[p_1]\cap R=\set{r_{\{v_1,v_2\}}}$ and hence $r_{\{v_1,v_2\}}=r_{\{v_1',v_2'\}}$, a contradiction to our construction. We conclude that both paths $Q_{\{v_1,v_2\}}$ and $Q_{\{v_1',v_2'\}}$ are indeed internally disjoint. Finally, since all the paths $Q_{\{v_1,v_2\}}$ ($\set{v_1,v_2}\subseteq V$) have length at most $2h-2$, this shows the existence of a $\leq\!(2h-3)$-subdivision of $K_t$ in $G$, as desired. This completes the proof of Theorem~\ref{thm:nowhere-dense-upper-bound}. \end{proof} \section{Applications} \label{sec:applications} In this section we discuss some applications of Theorem~\ref{thm:dim-wcol}, starting with posets whose cover graphs have bounded genus. It was shown by van den Heuvel, Ossona de Mendez, Quiroz, Rabinovich, and Siebertz~\cite{HOQRS} that \[ \wcol_r(G)\leq \left(2g+\binom{r+2}{2}\right)\cdot (2r+1) \] for every graph $G$ with genus $g$. Combining this inequality with Theorem~\ref{thm:dim-wcol}, we obtain the following upper bound. \begin{corollary} For every poset $P$ of height at most $h$ whose cover graph has genus $g$, \[ \dim(P)\leq 4^{\left(2g+\binom{3h-1}{2}\right)\cdot(6h-5)}. \] \end{corollary} For fixed genus, this is a $2^{\mathcal{O}(h^3)}$ bound on the dimension (a short numerical sketch below illustrates this growth). In particular, this improves on the previous best bound for posets with planar cover graphs~\cite{JMMTWW}, which was doubly exponential in the height. It is in fact suspected that posets with planar cover graphs have dimension at most linear in their height. This was recently proved~\cite{JMW_PlanarPosets} for posets whose {\em diagrams} can be drawn in a planar way; these posets form a strict subclass of posets with planar cover graphs. Regarding posets with planar cover graphs, Kozik, Micek, and Trotter recently announced that they could prove a polynomial bound on the dimension. It is rather remarkable that linear or polynomial bounds can be obtained when assuming that the poset has a planar diagram or a planar cover graph, respectively.
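To give a sense of the growth of the genus bound above, the following short Python sketch (purely illustrative; the function name \texttt{genus\_dim\_bound} is ours) evaluates the bound of the corollary for small heights:
\begin{verbatim}
from math import comb

def genus_dim_bound(h: int, g: int) -> int:
    # dim(P) <= 4^((2g + C(3h-1, 2)) * (6h - 5)), per the corollary above.
    return 4 ** ((2 * g + comb(3 * h - 1, 2)) * (6 * h - 5))

# For fixed genus the exponent grows like h^3, i.e. a 2^{O(h^3)} bound:
for h in (2, 3, 4):
    print(h, genus_dim_bound(h, g=0).bit_length())
\end{verbatim}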
Indeed, for the slightly larger class of posets with $K_5$-minor-free cover graphs, the constructions in~\cite{JMW_PlanarPosets} show that the dimension can already be exponential in the height. (This also follows from Theorem~\ref{thm:lb-treewidth} below, applied with $t=3$.) We continue our discussion with graphs of bounded treewidth. Let us first quickly recall the definitions of tree decompositions and treewidth (see e.g.\ Diestel~\cite{Diestel} for an introduction to this topic). A {\em tree decomposition} of a graph $G$ is a pair consisting of a tree $T$ and a collection $\{B_x \subseteq V(G) : x \in V(T)\}$ of sets of vertices of $G$ called {\em bags}, one for each node of $T$, satisfying: \begin{itemize} \item each vertex $v\in V(G)$ is contained in at least one bag; \item for each edge $uv\in E(G)$, there is a bag containing both $u$ and $v$, and \item for each vertex $v\in V(G)$, the set of nodes $x\in V(T)$ such that $v \in B_x$ induces a subtree of $T$. \end{itemize} The {\em width} of the tree decomposition is $\max_{x\in V(T)} \norm{B_x}-1$. The {\em treewidth} of $G$ is the minimum width of a tree decomposition of $G$. Grohe, Kreutzer, Rabinovich, Siebertz, and Stavropoulos~\cite{GKRSS} showed that \begin{align*} \wcol_r(G)\leq \binom{t+r}{t} \end{align*} for every graph $G$ of treewidth $t$. Combining Theorem~\ref{thm:dim-wcol} with the above bound, we obtain a single exponential bound. \begin{corollary} For every poset $P$ of height at most $h$ with a cover graph of treewidth $t$, \[ \dim(P)\leq 4^{\binom{t+3h-3}{t}}. \] \end{corollary} For fixed $t$, this is a $2^{\mathcal{O}(h^t)}$ bound on the dimension, which improves on the doubly exponential bound in~\cite{JMMTWW}. Surprisingly, this upper bound turns out to be essentially best possible: \begin{theorem}\label{thm:lb-treewidth} Let $t\geq 3$ be fixed. For each $h\geq 4$, there exists a poset $P$ of height at most $h$ whose cover graph has treewidth at most $t$, and such that \[ \dim(P)\geq 2^{\Omega(h^{\lfloor(t-1)/2\rfloor})}. \] \end{theorem} This theorem will be implied by the following slightly more technical theorem, which is an extension of the construction for treewidth $3$ in~\cite{JMW_PlanarPosets}. In this theorem, we use the following terminology: if the minimal and maximal elements of a poset $P$ induce a standard example $S_k$, then each of the $k$ pairs $(a,b)$ with $a$ a minimal element, $b$ a maximal element, and $(a, b) \in \Inc(P)$ is called a {\em vertical pair}. \begin{theorem}\label{thm:lb-tw-construct} For every $h\geq 1 $ and $t\geq 1$, there exists a poset $P_{h,t}$ and a tree decomposition of its cover graph such that \begin{enumerate} \item\label{item:tw-height} $P_{h,t}$ has height $2h$; \item\label{item:tw-standard} the minimal and maximal elements of $P_{h,t}$ induce the standard example $S_k$ with $k=2^{\binom{h+t-1}{t}}$; \item\label{item:tw-width} the tree decomposition has width at most $2t+1$, and \item\label{item:tw-bag} for each vertical pair $(a,b)$ in $P_{h,t}$ there is a bag of the tree decomposition containing both $a$ and $b$. \end{enumerate} \end{theorem} \begin{proof} We prove the theorem by induction on $h$ and $t$. Let us first deal with the case $h=1$, which serves as the base case of the induction. If $h=1$, then it is easy to see that letting $P_{1,t}$ be the standard example $S_2$ fulfills the desired conditions. For the tree decomposition, it suffices to take a tree consisting of a single node whose bag contains all four vertices.
(We note that we could in fact take $P_{1,t} := S_{2t+1}$ and slightly increase the bound in~\ref{item:tw-standard}, but the gain is negligible.) Next, for the inductive case, suppose that $h \geq 2$. We treat separately the cases $t=1$ and $t \geq 2$. First, suppose that $t=1$. The poset $P_{h,1}$ is defined using the inductive construction illustrated in Figure~\ref{fig:height-tw-ex} (left): We start with the poset $P_{h-1,1}$, and for each vertical pair $(a,b)$ in $P_{h-1,1}$ we introduce four elements $x_1,x_2,y_1,y_2$ forming a standard example $S_2$ (with $x$'s and $y$'s being minimal and maximal elements, respectively). Then we add the relations $x_1 < a$ and $x_2 < a$, and $b < y_1$ and $b < y_2$, and take the transitive closure. This defines the poset $P_{h,1}$. It is easy to see that the height of $P_{h,1}$ is exactly the height of $P_{h-1,1}$ plus $2$, which implies \ref{item:tw-height}. It is also easily checked that the number of minimal (maximal) elements in $P_{h,1}$ is twice the number in $P_{h-1,1}$, which was $2^{h-1}$, and that the union of minimal and maximal elements induces the standard example $S_{2^h}$, showing~\ref{item:tw-standard}. Now, consider a tree decomposition of the cover graph of $P_{h-1,1}$ satisfying \ref{item:tw-width} and \ref{item:tw-bag}. For each vertical pair $(a,b)$ of $P_{h-1,1}$, consider a node $z$ of the tree whose bag $B_z$ contains both $a$ and $b$, and let $x_1,x_2,y_1,y_2$ be the four elements introduced when considering $(a,b)$ in the definition of $P_{h,1}$. Extend the tree decomposition by adding three new nodes $z', z'_1, z'_2$ with bags $B_{z'}:= \{a,b,y_1,y_2\}$, $B_{z'_1}:= \{a,x_1,y_1,y_2\}$, $B_{z'_2}:= \{a,x_2,y_1,y_2\}$, and adding the three edges $zz', z'z'_1, z'z'_2$ to the tree, as illustrated in Figure~\ref{fig:tw-3-ex}. Clearly, once this extension is done for each vertical pair of $P_{h-1,1}$, the resulting tree decomposition of the cover graph of $P_{h,1}$ satisfies \ref{item:tw-width} and \ref{item:tw-bag}. Next, suppose that $t\geq 2$. We start with a copy of $P_{h-1,t}$ and let $(a_1,b_1),\ldots,(a_\ell,b_\ell)$ denote its vertical pairs. For each vertical pair $(a_i,b_i)$, we introduce a copy $P^i$ of $P_{h,t-1}$, and add the relation $x < a_i$ for each minimal element $x$ of $P^i$, and the relation $b_i < y$ for each maximal element $y$ of $P^i$; see Figure~\ref{fig:height-tw-ex} (right). Then $P_{h,t}$ is obtained by taking the transitive closure of this construction. Observe that $P_{h,t}$ has height $2h$, thus~\ref{item:tw-height} holds. Moreover, the minimal and maximal elements of $P_{h,t}$ induce a standard example $S_k$ with \[ k=2^{\binom{h+t-2}{t}}\cdot 2^{\binom{h+t-2}{t-1}}=2^{\binom{h+t-1}{t}} \] by the induction hypothesis, showing~\ref{item:tw-standard}. Next, consider the tree decomposition of the cover graph of $P_{h-1,t}$ given by the induction hypothesis. We extend this tree decomposition by doing the following for each vertical pair $(a_i,b_i)$ of $P_{h-1,t}$: Consider a node $z^i$ of the tree whose bag contains both $a_i$ and $b_i$. Take the tree decomposition of the cover graph of $P^i$ given by the induction hypothesis and denote its tree by $T^i$ (on a new set of nodes). Then add an edge between the node $z^i$ and an arbitrary node of $T^i$, and add $a_i$ and $b_i$ to every bag of nodes coming from $T^i$.
It is easily checked that this defines a tree decomposition of the cover graph of $P_{h,t}$, of width at most $2t+1$, such that for each vertical pair $(a,b)$ of $P_{h,t}$ there is a bag containing both $a$ and $b$. Therefore, properties \ref{item:tw-width} and \ref{item:tw-bag} are satisfied. \end{proof} \begin{figure}[t] \centering \includegraphics{height-tw-construction} \caption{Inductive construction of $P_{h,t}$.} \label{fig:height-tw-ex} \end{figure} \begin{figure}[h] \centering \includegraphics{tw-3-extend} \caption{Extending the tree decomposition.} \label{fig:tw-3-ex} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:lb-treewidth}] Let $t\geq 3$ and $h\geq 4$. Then we set $h':=\lfloor h/2\rfloor$ and $t':=\lfloor(t-1)/2\rfloor$. With these values, the poset $P_{h',t'}$ from Theorem~\ref{thm:lb-tw-construct} has height $2h'\leq h$ and its cover graph has treewidth at most $2t'+1\leq t$. Moreover, \[ \dim(P_{h',t'})\geq 2^{\binom{h'+t'-1}{t'}}=2^{\Omega(h'^{t'})}=2^{\Omega(h^{\lfloor (t-1)/2\rfloor})}. \] (Recall that the asymptotics in the theorem statement are taken with respect to $h$, with $t$ being a fixed constant.) \end{proof} We continue with the case of posets whose cover graphs exclude $K_t$ as a minor. It was shown by van den Heuvel {\it et al.}~\cite{HOQRS} that \[ \wcol_r(G)\leq \binom{r+t-2}{t-2}\cdot (t-3)(2r+1)\in\mathcal{O}(r^{t-1}) \] for every graph $G$ excluding $K_t$ as a minor. Together with Theorem~\ref{thm:dim-wcol}, this yields the following improvement on the previous best bound~\cite{MW15}, which was doubly exponential in the height (for fixed $t$). \begin{corollary} For every poset $P$ of height at most $h$ whose cover graph excludes $K_t$ as a minor, \[ \dim(P)\leq 4^{\binom{3h+t-5}{t-2}\cdot (t-3)(6h-5)}. \] \end{corollary} For a fixed integer $t \geq 5$, this $2^{\mathcal{O}(h^{t-1})}$ bound is again essentially best possible by Theorem~\ref{thm:lb-treewidth} (using an upper bound of $t-2 \geq 3$ on the treewidth), because graphs of treewidth at most $t-2$ cannot contain $K_{t}$ as a minor. On the other hand, it is no coincidence that we cannot use Theorem~\ref{thm:lb-treewidth} in this way when $t \leq 4$: indeed, posets whose cover graphs exclude $K_4$ as a minor (or equivalently, have treewidth at most $2$) have dimension bounded by a universal constant (at most $1276$), irrespective of their height~\cite{JMTWW}. Regarding graphs $G$ that exclude $K_t$ as a topological minor, it is implicitly proved in the work of Kreutzer, Pilipczuk, Rabinovich, and Siebertz~\cite{KPRS} that these graphs satisfy \[ \wcol_r(G)\leq 2^{\mathcal{O}(r\log r)} \] when $t$ is fixed. Combining this inequality with Theorem~\ref{thm:dim-wcol}, we get a slight improvement upon the bound derived in~\cite{MW15}; however, the resulting bound remains doubly exponential: \begin{corollary} Let $t \geq 1$ be a fixed integer. Then, every poset $P$ of height at most $h$ whose cover graph excludes $K_t$ as a topological minor satisfies \[ \dim(P)\leq 2^{2^{\mathcal{O}(h\log h)}}. \] \end{corollary} See Figure~\ref{fig:hierarchy} for a summary of the best known upper bounds and extremal examples for the various graph classes discussed in this section (and a few more). Bounds not already mentioned in the text can be found in~\cite{TM77,FTW13,JMW_PlanarPosets,Veit_PhD}. \begin{figure}[ht!]
\centering \includegraphics{hierarchy-bounds} \caption{Summary of known bounds.\label{fig:hierarchy}} \end{figure} \section{Open problems} \label{sec:open_problems} One remaining open problem is to prove the backward direction of Conjecture~\ref{conj:bounded_exp}, which we restate here: \begin{conjecture} Let $\mathcal{C}$ be a monotone class of graphs such that for every fixed $h\geq 1$, posets of height at most $h$ whose cover graphs are in $\mathcal{C}$ have bounded dimension. Then $\mathcal{C}$ has bounded expansion. \end{conjecture} As a first step, one could try to show that graphs in the class $\mathcal{C}$ have bounded average degree. In this direction, we offer the following related conjecture. \begin{conjecture} Let $\mathcal{C}$ be a monotone class of bipartite graphs such that, seeing the graphs in $\mathcal{C}$ as posets of height (at most) $2$, these posets have bounded dimension. Then the graphs in $\mathcal{C}$ have bounded average degree. \end{conjecture} \section*{Acknowledgements} We are very grateful to the anonymous referees for their very helpful comments. In particular, we thank one referee for pointing out an error in the proof of Claim~\ref{claim:small-dim} regarding how element $q$ was chosen, and another referee for her/his many suggestions on how to improve the exposition of the proofs and shorten the arguments. \bibliographystyle{plain}
\section{INTRODUCTION} Complex rules in modern tasks often specify desired system behaviors and timed temporal constraints that require mission completion within a given period. Performing such tasks can be challenging, especially when the operating environment is dynamic and unknown. For instance, user-specified missions or temporal constraints can be found infeasible during motion planning. Therefore, this work focuses on online motion planning subject to timed high-level specifications. Linear temporal logic (LTL) has been widely used for task and motion planning due to its rich expressivity and resemblance to natural language \cite{Belta2007}. When considering timed formal languages, extensions of traditional LTL such as metric interval temporal logic (MITL) \cite{Alur1996}, signal temporal logic (STL) \cite{Maler2004}, and time-window temporal logic (TWTL) \cite{Vasile2017TWTL} are often employed. However, most existing results are built on the assumption that user-specified tasks are feasible. New challenges arise when the operating environment is dynamic and unknown, since the environment can become prohibitive (e.g., an area to be visited is found later to be surrounded by obstacles), leading to mission failure. To address these challenges, tasks with temporal logic specifications are often relaxed so as to be fulfilled as much as possible. A least-violating control strategy is developed in \cite{Castro2013,Tumova2016,lahijanian2016iterative, Vasile2017,Cai2020b,Cai2021_soft_RL} to keep the revised motion plan close to the original LTL specifications. In \cite{Guo2015,Andersson2018,Ahlberg2019}, hard and soft constraints are considered so that the satisfaction of hard constraints is guaranteed while soft constraints are minimally violated. Time relaxation of TWTL has been investigated in \cite{Peterson2020,kamale2021automata,aksaray2021learning}. Receding horizon control (RHC) has also been integrated with temporal logic specifications to deal with motion planning in dynamic environments \cite{wongpiromsarn2012receding,Ding2014,ulusoy2014receding, Lu2018,Cai2020c,Aasi2021}. Other representative results include learning-based methods \cite{Hasanbeig2019reinforcement, Cai2020, Cai2020d, Cai2021modular, Cai2021safe} and sampling-based reactive methods \cite{Vasile2020,kantaros2020reactive}. Most of the results mentioned above do not consider time constraints in motion planning. MITL is an automaton-based temporal logic that has the flexibility to express general time constraints. Recent works \cite{Nikou2016,Nikou2018, verginis2019, li2021policy, Xu2021controller} propose different strategies to satisfy MITL formulas. The works \cite{Nikou2016,Nikou2018} consider cooperative planning of a multi-agent system (MAS) with MITL specifications, and the work \cite{Xu2021controller} further investigates MITL planning of a MAS subject to intermittent communication. When considering dynamic environments, MITL with probabilistic distributions is developed in \cite{li2021policy} to express time-sensitive missions, and a reconfigurable algorithm is developed in \cite{verginis2019}. However, the aforementioned works assume that the desired MITL specifications are always feasible for the robotic system. Andersson et al. \cite{Andersson2018,Ahlberg2019} first take soft MITL constraints into account and study human-robot interaction, but only static environments are considered.
It is not yet understood how timed temporal tasks can be successfully managed in a dynamic and unknown environment, where predefined tasks may become infeasible. Motivated by these challenges, this work considers online motion planning of an autonomous system with timed temporal specifications. Unlike STL, which is defined over predicates, MITL provides more general time constraints and can express tasks over infinite horizons. Furthermore, MITL can be translated into timed automata, which allows us to exploit graph-theoretical approaches for analysis and design. Therefore, MITL is used in this work. The contributions of this work are as follows. First, the operating environment is not fully known a priori and is dynamic in the sense of containing mobile obstacles and time-varying areas of interest that can only be observed locally. The dynamic and unknown environment can lead to potentially conflicting tasks (i.e., the pre-specified MITL missions or time constraints cannot be fully satisfied). Inspired by our previous work \cite{Cai2020c}, we consider both hard and soft constraints. The motivation behind this design is that safety is crucial in real-world applications; therefore, we formulate safety requirements (e.g., obstacle avoidance) as hard constraints that cannot be violated in any case, whereas soft constraints can be relaxed if the environment does not permit their satisfaction, so that the agent can accomplish the tasks as much as possible. Second, to deal with time constraints, we apply MITL specifications to model timed temporal tasks and further classify soft constraints by how they can be violated. For instance, a mission can fail because the agent cannot reach the destination on time, or because the agent visits some risky regions. Our approach therefore considers violations of both time constraints and task specifications caused by dynamic obstacles, which are formulated as continuous and discrete violations, respectively. Our framework generates controllers that achieve multiple objectives in decreasing order of priority: 1) formally guaranteeing the satisfaction of hard constraints; 2) satisfying soft constraints as much as possible (i.e., minimizing the violation cost); and 3) collecting time-varying rewards as much as possible (e.g., visiting areas of higher interest more often). Different from \cite{Ding2014}, which assumes that the LTL specifications can be exactly achieved, we relax this assumption and consider tasks with time constraints described by MITL formulas. Unlike \cite{Andersson2018,Ahlberg2019}, we consider a dynamic unknown environment in which the agent needs to detect changes and update its knowledge in real time. In particular, a multi-objective RHC is synthesized online to adapt to the dynamic environment, which guarantees the safety constraint and minimum violation of the soft specification. Furthermore, it is worth noting that the RHC only considers local dynamic information online while global satisfaction is formally guaranteed, which is efficient for large-scale environments. Finally, we demonstrate the effectiveness of our algorithm on a complex infinite-horizon task in simulation. \section{PRELIMINARIES\label{Sec:Preliminary}} A dynamical system with finite states evolving in an environment can be modeled by a weighted transition system.
\begin{defn} \cite{Baier2008}\label{def:WTS} A weighted transition system (WTS) is a tuple $\mathscr{\mathcal{T=\textrm{\ensuremath{\left(Q,q_{0},\delta,\mathcal{AP},L,\mathcal{\omega}\right)}}}}$, where $Q$ is a finite set of states; $q_{0}\in Q$ is the initial state; $\delta\subseteq Q\times Q$ is the transition relation; $\mathcal{AP}$ is the finite set of atomic propositions; $L:Q\rightarrow2^{\mathcal{AP}}$ is a labeling function, and $\omega:\delta\rightarrow\mathbb{R}^{+}$ assigns a positive weight to each transition. \end{defn} A timed run of a WTS $\mathcal{T}$ is an infinite sequence $\boldsymbol{r}=(q_{0},\tau_{0})(q_{1},\tau_{1})\ldots$, where $\boldsymbol{q}=q_{0}q_{1}\ldots$ is a trajectory with $q_{i}\in Q$, and $\boldsymbol{\tau}=\tau_{0}\tau_{1}\ldots$ is a time sequence with $\tau_{0}=0$ and $\tau_{i+1}=\tau_{i}+\omega(q_{i},q_{i+1}),\forall i\geq0$. The timed run $\boldsymbol{r}$ generates a timed word $\boldsymbol{w}=(\sigma_{0},\tau_{0})(\sigma_{1},\tau_{1})\ldots$ where $\boldsymbol{\sigma}=\sigma_{0}\sigma_{1}\ldots$ is an infinite word with $\sigma_{i}=L(q_{i})$ for $i\geq0$. Let $R_{k}(q)$ denote the time-varying reward associated with a state $q$ at time $k$. The reward reflects the time-varying objective in the environment. Given a predicted trajectory $\boldsymbol{q}_{k}=q_{0}q_{1}\ldots q_{N}$ at time $k$ with a finite horizon $N$, the accumulated reward along the trajectory $\boldsymbol{q}_{k}$ can be computed as $\boldsymbol{R}_{k}(\boldsymbol{q}_{k})=\sum_{i=1}^{N}R_{k}(q_{i})$. Note that this paper mainly studies high-level planning and decision-making problems. Similar to \cite{Cai2020b}, we assume that low-level controllers can achieve go-to-goal navigation, so that the motion can be abstracted by a WTS. We further assume that the workspace boundaries are known, which is a common assumption in many existing works \cite{Nikou2016,Nikou2018,Guo2015,Andersson2018,Ahlberg2019,Ding2014}. \subsection{Metric Interval Temporal Logic}\label{sec:MITL} Metric interval temporal logic (MITL) is a specific temporal logic that includes timed temporal specifications \cite{Nikou2018}. The syntax of MITL formulas is defined as $\phi:=p\mid\neg\phi\mid\phi_{1}\land\phi_{2}\mid\diamondsuit_{I}\phi\mid\boxempty_{I}\phi\mid\phi_{1}\mathcal{U_{\mathit{I}}}\phi_{2}$, where $p\in\mathcal{AP}$, $\land(\textrm{conjunction}),\lnot(\textrm{negation})$ are Boolean operators and $\diamondsuit_{I}\text{(eventually)}$, $\boxempty_{I}(\textrm{always})$, $\mathcal{U_{\mathit{I}}}(\textrm{until})$ are temporal operators bounded by the non-empty time interval $I=[a,b]$ with $a,b\in\mathbb{R}_{\geq0},b>a$. They are called temporally bounded operators if $b\neq\infty$, and non-temporally bounded operators otherwise. A formula $\phi$ containing a temporally bounded operator will be called a temporally bounded formula. The same holds for non-temporally bounded formulas.
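For concreteness, MITL formulas over this grammar admit a straightforward tree representation. The following minimal Python sketch is our own illustration (the class and field names are hypothetical and not taken from the cited works):
\begin{verbatim}
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MITL:
    # op is one of: 'ap', 'not', 'and', 'eventually', 'always', 'until'
    op: str
    children: Tuple["MITL", ...] = ()
    ap: Optional[str] = None                        # atomic proposition p
    interval: Optional[Tuple[float, float]] = None  # time interval I = [a, b]

def temporally_bounded(phi: MITL) -> bool:
    # A formula is temporally bounded if some operator carries b != infinity.
    if phi.interval is not None and phi.interval[1] != float('inf'):
        return True
    return any(temporally_bounded(c) for c in phi.children)

# Example: eventually reach 'pear' within 10 time units, i.e. F_{[0,10]} pear.
phi = MITL('eventually', (MITL('ap', ap='pear'),), interval=(0.0, 10.0))
print(temporally_bounded(phi))  # True
\end{verbatim}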
Given a timed run $\boldsymbol{r}$ of $\mathcal{T}$ and an MITL formula $\phi$, let $(\boldsymbol{r},i)$ denote the indexed element $(q_{i},\tau_{i})$. Then the satisfaction relation $\models$ of MITL can be defined as: \[ \begin{array}{l} (\boldsymbol{r},i)\models p\Longleftrightarrow p\in L(q_{i})\\ (\boldsymbol{r},i)\models\lnot\phi\Longleftrightarrow(\boldsymbol{r},i)\nvDash\phi\\ (\boldsymbol{r},i)\models\phi_{1}\land\phi_{2}\Longleftrightarrow(\boldsymbol{r},i)\models\phi_{1}\textrm{\textrm{ and }}(\boldsymbol{r},i)\models\phi_{2}\\ (\boldsymbol{r},i)\models\diamondsuit_{I}\phi\Longleftrightarrow\exists j,i\leq j,s.t.(\boldsymbol{r},j)\models\phi,\tau_{j}-\tau_{i}\in I\\ (\boldsymbol{r},i)\models\boxempty_{I}\phi\Longleftrightarrow\forall j,i\leq j,\tau_{j}-\tau_{i}\in I\Rightarrow(\boldsymbol{r},j)\models\phi\\ (\boldsymbol{r},i)\models\phi_{1}\mathscr{\mathcal{U}}_{I}\phi_{2}\Longleftrightarrow\exists j,i\leq j,s.t.(\boldsymbol{r},j)\models\phi_{2},\tau_{j}-\tau_{i}\\ \textrm{\ \ \ \ \ \ \ \ \ \ \ \ \ensuremath{\in I} and }(\boldsymbol{r},k)\models\phi_{1}\textrm{\textrm{ for }\textrm{every} }i\leq k\leq j \end{array} \] \subsection{Timed B\"uchi Automaton\label{subsec:TBA}} Let $X=\{x_{1},x_{2},\ldots,x_{M}\}$ be a finite set of clocks. The set of clock constraints $\Phi(X)$ is defined by the grammar $\varphi\coloneqq\top\mid\lnot\varphi\mid\varphi_{1}\wedge\varphi_{2}\mid x\Join c$, where $x\in X$ is a clock, $c\in\mathbb{R^{+}}$ is a clock constant and $\Join\in\{<,>,\ge,\le,=\}$. A clock valuation $\nu$ : $X\rightarrow\mathbb{R^{+}}$ assigns a real value to each clock. We write $\nu\models\varphi$ if the valuation $\nu$ satisfies the clock constraint $\varphi$, where $\nu=(\nu_{1},\ldots,\nu_{M})$ with $\nu_{i}$ being the valuation of $x_{i}$, $\forall i\in\{1,\ldots,M\}$. An MITL formula can be converted into a Timed B\"uchi Automaton (TBA) \cite{Alur1994}. \begin{defn} A TBA is a tuple $\mathcal{A}=(S,S_{0},\mathcal{AP},\mathcal{L},X,I_{X},E,F)$ where $S$ is a finite set of states; $S_{0}\subseteq S$ is the set of initial states; $2^{\mathcal{AP}}$ is the alphabet where $\mathcal{AP}$ is a finite set of atomic propositions; $\mathcal{L}:S\rightarrow2^{\mathcal{AP}}$ is a labeling function; $X$ is a finite set of clocks; $I_{X}:S\rightarrow\Phi(X)$ is a map from states to clock constraints; $E\subseteq S\times\Phi(X)\times2^{\mathcal{AP}}\times S$ represents the set of edges of form $e=(s,g,a,s^{\prime})$ where $s,s^{\prime}$ are the source and target states, $g$ is the guard of the edge via an assigned clock constraint, and $a\in2^{\mathcal{AP}}$ is an input symbol; $F\subseteq S$ is a set of accepting states. \end{defn} \begin{defn} An automata timed run $\boldsymbol{r}_{\mathcal{A}}=(s_{0},\tau_{0})\ldots(s_{n},\tau_{n})$ of a TBA $\mathcal{A}$, corresponding to the timed run $\boldsymbol{r}=(q_{0},\tau_{0})\ldots(q_{n},\tau_{n})$ of a WTS $\mathcal{T}$, is a sequence where $s_{0}\in S_{0}$, $s_{j}\in S$, and $(s_{j},g_{j},a_{j},s_{j+1})\in E\ \forall j\geq0$ such that i) $\tau_{j}\models g_{j},j\geq0,$ and ii) $L(q_{j})\subseteq\mathcal{L}(s_{j}),\forall j$. \end{defn}
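To illustrate conditions i) and ii) of the definition above, consider the following minimal Python sketch (our own; for simplicity it assumes a single clock and interval guards), which checks whether a candidate automaton state sequence matches a timed run:
\begin{verbatim}
def matches(timed_run, states, guards, L_T, L_A):
    # timed_run: [(q_0, tau_0), (q_1, tau_1), ...]; states: [s_0, s_1, ...]
    # guards[(s, s_next)] = (lo, hi) encodes a single-clock interval guard.
    for j, (q, tau) in enumerate(timed_run):
        if not L_T[q] <= L_A[states[j]]:    # condition ii): L(q_j) in L(s_j)
            return False
        if j + 1 < len(timed_run):
            lo, hi = guards[(states[j], states[j + 1])]
            if not lo <= tau < hi:          # condition i): tau_j |= g_j
                return False
    return True

# Tiny usage example with label sets:
L_T = {'q0': set(), 'q1': {'pear'}}
L_A = {'s0': set(), 's1': {'pear'}}
print(matches([('q0', 0.0), ('q1', 4.0)], ['s0', 's1'],
              {('s0', 's1'): (0.0, 10.0)}, L_T, L_A))  # True
\end{verbatim}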
\begin{defn} Given a WTS $\mathcal{T}=\textrm{\ensuremath{\left(Q,q_{0},\delta,\mathcal{AP},L,\mathcal{\omega}\right)}}$ and a TBA $\mathcal{A}=(S,S_{0},\mathcal{AP},\mathcal{L},X,I_{X},E,F)$, the product automaton $\mathcal{P}=\mathcal{T\times A}$ is defined as a tuple $\mathcal{P}=\{P,P_{0},\mathcal{AP},L_{\mathcal{P}},\delta_{\mathcal{P}},I_{X}^{\mathcal{P}},\mathcal{F}_{\mathcal{P}},\omega_{\mathcal{P}}\}$, where $P\subseteq\left\{ (q,s)\in Q\times S:L(q)\subseteq \mathcal{L}(s)\right\}$ is the set of states; $P_{0}=\{q_{0}\}\times S_{0}$ is the set of initial states; $L_{\mathcal{P}}=P\rightarrow2^{\mathcal{AP}}$ is a labeling function, i.e., $L_{\mathcal{P}}(p)=L(q)$; $\delta_{\mathcal{P}}\subseteq P\times P$ is the set of transitions defined such that $((q,s),(q^{\prime},s^{\prime}))\in\delta_{\mathcal{P}}$ if and only if ($q,q^{\prime})\in\delta$ and $\exists g,a,$ such that $(s,g,a,s^{\prime})\in E$; $I_{X}^{\mathcal{P}}(p)=I_{X}(s)$ is a map of clock constraints; $\mathcal{F}_{\mathcal{P}}=Q\times F$ is the set of accepting states; $\omega_{\mathcal{P}}\colon\delta_{\mathcal{P}}\rightarrow\mathbb{R}^{+}$ is the positive weight function, i.e., $\omega_{\mathcal{P}}(p,p^{\prime})=\omega(q,q^{\prime})$. \end{defn} \section{Problem Formulation\label{sec:PF}} To better explain our motion planning strategy, we use the following running example throughout this work. \begin{example} \label{examp1} \begin{figure} \centering{}\includegraphics[scale=0.37]{figure1.PNG}\caption{\label{fig:example}(a) The simplified Pac-Man game with randomly populated $\mathtt{pear}$, $\mathtt{cherry}$, $\mathtt{grass}$ and $\mathtt{obstacle}$ (i.e., black blocks). (b) The environment as initially sensed by Pac-Man. Pac-Man only knows the positions of $\mathtt{pear}$, $\mathtt{cherry}$, $\mathtt{grass}$ and locally sensed time-varying rewards (i.e., cyan dots), without any knowledge about the number and distribution of obstacles.} \end{figure} Consider a motion planning problem for a simplified Pac-Man game in Fig. \ref{fig:example}. The maze is abstracted into a labeled grid-like graph, and the set of atomic propositions $\mathcal{AP}=\left\{ \mathtt{obstacle,grass,pear,cherry}\right\} $ indicates the labeled properties of regions. In particular, $\mathtt{obstacle}$ represents areas that should be totally avoided, $\mathtt{grass}$ represents risky areas that should be avoided if possible, and $\mathtt{pear}$ and $\mathtt{cherry}$ represent points of interest. The environment is dynamic in the sense of containing mobile obstacles and time-varying rewards $R_{k}(q)\in\mathbb{R}^{+}$ that are randomly generated. Cyan dots represent the rewards with size proportional to their value. We make the following assumptions: 1) the environment is only partially known to Pac-Man, i.e., the locations of $\mathtt{pear}$, $\mathtt{cherry}$, and $\mathtt{grass}$ are known, but not the obstacles it may encounter; 2) Pac-Man has limited sensing capability, i.e., it can only detect obstacles, sense region labels, and collect rewards within a local area around itself. The motion of Pac-Man is modeled by a weighted transition system $\mathcal{T}$ as in Def.
\ref{def:WTS} with four possible actions, \textquotedblleft up,\textquotedblright{} \textquotedblleft down,\textquotedblright{} \textquotedblleft right,\textquotedblright{} and \textquotedblleft left.\textquotedblright{} The timed temporal task of Pac-Man is specified by an MITL formula $\phi=\phi_{h}\wedge\phi_{s}$, where the hard constraints $\phi_{h}$ \textcolor{black}{enforce safety requirements (e.g., }$\phi_{h}=\lnot\mathtt{obstacle}$\textcolor{black}{) }that have to be fully satisfied, while the soft constraints $\phi_{s}$ represent tasks that can be relaxed if the environment does not permit them (e.g., $\phi_{s}=\lnot\mathtt{grass}\land\lozenge_{t<10}\mathtt{pear}$). \end{example} In Example \ref{examp1}, the motion planning problem is challenging since $\phi_{s}$ can be violated in multiple ways. For instance, suppose that $\mathtt{grass}$ lies between $\mathtt{pear}$ and Pac-Man, and that it takes more than 10 seconds to reach $\mathtt{pear}$ if Pac-Man circumvents $\mathtt{grass}$. In this case, Pac-Man can either violate the mission $\lnot\mathtt{grass}$ by traversing $\mathtt{grass}$ or violate the time constraint $\lozenge_{t<10}\mathtt{pear}$ by taking a longer but safer path. To consider potentially infeasible specifications, we define the total violation cost of an MITL formula as follows. \begin{defn} Given a timed run $\boldsymbol{r}=(q_{0},\tau_{0})\ldots(q_{n},\tau_{n})$ of a WTS $\mathcal{T}$, the total violation cost of an MITL formula $\phi$ is defined as \begin{equation} \mathcal{W}(\boldsymbol{r},\phi)=\stackrel[k=0]{n-1}{\sum}\omega(q_{k},q_{k+1})\omega_{v}(q_{k},q_{k+1},\phi), \label{eq:total_cost} \end{equation} where $\omega(q_{k},q_{k+1})=\tau_{k+1}-\tau_{k}$ is the time required for the transition $(q_{k},q_{k+1})$ and $\omega_{v}(q_{k},q_{k+1},\phi)$ is the violation cost of the transition with respect to $\phi$. \end{defn} The formal statement of the problem is then expressed as follows. \begin{problem} \label{prob1}Given a weighted transition system $\mathcal{T}$ and an MITL formula $\phi=\phi_{h}\land\phi_{s}$, the control objective is to design a multi-goal online planning strategy achieving the following objectives, in decreasing order of priority: 1) $\phi_{h}$ is fully satisfied; 2) if $\phi_{s}$ is not feasible, $\phi_{s}$ is fulfilled as much as possible, i.e., the total violation cost $\mathcal{W}(\boldsymbol{r},\phi_{s})$ is minimized; and 3) the agent collects as much reward as possible over an infinite-horizon operation. \end{problem}
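As a simple illustration of the total violation cost in (\ref{eq:total_cost}), consider the following Python sketch; the choice of per-transition violation weights is ours, purely for concreteness:
\begin{verbatim}
def total_violation_cost(timed_run, violation_weight):
    # W(r, phi) = sum_k w(q_k, q_{k+1}) * w_v(q_k, q_{k+1}, phi),
    # where w(q_k, q_{k+1}) = tau_{k+1} - tau_k is the transition time.
    cost = 0.0
    for (q, t), (q_next, t_next) in zip(timed_run, timed_run[1:]):
        cost += (t_next - t) * violation_weight(q, q_next)
    return cost

# E.g., unit violation whenever a transition enters a grass cell:
run = [('q0', 0.0), ('grass1', 2.0), ('q2', 3.5)]
w_v = lambda q, q_next: 1.0 if q_next.startswith('grass') else 0.0
print(total_violation_cost(run, w_v))  # 2.0
\end{verbatim}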
\section{Relaxed Automaton} Sec. \ref{subsec:Relax} presents the procedure of constructing the relaxed TBA to allow motion revision. Sec. \ref{subsec:Energy} presents the design of the energy function that guides the satisfaction of MITL specifications. Sec. \ref{subsec: Update} gives the online update of environment knowledge for motion planning. \subsection{Relaxed Timed B\"uchi Automaton \label{subsec:Relax}} \begin{algorithm} \caption{\label{alg:Construct S,F}Construct set of states $\hat{S}$, initial states $\hat{S}_{0}$ and accepting states $\hat{F}$ of a relaxed TBA} \small \singlespacing \begin{algorithmic}[1] \Procedure {Input: } {MITL specification $\phi=\phi_{h}\land\phi_{s}$ } {Output: } { $\hat{S},\hat{S}_{0},\hat{F}$} \State{ construct the state relevant to $\phi_{h}$:} \State{ add a state $\hat{s}_{sink}$} \State{ construct states relevant to $\phi_{s}$:} \State{ $\varPhi_{s}=\{\phi_{i}:\phi_{s}=\bigwedge_{i}\phi_{i}\}$} \For{ $\phi_{i}\in\varPhi_{s}$ } \If{$\phi_{i}$ is temporally bounded } \State $\varphi_{i}=\{\phi_{i}^{sat},\phi_{i}^{vio},\phi_{i}^{unc}\}$; \ElsIf{$\phi_{i}$ is non-temporally bounded of Type I } \State $\varphi_{i}=\{\phi_{i}^{sat},\phi_{i}^{unc}\}$; \Else{ $\varphi_{i}=\{\phi_{i}^{vio},\phi_{i}^{unc}\};$} \EndIf \EndFor \State $\psi_{s}^{j}=\underset{i}{\bigwedge}\phi_{i}^{state},\phi_{i}^{state}\in\varphi_{i}$; \State $\varPsi_{s}=\left\{ \psi_{s}^{j}:j=0,1,\ldots,n-1\right\} $ with $n=\prod_{i}\left|\varphi_{i}\right|$; \State $\hat{S}=\{\hat{s}_{k}:k=0,1,\ldots,n\}$; \State$\hat{S}_{0}=\hat{s}_{0}$, where $\hat{s}_{0}$ corresponds to $\psi_{s}^{0}=\bigwedge_{i}\phi_{i}^{unc}$; \State$\hat{F}=\hat{s}_{F}$, where $\hat{s}_{F}$ corresponds to $\psi_{s}^{F}=\bigwedge_{i_{1}\in I_{1}}\phi_{i_{1}}^{sat}\wedge\bigwedge_{i_{2}\in I_{2}}\phi_{i_{2}}^{unc}$, where $i_{1}\in I_{1}$ are the indexes of sub-formulas of $\phi_{s}$ that are either temporally bounded or of Type I, and $i_{2}\in I_{2}$ are the indexes of sub-formulas that are of Type II; \EndProcedure \end{algorithmic} \end{algorithm} To address the violation of MITL tasks, the relaxed TBA is defined to contain two extra components (i.e., a continuous violation cost and a discrete violation cost) compared with the original TBA. This section presents the procedure of constructing a relaxed TBA for an MITL formula $\phi=\phi_{h}\land\phi_{s}$. First, we explain how to build the set of states of a relaxed TBA (see Alg.~\ref{alg:Construct S,F}). Given the hard constraints $\phi_{h}$, which have to be fully satisfied and cannot be violated at any time, we add a sink state $\hat{s}_{sink}$ to the relaxed TBA to indicate the violation of hard constraints. Before treating the soft constraints $\phi_{s}$, a more detailed classification of the temporal operators in MITL formulas is introduced. An MITL specification $\phi$ can be written as $\phi=\bigwedge_{i\in\{1,2,\ldots,n\}}\phi_{i}$ s.t. $\phi_{i}\neq\phi_{j},\forall i\neq j$. For each sub-formula $\phi_{i}$, if it is temporally bounded, $\phi_{i}$ can be either satisfied, violated, or uncertain \cite{Andersson2018}. If $\phi_{i}$ is non-temporally bounded, it can be either satisfied/uncertain or violated/uncertain. Specifically, a non-temporally bounded formula $\phi_{i}$ is of $\Type$ \mbox{I} (i.e., satisfied/uncertain) if $\phi_{i}$ cannot be concluded to be violated at any time during a run, since there remains a possibility for it to be satisfied in the future. In contrast, it is of $\Type$ \mbox{II} (i.e., violated/uncertain) if $\phi_{i}$ cannot be concluded to be satisfied during a run, since it remains possible for it to be violated in the future. For instance, when $b=\infty$, $\diamondsuit_{[a,b]}$ is of $\Type$ \mbox{I} and $\Square_{[a,b]}$ is of $\Type$ \mbox{II}.
The operator $\mathcal{U}_{[a,b]}$ is special since its semantics consists of two parts, which can be classified as $\Type$ \mbox{I} and \mbox{II}, respectively. Hence we treat formulas of the form $A\mathcal{U}_{\left[a,b\right]}B$ as a combination of two non-temporally bounded sub-formulas. Based on the above classification, for the soft constraints $\phi_{s}=\bigwedge_{i\in\{1,\ldots,n\}}\phi_{i}$, the evaluation set $\varphi_{i}$ of a sub-formula $\phi_{i}$, which represents its possible satisfaction statuses, is defined as \begin{equation} \varphi_{i}=\left\{ \begin{array}{ll} \left\{ \phi_{i}^{vio},\phi_{i}^{sat},\phi_{i}^{unc}\right\} , & \textrm{if \ensuremath{\phi_{i}}}\textrm{ is temporally bounded},\\ \left\{ \phi_{i}^{sat},\phi_{i}^{unc}\right\} , & \textrm{if \ensuremath{\phi_{i}}}\textrm{ is non-temporally }\\ & \textrm{bounded of \ensuremath{\Type\ }\mbox{I} },\\ \left\{ \phi_{i}^{vio},\phi_{i}^{unc}\right\} , & \textrm{if \ensuremath{\phi_{i}}}\textrm{ is non-temporally }\\ & \textrm{bounded of \ensuremath{\Type\ }\mbox{II}}, \end{array}\right.\label{eq:EvaluFcn_i} \end{equation} Based on (\ref{eq:EvaluFcn_i}), a sub-formula evaluation $\psi_{s}$ of $\phi_{s}$ is defined as \begin{equation} \psi_{s}^{j}=\underset{i}{\bigwedge}\phi_{i}^{state},\phi_{i}^{state}\in\varphi_{i} \label{eq:psi_s}. \end{equation} In (\ref{eq:psi_s}), $\psi_{s}^{j}$ represents one possible outcome of the formula, which can be obtained by taking an element from the evaluation set $\varphi_{i}$ for each sub-formula $\phi_i$ and then taking the conjunction of all these elements. Each different combination corresponds to a sub-formula evaluation $\psi_s^{j}$. Let $\varPsi_{s}=\left\{ \psi_{s}^{j}:j=0,1,\ldots,n-1\right\} $ denote the set of all sub-formula evaluations of $\phi_{s}$, where $n=\prod_{i}\left|\varphi_{i}\right|$ and $\left|\varphi_{i}\right|$ is the number of elements in $\varphi_{i}$; that is, the number of evaluations equals the product of the sizes of the evaluation sets. The set $\varPsi_{s}$ represents all possible outcomes of $\phi_{s}$ at any time. Every possible $\psi_{s}^{j}\in\varPsi_{s}$ is associated with a state $\hat{s}$. The initial state $\hat{s}_{0}$ is the state whose corresponding sub-formulas are all evaluated as uncertain, which indicates that no progress has been made. The accepting state $\hat{s}_{F}$ is the state whose corresponding temporally bounded sub-formulas and non-temporally bounded sub-formulas of $\Type$ \mbox{I} are satisfied, while all non-temporally bounded sub-formulas of $\Type$ \mbox{II} are uncertain. The construction of the set of atomic propositions $\mathcal{AP}$, the labeling function $\mathcal{L}$, the clocks $X$ and the map from states to clock constraints $I_{X}$ in the relaxed TBA is the same as in the TBA.
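The enumeration of $\varPsi_{s}$ underlying Alg.~\ref{alg:Construct S,F} is a plain Cartesian product of the evaluation sets; a minimal Python sketch (the string encoding of evaluations is our own) is given below:
\begin{verbatim}
from itertools import product

def evaluation_set(temporally_bounded, type_one=None):
    # Mirrors the evaluation sets defined above: 'sat' / 'vio' / 'unc'.
    if temporally_bounded:
        return ('sat', 'vio', 'unc')
    return ('sat', 'unc') if type_one else ('vio', 'unc')

# phi_1 = always not grass (Type II), phi_2 = eventually_{t<10} pear (bounded):
sets = [evaluation_set(False, type_one=False), evaluation_set(True)]
Psi_s = list(product(*sets))  # one relaxed-TBA state per evaluation psi_s^j
print(len(Psi_s))             # 6
\end{verbatim}
Together with the sink state $\hat{s}_{sink}$, these six evaluations yield the seven states of the running example discussed below.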
We consider two different types of violation cost: a state $\hat{s}\neq \hat{s}_{sink}$ can violate the soft constraints $\phi_{s}$ either continuously (e.g., by violating time constraints) or discretely (e.g., by visiting risky regions). To measure the degree of violation, the continuous violation cost $v_{c}(\hat{s})$ and the discrete violation cost $v_{d}(\hat{s})$ of each state $\hat{s}\neq \hat{s}_{sink}$ are defined, respectively, as \begin{equation} v_{c}(\hat{s})=\left\{ \begin{array}{ll} k, & \textrm{\textrm{if \ensuremath{\exists\phi_{1}^{vio},}\ensuremath{\phi_{2}^{vio}}, \ensuremath{\ldots,\phi_{k}^{vio}\in\psi_{s}^{j}} that are temporally } }\\ & \textrm{bounded, }\\ 0, & \textrm{otherwise,} \end{array}\right. \end{equation} \begin{equation} v_{d}(\hat{s})=\left\{ \begin{array}{ll} 1, & \textrm{\ensuremath{\textrm{if}\ \exists\phi_{i}^{vio}\in\psi_{s}^{j}} that is non-temporally bounded, }\\ 0, & \textrm{otherwise.} \end{array}\right. \end{equation} At the sink state $\hat{s}_{sink}$, the continuous and discrete violation costs are defined as $v_{c}(\hat{s}_{sink})=v_{d}(\hat{s}_{sink})=\infty$. The next step is to define violation-based edges connecting states, and the following definitions and notations are introduced. \begin{defn} Given soft constraints $\phi_{s}$, the distance set between $\psi_{s}$ and $\psi_{s}^{\prime}$ is defined as $|\psi_{s}-\psi_{s}^{\prime}|=\{\phi_{i}:\phi_{i}^{state^{\prime}}\neq\phi_{i}^{state}\}$. That is, it consists of all sub-formulas $\phi_{i}$ that are under different evaluations. \end{defn} We use $(\psi_{s},g,a)\rightarrow\psi_{s}^{\prime}$ to denote that all sub-formulas $\phi_{i}\in|\psi_{s}-\psi_{s}^{\prime}|$ are (i) evaluated as uncertain in $\psi_{s}$ (i.e., $\phi_{i}^{unc}\in\psi_{s}$) and (ii) re-evaluated to be either satisfied or violated in $\psi_{s}^{\prime}$ (i.e., $\phi_{i}^{state^{\prime}}\in\psi_{s}^{\prime}$, where $state^{\prime}\in\left\{ vio,sat\right\} $) if symbol $a$, which is read at time $t$, satisfies guard $g$. \begin{figure*} \centering{}\includegraphics[scale=0.25]{tba_relax_tba_revision.jpg} \caption{\label{fig:relaxExample}(a) The TBA corresponding to $\phi=\phi_{h}\land\phi_{s}$, where $\phi_{h}=\Square\lnot\mathtt{obs}$ and $\phi_{s}=\Square\lnot \mathtt{g}\land\lozenge_{t<10}\mathtt{p}$. (b) The relaxed TBA corresponding to $\phi$.} \end{figure*} The edge construction can be summarized into four steps: (1) Construct all edges corresponding to progress regarding the specifications (i.e., the edges that a TBA would have). (2) Construct edges $\hat{E}$ of non-temporally bounded soft constraints that are no longer violated, such that $(\hat{s},g,a,\hat{s}^{\prime})\in\hat{E}$ satisfying all of the following conditions: (i) $\forall\phi_{i}\in|\psi_{s}-\psi_{s}^{\prime}|, \phi_{i}^{vio}\in\psi_{s}$ where $\hat{s}$ corresponds to $\psi_{s}$, $\hat{s}^{\prime}$ corresponds to $\psi_{s}^{\prime}$ and $\phi_{i}$ is non-temporally bounded, and (ii) $(\hat{s}^{\prime\prime},g,a,\hat{s}^{\prime})\in\hat{E}$ for some $\hat{s}^{\prime\prime}$ where $|\psi_{s}-\psi_{s}^{\prime}|=|\psi_{s}-\psi_{s}^{\prime\prime}|$ or $(\hat{s}^{\prime},g,a^{\prime},\hat{s})\in\hat{E}$ where $a^{\prime}=2^{\mathcal{AP}}\setminus a$.
(3) Construct edges $\hat{E}$ of temporally bounded soft constraints that are no longer violated, such that $(\hat{s},g,a,\hat{s}^{\prime})\in\hat{E}$ satisfying all the following conditions: (i) $\exists\phi_{i}\in|\psi_{s}-\psi_{s}^{\prime}|$, $\phi_{i}^{vio}\in\psi_{s}$, $\phi_{i}^{sat}\in\psi_{s}^{\prime}$, $\phi_{i}^{unc}\in\psi_{s}^{\prime\prime}$ where $\hat{s}$ corresponds to $\psi_{s}$, $\hat{s}^{\prime}$ corresponds to $\psi_{s}^{\prime}$, $\hat{s}^{\prime\prime}$ corresponds to $\psi_{s}^{\prime\prime}$ and $\phi_{i}$ is temporally bounded, (ii) $(\hat{s}^{\prime\prime},g^{\prime},a,\hat{s}^{\prime})\in\hat{E}$ , $(\hat{s}^{\prime\prime},g,a,\hat{s})\in\hat{E}$ and $g=g^{\prime}\setminus\Phi(X_{i})$, where $X_{i}$ is the set of clocks associated with $\phi_{i}$, s.t. $\phi_{i}^{unc}\in\psi_{s}^{\prime\prime}$ and $\phi_{i}^{vio}\in\psi_{s}$. (4) Construct self-loops such that $(\hat{s},g,a,\hat{s})\in\hat{E}$ if $\exists\ (g,a)$ s.t. $g\subseteq g^{\prime}$ , $a\subseteq a^{\prime}$ where $(\hat{s}^{\prime},g^{\prime},a^{\prime},\hat{s})\in\hat{E}$ for some $\hat{s}^{\prime}$ and $(\hat{s},g^{\prime},a^{\prime},\hat{s}^{\prime\prime})\notin\hat{E}$ for any $\hat{s}^{\prime\prime}$. In the first step, the edges of the original TBA are constructed except self-loops, i.e., transitions from and to the same state. Then, we construct edges from states where $v_{d}=1$, i.e., states corresponding to discrete violation (step 2). These edges can be considered as alternative routes to the ones in step 1, where some non-temporally bounded sub-formula/formulas are violated at some points. Similarly, we construct edges from states with $v_{c}>0$, i.e., states corresponding to continuous violations (step 3). This ensures that the accepting states can be reached when the time-bounded action finally occurs, even after the deadline is exceeded. Finally, we consider self-loops to ensure that there are no deadlocks in the automaton other than the sink state $\hat{s}_{sink}$. Compared with the TBA, the relaxed TBA allows more transitions and enables task relaxation when $\phi_{s}$ is not fully feasible. \begin{defn} An automata timed run $\boldsymbol{r}_{\mathcal{\hat{A}}}=(\hat{s}_{0},\tau_{0})\ldots(\hat{s}_{n},\tau_{n})$ of a relaxed TBA $\mathcal{\hat{A}}$, corresponding to the timed run $\boldsymbol{r}=(q_{0},\tau_{0})\ldots(q_{n},\tau_{n})$, is a sequence where $\hat{s}_{0}\in \hat{S}_{0}$, $\hat{s}_{j}\in \hat{S}$, and $(\hat{s}_{j},g_{j},a_{j},\hat{s}_{j+1})\in \hat{E}\ \forall j\geq0$ such that i) $\tau_{j}\models g_{j},j\geq0,$ and ii) $L(q_{j})\subseteq\mathcal{L}(\hat{s}_{j}),\forall j$. The continuous violation cost for the automata timed run is $\stackrel[k=0]{n-1}{\sum}v_{c}(\hat{s}_{k+1})(\tau_{k+1}-\tau_{k})$ and similarly the discrete violation cost is $\stackrel[k=0]{n-1}{\sum}v_{d}(\hat{s}_{k+1})(\tau_{k+1}-\tau_{k})$. \label{def:auto_timedrun} \end{defn} \begin{example} Consider again the running example, with an MITL specification $\phi=\phi_{h}\land\phi_{s}$ where $\phi_{h}=\Square\lnot\mathtt{obs}$ and $\phi_{s}=\Square\lnot \mathtt{g}\land\lozenge_{t<10}\mathtt{p}$; here $\mathtt{obs}$ represents obstacles, and $\mathtt{g}$ and $\mathtt{p}$ represent the grass and pear, respectively. The TBA and the corresponding relaxed TBA are shown in Fig. \ref{fig:relaxExample}.
The soft constraint $\phi_{s}$ is composed of two sub-formulas: $\phi_{1}=\Square\lnot \mathtt{g}$ and $\phi_{2}=\lozenge_{t<10}\mathtt{p}$, where $\phi_{1}$ is non-temporally bounded of $\Type$ \mbox{II} and $\phi_{2}$ is temporally bounded. Hence $\phi_{1}$ can be evaluated as violated or uncertain while $\phi_{2}$ can be evaluated as violated, uncertain or satisfied, i.e., the corresponding evaluation sets are $\varphi_{1}=\left\{ \phi_{1}^{unc},\phi_{1}^{vio}\right\} $ and $\varphi_{2}=\left\{ \phi_{2}^{unc},\phi_{2}^{vio},\phi_{2}^{sat}\right\} $, respectively. By taking the conjunction of the first elements of $\varphi_{1}$ and $\varphi_{2}$, the sub-formula evaluation $\psi_{s}^{0}=\phi_{1}^{unc}\land\phi_{2}^{unc}$ is obtained. Similarly, we can enumerate all sub-formula evaluations. Therefore, the set of all sub-formula evaluations of the formula $\phi_{s}$ is $\varPsi_{s}=\left\{ \phi_{1}^{unc}\wedge\phi_{2}^{unc},\phi_{1}^{vio}\wedge\phi_{2}^{unc},\phi_{1}^{vio}\wedge\phi_{2}^{vio},\phi_{1}^{unc}\wedge\phi_{2}^{vio},\right.$ $\left.\phi_{1}^{unc}\wedge\phi_{2}^{sat},\phi_{1}^{vio}\wedge\phi_{2}^{sat}\right\} $ with $\psi_{s}^{j}\in \varPsi_{s}$. Following Alg. \ref{alg:Construct S,F}, the relaxed TBA has 7 states, all of which satisfy the hard constraint except the sink state $\hat{s}_{6}$, which indicates that the hard constraint $\phi_{h}$ is violated. For $\phi_{s},$ the initial state $\hat{s}_{0}\sim\phi_{1}^{unc}\wedge\phi_{2}^{unc}$ corresponds to both sub-formulas being evaluated as uncertain. The accepting state $\hat{s}_{4}\sim\phi_{1}^{unc}\wedge\phi_{2}^{sat}$ corresponds to $\phi_{1}$ evaluated as uncertain and $\phi_{2}$ as satisfied. For the rest of the states, we denote $\hat{s}_{1}\sim\phi_{1}^{vio}\wedge\phi_{2}^{unc}$, $\hat{s}_{2}\sim\phi_{1}^{vio}\wedge\phi_{2}^{vio}$, $\hat{s}_{3}\sim\phi_{1}^{unc}\wedge\phi_{2}^{vio}$, $\hat{s}_{5}\sim\phi_{1}^{vio}\wedge\phi_{2}^{sat}$. There are two clock constraints in this example: $t<10$, associated with states corresponding to $\phi_{2}^{sat}$, and $t\geq10$, associated with $\phi_{2}^{vio}$. The first clock constraint is then mapped to $\hat{s}_{4}$ and $\hat{s}_{5}$, and the second to $\hat{s}_{2}$ and $\hat{s}_{3}$. The continuous and discrete violation costs are mapped such that $v_{c}(\hat{S})=[0\ 0\ 1\ 1\ 0\ 0\ \infty]$ and $v_{d}(\hat{S})=[0\ 1\ 1\ 0\ 0\ 1\ \infty]$. \end{example} Compared with the TBA, the relaxed TBA allows more transitions, enables task relaxation, and measures the resulting violation when $\phi_{s}$ is not fully feasible. Since the traditional product automaton $\mathcal{P}=\mathcal{T\times A}$ cannot handle the infeasible case, a relaxed product automaton is introduced as follows.
\begin{defn} \label{def:relaxed product automaton}Given a WTS $\mathcal{T}=\textrm{\ensuremath{\left(Q,q_{0},\delta,\mathcal{AP},L,\mathcal{\omega}\right)}}$ and a relaxed TBA $\mathcal{\hat{A}}=(\hat{S},\hat{S}_{0},\mathcal{AP},\mathcal{L},X,I_{X},v_{c},v_{d},\hat{E},\hat{F})$, the relaxed product automaton (RPA) $\hat{\mathcal{P}}=\mathcal{T\times\hat{A}}$ is defined as a tuple $\hat{\mathcal{P}}=\{\hat{P},\hat{P_{0}},\mathcal{AP},L_{\mathcal{\hat{P}}},\delta_{\mathcal{\hat{P}}},I_{X}^{\hat{\mathcal{P}}},v_{c}^{\hat{\mathcal{P}}},v_{d}^{\hat{\mathcal{P}}},\mathcal{F_{\hat{P}}},\omega_{\mathcal{\hat{P}}}\}$, where $\hat{P}\subseteq\{(q,\hat{s})\in Q\times\hat{S}:L(q)\subseteq\mathcal{L}(\hat{s})\}$ is the set of states; $\hat{P_{0}}=\{q_{0}\}\times\hat{S}_{0}$ is the set of initial states; $L_{\mathcal{\hat{P}}}=\hat{P}\rightarrow2^{\mathcal{AP}}$ is a labeling function, i.e., $L_{\mathcal{\hat{P}}}(\hat{p})=L(q)$; $\delta_{\mathcal{\hat{P}}}\subseteq\hat{P}\times\hat{P}$ is the set of transitions defined such that $((q,\hat{s}),(q^{\prime},\hat{s}^{\prime}))\in\delta_{\mathcal{\hat{P}}}$ if and only if ($q,q^{\prime})\in\delta$ and $\exists g,a,\textrm{s.t.\ }(\hat{s},g,a,\hat{s}^{\prime})\in\hat{E}$; $I_{X}^{\hat{\mathcal{P}}}(\hat{p})=I_{X}(\hat{s})$ is a map of clock constraints; $v_{c}^{\hat{\mathcal{P}}}(\hat{p})=v_{c}(\hat{s})$ is the continuous violation cost; $v_{d}^{\hat{\mathcal{P}}}(\hat{p})=v_{d}(\hat{s})$ is the discrete violation cost; $\mathcal{F}_{\mathcal{\hat{P}}}=Q\times\hat{F}$ is the set of accepting states; $\omega_{\mathcal{\hat{P}}}\colon\delta_{\mathcal{\hat{P}}}\rightarrow\mathbb{R}^{+}$ is the positive weight function, i.e., $\omega_{\mathcal{\hat{P}}}(\hat{p},\hat{p}^{\prime})=\omega(q,q^{\prime})$. \label{def:RPA} \end{defn} By accounting for continuous and discrete violations simultaneously, the violation cost with respect to $\phi_s$ is defined as \begin{equation} \omega_{v}^{\hat{\mathcal{P}}}(\hat{p}_{k},\hat{p}_{k+1},\phi_{s})=(1-\alpha)v_{c}^{\hat{\mathcal{P}}}(\hat{p}_{k+1})+\alpha v_{d}^{\hat{\mathcal{P}}}(\hat{p}_{k+1}), \end{equation} where $\alpha\in[0,1]$ measures the relative importance between continuous and discrete violations. Then, based on $\mathcal{W}(\boldsymbol{r},\phi)$ defined in (\ref{eq:total_cost}), the total weight of a path $\hat{\boldsymbol{p}}=(q_{0},\hat{s}_{0})\ldots(q_{n},\hat{s}_{n})$ of $\hat{\mathcal{P}}$ is \begin{equation} \mathcal{W}(\boldsymbol{\hat{p}})=\stackrel[k=0]{n-1}{\sum}\omega_{\mathcal{\hat{P}}}(\hat{p}_{k},\hat{p}_{k+1})\omega_{v}^{\hat{\mathcal{P}}}(\hat{p}_{k},\hat{p}_{k+1},\phi_{s}), \label{eq:WeightFcn} \end{equation} where $\mathcal{W}(\boldsymbol{\hat{p}})$ measures the total violation of $\phi_{s}$ along the corresponding run of the WTS. Hence, by minimizing $\mathcal{W}(\boldsymbol{\hat{p}})$, a run $\hat{\boldsymbol{p}}$ of $\hat{\mathcal{P}}$ fulfills $\phi_{s}$ as much as possible. \subsection{Energy Function \label{subsec:Energy}} Inspired by previous work \cite{Cai2020c}, we design a hybrid Lyapunov-like energy function consisting of different violation costs. Such a design can measure the minimum distance to the accepting sets from the current state and enforce the accepting condition by decreasing the energy as the system evolves.
Based on (\ref{eq:WeightFcn}), $d(\hat{p}_{i},\hat{p}_{j})=\textrm{min}_{\hat{\boldsymbol{p}}\in\mathcal{\mathcal{\hat{D}}}(\hat{p}_{i},\hat{p}_{j})}\mathcal{W}(\hat{\boldsymbol{p}})$ is the cost of a shortest path from $\hat{p}_{i}$ to $\hat{p}_{j}$, where $\mathcal{\hat{D}}(\hat{p}_{i},\hat{p}_{j})$ is the set of all paths from $\hat{p}_{i}$ to $\hat{p}_{j}$. For $\hat{p}\in\hat{P}$, we design the energy function as \begin{equation} J(\hat{p})=\left\{ \begin{array}{cc} \underset{\hat{p}^{\prime}\in\mathcal{F}^{\ast}}{\textrm{min}}d(\hat{p},\hat{p}^{\prime}), & \textrm{ if}\ \hat{p}\notin\mathcal{F}^{\ast},\\ 0, & \textrm{ if\ }\hat{p}\in\mathcal{F}^{\ast}, \end{array}\right. \label{eq:energy_fcn} \end{equation} where $\mathcal{F}^{\ast}$ is the largest self-reachable subset of the accepting set $\mathcal{F}_{\hat{P}}$. Since $\omega_{\mathcal{\hat{P}}}$ is positive by definition, $d(\hat{p},\hat{p}^{\prime})>0$ for all $\hat{p},\hat{p}^{\prime}\in\hat{P}$, which implies that $J(\hat{p})\geq0$. In particular, $J(\hat{p})=0$ if $\hat{p}\in\mathcal{F}^{\ast}$. If a state in $\mathcal{F}^{\ast}$ is reachable from $\hat{p}$, then $J(\hat{p})\neq\infty$; otherwise $J(\hat{p})=\infty$. Therefore, $J(\hat{p})$ indicates the minimum distance from $\hat{p}$ to $\mathcal{F}^{\ast}$. \begin{thm} For the energy function designed in (\ref{eq:energy_fcn}), if a trajectory $\boldsymbol{\hat{p}}=\hat{p}_{1}\hat{p}_{2}\ldots \hat{p}_{n}$ is accepting, there is no state $\hat{p}_{i}$, $i=1,2,\dots,n$, with $J(\hat{p}_{i})=\infty$, and all accepting states in $\boldsymbol{\hat{p}}$ are in the set $\mathcal{F}^{\ast}$ with energy 0. In addition, for any state $\hat{p} \in \hat{P}$ with $\hat{p}\notin\mathcal{F}^{\ast}$ and $J(\hat{p})\neq\infty$, there exists at least one state $\hat{p}^{\prime}$ with $(\hat{p},\hat{p}^{\prime})\in\delta_{\mathcal{\hat{P}}}$ such that $J(\hat{p}^{\prime})<J(\hat{p})$. \label{thm:energy_fcn} \end{thm} \begin{IEEEproof} Consider an accepting state $\hat{p}_{i}\in\mathcal{F}_{\mathcal{\hat{P}}}$ and suppose $\hat{p}_{i}\notin\mathcal{F}^{\ast}$. By Definition \ref{def:RPA}, $\boldsymbol{\hat{p}}$ intersects $\mathcal{F}_{\mathcal{\hat{P}}}$ infinitely many times, which indicates that there exists another accepting state $\hat{p}_{j}\in\mathcal{F}_{\mathcal{\hat{P}}}$ reachable from $\hat{p}_{i}$. If $\hat{p}_{j}\in\mathcal{F}^{\ast}$, then by the definition of $\mathcal{F}^{\ast}$, $\hat{p}_{i}$ must be in $\mathcal{F}^{\ast}$, which contradicts the assumption that $\hat{p}_{i}\notin\mathcal{F}^{\ast}$. For the case $\hat{p}_{j}\notin\mathcal{F}^{\ast}$, there must exist a non-trivial strongly connected component (SCC) composed of accepting states reachable from $\hat{p}_{j}$. All states in this SCC belong to $\mathcal{F}^{\ast}$. Since the SCC is reachable from $\hat{p}_{j}$, it follows that $\hat{p}_{j}\in\mathcal{F}^{\ast}$, which contradicts the assumption. Thus all accepting states in $\boldsymbol{\hat{p}}$ must be in $\mathcal{F}^{\ast}$ with energy zero based on (\ref{eq:energy_fcn}). Since $\mathcal{F}^{\ast}$ is reachable from any state in $\boldsymbol{\hat{p}}$, $J(\hat{p}_{i})\neq\infty$, $\forall i=1,2,\dots,n$. If $J(\hat{p})\neq\infty$ for $\hat{p}\in \hat{P}$, (\ref{eq:energy_fcn}) indicates that $\mathcal{F}^{\ast}$ is reachable from $\hat{p}$. Then there exists a shortest trajectory $\boldsymbol{\hat{p}}=\hat{p}_{1}\hat{p}_{2}\ldots \hat{p}_{n}$ where $\hat{p}_{1}=\hat{p}$ and $\hat{p}_{n}\in \mathcal{F}^{\ast}$.
Bellman's principle of optimality then implies that there exists a state $\hat{p}^{\prime}$ with $(\hat{p},\hat{p}^{\prime})\in\delta_{\mathcal{\hat{P}}}$ such that $J(\hat{p}^{\prime})<J(\hat{p})$. \end{IEEEproof} Theorem \ref{thm:energy_fcn} indicates that the generated path will eventually satisfy the acceptance condition of $\mathcal{\hat{P}}$ as long as the energy function keeps decreasing. \subsection{Automaton Update \label{subsec: Update}} The system model needs to be updated according to the information sensed during runtime to facilitate motion planning. The update procedure is outlined in Alg. \ref{alg:Automaton-Update}. Let $\textrm{Info}(\hat{p})=\{L_{\hat{\mathcal{P}}}(\hat{p}^{\prime})\mid\hat{p}^{\prime}\in\textrm{Sense}(\hat{p})\}$ denote the newly observed labels of states $\hat{p}^{\prime}$ that differ from the current knowledge, where $\textrm{Sense}(\hat{p})$ represents the neighbor states that the agent at the current state $\hat{p}$ can detect and observe. Denote the sensing range by $N_s$. If the sensed labels $L_{\hat{\mathcal{P}}}(\hat{p}^{\prime})$ are consistent with the current knowledge of $\hat{p}^{\prime}$, then $\textrm{Info}(\hat{p})=\emptyset$; otherwise, the properties of $\hat{p}^{\prime}$ have to be updated. Let $\boldsymbol{J}\in\mathbb{R}^{\left|\hat{P}\right|}$ denote the stacked $J$ for all $\hat{p}\in \hat{P}$. The vector $\boldsymbol{J}$ is initialized from the initial knowledge of the environment. At each step, if $\textrm{Info}(\hat{p})\neq\emptyset$, the weights $\omega_{\mathcal{\hat{P}}}(\hat{p}^{\prime},\hat{p}^{\prime\prime})$ and $\omega_{\mathcal{\hat{P}}}(\hat{p}^{\prime\prime},\hat{p}^{\prime})$ for states that satisfy $\hat{p}^{\prime}\in\textrm{Sense}(\hat{p})$ and $(\hat{p}^{\prime},\hat{p}^{\prime\prime})\in\delta_{\mathcal{\hat{P}}}$ are updated. Then the energy function $\boldsymbol{J}$ is updated. \begin{lem} \label{lem:self-set} The largest self-reachable set $\mathcal{F}^{\ast}$ remains the same during the automaton update in Alg. \ref{alg:Automaton-Update}. \end{lem} \begin{IEEEproof} Let $\mathcal{G}(\hat{p},\delta_{\hat{\mathcal{P}}})$ denote the graph induced from $\mathcal{\hat{P}}$ by neglecting the weight of each transition. Similar to \cite{Cai2020c}, Alg. \ref{alg:Automaton-Update} only updates the cost of each transition, so that the topological structure of $\mathcal{G}(\hat{p},\delta_{\hat{\mathcal{P}}})$ and its corresponding $\mathcal{F}^{\ast}$ remain the same. \end{IEEEproof} Lemma \ref{lem:self-set} indicates that $\mathcal{F}^{\ast}$ does not need to be updated whenever newly sensed information caused by unknown obstacles is obtained, which reduces the computational complexity. As a result, $\mathcal{F}^{\ast}$ is computed off-line, and its construction involves the computation of $d(\hat{p},\hat{p}^{\prime})$ for all $\hat{p}^{\prime}\in\mathcal{F_{\hat{P}}}$ and the check of terminal conditions \cite{Ding2014}.
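Since all composite weights are nonnegative, $\boldsymbol{J}$ itself can be computed, and recomputed after weight updates, by a single multi-source Dijkstra pass over the reversed transition relation. The following Python sketch illustrates this (illustrative names, not the original implementation; \texttt{weight} is assumed to store the composite weight $\omega_{\mathcal{\hat{P}}}\cdot\omega_{v}^{\hat{\mathcal{P}}}$ of each transition):
\begin{verbatim}
import heapq
from itertools import count

def energy_function(states, transitions, weight, F_star):
    """J(p): minimum weighted distance from p to the largest
    self-reachable accepting set F*; states that cannot reach
    F* keep J = inf."""
    rev = {p: [] for p in states}       # reversed adjacency lists
    for (p, p_next) in transitions:
        rev[p_next].append(p)
    J = {p: float("inf") for p in states}
    tie = count()                       # tie-breaker for the heap
    heap = []
    for p in F_star:                    # J = 0 on F* by definition
        J[p] = 0.0
        heap.append((0.0, next(tie), p))
    heapq.heapify(heap)
    while heap:
        d, _, p = heapq.heappop(heap)
        if d > J[p]:
            continue                    # stale heap entry
        for q in rev[p]:
            d_new = d + weight[(q, p)]
            if d_new < J[q]:
                J[q] = d_new
                heapq.heappush(heap, (d_new, next(tie), q))
    return J
\end{verbatim}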
\begin{algorithm} \caption{\label{alg:Automaton-Update}Automaton Update} \small \singlespacing \begin{algorithmic}[1] \Procedure {Input: } {the current state $\hat{p}=(q,\hat{s}),$ the current $\boldsymbol{J},\mathcal{F}^{\ast}$ and $\textrm{Info}(\hat{p})$} {Output: } { the updated $\boldsymbol{J}$} \If { $\textrm{Info}(\hat{p})\neq\emptyset$} \For{ all $\hat{p}^{\prime}=(q^{\prime},\hat{s}^{\prime})\in\textrm{Sense}(\hat{p})$ such that $L_{\hat{\mathcal{P}}}(\hat{p}^{\prime})\in\textrm{Info}(\hat{p})$} \For{ all $\hat{p}^{\prime\prime}$ such that $(\hat{p}^{\prime},\hat{p}^{\prime\prime})\in\delta_{\mathcal{\hat{P}}}$} \State update the labels of $L_{\hat{\mathcal{P}}}(\hat{p}^{\prime})$ according to $L(q^{\prime})$; \State update the weights $\omega_{\mathcal{\hat{P}}}(\hat{p}^{\prime},\hat{p}^{\prime\prime})$ and $\omega_{\mathcal{\hat{P}}}(\hat{p}^{\prime\prime},\hat{p}^{\prime})$; \EndFor \EndFor \State update $\boldsymbol{J}$; \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \section{Control Synthesis of MITL Motion Planning} \begin{figure*}[t] \centering{}\includegraphics[scale=0.5]{figure3.PNG}\caption{\label{fig:Snapshots}Snapshots of the motion planning. The red dotted arrow line represents the predicted trajectory at the current time. Cyan dots represent the rewards, with size proportional to their value. (a) Pac-Man plans to reach the pear within the specified time interval. (b) Since the soft task is infeasible, Pac-Man chooses to violate the temporally bounded operators. (c) and (d) show that, when the desired task is accomplished, Pac-Man revises its motion plan to go to the cherry at the bottom right corner, since the top right one is not accessible.} \end{figure*} The control synthesis of the MITL motion planning strategy is based on receding horizon control (RHC). The idea of RHC is to solve an online optimization problem maximizing the utility function over a finite horizon $N$ and to produce a predicted optimal path at each time step. With only the first predicted step applied, the optimization problem is solved repeatedly to predict optimal paths. Specifically, based on the current state $\hat{p}_{k}$, let $\hat{\boldsymbol{p}}_{k}=\hat{p}_{1\mid k}\hat{p}_{2\mid k}\ldots\hat{p}_{N\mid k}$ denote a predicted path of horizon $N$ at time $k$ starting from $\hat{p}_{k}$, where $\hat{p}_{i\mid k}\in\hat{P}$ satisfies $(\hat{p}_{i\mid k},\hat{p}_{i+1\mid k})\in\delta_{\hat{\mathcal{P}}}$ for all $i=1,...,N-1$, and $(\hat{p}_{k},\hat{p}_{1\mid k})\in\delta_{\mathcal{\hat{P}}}$. Let $\textrm{Path}(\hat{p}_{k},N)$ be the set of paths of horizon $N$ generated from $\hat{p}_{k}$. Note that a predicted path $\hat{\boldsymbol{p}}_{k}\in\textrm{Path}(\hat{p}_{k},N)$ projects uniquely to a trajectory $\gamma_{\mathcal{T}}(\hat{\boldsymbol{p}}_{k})=\boldsymbol{q}=q_{1}\cdots q_{N}$ on $\mathcal{T}$, where $\gamma_{\mathcal{T}}(\hat{p}_{i\mid k})=q_{i}$, $\forall i=1,\ldots,N$. The choice of the finite horizon $N$ depends on the local sensing range $N_s$ of the agent. The total reward along the predicted path $\hat{\boldsymbol{p}}_{k}$ is $\boldsymbol{R}(\gamma_{\mathcal{T}}(\hat{\boldsymbol{p}}_{k}))=\sum_{i=1}^{N}R_{k}(\gamma_{\mathcal{T}}(\hat{p}_{i\mid k}))$. Based on (\ref{eq:WeightFcn}), for every predicted path $\hat{\boldsymbol{p}}_{k}$, the total violation cost is $\mathcal{W}(\boldsymbol{\hat{p}_{k}})$.
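The set $\textrm{Path}(\hat{p}_{k},N)$ and the reward bookkeeping can be realized as follows (a Python sketch with illustrative names; \texttt{succ} maps each state to its $\delta_{\hat{\mathcal{P}}}$-successors and \texttt{gamma} implements the projection $\gamma_{\mathcal{T}}$):
\begin{verbatim}
def predicted_paths(p_start, succ, N):
    """Enumerate Path(p_start, N): all horizon-N paths whose
    consecutive states (including the first step out of p_start)
    are transitions of the relaxed product automaton."""
    stack = [(p_start, [])]
    while stack:
        p, prefix = stack.pop()
        if len(prefix) == N:
            yield prefix
            continue
        for p_next in succ[p]:
            stack.append((p_next, prefix + [p_next]))

def total_reward(path, reward, gamma):
    """Sum the time-varying rewards of the projected WTS regions."""
    return sum(reward[gamma(p)] for p in path)
\end{verbatim}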
Then the utility function of RHC is designed as \begin{equation} \mathbf{U}(\hat{\boldsymbol{p}}_{k})=\boldsymbol{R}(\gamma_{\mathcal{T}}(\hat{\boldsymbol{p}}_{k}))-\beta\mathcal{W}(\boldsymbol{\hat{p}_{k}}), \end{equation} where $\beta$ is the relative penalty weight. By applying a large $\beta$, maximizing the utility $\mathbf{U}(\hat{\boldsymbol{p}}_{k})$ tends to bias the selection of paths towards the objectives, in decreasing order of priority, of 1) satisfying the hard constraints $\phi_{h}$, 2) fulfilling the soft constraints $\phi_{s}$ as much as possible, and 3) collecting as many time-varying rewards as possible. Note that continuous and discrete violations are optimized simultaneously based on the preference weight $\alpha$ in $\mathcal{W}(\boldsymbol{\hat{p}_{k}})$. To satisfy the acceptance condition of $\mathcal{\hat{P}}$, we simultaneously consider energy-function-based constraints. The initial predicted path from $\hat{P_{0}}$ can be identified by solving \begin{equation} \begin{aligned}\hat{\boldsymbol{p}}_{0,opt}= & \underset{\hat{\boldsymbol{p}}_{0}\in\textrm{Path}(\hat{P}_{0},N)}{\textrm{argmax}}\mathbf{U}(\hat{\boldsymbol{p}}_{0}),\\ & \textrm{ subject to: }J(\hat{p}_{0})<\infty. \end{aligned} \label{eq:P0} \end{equation} The constraint $J(\hat{p}_{0})<\infty$ is critical because otherwise the path starting from $\hat{p}_{0}$ cannot be accepting. After determining the initial state $\hat{p}_{0}^{\ast}=\hat{p}_{1|0,opt}$, where $\hat{p}_{1|0,opt}$ is the first element of $\hat{\boldsymbol{p}}_{0,opt}$, RHC will be employed repeatedly to determine the optimal states $\hat{p}_{k}^{\ast}$ for $k=1,2,\ldots$. At each time instant $k$, a predicted optimal path $\hat{\boldsymbol{p}}_{k,opt}=\hat{p}_{1\mid k,opt}\hat{p}_{2\mid k,opt}\cdots\hat{p}_{N\mid k,opt}$ is constructed based on $\hat{p}_{k-1}^{\ast}$ and $\hat{\boldsymbol{p}}_{k-1,opt}$ obtained at time $k-1$. Note that only $\hat{p}_{1\mid k,opt}$ will be applied at time $k$, i.e., $\hat{p}_{k}^{\ast}=\hat{p}_{1\mid k,opt}$, which will then be used with $\hat{\boldsymbol{p}}_{k,opt}$ to generate $\hat{\boldsymbol{p}}_{k+1,opt}$. \begin{thm} For each time $k=1,2,\ldots$, given $\hat{p}_{k-1}^{\ast}$ and $\hat{\boldsymbol{p}}_{k-1,opt}$ from the previous time step, consider the RHC problem \begin{equation} \hat{\boldsymbol{p}}_{k,opt}=\underset{\hat{\boldsymbol{p}}_{k}\in\textrm{Path}(\hat{p}_{k-1}^{\ast},N)}{\textrm{argmax}}\ \ \mathbf{U}(\hat{\boldsymbol{p}}_{k}), \label{eq:P_kopt} \end{equation} subject to the following constraints: \begin{enumerate} \item $J(\hat{p}_{N\mid k})<J(\hat{p}_{N\mid k-1,opt})$ if $J(\hat{p}_{k-1}^{\ast})>0$ and $J(\hat{p}_{i\mid k-1,opt})\neq0$ for all $i=1,\ldots,N$; \item $J(\hat{p}_{i_{0}(\hat{\boldsymbol{p}}_{k-1,opt})-1\mid k})=0$ if $J(\hat{p}_{k-1}^{\ast})>0$ and $J(\hat{p}_{i\mid k-1,opt})=0$ for some $i=1,\ldots,N$, where $i_{0}(\hat{\boldsymbol{p}}_{k-1,opt})$ is the index of the first state in $\hat{\boldsymbol{p}}_{k-1,opt}$ that satisfies $J(\hat{p}_{i_{0}\mid k-1,opt})=0$; \item $J(\hat{p}_{N\mid k})<\infty$ if $J(\hat{p}_{k-1}^{\ast})=0$ . \end{enumerate} Applying $\hat{p}_{k}^{\ast}=\hat{p}_{1|k,opt}$ at each time $k$, the optimal path $\hat{\boldsymbol{p}}^{\ast}=\hat{p}_{0}^{\ast}\hat{p}_{1}^{\ast}\ldots$ is guaranteed to satisfy the acceptance condition. \label{thm:RHC_thm} \end{thm} \begin{IEEEproof} Consider a state $\hat{p}_{k-1}^{\ast}\in \hat{P}$, $k=1,2,\dots$, and let $\textrm{Path}(\hat{p}_{k-1}^{\ast},N)$ denote the set of all possible paths starting from $\hat{p}_{k-1}^{\ast}$ with horizon $N$.
Since not all predicted trajectories maximizing the utility function $\boldsymbol{U}(\boldsymbol{\hat{p}}_{k}),\boldsymbol{\hat{p}}_{k}\in\textrm{Path}(\hat{p}_{k-1}^{\ast},N)$ in (\ref{eq:P_kopt}) are guaranteed to satisfy the acceptance condition of $\mathcal{\hat{P}}$, additional constraints need to be imposed. The key idea in the design of the constraints for (\ref{eq:P_kopt}) is to ensure that the energy of the states along the trajectory eventually decreases to zero. Therefore, we consider the following three cases. \begin{enumerate} \item Case 1: if $J(\hat{p}_{k-1}^{\ast})>0$ and $J(\hat{p}_{i\mid k-1,opt})\neq0$ for all $i=1,\dots,N$, the constraint $J(\hat{p}_{N\mid k})<J(\hat{p}_{N\mid k-1,opt})$ is enforced. The energy $J(\hat{p}_{k-1}^{\ast})>0$ indicates there exists a trajectory from $\hat{p}_{k-1}^{\ast}$ to $\mathcal{F}^{\ast}$, and $J(\hat{p}_{i\mid k-1,opt})\neq0$ for all $i=1,\dots,N$ indicates $\boldsymbol{\hat{p}}_{k-1,opt}$ does not intersect $\mathcal{F}^{\ast}$. The constraint $J(\hat{p}_{N\mid k})<J(\hat{p}_{N\mid k-1,opt})$ enforces that the optimal predicted trajectory $\boldsymbol{\hat{p}}_{k,opt}$ must end at a state $\hat{p}_{N\mid k}$ with lower energy than the final state of the previous predicted trajectory $\boldsymbol{\hat{p}}_{k-1,opt}$, which indicates that the energy along $\boldsymbol{\hat{p}}_{k,opt}$ decreases at each iteration $k$. \item Case 2: if $J(\hat{p}_{i\mid k-1,opt})=0$ for some $i=1,\dots,N$, $\boldsymbol{\hat{p}}_{k-1,opt}$ intersects $\mathcal{F}^{\ast}$. Let $i_{0}(\hat{\boldsymbol{p}}_{k-1,opt})$ be the index of the first occurrence in $\boldsymbol{\hat{p}}_{k-1,opt}$ where $J(\hat{p}_{i_{0}\mid k-1,opt})=0$. The constraint $J(\hat{p}_{i_{0}(\hat{\boldsymbol{p}}_{k-1,opt})-1\mid k})=0$ enforces the predicted trajectory at the current time $k$ to have energy 0 if the previous predicted trajectory contains such a state. \item Case 3: if $J(\hat{p}_{k-1}^{\ast})=0$, it indicates $\hat{p}_{k-1}^{\ast}\in\mathcal{F}^{\ast}$. The constraint $J(\hat{p}_{N\mid k})<\infty$ only requires the predicted trajectory $\boldsymbol{\hat{p}}_{k}$ to end at a state with bounded energy; Cases 1 and 2 can then be applied to enforce that the subsequent sequence $\hat{p}_{k+1}^{\ast}\hat{p}_{k+2}^{\ast}\dots$ converges to $\mathcal{F}^{\ast}$. \end{enumerate} \end{IEEEproof} Since the environment is dynamic and unknown, the agent will update its knowledge of the environment according to the detected information at each time step. In addition, by selecting the predictive horizon $N$ to be less than or equal to the sensor range $N_s$, we can ensure the existence of solutions, since the local environment can be regarded as static. As a result, lemmas in \cite{Ding2014} can be applied directly and the proof of existence is omitted here. Similar to \cite{Cai2020c}, the energy-function-based constraints in Theorem \ref{thm:RHC_thm} ensure that an optimal trajectory $\hat{\boldsymbol{p}}^{\ast}=\hat{p}_{0}^{\ast}\hat{p}_{1}^{\ast}\ldots$ satisfying the acceptance condition is obtained. Since the hard constraint is not relaxed, we can restrict the agent to avoid collisions at each time step based on the sensor information. We assume that the local information of the WTS can be accurately updated such that the hard constraint is guaranteed. The system will return no solution in cases where no feasible trajectories satisfy the hard constraint, e.g., obstacles surrounding the agent. Note that the optimality mentioned in this paper refers to a local optimum, since RHC controllers only optimize the objective within finitely many predictive steps.
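For concreteness, one receding-horizon step of Theorem \ref{thm:RHC_thm} can be sketched in Python as follows (illustrative names, reusing \texttt{predicted\_paths} from the earlier sketch; the handling of the boundary case $i_{0}=1$ in Case 2 is our own guess, and the existence of a feasible path is assumed as discussed above):
\begin{verbatim}
def rhc_step(p_prev, p_opt_prev, succ, J, utility, N):
    """Maximize the utility over Path(p_prev, N) subject to the
    energy constraints of Cases 1-3, then apply only the first
    predicted state."""
    INF = float("inf")
    # 0-based index of the first zero-energy state in the previous
    # predicted path, if any
    zero_idx = next((i for i, p in enumerate(p_opt_prev)
                     if J[p] == 0), None)
    feasible = []
    for path in predicted_paths(p_prev, succ, N):
        if J[p_prev] == 0:                        # Case 3
            ok = J[path[-1]] < INF
        elif zero_idx is None:                    # Case 1
            ok = J[path[-1]] < J[p_opt_prev[-1]]
        else:                                     # Case 2
            ok = zero_idx == 0 or J[path[zero_idx - 1]] == 0
        if ok:
            feasible.append(path)
    best = max(feasible, key=utility)             # argmax of U
    return best[0], best        # p*_k and the new predicted path
\end{verbatim}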
The control synthesis of the MITL online motion planning strategy is presented in the form of Algorithm \ref{alg:Controlsynthesis}. Lines 2-3 are responsible for the offline initialization to obtain an initial $\boldsymbol{J}$. The rest of Algorithm \ref{alg:Controlsynthesis} (lines 4-16) is the online receding horizon control part executed at each time step. In Lines 4-6, the receding horizon control is applied to determine $\hat{p}_{0}^{\ast}$ at time $k=0$. Since the environment is dynamic and unknown, Algorithm \ref{alg:Controlsynthesis} is applied at each time $k>0$ to update $\boldsymbol{J}$ based on local sensing in Lines 7-9. The RHC is then employed based on the previously determined $\hat{p}_{k-1}^{\ast}$ to generate $\hat{\boldsymbol{p}}_{k,opt}$, where the next state is determined as $\hat{p}_{k}^{\ast}=\hat{p}_{1|k,opt}$ in Lines 10-12. The transition from $\hat{p}_{k-1}^{\ast}$ to $\hat{p}_{k}^{\ast}$ applied on $\mathcal{\hat{P}}$ corresponds to the movement of the agent at time $k$ from $\gamma_{\mathcal{T}}(\hat{p}_{k-1}^{\ast})$ to $\gamma_{\mathcal{T}}(\hat{p}_{k}^{\ast})$ on $\mathcal{T}$ in Line 11. By repeating the process in lines 7-13, an optimal path $\hat{\boldsymbol{p}}^{\ast}=\hat{p}_{0}^{\ast}\hat{p}_{1}^{\ast}\ldots$ can be obtained that satisfies the acceptance condition of $\mathcal{\hat{P}}$. \begin{algorithm} \caption{\label{alg:Controlsynthesis}Control synthesis of MITL online motion planning} \small \singlespacing \begin{algorithmic}[1] \Procedure {Input: } {The WTS $\mathcal{T}=\left(Q,q_{0},\delta,\mathcal{AP},L,\omega\right)$ and the relaxed TBA $\mathcal{\hat{A}}=(\hat{S},\hat{S}_{0},\mathcal{AP},\mathcal{L},X,I_{X},v_{c},v_{d},\hat{E},\hat{F})$ corresponding to the MITL formula $\phi=\phi_{h}\land\phi_{s}$} {Output: } { the path $\hat{\boldsymbol{p}}^{\ast}=\hat{p}_{0}^{\ast}\hat{p}_{1}^{\ast}\ldots$ } {Off-line Execution:} \State Construct the relaxed product automaton $\hat{\mathcal{P}}=\mathcal{T\times\hat{A}}$ \State Construct $\mathcal{F}^{\ast}$ and initialize $\mathit{\boldsymbol{J}}$ {On-line Execution:} \If { $\exists\hat{p}_{0}\in\hat{P}_{0}$ such that $J(\hat{p}_{0})<\infty$} \State Solve for $\hat{\boldsymbol{p}}_{0,opt}$ \State $\hat{p}_{0}^{\ast}=\hat{p}_{1|0,opt}$ and $k\leftarrow1$ \While { $k>0$ } \State Apply the automaton update at $\hat{p}_{k-1}^{\ast}$ in Algorithm \ref{alg:Automaton-Update} based on local sensing \State Locally observe rewards $\boldsymbol{R}(\gamma_{\mathcal{T}}(\hat{p}_{k-1}^{\ast}))$ \State Solve for $\hat{\boldsymbol{p}}_{k,opt}$ \State Implement corresponding transitions on $\mathcal{\hat{P}}$ and $\mathcal{T}$ \State $\hat{p}_{k}^{\ast}=\hat{p}_{1|k,opt}$ and $k\leftarrow k+1$ \EndWhile \Else{ There does not exist an accepting run from the initial states} \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \textbf{Complexity Analysis}: Since the off-line execution involves the computation of $\hat{\mathcal{P}}$, $\mathcal{F}^{\ast}$, and the initial $\boldsymbol{J}$, its complexity is $O(\left|\mathcal{F}_{\hat{P}}\right|^{3}+\left|\mathcal{F}_{\hat{P}}\right|^{2}+\left|\hat{P}\right|^{2}\times\left|\mathcal{F}_{\hat{P}}\right|)$. For the online execution, since $\mathcal{F}^{\ast}$ remains the same by Lemma \ref{lem:self-set}, the automaton update (Alg. \ref{alg:Automaton-Update}) requires $\left|\hat{P} \right|$ runs of Dijkstra\textquoteright s algorithm.
Suppose the cardinality of $\textrm{Sense}(\hat{p})$ is bounded by $\left|N_{1}\right|$; then the complexity of Alg. \ref{alg:Automaton-Update} is at most $O(\left|N_{1}\right|\times\left|\hat{P}\right|+\left|\hat{P}\right|)$. Suppose the total number of transitions between states is $\left|\Delta_{\delta}\right|$. In Alg. \ref{alg:Controlsynthesis}, the complexity of the recursive computation at each time step is highly dependent on the horizon $N$ and is bounded by $\left|\Delta_{\delta}\right|^{N}$. Overall, the maximum complexity of the online portion of RHC is $O(\left|N_{1}\right|\times\left|\hat{P}\right|+\left|\hat{P}\right|+\left|\Delta_{\delta}\right|^{N})$. \section{Case Studies\label{sec:Case}} The simulation was implemented in MATLAB on a PC with a 3.1 GHz quad-core CPU and 16 GB RAM. We demonstrate our framework using the Pac-Man setup shown in Section \ref{sec:PF}. Consider an MITL specification $\phi=\phi_{h}\land\phi_{s}$, where $\phi_{h}=\Square\lnot\mathtt{obstacle}$ and $\phi_{s}=\Square(\lnot\mathtt{grass})\land\Square\lozenge_{t<10}\mathtt{cherry}\land\Square(\mathtt{cherry}\rightarrow\lozenge_{t<20}\mathtt{pear)}.$ In English, $\phi_{h}$ means that the agent must always avoid obstacles, and $\phi_{s}$ indicates that the agent needs to repeatedly and sequentially eat cherries and pears within the specified time intervals while avoiding the grass. The tool of \cite{Brihaye2017} allows converting MITL formulas into TBAs. Fig. \ref{fig:Snapshots} shows snapshots during the mission operation. The simulation video is provided online\footnote{\url{https://youtu.be/S_jfavmFIMo}}. \textbf{Simulation Results:} As for the priorities of violations, we specify that avoiding grass is more critical than eating fruits within the specified time, i.e., we prefer to avoid discrete violations rather than continuous ones when $\phi_{s}$ is infeasible. Therefore, we set the parameters $\alpha=0.8$ and $\beta=10$. The Pac-Man starts at the bottom left corner and can move up, down, left, and right. In the maze, the time-varying reward $R_{k}(q)$ is randomly generated at region $q$ from a uniform distribution at time $k$. Since the WTS $\mathcal{T}$ has $\left|Q\right|=100$ states and the relaxed TBA $\mathcal{\hat{A}}$ has $\left|\hat{S}\right|=15$ states, the relaxed product automaton $\mathcal{\hat{P}}$ has $\left|\hat{P}\right|=1500$ states. The computation of $\mathcal{\hat{P}}$, the largest self-reachable set $\mathcal{F}^{\ast}$, and the energy function took $0.62$ s. The control algorithm outlined in Algorithm \ref{alg:Controlsynthesis} is implemented for $50$ time steps with horizon $N=4$. Fig. \ref{fig:Snapshots} (a) shows that Pac-Man plans to reach the cherry within the specified time interval. Fig. \ref{fig:Snapshots} (b) shows that $\phi_{s}$ is relaxed, and Pac-Man has two choices: go straight to the left, pass the grass, and eat the pear within the specified time, or go up first and then to the left to avoid the grass and eat the pear beyond the specified time. The former choice implies a discrete violation, while the latter implies a continuous violation. Since the avoidance of discrete violations has higher priority in our algorithm, the agent chooses the second plan, shown as the predicted optimal path. Note that, due to the consideration of dynamic obstacles, the deployment of black blocks can vary with time. Fig.
\ref{fig:Snapshots} (c) and (d) show that, on the second completion of the MITL task, Pac-Man detects that the cherry at the top right corner is blocked by obstacles and chooses to eat the bottom one. Fig. \ref{fig:energy} (a) shows the evolution of the energy function during the mission operation. Each time $J(\hat{p})=0$ in Fig. \ref{fig:energy} (a), an accepting state has been reached, i.e., the desired task has been accomplished once. The jumps of the energy between $t=30$ s and $t=35$ s in Fig. \ref{fig:energy} (a) are due to the violation of the desired task whenever the soft task is relaxed. Nevertheless, the developed control strategy still guarantees the decrease of the energy function to satisfy the acceptance condition of $\mathcal{\hat{P}}$. Fig. \ref{fig:energy} (b) shows the collected local time-varying rewards. \textbf{Computation Analysis:} To demonstrate our algorithm's scalability and computational complexity, we repeat the control synthesis introduced above for workspaces of different sizes. The sizes of the resulting WTS $\mathcal{T}$ and relaxed product automaton $\mathcal{\hat{P}}$, and the mean time taken to solve the predicted trajectories at each time step, are shown in Table I. We also analyze the effect of the horizon $N$ on the computation. From Table I, we can see that, for cases with the same horizon $N$, the computation time increases gradually with the workspace size. This is because trajectory updating involves recomputing the energy function based on the updated environment knowledge. In this paper, the proposed RHC-based algorithm only needs to consider the local optimization problem, and the energy constraints ensure global task satisfaction. Therefore, the mean computation time at each time step does not increase significantly. It should be noted that, in general RHC optimizations, the computations are influenced by the pre-defined horizon $N$. \begin{figure} \centering{}\includegraphics[scale=0.4]{figure4_5.png}\caption{\label{fig:energy}The evolution of the energy function (a) and the cumulative collected time-varying rewards (b) during the mission operation.} \end{figure} \begin{table} \caption{Comparison of workspace size, horizon, and computation time.} \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline $\begin{array}{c} \textrm{Workspace}\\ \textrm{size} \end{array}$ & $\begin{array}{c} \textrm{Horizon}\\ N \end{array}$ & $\begin{array}{c} \mathcal{T}\\ \left|Q\right| \end{array}$ & $\begin{array}{c} \hat{\mathcal{P}}\\ \left|\hat{P}\right| \end{array}$ & $\begin{array}{c} \textrm{Mean}\\ \textrm{time (s)} \end{array}$\tabularnewline \hline $10\times10$ & 4 & 100 & 1500 & 0.98\tabularnewline \hline $10\times10$ & 6 & 100 & 1500 & 1.01\tabularnewline \hline $10\times10$ & 8 & 100 & 1500 & 1.05\tabularnewline \hline $30\times30$ & 4 & 900 & 13500 & 1.36\tabularnewline \hline $30\times30$ & 6 & 900 & 13500 & 1.39\tabularnewline \hline $30\times30$ & 8 & 900 & 13500 & 1.54\tabularnewline \hline $50\times50$ & 4 & 2500 & 37500 & 2.91\tabularnewline \hline $50\times50$ & 6 & 2500 & 37500 & 3.02\tabularnewline \hline $50\times50$ & 8 & 2500 & 37500 & 3.60\tabularnewline \hline \end{tabular} \end{table} \section{Conclusion} In this paper, we propose a control synthesis approach for hard and soft constraints given as MITL specifications. A relaxed timed product automaton is constructed to allow task relaxation, accounting for both task and time violations.
An online motion planning strategy is synthesized with a receding horizon controller to deal with the dynamic and unknown environment and achieve multi-objective tasks. Simulation results validate the proposed approach. Future research will consider building the deterministic system online based on real-time sensing information and developing online robust planning methods for stochastic systems. \bibliographystyle{IEEEtran}
\section{Introduction} A linear polymer in a good solvent at equilibrium has conformations that correspond to a self-avoiding random walk with a step size equal to the Kuhn length, \(b\), defined as the distance along the chain over which orientational correlations are lost. The Kuhn length is directly related to the bending stiffness of the polymer, \(\kappa\), and depends inversely on the thermal energy: \(b = \frac{2 \kappa}{k_B T}.\) In terms of \(b\), the average length-scales of the polymer, such as the average end-to-end distance, \(\langle R_{ee}\rangle \), and the average radius of gyration, \(\langle R_g\rangle \), scale as \(b \cdot N_b^\nu\), where \(N_b\) is the length of the polymer, \(L\), divided by the Kuhn length, \(N_b=L/b\), and \(\nu\) is a scaling exponent called the Flory exponent. For a self-avoiding random walk in 2D, \(\nu=3/4\) \cite{Polymer2003,deGennes1979}. Far less is known about the behavior of linear polymers in out-of-equilibrium baths, despite the potential significance of such systems in biology, where biopolymers, such as proteins and filaments, are forced out of equilibrium by the presence of motor proteins \cite{Kruse2004,JULICHER2007,Prost2015,Weber2015,Oyama2019}. To date, most of the work on out-of-equilibrium polymers has focused on two model cases. The first case is active polymers, which consist of monomers that experience active forces that drive the polymer out of equilibrium; these have been studied both in simulations and experiments \cite{Winkler2020,Locatelli2021}. The second case pertains to passive linear polymers in an out-of-equilibrium bath, composed, for example, of active particles; these have been studied solely by computer simulations \cite{Kaiser2014, Shin2015, Harder2014,Xia2019,Cao2020}. For \(\kappa=0\), the polymer was found to slightly swell with increasing bath activity \cite{Kaiser2014}, while for \(\kappa \neq 0\), the polymer was found to shrink \cite{Shin2015}. In this paper, we directly test some of the simulation expectations using connected stainless steel ball-chains as a model passive polymer and baths of radially symmetric or asymmetric disks on a vibrated plate. While the system composed of symmetric particles is out of equilibrium, it has been shown previously \cite{Junot2017,Briand2016,Lanoiselee2018} that the assembly of such particles behaves in a manner close to an equilibrium hard-disk liquid. Therefore, the symmetric particles can be taken as a model passive bath, while the asymmetric particles, which behave as polar self-propelled particles, constitute the active bath. We find that the growth of the radius of gyration of the polymer with the number of monomers in an active bath is consistent with what is expected for the polymer in a passive bath. However, activity results in smaller values of \(R_g\), indicating that the polymer shrinks in active baths. Additionally, the actual polymer chain conformations can be significantly different; in active baths, the polymer adopts significantly more ``hairpin''-like structures than in passive baths. \section{Methods} We use the experimental system described in detail in Reference \cite{Deseigne2012} and subject a collection of macroscopic circular grains and a ball-chain to a well-controlled vertical vibration while they are confined to a \(2.4\) mm gap between two horizontal glass plates. An electromagnetic servo-controlled shaker (V455/6-PA1000L, LDS) coupled to a triaxial accelerometer (356B18, PCB Electronics) allows us to produce a sinusoidal vibration of the bottom plate.
The resulting contacts between the grains and the glass plates cause the grains to experience horizontal displacement over time. We work at a frequency \(f= 120\) Hz and set the acceleration relative to gravity to \(\Gamma = a \left( 2\pi f\right)^2/g = 2.0\), which corresponds to a peak vertical displacement of \(a=34~\mu\)m. An accelerometer is used to ensure that no resonances of the experimental setup are present at this working frequency, that the horizontal-to-vertical ratio is lower than \(10^{-2}\), and that the spatial homogeneity of the vibrations across the bottom plate is within \(1\%\). We carry out experiments with embedded chains consisting of various numbers, \(N\), of hollow metal beads with a diameter \(a_0 =(2.30 \pm 0.05)\) mm and an average center-to-center distance \(\sigma = (3.10 \pm 0.05)\) mm in quasi-2D baths of active or passive disks. The active bath is composed of polar, self-propelled grains, which are micro-machined monodisperse copper-beryllium disks with a diameter \( d_{0} = 4\) mm, and an off-center tip and a glued rubber skate located in diametrically opposite positions; these raise the total height of the disks to \(h=2\) mm. The two ``legs'' have different mechanical responses, endowing the particles with a polar axis. At the working frequency, the disks perform a persistent random walk, with a speed \(v_{0} = (4.59 \pm 0.01)~d_{0}/\text{s}\) and a rotational diffusion constant \(D_{\theta} = 0.76~ \text{rad}^{2}/\text{s}\). These two measurements can be combined to obtain a persistence length \(\xi = \pi^{2}v_{0}/(2D_{\theta}) = 14.6~d_{0}\) \cite{Deseigne2012}. The passive bath is composed of isotropic grains, which are disks made of the same metal, with the same diameter and height, but rotationally invariant. The contact with the vibrating plate results in the disks executing a random walk with diffusion constant \(D= (0.78 \pm 0.01)~d_{0}^{2}/\text{s} \). In addition to being confined between the two horizontal glass plates, the disks are constrained horizontally by a flower-shaped boundary [see Fig.~\ref{fig:Rg}(a)], which frustrates the tendency of active particles to accumulate at flat or concave walls \cite{Berke2008,Li2009,Elgeti2013}. The area fraction within the cell boundaries is held constant and equal to \(\Phi=0.088\) throughout the experiments. For each trial, we take images with a CCD camera at \(30~\text{fps}\) for \(10\) min and track the positions of each link of the polymer chain as a function of time, so that a typical trial produces \(18000\) images. We then exclude from our data all images in which any bead of the chain is located within approximately \(5d_0\) of the boundary. \section{Polymer Shrinking} To check whether our model polymer chains expand or contract in the active bath relative to the passive bath, we measure their radii of gyration at each time step by first computing the gyration tensor: \(S_{\alpha \beta} = \sum_{i=1}^N \Delta \alpha_i \Delta \beta_i \), where \( \alpha, \beta \in \{x,y\} \) and \(\Delta \alpha_i\) and \(\Delta \beta_i\) are the \(\alpha\) and \(\beta\) positions of a monomer in the chain relative to the center of mass of the polymer chain; the index \(i\) runs over all monomers. The gyration tensor is symmetric and has two real eigenvalues, \(\lambda_2>\lambda_1\), so that \(\lambda_2\) corresponds to the axis along which the one-dimensional radius of gyration is maximal.
The squared radius of gyration is then \(R_g^2= \mathrm{Tr}(S) = \lambda_1+\lambda_2\), corresponding to \(R_g^2 = \sum_{i=1}^N (\Delta x_i^2 + \Delta y_i^2)\). \begin{figure}[!h] \centering \includegraphics[width=3.2in]{Rg2.jpg} \caption{ (a) A typical image of an experiment with the monomers of the chain tracked in color and the flower-shaped cell highlighted in white. The beads of the chain have been highlighted in colors that range from blue (dark) to yellow (light) from one end of the chain to the other. (b) The probability distribution functions of \(R_g /(N\sigma)\) for three different lengths of chains in a passive bath. In these units, higher \(N\) would correspond to lower \(\kappa\) in equilibrium systems, which results in lower radii of gyration. (c) The probability distribution functions of \(R_g/(N\sigma)\) for five different lengths of chains immersed in the active bath. (d) \(\langle R_g \rangle / \sigma\) for passive (blue squares) and active (red circles) baths. The black line represents \(N^{0.75}\), corresponding to a self-avoiding random walk. } \label{fig:Rg} \end{figure} Figs. \ref{fig:Rg}(b) and \ref{fig:Rg}(c) show the probability distribution functions of our measured radii of gyration for the polymer chains immersed in both passive and active baths, respectively, with the radius of gyration scaled by \(N \sigma\). With this scaling, the maximum end-to-end length of the polymer is \(R_{ee}/(N \sigma) = 1\), corresponding to a maximum radius of gyration equivalent to that of an infinitesimally thin rod: \(\frac{R_g}{N \sigma} = \left(\int_{-1/2}^{1/2} x^2 dx \right)^\frac{1}{2}= \sqrt{1/12} \approx 0.29.\) In a bath of passive particles, the shortest chain, with \(N=32\) monomers, acts like a very stiff polymer, with the most probable value of \(R_g\) near the maximum limit; see red squares in Fig. \ref{fig:Rg}(b). In this case, the chain is almost always fully extended. As \(N\) increases, the relative radius of gyration \(R_g/(N\sigma)\) begins to shrink. This is because, at fixed normalized length \(L/(N\sigma)=1\), longer polymer chains contain more Kuhn lengths; we are thus increasing the polymer length relative to its persistence length. When we instead immerse the polymer chains in the active bath, we find that they possess a much lower \(R_g/(N\sigma)\) for a given length [see Fig. \ref{fig:Rg}(c)], indicating that, to leading order, the chains effectively have a decreased Kuhn length in the active bath, making them more flexible; this is in agreement with the simulation predictions in Refs. \cite{Shin2015, Kaiser2014}. We summarize these results in Fig. \ref{fig:Rg}(d), where we show that the chains immersed in the passive bath and in the active bath both follow the expected Flory law \(R_g\propto N^{3/4}\), at least within the contour lengths we are able to study. This is consistent with computer simulations \cite{Kaiser2014}. Fig. \ref{fig:Rg}(d) also shows that the average \(R_g\) for the chain immersed in the active bath (red circles) is always smaller than for the passive bath (blue squares), consistent with the chain in the active bath having a shorter Kuhn length. Because the physical stiffness of our chain has not decreased in reality, the increased flexibility of the chain in this case can be taken as a sign of an increased effective thermal energy of the bath, \(k_B T_{\text{eff}}\), which is often considered as one of the main effects of activity in low density systems \cite{Palacci2010,Loi2011,Ginot2015,Flenner2016,Caprini2019}.
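These shape measurements can be reproduced from the tracked monomer positions with a few lines of Python (a sketch assuming \texttt{numpy}; it follows the un-normalized gyration tensor defined above, and also returns the acylindricity used in the next section):
\begin{verbatim}
import numpy as np

def chain_shape(positions):
    """Shape analysis of one tracked chain configuration;
    `positions` is an (N, 2) array of monomer coordinates."""
    rel = positions - positions.mean(axis=0)  # relative to center of mass
    S = rel.T @ rel                       # gyration tensor S_ab = sum da*db
    lam1, lam2 = np.sort(np.linalg.eigvalsh(S))   # lam2 >= lam1
    Rg = np.sqrt(lam1 + lam2)             # R_g^2 = Tr(S)
    A2 = (lam2 - lam1) / (lam1 + lam2)    # acylindricity A^2
    return Rg, A2
\end{verbatim}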
Overall, our results might seem to suggest that the primary effect of the active bath is to increase the effective temperature. However, we shall now see that the active bath cannot simply be mapped onto a passive one, as we also find remarkable differences in the typical shapes adopted by the chains in active and passive baths. \section{Hairpins and Coils} Simulation work has predicted that, in addition to shrinking, polymers in active baths are much more likely to adopt hairpin configurations \cite{Harder2014}; these contain a single prominent bend and are otherwise extended. One way to measure the prevalence of this type of configuration is to measure the so-called acylindricity of the polymer chain and compare it to its radius of gyration. The acylindricity is defined as \(A^2 = \frac{\lambda_2 -\lambda_1}{R_g^2}\), and it measures the relative difference between the 1-dimensional radii of gyration. If these radii are equal, as in the case of a uniform circle or a square, then \(A=0\). Conversely, \(A\) is maximal for a line, which has \(\lambda_1=0\) and \(A=1\). \begin{figure}[!h] \centering \includegraphics[width=3.2in]{Hairpin2.jpg} \caption{ (a) A chain with \(N=45\) in a nearly fully extended configuration. (b) The chain in a more tightly confined configuration. (c) The chain in a hairpin configuration. The lines represent the principal axes of the configuration, and their lengths are \(2\sqrt{\lambda_1}\) and \(2\sqrt{\lambda_2}\). } \label{fig:chp6:Hairpins} \end{figure} \begin{figure*}[!hbt] \centering \includegraphics[width=5in]{Conformations2.jpg} \caption{ (a-c) The measured prevalence of conformations with various \(R_g\) and \(A^2\) for polymers with \(N=32, 45, \text{and } 78\) monomers, respectively, immersed in a passive bath. (d-f) The prevalence of conformations for polymers with \(N=32, 45, \text{and } 78\) monomers, respectively, immersed in an active bath. } \label{fig:chp6:R_A} \end{figure*} Simultaneous measurements of \(A^2\) and \(R_g\) thus allow us to detect whether our polymer chains adopt hairpin-like configurations. As an example, consider the three configurations shown in Fig. \ref{fig:chp6:Hairpins}(a-c). We have added lines that represent the lengths and orientations of their principal radii of gyration to more easily compare them with each other; each double blue line has a length of \(2\sqrt{\lambda_2}\), and each red line has length \(2\sqrt{\lambda_1}\). The configuration in Fig. \ref{fig:chp6:Hairpins}(a) has \(A^2=0.95\) and \(R_g/(N\sigma)=0.26\), which are near their possible maximum values. The acylindricity is large because \(\lambda_2 \gg \lambda_1\), and \(R_g\) is also large because the polymer is fully extended. In comparison, the configuration in Fig. \ref{fig:chp6:Hairpins}(b) is spread out more isotropically, corresponding to \(\lambda_1 \approx \lambda_2\) and a smaller acylindricity \(A^2 = 0.31\). At the same time, the chain is more compact, which reduces the radius of gyration to \(R_g/(N\sigma) =0.11\). Figure \ref{fig:chp6:Hairpins}(c) shows an example of a hairpin configuration with \(A^2 = 0.98\) and \(R_g/(N\sigma) = 0.14\). The acylindricities of such configurations are very high, because the hairpins are highly anisotropic, but they all have much lower values of \(R_g\) relative to the values expected for a fully extended chain. We find that our polymer chains adopt many more hairpin conformations when immersed in active, as compared to passive, baths. Figs.
\ref{fig:chp6:R_A}(a-c) show the probability of conformations in a passive bath in terms of \(R_g\) and \(A^2\), for polymer lengths corresponding to \(N = 32, 45, 78\), respectively. The gray scale represents the probability of the polymer having the given \(R_g\) and \(A^2\); note that we scale the probabilities for each trial by the highest probability in that trial, so that all plots can be shown with the same gray scale. In passive baths, the radius of gyration and acylindricity of the polymer chain are closely related; there is little spread in the data corresponding to the largest probabilities. Any reduction in \(R_g\) is thus accompanied by a reduction in \(A^2\). In contrast, for active baths, there are many more conformations with low \(R_g\) for a given \(A^2\) [Figs. \ref{fig:chp6:R_A}(d-f)], indicating the presence of an appreciable number of hairpin configurations; this agrees with expectations from computer simulations \cite{Harder2014}. We also note that, in the presence of passive baths, the polymer chain occasionally reaches a steady-state configuration where it is completely depleted to the wall, such as that illustrated for a chain with \(N=100\) monomers in Fig. \ref{fig:chp6:Snail}(a). This is never observed in the presence of active baths, as we always find active particles near the wall that are able to eventually push the chain back into the bulk of the experimental cell. In other instances, also in passive baths, we find that the polymer collapses into a coil, as illustrated in Fig. \ref{fig:chp6:Snail}(b). This coiled configuration is the same as the one found in the simulations of Ref. \cite{Liu2019}. In that work, however, the polymer was inside a chiral active bath composed of self-propelled particles with a non-zero angular velocity in addition to their average directed motion velocity. Additional experiments would be needed to assess whether there is some hidden chirality in our setup with passive particles or whether the spiral configuration is simply stable in two dimensions when the particle density in the bath is low enough that no particles from the bath are trapped within a loop configuration of the polymer. \begin{figure} \centering \includegraphics[width=3.375 in]{Snail.pr.jpg} \caption{ (a) An example of a chain in a passive bath depleting to the boundaries. Once this happens, the chain never returns to the bulk. (b) An example of a chain coiling in a passive bath. This is also a steady state, as the grains cannot exert any force that would uncoil it. } \label{fig:chp6:Snail} \end{figure} \section{Conclusion} In this paper, we have used a well-studied two-dimensional system of self-propelled particles to explore the configurations of a passive polymer embedded in an active bath. Our results show that, in both passive and active baths, the average radius of gyration increases with the number of monomers in a manner consistent with the Flory scaling law of equilibrium polymers. This makes it tempting to compare the effect of the active bath to an increased effective temperature. However, by comparing simultaneous measurements of the radii of gyration and acylindricity of the polymers, we verify that the activity of the bath changes the configurations of the polymer, skewing the likely configurations towards those that are more ``hairpin-like'', i.e.\ configurations with a single prominent bend caused by one or more active particles briefly penetrating and dragging the polymer.
Importantly, in our passive particle bath, the polymer can adopt steady-state configurations not seen in our active particle bath; these configurations correspond to polymer chains that are either depleted to the boundary of the cell, when, by chance, there are no particles between the chain and the wall, or to spiral states, when the polymer begins to close on itself with no particles inside the loop. Our results verify various predictions from simulations of passive polymers in an active bath and may be a step towards further understanding of polymer collapse in a variety of biological situations. \begin{acknowledgments} We thank MCIN/AEI/10.13039/501100011033/FEDER (grant No. PID2021-122369NB-100), as well as the FLAMEL (NSF DGE-1258425) and REU (NSF Grant GR10002751) programs, for financial support. \end{acknowledgments}
\section*{Introduction} A main goal in measured group theory, initiated by work of Dye \cite{Dye}, is to classify measure-preserving group actions on standard probability spaces up to \emph{orbit equivalence}, i.e.\ up to the existence of a measure space isomorphism sending orbits to orbits. More generally, we will be interested in \emph{stable orbit equivalence} of actions of countable groups, defined as follows: two free, ergodic, measure-preserving actions $G\curvearrowright X$ and $H\curvearrowright Y$ by Borel automorphisms on standard probability spaces are \emph{stably orbit equivalent} if there exist positive measure Borel subsets $U\subseteq X$ and $V\subseteq Y$, and a measure-scaling isomorphism $f:U\to V$, such that for every $x\in U$, one has $f((G\cdot x)\cap U)=(H\cdot f(x))\cap V$. A first striking result in this theory was the proof by Ornstein and Weiss \cite{OW}, building on Dye's work, that any two free, ergodic, probability measure-preserving actions of countably infinite amenable groups are orbit equivalent. Later, Gaboriau used the notion of cost (introduced by Levitt in \cite{Lev}) to distinguish actions of free groups of different ranks \cite{Gab-cost}, and showed that $\ell^2$-Betti numbers also provide useful invariants for the classification \cite{Gab-l2}. In contrast to the Ornstein--Weiss theorem exhibiting a wide class of groups that are indistinguishable from the viewpoint of orbit equivalence, several strong rigidity results have then been obtained for various classes of groups, like higher rank lattices (Furman \cite{Fur-me,Fur-oe}), mapping class groups (Kida \cite{Kid,Kid-oe}) and related groups (e.g.\ \cite{CK}), certain large type Artin groups \cite{HH1} or $\mathrm{Out}(F_N)$ with $N\ge 3$ (as proved by Guirardel and the first named author in \cite{GH}). Interestingly, negative curvature features of the groups under consideration are often key ingredients in the proofs of orbit equivalence rigidity of their ergodic actions. Other rigidity phenomena were discovered by Monod and Shalom \cite{MS}, who proved superrigidity-type results for \emph{irreducible} actions of direct products of free groups, or more generally of direct products $G_1\times\dots\times G_k$, with $k\ge 2$, where $\mathrm{H}^2_{\mathrm{b}}(G_i,\ell^2(G_i))\neq 0$ for every $i\in\{1,\dots,k\}$ (this condition on the bounded cohomology can be viewed as an analytical form of negative curvature). The crucial \emph{irreducibility} assumption means that every factor $G_i$ acts ergodically on $X$. In yet another direction, Popa obtained orbit equivalence rigidity results for Bernoulli actions of all property (T) groups \cite{Pop-T}, and all nonamenable groups that split as direct products or have an infinite center \cite{Pop}; these results were obtained in the framework of Popa's deformation/rigidity theory, and their proofs exploit a specific property of Bernoulli actions called \emph{malleability}, rather than geometric properties of the acting group. In \cite{HH2}, we started to investigate the class of right-angled Artin groups from the viewpoint of measured group theory. These groups are of basic importance (see e.g. 
\cite{charney2007introduction,Wise}) and have a very simple definition: given a finite simple graph $\Gamma$ (i.e.\ with no loop-edge and no multiple edges between two vertices), the \emph{right-angled Artin group} $G_\Gamma$ is defined by the following presentation: it has one generator per vertex of $\Gamma$, and relations are given by commutation of any two generators whose associated vertices are joined by an edge. On the rigidity side, we proved in \cite{HH2} that if two right-angled Artin groups $G_\Gamma,G_\Lambda$ with finite outer automorphism groups admit free, ergodic, measure-preserving actions on standard probability spaces which are orbit equivalent, or merely stably orbit equivalent (equivalently if the groups are measure equivalent), then $G_\Gamma$ and $G_\Lambda$ are isomorphic. However, rigidity fails beyond this context: given any right-angled Artin group $G_\Gamma$, and any group $H$ which is a graph product of countably infinite amenable groups over the same graph $\Gamma$, we can build free, ergodic, probability measure-preserving actions of $G_\Gamma$ and $H$ which are orbit equivalent \cite[Proposition~4.2]{HH2}. In fact, our proof of \cite[Proposition~4.2]{HH2} shows that starting from any action $G_\Gamma\curvearrowright Z$ as above, we can find a blown-up action $G_\Gamma\curvearrowright\hat{Z}$ (i.e.\ coming with a $G_\Gamma$-equivariant map $\hat{Z}\to Z$) which fails to be superrigid for orbit equivalence. We can also build two actions of $G_\Gamma$ which are orbit equivalent but not conjugate \cite[Remark~4.4]{HH2}. The goal of the present paper is to show that rigidity can be achieved if one restricts to a certain class of actions satisfying more restrictive ergodicity conditions, as in the following definition. \begin{defintro} Let $G$ be a right-angled Artin group. A free, probability measure-preserving action of $G$ on a standard probability space $X$ is \emph{irreducible} if there exist a finite simple graph $\Gamma$ and an isomorphism between $G$ and the right-angled Artin group $G_\Gamma$ such that, through this isomorphism, every standard generator of $G_\Gamma$ (associated to a vertex of $\Gamma$) acts ergodically on $X$. \end{defintro} The above definition is a natural extension of Monod and Shalom's irreducibility condition to the context of right-angled Artin groups (and could be naturally extended to graph products). Examples of irreducible actions of right-angled Artin groups include Bernoulli actions (considered in Theorem~\ref{theointro:Bernoulli} below), Gaussian actions associated to mixing orthogonal representations (introduced by Connes and Weiss in \cite{CW}, see also \cite[Section~2.1]{PS} for a detailed study), or actions obtained by considering discrete and faithful representations of right-angled Artin groups into $\mathrm{SL}(n,\mathbb{R})$ or even $\mathrm{SL}(n,\mathbb{Z})$ (see \cite{Wan} for examples), and choosing mixing actions of closed subgroups of $\mathrm{SL}(n,\mathbb{R})$ on homogeneous spaces, coming from the Howe--Moore theorem from \cite{HM}, see e.g.\ \cite[Corollary~2.5]{Bek}. Our main theorem is the following. \begin{theointro}\label{theointro:strong-rigidity} Let $G$ and $H$ be two one-ended centerless right-angled Artin groups. Let $G\curvearrowright X$ and $H\curvearrowright Y$ be two free irreducible measure-preserving actions by Borel automorphisms on standard probability spaces. 
If the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are stably orbit equivalent (or merely stably $W^*$-equivalent\footnote{i.e.\ their associated von Neumann algebras $L^\infty(X)\rtimes G$ and $L^\infty(Y)\rtimes H$, defined via Murray and von Neumann's \emph{group measure space construction} \cite{MvN}, have isomorphic amplifications}), then they are conjugate, i.e.\ there exist a group isomorphism $\alpha:G\to H$ and a measure space isomorphism $f:X\to Y$ such that for every $g\in G$ and almost every $x\in X$, one has $f(gx)=\alpha(g)f(x)$. \end{theointro} Theorem~\ref{theointro:strong-rigidity} covers a much larger class of right-angled Artin groups than our previous work \cite{HH2}, including many examples with infinite outer automorphism group. For example, it applies to all right-angled Artin groups whose defining graph is a tree of diameter at least $3$, which are usually less rigid from other viewpoints (for instance, they are all quasi-isometric \cite{BN}, and the problem of their measure equivalence classification is open). Also, contrary to our previous work (and to other measure equivalence rigidity statements in the literature, like \cite{Kid,HH1,GH}), our proof of Theorem~\ref{theointro:strong-rigidity} does not rely on a combinatorial rigidity statement for a curve graph analogue \cite{KK} associated to the right-angled Artin group. Instead, rigidity comes from the combination of a local argument (untwisting the orbit equivalence cocycle to a group homomorphism inside a vertex group), and a propagation argument where the commutation relations play a central role. The irreducibility assumption is crucial in both steps. The first step relies on a new orbit equivalence invariant of right-angled Artin groups (compared to \cite{HH2}), namely, the orbit equivalence relation remembers the maximal join subgroups of $G$ and $H$; this is important as it enables us to apply the results of Monod and Shalom in these local subgroups as a crucial step of the proof. As already explained above, counterexamples without the irreducibility assumption were given in \cite[Section~4.1]{HH2}. Counterexamples when the groups are infinitely-ended already arise in the context of free groups. Indeed, Bowen proved in \cite{Bow} that all nontrivial Bernoulli shifts of a given finitely generated free group are orbit equivalent; more generally, if $G=A_1\ast\dots\ast A_n$ and $G'=A'_1\ast\dots\ast A'_n$ are two free products of amenable groups with the same number of factors, then all Bernoulli shifts of $G$ and $G'$ are orbit equivalent. In these contexts, the Bernoulli shifts are completely classified up to conjugation by the entropy of their base space \cite{Bow3,Bow4}, yielding a $1$-parameter family of orbit equivalent pairwise nonconjugate actions. He also proved that all nontrivial Bernoulli shifts of finitely generated nonabelian free groups (possibly of different ranks) are stably orbit equivalent \cite{Bow2} -- although as already mentioned, work of Gaboriau ensures that they are not orbit equivalent when the ranks of the acting groups are different, by comparison of their costs \cite{Gab-cost}. This is in sharp contrast with our Theorem~\ref{theointro:strong-rigidity}, where stably orbit equivalent irreducible actions are automatically orbit equivalent, and in fact even conjugate.
We mention that in the context of right-angled Artin groups, the stable $W^*$-rigidity statement in Theorem~\ref{theointro:strong-rigidity} is a consequence of the stable orbit equivalence rigidity statement, using that the corresponding von Neumann algebras have a unique virtual Cartan subalgebra up to unitary conjugacy. Uniqueness of the virtual Cartan subalgebra up to unitary conjugacy was proved in a groundbreaking work of Popa and Vaes \cite[Theorem~1.2 and Remark~1.3]{PV2} for all free, ergodic, probability measure-preserving actions of groups satisfying Ozawa and Popa's property $(\mathrm{HH})^+$ -- the fact that right-angled Artin groups satisfy this property was established by Ozawa and Popa in \cite[Theorem~2.3(5)]{OP}. See also \cite[Corollary 3.20]{HH2} for a more detailed explanation. We also mention that we actually obtain a slightly stronger statement than Theorem~\ref{theointro:strong-rigidity}, namely: every stable orbit equivalence between the actions $G\curvearrowright X$ and $H\curvearrowright Y$ has compression 1 (see Section~\ref{sec:soe} for definitions, and Theorems~\ref{theo:join-case} and~\ref{theo:coned-case} for our precise statements). In particular, the \emph{fundamental group} of the equivalence relation $\mathcal{R}$ associated to the action $G\curvearrowright X$ (i.e.\ the subgroup of $\mathbb{R}_+^*$ consisting of all $t>0$ such that $\mathcal{R}$ is isomorphic to the amplification $\mathcal{R}^t$) is trivial. Notice that the class of one-ended centerless right-angled Artin groups contains groups whose $\ell^2$-Betti numbers all vanish (e.g.\ all right-angled Artin groups whose defining graph is a tree of diameter at least $3$, see \cite{DL}), and for these triviality of the fundamental group does not follow from Gaboriau's proportionality principle \cite{Gab-l2}. In view of the above, the fundamental group of the von Neumann algebra $L^\infty(X)\rtimes G$ (defined by Murray and von Neumann in \cite{MvN2} as the subgroup of $\mathbb{R}_+^*$ consisting of all $t>0$ such that $L^\infty(X)\rtimes G$ is isomorphic to the amplification $(L^\infty(X)\rtimes G)^t$) is also trivial. \medskip Using general techniques from measured group theory, developed in successive works of Furman \cite{Fur-oe}, Monod and Shalom \cite{MS}, and Kida \cite{Kid-oe}, Theorem~\ref{theointro:strong-rigidity} yields a superrigidity theorem within the class of mildly mixing group actions. Recall that an action of a countable group $G$ on a standard probability space $X$ is \emph{mildly mixing} if for every non-singular properly ergodic action of $G$ on a standard probability measure space $Y$, the diagonal $G$-action on $X\times Y$ is ergodic. Recall also that two measure-preserving actions $G_1\curvearrowright X_1$ and $G_2\curvearrowright X_2$ of countable groups on standard probability spaces are \emph{virtually conjugate} if there exist short exact sequences $1\to F_i\to G_i\to \bar{G}_i\to 1$ with $F_i$ finite, finite-index subgroups $\bar{G}_i^0\subseteq\bar{G}_i$, and conjugate actions $\bar{G}^0_i\curvearrowright X'_i$ (through an isomorphism between $\bar{G}_1^0$ and $\bar{G}_2^0$) such that for every $i\in\{1,2\}$, the action $\bar{G}_i\curvearrowright X_i/F_i$ is induced from $\bar{G}_i^0\curvearrowright X'_i$ as in \cite[Definition~2.1]{Kid-oe}. \begin{theointro}\label{theointro:superrigidity} Let $G$ be a one-ended centerless right-angled Artin group. Let $G\curvearrowright X$ be a free, irreducible, measure-preserving action of $G$ on a standard probability space $X$. 
Let $H$ be a countable group, and let $H\curvearrowright Y$ be a mildly mixing, free, measure-preserving action of $H$ on a standard probability space $Y$. If the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are stably orbit equivalent (or merely stably $W^*$-equivalent), then they are virtually conjugate. \end{theointro} In the specific case of nontrivial \emph{Bernoulli actions} of $G$ (i.e.\ of the form $G\curvearrowright X_0^G$, where $X_0$ is a standard probability space not reduced to a single atom, and the action is by shift), an even stronger conclusion holds, which does not require any mildly mixing assumption on the $H$-action. In the appendix of the present paper, written jointly with Adrian Ioana, we exploit works of Popa \cite{Pop} and of Ioana, Popa and Vaes \cite{IPV} to reach the following statement. \begin{theointro}\label{theointro:Bernoulli} Let $G$ be an ICC countable group, which admits a finite generating set $S=\{s_1,\dots,s_k\}$ made of infinite-order elements, such that for every $i\in\{1,\dots,k-1\}$, the elements $s_i$ and $s_{i+1}$ commute, and $s_1$ has a nonamenable centralizer in $G$. Let $G\curvearrowright X$ be a nontrivial Bernoulli action of $G$. Let $H$ be a countable group, and let $H\curvearrowright Y$ be a free, ergodic, measure-preserving action of $H$ on a standard probability space $Y$. If the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are orbit equivalent (or merely $W^*$-equivalent), then they are conjugate. \end{theointro} This applies to all one-ended nonabelian right-angled Artin groups: in fact in this case, using the uniqueness of the virtual Cartan subalgebra up to unitary conjugacy, we also obtain that if the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are stably $W^*$-equivalent, then they are virtually conjugate. The above theorem also applies to many (non-right-angled) Artin groups and to most mapping class groups of finite-type orientable surfaces. Let us also mention that the $W^*$-superrigidity of Bernoulli actions of countable ICC Property~(T) groups was proved by Ioana in \cite{Ioa-T}. \medskip Let us conclude this introduction by presenting the main steps of our proof of Theorem~\ref{theointro:strong-rigidity}. We have a cocycle $c:G\times X\to H$, given by the stable orbit equivalence of the actions. We first observe that it is enough to find a standard generator $s$ of $G$ such that, after replacing $c$ by a cohomologous cocycle (of the form $c'(g,x)=\varphi(gx)c(g,x)\varphi(x)^{-1}$ for some measurable map $\varphi:X\to H$), the map $c_{|\langle s\rangle\times X}$ is almost everywhere constant. Indeed, a propagation argument, using that $s$ is part of a generating set of $G$ with the property that two consecutive elements commute, then shows that $c$ is cohomologous to a group homomorphism (and likewise for the given cocycle $H\times Y\to G$), from which the conclusion follows. This propagation argument is presented in Section~\ref{sec:commuting-chain}. The first step towards the above goal is to use the techniques from our previous work \cite{HH2} to ``recognize'' certain natural subgroups of $G$ and $H$ from the orbit equivalence relation coming from their actions. 
More precisely, we prove that there exist maximal join parabolic subgroups $P\subseteq G$ and $Q\subseteq H$ (i.e.\ decomposing as a nontrivial product), and positive measure Borel subsets $U\subseteq X$ and $V\subseteq Y$, such that after identifying $U$ and $V$ through a measure-scaling isomorphism, the intersections of the $P$-orbits with $U$ coincide with the intersections of the $Q$-orbits with $V$. If $P$ and $Q$ are centerless, then we can directly apply Monod and Shalom's rigidity theorem \cite[Theorem~2.17]{MS} regarding actions of direct products of groups in the class $\mathcal{C}_{\mathrm{reg}}$ to get the desired conclusion. The most difficult case is when all maximal join parabolic subgroups of $G$ have nontrivial center. This in fact often happens: for instance, if the underlying graph of $G$ is triangle-free and square-free, then the maximal join parabolic subgroups are exactly the star subgroups, isomorphic to $\mathbb{Z}\times F_n$. In this case, a simple combinatorial argument enables us to find two maximal join parabolic subgroups $P_1,P_2\subseteq G$ with commuting centers. Using techniques from \cite{HH2}, we are able to show that the orbits of the subgroups $P_i$, in restriction to some positive measure Borel subset $U$, coincide with the orbits (in restriction to some $V$) of two maximal join parabolic subgroups $Q_1,Q_2\subseteq H$ with commuting centers. As the centers $A_1,A_2$ of $P_1,P_2$ act ergodically (and likewise for the centers $B_1,B_2$ of $Q_1,Q_2$), we can then apply another rigidity theorem due to Monod and Shalom \cite{MS} to derive that for every $i\in\{1,2\}$, the cocycle $c$ is cohomologous to a cocycle $c_i$ that induces a group isomorphism between the quotients $P_i/A_i$ and $Q_i/B_i$. Informally, this means that our cocycle $c_i$ is only controlled \emph{up to an ambiguity in the central direction}. But by comparing the ambiguities given by $c_1$ and $c_2$, we manage to cancel them and prove that $c$ is actually cohomologous to a group homomorphism on $A_i$. As explained above, this is enough to conclude our proof. \paragraph*{Acknowledgments.} The first named author acknowledges support from the Agence Nationale de la Recherche under Grant ANR-16-CE40-0006 DAGGER. \section{Right-angled Artin groups and combinatorial lemmas} \label{sec:raag} Given a finite simple graph $\Gamma$, the \emph{right-angled Artin group} $G_\Gamma$ is the group defined by the following presentation: \begin{center} $G_\Gamma=\langle V\Gamma$\ |\ $[v,w]=1$ if $v$ and $w$ are joined by an edge$\rangle$. \end{center} The images in $G_\Gamma$ of the vertices of $\Gamma$ form the \emph{standard generating set} of $G_\Gamma$. A \emph{full subgraph} of $\Gamma$ is a subgraph $\Lambda\subseteq\Gamma$ such that two vertices of $\Lambda$ are adjacent in $\Lambda$ if and only if they are adjacent in $\Gamma$. Any full subgraph $\Lambda\subseteq\Gamma$ induces an injective homomorphism $G_{\Lambda}\hookrightarrow G_{\Gamma}$ (sending the standard generating set of $G_\Lambda$ to a subset of the standard generating set of $G_\Gamma$), whose image is called a \emph{standard subgroup} of $G_{\Gamma}$. Conjugates of standard subgroups are called \emph{parabolic subgroups} of $G_\Gamma$. 
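For concreteness, let us record a few standard examples. If $\Gamma$ is a complete graph on $n$ vertices, then $G_\Gamma\cong\mathbb{Z}^n$; if $\Gamma$ has no edges, then $G_\Gamma\cong F_n$ is a free group of rank $n$; and if $\Gamma$ is a path on three vertices $a,b,c$ (with middle vertex $b$), then $G_\Gamma\cong F_2\times\mathbb{Z}$, where the free factor is generated by $a$ and $c$ and the center is generated by $b$. In this last example, the standard subgroup $\langle a,b\rangle$ is isomorphic to $\mathbb{Z}^2$, and its conjugate $c\langle a,b\rangle c^{-1}$ is a parabolic subgroup which is not standard.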
It is known that if $gG_{\Lambda_1}g^{-1}\subseteq G_{\Lambda_2}$ for some full subgraphs $\Lambda_1,\Lambda_2$ of $\Gamma$, then $\Lambda_1\subseteq\Lambda_2$ and there exists $h\in G_{\Lambda_2}$ such that $hG_{\Lambda_1}h^{-1}=gG_{\Lambda_1}g^{-1}$ (this follows from \cite[Proposition~2.2]{charney2007automorphisms}). Thus the parabolic subgroup $gG_{\Lambda_1}g^{-1}$ of $G_\Gamma$ is also a parabolic subgroup of $G_{\Lambda_2}$. For a full subgraph $\Lambda\subseteq\Gamma$, define $\Lambda^\perp$ to be the full subgraph spanned by all vertices in $V\Gamma\setminus V\Lambda$ that are adjacent to all vertices of $\Lambda$. Let now $P=gG_{\Lambda}g^{-1}$ be a parabolic subgroup. We define $P^\perp=gG_{\Lambda^\perp}g^{-1}$. This is well-defined: if we can write the parabolic subgroup $P$ in two different ways $gG_{\Lambda}g^{-1}$ and $hG_{\Lambda'}h^{-1}$, then \cite[Proposition~2.2]{charney2007automorphisms} implies that $\Lambda=\Lambda'$ and $gG_{\Lambda^\perp}g^{-1}=hG_{\Lambda^\perp}h^{-1}$. \begin{lemma}[{Charney--Crisp--Vogtmann \cite[Proposition~2.2]{charney2007automorphisms}}] \label{lemma:normalizer} Let $P\subseteq G_\Gamma$ be a parabolic subgroup. Then the normalizer of $P$ in $G_{\Gamma}$ is $P\times P^\perp$. \end{lemma} Many properties of $G_\Gamma$ can be read from its defining graph $\Gamma$. For instance, $G_\Gamma$ is one-ended if and only if $\Gamma$ is connected and has at least two vertices, and $G_\Gamma$ is centerless if and only if no vertex of $\Gamma$ is adjacent to every other vertex. For any full subgraph $\Lambda\subseteq\Gamma$, there is a retraction $r_\Lambda:G_{\Gamma}\to G_{\Lambda}$ defined by sending every element of the standard generating set corresponding to a vertex in $V\Gamma\setminus V\Lambda$ to the identity element. Hence for any parabolic subgroup $P=gG_{\Lambda}g^{-1}$ of $G_{\Gamma}$, we have a well-defined retraction $r_P:G_{\Gamma}\to P$, defined by letting $r_P(gsg^{-1})=gr_\Lambda(s)g^{-1}$ for every standard generator $s$ of $G_\Gamma$. A \emph{join subgraph} $\Lambda$ of $\Gamma$ is a full subgraph which admits a join decomposition $\Lambda=\Lambda_1\circ \Lambda_2$ (i.e.\ every vertex of $\Lambda_1$ is adjacent to every vertex of $\Lambda_2$) with $\Lambda_i\neq \emptyset$ for every $i\in\{1,2\}$. A \emph{maximal join subgraph} is a join subgraph which is not properly contained in another join subgraph. A \emph{(maximal) join parabolic subgroup} is a parabolic subgroup of the form $gG_{\Lambda}g^{-1}$ where $\Lambda$ is a (maximal) join subgraph of $\Gamma$. The \emph{clique factor} of a graph $\Lambda$ is the maximal complete subgraph appearing in a join decomposition of $\Lambda$. \begin{lemma}\label{lemma:join-parabolic} Let $G=G_\Gamma$ be a right-angled Artin group, let $P$ be a join parabolic subgroup of $G$, and let $S\subseteq P$ be a parabolic subgroup. Then $S\times S^{\perp}$ is a join parabolic subgroup. \end{lemma} \begin{proof} Let $\Lambda\subseteq\Gamma$ be a full subgraph such that $P$ is conjugate to $G_\Lambda$; the subgraph $\Lambda$ decomposes nontrivially as a join $\Lambda=\Lambda_1\circ\Lambda_2$. Then $S$ is conjugate to $G_\Upsilon$ for some full subgraph $\Upsilon$ of $\Lambda$ (as follows from \cite[Proposition~2.2]{charney2007automorphisms}). If $\Upsilon\subseteq\Lambda_i$ for some $i\in\{1,2\}$, then $\Upsilon^{\perp}$ contains $\Lambda_{3-i}$, so $S\times S^{\perp}$ is a join parabolic subgroup.
Otherwise $\Upsilon$ decomposes nontrivially as a join, and $S$ itself is a join parabolic subgroup (and therefore so is $S\times S^{\perp}$). \end{proof} \begin{lemma}\label{lemma:maximal-not-abelian} Let $G=G_\Gamma$ be a nonabelian right-angled Artin group with connected defining graph. Then no maximal join parabolic subgroup is abelian. \end{lemma} \begin{proof} Let $\Omega\subseteq\Gamma$ be a maximal join subgraph, and assume towards a contradiction that $\Omega$ is a clique. As $G$ is nonabelian and $\Gamma$ is connected, we can find a vertex $v\in V\Omega$ which is joined by an edge to a vertex $u\notin V\Omega$ (otherwise, connectivity of $\Gamma$ would force $\Gamma=\Omega$, contradicting that $G$ is nonabelian). Then $v\circ v^{\perp}$ is a join subgraph of $\Gamma$ which properly contains $\Omega$ (every vertex of $\Omega\setminus\{v\}$ is adjacent to $v$, hence lies in $v^{\perp}$, while $u\in v^{\perp}\setminus V\Omega$), contradicting the maximality of $\Omega$. \end{proof} The following basic combinatorial lemma will be crucial for the general structure of the proof of our main theorems: two different arguments will be used in the paper, depending on whether $G_\Gamma$ satisfies the first or second conclusion below. \begin{lemma}\label{lemma:combinatorics} Let $G=G_\Gamma$ be a one-ended centerless right-angled Artin group. Then either $G$ contains a centerless maximal join parabolic subgroup, or else $G$ contains two distinct nonabelian maximal join parabolic subgroups whose centers commute. \end{lemma} \begin{proof} We assume that every maximal join parabolic subgroup of $G$ has a nontrivial center, and prove that the second conclusion of the lemma holds. Let $\Omega$ be a maximal join subgraph in $\Gamma$, with clique factor $\Omega_1$ (which is nonempty, as the center of $G_\Omega$ is nontrivial by assumption). As $G$ is centerless and $\Gamma$ is connected (because $G$ is one-ended), there is a vertex $v\in V\Omega$ such that $v$ is adjacent to a vertex $u$ outside $\Omega$ (otherwise connectivity would force $\Gamma=\Omega$, and the nonempty clique factor $\Omega_1$ would yield a nontrivial center for $G$). Let $\Lambda$ be a maximal join subgraph containing $v\circ v^\perp$. Then $\Omega_1\subsetneq v\circ v^\perp\subseteq \Lambda$ and $\Omega\neq\Lambda$ (as $u\in V\Lambda$). By Lemma~\ref{lemma:maximal-not-abelian}, the parabolic subgroups $G_\Omega$ and $G_\Lambda$ are nonabelian. Finally, letting $\Lambda_1$ be the clique factor of $\Lambda$, the group $G_{\Lambda_1}$ commutes with $G_{v\circ v^{\perp}}$, in particular $G_{\Lambda_1}$ and $G_{\Omega_1}$ commute. \end{proof} \begin{lemma}\label{lemma:commuting-centers} Let $G=G_{\Gamma}$ be a right-angled Artin group, and let $P_1,P_2\subseteq G$ be two distinct maximal join parabolic subgroups. For every $i\in\{1,2\}$, let $Z_i$ be the center of $P_i$. Then $Z_1\cap Z_2=\{1\}$. In particular, if $Z_1$ and $Z_2$ commute, then $Z_1\subseteq Z_2^{\perp}$ and $Z_2\subseteq Z_1^{\perp}$. \end{lemma} \begin{proof} For every $i\in\{1,2\}$, the subgroup $Z_i$ is a parabolic subgroup of $G$, so $Z_1\cap Z_2$ is a parabolic subgroup of $G$ by \cite[Proposition~2.6]{DKR}. Let $Z=Z_1\cap Z_2$, and assume towards a contradiction that $Z\neq\{1\}$. Then $P=Z\times Z^{\perp}$ is a join parabolic subgroup of $G$ which contains $P_1$ and $P_2$ (indeed, for every $i\in\{1,2\}$, the subgroup $P_i$ centralizes $Z$, so $P_i\subseteq Z\times Z^{\perp}$ by Lemma~\ref{lemma:normalizer}). By maximality, we have $P_1=P_2=P$, a contradiction. We will now prove the last assertion of the lemma, so assume that $Z_1$ and $Z_2$ commute. Then $Z_2$ is a parabolic subgroup of $G$ contained in $Z_1\times Z_1^{\perp}$, so it is a parabolic subgroup of $Z_1\times Z_1^{\perp}$ (as can be derived from \cite[Proposition~2.2(2)]{charney2007automorphisms}). But \cite[Proposition~2.2(2)]{charney2007automorphisms} also ensures that parabolic subgroups of $Z_1\times Z_1^{\perp}$ are of the form $A\times B$, where $A$ is a parabolic subgroup of $Z_1$ and $B$ is a parabolic subgroup of $Z_1^{\perp}$.
As $Z_2\cap Z_1=\{1\}$, the factor $A$ in such a decomposition of $Z_2$ is trivial, and it follows that $Z_2\subseteq Z_1^{\perp}$. The containment $Z_1\subseteq Z_2^{\perp}$ follows by symmetry. \end{proof} Recall that a countable group $G$ is \emph{ICC} (standing for \emph{infinite conjugacy classes}) if the conjugacy class of every nontrivial element of $G$ is infinite. \begin{lemma}\label{lemma:icc} Every centerless right-angled Artin group is ICC. \end{lemma} \begin{proof} Let $G$ be a centerless right-angled Artin group with defining graph $\Gamma$, and let $\Gamma=\Gamma_1\circ \cdots\circ \Gamma_{k}$ be a join decomposition of $\Gamma$ into factors, none of which admits a further nontrivial join decomposition. Then each $G_{\Gamma_i}$ is centerless (as the center of $G$ is the direct product of the centers of the subgroups $G_{\Gamma_i}$). As a direct product of ICC groups is ICC, it suffices to prove that each $G_{\Gamma_i}$ is ICC. Note that $G_{\Gamma_i}$ is acylindrically hyperbolic in the sense of \cite{Osi}: the case when $\Gamma_i$ is connected follows from \cite[Theorem~30]{KK}, and the case when $\Gamma_i$ is disconnected follows from the fact that $G_{\Gamma_i}$ splits non-trivially as a free product. Hence $G_{\Gamma_i}$ is ICC by \cite[Theorem~2.35]{DGO}. \end{proof} \section{Background on stable orbit equivalence and measured groupoids} This section reviews material regarding stable orbit equivalence, cocycles and measured groupoids. Readers familiar with this material can skip directly to the next section. \subsection{Stable orbit equivalence and cocycles}\label{sec:soe} A \emph{standard Borel space} is a measurable space $X$ which is isomorphic to a Polish topological space (i.e.\ separable and completely metrizable) equipped with its Borel $\sigma$-algebra. By a \emph{standard probability space} we mean a standard Borel space equipped with a Borel measure $\mu$ such that $\mu(X)=1$. In this paper, all actions of countable groups on standard Borel spaces are assumed to be by Borel automorphisms. Given a standard probability space $(X,\mu)$ and a Borel subset $A\subseteq X$ of positive measure, we denote by $\mu_A$ the Borel probability measure on $A$ defined by renormalizing $\mu_{|A}$. Let $G$ and $H$ be two countable groups, and assume we have a measure-preserving $G$-action on a standard probability space $X$. A measurable map $c:G\times X\to H$ is a \emph{cocycle} if for every $g,g'\in G$ and almost every $x\in X$, one has $c(gg',x)=c(g,g'x)c(g',x)$. The cocycle $c$ is \emph{strict} if this relation holds for all $g,g'\in G$ and \emph{all} $x\in X$. As $G$ is countable, there always exists a $G$-invariant conull Borel subset $X^*\subseteq X$ such that $c_{|G\times X^*}$ is a strict cocycle. Two cocycles $c,c':G\times X\to H$ are \emph{cohomologous} if there exists a measurable map $\varphi:X\to H$ such that for every $g\in G$ and almost every $x\in X$, one has $c'(g,x)=\varphi(gx)c(g,x)\varphi(x)^{-1}$. We now briefly review the notion of \emph{stably orbit equivalent} group actions, and refer the reader to \cite{Fur-oe} for more information. Let $G\curvearrowright (X,\mu)$ and $H\curvearrowright (Y,\nu)$ be two free, ergodic, measure-preserving actions on standard probability spaces. A \emph{stable orbit equivalence} between $G\curvearrowright X$ and $H\curvearrowright Y$ is a measure space isomorphism $f:(U,\mu_U)\to (V,\nu_V)$, where $U\subseteq X$ and $V\subseteq Y$ are positive measure Borel subsets, such that $f((G\cdot x)\cap U)=(H\cdot f(x))\cap V$ for almost every $x\in U$. The \emph{compression constant} of $f$ is defined as $\kappa(f)=\nu(V)/\mu(U)$.
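As a basic illustration of these definitions, suppose that the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are conjugate, i.e.\ that there exist a group isomorphism $\alpha:G\to H$ and a measure space isomorphism $f:X\to Y$ such that $f(gx)=\alpha(g)f(x)$ for every $g\in G$ and almost every $x\in X$. Then $f$ is a stable orbit equivalence with $U=X$, $V=Y$ and $\kappa(f)=1$ (i.e.\ an orbit equivalence), and the map $c:G\times X\to H$ defined by $c(g,x)=\alpha(g)$ is a cocycle: indeed $c(gg',x)=\alpha(gg')=\alpha(g)\alpha(g')=c(g,g'x)c(g',x)$. A large part of the work carried out in this paper consists in reversing this observation: starting from a stable orbit equivalence, the goal is to replace the cocycle naturally associated to it (as defined below) by a cohomologous cocycle of this form.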
Following the exposition from \cite[Section~4]{Vae}, we say that a cocycle $c:G\times X\to H$ is an \emph{SOE cocycle associated to $f$} if there exists a measurable map $p:X\to U$, with $p(x)\in G\cdot x$ for almost every $x\in X$, such that for every $g\in G$ and almost every $x\in X$, $c(g,x)$ is the unique element $h\in H$ such that $f\circ p(g\cdot x)=h \cdot (f\circ p(x))$ (uniqueness comes from freeness of the $H$-action). An SOE cocycle associated to $f$ always exists by ergodicity of the $G$-action (i.e.\ we can always find a map $p$ as above), and any two such cocycles (corresponding to different choices of $p$) are cohomologous. Notice that we can always choose $p$ as above such that $p_{|U}=\mathrm{id}_U$. The two actions $G\curvearrowright X$ and $H\curvearrowright Y$ are \emph{stably orbit equivalent} if there exists a stable orbit equivalence between them; they are \emph{orbit equivalent} if a stable orbit equivalence can be chosen with $U=X$ and $V=Y$. We mention that two free, ergodic, measure-preserving actions on standard probability spaces are orbit equivalent if and only if there is a stable orbit equivalence between them whose compression constant is equal to $1$, see \cite[Proposition~2.7]{Fur-oe}. In the above situation, observe that if $A\subseteq G$ and $B\subseteq H$ are subgroups acting ergodically on $X,Y$, and satisfy $f((A\cdot x)\cap U)=(B\cdot f(x))\cap V$ for almost every $x\in U$, then an SOE cocycle $c$ associated to $f$ can always be chosen so that $c_{|A\times X}$ is an SOE cocycle associated to $f$, now viewed as a stable orbit equivalence between the actions $A\curvearrowright X$ and $B\curvearrowright Y$ (in particular $c(A\times X^*)\subseteq B$ for some conull Borel subset $X^*\subseteq X$). Indeed, this is proved by choosing the map $p$ so that $p(x)\in A\cdot x$ for almost every $x\in X$. \subsection{Background on measured groupoids} The arguments in Section~\ref{sec:recognition} below rely on our earlier work \cite{HH2}, which is phrased in the language of measured groupoids. In this section we offer a quick review, and refer the reader to \cite[Section~2.1]{AD}, \cite{Kid-survey} or \cite[Section~3]{GH} for more detailed treatments. It is possible to skip this section for now and come back to it when reading Section~\ref{sec:recognition}. A \emph{discrete Borel groupoid} is a standard Borel space $\mathcal{G}$ equipped with two Borel maps $s,r:\mathcal{G}\to X$ towards a standard Borel space $X$ whose fibers are at most countable, and coming with a measurable (partially defined) composition law, a measurable inverse map, and a unit element $e_x$ for every $x\in X$. The space $X$ is called the \emph{base space} of the groupoid, and we think of an element $g\in\mathcal{G}$ as being an arrow whose source $s(g)$ and range $r(g)$ both belong to $X$ (composition of two arrows $g_1g_2$ makes sense when $s(g_1)=r(g_2)$). A \emph{bisection} of $\mathcal{G}$ is a Borel subset $B\subseteq\mathcal{G}$ such that $s_{|B}$ and $r_{|B}$ are injective; it thus defines a Borel isomorphism between two Borel subsets of $X$ (see \cite[Corollary~15.2]{Kec}). A theorem of Lusin and Novikov (see \cite[Theorem~18.10]{Kec}) ensures that any discrete Borel groupoid is covered by countably many pairwise disjoint bisections. A \emph{measured groupoid} is a discrete Borel groupoid $\mathcal{G}$ whose base space $X$ comes equipped with a \emph{quasi-invariant} finite Borel measure $\mu$, i.e.\ for every bisection $B\subseteq\mathcal{G}$, one has $\mu(s(B))=0$ if and only if $\mu(r(B))=0$.
A measured groupoid $\mathcal{G}$ is \emph{trivial} if $\mathcal{G}=\{e_x|x\in X\}$. On the other hand $\mathcal{G}$ is \emph{of infinite type} if for every Borel subset $U\subseteq X$ of positive measure, and almost every $x\in U$, there are infinitely many elements $g\in\mathcal{G}_{|U}$ with $s(g)=x$. In the present paper, the most important example of a measured groupoid is the following. Let $G$ be a countable group which acts on a standard finite measure space $X$ by Borel automorphisms in a measure-preserving way (or merely by preserving the measure class). Then $G\times X$ is naturally a measured groupoid over $X$, with $s(g,x)=x$ and $r(g,x)=gx$. This groupoid is denoted by $G\ltimes X$. Let now $\mathcal{G}, X$ and $\mu$ be as above. Every Borel subset $\mathcal{H}\subseteq\mathcal{G}$ which is stable under composition and inversion, and contains all unit elements $e_x$, has the structure of a discrete Borel groupoid over $X$, for which $\mu$ is quasi-invariant; we say that $\mathcal{H}$ is a \emph{measured subgroupoid} of $\mathcal{G}$. Given two measured subgroupoids $\mathcal{H}_1,\mathcal{H}_2\subseteq\mathcal{G}$, we denote by $\langle\mathcal{H}_1,\mathcal{H}_2\rangle$ the subgroupoid generated by $\mathcal{H}_1$ and $\mathcal{H}_2$, defined as the smallest measured subgroupoid of $\mathcal{G}$ that contains $\mathcal{H}_1$ and $\mathcal{H}_2$; equivalently, this is the measured subgroupoid of $\mathcal{G}$ made of all elements that are finite compositions of elements of $\mathcal{H}_1$ and $\mathcal{H}_2$. Given any Borel subset $U\subseteq X$, the \emph{restriction} $\mathcal{G}_{|U}=\{g\in\mathcal{G}|s(g),r(g)\in U\}$ is naturally a measured groupoid over $U$, with quasi-invariant measure $\mu_{|U}$. Given a countable group $G$, a \emph{strict cocycle} $\rho:\mathcal{G}\to G$ is a Borel map such that for all $g_1,g_2\in\mathcal{G}$ satisfying $s(g_1)=r(g_2)$ (so that $g_1g_2$ is well-defined), one has $\rho(g_1g_2)=\rho(g_1)\rho(g_2)$. Its \emph{kernel} is $\{g\in\mathcal{G}|\rho(g)=1\}$, a measured subgroupoid of $\mathcal{G}$. A strict cocycle $\rho:\mathcal{G}\to G$ is \emph{action-type} (as in \cite[Definition~3.20]{GH}) if it has trivial kernel, and for every infinite subgroup $H\subseteq G$, the subgroupoid $\rho^{-1}(H)$ is of infinite type. The following example is crucial: if $G$ acts on a standard finite measure space $X$ by Borel automorphisms in a measure-preserving way, then the natural cocycle $G\ltimes X\to G$ is action-type \cite[Proposition~2.26]{Kid-survey}. Let now $\mathcal{H}$ and $\mathcal{H}'$ be two measured subgroupoids of $\mathcal{G}$. The subgroupoid $\mathcal{H}'$ is \emph{stably contained} in $\mathcal{H}$ (resp.\ \emph{stably equal} to $\mathcal{H}$) if there exist a conull Borel subset $X^*\subseteq X$ and a partition $X^*=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets such that for every $i\in I$, one has $\mathcal{H}'_{|X_i}\subseteq\mathcal{H}_{|X_i}$ (resp.\ $\mathcal{H}'_{|X_i}=\mathcal{H}_{|X_i}$). The subgroupoid $\mathcal{H}$ is \emph{normalized} by $\mathcal{H}'$ if there exists a conull Borel subset $X^*\subseteq X$ such that $\mathcal{H}'_{|X^*}$ can be covered by at most countably many bisections $B_n$ in such a way that for every $n$, every $g_1,g_2\in B_n$, and every $h\in\mathcal{G}_{|X^*}$ such that $g_2hg_1^{-1}$ is well-defined, one has $h\in\mathcal{H}$ if and only if $g_2hg_1^{-1}\in\mathcal{H}$.
Here is an example: if $\mathcal{G}$ comes equipped with a cocycle $\rho:\mathcal{G}\to G$ towards a countable group, and if $H,H'\subseteq G$ are two subgroups such that $H$ is normalized by $H'$, then $\rho^{-1}(H)$ is normalized by $\rho^{-1}(H')$. The subgroupoid $\mathcal{H}$ is \emph{stably normalized} by $\mathcal{H}'$ if there exists a partition $X=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets such that for every $i\in I$, the groupoid $\mathcal{H}_{|X_i}$ is normalized by $\mathcal{H}'_{|X_i}$. We refer to \cite{Kid-survey} for the notion of \emph{amenability} of a measured groupoid, and only record a few properties we will need. Amenability of measured groupoids is stable under passing to subgroupoids and taking restrictions, and under stabilization in the following sense: if there exist a conull Borel subset $X^*\subseteq X$ and a partition $X^*=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets such that for every $i\in I$, the groupoid $\mathcal{G}_{|X_i}$ is amenable, then $\mathcal{G}$ is amenable (see \cite[Definition~3.33 and Remark~3.34]{GH}). If $\rho:\mathcal{G}\to G$ is a strict cocycle with trivial kernel towards a countable group $G$, and if $A\subseteq G$ is amenable, then $\rho^{-1}(A)$ is amenable (see e.g.\ \cite[Corollary~3.39]{GH}). A measured groupoid $\mathcal{G}$ over a standard finite measure space $X$ is \emph{everywhere nonamenable} if for every Borel subset $U\subseteq X$ of positive measure, the restricted groupoid $\mathcal{G}_{|U}$ is nonamenable. The following fact is crucial: if $\rho:\mathcal{G}\to G$ is a strict action-type cocycle towards a countable group $G$, and if $G$ contains a nonabelian free subgroup, then $\mathcal{G}$ is everywhere nonamenable \cite[Lemma~3.20]{Kid} (compare also \cite[Remark~3.3]{HH2}). \section{Monod and Shalom's rigidity theorems} \subsection{Quotient by a normal subgroup} The following lemma is extracted from the work of Monod and Shalom \cite{MS}. Its proof comes from \cite[p.862]{MS}; we recall it here for the convenience of the reader. \begin{lemma}[Monod--Shalom \cite{MS}]\label{lem:MS} Let $G,H$ be countable groups, and let $G\curvearrowright X$ and $H\curvearrowright Y$ be free, ergodic, measure-preserving actions on standard probability spaces. Assume that they are stably orbit equivalent, let $f:U\to V$ be a stable orbit equivalence between them (where $U\subseteq X$ and $V\subseteq Y$ are positive measure Borel subsets), and let $c:G\times X\to H$ be an SOE cocycle associated to $f$. Let $A\unlhd G$ and $B\unlhd H$ be normal subgroups acting ergodically on $X,Y$, and assume that for almost every $x\in U$, one has $f((A\cdot x)\cap U)= (B\cdot f(x))\cap V$. Then there exist a group isomorphism $\alpha:G/A\to H/B$ and a measurable map $\varphi:X\to H$ with $\varphi(x)=e$ for every $x\in U$, such that for every $g\in G$ and almost every $x\in X$, one has $\varphi(gx) c(g,x)\varphi(x)^{-1}\in\alpha(gA)$. \end{lemma} \begin{proof} As observed in Section~\ref{sec:soe}, up to replacing $c$ by a cohomologous cocycle, and $X$ by a conull $G$-invariant Borel subset, we can (and will) assume that $c(A\times X)\subseteq B$. Likewise, up to replacing $Y$ by a conull $H$-invariant subset, we can choose an SOE cocycle $c':H\times Y\to G$ associated to the stable orbit equivalence $f^{-1}:V\to U$ between $H\curvearrowright Y$ and $G\curvearrowright X$, so that $c'(B\times Y)\subseteq A$.
Let $\Sigma=X\times H$, equipped with the measure-preserving action of $G\times H$ given by $(g,h)\cdot (x,k)= (gx,c(g,x)kh^{-1})$. Letting $X_e=X\times\{e\}$ (which is a fundamental domain for the $H$-action on $\Sigma$), we observe that $A X_e\subseteq B X_e$ (indeed, for every $a\in A$ and every $x\in X$, one has $a\cdot(x,e)=(ax,c(a,x))$ with $c(a,x)\in B$), so $A B X_e=BAX_e\subseteq B BX_e= BX_e$, thus $BX_e$ is invariant under $A\times B$. In addition, from the ergodicity of the $A$-action on $X$, we deduce that the action of $A\times B$ on $BX_e$ is ergodic. Moreover, for every $h\in H$, we have $hBX_e=BX_e$ if and only if $h\in B$, and otherwise $hBX_e\cap BX_e=\emptyset$. In addition, the union of all $H$-translates of $BX_e$ covers $\Sigma$. This proves that $\bar H=H/B$ acts simply transitively on the space $\bar\Sigma$ of ergodic components of the action of $A\times B$ on $\Sigma$. By \cite[Theorem~3.3]{Fur-oe}, the space $\Sigma$ is measurably isomorphic to $Y\times G$, equipped with the measure-preserving action of $G\times H$ given by $(g,h)\cdot (y,k)=(hy,c'(h,y)kg^{-1})$. A symmetric argument then shows that $\bar G=G/A$ also acts simply transitively on $\bar \Sigma$. Therefore, there exist an isomorphism $\alpha:\bar G\to \bar H$, and a measurable isomorphism $\bar{\Sigma}\approx\bar H$ sending $BX_e$ to $e$, such that the action of $\bar G\times\bar H$ on $\bar{\Sigma}$ is given by $(\bar g,\bar h)\cdot\bar{k}=\alpha(\bar g)\bar k\bar h^{-1}$ through this identification. We also have a $(G\times H)$-equivariant Borel map $\Phi:\Sigma\to\bar H$ (sending $BX_e$ to $e$). The equivariance of $\Phi$ shows that for all $g\in G$ and almost every $x\in X$, one has $\Phi(g(x,e))=\alpha(\bar g)$, i.e.\ $\Phi(gx,c(g,x))=\alpha(\bar g)$. Letting $h\in H$ be such that $\alpha(\bar g)=\bar h$, we deduce that $\Phi(gx,c(g,x)h^{-1})=e$. This shows that $c(g,x)h^{-1}\in B$, i.e.\ $c(g,x)\in \alpha(gA)$, as desired. \end{proof} \subsection{Direct products} Following \cite[Notation~1.2]{MS}, we let $\mathcal{C}_{\mathrm{reg}}$ be the class of all countable groups $\Gamma$ such that $\mathrm{H}^2_{\mathrm{b}}(\Gamma,\ell^2(\Gamma))\neq 0$. By \cite[Corollary~1.8]{CFI}, every nonabelian right-angled Artin group which does not split nontrivially as a direct product belongs to the class $\mathcal{C}_{\mathrm{reg}}$ (this also follows from \cite{Ham,HO} and the fact that these groups are acylindrically hyperbolic \cite{KK}). \begin{theo}[{Monod--Shalom \cite[Theorem~2.17]{MS}}]\label{theo:monod-shalom-product} Let $m,n\ge 2$, and let $G_1,\dots,G_m$ and $H_1,\dots,H_n$ be torsion-free countable groups in $\mathcal{C}_{\mathrm{reg}}$. Let $G=G_1\times\dots\times G_m$ and $H=H_1\times\dots\times H_n$. Let $G\curvearrowright X$ and $H\curvearrowright Y$ be two free, ergodic, measure-preserving actions on standard probability spaces. Assume that all groups $G_i$ act ergodically on $X$, that all groups $H_j$ act ergodically on $Y$, and that the actions are stably orbit equivalent (via a stable orbit equivalence $f:U\to V$). Then $\kappa(f)=1$, the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are conjugate through a group isomorphism between $G$ and $H$, and every SOE cocycle $G\times X\to H$ is cohomologous to a group isomorphism. \end{theo} \section{Exploiting chain-commuting generating sets}\label{sec:commuting-chain} We start with an elementary lemma. \begin{lemma}\label{lemma:cocycle-homomorphism} Let $G$ and $H$ be groups, with $G$ countable. Let $G\curvearrowright X$ be a measure-preserving $G$-action on a standard probability space $X$, and let $c:G\times X\to H$ be a cocycle.
Let $S\subseteq G$ be a generating set for $G$. Assume that there exists a conull Borel subset $X^*\subseteq X$ such that for every $s\in S$, the map $c(s,\cdot)_{|X^*}$ is constant. Then there exists a group homomorphism $\alpha:G\to H$ such that for every $g\in G$ and almost every $x\in X$, one has $c(g,x)=\alpha(g)$. \end{lemma} \begin{proof} The fact that $c(g,\cdot)$ is almost everywhere constant for every $g\in G$ follows, via the cocycle relation, from the same fact for $g\in S$ together with our assumption that $S$ generates $G$. Letting $\alpha:G\to H$ be defined by sending $g$ to the essential value of $c(g,\cdot)$, the fact that $c$ is a cocycle implies that $\alpha$ is a homomorphism, completing the proof. \end{proof} A generating set $S$ of a group $G$ is \emph{chain-commuting} if the graph whose vertex set is $S$, with one edge between two vertices if the corresponding elements of $G$ commute, is connected. Notice that a group $G$ has a finite chain-commuting generating set if and only if it is a quotient of a one-ended right-angled Artin group (defined over a finite simple graph $\Gamma$). Interestingly, having a finite chain-commuting generating set whose elements have infinite order is a condition that has already been successfully exploited in various contexts in measured group theory: for instance Gaboriau proved in \cite[Critères VI.24]{Gab-cost} that it forces all free probability measure-preserving actions of $G$ to have cost $1$; see also \cite{AGN} for a more recent use. We say that a group $H$ has the \emph{root-conjugation property} if for every $h_1,h_2\in H$ and every integer $k>0$, if $h_1$ commutes with $h_2^k$, then $h_1$ commutes with $h_2$. \begin{lemma}\label{lemma:chain-commutative} Let $G$ and $H$ be countable groups. Assume that $H$ satisfies the root-conjugation property. Let $G\curvearrowright X$ be a measure-preserving $G$-action on a standard probability space $X$, and let $c:G\times X\to H$ be a cocycle. Let $S$ be a generating set of $G$. Assume that \begin{enumerate} \item $S$ is chain-commuting, \item every element of $S$ acts ergodically on $X$, and \item there exist $s\in S$ and a conull Borel subset $X^*\subseteq X$ such that $c(s,\cdot)_{|X^*}$ is constant. \end{enumerate} Then there exists a group homomorphism $\alpha:G\to H$ such that for every $g\in G$ and almost every $x\in X$, one has $c(g,x)=\alpha(g)$. \end{lemma} \begin{proof} Let $s\in S$ be as in assertion~3, and denote by $\beta_s$ the constant value of $c(s,\cdot)$ on $X^*$. We claim that for every $u\in S$ which commutes with $s$, the value $c(u,\cdot)$ is constant on a conull Borel subset of $X$. As $S$ is chain-commuting, arguing inductively will then ensure that the same is true for all $u\in S$, and as $G$ is countable the conull Borel subset of $X$ can be chosen independent of $u$. The conclusion will then follow from Lemma~\ref{lemma:cocycle-homomorphism}. We now prove the above claim. Up to replacing $X^*$ by a further conull Borel subset (which we can assume to be $G$-invariant), we will assume that the cocycle $c$ is strict. Let $X^*=\sqcup_{i\in I}X_i$ be a partition into at most countably many Borel subsets such that for each $i$, the value of $c(u,\cdot)$ is constant when restricted to $X_i$ -- we denote it by $\alpha_i$. Let $i,j\in I$ be such that $X_i$ and $X_j$ have positive measure (possibly with $i=j$). As $s$ acts ergodically on $X$, there exist an integer $k_{i,j}\neq 0$ and $x\in X_i$ such that $s^{k_{i,j}}x\in X_j$. As $u$ and $s^{k_{i,j}}$ commute, we have $c(us^{k_{i,j}},x)=c(s^{k_{i,j}}u,x)$.
Thus $c(u,s^{k_{i,j}}x)c(s^{k_{i,j}},x)=c(s^{k_{i,j}},ux)c(u,x)$, in other words \begin{equation}\label{eq:commute} \alpha_j\beta_s^{k_{i,j}}=\beta_s^{k_{i,j}}\alpha_i. \end{equation} Letting $i=j$, we see that $\alpha_i$ commutes with $\beta_s^{k_{i,j}}$. By the root-conjugation property, it follows that $\alpha_i$ and $\beta_s$ commute. Using Equation~\eqref{eq:commute} again with $i,j$ arbitrary, we see that $\alpha_i=\alpha_j$ whenever both $X_i$ and $X_j$ have positive measure. In other words, the value $c(u,\cdot)$ is almost everywhere constant. \end{proof} In the present paper, Lemma~\ref{lemma:chain-commutative} will be applied to the setting of right-angled Artin groups in the following way. \begin{lemma}\label{lemma:exploting-commuting-gensets} Let $G,H$ be two right-angled Artin groups, with $G$ one-ended. Let $G\curvearrowright X$ and $H\curvearrowright Y$ be two free, ergodic, measure-preserving actions on standard probability spaces, and assume that there is a stable orbit equivalence $f$ between $G\curvearrowright X$ and $H\curvearrowright Y$, with compression constant $\kappa(f)\ge 1$. Assume that $G\curvearrowright X$ is irreducible, and let $S$ be a standard generating set of $G$ (given by an isomorphism to some $G_\Gamma$) such that all elements of $S$ act ergodically on $X$. Let $c:G\times X\to H$ be an SOE cocycle associated to $f$. If $c$ is cohomologous to a cocycle $c'$ for which there exists $s\in S$ such that $c'(s,\cdot)$ is almost everywhere constant, then $\kappa(f)=1$, the cocycle $c$ is cohomologous to a group isomorphism $\alpha:G\to H$, and the actions are conjugate through $\alpha$. \end{lemma} \begin{proof} Right-angled Artin groups have the root-conjugation property, as follows from \cite[Lemma~6.3]{Min}. In addition, the standard generating set $S$ of $G$ is chain-commuting (because $G$ is one-ended, i.e.\ its defining graph $\Gamma$ is connected), and by assumption every element of $S$ acts ergodically on $X$. Lemma~\ref{lemma:chain-commutative} therefore implies that $c$ is cohomologous to a group homomorphism $\alpha:G\to H$. As $G$ is torsion-free, it follows from \cite[Lemma~4.7]{Vae} that $\alpha$ is injective, $\alpha(G)$ has finite index in $H$, the action $H\curvearrowright Y$ is conjugate to the action induced from $G\curvearrowright X$, and $\kappa(f)=\frac{1}{[H:\alpha(G)]}$. As $\kappa(f)\ge 1$, we deduce that $[H:\alpha(G)]=1$ (in particular $\alpha$ is a group isomorphism) and \cite[Lemma~4.7]{Vae} ensures that the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are conjugate through $\alpha$. \end{proof} \section{Recognition lemmas}\label{sec:recognition} \subsection{Review of parabolic supports} The following notion was introduced in \cite[Section~3.3]{HH2}. Let $\mathcal{G}$ be a measured groupoid over a standard finite measure space $X$, equipped with a strict cocycle $\rho:\mathcal{G}\to G$, where $G$ is a right-angled Artin group. Fix an identification $G=G_\Gamma$, and let $\mathbb{P}$ be the set of all parabolic subgroups of $G$ with respect to this identification. Given $P\in\mathbb{P}$, we say that $(\mathcal{G},\rho)$ is \emph{tightly $P$-supported} if \begin{enumerate} \item there exists a conull Borel subset $X^*\subseteq X$ such that $\rho(\mathcal{G}_{|X^*})\subseteq P$, and \item for every parabolic subgroup $Q\subsetneq P$ and every Borel subset $U\subseteq X$ of positive measure, one has $\rho(\mathcal{G}_{|U})\nsubseteq Q$. 
\end{enumerate} A parabolic subgroup $P$ such that $(\mathcal{G},\rho)$ is tightly $P$-supported, if it exists, is unique. By \cite[Lemma~3.7 and Remark~3.9]{HH2}, there always exist a partition $X=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets, and for every $i\in I$, a parabolic subgroup $P_i$, such that $(\mathcal{G}_{|X_i},\rho)$ is tightly $P_i$-supported. The following is a consequence of Lemma~\ref{lemma:normalizer} and \cite[Lemma 3.8 and Remark 3.9]{HH2}. \begin{lemma}\label{lemma:support-invariant-normal} Let $G=G_\Gamma$ be a right-angled Artin group, let $\mathcal{G}$ be a measured groupoid over a standard finite measure space $X$, and let $\rho:\mathcal{G}\to G$ be a strict cocycle. Let $\mathcal{H}$ and $\mathcal{H}'$ be two measured subgroupoids of $\mathcal{G}$. Assume that $(\mathcal{H},\rho)$ is tightly $P$-supported for a parabolic subgroup $P$. Assume also that $\mathcal{H}$ is normalized by $\mathcal{H}'$. Then there exists a conull Borel subset $X^*\subseteq X$ such that $\rho(\mathcal{H}'_{|X^*})\subseteq P\times P^\perp$. \qed \end{lemma} We now establish a lemma which essentially follows from our previous work \cite{HH2}. \begin{lemma}\label{lemma:support-of-normal-amenable} Let $G=G_\Gamma$ be a right-angled Artin group. Let $\mathcal{G}$ be a measured groupoid over a standard finite measure space $X$ and let $\rho:\mathcal{G}\to G$ be a strict cocycle with trivial kernel. Let $\mathcal{H}$ be a measured subgroupoid of $\mathcal{G}$. Assume that $\mathcal{H}$ is everywhere nonamenable and stably normalizes an amenable subgroupoid $\mathcal{A}$. Then there exist a conull Borel subset $X^*\subseteq X$, a partition $X^*=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets of positive measure, and for every $i\in I$, a parabolic subgroup $P_i$ such that \begin{enumerate} \item $\mathcal{A}_{|X_i}\subseteq\rho^{-1}(P_i)_{|X_i}$, \item $\mathcal{H}_{|X_i}\subseteq\rho^{-1}(P_i\times P_i^{\perp})_{|X_i}$, \item $(\mathcal{H}\cap\rho^{-1}(P_i))_{|X_i}$ is amenable, \item $P_i^{\perp}$ is nonabelian. \end{enumerate} \end{lemma} \begin{proof} Let $\mathcal{A}$ be an amenable subgroupoid of $\mathcal{G}$ which is stably normalized by $\mathcal{H}$. Consider a partition $X=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets such that for every $i\in I$, there exists a parabolic subgroup $P_i$ such that $(\mathcal{A}_{|X_i},\rho)$ is tightly $P_i$-supported. As $\mathcal{A}$ is stably normalized by $\mathcal{H}$, Lemma~\ref{lemma:support-invariant-normal} ensures that up to replacing $X$ by a conull Borel subset and refining the above partition, we can assume that $\mathcal{H}_{|X_i}\subseteq\rho^{-1}(P_i\times P_i^{\perp})_{|X_i}$ for every $i\in I$. As $\mathcal{A}$ is stably normalized by $\mathcal{H}$ and $\mathcal{H}$ is everywhere nonamenable, and as $\rho$ has trivial kernel, it follows from \cite[Lemma~3.10]{HH2} that $P_i^\perp$ is nonamenable (hence nonabelian), and that $(\mathcal{H}\cap\rho^{-1}(P_i))_{|X_i}$ is amenable. \end{proof} \subsection{Recognizing maximal join parabolic subgroupoids} Given an equivalence relation arising from a probability measure-preserving action of a right-angled Artin group, the following lemma will enable us to recognize subrelations arising from restricting the action to a maximal join parabolic subgroup. Its proof is based on the techniques developed in our previous work \cite{HH2}. \begin{lemma}\label{lemma:recognize-product} Let $G$ be a one-ended nonabelian right-angled Artin group.
Let $\mathcal{G}$ be a measured groupoid over a standard finite measure space $X$, coming with a strict action-type cocycle $\rho:\mathcal{G}\to G$. Let $\mathcal{H}$ be a measured subgroupoid of $\mathcal{G}$. Then the following assertions are equivalent. \begin{enumerate} \item There exist a conull Borel subset $X^*\subseteq X$ and a partition $X^*=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets such that for every $i\in I$, there exists a maximal join parabolic subgroup $P_i$ such that $\mathcal{H}_{|X_i}=\rho^{-1}(P_i)_{|X_i}$. \item The following properties hold: \begin{enumerate} \item The subgroupoid $\mathcal{H}$ contains two subgroupoids $\mathcal{A},\mathcal{N}$, where $\mathcal{A}$ is amenable, of infinite type, and stably normalized by $\mathcal{N}$, and $\mathcal{N}$ is everywhere nonamenable and stably normalized by $\mathcal{H}$. \item Whenever $\mathcal{H}'$ is another measured subgroupoid of $\mathcal{G}$ satisfying property~(a), if $\mathcal{H}$ is stably contained in $\mathcal{H}'$, then they are stably equal. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} In this proof, we fix an identification between $G$ and $G_\Gamma$; parabolic subgroups of $G$ are understood with respect to this identification. We first prove that $(1)$ implies $(2a)$. For every $i\in I$, the group $P_i$ is nonabelian (Lemma~\ref{lemma:maximal-not-abelian}); hence $P_i$ splits as a direct product $P_i=M_i\times N_i$, where $M_i$ and $N_i$ are infinite parabolic subgroups, and at least one of them (say $N_i$) is nonabelian, and therefore contains a nonabelian free subgroup. Choose an infinite cyclic subgroup $A_i\subseteq M_i$. Then $A_i$ commutes with $N_i$. The conclusion follows by letting $\mathcal{A}$ be a measured subgroupoid of $\mathcal{G}$ such that for every $i\in I$, one has $\mathcal{A}_{|X_i}=\rho^{-1}(A_i)_{|X_i}$, and letting $\mathcal{N}$ be such that for every $i\in I$, one has $\mathcal{N}_{|X_i}=\rho^{-1}(N_i)_{|X_i}$. We now claim that if a measured subgroupoid $\mathcal{H}\subseteq\mathcal{G}$ satisfies $(2a)$, then there exist a conull Borel subset $X^*\subseteq X$, a Borel partition $X^*=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets, and for every $i\in I$, a join parabolic subgroup $P_i\subseteq G$ such that $\mathcal{H}_{|X_i}\subseteq\rho^{-1}(P_i)_{|X_i}$. We now prove this claim; we will then explain in the last two paragraphs of the proof why this claim is enough to establish the lemma, by exploiting the maximality conditions. By Lemma~\ref{lemma:support-of-normal-amenable}, there exist a conull Borel subset $X^*\subseteq X$ and a partition $X^*=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets such that for every $i\in I$, there exists a parabolic subgroup $R_i$ with $R_i^{\perp}$ nonabelian, such that $\rho(\mathcal{A}_{|X_i})\subseteq R_i$ and $\rho(\mathcal{N}_{|X_i})\subseteq R_i\times R_i^\perp$. Notice that $R_i$ is nontrivial because $\mathcal{A}$ is of infinite type and $\rho$ has trivial kernel. In particular $R_i\times R_i^{\perp}$ is a join parabolic subgroup. Up to a further partition, for every $i\in I$, there exists a nontrivial parabolic subgroup $S_i\subseteq R_i\times R_i^{\perp}$ such that $(\mathcal{N}_{|X_i},\rho)$ is tightly $S_i$-supported.
As $\mathcal{N}$ is stably normalized by $\mathcal{H}$, up to a further partition and restriction to a further conull Borel subset of $X$, we can assume that $\rho(\mathcal{H}_{|X_i})\subseteq S_i\times S_i^{\perp}$ by Lemma~\ref{lemma:support-invariant-normal}. Lemma~\ref{lemma:join-parabolic} ensures that $S_i\times S_i^{\perp}$ is a join parabolic subgroup, which proves our claim. We have already proved that $(1)$ implies $(2a)$. To see that $(1)$ implies $(2b)$, let $\mathcal{H}$ be a measured subgroupoid as in $(1)$ (coming with a partition $X^*=\sqcup_{i\in I}X_i$ and maximal join parabolic subgroups $P_i$), and let $\mathcal{H}'$ be as in $(2b)$. The above claim ensures that up to passing to a further conull Borel subset and refining the above partition, we can assume that for every $i\in I$, there exists a join parabolic subgroup $Q_i$ such that $\mathcal{H}'_{|X_i}\subseteq \rho^{-1}(Q_i)_{|X_i}$. As $\mathcal{H}$ is stably contained in $\mathcal{H}'$ and $\rho$ is action-type, we deduce that every element of $P_i$ has a power contained in $Q_i$, and therefore $P_i\subseteq Q_i$ by \cite[Lemma~6.4]{Min}. By maximality of $P_i$, we have $P_i=Q_i$, from which it follows that $\mathcal{H}'$ is stably contained in $\mathcal{H}$, proving $(2b)$. We finally prove that $(2)$ implies $(1)$, so let $\mathcal{H}$ be as in $(2)$. The above claim shows that there exists a Borel partition $X^*=\sqcup_{i\in I}X_i$ of a conull Borel subset into at most countably many subsets such that for every $i\in I$, $\rho(\mathcal{H}_{|X_i})$ is contained in a join parabolic subgroup $P_i$. For every $i\in I$, choose a maximal join parabolic subgroup $\hat P_i$ containing $P_i$, and let $\mathcal{H}'$ be the measured subgroupoid of $\mathcal{G}$ such that $\mathcal{H}'_{|X_i}=\rho^{-1}(\hat P_i)_{|X_i}$ for every $i\in I$. Then $\mathcal{H}'$ satisfies property~$(2a)$ (by the implication $(1)\Rightarrow(2a)$), and $\mathcal{H}$ is stably contained in $\mathcal{H}'$. The maximality assumption $(2b)$ therefore ensures that $\mathcal{H}$ and $\mathcal{H}'$ are stably equal: in other words, after passing to a further conull Borel subset and refining the partition, we have $\mathcal{H}_{|X_i}=\rho^{-1}(\hat P_i)_{|X_i}$ for every $i\in I$, with $\hat P_i$ a maximal join parabolic subgroup, which proves $(1)$. \end{proof} A subgroupoid $\mathcal{H}$ satisfying one of the equivalent conclusions of Lemma~\ref{lemma:recognize-product} will be called a \emph{maximal join subgroupoid} of $\mathcal{G}$ -- Lemma~\ref{lemma:recognize-product} ensures that this notion does not depend on the choice of an action-type cocycle from $\mathcal{G}$ towards a one-ended nonabelian right-angled Artin group. Notice that the partition that arises in the first assertion of Lemma~\ref{lemma:recognize-product} is not unique (for instance, one can always pass to a further partition), so it is not determined by the pair $(\mathcal{H},\rho)$ in any way; but the map sending any point $x\in X_i$ to the parabolic subgroup $P_i$ is entirely determined (up to changing its value on a null set). We call it the \emph{parabolic map} of $(\mathcal{H},\rho)$. We insist that, while being a maximal join subgroupoid is a notion that is independent of the action-type cocycle $\rho$, the parabolic map does depend on $\rho$. \subsection{Recognizing the center of a right-angled Artin group} \begin{lemma}\label{lem:center} Let $G$ be a right-angled Artin group, and let $Z$ be the center of $G$. Let $\mathcal{G}$ be a measured groupoid over a standard finite measure space $X$, coming with a strict action-type cocycle $\rho:\mathcal{G}\to G$. Let $\mathcal{H}\subseteq\mathcal{G}$ be a measured subgroupoid. Then the following statements are equivalent. \begin{enumerate} \item There exist a conull Borel subset $X^*\subseteq X$ and a partition $X^*=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets such that for every $i\in I$, one has $\mathcal{H}_{|X_i}=\rho^{-1}(Z)_{|X_i}$.
\item The following properties hold: \begin{enumerate} \item the subgroupoid $\mathcal{H}$ is amenable and stably normalized by $\mathcal{G}$; \item if $\mathcal{H}'\subseteq\mathcal{G}$ is another measured subgroupoid of $\mathcal{G}$ that satisfies property~(a), and if $\mathcal{H}$ is stably contained in $\mathcal{H}'$, then $\mathcal{H}$ is stably equal to $\mathcal{H}'$. \end{enumerate} \end{enumerate} \end{lemma} A \emph{central subgroupoid} of $\mathcal{G}$ is a subgroupoid $\mathcal{H}$ satisfying one of the equivalent conclusions of Lemma~\ref{lem:center}. In the context of Lemma~\ref{lem:center}, if a central subgroupoid $\mathcal{H}$ is not stably trivial, then $Z$ is infinite. The main point of Lemma~\ref{lem:center} is that the notion of central subgroupoid is independent of the choice of an action-type cocycle from $\mathcal{G}$ towards a right-angled Artin group. \begin{proof} The lemma is clear when $G$ is abelian, so we will assume otherwise. In particular $\mathcal{G}$ is everywhere nonamenable. As usual, we fix an identification between $G$ and $G_\Gamma$; parabolic subgroups are understood with respect to this identification. We first observe that $(1)$ implies $(2a)$. Indeed, if $\mathcal{H}$ is a subgroupoid as in $(1)$, then amenability of $Z$ ensures that $\mathcal{H}$ is amenable (using that $\rho$ has trivial kernel), and the fact that $Z$ is normal in $G$ ensures that $\mathcal{H}$ is stably normalized by $\mathcal{G}$. We now claim that if $\mathcal{H}$ satisfies $(2a)$, then there exists a partition $X^*=\sqcup_{i\in I}X_i$ of a conull Borel subset $X^*\subseteq X$ into at most countably many Borel subsets such that for every $i\in I$, one has $\mathcal{H}_{|X_i}\subseteq\rho^{-1}(Z)_{|X_i}$. Together with the maximality assertion~$(2b)$ and the fact that a subgroupoid as in $(1)$ satisfies $(2a)$, this will show that $(2)\Rightarrow (1)$. This claim will also prove that every subgroupoid as in $(1)$ satisfies the maximality property~$(2b)$, showing that $(1)\Rightarrow (2)$. We are thus left with proving the above claim. By Lemma~\ref{lemma:support-of-normal-amenable}, there exist a conull Borel subset $X^*\subseteq X$, a partition $X^*=\sqcup_{i\in I}X_i$ into at most countably many Borel subsets, and for every $i\in I$, a parabolic subgroup $P_i\subseteq G$ (with respect to the chosen standard generating set), such that \begin{enumerate} \item $\mathcal{H}_{|X_i}\subseteq\rho^{-1}(P_i)_{|X_i}$, \item $\mathcal{G}_{|X_i}\subseteq\rho^{-1}(P_i\times P_i^{\perp})_{|X_i}$, \item $\rho^{-1}(P_i)_{|X_i}$ is amenable. \end{enumerate} As $\rho$ is action-type, the second point implies that every element of $G$ has a power contained in $P_i\times P_i^{\perp}$, which in turn implies that $G=P_i\times P_i^{\perp}$ by \cite[Lemma~6.4]{Min}. As $\rho$ is action-type and $\rho^{-1}(P_i)_{|X_i}$ is amenable, the parabolic subgroup $P_i$ does not contain any nonabelian free subgroup, so it is abelian. These two facts together imply that $P_i\subseteq Z$, and the first point above completes our proof. \end{proof} \begin{cor}\label{cor:centerless} Let $G_1,G_2$ be two right-angled Artin groups. Assume that there exists a measured groupoid $\mathcal{G}$ which admits two action-type cocycles $\rho_1:\mathcal{G}\to G_1$ and $\rho_2:\mathcal{G}\to G_2$. If $G_1$ is centerless, then $G_2$ is centerless. \end{cor} \begin{proof} We prove the contrapositive statement, so assume that the center $Z_2$ of $G_2$ is nontrivial.
Then $\mathcal{Z}=\rho_2^{-1}(Z_2)$ is a subgroupoid of $\mathcal{G}$ of infinite type which satisfies assertion~2 from Lemma~\ref{lem:center} (by using the implication $1\Rightarrow 2$ of that lemma, applied to the cocycle $\rho_2$). Using now the implication $2\Rightarrow 1$ from Lemma~\ref{lem:center}, applied to the cocycle $\rho_1$, we deduce that there exists a positive measure Borel subset $U$ of the base space of $\mathcal{G}$ such that $\rho_1(\mathcal{Z}_{|U})$ is contained in the center $Z_1$ of $G_1$. As $\rho_1$ has trivial kernel and $\mathcal{Z}$ is of infinite type, this implies that $Z_1$ is nontrivial. \end{proof} \subsection{Recognizing commuting centers} \begin{lemma}\label{lemma:recognize-adjacency} Let $G$ be a one-ended nonabelian right-angled Artin group. Let $\mathcal{G}$ be a measured groupoid over a standard probability space $X$, and let $\rho:\mathcal{G}\to G$ be an action-type cocycle. Let $\mathcal{H},\mathcal{H}'$ be two maximal join subgroupoids of $\mathcal{G}$. Let $X^*\subseteq X$ be a conull Borel subset, and $X^*=\sqcup_{i\in I}X_i$ be a partition into at most countably many Borel subsets, such that for every $i\in I$, there exist parabolic subgroups $P_i,P'_i$ of $G$ such that $\mathcal{H}_{|X_i}=\rho^{-1}(P_i)_{|X_i}$ and $\mathcal{H}'_{|X_i}=\rho^{-1}(P'_i)_{|X_i}$. Then for every $i\in I$ such that $X_i$ has positive measure, the following assertions are equivalent. \begin{enumerate} \item The centers of $P_i$ and $P'_i$ commute. \item Given any central subgroupoids $\mathcal{Z}_i\subseteq\mathcal{H}_{|X_i}$ and $\mathcal{Z}'_i\subseteq\mathcal{H}'_{|X_i}$ and any Borel subset $U\subseteq X_i$ of positive measure, there exists a Borel subset $V\subseteq U$ of positive measure such that $\langle (\mathcal{Z}_i)_{|V},(\mathcal{Z}'_i)_{|V}\rangle$ is amenable. \end{enumerate} \end{lemma} In the sequel, when two maximal join subgroupoids of $\mathcal{G}$ satisfy one of the equivalent conditions of Lemma~\ref{lemma:recognize-adjacency} for every $i\in I$, we say that they are \emph{center-commuting} (notice that this notion does not depend on the choice of a partition as in the statement). \begin{proof} Let $i\in I$ be such that $X_i$ has positive measure, and let $C_i,C'_i$ be the respective centers of $P_i,P'_i$. Let $\hat\mathcal{Z}_i=\rho^{-1}(C_i)_{|X_i}$ and $\hat\mathcal{Z}'_i=\rho^{-1}(C'_i)_{|X_i}$. Notice that $\mathcal{H}_{|X_i}$ and $\mathcal{H}'_{|X_i}$ admit action-type cocycles towards $P_i,P'_i$, respectively. Therefore, Lemma~\ref{lem:center} ensures that $\hat\mathcal{Z}_i$ and $\hat\mathcal{Z}'_i$ are central subgroupoids of $\mathcal{H}_{|X_i},\mathcal{H}'_{|X_i}$, respectively, and conversely, every central subgroupoid of $\mathcal{H}_{|X_i},\mathcal{H}'_{|X_i}$ is stably equal to $\hat\mathcal{Z}_i,\hat\mathcal{Z}'_i$, respectively. Assuming that $(1)$ holds, the group $\langle C_i,C'_i\rangle$ is abelian. Let $\mathcal{Z}_i,\mathcal{Z}'_i$ be central subgroupoids of $\mathcal{H}_{|X_i},\mathcal{H}'_{|X_i}$, respectively. Let $U\subseteq X_i$ be a Borel subset of positive measure, and let $V\subseteq U$ be a Borel subset of positive measure such that $(\mathcal{Z}_i)_{|V}=(\hat\mathcal{Z}_i)_{|V}$ and $(\mathcal{Z}'_i)_{|V}=(\hat\mathcal{Z}'_i)_{|V}$. Then $\langle(\mathcal{Z}_i)_{|V},(\mathcal{Z}'_i)_{|V}\rangle\subseteq\rho^{-1}(\langle C_i,C'_i\rangle)_{|V}$ is amenable (as $\rho$ has trivial kernel). It follows that $(2)$ holds.
Assuming that $(1)$ fails, there exist infinite cyclic subgroups $A_i\subseteq C_i$ and $A'_i\subseteq C'_i$ that together generate a rank $2$ free group: this follows for instance from \cite[Theorem~44]{KK}. Let $V\subseteq X_i$ be any Borel subset of positive measure. It follows from \cite[Lemma~3.20]{Kid} that $\langle \rho^{-1}(A_i)_{|V},\rho^{-1}(A'_i)_{|V}\rangle$ is nonamenable. Therefore $\langle (\hat\mathcal{Z}_i)_{|V},(\hat\mathcal{Z}'_i)_{|V}\rangle$ is nonamenable, showing that $(2)$ fails. \end{proof} \section{Strong rigidity} In this section, we prove Theorem~\ref{theointro:strong-rigidity}. As explained in the introduction, the $W^*$-rigidity statement follows from the orbit equivalence rigidity statement via \cite[Theorem~1.2 and Remark~1.3]{PV2}, see the argument in the proof of \cite[Corollary~3.20]{HH2} for details. We will therefore focus on the orbit equivalence rigidity statement. Our proof distinguishes two cases, according to whether $G$ contains a centerless maximal join parabolic subgroup or not. Theorem~\ref{theointro:strong-rigidity} is the combination of Theorems~\ref{theo:join-case} and~\ref{theo:coned-case} below. \subsection{The case where some maximal join parabolic subgroup is centerless} We first prove Theorem~\ref{theointro:strong-rigidity} in the case where $G$ contains a centerless maximal join parabolic subgroup. \begin{theo}\label{theo:join-case} Let $G,H$ be two one-ended right-angled Artin groups. Assume that some maximal join parabolic subgroup of $G$ is centerless. Let $G\curvearrowright X$ and $H\curvearrowright Y$ be two free, irreducible, measure-preserving actions on standard probability spaces. If the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are stably orbit equivalent (through a stable orbit equivalence $f:U\to V$ between positive measure Borel subsets $U\subseteq X$ and $V\subseteq Y$), then $\kappa(f)=1$, any SOE cocycle associated to $f$ is cohomologous to a group isomorphism, and the actions are actually conjugate through a group isomorphism $\alpha:G\to H$. \end{theo} \begin{proof} Throughout the proof, we will always identify $G,H$ with right-angled Artin groups $G_\Gamma,G_\Lambda$, in such a way that through these identifications, all standard generators act ergodically on $X,Y$. All parabolic subgroups will be understood with respect to these identifications. The groupoid $\mathcal{G}=(G\ltimes X)_{|U}$ is naturally isomorphic (via $f$) to $(H\ltimes Y)_{|V}$ (after renormalizing the measures on $U$ and $V$). So $\mathcal{G}$ comes equipped with two action-type cocycles $\rho_G:\mathcal{G}\to G$ and $\rho_H:\mathcal{G}\to H$. Let $P\subseteq G$ be a maximal join parabolic subgroup which is centerless, and let $\mathcal{P}=\rho_G^{-1}(P)$. Then $\mathcal{P}$ satisfies Assertion~2 from Lemma~\ref{lemma:recognize-product}, as follows from applying this lemma to the cocycle $\rho_G$. Using now the implication $2\Rightarrow 1$ from Lemma~\ref{lemma:recognize-product}, applied to the cocycle $\rho_H$, we see that $\mathcal{P}$ also satisfies Assertion~1 with respect to $\rho_H$. In particular, there exist a Borel subset $W\subseteq U$ of positive measure and a maximal join parabolic subgroup $Q\subseteq H$ such that $\mathcal{P}_{|W}=\rho_H^{-1}(Q)_{|f(W)}$. Therefore $\mathcal{P}_{|W}$ comes equipped with two action-type cocycles, towards $P$ and towards $Q$. As $P$ is centerless, Corollary~\ref{cor:centerless} ensures that $Q$ is also centerless. Let $c:G\times X\to H$ be an SOE cocycle associated to $f_{|W}$.
Notice that $c$ is also an SOE cocycle associated to $f$, and any two such cocycles are cohomologous. So if we prove that $c$ is cohomologous to a group isomorphism, then the same is true of any SOE cocycle associated to $f$. Notice also that $\kappa(f_{|W})=\kappa(f)$. The actions $P\curvearrowright X$ and $Q\curvearrowright Y$ are ergodic (by our irreducibility assumption), and the above ensures that for almost every $x\in W$, one has $f((P\cdot x)\cap W)=(Q\cdot f(x))\cap f(W)$. So $c$ is cohomologous to a cocycle $c'$ such that $c'_{|P\times X}$ is an SOE cocycle associated to $f_{|W}$ for the stable orbit equivalence between $P\curvearrowright X$ and $Q\curvearrowright Y$ (in particular $c'(P\times X^*)\subseteq Q$ for some conull Borel subset $X^*\subseteq X$). The groups $P$ and $Q$ are centerless join parabolic subgroups, so they split as direct products $P=P_{1}\times\dots\times P_{k}$ and $Q=Q_1\times\dots\times Q_\ell$ of at least two nonabelian parabolic subgroups that do not admit any nontrivial product decomposition. All subgroups $P_i$ and $Q_j$ belong to Monod and Shalom's class $\mathcal{C}_{\mathrm{reg}}$, so Theorem~\ref{theo:monod-shalom-product} ensures that $\kappa(f_{|W})=1$ (so $\kappa(f)=1$) and that $c'_{|P\times X}$ is cohomologous to a group isomorphism $\alpha:P\to Q$. As $P$ contains a standard generator of $G$, Lemma~\ref{lemma:exploting-commuting-gensets} ensures that $c$ is cohomologous to a group isomorphism $\alpha:G\to H$, and the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are conjugate through $\alpha$. As already mentioned in the previous paragraph, this is enough to conclude. \end{proof} \subsection{The case where every maximal join parabolic subgroup has a center} We now prove Theorem~\ref{theointro:strong-rigidity} when every maximal join parabolic subgroup of $G$ has a nontrivial center. \begin{theo}\label{theo:coned-case} Let $G,H$ be two one-ended centerless right-angled Artin groups. Assume that every maximal join parabolic subgroup of $G$ has a nontrivial center. Let $G\curvearrowright (X,\mu)$ and $H\curvearrowright (Y,\nu)$ be two free, irreducible, measure-preserving actions on standard probability spaces. If the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are stably orbit equivalent (through a stable orbit equivalence $f:U\to V$ between positive measure Borel subsets $U\subseteq X$ and $V\subseteq Y$), then $\kappa(f)=1$, every SOE cocycle associated to $f$ is cohomologous to a group isomorphism, and the actions are actually conjugate through a group isomorphism $\alpha:G\to H$. \end{theo} \begin{proof} As in our previous proof, we will always identify $G,H$ with right-angled Artin groups $G_\Gamma,G_\Lambda$, in such a way that through these identifications, all standard generators act ergodically on $X,Y$. All parabolic subgroups will be understood with respect to these identifications. Up to exchanging the roles of $G$ and $H$, we will assume without loss of generality that $\kappa(f)\ge 1$. Let $\mathcal{G}=(G\ltimes X)_{|U}$, which is naturally isomorphic (through $f$) to $(H\ltimes Y)_{|V}$ (after renormalizing the measures on $U$ and $V$). Then $\mathcal{G}$ comes equipped with two action-type cocycles $\rho_G:\mathcal{G}\to G$ and $\rho_H:\mathcal{G}\to H$. Lemma~\ref{lemma:combinatorics} ensures that there exist two distinct maximal join parabolic subgroups $P_1,P_2\subseteq G$, whose centers $A_1,A_2$ are infinite and commute, and by Lemma~\ref{lemma:commuting-centers} we have $A_1\cap A_2=\{1\}$. 
For every $i\in\{1,2\}$, let $\mathcal{P}_i=\rho_G^{-1}(P_i)$ and $\mathcal{A}_i=\rho_G^{-1}(A_i)$. Then $\mathcal{P}_1$ and $\mathcal{P}_2$ are two maximal join subgroupoids of $\mathcal{G}$ which contain a central subgroupoid of infinite type, and are center-commuting. Using Lemmas~\ref{lemma:recognize-product} and~\ref{lemma:recognize-adjacency}, we can therefore find two distinct maximal join parabolic subgroups $Q_1,Q_2\subseteq H$ with commuting infinite centers $B_1,B_2$, and a Borel subset $W\subseteq U$ of positive measure, such that for every $i\in\{1,2\}$, one has $(\mathcal{P}_i)_{|W}=\rho_H^{-1}(Q_i)_{|f(W)}$. In addition, Lemma~\ref{lem:center} implies that up to replacing $W$ by a further positive measure Borel subset, we can assume that $(\mathcal{A}_i)_{|W}=\rho_H^{-1}(B_i)_{|f(W)}$ for every $i\in\{1,2\}$. Let $c:G\times X\to H$ be an SOE cocycle associated to $f_{|W}$. Up to replacing $X$ by a conull invariant Borel subset, we can (and will) assume that $c$ is chosen so that whenever $(g,x)\in G\times X$ satisfies $x,gx\in W$, then $c(g,x)$ is the unique element $h\in H$ so that $f(gx)=hf(x)$. We will prove that $c$ is cohomologous to a cocycle $c'$ for which there exists a standard generator $s\in G$ such that $c'(s,\cdot)$ is almost everywhere constant. This will be enough to conclude the proof of our theorem in view of Lemma~\ref{lemma:exploting-commuting-gensets} (after observing that $c$ is also an SOE cocycle associated to $f$, and $\kappa(f_{|W})=\kappa(f)$). For $i\in\{1,2\}$, we have $(\mathcal{P}_i)_{|W}=\rho_H^{-1}(Q_i)_{|f(W)}$, and $P_i,Q_i$ act ergodically on $X,Y$ by assumption. So $c$ is cohomologous to a cocycle $c_i$ such that $(c_i)_{|P_i\times X}$ is an SOE cocycle associated to $f_{|W}$ for the stable orbit equivalence between $P_i\curvearrowright X$ and $Q_i\curvearrowright Y$, and such that $c$ and $c_i$ coincide on all pairs $(g,x)$ with $x,gx\in W$. By Lemma~\ref{lem:MS} (applied to the ambient groups $P_i,Q_i$ and to the normal subgroups $A_i,B_i$), for every $i\in\{1,2\}$, up to replacing $c_i$ by a cohomologous cocycle and replacing $X$ by a conull $G$-invariant and $H$-invariant Borel subset, we can assume that the following hold: \begin{enumerate} \item there is a group isomorphism $\bar\alpha_i:P_i/A_i\to Q_i/B_i$ satisfying that for every $g\in P_i$ and every $x\in X$, one has $c_i(g,x)\in\bar\alpha_i(gA_i)$ -- in particular $c_i(A_i\times X)\subseteq B_i$; \item $c_i$ coincides with $c$ on all pairs $(g,x)$ with $x,gx\in W$. \end{enumerate} Let $r_i:Q_i=B_i\times B_i^{\perp}\to B_i^{\perp}$ be the retraction, and let $c'_i:A_i^{\perp}\times X\to B_i^{\perp}$ be the cocycle defined as $c'_i=(r_i\circ c_i)_{|A_i^{\perp}\times X}$. There are isomorphisms $P_i/A_i\to A_i^{\perp}$ and $Q_i/B_i\to B_i^{\perp}$ (coming from choosing the unique lift). Through these identifications $\bar\alpha_i$ yields an isomorphism $\alpha_i:A_i^{\perp}\to B_i^{\perp}$ such that for every $g\in A_i^{\perp}$ and every $x\in X$, one has $c'_i(g,x)=\alpha_i(g)$. Recall from Lemma~\ref{lemma:commuting-centers} that for every $i\in\{1,2\}$, we have $A_{3-i}\subseteq A_i^{\perp}$ and $B_{3-i}\subseteq B_i^{\perp}$. We now prove that for every $i\in\{1,2\}$, the isomorphism $\alpha_i$ restricts to an isomorphism between $A_{3-i}$ and $B_{3-i}$. For ease of notation, we will prove it for $i=2$, the case $i=1$ being symmetric. By Poincaré recurrence, for every $g\in A_1$, there exist an integer $n>0$ and $x\in W$ such that $g^nx\in W$. 
Then $c_2(g^n,x)=c_1(g^n,x)$ (they are both equal to $c(g^n,x)$), and these belong to $B_1$, which is contained in $B_2^{\perp}$. In particular $c_2(g^n,x)=c'_2(g^n,x)$, which in turn equals $\alpha_2(g)^n$ by the above. So $\alpha_2(g)^n\in B_1$, and therefore $\alpha_2(g)\in B_1$ by \cite[Lemma~6.4]{Min}. So $\alpha_2(A_1)\subseteq B_1$. We now show that actually $\alpha_2(A_1)=B_1$. Take $h\in B_1$. Then there exist an integer $m>0$ and $y\in f(W)$ such that $h^my\in f(W)$. As the actions $A_1\curvearrowright X$ and $B_1\curvearrowright Y$ induce (via $f$) the same orbit equivalence relation on $W$, there exists $g\in A_1$ such that $c_1(g,y)=c_2(g,y)=h^m$. As $h^m\in B_1\subseteq B_2^{\perp}$, we have $c_2(g,y)=c'_2(g,y)$, so $\alpha_2(g)=h^m$. As $\alpha_2:A_2^\perp\to B_2^\perp$ is an isomorphism, there also exists $g_0\in A_2^\perp$ such that $\alpha_2(g_0)=h$, hence $g^m_0=g\in A_1$. It follows that $g_0\in A_1$ by \cite[Lemma~6.4]{Min}. This proves that $\alpha_2$ restricts to an isomorphism between $A_1$ and $B_1$, as desired. For every $i\in\{1,2\}$, we can therefore extend $\alpha_i$ to $A_i$ by defining $(\alpha_i)_{|A_i}=(\alpha_{3-i})_{|A_i}$ (in particular $\alpha_1$ and $\alpha_2$ coincide on $\langle A_1,A_2\rangle$). This yields an isomorphism $\alpha_i:P_i\to Q_i$ such that for every $g\in A_i\cup A_i^{\perp}$ and every $x\in X$, one has $c_i(g,x)\in\alpha_i(g)B_i$. Now, using the cocycle relation and the fact that every element of $P_i$ is a product of the form $gh$ with $g\in A_i$ and $h\in A_i^{\perp}$, we see that the above holds for every $g\in P_i$ and almost every $x\in X$. At this point, for every $i\in\{1,2\}$, we have a measurable map $\varphi_i:X\to H$ and a measurable map $\kappa_i:P_i\times X\to B_i$ such that for every $g\in P_i$ and almost every $x\in X$, one has $c(g,x)=\varphi_i(gx)\alpha_i(g)\kappa_i(g,x)\varphi_i(x)^{-1}$. Up to replacing $X$ by a conull $G$-invariant Borel subset, we will assume that these relations hold for every $g\in P_i$ (with $i\in\{1,2\}$) and every $x\in X$. Let $X=\sqcup_{j\in J}X_j$ be a partition into at most countably many Borel subsets such that for every $j\in J$, the maps $\varphi_1,\varphi_2$ have constant values $\gamma_{1,j},\gamma_{2,j}$ in restriction to $X_j$. Let $g\in A_1$. By Poincaré recurrence, for every $j\in J$ such that $X_j$ has positive measure, there exist an integer $k_j>0$ and $x\in X_j$ such that $g^{k_j}x\in X_j$. By observing that $A_1\subseteq P_1\cap P_2$, we can then write $$c(g^{k_j},x)=\gamma_{1,j} \alpha_1(g)^{k_j}\kappa_1(g^{k_j},x)\gamma_{1,j}^{-1}=\gamma_{2,j} \alpha_2(g)^{k_j}\kappa_2(g^{k_j},x)\gamma_{2,j}^{-1}$$ where $\alpha_1(g)^{k_j}=\alpha_2(g)^{k_j}$ belongs to $B_1\subseteq B_2^{\perp}$, and $\kappa_1(g^{k_j},x)\in B_1$ and $\kappa_2(g^{k_j},x)\in B_2$. Let $r_2:H\to B_2$ be the retraction as in Section~\ref{sec:raag}. As $B_1\subseteq B_2^{\perp}$, we have $r_2(B_1)=\{1\}$. By applying $r_2$ to the above equation, we deduce that $\kappa_2(g^{k_j},x)$ is trivial. Now applying $r_1:H\to B_1$ to the above equation, and using the fact that $\alpha_1(g)=\alpha_2(g)$ (and $B_1$ is abelian), we deduce that $\kappa_1(g^{k_j},x)$ is trivial. Therefore $\gamma_{1,j}^{-1}\gamma_{2,j}$ commutes with $\alpha_1(g)^{k_j}$. As $B_1$ is abelian, the centralizer of $\alpha_1(g)^{k_j}$ contains $Q_1=B_1\times B_1^{\perp}$, and therefore it is equal to $Q_1$ by the maximality of $Q_1$ as a join parabolic subgroup.
This proves that for almost every $x\in X$, one has $\varphi_2(x)=\varphi_1(x)\eta_1(x)\mu_1(x)$, where $\eta_1(x)\in B_1$ and $\mu_1(x)\in B_1^{\perp}$. Now, for every $g\in A_1$ and every $x\in X$, one has \begin{displaymath} \begin{array}{rl} c(g,x)&=\varphi_1(gx)\alpha_1(g)\kappa_1(g,x)\varphi_1(x)^{-1}\\ & =\varphi_1(gx)\eta_1(gx)\mu_1(gx)\alpha_1(g)\kappa_2(g,x)\mu_1(x)^{-1}\eta_1(x)^{-1}\varphi_1(x)^{-1}, \end{array} \end{displaymath} \noindent and therefore $$\kappa_1(g,x)=\eta_1(gx)\mu_1(gx)\kappa_2(g,x)\mu_1(x)^{-1}\eta_1(x)^{-1}.$$ Retracting to $B_1$ yields $$\kappa_1(g,x)=\eta_1(gx)\eta_1(x)^{-1},$$ and therefore $$c(g,x)=\varphi_1(gx)\eta_1(gx)\alpha_1(g)\eta_1(x)^{-1}\varphi_1(x)^{-1}.$$ This proves that there exist a measurable map $\psi:X\to H$ and a homomorphism $\alpha_1:A_1\to H$ such that for every $g\in A_1$ and every $x\in X$, one has $c(g,x)=\psi(gx)\alpha_1(g)\psi(x)^{-1}$. As the center $A_1$ of $P_1$ contains a standard generator of $G$, this concludes our proof. \end{proof} \section{Superrigidity} In this section, we derive Theorem~\ref{theointro:superrigidity} from Theorem~\ref{theointro:strong-rigidity}, using general techniques developed in the successive works of Furman \cite{Fur-oe}, Monod and Shalom \cite{MS} and Kida \cite{Kid-oe}. Let $G$ be a countable group, and let $\mathcal{F}$ be a collection of subgroups of $G$. Say that a free, ergodic, measure-preserving action of $G$ on a standard probability space $X$ is \emph{$\mathcal{F}$-ergodic} if every subgroup in $\mathcal{F}$ acts ergodically on $X$. In our setting, an irreducible action of a right-angled Artin group is an action which is $\mathcal{F}$-ergodic, where $\mathcal{F}$ is the collection of all cyclic subgroups associated to a standard generating set. Say that $(G,\mathcal{F})$ is \emph{strongly cocycle-rigid} if given any two stably orbit equivalent $\mathcal{F}$-ergodic free, ergodic, measure-preserving actions $G\curvearrowright X$ and $G\curvearrowright Y$ on standard probability spaces, any SOE cocycle $c:G\times X\to G$ is cohomologous to a group isomorphism $\alpha:G\to G$. Notice that Theorems~\ref{theo:join-case} and~\ref{theo:coned-case} imply that if $G$ is a one-ended centerless right-angled Artin group, and if $\mathcal{F}$ is the set of all cyclic subgroups associated to standard generators of $G$ (under an isomorphism between $G$ and some $G_\Gamma$), then $(G,\mathcal{F})$ is strongly cocycle-rigid. Recall that a free, ergodic, measure-preserving action of a countable group $G$ on a standard probability space $(X,\mu)$ is \emph{mildly mixing} if for every Borel subset $A\subseteq X$, and every sequence $(g_n)_{n\in\mathbb{N}}\in G^{\mathbb{N}}$ made of pairwise distinct elements, either $A$ is null or conull, or else $\liminf_{n\to +\infty}\mu(g_nA\Delta A)>0$. This is equivalent to requiring that for every non-singular properly ergodic action of $G$ on a standard probability measure space $Y$, the diagonal $G$-action on $X\times Y$ is ergodic \cite{SW}. Every mildly mixing $G$-action is $\mathcal{F}$-ergodic, taking for $\mathcal{F}$ the collection of all infinite subgroups of $G$. \begin{theo} \label{theo:conj} Let $G$ be an ICC countable group, and let $\mathcal{F}$ be a collection of infinite subgroups of $G$. Assume that $(G,\mathcal{F})$ is strongly cocycle-rigid. Let $H$ be a countable group. Let $X,Y$ be standard probability spaces, let $G\curvearrowright X$ be an $\mathcal{F}$-ergodic free, ergodic, measure-preserving $G$-action, and let $H\curvearrowright Y$ be a free, measure-preserving, mildly mixing $H$-action.
If the actions $G\curvearrowright X$ and $H\curvearrowright Y$ are stably orbit equivalent, then they are virtually conjugate. \end{theo} \begin{proof} By \cite[Theorem~3.3]{Fur-oe}, there exists a standard measure space $\Sigma$ equipped with a measure-preserving action of $G\times H$ such that the $G$-action on $X$ is isomorphic to the $G$-action on $H\backslash\Sigma$, and the $H$-action on $Y$ is isomorphic to the $H$-action on $G\backslash\Sigma$ (the space $\Sigma$ is a \emph{measure equivalence coupling} between $G$ and $H$ in the sense of \cite[0.5.E]{Gro}). Let $\Omega$ be the self measure equivalence coupling of $G$ defined by $\Omega=\Sigma\times_{H}H\times_{H}\check{\Sigma}$ (see \cite[Section~2]{Fur-me} for definitions). By definition $\Omega$ comes equipped with a measure-preserving action of $G\times G$; for notational simplicity, we will let $G_{\ell}=G\times\{1\}$ and $G_r=\{1\}\times G$. As the $H$-action on $Y$ is mildly mixing, \cite[Lemma~6.5]{MS} ensures that the actions of $G_\ell$ on $G_r\backslash\Omega$, and of $G_r$ on $G_\ell\backslash\Omega$, are ergodic and $\mathcal{F}$-ergodic. In addition, the essential freeness of the $G$-action on $H\backslash\Sigma$ ensures that the actions of $G_\ell$ on $G_r\backslash\Omega$ and of $G_r$ on $G_\ell\backslash\Omega$ are essentially free. We now claim that there exist a Borel map $\Phi:\Omega\to G$ and an automorphism $\rho:G\to G$ such that $\Phi$ is \emph{$\rho$-twisted equivariant}, i.e.\ $\Phi$ is $(G\times G)$-equivariant when $G$ is equipped with the action of $G\times G$ given by $(g_1,g_2)\cdot g=\rho(g_1)gg_2^{-1}$. Let $Z\subseteq\Omega$ be a fundamental domain for the action of $G_r$. By identifying $Z$ with $G_r\backslash\Omega$, we get a free, ergodic, $\mathcal{F}$-ergodic, measure-preserving action of $G_\ell$ on $Z$. It follows from \cite[Lemma~3.2]{Fur-oe} that there exist an SOE cocycle $c:G_\ell\times Z\to G_r$ and a $(G_\ell\times G_r)$-equivariant Borel isomorphism $\Omega\to Z\times G_r$, where the action of $G_\ell\times G_r$ on $Z\times G_r$ is given by $(g_1,g_2)\cdot (z,g)=(g_1z,c(g_1,z)gg_2^{-1})$. As the actions of $G$ on $G_\ell\backslash\Omega$ and $G_r\backslash\Omega$ are $\mathcal{F}$-ergodic, and $(G,\mathcal{F})$ is strongly cocycle-rigid, the cocycle $c$ is cohomologous to a group isomorphism, i.e.\ there exist a group isomorphism $\rho:G_\ell\to G_r$ and a measurable map $\varphi:Z\to G_r$ such that for every $g_1\in G_\ell$ and almost every $z\in Z$, one has $c(g_1,z)=\varphi(g_1z)\rho(g_1)\varphi(z)^{-1}$. We define $\Phi(z,g)=\varphi(z)^{-1}g$. Then the equivariance is verified as follows: $\Phi((g_1,g_2)\cdot (z,g))=\Phi((g_1z,c(g_1,z)gg_2^{-1}))=\varphi(g_1z)^{-1}c(g_1,z)gg_2^{-1}=\rho(g_1)\varphi(z)^{-1}gg_2^{-1}=(g_1,g_2)\cdot \Phi(z,g)$. This proves our claim. We can thus apply \cite[Theorem~6.1]{Kid} (or the reasoning on \cite[pp.~865-867]{MS}) to get a homomorphism $\alpha:H\to G$ with finite kernel and finite-index image, and an almost $(G\times H)$-equivariant Borel map $\Sigma\to G$, where the action of $G\times H$ on $G$ is via $(g,h)\cdot g'=gg'\alpha(h)^{-1}$. The conclusion then follows from \cite[Lemma~4.18]{Fur-survey} (alternatively, see the argument from the proof of \cite[Theorem~1.1]{Kid-oe}). \end{proof} We can now complete the proof of Theorem~\ref{theointro:superrigidity} from the introduction.
\begin{proof}[Proof of Theorem~\ref{theointro:superrigidity}] By definition of irreducibility of the $G$-action on $X$, there exists an isomorphism between a right-angled Artin group $G_\Gamma$ and $G$ such that, letting $\mathcal{F}$ be the set of all cyclic subgroups of $G$ generated by the images (under this identification) of the standard generators of $G_\Gamma$, the $G$-action on $X$ is $\mathcal{F}$-ergodic. Theorems~\ref{theo:join-case} and~\ref{theo:coned-case} ensure that $(G,\mathcal{F})$ is strongly cocycle-rigid. In addition $G$ is ICC (Lemma~\ref{lemma:icc}). So Theorem~\ref{theo:conj} applies and yields the orbit equivalence superrigidity statement. The $W^*$-superrigidity statement follows because $L^\infty(X)\rtimes G$ contains a unique virtual Cartan subalgebra up to unitary conjugacy (by \cite[Theorem~1.2 and Remark~1.3]{PV2}, see also the proof of \cite[Corollary~3.20]{HH2}). \end{proof} \newpage
\bibliographystyle{plainnat} \begin{document} \newcommand{\quotes}[1]{``#1''} \twocolumn[ \aistatstitle{A Deep Generative Model for Fragment-Based Molecule Generation} \aistatsauthor{Marco Podda \And Davide Bacciu \And Alessio Micheli } \aistatsaddress{University of Pisa,\\Largo Bruno Pontecorvo 3,\\56127 Pisa, Italy \And University of Pisa,\\Largo Bruno Pontecorvo 3,\\56127 Pisa, Italy \And University of Pisa,\\Largo Bruno Pontecorvo 3,\\56127 Pisa, Italy} ] \begin{abstract} Molecule generation is a challenging open problem in cheminformatics. Currently, deep generative approaches addressing the challenge belong to two broad categories, differing in how molecules are represented. One approach encodes molecular graphs as strings of text, and learns their corresponding character-based language model. Another, more expressive, approach operates directly on the molecular graph. In this work, we address two limitations of the former: generation of invalid and duplicate molecules. To improve validity rates, we develop a language model for small molecular substructures called fragments, loosely inspired by the well-known paradigm of Fragment-Based Drug Design. In other words, we generate molecules fragment by fragment, instead of atom by atom. To improve uniqueness rates, we present a frequency-based masking strategy that helps generate molecules with infrequent fragments. We show experimentally that our model largely outperforms other language model-based competitors, reaching state-of-the-art performances typical of graph-based approaches. Moreover, generated molecules display molecular properties similar to those in the training sample, even in the absence of explicit task-specific supervision. \end{abstract} \section{\uppercase{Introduction}}\label{sec:introduction} The term \emph{de novo} Drug Design (DD) refers to a collection of techniques for the production of novel chemical compounds, either by \emph{in-vitro} synthesis or in a computer-aided fashion, endowed with desired pharmaceutical properties. Among synthesis-based methodologies of DD, Fragment-Based Drug Design (FBDD) \citep{fbdd, fbdd2} has established itself as an effective alternative to more traditional methods such as High-Throughput Screening (HTS). At the core of FBDD is the notion of fragments, small molecular weight compounds that are easily synthesizable, have high binding affinity and weakly interact with a set of target molecules. Fragments are combined together according to several strategies, producing more complex compounds with enhanced target interactions. In contrast to synthesis-based methods, computational approaches to DD are based on the efficient exploration of the space of molecules, which is an inherently hard problem because of its size (estimated to be on the order of $10^{60}$). Recently, deep generative models of molecules have shown promising results in this challenging task \citep{xu-molecular-generation-review}. Broadly speaking, deep learning models for molecule generation are typically based on an encoder-decoder approach. First, the molecular graph is encoded in a vectorial latent space. Then, a decoding distribution is placed on such latent codes, which is subsequently exploited for efficient sampling. Depending on which input representation of the molecular graph is chosen, we distinguish two broad families of approaches. The first family of models uses a textual representation of the molecular graph, e.g.
the SMILES \citep{smiles} language, where atoms and chemical bonds are represented as characters. From this representation, a character-based language model (LM) \citep{sutskever-char-based-language-models} can be trained using Recurrent Neural Network (RNN) \citep{elman-rnn} architectures. For this reason, we term approaches of this kind \emph{LM-based}. The second family operates directly on the molecular graph, encoding it either sequentially using RNNs, or in a permutation-invariant fashion using Graph Neural Networks (GNN) \citep{gnn-scarselli,gnn-micheli}. We term this family of models \emph{graph-based}. Both approaches have advantages and disadvantages: for example, graph-based models are more expressive in principle, because they act directly on the molecular graph. However, they are hard to train and less efficient to sample from. In contrast, LM-based approaches trade off a less expressive intermediate representation of the molecular graph for efficient training and sampling. Another common issue with LM-based approaches is that they tend to generate a large share of chemically invalid molecules, as well as many duplicates of the most likely molecules. For these reasons, graph-based methods typically hold state-of-the-art performances as regards the production of chemically valid, novel and unique molecules. In this work, we address the two main shortcomings of LM-based generative models. Our first contribution is to counter low validity rates. To this end, we take inspiration from FBDD and develop a fragment-based language model of molecules. In other words, instead of generating a molecule atom by atom, we generate it \emph{fragment by fragment}. Note that since fragments are chemically sound, our approach needs to ensure validity only when connecting a novel fragment; in contrast, character-based LM approaches need to maintain validity after each novel atom is added. Hence, our approach naturally ensures higher validity rates. As a second contribution, we develop a simple strategy that fosters the generation of unique molecules, avoiding duplicates. In our fragment-based framework, the problem of duplicates is a consequence of the distribution of fragments in the data. Roughly speaking, the frequency of fragments follows a power law, with a small number of very frequent fragments as opposed to a large number of infrequent fragments. Thus, we mask infrequent fragments with a token that specifies their frequency. During generation, whenever the masking token is predicted, we sample from the set of fragments that were masked by that token. Our experimental evaluation shows that our model is able to perform on par with state-of-the-art graph-based methods, despite using an inherently less expressive representation of the molecular graph. Moreover, we show that generated compounds display similar structural and chemical features to those in the training sample, even without the support of explicit supervision on such properties. \section{\uppercase{A primer on fragments}} Here, we briefly describe what fragments are and how they are used in the context of DD. Fragments are very small molecular weight compounds, typically composed of $<20$ non-hydrogen atoms. Small size has several advantages: firstly, they are easier to manipulate chemically than larger compounds. Secondly, the chemical space of fragments is narrower than, for example, that of drug-like molecules typically generated from other DD approaches such as HTS. Thus, it is easier to explore and characterize.
Thirdly, the small size makes fragments weakly interact with a broader spectrum of target proteins than larger compounds (higher molecular complexity translates into stronger interaction, albeit not necessarily a beneficial one). A typical FBDD experiment begins with the identification of a suitable collection of fragments, from which a subset with desired interactions with the target (hits) is identified. Subsequently, fragments are optimized into higher affinity compounds that become the starting points (leads) for subsequent drug discovery phases. Optimization is commonly carried out according to three different strategies: \emph{a}) linking, which optimizes a given fragment by connecting it with another fragment; \emph{b}) growing, where the fragment is functionally and structurally enriched to optimize binding site occupation; \emph{c}) merging, which involves combining the structure of two overlapping fragments into a new one with increased affinity. Since its inception in 1996, FBDD has produced two clinically approved drugs, with more than thirty candidates undergoing clinical trials at various stages \citep{fbdd3}. \begin{figure*} \centering \begin{minipage}{0.4\textwidth} \centering \includegraphics[height=223px]{figs/aspirin.eps} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \begin{algorithm}[H] \caption{Fragmentation} \begin{algorithmic}[1] \Require Molecule $M$, Fragment List $F \leftarrow[]$ \Procedure{Fragment}{$M, F$} \State \textbf{declare} Bond $b$ \State $b \leftarrow$ \textsc{GetFirstBRICSBond}$(M)$ \If{\textsc{IsEmpty($b$)}} \State \Return $F$ \Else \State \textbf{declare} Fragment $f$ \State \textbf{declare} Molecule $M'$ \State $f$, $M' \leftarrow$ \textsc{BreakMolAtBond}($M, b$) \State $F \leftarrow$ \textsc{Append}($F$, $f$) \State $M \leftarrow M'$ \State \textsc{Fragment}($M, F$) \EndIf \EndProcedure \end{algorithmic} \label{algo:fragmentation} \end{algorithm} \end{minipage} \caption{Left: a depiction of the fragmentation procedure. The root of the tree is the molecule to be fragmented (aspirin), while the leaves (enclosed by dashed boxes) represent the extracted fragments. At each iteration (level), the molecule atoms are scanned from left to right according to the SMILES ordering, extracting a fragment as soon as a breakable bond is found. The process is repeated until the remaining fragment cannot be split further. To reconstruct a molecule, fragments are reassembled starting from the leaves to the root, right to left. Asterisks denote dummy atoms. The dashed bonds with the green highlight are the ones selected to be broken/joined using BRICS rules. Right: a sketch of the recursive implementation of the fragmentation algorithm.} \label{fig:fragmentation} \end{figure*} \section{\uppercase{Related Works}} In contrast with general-purpose models which use auto-regressive generation to sample novel graphs \citep{you-graphrnn,podda-graph-generation}, molecular generators are usually arranged in an encoder-decoder scheme, coupled with a generative model that is trained to learn the distribution of codes in latent space, either explicitly using variants of Variational Auto-Encoders (VAEs) \citep{vae} or implicitly using Generative Adversarial Networks (GANs) \citep{gan}. Novel molecules can be generated by sampling the latent space, and letting the decoder reconstruct the molecular graph, conditioned on the sampled code. We now adopt the taxonomy of Section~\ref{sec:introduction} and recap approaches belonging to the LM-based as well as graph-based families.
We especially focus on VAE-based models, as they are of direct relevance to this work. \paragraph{Language Model-Based Approaches} A seminal LM-based model for molecular generation is the work of \cite{chemvae}, which is essentially a character-based language model of SMILES strings coupled with a VAE to learn the distribution of the latent space. A first extension to constrain the generation with syntactic rules is proposed in the work of \cite{grammarvae}. The work of \cite{sdvae} extends this approach further, augmenting the VAE generator with a form of syntax-directed translation, thus ensuring that generated molecules are both syntactically valid and semantically reasonable. Notice that our work is LM-based, but differs from existing approaches in that we do not generate a molecule atom by atom, but rather fragment by fragment. \paragraph{Graph-Based Approaches} Early contributions in this line of research were based on encoding the molecular graph with various strategies, and decoding its adjacency matrix directly. For example, \cite{graphvae} use a VAE-based architecture with a GNN encoder. The decoder outputs a probabilistic fully-connected graph, where the presence of an edge is modeled as a Bernoulli process, assuming edge independence. The final graph is sparsified with approximate graph matching. \cite{nevae} propose a different approach. Similarly to the work of \cite{graphvae}, a GNN is used as encoder; however, only node embeddings are mapped to latent space. The decoder works by first sampling a set of atom embeddings and inferring their type from a categorical distribution. Then, bonds between all possible pairings of such atoms are predicted, and their type is inferred from another categorical distribution. The whole architecture is end-to-end trainable. A similar approach is developed in the work of \cite{cgvae}, where a Gated-Graph Neural Network \citep{ggnn} is used as encoder. The decoder first samples a set of nodes, then sequentially adds the edges for each node on the basis of a breadth-first queue. Finally, the model by \cite{jtvae} generates molecules by first sampling a tree structure that specifies how functional pieces of the molecule are connected. Then, it uses the sampled tree to predict the molecular subgraphs corresponding to each tree node. \begin{figure*} \centering \begin{subfigure}[b]{0.67\textwidth} \centering \resizebox{0.99\textwidth}{!}{\input{figs/training.tex}} \caption{Training} \label{fig:model-training} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{0.99\textwidth}{!}{\input{figs/generation.tex}} \caption{Generation} \label{fig:model-generation} \end{subfigure} \caption{The proposed architecture during training (a) and generation (b). The EMBED layer (in green) is the skip-gram embedding matrix of the textual representation of fragments; the GRU layers (in blue) are the recurrent units that encode and decode fragments; the LINEAR + SOFTMAX (in red) layers serve the purpose of projecting the GRU outputs to the space of the vocabulary, and computing the probability of the next fragment, respectively. Dashed lines indicate sampling.} \label{fig:model} \end{figure*} \section{\uppercase{Methods}} At a high level, our approach encompasses three steps: break molecules into sequences of fragments, encode them as SMILES words, and learn their corresponding language model. In this section, we review each step and provide the necessary details on how we operated.
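As a concrete reference for the first of these steps, the fragmentation described in the next subsection can be prototyped with standard cheminformatics tooling. The sketch below relies on RDKit's BRICS utilities and is only indicative: \texttt{BRICSDecompose} returns an unordered set of fragments, whereas our procedure extracts them in a fixed, reversible order.

\begin{verbatim}
from rdkit import Chem
from rdkit.Chem import BRICS

def brics_fragments(smiles):
    # Parse the SMILES string into an RDKit molecule.
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("invalid SMILES: " + smiles)
    # BRICSDecompose yields an unordered collection of fragment
    # SMILES; dummy atoms (*) mark the attachment points.
    return sorted(BRICS.BRICSDecompose(mol))

# Aspirin, as in Figure 1.
print(brics_fragments("CC(=O)Oc1ccccc1C(=O)O"))
\end{verbatim}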
\subsection{Molecule Fragmentation} Given a dataset of molecules, the first step of our approach entails breaking them into an ordered sequence of fragments. To do so, we leverage the Breaking of Retrosynthetically Interesting Chemical Substructures (BRICS) algorithm \citep{brics}, which breaks strategic bonds in a molecule that match a set of chemical reactions. \quotes{Dummy} atoms (with atomic number 0) are attached to each end of the cleavage sites, marking the position where two fragments can be joined together. BRICS cleavage rules are designed to retain molecular components with valuable structural and functional content, e.g. aromatic rings and side-chains, breaking only the single bonds that connect them. Our fragmentation algorithm works by scanning atoms in the order imposed by the SMILES encoding. As soon as a breakable bond (according to the BRICS rules) is encountered during the scan, the molecule is broken in two at that bond, applying a matching chemical reaction. After the cleavage, we collect the leftmost fragment, and repeat the process on the rightmost fragment in a recursive fashion. Note that fragment extraction is ordered from left to right according to the SMILES representation; this makes the process fully reversible, i.e. it is possible to reconstruct the original molecule from a sequence of fragments. In Figure~\ref{fig:fragmentation}, we show a practical example of the fragmentation process and provide a pseudo-code recursive implementation of our algorithm. \subsection{Fragment Embedding} The fragmentation process transforms a dataset of molecules into a dataset of sequences of SMILES-encoded fragments. In analogy with the work of \cite{bowman-sentences-continuous-space}, we view a sequence of fragments as a \quotes{sentence}; therefore, we construct a vocabulary of unique fragment \quotes{words}. We embed each fragment by pushing fragments that occur in similar contexts to be mapped to similar regions in embedding space. More formally, given a sequence $s = ( s_1, s_2, \ldots s_{|s|})$ of SMILES-encoded fragments, we minimize the following objective function: $$\mathcal{L}(s) = - \sum_{i=1}^{|s|}\,\sum_{-w \leq j \leq w,\, j \neq 0} \log P(s_{i+j}\mid s_i),$$ where $w$ is the size of the context window, $s_i$ is the target fragment, and $s_{i+j}$ are context fragments. In this work, $P$ is implemented as a skip-gram model with negative sampling \citep{mikolov-skipgram}. After training the embeddings, each fragment sequence is represented as $x = (x_1, x_2, \ldots, x_{|x|})$, where the generic $x_i$ is a column vector of the skip-gram embedding matrix. \subsection{Training} Similarly to other language models, we adopt an encoder-decoder architecture with a generative model in between the two. Here, we describe the architecture and the training process in detail. \paragraph{Encoder} To encode the sequence of fragments, we use Gated Recurrent Units (GRUs) \citep{gru}. Specifically, we transform each embedding $x_i$ into a hidden representation $h_i = \mathrm{GRU}(x_i, h_{i-1})$ as follows\footnote{We omit bias terms for clarity.}: \begin{equation}\label{eq:gru} \begin{split} r_i &= \mathrm{sigmoid}(W_r x_i + U_r h_{i-1})\\ u_i &= \mathrm{sigmoid}(W_u x_i + U_u h_{i-1})\\ v_i &= \mathrm{tanh}(W_h x_i + U_h(r_i \odot h_{i-1}))\\ h_i &= u_i \odot h_{i-1} + (1-u_i) \odot v_i, \end{split} \end{equation} where $h_0$ is the zero vector.
In the above formula, $r_i$ is a reset gate vector, $u_i$ is an update gate vector; $W$ and $U$ are weight matrices, and $\odot$ denotes element-wise multiplication ($v_i$ is a convenience notation for ease of reading). The hidden representation of the last fragment in the sequence, which we term $h$, is used as latent representation of the entire sequence. The encoder is trained to minimize the following Kullback-Leibler (KL) divergence: $$\mathcal{L}_{\mathrm{enc}}(x) = \mathrm{KL}(\mathcal{N}(\mu, \mathrm{diag}(\sigma^2))\; ||\; \mathcal{N}(0, \mathbb{I})).$$ In this work, $\mu = W_{\mu}h + b_{\mu}$ and $\log(\sigma^2) = W_{\sigma}h + b_{\sigma}$, where $W$ denotes weight matrices, and $b$ denotes bias terms. \paragraph{Decoder} The decoder is a recurrent model with GRU units. Its initial hidden state is obtained by applying the reparameterization trick \citep{vae}, setting $z = h_0 = \mu + \sigma \epsilon$, with $\epsilon \sim \mathcal{N}(0, \mathbb{I})$. Differently from the encoder, the decoder also computes the output probability associated to the next element in the sequence as follows: $$P(x_{i+1}|x_i,h_{i-1}) = \mathrm{softmax}(W_{out} h_i + b_{out}),$$ where $h_i = \mathrm{GRU}(x_i, h_{i-1})$ as in the encoder, the weight matrix $W_{out}$ projects the hidden representation to the space of the vocabulary, and $b_{out}$ is a bias term. During training, we use teacher forcing \citep{williams-teacher-forcing} and feed the ground truth fragment as input for the following step. The decoder is trained to minimize the negative log-likelihood of the fragment sequence: $$\mathcal{L}_{\mathrm{dec}}(x) = - \sum_{i=1}^{|x|} \log P(x_{i+1} \mid x_i, h_{i-1}).$$ Note that this loss corresponds to the Cross-Entropy between the one-hot encoded ground truth sequence and the predicted fragment probabilities for each of its elements, computed as described above. \paragraph{Model Loss} Our language model is trained in an end-to-end fashion on a dataset of fragment sequences $\mathcal{D}$. The overall loss is the sum of the encoder and decoder losses for each fragment sequence: $$\mathcal{L}(\mathcal{D}) = \sum_{x \in \mathcal{D}} \mathcal{L}_{\mathrm{enc}}(x) + \mathcal{L}_{\mathrm{dec}}(x).$$ In analogy with the VAE framework, the decoder loss can be viewed as the reconstruction error of the input sequence, while the encoder loss acts as a regularizer that forces the encoding distribution to be Gaussian. Fig.~\ref{fig:model-training} provides an overview of the architecture. \subsection{Generation} The generative process starts by sampling a latent vector $z \sim \mathcal{N}(0, \mathbb{I})$, which is used as the initial state of the decoder. The first input of the decoder is an \texttt{SOS} token. Tokens and recurrent states are passed through the GRU, linear and softmax layers to produce an output probability for the next fragment. From it, we use a greedy strategy and select the most likely fragment, which becomes the input of the next decoding step. The generative process is interrupted whenever an \texttt{EOS} token is sampled. The resulting fragment sequence is finally reassembled into a molecule. Note that, for a sequence to be decodable, it is necessary that the first and last fragments contain exactly one attachment point (because they connect only to one fragment), whereas intermediate fragments need to have two (because they are connected to the preceding and following fragments). Sequences which do not respect this constraint are rejected.
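A minimal sketch of this decoding loop is given below (in PyTorch, with illustrative module names rather than those of our released code: \texttt{embed}, \texttt{gru} and \texttt{to\_vocab} stand for the embedding, recurrent and output layers of Fig.~\ref{fig:model}, and \texttt{z\_to\_h} for a projection of the latent code onto the initial hidden state); the attachment-point check on the resulting sequence is omitted.

\begin{verbatim}
import torch

@torch.no_grad()
def sample_fragment_sequence(embed, gru, to_vocab, z_to_h,
                             sos_idx, eos_idx, max_len=10):
    # Sample a latent code z ~ N(0, I) and map it to the
    # initial hidden state of the decoder GRU.
    z = torch.randn(1, z_to_h.in_features)
    h = z_to_h(z).view(gru.num_layers, 1, gru.hidden_size)
    x = torch.tensor([[sos_idx]])
    fragments = []
    for _ in range(max_len):
        out, h = gru(embed(x), h)
        # Greedy strategy: pick the most likely next fragment.
        nxt = to_vocab(out[:, -1]).argmax(dim=-1)
        if nxt.item() == eos_idx:
            break
        fragments.append(nxt.item())
        x = nxt.unsqueeze(0)
    return fragments
\end{verbatim}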
Fig.~\ref{fig:model-generation} illustrates the generative process. \subsection{Low-Frequency Masking} To foster molecule diversity, we start from the observation that the distribution of fragments in the data can be roughly approximated by a power law distribution. In fact, there is usually a small number of fragments with very high frequency, as opposed to a very large number of fragments that occur rarely. Hence, infrequent fragments are unlikely to be sampled during generation. To counter this, we develop a strategy which we term Low-Frequency Masking (LFM). During training, we mask fragments with frequency below a certain threshold $k$ with a token composed of their frequency and their number of attachment points. As an example, suppose that fragment \texttt{*Nc1ccc(O*)cc1} occurs 5 times in the dataset, and the threshold is $k=10$. Then, this fragment is masked with the token \texttt{5\_2}, where 5 denotes its frequency, and 2 denotes the number of attachment points. Similarly, fragment \texttt{*C(=O)N1CCN(Cc2ccccc2)CC1} with a frequency of 3 is masked with the token \texttt{3\_1}. In contrast, fragment \texttt{*c1ccccc1OC} with a frequency of 200 is left unmasked, since its frequency is above the threshold. A reverse mapping from the masking tokens to the masked fragments is kept. During sampling, whenever a masking token is generated, we replace it with a fragment drawn with uniform probability from the corresponding set of masked fragments. This strategy serves a double purpose. Firstly, it greatly reduces vocabulary size during training, speeding up the computations. Secondly, it fosters molecule diversity by indirectly boosting the probability of infrequent fragments, and injecting more randomness in the sampling process at the same time. From another point of view, LFM forces the model to generate molecules mostly composed of very frequent fragments, but with infrequent substructures that may vary uniformly from molecule to molecule. \begin{table}[t] \begin{center} \caption{Dataset statistics.}\label{tab:statistics} \scriptsize \begin{tabular}{lcc} \toprule \textbf{} & \textbf{ZINC} & \textbf{PCBA}\\ \midrule Total number of molecules & 249455 & 437929\\ Molecules with no. fragments $\geq 2$ & 227946 & 383790\\ Mean number of fragments & 2.24$\pm$0.45 & 2.25$\pm$0.48\\ Vocabulary size & 168537 & 199835\\ Vocabulary size (LFM) & 21085 & 35949\\ Average number of atoms & 23.52$\pm$4.29 & 26.78$\pm$6.76\\ Average number of bonds & 25.31$\pm$5.07 & 28.98$\pm$7.44\\ Average number of rings & 2.75$\pm$1.00 & 3.16$\pm$1.05\\ \bottomrule \end{tabular} \end{center} \end{table} \section{\uppercase{Experiments}}\label{sec:experiments} In the following, we review our experimental setup, namely how experiments are conceived, which datasets and evaluation metrics were used, which baselines we compare to, as well as details about the hyper-parameters of our model. In our experiments, we try to provide an empirical answer to the following questions: \begin{itemize} \item Q1: is our fragment-based language model able to increase validity rates? \item Q2: is our LFM strategy beneficial to increase uniqueness rates? \end{itemize} To answer the first question, we compare our model against character-based baselines, which generate molecules atom by atom. As regards the second question, we perform an ablation study of performances with and without LFM. We also compare against graph-based approaches, to assess performances in relation to models that use more expressive molecule representations.
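As a concrete reference for the ablation, the masking step described in the previous subsection admits a compact implementation; the sketch below (with illustrative names, not our released code) counts fragment occurrences, replaces rare fragments with \texttt{frequency\_attachment-points} tokens, and keeps the reverse mapping used at sampling time.

\begin{verbatim}
from collections import Counter

def lfm_mask(sequences, k=10):
    # Count how often each fragment occurs in the corpus.
    counts = Counter(f for seq in sequences for f in seq)
    reverse = {}   # masking token -> set of masked fragments
    masked = []
    for seq in sequences:
        out = []
        for frag in seq:
            c = counts[frag]
            if c < k:
                # '*' marks an attachment point in the SMILES
                # encoding of a fragment.
                token = "%d_%d" % (c, frag.count("*"))
                reverse.setdefault(token, set()).add(frag)
                out.append(token)
            else:
                out.append(frag)
        masked.append(out)
    return masked, reverse
\end{verbatim}

At sampling time, a generated masking token is simply replaced by a fragment drawn uniformly from \texttt{reverse[token]}.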
\subsection{Data} We experiment on the ZINC dataset \citep{zinc}, consisting of $\approx$ 250k drug-like compounds. ZINC is a common benchmark for the generative task; as such, it is used to compare against several baselines. To assess the impact of LFM further, in our ablation study we also test our model variants with the PubChem BioAssay (PCBA) dataset \citep{pubchem}, which comprises $\approx$ 440k small molecules. Dataset statistics are presented in Table~\ref{tab:statistics}. \paragraph{Preprocessing} We applied some common preprocessing steps before training. In the PCBA dataset, we found 10822 duplicate or invalid molecules, which were removed. After fragmentation, we discarded molecules composed of $< 2$ fragments. After preprocessing, our training sets consisted of $227946$ (ZINC) and $383790$ (PCBA) molecules. For completeness, we report that we tried to test our model on the QM9 dataset \citep{qm9} as well, but found that approximately 70\% of its molecules are composed of a single fragment, making the assessment poorly informative due to the small resulting sample size. \begin{table*}[h] \begin{center} \caption{Scores obtained by our model against LM-based and graph-based baselines. LFM indicates that the model has been trained with Low-Frequency Masking. Performances of our LFM variant are shown in bold.}\label{tab:results} \footnotesize \begin{tabular}{lccccc} \toprule \textbf{Model} & \textbf{Model Family} & \textbf{Dataset} & \textbf{Valid} & \textbf{Novel} & \textbf{Unique}\\ \midrule ChemVAE & LM & ZINC & 0.170 & 0.980 & 0.310\\ GrammarVAE & LM & ZINC & 0.310 & 1.000 & 0.108\\ SDVAE & LM & ZINC & 0.435 & - & -\\ GraphVAE & Graph & ZINC & 0.140 & 1.000 & 0.316\\ CGVAE & Graph & ZINC & 1.000 & 1.000 & 0.998\\ NeVAE & Graph & ZINC & 1.000 & 0.999 & 1.000\\ \midrule Ours & LM & ZINC & 1.000 & 0.992 & 0.460\\ \textbf{Ours (LFM)} & LM & ZINC & \textbf{1.000} & \textbf{0.995} & \textbf{0.998}\\ \midrule Ours & LM & PCBA & 1.000 & 0.981 & 0.108\\ \textbf{Ours (LFM)} & LM & PCBA & \textbf{1.000} & \textbf{0.991} & \textbf{0.972}\\ \bottomrule \end{tabular} \end{center} \end{table*} \subsection{Performance Metrics} Following standard practice in the evaluation of molecular generators, we compare our model with the baselines on the following performance metrics: \begin{itemize} \item \emph{validity rate}, the ratio of generated molecules that decode to valid SMILES strings, out of the total number of generated molecules; \item \emph{novelty rate}, the ratio of valid generated molecules which do not appear in the training set; \item \emph{uniqueness rate}, the ratio of unique molecules (not duplicated) out of the total number of valid generated molecules. \end{itemize} \begin{figure*}[h] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=.95\textwidth]{figs/zinc_props} \caption{ZINC} \label{fig:zinc-props} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=.95\textwidth]{figs/pcba_props} \caption{PCBA} \label{fig:pcba-props} \end{subfigure} \caption{Plot of the distributions of structural features (top row) and molecular properties (bottom row) of compounds in the ZINC (a) and PCBA (b) datasets, compared against the 20k compounds sampled from our model.} \label{fig:generated-props} \end{figure*} \subsection{Baselines} We compare to baselines from the literature, representing the two families of generative models described in Section~\ref{sec:introduction}.
As regards LM-based approaches, we consider ChemVAE \citep{chemvae}, GrammarVAE \citep{grammarvae} and SDVAE \citep{sdvae}, whereas as regards graph-based models, we compare against GraphVAE \citep{graphvae}, CGVAE \citep{cgvae} and NeVAE \citep{nevae}. \subsection{Hyper-Parameters} We evaluate our model using the same hyper-parameters for both variants, in order to isolate the effect of our contribution from improvements due to hyper-parameter tuning. We set the embedding dimension to 64, the number of recurrent layers to 2, the number of GRU units per layer to 128 and the latent space size to 100. We used the Adam optimizer with an initial learning rate of 0.00001, annealed every epoch by a multiplicative factor of 0.9, a batch size of 128, and a dropout rate of 0.3 applied to the recurrent layers to prevent overfitting. Training required only 4 epochs: after that, we found empirically that the model started to severely overfit the training set. We used $k=10$ as the LFM threshold. The stopping criterion for training is the following: after each epoch, we sample 1000 molecules and measure validity, novelty and uniqueness rates of the sample, stopping whenever the uniqueness rate starts to drop (we found empirically that samples were stable in terms of validity and novelty rates). After training, we sample 20k molecules for evaluation. We publicly release code and samples for reproducibility\footnote{\scriptsize{\texttt{\url{https://github.com/marcopodda/fragment-based-dgm}}}}. Baseline results are taken from the literature\footnote{We found no results in the literature for the PCBA dataset.}. \section{\uppercase{Results}} The main results of our experiments are summarized in Table~\ref{tab:results}, and provide the answers to the experimental questions posed in Section~\ref{sec:experiments}. As regards Q1, we observe that our model achieves perfect validity scores on the ZINC data, greatly outperforming LM-based models and performing on par with the state of the art. This is true also as regards the PCBA dataset. Since both our variants improve over the LM-based competitors, it is safe to argue that our fragment-based approach can effectively increase validity rates. As regards Q2, we observe an improvement in uniqueness by both our variants, with respect to the LM-based competitors. However, the improvement is noticeably higher whenever the LFM strategy is employed. In the PCBA dataset, this trend is even more pronounced. Compared to graph-based models, we see that the model with LFM is competitive with the state of the art. Lastly, we notice that using LFM yields a small improvement in novelty with respect to the vanilla variant. \begin{figure*} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=.8\textwidth]{figs/generated} \caption{ZINC} \label{fig:zinc-samples} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=.8\textwidth]{figs/samples} \caption{Generated} \label{fig:generated-samples} \end{subfigure} \caption{A random sample of 30 molecules taken from the ZINC dataset (a) and generated by our model (b).} \label{fig:samples} \end{figure*} \subsection{Molecular Properties of Samples} One essential aspect of evaluating generative models is determining to what extent generated samples resemble the training data.
To this end, we show in Figure~\ref{fig:generated-props} the distribution of several structural features and molecular properties of out-of-dataset samples generated by our model on the ZINC and PCBA datasets, compared to the training sample, after removal of duplicates. Structural features under consideration include atom type counts, bond type counts, and ring type counts (from 3 to 6). As regards molecular properties, we included: \begin{itemize} \item octanol/water Partition coefficient (logP), which measures solubility; \item Quantitative Estimate of Drug-likeness \citep{qed} (QED), which measures drug-likeness; \item Synthetic Accessibility Score \citep{sas} (SAS), which measures ease of synthesis. \end{itemize} Figure~\ref{fig:generated-props} compares our samples with the training compounds as regards the distribution of the structural features and molecular properties listed above. Notice that even without the help of explicit supervision, generated molecules are qualitatively similar to the training data. Figure~\ref{fig:samples} shows two random samples of 30 molecules taken from the ZINC dataset and generated by our model for visual comparison. \subsection{Computational considerations} To generate a molecule with $N$ atoms, LM-based methods require $O(C)$ decoding steps, where $C$ is the number of characters in the corresponding SMILES string. Our model requires $O(F)$ decoding steps during generation, where $F$ is the number of fragments (2-3 on average). In contrast, our model requires a substantially larger vocabulary than most LM-based models; its size, however, can be greatly reduced using LFM (e.g. an $\approx 87\%$ reduction for the ZINC dataset). Graph-based methods sample $N$ node embeddings first, then score $O(N^2)$ node pairs to add connections. Moreover, they usually need to enforce chemical validity through additional edge masking. Without masking, performances drop significantly (e.g. NeVAE validity rates drop to 59\%). Our method does not need to enforce validity. \subsection{Limitations of the current approach} We have shown that the presented model is able to perform on par with the state of the art as regards the molecular generation task. At the same time, we acknowledge that it might not be suitable for tasks like molecule optimization in its current form, as the molecular space spanned using LFM is likely less structured than that of other approaches due to its stochastic component. Given that molecular optimization was outside the scope of this work, we recommend taking this limitation into account when employing our model for that specific task. \section{\uppercase{Conclusions}} In this work, we have tackled two main limitations of LM-based generative models for molecules, namely producing chemically invalid as well as duplicate compounds. As regards the first issue, we introduced the first (to our knowledge) fragment-based language model for molecule generation, which operates at the fragment level, rather than at the atom level. As regards the second issue, we presented a low-frequency masking strategy that fosters molecule diversity. In our experiments, we show that our contributions can increase validity and uniqueness rates of LM-based models up to the state of the art, even though an inherently less expressive representation of the molecule is used. As regards future work, we aim at extending this model to tasks like molecular optimization.
This will require the design of novel strategies to maintain high uniqueness rates, while preserving smoothness in latent space. In addition, we would like to adapt the fragment-based paradigm to graph-based molecular generators. \subsubsection*{Acknowledgements} This work has been supported by the Italian Ministry of Education, University, and Research (MIUR) under project SIR 2014 LIST-IT (grant n. RBSI14STDE).
\section{Introduction} Strain induced self-assembly of three dimensional (3D) islands in heteroepitaxy has been attracting much research interest because of the rich physics involved and their potential applications as quantum dots in optoelectronic devices \cite{Politi2000,Shchukin2003,Berbezier2009}. A widely studied system is Ge deposited on Si(100) substrates with a 4\% lattice misfit. Relatively flat islands in the form of stepped mounds with unfaceted sidewalls called pre-pyramids start to emerge at 3 monolayers (MLs) of Ge coverage \cite{Mo1990,Vailionis2000}. Further deposition leads to pyramids or rectangular-based huts bounded by (105) facet planes. Deposition temperatures lower than 500$^\circ$C generally favor rectangular huts \cite{Kastner1999,Drucker2008} while higher temperature often leads to pyramids \cite{Rastelli2003}. After still further deposition or annealing, pyramids can grow into dome islands bounded mainly by steeper (113) facets \cite{Medeiros1998,Rastelli2005}. (105) facets on pyramids and huts have been found to be extraordinarily stable and atomically flat from first-principles calculations \cite{Fujikawa2002,Montalenti2005,Lu2005,Shklyaev2005}. At low temperature, surface steps on (105) facets are rarely observed \cite{Drucker2008}. They are however present at higher temperature, and their bunching is observed to be important to the morphological evolution \cite{Montalenti2004}. The structures, energies and dynamics of these steps have been studied using first-principles calculations \cite{Montalenti2007}. Also, the edge energies of a (105) faceted ridge have been estimated using molecular dynamics simulations based on empirical potentials \cite{Retford2007}. Large scale simulations of the formation of 3D islands are possible using kinetic Monte Carlo (KMC) methods based on lattice models \cite{Orr1992,Khor2000,Meixner2001,Lam2002,Gray2005,Lung2005, Smereka2006,Lam2008,Smereka2009,Zhu2007}. The simulations are computationally very intensive due to the long-range nature of elastic interactions. Elastic forces can be accounted for accurately and efficiently using advanced algorithms so that simulations in 2D \cite{Lam2002,Gray2005} and 3D \cite{Lung2005,Smereka2006,Lam2008,Smereka2009} with respectively large and moderate system sizes are possible. Using more approximate forms of the elastic interactions, larger systems in 3D can also be studied \cite{Meixner2001,Zhu2007}. KMC studies on strained layers are generally based on square or cubic lattices for simplicity. Strain induced islands or pits are readily generated but their sidewalls are almost vertical \cite{Orr1992,Smereka2006,Smereka2009} or at an inclination of about 45 degrees \cite{Khor2000,Lam2002,Lung2005,Zhu2007} depending on the details of the bond energies or additional constraints used. These inclinations are much steeper than 11$^\circ$ and 26$^\circ$ for the (105) and (113) facets respectively. The realistic facets however are of rather low symmetry and in general are not favored energetically in lattice models. The discrepancy results in strain distributions considerably different from the realistic ones and may lead to qualitatively different growth modes in certain situations. Furthermore, the surface energy of the island sidewalls from existing KMC models is not independently adjustable and there is no simple approach to incorporate, for example, the extraordinary stability of certain facets.
In addition, with only one favored sidewall slope in a given model, only one type of island can be simulated, so that studying the pyramid-to-dome transition, for instance, is impossible. In this work, we extend the conventional ball and spring lattice model for KMC simulation of heteroepitaxial solids in 2D by allowing specific geometrical deformation states of the surface atoms. These deformations phenomenologically represent surface reconstructions on (105) facets. We show computationally that this new multi-state model leads to the formation of faceted islands. Examples of qualitative differences in the growth dynamics between faceted and unfaceted islands are explained. \section{Ball and spring Lattice Model} \label{S:MBE} We first explain the conventional square lattice model of elastic solids in 2D, while further extensions will be introduced in the next section. Every atom is associated with a lattice site and is connected to nearest and next nearest neighbors by elastic springs. Solid-on-solid conditions are assumed. Unless otherwise stated, we follow the model parameters used in Ref. \cite{Lam2002} to approximate the widely studied Ge/Si(001) system. We assume a substrate lattice constant $a_s=2.715$~\AA, so that $a_s^3$ gives the correct atomic volume in crystalline silicon. The lattice misfit $\epsilon=(a_f-a_s)/a_f$ equals 4\%, where $a_f$ is the lattice constant of the film. Nearest and next nearest neighboring atoms are directly connected by elastic springs with force constants $k_N=13.85~\mathrm{eV}/a_s^2$ and $k_{NN}=k_N/2$ respectively. The elastic couplings of adatoms with the rest of the system are weak and are completely neglected for better computational efficiency. In this model, surface steps have a particularly high tendency to bunch together under strain, presumably due to the much weaker entropic surface step repulsion in 2D. We hence forbid double surface steps as well as adjacent single surface steps of the same direction, so that the steepest surface slope allowed is 1/2. The KMC approach simulates the morphological evolution by explicitly considering the diffusion of surface atoms. Every topmost atom $m$ on the film can hop to a nearby site with a hopping rate $\Gamma(m)$ following an Arrhenius form: \begin{equation} \label{rate} \Gamma(m) = {R_0}\exp \left[ -\frac{n_m \gamma - \Delta E_s(m) - E_0}{k_{B}T}\right] \end{equation} where $n_m$ is the number of nearest and next nearest neighbors of atom $m$. We have assumed an identical nearest and next nearest neighbor bond strength $\gamma$. We put $\gamma=0.5$~eV, slightly larger than the value in Ref. \cite{Lam2002}, so that the energy costs of stepped mounds become slightly higher. The energy $\Delta E_s(m)$ is the difference in the strain energy $E_s$ of the whole lattice at mechanical equilibrium with or without the atom $m$. Due to the long-range nature of elastic interactions, its efficient calculation is highly nontrivial and we handle it using a Green's function method together with a super-particle approach explained in Refs. \cite{Lam2002,Lam2008,Lung2007}. In addition, $E_0=3\gamma-0.67$~eV, where $0.67$~eV is the adatom diffusion barrier on the (100) plane. To speed up the simulations, long jumps are allowed so that a hopping atom will jump directly to another random topmost site at most $s_{max}=8$ columns away with equal probability. Then, $R_0=2D_0/(\sigma_s a_s)^2$ with $D_0=3.83\times 10^{13}\mbox{\AA}^2 s^{-1}$ and $\sigma_s^2 = \frac{1}{6}(s_{max}+1)(2s_{max}+1)$.
\section{Multi-state Lattice model with Surface deformation} \label{S:extended} To effectively model (105) facets, which are more precisely (15) surfaces in 2D, we introduce additional degrees of freedom representing local deformations of all topmost atoms. They phenomenologically account for the surface rebonding or reconstruction states on a (105) faceted region \cite{Fujikawa2002}. For efficient computation, these deformations, localized to individual surface atoms, are assumed to be completely independent of the lattice misfit, although correlations between misfit strain and surface reconstruction are known to exist \cite{Montalenti2005,Lu2005,Shklyaev2005}. In the following calculation of the local deformation energies, we hence neglect the lattice misfit and express all lengths in units of the lattice constant. The subsequent calculation of the misfit strain energy term is identical to that outlined in Sec. \ref{S:MBE}. \figtwo{facet.eps}{facet-b.eps}{\label{F:facet} A faceted island from a small scale simulation using the multi-state model (a) and a magnification of part of the surface containing a (105) surface step between the third and the fourth columns (b). Deformed film atoms, undeformed film atoms and all substrate atoms are shaded in red, light blue and dark blue respectively. In (b), the tilt variable $\sigma_i$ is $\frac15$ for all columns, while the extension variable $\kappa_i$ from left to right equals $0, \frac15, \frac25, - \frac15, 0, \frac15, \frac25, -\frac25, - \frac15, 0, \frac15, \frac25, -\frac25, - \frac15$ and 0. } We first show an example of a faceted island from a small scale simulation in Fig. \ref{F:facet}(a). Figure \ref{F:facet}(b) magnifies part of the surface. It shows how the surface deformation smooths out the (100) steps of the original stepped mound and turns the sidewalls into atomically flat effective (105) facets with slopes $\pm 1/5$. An example of a surface step on the (105) surface is also shown and will be explained later. In the absence of deformation, an atom is represented by a unit square. An integer $h_i$ denotes the surface height at column $i$. We assume that a topmost atom on the film surface or in an exposed region of the substrate can be deformed into a trapezoid characterized by two new deformation state variables, namely a tilt variable $\sigma_i$ and an extension variable $\kappa_i$. We put \begin{equation} \sigma_i = 0, ~ \frac15 , \mbox{~ or ~} - \frac15 \end{equation} which gives the slope of the upper surface of the deformed atom. The values $\sigma_i = \pm 1/5$ enable the formation of (105) facets in both directions. As shown in Fig. \ref{F:facet}(b), attaining a flat (105) faceted region further requires a properly coordinated vertical stretching or compression of the topmost atom by $\kappa_i$, which is given by \begin{equation} \kappa_i = \left\{ \begin{array}{ll} 0 & \mbox{for $\sigma_i = 0$}\\ - \frac25, -\frac15, 0, \frac15, \mbox{~ or ~} \frac25 ~~~ & \mbox{for $\sigma_i=\pm \frac15$} \end{array} \right.
\end{equation} The $i$th atomic column hence can be rectangular or trapezoidal, with left and right edges of heights $h_i^a$ and $h_i^b$ given by \begin{eqnarray} h_i^a &=& h_i + \kappa_i - \frac{\sigma_i}2\\ h_i^b &=& h_i + \kappa_i + \frac{\sigma_i}2 \end{eqnarray} A surface step between the $i$th and the $(i+1)$th column has a step height $\delta_i$ defined as \begin{equation} \label{delta} \delta_i = \mid h_{i+1}^a - h_{i}^b \mid \end{equation} For simplicity, we have measured step heights as projected along the lattice axis rather than along the surface normals. Note that single steps on (100) and (105) surfaces have very different heights of 1 and 1/5 respectively in our model. We next explain the energy cost of the local deformation of the surface atoms. Values of the energy parameters to be introduced are chosen phenomenologically to provide morphologies that compare best with observations. As with the original lattice model \cite{Lam2002}, although we believe that our parameters are within physically acceptable ranges, this model, being in 2D, is in general not realistic enough to directly adopt parameters from first-principles studies \cite{Montalenti2005,Lu2005,Shklyaev2005}. Furthermore, we have found from numerous exploratory simulations that only a rather limited and specific range of parameters provides reasonable morphologies under a wide range of relevant growth conditions. The constraints on our parameters hence may also shed light on how the morphologies reveal certain features of the microscopic details of the surface; this will be discussed further below. The hopping rate of a topmost atom $m$ in Eq. (\ref{rate}) is generalized to \begin{equation} \label{rate2} \Gamma(m) = {R_0}\exp \left[ \frac{\Delta E_b(m) + \Delta E_s(m) + E_0'}{k_{B}T}\right] \end{equation} where $E_0' = -\gamma-0.67$~eV. The misfit strain energy term $\Delta E_s(m)$ is defined as before, and its calculation is assumed to be completely independent of the local surface deformation. The surface energy term $\Delta E_b(m)$ denotes the change in the bond energy $E_b$ of the whole surface when the site is occupied versus unoccupied. More precisely, the surface energy is defined relative to that of a flat (100) surface as \begin{eqnarray} \label{Eb} E_b &=& \sum_i \left[ \eta(\sigma_i) + \nu(\sigma_i, \sigma_{i+1}) +\omega( \delta_i , \sigma_i , \sigma_{i+1}) \right] \end{eqnarray} Here, $\eta(\pm 1/5) = 5$~meV is the formation energy per site of the (105) facet and $\eta(0)=0$ for the (100) region. Also, $\nu(\sigma_i,\sigma_{i+1})=0.35$~eV denotes the interface energy at the boundary of a facet where $\sigma_i\neq\sigma_{i+1}$, and it is zero otherwise. It dictates the energy barrier of facet nucleation. If we choose a larger value of $\eta(\pm 1/5)$, the (105) facet can become unstable. A negative value of $\eta(\pm 1/5)$ has been suggested \cite{Shklyaev2005}, corresponding to extremely stable (105) facets. However, this is not acceptable, as island sizes from such simulations are then dominated by $\nu$, which is closely related to the edge energy in Ref. \cite{Shklyaev2005} but is practically independent of the lattice misfit. The last term in Eq. (\ref{Eb}) represents the energy of a surface step. On a (100) region with $\sigma_i= \sigma_{i+1}=0$, it is defined as \begin{equation} \label{omega100} \omega(\delta_i , \sigma_i , \sigma_{i+1}) = \frac{\gamma}{2} \delta_i \end{equation} where the step height $\delta_i$ defined in Eq. (\ref{delta}) is an integer.
This results from simple bond counting, noting that two single steps are created by breaking one nearest neighbor bond of strength $\gamma$. Noting also that a bulk atom has a bond energy of $-4\gamma$, Eqs. (\ref{rate2})-(\ref{omega100}) reduce exactly to Eq. (\ref{rate}), so that the (100) regions in the multi-state model behave identically to the basic model in Sec. \ref{S:MBE}. Outside of a (100) region (i.e. $\sigma_i \neq 0$ or $\sigma_{i+1} \neq 0$) we put \begin{equation} \label{omega105} \omega(\delta_i , \sigma_i , \sigma_{i+1}) = {\beta_{105}} \left( 1+\chi - \chi e^{ 1 - { 5 \delta_i }} \right) + \frac{\gamma}{2} \left( \delta_i - \frac15 \right) \end{equation} for $\delta_i\ge 1/5$, and it is zero otherwise. This expression gives an energy $\beta_{105}$ for a single step with height $\delta_i=1/5$ on a (105) region. It is known that incomplete (105) facets can be practically absent at low temperatures around 450$^\circ$C \cite{Drucker2008} but are observable at 550$^\circ$C \cite{Montalenti2004}. We reproduce this feature in our model by taking a relatively large value of $\beta_{105}=0.3$~eV. From Eq. (\ref{omega105}), the step energy per unit height of a multiple step approaches $\gamma/2$, identical to that of a step on a (100) facet. This also reduces the energy of an adatom on a (105) surface, which is bounded by two unit steps, to a more acceptable but still very large value of 1.3~eV. The parameter $\chi$ determines the energy of multiple steps of intermediate heights. We put $\chi=0.5$, allowing a slight tendency of step bunching \cite{Montalenti2004}. In a KMC simulation using this multi-state model, the atomic hopping events are randomly sampled and simulated according to the rates $\Gamma(m)$ in Eq. (\ref{rate2}). We assume that the deformation state variables $\sigma_i$ and $\kappa_i$ at every column are unchanged after an atomic hop, i.e. the deformation state is attached to the column rather than to the hopping atom. Deposition of an atom also increases the column height by unity without altering the deformation state. After every period $\tau$, the deformation states of a set of columns are updated. Specifically, to facilitate program parallelization, we adopt a sublattice updating scheme in which the deformation states at all odd (even) lattice sites are updated at every odd (even) updating event. When column $i$ is to be updated, the variables $\sigma_i$ and $\kappa_i$ are re-sampled from the allowed set of 11 possible combinations using a heat bath algorithm based on the relative probability $\exp(-E_{b}/k_B T)$. We take $\tau=2/\Gamma_{ad}$, where $\Gamma_{ad}$ is the adatom hopping rate on a (100) surface, easily calculable from Eq. (\ref{rate}). This is the highest possible rate that does not significantly increase the overall execution time of our program. Local changes in the surface reconstruction states are most likely a fast process compared with atomic hopping. We have checked that our deformation state updating rate is indeed sufficiently fast, in that decreasing $\tau$ gives no observable difference in our results. Our model obeys detailed balance, which allows us to confirm the reliability of our software implementation using a Boltzmann distribution test \cite{Lam2008}.
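To fix the ideas, a minimal Python sketch of the surface energy terms and of this heat bath update is given below. The decomposition into a per-column energy is ours: contributions not involving column $i$ cancel in the heat bath weights, so only the $\eta$, $\nu$ and $\omega$ terms of Eqs. (\ref{Eb})-(\ref{omega105}) attached to column $i$ and its two adjacent steps need to be evaluated; periodic boundaries are assumed for brevity.
\begin{verbatim}
import math, random

ETA105, NU, BETA105, CHI, GAMMA = 0.005, 0.35, 0.3, 0.5, 0.5  # eV
KT = 8.617e-5 * (450.0 + 273.15)  # k_B T at 450 C, in eV

# The 11 allowed (sigma_i, kappa_i) combinations of Eqs. (2)-(3)
STATES = [(0.0, 0.0)] + [(s, k) for s in (0.2, -0.2)
                         for k in (-0.4, -0.2, 0.0, 0.2, 0.4)]

def omega(delta, sig_l, sig_r):
    """Step energy of Eqs. (9)-(10)."""
    if sig_l == 0.0 and sig_r == 0.0:
        return 0.5 * GAMMA * delta
    if delta < 0.2:
        return 0.0
    return (BETA105 * (1.0 + CHI - CHI * math.exp(1.0 - 5.0 * delta))
            + 0.5 * GAMMA * (delta - 0.2))

def local_Eb(h, sig, kap, i):
    """Terms of Eq. (8) involving column i and its two adjacent steps."""
    e = 0.0 if sig[i] == 0.0 else ETA105
    n = len(h)
    for j in (i - 1, i):
        jp = (j + 1) % n
        hb = h[j] + kap[j] + 0.5 * sig[j]        # right edge, Eq. (5)
        ha = h[jp] + kap[jp] - 0.5 * sig[jp]     # left edge, Eq. (4)
        e += (NU if sig[j] != sig[jp] else 0.0)
        e += omega(abs(ha - hb), sig[j], sig[jp])
    return e

def heat_bath_update(h, sig, kap, i):
    """Re-sample (sigma_i, kappa_i) with probability exp(-E_b / k_B T)."""
    weights = []
    for s, k in STATES:
        sig[i], kap[i] = s, k
        weights.append(math.exp(-local_Eb(h, sig, kap, i) / KT))
    x, acc = random.random() * sum(weights), 0.0
    for w, (s, k) in zip(weights, STATES):
        acc += w
        if acc >= x:
            sig[i], kap[i] = s, k
            return
\end{verbatim}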
\section{Results} Using both the conventional ball and spring lattice model and the multi-state lattice model with surface deformation explained in Secs. \ref{S:MBE} and \ref{S:extended}, we have simulated the self-assembly of strained islands in 2D. A substrate of size 1024 $\times$ 1024 (width $\times$ depth) is used. We take a temperature of 450$^\circ$C and a deposition rate of 0.1 ML/s. The conventional and the multi-state models lead to islands with unfaceted and faceted sidewalls respectively. For convenience, we refer to them as unfaceted and faceted islands. \figtwo{island-u.eps}{island-f.eps}{\label{F:profile} Snapshots of surfaces showing the development of (a) unfaceted and (b) faceted islands simulated respectively using the conventional model and the multi-state model. (105) faceted regions are shaded in red. Each successive profile is displaced by $+5$ vertically for clarity and corresponds to the deposition of a further 1/4 ML, up to a total of 6 MLs.} Figure~\ref{F:profile}(a) shows the evolution of unfaceted islands from a typical run using the conventional model during deposition of up to 6 MLs of film material onto an initially flat substrate. Unstable shallow stepped mounds develop at a very early stage. After depositing about 2 MLs, some stepped mounds have attained steeper sidewalls and become more stable. At about 4 MLs, they have generally attained the steepest possible slope of 1/2 allowed in our model. As observed in this and other similar runs, there is a rather well defined island nucleation period, and no new island emerges after some larger islands are established. We also observe that some relatively mature islands eventually decay and vanish, indicating a ripening process. The existence of a finite nucleation period followed by ripening is consistent with previous KMC simulations \cite{Zhu2007} as well as continuum simulations \cite{ZhangYW2003,aqua2007}. It may also have some experimental relevance at higher temperature, although the pyramid to dome transition and alloying between the film and substrate atoms \cite{Rastelli2005} add further complications. The analogous evolution of faceted islands simulated using the multi-state model with surface deformation is shown in Fig.~\ref{F:profile}(b). Small, highly unstable (105) faceted regions with deformed surface atoms begin to appear at a coverage of about 0.5 ML. Relatively stable (105) faceted islands emerge at about 1 ML. These islands develop from the larger stepped mounds. Faceted regions nucleate on either side of a mound independently, so that half-faceted asymmetric islands exist during the course of development. Islands also often go through a truncated pyramid stage \cite{Vailionis2000} with unfaceted tops before finally becoming fully developed pyramids. Some faceted islands may occasionally decay partially, or even completely back to unfaceted stepped mounds, but the larger ones are much more stable. On the other hand, some stepped mounds may happen to get faceted at rather small sizes, while slightly larger ones can remain unfaceted for long periods. Therefore, the faceting process in our current model is strongly affected by both the energetics and the kinetics. At this low growth temperature of 450$^\circ$C, surface steps on a (105) facet are rare, as explained in Sec. \ref{S:extended}. Further growth of faceted islands by step flow is hence kinetically limited \cite{Kastner1999,Drucker2008}. It can be observed from Fig.~\ref{F:profile}(b) that island growth rates drop dramatically once the islands become faceted. Their sizes occasionally jump up rapidly only when parts of the sidewalls become temporarily unfaceted due to thermal excitation.
Since developed islands are poor absorbers of newly deposited atoms, new islands continue to nucleate until the substrate is crowded with islands. To our knowledge, kinetically limited growth and continuous island nucleation have not been reported previously in KMC or continuum simulations. More importantly, deposition experiments at 550$^\circ$C do indicate slower growth of mature islands and a continuous island nucleation growth mode \cite{Rastelli2003}. Deposition at lower temperature however leads to huts \cite{Kastner1999,Drucker2008}, which may share some related characteristics but are more complicated. \fig{size.eps}{\label{F:size} Plot of island size against nominal film thickness $h$ for unfaceted (a) and faceted (b) islands.} For a more quantitative analysis, we define an island as a structure in which every constituent column is at least 4 atoms tall. All islands can then be automatically identified. Figure~\ref{F:size} traces, against the nominal film thickness $h$, the size evolution of every island in Fig.~\ref{F:profile} once it has attained a size of at least 150 atoms. Islands from another similar run are also included in Fig.~\ref{F:size}(b) to provide additional examples. From Fig.~\ref{F:size}(a), unfaceted islands beyond a certain size in general grow steadily, each with its own characteristic rate, which is expected to depend mainly on the size of its adatom capture zone. Small islands decay and vanish. In contrast, from Fig.~\ref{F:size}(b), there is in general an initial period of rapid island growth followed by much slower growth after faceting. Once faceted, island sizes remain nearly constant except at occasional jumps associated with the temporary partial decay of the facets described above. \fig{islandnumber.eps}{\label{F:number} Plot of the average number of islands on a substrate 1024 atoms wide against nominal film thickness $h$.} To obtain more statistics, we have repeated each simulation 200 times. Figure~\ref{F:number} plots the average number of islands of size 150 or larger on the 1024-atom-wide substrate used. Smaller islands are excluded because they are highly unstable. The number of unfaceted islands first increases, indicating a period of active nucleation at coverages from about 1 to 2.5 MLs. It then declines, but at a very slow rate, indicating rather inefficient coarsening during growth. In contrast, the number of faceted islands increases steadily for coverages up to about 5 MLs due to continuous nucleation. Beyond 5 MLs, the substrate is crowded with islands and the number of islands saturates. \figtwo{histo-unfaceted.eps}{histo-faceted.eps} {\label{F:histo} Size histograms for unfaceted (a) and faceted (b) islands.} Finally, we histogram the island sizes from all the independent runs. Figure~\ref{F:histo} plots the average number of islands on the substrate against island size. For both models, a peak island size emerges for $h\agt 2.5$ MLs. For unfaceted islands, the histogram broadens significantly during growth due to a wide distribution of growth rates. In contrast, for faceted islands it broadens much more slowly due to the highly kinetically limited growth mode. Nevertheless, the faceted islands do not possess a narrower size distribution relative to the average size. This is because a significant size spread already exists when the islands become faceted, as can be observed in Fig. \ref{F:profile}(b). The continuous nucleation of new islands also broadens the distribution, as the older islands are larger on average.
Another difference between the models is that the peak of the histogram decays monotonically upon deposition for unfaceted islands, while for faceted islands it increases for $2.5 \le h\le 4.5$ MLs due to the continuous nucleation of islands. \section{Discussion} We have generalized a lattice model for strained films to allow for a range of local deformation states of surface atoms representing effective surface reconstructions. These deformations are assumed to be independent of the misfit induced strains for simplicity. Using this multi-state lattice model, we have performed kinetic Monte Carlo simulations in 2D and observed the formation of (105) faceted pyramid islands. The model enables us to simulate faceted island formation in the kinetically limited regime. In this regime, island growth slows down dramatically and becomes intermittent after faceting. The slower growth of the more established islands also leads to a continuous nucleation of islands until the substrate is fully occupied. The width of the island size distribution is dominated both by fluctuations in the island size at the start of faceting and by the diversity in island ages. Stepped mounds from the conventional model instead exhibit a simple nucleation period followed by slow ripening. Additional studies on the growth and annealing of faceted islands under other growth conditions will be reported elsewhere. It is also interesting to further generalize the model to two facet types so as to study the pyramid to dome transition. Generalization to 3D is conceptually straightforward but challenging in practice because of the heavy computational load expected. This work was supported by HK RGC, Grant No. PolyU-5009/06P.
\section{Introduction} The Large Hadron Collider (LHC)~\cite{LHCPAPER} at the European Laboratory for Particle Physics (CERN) in Geneva, Switzerland, is an example of a scientific project whose computing resource requirements are larger than those likely to be provided by a single computer center. Data processing and storage are distributed across the Worldwide LHC Computing Grid (WLCG)~\cite{WLHC}, which uses resources from 160 computer centers in 35 countries. Such computational resources have enabled the CMS~\cite{CMSDET} and ATLAS~\cite{ATLAS} experiments to discover the Higgs boson~\cite{CMSHIGGS, ATLASHIGGS}, for example. The WLCG requires a massive amount of computational resources (250,000 x86 cores in 2012) and, proportionally, energy. In the future, with planned increases to the LHC luminosity~\cite{HLLHC}, the dataset size will increase by 2-3 orders of magnitude, presenting even more challenges in terms of energy consumption. In order to find and develop better solutions for improving energy efficiency in High Energy Physics (HEP) computing, it is important to understand how energy is used by the HEP systems themselves. We describe several tools and techniques that help researchers reach that goal. As energy efficiency becomes a concern, new solutions have been considered for developing energy efficient systems. One potential solution is to replace the traditional Intel x86 architectures with low-power architectures such as ARM. A comparison of the energy efficiency of the ARMv7 and Intel x86 architectures is presented in this article. The experiments use CMS workloads and rely on the techniques and tools described earlier to perform the measurements. This article is structured as follows. Firstly, we describe where energy is consumed in a high-throughput computing (HTC) system and outline some of the tools and techniques available to measure and monitor energy consumption on HTC systems (Section 2). Secondly, we present the results of a comparison between the ARMv7 and Intel Xeon architectures using CMS workloads (Section 3). Finally, we present IgProf, a general purpose, open source application performance profiler, and describe its recently added energy profiling features and 64-bit ARM support. \section{Tools and techniques for energy measurement} When optimizing power usage, there are two granularities at which one can look at a computing system. The coarser granularity takes into account the behavior of the whole node (or some of its passive parts, e.g.\ the transformer) as part of a rack in a datacenter. This is usually investigated when engineering and optimizing computing centers. Alternatively, a more detailed approach is to look into the components which make up the active parts of a node, in particular the CPU and its memory subsystem, since these are responsible for a sizeable fraction of the consumed power. They are also the place where the largest gains in terms of efficiency can be obtained through optimizations in the software. If one is simply interested in the coarse power consumption by node, external probing devices can be used: monitoring interfaces of the rack power distribution units, plug-in meters and non-invasive clamp meters (allowing measurement of the current pulled by the system by induction, without making physical contact with it). They differ mostly in terms of flexibility. Their accuracy is typically a few percent for power, whereas their time resolution is on the order of seconds. This is more than enough to optimize the electrical layout of a datacenter or to provide a baseline for more detailed studies.
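Whatever external device is used, its raw output is essentially a stream of coarse power samples. As a minimal, device-agnostic illustration (the function \texttt{read\_power\_watts} below is a hypothetical stand-in for whichever PDU or plug-in meter API is available), such samples can be integrated into an energy estimate as follows:
\begin{verbatim}
import time

def read_power_watts():
    """Hypothetical placeholder: query the rack PDU or plug-in meter
    for the instantaneous power draw of the node, in watts."""
    raise NotImplementedError

def measure_energy_joules(duration_s, period_s=1.0):
    """Integrate power samples taken every period_s seconds into an
    energy estimate (joules) with the trapezoidal rule; adequate for
    the second-level resolution of external meters."""
    samples, t_end = [], time.time() + duration_s
    while time.time() < t_end:
        samples.append(read_power_watts())
        time.sleep(period_s)
    return sum((a + b) / 2.0 * period_s
               for a, b in zip(samples, samples[1:]))
\end{verbatim}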
An alternative approach takes into account the internal structure of a computing element of an HTC system, as shown in figure~\ref{fig:power-consumption-model}. Nowadays, every board manufacturer provides on-board chips which monitor the energy consumption of different components of the system. These allow fine-grained energy measurements, as it is possible to individually monitor the energy consumption of components such as the CPU, its memory subsystem, and others. An example of such a monitoring chip is the Texas Instruments INA231~\cite{TIINA231} current-shunt and power monitor, which is found on the ARMv7 developer board used in our studies and is quite common in the industry. Compared to external methods, these on-board components provide high accuracy and reasonably high precision measurements (millisecond level). \begin{figure}[tbp] \centering \includegraphics[width=70mm]{img/energy_model.png} \caption{Components that contribute to power consumption in an HTC node} \label{fig:power-consumption-model} \end{figure} A special and slightly different case of these on-board monitors is a technology called Running Average Power Limit (RAPL), provided by Intel beginning with the Sandy Bridge family of processors. Contrary to other solutions, which are implemented as discrete chips, RAPL is embedded in the CPU package itself and provides information on the CPU's own subsystems. In particular, RAPL provides data for three different domains: \textbf{package} (pck), which measures the energy consumed by a whole socket, \textbf{power plane 0} (pp0), which measures the energy consumed by the CPU core(s), and \textbf{dram}, which accounts for the energy consumed by the memory attached to a given socket, therefore excluding the on-core caches~\cite{INTELMAN}. As in the discrete-component case, the timing resolution of the measurements is in the millisecond range~\cite{RAPL1}. This is fine enough to permit exploiting such data to build an energy consumption sampling profiler for applications, similar to how performance sampling profilers work (see section~\ref{sec:sampling}). Finally, in addition to power monitoring of the sockets, RAPL can limit the power consumed by the different domains. This feature, usually referred to as power capping, allows the user to define the average power consumption limit of a domain in a defined time window, and allows more accurate independent measurements of the non-limited components.
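On Linux, the RAPL counters just described are typically exposed through the powercap sysfs interface. The following minimal sketch (the paths assume the standard \texttt{intel\_rapl} kernel driver and may differ between machines) estimates the average package power over a short interval:
\begin{verbatim}
import time

# Package domain under the powercap interface; sub-domains such as
# pp0 (core) and dram appear as intel-rapl:0:0, intel-rapl:0:1, ...
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path=RAPL):
    with open(path) as f:
        return int(f.read())

def average_power_watts(interval_s=1.0):
    """Average package power over interval_s seconds. Note that the
    counter wraps at max_energy_range_uj; a robust implementation
    must detect and correct for the wrap-around."""
    e0, t0 = read_energy_uj(), time.time()
    time.sleep(interval_s)
    e1, t1 = read_energy_uj(), time.time()
    return (e1 - e0) / 1e6 / (t1 - t0)

print("package power: %.2f W" % average_power_watts())
\end{verbatim}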
\section{Power efficiency measurements with x86-64 and ARMv7} In this section, we demonstrate the potential of some of the tools previously described. To that end, we perform several measurements of CERN workloads running on different architectures. The workloads run on the Intel x86-64 architecture, traditionally used in HTC and data centers, and on the 32-bit ARMv7 architecture (for similar studies of 64-bit ARMv8 and the Xeon Phi, please refer to~\cite{ABD2014}). The ARM architecture, initially developed for mobile devices, has been considered~\cite{ACAT13ARM, CHEP13ARMPHI} a potential alternative to Intel in HTC, given its energy-efficient design. We also present a brief comparison between the ARM and Intel architectures from the energy consumption perspective, based on the results obtained. \subsection{Tools and techniques} For the Intel architecture, we used the RAPL technology to perform measurements of the energy consumed by the package, DRAM and cores (figure~\ref{fig:power-consumption-model}). The external measurements for the baseline were performed using a rack PDU, which provides an online API to gather the energy consumed by the system on the rack at a sampling rate of 1 second. For the ARM board, we used the Texas Instruments power monitor chip INA231, which allows reading the energy consumed by the cores and DRAM at microsecond sampling intervals. The chip is embedded in the board by the vendor. For the external measurements, we used an external plug-in power monitor with a computer interface for gathering and storing the results. In both cases we read the data as exposed to the system via the sysfs/devfs knobs. The machine specifications can be seen in figure~\ref{figure:machine-specs}. \begin{figure}[ht!] \centering \includegraphics[width=150mm]{img/table.png} \caption{Machine specifications} \label{figure:machine-specs} \end{figure} \subsection{Experiment setup} The workload used for the experiments was ParFullCMS, a multi-threaded Geant4~\cite{GEANT4} benchmark application which uses a complex CMS geometry for its simulation. Using ParFullCMS, we ran simulation tasks on both the Intel and ARM machines (figure~\ref{fig:parfull-cms-benchmark}). The workflow was run several times with different numbers of threads on each machine. The number of threads in each experiment was chosen according to the number of cores of the machine. \subsection{Analysis} As expected, the ARMv7 architecture shows better results from the energy efficiency perspective than Intel in all the experiments performed. Also as expected, neither architecture performs better when overcommitted (more threads than the physical number of cores). Notice that the ARM results when overcommitted (8 threads) are much worse than the Intel ones. This is due to the relatively modest amount of available DRAM (figure~\ref{figure:machine-specs}), which caused the machine to start swapping, greatly degrading performance. While this was expected, since the ARM system used is just a development board for mobile applications, it is a clear indication that a final assessment of the power efficiency of an architecture requires a full server-grade system in order to make a proper comparison. \begin{figure}[tbp] \centering \includegraphics[width=170mm]{img/results1.png} \caption{Chip monitor and external measurement results. The results are shown as events per number of cores for each machine, together with the absolute number of cores. } \label{fig:parfull-cms-benchmark} \end{figure} \section{Profiling for power efficiency} \label{sec:sampling} The hardware components described above provide measurements that relate to the full set of processes running on the machine. For the simple case where only a single benchmark application is running, these can be used to make comparisons between architectures. A further step is to see whether there is a way to map the energy consumption measurements to functions and methods within an executing process. Such a mapping would allow for optimizations of the software itself. This kind of mapping can be done in two different ways, which we call {\it instrumentation} and {\it sampling profiling}.
In the {\it instrumentation} case, effective readings of the profiled quantities (e.g.\ energy consumption), or of quantities correlated with them (e.g.\ CPU power state transitions), are taken at the beginning and at the end of a profiled task, and the difference between the two is used to estimate the average power consumption over that period of time. By bookkeeping the start and stop values for the monitored tasks, one can get a fairly complete picture of what is happening to the system, provided the measured interval is large compared to the temporal precision of the measurement. This is both to avoid a large error on the average estimation and to reduce the performance overhead due to the measurement itself. {\it Sampling profiling}, on the other hand, takes a different approach: a given quantity is sampled regularly, and at each sample the measured quantity is accumulated until it overflows a user-provided limit. When this happens, the profiler increments a counter for the process/function being executed at that precise moment. Assuming that the distribution of where time is spent in the system is constant over time (which is typically true for large data processing tasks), such a sampling algorithm converges to the actual distribution of the measured quantity. The advantage of this approach is that the fidelity of the measurement depends, to first approximation, only on the number of samples taken, regardless of the error on the profiled quantity. This also allows minimizing the performance overhead by tuning the sampling period to be much larger than the measurement itself.
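A toy version of this accumulate-and-overflow bookkeeping (the structure and names below are ours, not IgProf's) makes the convergence argument easy to see:
\begin{verbatim}
import collections

def sampling_energy_profile(ticks, threshold):
    """ticks yields (function_name, energy_delta) pairs, one per
    sampling period; whenever the accumulated quantity overflows
    threshold, the currently executing function is charged."""
    counts = collections.Counter()
    acc = 0
    for func, delta in ticks:
        acc += delta
        if acc >= threshold:
            counts[func] += acc // threshold
            acc %= threshold
    return counts

# Example with energy deltas in microjoules per 1 ms tick:
print(sampling_energy_profile(
    [("Copy", 900), ("Copy", 1100), ("Add", 2500), ("Scale", 700)],
    threshold=1000))
# Counter({'Copy': 2, 'Add': 2, 'Scale': 1})
\end{verbatim}
Over many ticks the counts become proportional to the energy spent in each function, while the cost of the profiler itself is controlled by the tick period.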
\texttt{IgProf} is a general purpose, open source application performance profiler. It was developed in HEP, but it is capable of profiling all types of software applications. The profiler has been available on the x86 and x86-64 platforms for many years~\cite{igprofchep04, igprof-web}, and recently we have also ported it to ARMv7 and ARMv8. Moreover, we have now added a statistical sampling energy profiling module which provides a function-level energy cost distribution~\cite{weaver12}. The module uses the PAPI library to read energy measurements from the RAPL interface described previously. \begin{figure}[tbp] \begin{center} \includegraphics{img/stream-pp-np.pdf} \end{center} \vspace{-20pt} \caption{The results of performance and energy profiling of the \texttt{STREAM} tool.} \label{fig:stream-pp-np} \end{figure} To illustrate the new module, we use it to profile the memory benchmark \texttt{STREAM}~\cite{stream-web}. Figure~\ref{fig:stream-pp-np} compares the results from performance and energy profiling of the benchmarking tool. The X-axis lists the four main functions contributing to the execution time and energy consumption of the STREAM tool: Add, Copy, Triad and Scale. The left scale of the Y-axis and the perf\_ticks series describe the execution time spent in each function, whereas the right scale of the Y-axis and the nrg\_pkg, nrg\_pp0 and nrg\_pp1 series describe the amount of energy spent in each function. The energy consumption of the processor package domain and of power plane 0 (the CPU cores) follows the time spent in each function, whereas the energy consumption of power plane 1 (the unused GPU) remains fairly constant at zero. As one would expect from a simple single-threaded benchmark, the profiling results show a correlation between the execution time and the energy spent in a function. While the energy profiling module is now fully functional within \texttt{IgProf}, further work needs to be done to tune the measurements and to gain experience with how to use the profiles obtained for more complex applications. \section{Conclusions} Energy efficiency has become a major concern for HTC, given the large amount of computing resources, and thus energy, that recent experiments require. LHC computing is a prime example of the need for energy efficient facilities, given its present requirements and cost constraints. The need for energy efficiency drives an interest in accurately evaluating the different components of an HTC system to understand how and where energy is consumed and to improve the overall efficiency. However, HTC systems are complex and composed of many different components. In this paper we have presented a number of techniques and tools that provide insight into how and where energy is consumed, from different perspectives and at different granularities. In addition, \texttt{IgProf}, an open source profiling tool, has been extended to run on 64-bit ARM and to provide function-level energy profiling capabilities. Using these tools and techniques, we have also reported studies comparing the energy performance of x86-64 and ARMv7 processors, confirming the potential of ARMv7 for efficient HTC systems should server-grade systems be built around such chips. \section*{Acknowledgements} This work was partially supported by the National Science Foundation, under Cooperative Agreement PHY-1120138, and by the U.S. Department of Energy. ARMv8 and energy profiling support in IgProf was also supported by Google Summer of Code (GSoC 2014). \section*{References} \bibliographystyle{unsrt}
\section{Introduction} Scattering problems over a half-space with local perturbations have been widely studied in the fields of radar, sonar, ocean surface detection, medical detection, geophysics, outdoor sound propagation and so on. Such problems are also referred to as cavity scattering problems in the literature; see e.g. \cite{Lidtn, Wood, Amm00, Amm03}, where variational and integral equation methods (see also \cite{Amm00, Amm03}) were adopted to reduce the unbounded physical domain to a truncated computational domain in the time-harmonic regime. In this paper we are concerned with time-dependent scattering problems governed by the wave equation. If the domain of the wave equation is unbounded, one can either use transparent/absorbing boundary conditions to minimize spurious reflections, or absorbing boundary layers, usually referred to as perfectly matched layers (PML), to replace the unbounded physical domain by a truncated computational domain in the numerical simulation. A major challenge is to construct the temporal dependence of the transparent boundary condition \cite{Nedelec} or of the artificially designed absorbing medium (see e.g., \cite{Joly06}) in the PML method. The PML scheme was initially introduced by B\'{e}renger for the 2D and 3D Maxwell equations \cite{Ber94, Ber96}. The basic idea of the PML is to surround the physical computational domain by some {\rm artificial} medium that absorbs outgoing waves effectively. Mathematically, a PML method can be equivalently formulated as a complex stretching of the exterior domain. Such a feature makes PML an effective tool for modeling a variety of wave phenomena \cite{Bram, Chew94, Coll, Tur}. Due to its barely reflective absorption of outgoing waves, PML turns out to be very popular for simulating the propagation of waves in the time domain \cite{hoop, Joly03, Joly12}. For time-harmonic scattering problems, the PML formulation was applied in \cite{Cheny} to locally perturbed rough surface scattering problems. We also refer the readers to \cite{Chen10-fre, Hohage, Lassas} for the analysis of acoustic scattering problems in the whole space, and to \cite{Bram08, Bao05} for electromagnetic scattering problems, where the convergence rate depends exponentially on the absorption parameter and the thickness of the PML layer. In theory it is crucial to investigate the well-posedness, stability and convergence of the PML formulation. This paper is concerned with the mathematical analysis and numerical simulation of the time-dependent acoustic scattering problem in a locally perturbed half space, addressing the following issues: \begin{description} \item[(1)] well-posedness and stability of the time-dependent problem using the Dirichlet-to-Neumann (DtN) operator; \item[(2)] well-posedness and long-time stability of the PML formulation in a truncated domain; \item[(3)] convergence of the solution of the PML formulation to that of the original problem; \item[(4)] numerical tests of the exponential convergence of the PML method. \end{description} To the best of our knowledge, the mathematical investigation of the convergence/error analysis of the PML problem for wave equations is far from complete, in comparison with the vast literature on time-harmonic scattering problems. Existing results mainly concern the well-posedness and stability of the PML problem; see e.g., \cite{appelo06, joly02,joly03,Bramble}, where the absorption parameter was always assumed to be a constant.
Using the Laplace transform and transparent boundary conditions (TBC), the exponential convergence with respect to the thickness and the absorption parameter has been justified in \cite{Chen09,Chen12} for time-dependent acoustic scattering problems in the whole space. The approach of applying the Laplace transform \cite{Chen09,Chen12} was later extended to waveguides \cite{Becache21}, to periodic structures, as well as to electromagnetic scattering problems in the whole space \cite{Wei20}. See also \cite{bao18, li15, wei19} for the analysis of time-dependent fluid-solid interaction problems and electromagnetic scattering problems. Nevertheless, the exponential convergence results in the aforementioned works are not confirmed by numerical examples. On the other hand, as far as we know, a comprehensive analysis is still missing for the PML method applied to the acoustic wave equation in a locally perturbed half space. In this work, a perturbation of the half plane $\{x: x_2>0\}$ can be caused either by a bounded obstacle embedded in the background medium or by a compact change of the unbounded curve $x_2=0$; see the geometry shown in Figure \ref{fig1}. Firstly, we adopt the approach of \cite{Chen09} to prove well-posedness of the scattering problem in proper time-dependent Sobolev spaces by using a well-defined TBC (Dirichlet-to-Neumann operator). We complement the earlier work \cite{Chen09} by describing mapping properties of the DtN operator and by connecting the TBCs defined over a finite and an infinite time period, which seem not to be well addressed in the literature. Motivated by \cite{Chen09}, a circular PML layer with special medium properties will then be defined to truncate the original problem. A first order symmetric hyperbolic system is derived for the truncated PML problem, which is similar to those considered in \cite{Chen09, Chen12, Bao18}. The well-posedness and stability of the truncated PML problem are justified by the Laplace transform and a variational method, together with the energy method of \cite{joly02}. The convergence of the PML scheme is based on the stability estimate of an initial-boundary value problem in the PML layer and on the exponential decay of the PML extension problem, to be proved using modified Bessel functions. Such a technique is also inspired by \cite{Chen09}. This paper is organized as follows. In the subsequent Section 2, we first introduce the mathematical model and rigorously define the transparent boundary condition (TBC), which reformulates the scattering problem as an initial-boundary value problem in a truncated bounded domain. Well-posedness and stability will then be shown in Section 2.2. In Section 3, we derive a PML formulation in the half plane by complex coordinate stretching, inspired by \cite{Chew94, Chen09, Chen12, Petropoulos}, and study the well-posedness and stability of the PML problem. We analyze the exponential convergence of the PML method in the half space in Section 4. In the final Section 5, two numerical examples are reported to show the performance of the PML method. \section{Mathematical formulations} Let $\Gamma_0$ be a local perturbation of the straight line $\{(x_1,0): x_1\in {\mathbb R}\}$ such that $\Gamma_0$ coincides with $x_2=0$ in $|x_1|>R$ for some $R>0$ and that $\Gamma_0$ is a non-selfintersecting $C^2$-smooth curve. Denote by $\Omega\subset {\mathbb R}^2$ the unbounded domain above $\Gamma_0$, which is supposed to be filled by a homogeneous and isotropic medium with unit mass density.
Let $D \subset B_R^+:=\{x\in\Omega: |x|<R\}$ be a bounded domain with Lipschitz boundary $\partial D$ such that the exterior of $D$ is connected; see Figure \ref{fig1}. Physically, the domain $D$ represents a sound-soft obstacle embedded in $\Omega$. Write ${\mathbb R}^2_+=\{x\in {\mathbb R}^2: x_2>0\}$ and $\Gamma_R^+:=\{x\in \Omega: |x|=R\}$. It is obvious that $B_R^+$ is a Lipschitz domain. The time-dependent acoustic scattering problem, with the Dirichlet boundary condition enforced on the obstacle $\partial D$ and on the locally perturbed rough surface $\Gamma_0$, is governed by the initial-boundary value problem of the wave equation \begin{eqnarray} \label{eqs:wave} \left\{\begin{array}{lll} \partial_{t}^2 u(x,t)-\Delta u(x,t)=\partial_t f(x,t) &&\mbox{in}\quad (\Omega\backslash\overline{D})\times (0,T),\\ u(x,t)=0 \quad &&\mbox{on}\quad ( \partial D \cup \Gamma_0)\times (0,T),\\ u(x,0)=\partial_t u(x,0)=0 &&\mbox{in}\quad (\Omega\backslash\overline{D}). \end{array}\right. \end{eqnarray} Here, $T>0$ is an arbitrarily fixed positive number, the function $f$ represents an acoustic source term compactly supported in $B_R^+\backslash\overline{D}$, and $u$ denotes the total field. In the exterior of $B_R^+$, the total field $u=u^{in}+u^{re}+u^{sc}$ is the sum of the incident field $u^{in}$, the reflected field $u^{re}$ corresponding to the unperturbed scattering problem in the homogeneous half space $x_2>0$, and the scattered field $u^{sc}$ caused by $D$ and by the perturbation of the straight line $x_2=0$. The first two components of $u$ are explained as follows. \begin{figure}[h] \centering \includegraphics[scale=0.4]{Fig/pic1.png} \caption{Geometry of the acoustic scattering problem caused by a bounded obstacle $D$ and a locally perturbed curve $\Gamma_0$. } \label{fig1} \end{figure} The incident field $u^{in}$ is generated by the inhomogeneous wave equation in ${\mathbb R}^2$: \begin{eqnarray*} \left\{\begin{array}{lll} \partial_{t}^2 u(x,t)-\Delta u(x,t)=\partial_t f(x,t) &&\mbox{in}\quad {\mathbb R}^2, \; t>0,\\ u(x,0)=\partial_t u(x,0)=0 &&\mbox{in}\quad {\mathbb R}^2. \end{array}\right. \end{eqnarray*} Obviously, the incident field $u^{in}$ takes the explicit form \begin{eqnarray*} u^{in}(x,t)=\int_{{\mathbb R}^2} G(x,t;\,y)*\partial_t f(y,t)\,dy \quad \mbox{in} \quad {\mathbb R}^2\times {\mathbb R}^+, \end{eqnarray*} where $*$ denotes the convolution of $G$ and $\partial_t f$ with respect to the time $t$, and \begin{eqnarray*} G(x,t;\,y):=\frac{H(t-|x-y|)}{2\pi\sqrt{t^2-|x-y|^2}} \end{eqnarray*} is the Green's function of the wave operator $\partial_{t}^2-\Delta$ in the free space ${\mathbb R}^2\times{\mathbb R}$. Note that $H$ is the Heaviside function defined by \begin{eqnarray*} H(t):= \left\{\begin{array}{lll} 0, && t\leq 0, \\ 1, && t > 0. \end{array}\right. \end{eqnarray*}
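For numerical evaluation of $u^{in}$ and $u^{re}$ by quadrature, the kernel $G$ can be coded directly; a minimal Python sketch (our illustration only; the half-plane variant anticipates the image point $y^*$ introduced next) is:
\begin{verbatim}
import math

def G(x, t, y):
    """Causal 2D wave kernel H(t - |x-y|) / (2 pi sqrt(t^2 - |x-y|^2)).
    The singularity at t = |x-y| is integrable but must be handled by
    the quadrature rule."""
    r = math.dist(x, y)
    return 0.0 if t <= r else 1.0 / (2.0 * math.pi * math.sqrt(t*t - r*r))

def G_half_plane(x, t, y):
    """Dirichlet Green's function of the half plane x2 > 0 obtained by
    the method of images: G(x, t; y) - G(x, t; y*), y* = (y1, -y2)."""
    return G(x, t, y) - G(x, t, (y[0], -y[1]))
\end{verbatim}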
The reflected field $u^{re}$, caused by the incident field $u^{in}$ and the Dirichlet curve $x_2=0$, is governed by \begin{eqnarray*} \left\{\begin{array}{lll} \partial_{t}^2 u^{re}(x,t)-\Delta u^{re}(x,t)=0 &&\mbox{in}\quad {\mathbb R}^2_+, \; t>0,\\ u^{re}(x,0)=\partial_t u^{re}(x,0)=0 &&\mbox{in}\quad {\mathbb R}^2_+,\\ u^{re}(x,t)=-u^{in}(x,t) &&\mbox{on}\quad x_2=0, \; t>0. \end{array}\right. \end{eqnarray*} Denote by $y^{*}=(y_1,-y_2)$ the reflection of $y=(y_1,y_2)$ with respect to the straight line $x_2=0$. Since $y^*$ lies in the lower half plane, $G(x,t;\,y^*)$ solves the homogeneous wave equation in ${\mathbb R}^2_+$ and coincides with $G(x,t;\,y)$ on $x_2=0$. By this method of images we obtain the expression of the reflected field $u^{re}$ as \begin{eqnarray*} u^{re}(x,t)&=&-\int_{{\mathbb R}^2}G(x,t;\,y^{*})*\partial_t f(y,t)\,dy \\ &=& -\int_0^t \int_{B_R^+\backslash\overline{D}} G(x,t-\tau;\,y^{*})\, \partial_\tau f(y,\tau) \,dy\, d\tau. \end{eqnarray*} Evidently, the sum $u^{in}+u^{re}$ is the total field of the unperturbed scattering problem corresponding to $u^{in}$ and the Dirichlet curve $x_2=0$. The function $u^{sc}$ consists of the scattered waves from the bounded domain $D$ and from the local perturbation $\{x\in \Gamma_0: x_2\neq0\}$. Throughout this paper, we suppose that for any bounded domain $\Omega_0$, $f\in H^2(0,T;L^2(\Omega_0))$, and that $f|_{t=0}=0$ and $f=\tilde{f}|_{(0,T)}$, where \begin{eqnarray*} \tilde{f}\in H^2(0,\infty;L^2(\Omega_0)),\quad \|\tilde{f}\|_{ H^2(0,\infty;L^2(\Omega_0))}\leq \|f\|_{ H^2(0,T;L^2(\Omega_0))}. \end{eqnarray*} This implies that the source term $\partial_t f$ on the right hand side of \eqref{eqs:wave} belongs to $H^1(0,T;L^2(\Omega\backslash\overline{D}))$. Hence, applying the approach of J.-L. Lions (see \cite[Theorem 8.1, Chapter 3]{LM72} and \cite[Theorem 8.2, Chapter 3]{LM72}), there exists a unique solution $u\in C(0,T; H^1_0(\Omega\backslash \overline {D})) \cap C^1(0,T; L^2(\Omega\backslash \overline{D}))$ to \eqref{eqs:wave}. \subsection{A transparent boundary condition (TBC) on a semi-circle} The aim of this section is to rigorously address the Dirichlet-to-Neumann map for the wave equation \eqref{eqs:wave} in a locally perturbed half-plane. We shall follow the spirit of \cite{Chen09}, which treats a bounded sound-hard obstacle, but complement the definition of the DtN operator given there by describing its mapping properties in time-dependent Sobolev spaces and by connecting the DtN operators defined over a finite and an infinite time period. More precisely, we define the time-domain boundary operator $\mathscr{T}$ by \begin{eqnarray} \label{bc:operator} \mathscr{T}u=\partial_r u \quad \mbox{on} \quad \Gamma_R^+\times(0,T), \end{eqnarray} which is called the TBC. Thus, the time-domain scattering problem (\ref{eqs:wave}) in the unbounded domain over the local rough surface can be reduced to an equivalent initial-boundary value problem in the bounded domain $\Omega_R^+:=B_R^+\backslash\overline{D}$: \begin{eqnarray} \label{eqs:wave-b} \left\{\begin{array}{lll} \partial_{t}^2 u-\Delta u=\partial_t f &&\mbox{in}\quad \Omega_R^+\times (0,T),\\ u=0 \quad &&\mbox{on}\quad (\partial D \cup \Gamma_0) \times (0,T),\\ \partial_r u=\mathscr{T}u \quad &&\mbox{on} \quad \Gamma_R^+\times (0,T),\\ u|_{t=0}=\partial_t u|_{t=0}=0 &&\mbox{in}\quad \Omega_R^+. \end{array}\right. \end{eqnarray} In what follows we derive a representation of the boundary operator $\mathscr{T}$. Let $H^{1/2}_0 (\Gamma_R^+)$, $H^{1/2} (\Gamma_R^+)$, $H^{-1/2} (\Gamma_R^+)$ and $H^{-1/2}_0 (\Gamma_R^+)$ be the Sobolev spaces defined on the open arc $\Gamma_R^+$. Then $H^{1/2}_0 (\Gamma_R^+)$ and $H^{-1/2} (\Gamma_R^+)$, as well as $H^{1/2} (\Gamma_R^+)$ and $H^{-1/2}_0 (\Gamma_R^+)$, are anti-linear dual pairs \cite{Mclean}. For $u\in H^1_0(B_R^+)$, we have $u|_{\Gamma_R^+} \in H^{1/2}_0 (\Gamma_R^+)$.
\noindent Consider an initial-boundary value problem over a finite time period: \begin{eqnarray} \label{eqs:test1} \left\{\begin{array}{lll} \partial_{t}^2 u(x,t)-\Delta u(x,t)=0 &&\mbox{in}\quad (\Omega\backslash \overline{ B_R^+})\times (0,T),\\ u(x,t)=g(x,t) \quad &&\mbox{on}\quad \Gamma_R^+ \times (0,T),\\ u(x,t)=0 \quad &&\mbox{on}\quad (\Gamma_0 \cap \{|x|>R\} ) \times (0,T),\\ u(x,0)=\partial_t u(x,0)=0 &&\mbox{in}\quad \Omega\backslash \overline {B_R^+}, \end{array}\right. \end{eqnarray} where $g\in C(0,T;H^{1/2}_0 (\Gamma_R^+))\cap C^1(0,T; H^{-1/2}(\Gamma_R^+))$ satisfies $g(x,0)=\partial_t g(x,0)=0$. By \cite[Chapter 7]{Func}, there exists a unique solution $u$ to the above equations satisfying \begin{eqnarray*} u\in C(0,T; H^1_{\diamond}(\Omega\backslash \overline {B_R^+})) \cap C^1(0,T; L^2(\Omega\backslash \overline{B_R^+})), \end{eqnarray*} where $H^1_{\diamond}(\Omega\backslash \overline{B_R^+})=\{u\in H^1(\Omega\backslash \overline{B_R^+}): u=0 \; \mbox{on} \; \Gamma_0 \cap \{|x|>R\}\}$. \begin{defn} The DtN operator $\mathscr{T}: C(0,T;H^{1/2}_0 (\Gamma_R^+))\rightarrow C(0,T;H^{-1/2} (\Gamma_R^+))$ over the finite time period $(0,T)$ is defined by \begin{eqnarray*} \mathscr{T} g=\partial_r u \quad \mbox{on} \quad \Gamma_R^+\times(0,T), \end{eqnarray*} where $u\in C(0,T; H^1_{\diamond}(\Omega\backslash \overline {B_R^+})) \cap C^1(0,T; L^2(\Omega\backslash \overline{B_R^+}))$ is the unique solution to (\ref{eqs:test1}). \end{defn} \noindent Consider another initial-boundary value problem, now over an infinite time period: \begin{eqnarray} \label{eqs:test2} \left\{\begin{array}{lll} \partial_{t}^2 w(x,t)-\Delta w(x,t)=0 &&\mbox{in}\quad (\Omega\backslash \overline{ B_R^+})\times (0,\infty),\\ w(x,t)=\tilde g(x,t) \quad &&\mbox{on}\quad \Gamma_R^+ \times (0,\infty),\\ w(x,t)=0 \quad &&\mbox{on}\quad (\Gamma_0 \cap \{|x|>R\} ) \times (0,\infty),\\ w(x,0)=\partial_t w(x,0)=0 &&\mbox{in}\quad \Omega\backslash \overline{B_R^+}, \end{array}\right. \end{eqnarray} with the boundary data satisfying \begin{eqnarray}\label{reg} \tilde{g}\in L^2(0,\infty;H^{1/2}_0 (\Gamma_R^+))\cap H^1(0,\infty; H^{-1/2}(\Gamma_R^+)),\quad \tilde{g}(x,0)=\partial_t \tilde{g}(x,0)=0. \end{eqnarray} \begin{defn} The DtN operator $\tilde{ \mathscr{T}}: L^2(0,\infty;H^{1/2}_0 (\Gamma_R^+))\rightarrow L^2(0,\infty;H^{-1/2} (\Gamma_R^+))$ over the infinite time period $(0,\infty)$ is defined by \begin{eqnarray*} \tilde{\mathscr{T}} \tilde g=\partial_r w \quad \mbox{on} \quad \Gamma_R^+\times(0,\infty), \end{eqnarray*} where $ w\in L^2(0,\infty; H^1_{\diamond}(\Omega\backslash \overline {B_R^+})) \cap H^1(0,\infty; L^2(\Omega\backslash \overline{B_R^+}))$ is the unique solution of (\ref{eqs:test2}). \end{defn} \begin{lem} \label{dtn-t} Let $g$ be the boundary value of the problem (\ref{eqs:test1}) and denote by $\tilde{g}$ its zero extension to $t>T$. Then \begin{eqnarray*} \tilde{\mathscr{T}} \tilde g=\mathscr{T} g \quad \mbox{in} \quad L^2(0,T;H^{-1/2} (\Gamma_R^+)).\end{eqnarray*} \end{lem} \begin{proof} It is obvious that the extension $\tilde{g}$ fulfills the regularity and initial conditions specified in \eqref{reg}. Define $v:=u-w$, where $u$ and $w$ are the unique solutions to \eqref{eqs:test1} and \eqref{eqs:test2}, respectively.
It then follows that \begin{eqnarray} \label{eqs:test3} \left\{\begin{array}{lll} \partial_{t}^2 v(x,t)-\Delta v(x,t)=0 &&\mbox{in}\quad (\Omega\backslash \overline {B_R^+})\times (0,T),\\ v(x,t)=0 \quad &&\mbox{on}\quad \Gamma_R^+ \times (0,T),\\ v(x,t)=0 \quad &&\mbox{on}\quad (\Gamma_0 \cap \{|x|>R\} ) \times (0,T),\\ v(x,0)=\partial_t v(x,0)=0 &&\mbox{in}\quad \Omega\backslash \overline {B_R^+}. \end{array}\right. \end{eqnarray} By uniqueness for the above system (see e.g., \cite{LM72, Func}), we get $v\equiv0$ in $(\Omega\backslash \overline{B_R^+})\times (0,T)$, implying that $\partial_r v=0$ on $\Gamma_R^+ \times (0,T)$. Hence, we obtain $\tilde{\mathscr{T}}\tilde g=\mathscr{T} g$ on $\Gamma_R^+\times (0, T)$. \end{proof} From the proof of Lemma \ref{dtn-t} we conclude that the definition of $\mathscr{T}g$ is independent of the values of $g$ in $t>T$. Below we derive an expression of $\tilde{\mathscr{T}}$ by the Laplace transform. For any $s\in {\mathbb C}$ with $\textnormal{Re}(s)>0$, applying the Laplace transform to (\ref{eqs:test2}) with respect to $t$, we see that $w_L=\mathscr{L}(w)$ satisfies the Helmholtz equation \begin{eqnarray}\label{w1} -\Delta w_L +s^2 w_L=0 \quad \mbox{in}\quad \Omega\backslash \overline{B_R^+}, \end{eqnarray} together with the radiation condition \begin{eqnarray}\label{w2} \sqrt{r}\left(\frac{\partial w_L}{\partial r} +s w_L \right) \rightarrow 0 \quad \mbox{as}\quad r=|x|\rightarrow\infty. \end{eqnarray} Let $\mathscr{G}: H^{1/2}_0(\Gamma_R^+)\rightarrow H^{-1/2}(\Gamma_R^+)$ be the DtN operator in the $s$-domain defined by \begin{eqnarray*} \mathscr{G}\tilde{g}_L=\partial_r w_L \quad \textnormal{on}\quad \Gamma_R^+, \end{eqnarray*} where $w_L$ is the unique solution to \eqref{w1}-\eqref{w2} satisfying the boundary conditions $w_L=\tilde{g}_L$ on $\Gamma_R^+$ and $w_L=0$ on $\Gamma_0\cap \{x: |x|>R\}$. Then it follows that $$\tilde{\mathscr{T}}=\mathscr{L}^{-1}\circ \mathscr{G} \circ \mathscr{L}.$$ Next, we derive a representation of the DtN operator $\mathscr G$. In the polar coordinates $(r,\theta)$, $w_L$ can be expanded into the series (see e.g., \cite{Chen09, Lidtn, Wood}) \begin{eqnarray*} w_L(r, \theta; s) =\sum_{n=1}^{\infty} \frac{K_n(sr)}{K_n(sR)}w^n _L(R,s)\sin{n\theta}, \quad r>R, \quad \theta\in[0,\pi], \end{eqnarray*} where $$w^n _L(R,s)=\frac{2}{\pi} \int_{0}^{\pi} {w_L (R, \theta, s) \sin{n\theta} \, d\theta}= \frac{2}{\pi} \int_{0}^{\pi} {\tilde{g}_L (R, \theta, s) \sin{n\theta} \, d\theta}.$$ Here $K_n(z)$ is the modified Bessel function of the second kind of order $n$. A simple calculation gives \begin{eqnarray} \label{eq:gul} \mathscr{G}w_L(R,\theta,s) = \frac{\partial w_L}{\partial r}\Big|_{\Gamma_R^+}=s\sum_{n=1}^{\infty} \frac{K_n'(sR)}{K_n(sR)}w^n _L(R,s)\sin{n\theta}. \end{eqnarray}
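The coefficients $K_n'(sR)/K_n(sR)$ appearing in (\ref{eq:gul}) are straightforward to evaluate numerically. The short Python sketch below (assuming SciPy's \texttt{kv} routine, which accepts complex arguments) uses the recurrence for $K_n'$ employed in the next proof, and spot-checks the sign property on which Lemma \ref{lem:G} relies:
\begin{verbatim}
from scipy.special import kv  # modified Bessel function K_n

def dtn_symbol(n, s, R):
    """K_n'(sR)/K_n(sR) via K_n'(z) = -K_{n-1}(z) - (n/z) K_n(z)."""
    z = s * R
    return -kv(n - 1, z) / kv(n, z) - n / z

# Re(K_n'(sR)/K_n(sR)) should be <= 0 whenever Re(s) > 0:
worst = max(dtn_symbol(n, s, 1.0).real
            for n in (1, 3, 10, 30)
            for s in (1.0 + 0.0j, 0.5 + 4.0j, 2.0 - 7.0j))
print("max Re K_n'(sR)/K_n(sR) over samples:", worst)
\end{verbatim}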
The DtN operators $\mathscr{G}$ and $\mathscr{T}$ have the following properties. \begin{lem} \label{lem:bd} The operator $\mathscr{G}: H^{1/2}_0(\Gamma_R^+) \rightarrow H^{-1/2}(\Gamma_R^+)$ is bounded. \end{lem} \begin{proof} By the recurrence formula for modified Bessel functions, \begin{eqnarray*} K_n^{'}(z)=-K_{n-1}(z)-\frac{n}{z}K_n(z), \end{eqnarray*} we deduce that \begin{eqnarray*} \Big | \frac{K_n^{'}(sR)}{K_n(sR)} \Big | = \Big | \frac{n}{sR}+\frac{K_{n-1}(sR)}{K_n(sR)} \Big | \leq \frac{n}{|s|R}+1. \end{eqnarray*} Let $|B_n|:=\Big | \frac{K_n^{'}(sR)}{K_n(sR)} \Big |$. Then $|B_n|\leq C\sqrt{n^2+1}$ for some constant $C>0$ depending on $s$ and $R$. Given $\phi \in H^{1/2}_0(\Gamma_R^+)$, we expand \begin{eqnarray*} \phi(R,\theta)=\sum_{n=1}^{\infty}\phi_n(R)\sin{n\theta},\quad \phi_n(R)=\frac{2}{\pi}\int_{0}^{\pi}\phi(R,\theta)\sin n\theta \, d\theta. \end{eqnarray*} By the definition of $\mathscr{G}$, for any $\omega \in H^{1/2}_0(\Gamma_R^+)$ it follows that \begin{align*} \Big|\langle \mathscr{G}(\omega),\,\phi \rangle_{\Gamma_R^+}\Big|&= \Big| \int_{\Gamma_R^+} s \sum_{n=1}^{\infty} \frac{K_n^{'} (sR)}{K_n(sR)}\omega_n(R) \sin n\theta \sum_{n=1}^{\infty} \overline{\phi}_n(R) \sin n\theta\,d \gamma \Big| \\ &=\Big| sR \sum_{n=1}^{\infty} \frac{K_n^{'} (sR)}{K_n(sR)}\omega_n(R) \overline{\phi}_n(R) \int_0^\pi \sin^2 n\theta \, d\theta \Big|\\ &=\Big|\frac{\pi}{2}sR\sum_{n=1}^{\infty} \frac{K_n^{'} (sR)}{K_n(sR)}\omega_n(R) \overline{\phi}_n(R) \Big| \\ &\leq \frac{\pi}{2}|s|R \Big(\sum_{n=1}^{\infty} \Big| \frac{K_n^{'} (sR)}{K_n(sR)} \Big| |\omega_n(R)|^2 \Big)^{1/2} \Big(\sum_{n=1}^{\infty} \Big| \frac{K_n^{'} (sR)}{K_n(sR)} \Big| | \phi_n(R)|^2 \Big)^{1/2}\\ &\leq C\Big( \sum_{n=1}^{\infty} \sqrt{1+n^2} |\omega_n(R)|^2 \Big)^{1/2} \Big( \sum_{n=1}^{\infty} \sqrt{1+n^2} |\phi_n(R)|^2 \Big)^{1/2} \\ &\leq C \| \omega \|_{H^{1/2}_0(\Gamma_R^+)} \, \| \phi \|_{H^{1/2}_0(\Gamma_R^+)}. \end{align*} Then we have \begin{eqnarray*} \|\mathscr G \omega\|_{H^{-1/2}(\Gamma_R^+)} =\sup_{\phi\in H^{1/2}_0(\Gamma_R^+)}\frac{\Big|\langle \mathscr{G}(\omega),\,\phi \rangle_{\Gamma_R^+}\Big|}{\|\phi\|_{H^{1/2}_0(\Gamma_R^+)}}\leq C \| \omega\|_{H^{1/2}_0(\Gamma_R^+)}. \end{eqnarray*} \end{proof} \begin{lem} \label{lem:G} For any $\omega \in H^{1/2}_0(\Gamma_R^+)$ it holds that \begin{eqnarray*} -\textnormal{Re}\,\langle s^{-1}\,\mathscr{G}\omega,\, \omega \rangle_{\Gamma_R^+} \geq0. \end{eqnarray*} \end{lem} \begin{proof} Given $\omega \in H^{1/2}_0(\Gamma_R^+)$, we expand \begin{eqnarray*} \omega(R,\theta)=\sum_{n=1}^{\infty}\omega_n(R)\sin{n\theta},\quad \omega_n(R)=\frac{2}{\pi}\int_{0}^{\pi}\omega(R,\theta)\sin n\theta \, d\theta. \end{eqnarray*} It follows from the expression (\ref{eq:gul}) of $\mathscr{G}w_L$ and Lemma \ref{lem:mbf-1} that \begin{align*} -\textnormal{Re}\,\langle s^{-1}\,\mathscr{G}\omega,\omega \rangle_{\Gamma_R^+} =&-\textnormal{Re} \int_{\Gamma_R^+} \sum_{n=1}^{\infty} \frac{K_n^{'} (sR)}{K_n(sR)}\omega_n(R) \sin n\theta \sum_{n=1}^{\infty} \overline{\omega}_n(R) \sin n\theta\,d \gamma\\ =& -R \sum_{n=1}^{\infty}\textnormal{Re}\Big(\frac{K_n^{'} (sR)}{K_n(sR)}\Big)|\omega_n (R)|^2 \int_0^{\pi} \sin^2 n\theta \, d \theta\\ =&-\frac{\pi}{2} R \sum_{n=1}^{\infty}\textnormal{Re}\Big(\frac{K_n^{'} (sR)}{K_n(sR)}\Big)|\omega_n (R)|^2\geq 0. \end{align*} \end{proof} Below we write the Laplace transform variable as $s=s_1+is_2$ with $s_1>0$, $s_2\in{\mathbb R}$. \begin{lem} \label{lem:t} Let $\omega \in C(0,T;H^{1/2}_0(\Gamma_R^+))\cap C^1(0,T;H^{-1/2}(\Gamma_R^+))$ with the initial values $\omega(\cdot,\, 0)=\partial_t \omega(\cdot,\,0)=0$. Then it holds that \begin{eqnarray*} \textnormal{Re}\, \int_0^T e^{-2s_1 t}\langle \mathscr{T} \omega, \, \partial_t \omega \rangle_{\Gamma_R^+}\,dt \leq0. \end{eqnarray*} \end{lem} \begin{proof} Let $\tilde{\omega}(r,t)$ be the zero extension of $\omega(r,t)$ with respect to $t$ to all of ${\mathbb R}$.
Applying the Parseval identity (\ref{PI}) and Lemmas \ref{dtn-t} and \ref{lem:G}, we obtain \begin{align*} \textnormal{Re}\, \int_0^T e^{-2s_1 t}\langle \mathscr{T}\omega, \, \partial_t \omega \rangle_{\Gamma_R^+}\,dt &=\textnormal{Re}\, \int_0^T e^{-2s_1 t} \int_{\Gamma_R^+}\mathscr{T}\omega \partial_t \overline\omega \,d\gamma\,dt \\ &=\textnormal{Re}\, \int_{\Gamma_R^+}\int_0^{\infty} e^{-2s_1 t} \tilde{ \mathscr{T}}\tilde\omega \partial_t \overline{\tilde\omega} \,dt \,d\gamma \\ &=\frac{1}{2\pi}\int_{-\infty}^{\infty}\textnormal{Re} \langle \mathscr{G} \tilde{\omega}_L,\, s\tilde{\omega}_L \rangle_{\Gamma_R^+} \,d s_2 \\ &=\frac{1}{2\pi}\int_{-\infty}^{\infty} |s|^2 \textnormal{Re} \langle s^{-1}\mathscr{G} \tilde{\omega}_L,\, \tilde{\omega}_L \rangle_{\Gamma_R^+} \,d s_2 \\ &\leq0. \end{align*} \end{proof} \subsection{Well-posedness in the time-domain} In the subsection, we prove well-posedness of the truncated initial-boundary value problem (\ref{eqs:wave-b}) in the bounded domain $\Omega_R^+$ by using the variational method in the Laplace domain. Taking Laplace transform of (\ref{eqs:wave-b}) and using $f(\cdot, 0)=0$ we obtain \begin{eqnarray} \label{eq:s} \left\{\begin{array}{lll} \Delta u_L-s^2 u_L=sf_L &&\mbox{in} \quad \Omega_{R}^+\backslash\overline{D}, \\ u_L=0 &&\mbox{on} \quad \partial D\cup\Gamma_0, \\ \partial_r u_L=\mathscr{G}u_L &&\mbox{on} \quad \Gamma_R^+. \end{array}\right. \end{eqnarray} We formulate the variational formulation of problem (\ref{eq:s}) and show its well-posedness in the space $X_R:=\{u\in H^1(\Omega_R^+):u=0\; \mbox{on} \;\partial D \cup \Gamma_0\}$. Multiplying the Helmholtz equation in (\ref{eq:s}) by the complex conjugate of a test function $v\in X_R$, applying the Green's formula with the boundary conditions on $\Gamma_R^+\cup\Gamma_0\cup\partial D$, we arrive at \begin{eqnarray} \label{eq:s-v} a(u_L,\,v)=\int_{\Omega_R^+} f_L \, \overline v \, d x \quad \mbox{for all}\quad v\in X_R, \end{eqnarray} where the sesquilinear form $a(\cdot,\cdot)$ is defined as \begin{eqnarray*} a(u_L,\,v)=\int_{\Omega_R^+} \left( \frac{1}{s} \nabla u_L \cdot \nabla \overline v +s u_L \overline v\right) \,d x -\langle s^{-1}\mathscr{G}u_L,\, v \rangle_{\Gamma_R^+}. \end{eqnarray*} \begin{lem}\label{lem:wave-s} The variational problem (\ref{eq:s-v}) has a unique solution $u_L\in X_R$ with the following stability estimate \begin{eqnarray} \label{eq:wave-s} \| \nabla u_L \|_{L^2(\Omega_R^+)} + \| s u_L \|_{L^2(\Omega_R^+)} \leq C \frac{(1+|s|)|s|}{s_1}\|f_L\|_{L^2(\Omega_R^+)}, \end{eqnarray} where $C$ is a constant independent of $s$. \end{lem} \begin{proof} (i) We first prove that $a(\cdot,\,\cdot)$ is continuous and strictly coercive. Using the Cauchy-Schwarz inequality, the boundedness of $\mathscr{G}$ in Lemma \ref{lem:bd} and the trace theorem, we obtain \begin{eqnarray*} |a(u_L,\, v) | &\leq &|s|^{-1}\|\nabla u_L\|_{L^2(\Omega_R^+)} \|\nabla v\|_{L^2(\Omega_R^+)}+ |s|\|u_L\|_{L^2(\Omega_R^+)} \|v\|_{L^2(\Omega_R^+)} \\ & &+ |s|^{-1}\|\mathscr{G} u_L\|_{H^{-1/2}(\Gamma_R^+)} \|v\|_{H^{1/2}(\Gamma_R^+)}\\ &\leq& C \|u_L\|_{H^1(\Omega_R^+)} \|v\|_{H^1(\Omega_R^+)} + C \|u_L\|_{H^1(\Omega_R^+)} \|v\|_{H^1(\Omega_R^+)}\\ && C \| u_L\|_{H^{1/2}(\Gamma_R^+)} \| v\|_{H^{1/2}(\Gamma_R^+)} \\ &\leq& C \|u_L\|_{H^1(\Omega_R^+)} \|v\|_{H^1(\Omega_R^+)}. 
\end{eqnarray*} Setting $v=u_L$, it follows from the expression of the sesquilinear form $a(\cdot,\cdot)$ that \begin{eqnarray*} a(u_L,\,u_L)=\int_{\Omega_R^+} \frac{1}{s} |\nabla u_L|^2 +s |u_L|^2 \,d x -\langle s^{-1}\mathscr{G}u_L, \, u_L \rangle_{\Gamma_R^+}. \end{eqnarray*} Taking the real part of the above equation and using Lemma \ref{lem:G} we have \begin{eqnarray} \label{eq:s-cor} \textnormal{Re}(a(u_L,\,u_L))&=&\int_{\Omega_R^+} \frac{s_1}{|s|^2} |\nabla u_L|^2 +s_1 |u_L|^2 \,d x -\textnormal{Re} \langle s^{-1}\mathscr{G}u_L, \, u_L \rangle_{\Gamma_R^+}\nonumber\\ &\geq& \int_{\Omega_R^+} \frac{s_1}{|s|^2} |\nabla u_L|^2 +s_1 |u_L|^2 \,d x \nonumber \\ &\geq& \frac{s_1}{|s|^2} \left( \|\nabla u_L\|_{L^2(\Omega_R^+)}^2 + \|s u_L\|_{L^2(\Omega_R^+)}^2 \right ). \end{eqnarray} Hence, by the Lax-Milgram Lemma, the variational problem (\ref{eq:s-v}) has a unique solution $u_L\in X_R$. (ii) Combining (\ref{eq:s-v}) with the Cauchy-Schwarz inequality, it follows that \begin{eqnarray} \label{eq:s-cs} |a(u_L,\,u_L)| &\leq& \frac{1}{|s|} \| f_L\|_{L^2(\Omega_R^+)} \| s u_L\|_{L^2(\Omega_R^+)} \nonumber\\ &\leq& \frac{1}{|s|} \| f_L\|_{L^2(\Omega_R^+)} \| s u_L\|_{H^1(\Omega_R^+)}\nonumber\\ &\leq& \frac{1}{|s|} \| f_L\|_{L^2(\Omega_R^+)} \left(|s|^2\| \nabla u_L\|_{L^2(\Omega_R^+)}^2 +\| s u_L\|_{L^2(\Omega_R^+)}^2\right)^{1/2}\nonumber\\ &\leq& C \frac{1+|s|}{|s|} \| f_L\|_{L^2(\Omega_R^+)} \left(\| \nabla u_L\|_{L^2(\Omega_R^+)}^2 +\| s u_L\|_{L^2(\Omega_R^+)}^2\right)^{1/2}. \end{eqnarray} Combining (\ref{eq:s-cor}) and (\ref{eq:s-cs}) yields \begin{eqnarray*} &&\frac{s_1}{|s|^2} \left( \|\nabla u_L\|_{L^2(\Omega_R^+)}^2 + \|s u_L\|_{L^2(\Omega_R^+)}^2 \right ) \\ &\leq& \textnormal{Re}(a(u_L,\,u_L)) \\ &\leq& |a(u_L,\,u_L) | \\ &\leq& C \frac{1+|s|}{|s|} \| f_L\|_{L^2(\Omega_R^+)} \left(\| \nabla u_L\|_{L^2(\Omega_R^+)}^2 +\| s u_L\|_{L^2(\Omega_R^+)}^2\right)^{1/2}. \end{eqnarray*} Then, using the Cauchy-Schwarz inequality, we have \begin{eqnarray*} \|\nabla u_L\|_{L^2(\Omega_R^+)} + \|s u_L\|_{L^2(\Omega_R^+)} &\leq& \left(\| \nabla u_L\|_{L^2(\Omega_R^+)}^2 +\| s u_L\|_{L^2(\Omega_R^+)}^2\right)^{1/2} \\ &\leq& C \frac{(1+|s|)|s|}{s_1}\| f_L\|_{L^2(\Omega_R^+)}, \end{eqnarray*} which completes the proof of the stability estimate. \end{proof} \begin{thm} The initial boundary value problem (\ref{eqs:wave-b}) has a unique solution \begin{eqnarray*} u(x,t)\in L^2(0,T; X_R) \cap H^1(0,T; L^2(\Omega_R^+)), \end{eqnarray*} which satisfies the stability estimate \begin{eqnarray*} \max_{0\leq t\leq T}\Big(\| \partial_t u \|_{L^2(\Omega_R^+)} + \| \nabla u \|_{L^2(\Omega_R^+)} \Big ) \leq C \|\partial_t f\|_{L^1(0,T;L^2(\Omega_R^+))}. \end{eqnarray*} \end{thm} \begin{proof} We first prove existence and uniqueness of solutions to (\ref{eqs:wave-b}). Simple calculations show that \begin{eqnarray*} &&\int_0^T \| \partial_t u \|_{L^2(\Omega_R^+)}^2 + \| \nabla u \|_{L^2(\Omega_R^+)}^2 \, dt\\ \leq && C \int_0^\infty e^{-2s_1 t} \Big(\| \partial_t u \|_{L^2(\Omega_R^+)}^2 + \| \nabla u \|_{L^2(\Omega_R^+)}^2\Big) \, dt. \end{eqnarray*} Hence it suffices to estimate the integral \begin{eqnarray*} \int_0^\infty e^{-2s_1 t} \Big(\| \partial_t u \|_{L^2(\Omega_R^+)}^2 + \| \nabla u \|_{L^2(\Omega_R^+)}^2\Big) \, dt. \end{eqnarray*} Based on the stability estimate of $u_L$ in Lemma \ref{lem:wave-s}, we derive from \cite[Lemma 44.1]{Treves} that $u_L$ is a holomorphic function of $s$ on the half plane $s_1>\zeta_0>0$, where $\zeta_0$ is any positive constant. 
Thus, by Lemma \ref{lem:a}, the inverse Laplace transform of $u_L$ exists and is supported in $[0,\infty)$. Set $u=\mathscr{L}^{-1}(u_L)$. Applying the Parseval identity (\ref{PI}) and the stability estimate (\ref{eq:wave-s}) in Lemma \ref{lem:wave-s} and using the Cauchy-Schwarz inequality, we obtain \begin{eqnarray*} &&\int_0^\infty e^{-2s_1 t} (\| \partial_t u \|_{L^2(\Omega_R^+)}^2 + \| \nabla u \|_{L^2(\Omega_R^+)}^2) \, dt\\ =&& \frac{1}{2\pi} \int_{-\infty}^{+\infty} \| s u_L \|_{L^2(\Omega_R^+}^2 + \| \nabla u_L \|_{L^2(\Omega_R^+)}^2) \, d s_2\\ \leq && C \frac{1}{s_1^2} \int_{-\infty}^{+\infty} (1+|s|)^2|s|^2 \| f_L\|_{L^2(\Omega_R^+)}^2 \, d s_2 \\ \leq && C \frac{1}{s_1^2} \int_{-\infty}^{+\infty} \| s^2f_L\|_{L^2(\Omega_R^+)}^2 +\|sf_L\|_{L^2(\Omega_R^+)}^2\, d s_2 \\ \leq && C \frac{1}{s_1^2} \int_0^\infty e^{-2 s_1 t} \left(\|\partial_t^2 f\|_{L^2(\Omega_R^+)}^2+\|\partial_t f\|_{L^2(\Omega_R^+)}^2\right) \, dt. \end{eqnarray*} This together with the Poincar\'{e} inequality proves \begin{eqnarray*} u\in L^2(0,T; X_R) \cap H^1(0,T; L^2(\Omega_R^+)). \end{eqnarray*} To prove the stability estimate, we define the energy function \begin{eqnarray*} E(t):=\frac{1}{2}\left(\| \partial_t u \|_{L^2(\Omega_R^+)}^2 + \| \nabla u \|_{L^2(\Omega_R^+)}^2\right), \quad 0<t<T. \end{eqnarray*} It is obvious that \begin{eqnarray*} \label{eq:en} E(t)-E(0)=\int_0^t E'(\tau)\,d\tau. \end{eqnarray*} Recalling the wave equation in (\ref{eqs:wave-b}) and applying integration by parts, we obtain \begin{eqnarray} \label{eq:eni} \int_0^t e^{-2s_1 \tau} E'(\tau)\,d\tau\quad =&&\mbox{Re}\int_0^t e^{-2s_1 \tau}\int_{\Omega_R^+} \partial_{\tau\tau}u \partial_{\tau}\overline{u} + \nabla u\cdot \nabla (\partial_{\tau}\overline{u}) \, dx d\tau \nonumber\\ =&&\mbox{Re}\int_0^t e^{-2s_1 \tau} \int_{\Omega_R^+} (\Delta u+\partial_\tau f) \partial_{\tau}\overline{u} + \nabla u\cdot \nabla (\partial_{\tau}\overline{u}) \, dx d\tau \nonumber \\ =&&\mbox{Re}\int_0^t e^{-2s_1 \tau}\langle \mathscr{T}u,\, \partial_{\tau}\overline{u} \rangle_{\Gamma_R^+} \,d\tau \nonumber \\&& +\mbox{Re}\int_0^t e^{-2s_1 \tau}(\partial_\tau f,\, \partial_{\tau}\overline{u})_{L^2(\Omega_R^+)}\,d\tau. \end{eqnarray} Applying integration by parts on the left hand side of \eqref{eq:eni} and using $E(0)=0$ together with Lemma \ref{lem:t}, we obtain \begin{eqnarray*} &&e^{-2s_1t}E(t)+2s_1\int_0^t e^{-2s_1\tau}E(\tau)\,d\tau \\ \leq && \mbox{Re}\int_0^t e^{-2s_1\tau} (\partial_\tau f,\, \partial_{\tau}\overline{u})_{L^2(\Omega_R^+)}\,d\tau\\ \leq && \int_0^T \|e^{-s_1t}\partial_t f\|_{L^2(\Omega_R^+)}\,\|e^{-s_1t}\partial_t u\|_{L^2(\Omega_R^+)} dt\\ \leq && \max_{0\leq t\leq T} \| e^{-s_1t}\partial_t u \|_{L^2(\Omega_R^+)}\, \|e^{-s_1t}\partial_t f\|_{L^1(0,T;L^2(\Omega_R^+))}\\ \leq && \varepsilon \max_{0\leq t\leq T} \| e^{-s_1t}\partial_t u \|_{L^2(\Omega_R^+)}^2 + \frac{1}{4\varepsilon}\, \|e^{-s_1t}\partial_t f\|_{L^1(0,T;L^2(\Omega_R^+))}^2. \end{eqnarray*} Letting $s_1\rightarrow0$, choosing $\varepsilon>0$ small enough and applying Cauchy-Schwartz inequality, we finally get \begin{eqnarray*} &&\max_{0\leq t\leq T} \Big(\| \partial_t u \|_{L^2(\Omega_R^+)} + \| \nabla u \|_{L^2(\Omega_R^+)} \Big)\\ \leq &&C \max_{0\leq t\leq T} \Big(\| \partial_t u \|_{L^2(\Omega_R^+)}^2 + \| \nabla u \|_{L^2(\Omega_R^+)}^2 \Big)^{1/2}\\ \leq &&C \|\partial_t f\|_{L^1(0,T;L^2(\Omega_R^+))}. \end{eqnarray*} This completes the stability estimate. 
\end{proof} \section{The time-domain PML problem } Inspired by the PML approach for bounded obstacles \cite{Chen09,Chen12}, we present in this section the time-domain PML formulation in a perturbed half-plane and then show the well-posedness and stability of the PML problem by applying the Laplace transform together with the variational and energy methods. \subsection{Well-posedness of the PML problem} We surround the domain $\Omega_R^+$ with a PML layer $$\Omega_{PML}^+:=B_{\rho}^+\backslash \overline{B_R^+}= :\{x\in\Omega:R<|x|<\rho\},$$ where $B_{\rho}^+:=\{x\in\Omega:|x|<\rho\}$. We denote $\Omega_{\rho}^+:=B_{\rho}^+\backslash \overline{D}$ the truncated PML domain with the exterior boundary $\Gamma_{\rho}^+:=\{x\in\Omega:|x|=\rho\}$. Let $s_1=\textnormal{Re}(s)>0$ for $s\in{\mathbb C}$. Define the medium parameter in the PML layer as \begin{eqnarray*} \alpha(r)= \left\{\begin{array}{lll} 1, && r\leq R, \\ 1+s^{-1} \sigma(r), && r> R, \end{array}\right. \end{eqnarray*} where $r=|x|$, $\sigma=0$ for $r \leq R$ and $\sigma >0$ for $r> R$. In what follows, we will derive the PML formulation by a complex transformation of variables. Denote by $\tilde{r}$ the complex radius \begin{eqnarray*} \tilde{r}=\int_0^r \alpha(\tau)\,d\tau=r \beta(r), \end{eqnarray*} where $\beta(r)=r^{-1}\int_0^r \alpha(\tau)\,d\tau$. It is obvious that $\beta(r)=1+s^{-1}\hat{\sigma}(r)$ for $r\geq R$, where $\hat{\sigma}(r)=r^{-1}\int_0^r \sigma(\tau)\,d\tau$. To derive the PML equations, we need to transform the exterior problem (\ref{eqs:wave-b}) into the s-domain. On $\Gamma_R^+$, the Laplace transform of $u$ can be expanded into the series, \begin{eqnarray*} u_L (R,s)=\sum_{n=1}^{\infty} u^n _L(R,s)\sin{n\theta},\quad u^n _L(R,s)=\frac{2}{\pi} \int_{0}^{\pi} {u_L (R, \theta, s) \sin{n\theta} \, d\theta}. \end{eqnarray*} Then, let us define the PML extension $\tilde{u}_L$ in the s-domain as \begin{eqnarray*} \tilde{u}_L( r,\theta,s) =\sum_{n=1}^{\infty} \frac{K_n(s\tilde r)}{K_n(sR)}\tilde u^n _L(R,s)\sin{n\theta}, \quad r>R. \end{eqnarray*} Since $K_n(z)\backsim (\frac{\pi}{2z})^{1/2}e^{-z}$ as $|z|\rightarrow\infty$, $\tilde{u}_L(r, \theta, s)$ decays exponentially for large $\tilde r$. It is easy to see that $\tilde u _L$ satisfies $-\frac{1}{\tilde r} \frac{\partial}{\partial \tilde r}(\tilde r \frac{\partial}{\partial \tilde r})\tilde{u}_L-\frac{1}{\tilde r ^2}\frac{\partial^2}{\partial \theta^2}\tilde{u}_L+s^2\tilde{u}_L=0$ in $\Omega \backslash \overline{B_R^+}$. Since $\tilde{r}=r\beta$ and $d\tilde{r}=\alpha dr$, we obtain \begin{eqnarray} \label{eq:s-pml} -\nabla\cdot ( A \nabla \tilde u_L)+s^2\alpha \beta \tilde u_L=0, \quad x\in \Omega \backslash \overline{B_R^+} \end{eqnarray} where $A=\textnormal{diag}\{\beta / \alpha, \alpha / \beta\}$ is a complex matrix and $ A \nabla \tilde u_L=\frac{\beta}{\alpha} \frac{\partial \tilde u_L}{\partial r} \textbf{e}_r+\frac{\alpha}{\beta r}\frac{\partial \tilde u_L}{\partial \theta}\textbf{e}_\theta$. Here $\textbf{e}_r$ and $\textbf{e}_\theta$ are the unit vectors in polar coordinates. Next, we will deduce the PML system in the time-domain by applying the inverse Laplace transform to (\ref{eq:s-pml}). Since $A$, $\alpha$ and $\beta$ are complex, to simplify the inverse Laplace transform, we introduce the auxiliary functions \begin{eqnarray} \label{eq:s-m} \tilde{p}_L^{*}:=-\frac{1}{s}\nabla \tilde u_L, \quad \tilde u _L^{*}:=\frac{1}{s}\sigma \tilde u_L,\quad \tilde p_L:=A \tilde{p}_L^{*}, \end{eqnarray} to transform (\ref{eq:s-pml}) into a first order system. 
In $\Omega\backslash \overline {B_{R}^{+}}$, define \begin{eqnarray*} \tilde u:=\mathscr{L}^{-1}(\tilde u_L),\;\tilde p:=\mathscr{L}^{-1}(\tilde p_L),\; \tilde u^*:=\mathscr{L}^{-1}(\tilde u_L^*),\;\tilde p^*:=\mathscr{L}^{-1}(\tilde p_L^*), \end{eqnarray*} with the zero initial conditions \begin{eqnarray*} \tilde u|_{t=0}=0,\;\tilde p|_{t=0}=0,\;\tilde u^*|_{t=0}=0,\;\tilde p^*|_{t=0}=0. \end{eqnarray*} Taking the inverse Laplace transform to (\ref{eq:s-pml}) and (\ref{eq:s-m}) and using the zero initial conditions, we can write the PML system for $x\in \Omega \backslash \overline {B_R^+}$ as \begin{eqnarray} \label{t-pml} \left\{\begin{array}{lll} \partial_t \tilde u+(\sigma +\hat \sigma)\tilde u +\sigma \tilde u^{*}+ \nabla \cdot \tilde p =0,\\ \partial_t \tilde{p}^{*}=-\nabla\tilde u,\quad \partial_t \tilde{u}^{*}=\sigma \tilde u, \\ \partial_t \tilde{p}+\Lambda_1 \tilde p= \partial_t \tilde{p}^{*}+\Lambda_2 \tilde p^{*}, \end{array}\right. \end{eqnarray} where $s\alpha=s+\sigma$, $s\beta=s+\hat\sigma$, $\Lambda_1={M^{T}}\textnormal{diag}\{\sigma, \hat\sigma\} {M}$ and $\Lambda_2={M^{T}}\textnormal{diag}\{\hat\sigma, \sigma\} {M}$ with \begin{eqnarray*} M := \left ( \begin{array}{rrl} \cos \theta && \sin \theta \\ -\sin \theta && \cos \theta \end{array}\right). \end{eqnarray*} Since the above PML system (\ref{t-pml}) is a first order system, it is necessary to reduce equivalently the time-domain scattering problem (\ref{eqs:wave-b}) in the half space into a first order PDE system: \begin{eqnarray} \label{eqs:h-1} \left\{\begin{array}{lll} \partial_t u =-\nabla\cdot p+f(x,t) &&\quad \mbox{in} \quad \Omega_R^+\times(0, T),\\ \partial_t p=-\nabla u &&\quad \mbox{in} \quad \Omega_R^+\times(0, T),\\ u=0 &&\quad \mbox{on}\quad (\partial D \cup \Gamma_0) \times(0,T),\\ p\cdot\hat{x}+\mathscr{T}(\int_0^t u\,d\tau)=0, &&\quad \mbox{on}\quad \Gamma_R^+\times (0,T), \\ u|_{t=0}=p|_{t=0}=0 &&\quad \mbox{in} \quad \Omega_R^+. \end{array}\right. \end{eqnarray} Below we derive the DtN boundary condition on $\Gamma_R^{+}\times (0,T)$. Taking Laplace transform to the second equation of (\ref{eqs:h-1}), we obtain that \begin{eqnarray*} p_L+\frac{1}{s}\nabla u_L=0. \end{eqnarray*} Then, multiplying the above equation by $\hat{x}=x/|x|$ on $\Gamma_R^{+}$ and using the DtN boundary condition $\partial_r u_L=\mathscr{G}u_L$, it follows that \begin{eqnarray} \label{bc:dtn-pg} p_L \cdot \hat x+ \frac{1}{s}\mathscr{G}u_L=0 \quad \mbox{on}\quad \Gamma_R^{+}. \end{eqnarray} Taking inverse Laplace transform to (\ref{bc:dtn-pg}) and using (\ref{eq:l-3}), we have \begin{eqnarray}\label{bc:dtn-1} p\cdot\hat{x}+\mathscr{T}\left(\int_0^t u\,d\tau\right)=0 \quad \mbox{on}\quad \Gamma_R^{+}\times(0,T). \end{eqnarray} Further, since $\sigma(R)=\hat\sigma(R)=0$, we get $\alpha =\beta=1$ on $\Gamma_R^+$ and thus $\tilde u=u$ and $\tilde p=p$ on $\Gamma_R^+$. Therefore, ($\tilde u, \tilde p $) can be viewed as the extension of the solution of the problem (\ref{eqs:wave}). 
Setting $\tilde u=u $ and $\tilde p=p$ in $\Omega_R^+$, we can reformulate the truncated PML problem in $\Omega_{\rho}^{+}$ as \begin{subequations} \label{eq:t-pml} \begin{align} &\partial_t \tilde u+(\sigma +\hat \sigma)\tilde u +\sigma \tilde u^{*}+ \nabla \cdot \tilde p =f \quad &&\mbox{in}\quad \Omega_\rho^+\times (0,T) \label{eq:t-pml-a},\\ &\partial_t \tilde{p}^{*}=-\nabla\tilde u,\quad \partial_t \tilde{u}^{*}=\sigma \tilde u \quad &&\mbox{in}\quad \Omega_\rho^+\times (0,T) \label{eq:t-pml-b},\\ &\partial_t \tilde{p}+\Lambda_1 \tilde p= \partial_t \tilde{p}^{*}+\Lambda_2 \tilde p^{*} \quad &&\mbox{in}\quad \Omega_\rho^+\times (0,T) \label{eq:t-pml-c},\\ &\tilde u =0 \quad &&\mbox{on}\quad (\partial D \cup \Gamma_0) \times (0,T)\label{eq:t-pml-d},\\ &\tilde u =0 \quad &&\mbox{on}\quad \Gamma_\rho^+ \times (0,T)\label{eq:t-pml-e},\\ &\tilde u|_{t=0} =\tilde p|_{t=0} =\tilde u^{*}|_{t=0}=\tilde p^{*}|_{t=0} \quad &&\mbox{in}\quad \Omega_\rho^+. \label{eq:t-pml-f} \end{align} \end{subequations} The well-posedness of truncated PML problem will be proved by applying Laplace transform and the variational method. In the rest of this paper, we assume that $\sigma(r) $ is monotonically increasing on $[R, \rho]$ such that $\sigma_R \leq \sigma \leq \sigma_\rho$. First, we take Laplace transform to (\ref{eq:t-pml}) with $s\in{\mathbb C}$ and then eliminate $\tilde p_L$, $\tilde u_L^{*}$ and $\tilde p_L^{*}$, to obtain \begin{eqnarray} \label{eqs:s-pml} \left\{\begin{array}{lll} -\nabla\cdot ( A \nabla \tilde u_L)+s^2\alpha \beta \tilde u_L=sf_L \quad &&\mbox{in}\quad \Omega_\rho^+ \times (0,T),\\ \tilde u_L =0 \quad &&\mbox{on}\quad \partial D \cup \Gamma_0,\\ \tilde u_L =0 \quad &&\mbox{on}\quad \Gamma_\rho^+. \end{array}\right. \end{eqnarray} It is easy to derive the variational formulation of (\ref{eqs:s-pml}): find a solution $\tilde u_L\in H_0^1(\Omega_\rho^+)$ such that \begin{eqnarray}\label{eq:v-pml} \tilde a(\tilde u_L, v)=\int_{\Omega_\rho^+} s f_L \overline v dx, \quad \mbox{for all}\; v\in H_0^1(\Omega_\rho^+ ) \end{eqnarray} where the sesquilinear form $\tilde a(\cdot,\, \cdot):H_0^1(\Omega_\rho^+ )\times H_0^1(\Omega_\rho^+ )\rightarrow {\mathbb C}$ is defined as \begin{eqnarray*} \tilde a(\tilde u_L, v)=\int_{\Omega_\rho^+} A \nabla \tilde u_L \cdot \nabla \overline v+s^2 \alpha \beta \tilde u_L \overline v \,dx. \end{eqnarray*} We will prove the well-posedness of (\ref{eqs:s-pml}). The proof of the first inequality in the subsequent lemma is similar to that in \cite[Lemma 4.1]{Chen09} where the PML layer is defined as an annular domain in the free space ${\mathbb R} ^2$ and $\sigma$ is a positive constant. \begin{lem}\label{lem:a-pml} For any $\tilde u_L\in H_0^1(\Omega_\rho ^+ )$, it holds that \begin{itemize} \item[(a)] $\textnormal{Re}[\tilde a(u_L,u_L)]+\frac{s_2}{s_1+\sigma_\rho}\textnormal{Im}[\tilde a(u_L,u_L)]\geq \frac{s_1^2}{(s_1+\sigma_\rho)^2}\big(\|A\nabla u_L\|_{L^2(\Omega_\rho^+)}^2+\|s\alpha \beta u_L\|_{L^2(\Omega_\rho^+)}^2\big)$,\\ \item[(b)] $|\tilde a(u_L,u_L)|\geq \Big( \frac{s_1}{s_1+\sigma_\rho}\Big)^2\frac{s_1}{|s|}|\frac{s_1}{s+\sigma_\rho}|^2\big(\|\nabla u_L\|_{L^2(\Omega_\rho^+)}^2+\|s u_L\|_{L^2(\Omega_\rho^+)}^2\big)$. \end{itemize} \end{lem} \begin{proof} It suffices to prove (b). 
For any $\tilde u_L\in H_0^1(\Omega_\rho ^+ )$, applying (a) we have \begin{eqnarray*} |\tilde a(u_L,u_L)|&\geq &\frac{1}{|s|}\mbox{Re}[s \tilde a(u_L,u_L)] \\ &\geq &\frac{s_1}{|s|}\left(\textnormal{Re}[\tilde a(u_L,u_L)]+\frac{s_2}{s_1}\textnormal{Im}[\tilde a(u_L,u_L)]\right)\\ &\geq &\frac{s_1}{|s|}\left(\textnormal{Re}[\tilde a(u_L,u_L)]+\frac{s_2}{s_1+\sigma_\rho}\textnormal{Im}[\tilde a(u_L,u_L)]\right)\\ &\geq &\frac{s_1}{|s|}\left(\frac{s_1}{s_1+\sigma_\rho}\right)^2\left(\|A\nabla u_L\|_{L^2(\Omega_\rho^+)}^2+\|s\alpha \beta u_L\|_{L^2(\Omega_\rho^+)}^2\right)\\ &\geq &\Big(\frac{s_1}{s_1+\sigma_\rho}\Big)^2\frac{s_1}{|s|}\Big|\frac{s_1}{s+\sigma_\rho}\Big|^2\left(\|\nabla u_L\|_{L^2(\Omega_\rho^+)}^2+\|s u_L\|_{L^2(\Omega_\rho^+)}^2\right). \end{eqnarray*} This completes the proof. \end{proof} \begin{lem} \label{lem:e-pml} The variational problem (\ref{eq:v-pml}) has a unique solution $\tilde u_L\in H_0^1(\Omega_R^+)$ with the following stability estimates \begin{eqnarray} \|A\nabla u_L\|_{L^2(\Omega_\rho^+)}+\|s\alpha \beta u_L\|_{L^2(\Omega_\rho^+)}&\leq & C \left(\frac{|s|}{s_1}\right)^{1/2}\left(1+\frac{\sigma_\rho}{s_1} \right)\| f_L \|_{L^2(\Omega_\rho^+)}, \label{eq:s-e1}\\ \|\nabla u_L\|_{L^2(\Omega_\rho^+)}+\|s u_L\|_{L^2(\Omega_\rho^+)} &\leq & C \left(\frac{|s|}{s_1}\right)^{1/2}\left(1+\frac{\sigma_\rho}{s_1} \right) \frac{|s+\sigma_\rho|}{s_1} \| f_L \|_{L^2(\Omega_\rho^+)},\label{eq:s-e2} \end{eqnarray} where $C$ is a constant independent of $s$. \end{lem} \begin{proof} The first part of the lemma follows easily from the Lax-Milgram lemma and the strictly coercivity of $\tilde a(\cdot,\,\cdot)$ in Lemma \ref{lem:a-pml}. Further, the stability estimates (\ref{eq:s-e1}) and (\ref{eq:s-e2}) follow from (\ref{eq:v-pml}), Lemma \ref{lem:a-pml} and the Cauchy-Schwartz inequality. \end{proof} The well-posedness of PML problem (\ref{eq:t-pml}) in the time domain can be established by applying Lemma \ref{lem:e-pml}. \begin{thm} The truncated PML problem (\ref{eq:t-pml}) in the time domain has a unique solution $(u,p,u^*,p^*)$ such that \begin{eqnarray*} &u\in L^2(0,T;H_0^1(\Omega_\rho^+)) \cap H^1(0,T;L^2(\Omega_\rho^+)),\quad & u^*\in H^1(0,T;L^2(\Omega_\rho^+)),\\ &p\in L^2(0,T;H(\textnormal{div},\Omega_\rho^+)) \cap H^1(0,T;L^2(\Omega_\rho^+)),\quad & p^*\in H^1(0,T;L^2(\Omega_\rho^+)). \end{eqnarray*} \end{thm} \begin{proof} By simple calculations, we can obtain \begin{eqnarray*} &&\int_0^T \| \partial_t u \|_{L^2(\Omega_\rho^+)}^2 + \| \nabla u \|_{L^2(\Omega_\rho^+)}^2 \, dt\\ \leq && C \int_0^\infty e^{-2s_1 t} (\| \partial_t u \|_{L^2(\Omega_\rho^+)}^2 + \| \nabla u \|_{L^2(\Omega_\rho^+)}^2) \, dt. \end{eqnarray*} Hence it suffices to estimate the integral \begin{eqnarray*} \int_0^\infty e^{-2s_1 t} (\| \partial_t u \|_{L^2\Omega_\rho^+)}^2 + \| \nabla u \|_{L^2(\Omega_\rho^+)}^2) \, dt. \end{eqnarray*} Using the stability estimate of $u_L$ in Lemma \ref{lem:e-pml}, we duduce from \cite[Lemma 44.1]{Treves} that $u_L$ is a holomorphic function of $s$ on the half plane $s_1>\zeta_0>0$, where $\zeta_0$ is any positive constant. Thus, by Lemma \ref{lem:a}, the inverse Laplace transform of $u_L$ exists and is supported in $[0,\infty]$. Set $u=\mathscr{L}^{-1}(u_L)$. 
One deduces from the Parseval identity (\ref{PI}), stability estimate (\ref{eq:s-e2}) and the Cauchy-Schwartz inequality that \begin{eqnarray*} &&\int_0^\infty e^{-2s_1 t} (\| \partial_t u \|_{L^2(\Omega_\rho^+)}^2 + \| \nabla u \|_{L^2(\Omega_\rho^+)}^2) \, dt\\ =&& \frac{1}{2\pi} \int_{-\infty}^{+\infty} \| s u_L \|_{L^2(\Omega_\rho^+)}^2 + \| \nabla u_L \|_{L^2(\Omega_\rho^+)}^2) \, d s_2\\ \leq && C \frac{1}{s_1^3}\left( 1+\frac{\sigma_\rho}{s_1}\right)^2 \int_{-\infty}^{+\infty} |s| |s+\sigma_\rho|^2 \| f_L\|_{L^2(\Omega_\rho^+)}^2 \, d s_2 \\ = && C \frac{1}{s_1^3}\left( 1+\frac{\sigma_\rho}{s_1}\right)^2 \int_{-\infty}^{+\infty} \|s(s+\sigma_\rho)f_L\|_{L^2(\Omega_\rho^+)} \| (s+\sigma_\rho) f_L\|_{L^2(\Omega_\rho^+)}\, d s_2 \\ \leq && C \frac{1}{s_1^3}\left( 1+\frac{\sigma_\rho}{s_1}\right)^2 \int_{0}^{+\infty} e^{-2 s_1 t} \|\partial_{tt}f+\sigma_\rho\partial_t f\|_{L^2(\Omega_\rho^+)} \| \partial_t f+\sigma_\rho f\|_{L^2(\Omega_\rho^+)}\, d t \\ \leq && C \frac{1}{s_1^3}\left( 1+\frac{\sigma_\rho}{s_1}\right)^2 \int_{0}^{+\infty} e^{-2 s_1 t} \Big( \|\partial_{tt}f\|_{L^2(\Omega_\rho^+)} \|\partial_t f\|_{L^2(\Omega_\rho^+)} \\ &&+\sigma_\rho\|\partial_{tt}f\|_{L^2(\Omega_\rho^+)} \| f\|_{L^2(\Omega_\rho^+)} +\sigma_\rho\|\partial_t f\|_{L^2(\Omega_\rho^+)}^2 +\sigma_\rho^2\|\partial_t f\|_{L^2(\Omega_\rho^+)} \| f\|_{L^2(\Omega_\rho^+)} \Big) \, d t \\ \leq && C \frac{1}{s_1^3}\left( 1+\frac{\sigma_\rho}{s_1}\right)^2 \int_{0}^{+\infty} e^{-2 s_1 t} \Big( (1+\sigma_\rho)\|\partial_{tt}f\|_{L^2(\Omega_\rho^+)}^2 +\sigma_\rho(1+\sigma_\rho+\sigma_\rho^2)\|\partial_t f\|_{L^2(\Omega_\rho^+)}^2 \\ &&+ (1+\sigma_\rho)\| f\|_{L^2(\Omega_\rho^+)}^2 \Big)\, dt. \end{eqnarray*} This together with the Poincar\'{e} inequality proves \begin{eqnarray*} u\in L^2(0,T;H_0^1(\Omega_\rho^+)) \cap H^1(0,T; L^2(\Omega_\rho^+)). \end{eqnarray*} From (\ref{eq:s-m}) and the first equation of (\ref{eqs:s-pml}), we obtain \begin{eqnarray} \label{eq:t-tr} s p_L=-A \nabla u_L, \quad \nabla \cdot p_L=-s \alpha\beta u_L+f_L. \end{eqnarray} By the first equation of (\ref{eq:t-tr}) and stability estimate (\ref{eq:s-e1}), we deduce from \cite[Lemma 44.1]{Treves} that $p_L$ is holomorphic function of $s$ on the half plane $s_1>\zeta_0>0$, where $\zeta_0$ is any positive constant. Thus, by Lemma \ref{lem:a}, it follows from that the inverse Laplace transform of $p_L$ exists and is supported in $[0,\infty]$. 
Then, using the Parseval identity (\ref{PI}), Cauchy inequality with $\varepsilon$ and stability estimate (\ref{eq:s-e1}), we can obtain \begin{eqnarray*} &&\int_0^\infty e^{-2s_1 t} \left(\| \partial_t p \|_{L^2(\Omega_\rho^+)}^2 + \| \nabla \cdot p \|_{L^2(\Omega_\rho^+)}^2 \right) \, dt\\ = && \frac{1}{2\pi} \int_{-\infty}^{+\infty} \| s p_L \|_{L^2(\Omega_\rho^+)}^2 + \| \nabla \cdot p_L \|_{L^2(\Omega_\rho^+)}^2 \, d s_2\\ =&& \frac{1}{2\pi} \int_{-\infty}^{+\infty} \| A \nabla u_L \|_{L^2(\Omega_\rho^+)}^2 + \| s \alpha \beta u_L +f_L\|_{L^2(\Omega_\rho^+)}^2 \, d s_2\\ \leq && C \frac{1}{2\pi} \int_{-\infty}^{+\infty} \| A \nabla u_L \|_{L^2(\Omega_\rho^+)}^2 + \| s \alpha \beta u_L\|_{L^2(\Omega_\rho^+)}^2+\| f_L\|_{L^2(\Omega_\rho^+)}^2 \, d s_2\\ \leq && C \frac{1}{2\pi} \int_{-\infty}^{+\infty} \frac{|s|}{s_1} \left(1+\frac{\sigma_\rho}{s_1} \right)^2\| f_L \|_{L^2(\Omega_\rho^+)}^2+ \| f_L \|_{L^2(\Omega_\rho^+)}^2 \,d s_2\\ \leq && C \int_{0}^{+\infty} e^{-2s_1 t} \left( \frac{1}{s_1} \left(1+\frac{\sigma_\rho}{s_1} \right)^2 \|\partial_t f \|_{L^2(\Omega_\rho^+)}\| f \|_{L^2(\Omega_\rho^+)}+ \| f \|_{L^2(\Omega_\rho^+)}^2 \right) \,d t\\ \leq && C \int_{0}^{+\infty} e^{-2s_1 t} \left( \frac{1}{s_1} \left(1+\frac{\sigma_\rho}{s_1} \right)^2 \|\partial_t f \|_{L^2(\Omega_\rho^+)}^2+ \Big( \frac{1}{s_1} \left(1+\frac{\sigma_\rho}{s_1} \right)^2+1\Big)\| f \|_{L^2(\Omega_\rho^+)}^2 \right) \,d t. \end{eqnarray*} Hence, \begin{eqnarray*} p\;\in\; L^2(0,T;H(\textnormal{div},\Omega_\rho^+)) \cap H^1(0,T;L^2(\Omega_\rho^+)). \end{eqnarray*} By the second equation of (\ref{eq:t-pml-b}) and Poinc$\acute{a}$re's inequality, we see \begin{eqnarray*} &&\int_0^\infty e^{-2s_1 t} \| \partial_t u^* \|_{L^2(\Omega_\rho^+)}^2 \, dt\\ \leq &&\int_0^\infty e^{-2s_1 t} \sigma_\rho^2 \| u\|_{L^2(\Omega_\rho^+)}^2 \, dt\\ \leq && C \int_0^\infty e^{-2s_1 t} \sigma_\rho^2 \left(\| \nabla u\|_{L^2(\Omega_\rho^+)}^2 \right) \, dt\\ \leq && C \int_0^\infty e^{-2s_1 t} \sigma_\rho^2 \left(\| \nabla u\|_{L^2(\Omega_\rho^+)}^2 + \| \partial_t u\|_{L^2(\Omega_\rho^+)}^2 \right) \, dt. \end{eqnarray*} Then, in view of the solution space for $u$, we know $u^*\;\in\; H^1(0,T;L^2(\Omega_\rho^+))$. Similarly, from the first equation of (\ref{eq:t-pml-b}), we know $\| \partial_t p^*\|_{L^2(\Omega_\rho^+)}^2=\| \nabla u\|_{L^2(\Omega_\rho^+)}^2$, which implies $p^*\;\in\;H^1(0,T;L^2(\Omega_\rho^+))$. \end{proof} \subsection{Stability of the truncated PML problem } The aim of this subsection is to prove the stability of the PML problem (\ref{eq:t-pml}) with $\sigma=\hat \sigma$. We first present an auxiliary stability estimate. \begin{thm} \label{thm:sg} Let $(u,p,u^*,p^*)$ be the solution of the truncated PML problem (\ref{eq:t-pml}). Then there holds the stability estimate \begin{eqnarray*} &&\max_{0\leq t\leq T} \left( \|\partial_t u +\sigma u \|_{L^2(\Omega_\rho^+)} +\|\partial_t p +\sigma p \|_{L^2(\Omega_\rho^+)} +\|\partial_t u^* +\sigma u^* \|_{L^2(\Omega_\rho^+)} \right)\\ \leq && C \int_0^T \| \partial_t f +\sigma f\|_{L^2(\Omega_\rho^+)}\, dt, \end{eqnarray*} where the constant $C$ is independent of $\sigma$ and $T$. \end{thm} \begin{proof} We apply to equation (\ref{eq:t-pml-a}) the operator $\partial_t +\sigma$ to get \begin{eqnarray*} \partial_{t}^2 u+\nabla \cdot \left( \partial_t p +\sigma p \right) + (\sigma+\hat \sigma)\left( \partial_t u +\sigma u \right) +\hat \sigma \left( \partial_t u^* +\sigma u^* \right)= \partial_t f +\sigma f. 
\end{eqnarray*} Multiplying the above equation by $\partial_t u +\sigma u$ and integrating over $\Omega_\rho^+$ yield \begin{eqnarray} \begin{aligned} \label{eq:st-1} &\frac{1}{2}\frac{d}{dt}\| \partial_t u+\sigma u\|_{L^2(\Omega_\rho^+)}^2+ \frac{1}{2}\frac{d}{dt}\| \partial_t u^*+\sigma u^*\|_{L^2(\Omega_\rho^+)}^2+ \left( \nabla\cdot (\partial_t p+\sigma p),\, \partial_t u+\sigma u\right)_{\Omega_\rho^+} \\ &+\left((\sigma+\hat \sigma) (\partial_t u+\sigma u),\,(\partial_t u+\sigma u) \right )_{\Omega_\rho^+}= \left(\partial_t f+\sigma f,\, \partial_t u+\sigma u\right)_{\Omega_\rho^+}. \end{aligned} \end{eqnarray} Since \begin{eqnarray*} \int_0^t \left((\sigma+\hat \sigma) (\partial_\tau u+\sigma u),\,(\partial_\tau u+\sigma u) \right )_{\Omega_\rho^+}\, d\tau\geq0, \end{eqnarray*} integrating (\ref{eq:st-1}) from $0$ to $t$ and applying Green's first identity, we obtain \begin{eqnarray} \label{eq:t-su} \begin{aligned} &\frac{1}{2}\| \partial_t u+\sigma u\|_{L^2(\Omega_\rho^+)}^2+ \frac{1}{2}\| \partial_t u^*+\sigma u^*\|_{L^2(\Omega_\rho^+)}^2- \int_0^t \left( (\partial_\tau p+\sigma p),\, \nabla ( \partial_\tau u+\sigma u)\right)_{\Omega_\rho^+}\,d\tau\\ \leq & \frac{1}{2}\| \partial_t u|_{t=0}\|_{L^2(\Omega_\rho^+)}^2+\frac{1}{2}\| \partial_t u^*|_{t=0}\|_{L^2(\Omega_\rho^+)}^2 +\int_0^t\left(\partial_\tau f+\sigma f,\, \partial_\tau u+\sigma u\right)_{\Omega_\rho^+}\,d\tau . \end{aligned} \end{eqnarray} Here we have used the fact that $u|_{t=0}=u^*|_{t=0}=0$. We then apply $\partial_t $ to the first equation of (\ref{eq:t-pml-b}) and (\ref{eq:t-pml-c}) and eliminate the term with $p^*$. This gives \begin{eqnarray*} \partial_{t}^2 p+\Lambda_1 \partial_t p +\Lambda_2 \nabla u+\nabla \partial_t u=0. \end{eqnarray*} Multiplying the above equation by $\partial_t p +\sigma p$ and integrating over $\Omega_\rho^+$ yield \begin{eqnarray} \label{eq:t-p} \begin{aligned} &\frac{1}{2}\frac{d}{dt}\| \partial_t p+\sigma p\|_{L^2(\Omega_\rho^+)}^2+ \left( \nabla ( \partial_t u+\sigma u),\, (\partial_t p+\sigma p)\right)_{\Omega_\rho^+} \\ &+\left((\Lambda_1-\sigma I) \partial_t p,\,(\partial_t p+\sigma p) \right )_{\Omega_\rho^+}=0. \end{aligned} \end{eqnarray} Since $\sigma=\hat\sigma$, we have $\Lambda_1=\Lambda_2=\sigma I$ and $\Lambda_1-\sigma I=0$. Thus it follows from (\ref{eq:t-p}) that \begin{eqnarray} \label{eq:t-sp} \frac{1}{2}\| \partial_t p+\sigma p\|_{L^2(\Omega_\rho^+)}^2+ \int_0^t \left( \nabla ( \partial_\tau u+\sigma u),\, (\partial_\tau p+\sigma p)\right)_{\Omega_\rho^+} \,d\tau =\| \partial_t p|_{t=0}\|_{L^2(\Omega_\rho^+)}^2. \end{eqnarray} Adding (\ref{eq:t-su}) and (\ref{eq:t-sp}) we get \begin{eqnarray} &&\frac{1}{2}\| \partial_t u+\sigma u\|_{L^2(\Omega_\rho^+)}^2+ \frac{1}{2}\| \partial_t u^*+\sigma u^*\|_{L^2(\Omega_\rho^+)}^2+ \frac{1}{2}\| \partial_t p+\sigma p\|_{L^2(\Omega_\rho^+)}^2 \nonumber\\ &\leq & \frac{1}{2}\| \partial_t u|_{t=0}\|_{L^2(\Omega_\rho^+)}^2 +\frac{1}{2}\| \partial_t u^*|_{t=0}\|_{L^2(\Omega_\rho^+)}^2 \nonumber\\ &&+\| \partial_t p|_{t=0}\|_{L^2(\Omega_\rho^+)}^2 +\int_0^t\left(\partial_\tau f+\sigma f,\, \partial_\tau u+\sigma u\right)_{\Omega_\rho^+}\,d\tau. \nonumber \end{eqnarray} It follows from the compatibility conditions in (\ref{eq:t-pml-a})-(\ref{eq:t-pml-b}) and the initial conditions (\ref{eq:t-pml-f}) that \begin{equation} \partial_t u|_{t=0}=f|_{t=0}=0,\quad \partial_t u^*|_{t=0}=0, \quad \partial_t p|_{t=0}=0. 
\end{equation} Applying the Cauchy inequality with $\varepsilon$, we have \begin{eqnarray*} &&\max_{0\leq t\leq T} \left( \|\partial_t u +\sigma u \|_{L^2(\Omega_\rho^+)} +\|\partial_t p +\sigma p \|_{L^2(\Omega_\rho^+)} +\|\partial_t u^* +\sigma u^* \|_{L^2(\Omega_\rho^+)} \right)\\ \leq && C \int_0^T \| \partial_t f +\sigma f\|_{L^2(\Omega_\rho^+)}\, dt. \end{eqnarray*} \end{proof} The following lemma will be used to prove the stability of truncated PML problem (\ref{eq:t-pml}) which can be directly obtained from \cite[Lemma 3.2]{Chen12}. \begin{lem} \label{lem:sg} It holds that \begin{eqnarray*} \max_{0\leq t\leq T} \| \sigma u \|_{L^2(\Omega_\rho^+)}\leq \max_{0\leq t\leq T} \| \partial_t u+\sigma u \|_{L^2(\Omega_\rho^+)}. \end{eqnarray*} \end{lem} The main result of this subsection is stated as follows. \begin{thm} \label{thm:gg} The solution $(u,p,u^*,p^*)$ to the truncated PML problem (\ref{eq:t-pml}) satisfies the stability estimate \begin{eqnarray*} &&\max_{0\leq t\leq T} \left( \|\partial_t u \|_{L^2(\Omega_\rho^+)} +\|\partial_t p \|_{L^2(\Omega_\rho^+)} +\|\partial_t u^* \|_{L^2(\Omega_\rho^+)}++\|\partial_t p^* \|_{L^2(\Omega_\rho^+)} \right)\\ & \leq & C \int_0^T \| \partial_t f +\sigma f\|_{L^2(\Omega_\rho^+)}\, dt. \end{eqnarray*} \end{thm} \begin{proof} It follows from Lemma \ref{lem:sg} that \begin{eqnarray*} \max_{0\leq t\leq T}\| \partial_t u \|_{L^2(\Omega_\rho^+)} &\leq& \max_{0\leq t\leq T}\| \partial_t u +\sigma u \|_{L^2(\Omega_\rho^+)}+ \max_{0\leq t\leq T}\| \sigma u \|_{L^2(\Omega_\rho^+)} \\ &\leq& 2\max_{0\leq t\leq T}\| \partial_t u +\sigma u \|_{L^2(\Omega_\rho^+)}. \end{eqnarray*} Similarly, we obtain \begin{eqnarray*} \max_{0\leq t\leq T}\| \partial_t p \|_{L^2(\Omega_\rho^+)} &\leq& 2\max_{0\leq t\leq T}\| \partial_t p +\sigma p \|_{L^2(\Omega_\rho^+)}. \end{eqnarray*} Using (\ref{eq:t-pml-b}) and Lemma \ref{lem:sg}, \begin{eqnarray*} \max_{0\leq t\leq T}\| \partial_t u^* \|_{L^2(\Omega_\rho^+)} =\max_{0\leq t\leq T}\| \sigma u \|_{L^2(\Omega_\rho^+)} \leq \max_{0\leq t\leq T}\| \partial_t u +\sigma u \|_{L^2(\Omega_\rho^+)}. \end{eqnarray*} By (\ref{eq:t-pml-c}) and Lemma \ref{lem:sg}, one deduces \begin{eqnarray*} \max_{0\leq t\leq T}\| \partial_t p^* \|_{L^2(\Omega_\rho^+)} \;\; &\leq& 2\max_{0\leq t\leq T}\| \partial_t p^* +\sigma p^* \|_{L^2(\Omega_\rho^+)}\\ &=&2\max_{0\leq t\leq T}\| \partial_t p +\sigma p \|_{L^2(\Omega_\rho^+)}.\\ \end{eqnarray*} The desired estimate of Theorem \ref{thm:gg} follows from Theorem \ref{thm:sg} and the above estimates. \end{proof} \section {Convergence of PML method} In this section, we shall prove convergence of the PML method. First, we discuss the stability of an auxiliary problem for $(\tilde u, \tilde p, \tilde u^*, \tilde p^*) $ over the PML layer $\Omega_{PML}^+$. 
Consider \begin{eqnarray} \label{eq:t-pml-l} \left\{\begin{array}{lll} \partial_t \tilde u+(\sigma +\hat \sigma)\tilde u +\sigma \tilde u^{*}+ \nabla \cdot \tilde p =0 \quad &&\mbox{in}\quad \Omega_{PML}^+\times (0,T),\\ \partial_t \tilde{p}^{*}=-\nabla\tilde u \quad \partial_t \tilde{u}^{*}=-\sigma \tilde u \quad &&\mbox{in}\quad \Omega_{PML}^+\times (0,T),\\ \partial_t \tilde{p}+\Lambda_1 \tilde p= \partial_t \tilde{p}^{*}+\Lambda_2 \tilde p^{*} \quad &&\mbox{in}\quad \Omega_{PML}^+\times (0,T),\\ \tilde u =0 \quad &&\mbox{on}\quad (\Gamma_R^+ \cup \Gamma_0) \times (0,T),\\ \tilde u =\xi \quad &&\mbox{on}\quad \Gamma_\rho^+ \times (0,T)\\ \tilde u|_{t=0} =\tilde p|_{t=0} =\tilde u^{*}|_{t=0}=\tilde p^{*}|_{t=0} \quad &&\mbox{in}\quad \Omega_{PML}^+. \end{array}\right. \end{eqnarray} Below we prove a trace lemma which will be used in proving the stability of the above auxiliary problem (\ref{eq:t-pml-l}). \begin{lem} \label{lem:pml-stability} Let $\xi \in H^2(0,T;H^{1/2}_0(\Gamma_\rho^+))$. Then there exists a function $\zeta\in H^2(0,T;\\H^1(\Omega_{PML}^+))$ such that $\zeta=0$ on $\Gamma_R^+\times(0,T)$, $\zeta=\xi$ on $\Gamma_\rho^+\times(0,T)$ and \begin{eqnarray} &&\|\partial_t^2 \zeta\|_{L^2(0,T;L^2(\Omega_{PML}^+))}\leq C \rho^{1/2} \|\partial_t^2 \xi\|_{L^2(0,T;H^{-1/2}(\Gamma_\rho^+))}, \label{lem:tr-1}\\ &&\|\nabla \partial_t \zeta\|_{L^2(0,T;L^2(\Omega_{PML}^+))}\leq C \rho^{-1/2} \|\partial_t \xi\|_{L^2(0,T;H^{1/2}_0(\Gamma_\rho^+))}. \label{lem:tr-2} \end{eqnarray} \end{lem} \begin{proof} Expand $\xi(\theta,t)$ as follows \begin{eqnarray*} \xi(\theta,t)=\sum_{n=1}^{\infty} \xi_n(t)\sin n\theta,\quad \xi_n=\frac{2}{\pi}\int_{0}^{\pi} \xi(\theta, t)\sin n\theta \, d\theta. \end{eqnarray*} Let $\chi_n\in C^{\infty}[R,\rho]$ such that $\chi_n(\rho)=1$, $0\leq\chi_n(r)\leq1$, $| \chi_n^{'}|\leq C \delta_n^{-1}$ for $r\in [R,\rho]$, and supp$(\chi_n)\subset(\rho-\delta_n,\rho)$, where $\delta_n=(\rho-R)/\sqrt{1+n^2}$, $n\in{\mathbb Z}$. Define the function \begin{eqnarray*} \zeta(t,r,\theta):=\sum_{n=1}^{\infty} \xi_n(t)\chi_n(r)\sin n\theta. \end{eqnarray*} Then, it is clear that $\zeta=0$ on $\Gamma_R^+\times(0,T)$, $\zeta=\xi$ on $\Gamma_\rho^+\times(0,T)$. It is obvious that \begin{eqnarray*} \int_0^T \|\partial_t^2\zeta\|_{L^2(\Omega_{PML}^+)}^2 \,dt &=&\int_0^T \int_0^{\pi} \int_R^\rho \Big| \sum_{n=1}^{\infty} \xi_n^{''}(t)\chi_n(r)\sin n \theta \Big|^2 r \,dr d\theta dt\\ &=& \int_0^T \frac{\pi}{2} \sum_{n=1}^{\infty} \int_R^\rho \big| \xi_n^{''}(t)\big|^2 \big| \chi_n(r)\big|^2 r \, dr dt \\ &\leq & \int_0^T \frac{\pi}{2} \sum_{n=1}^{\infty} \int_{\rho-\delta_n}^\rho \big| \xi_n^{''}(t)\big|^2 r \, dr dt \\ &\leq& \int_0^T \frac{\pi}{2}\rho \sum_{n=1}^{\infty} \delta_n \big| \xi_n^{''}(t)\big|^2 \, dt \\ &\leq& \int_0^T |\rho-R| \|\partial_t^2 \xi\|_{H^{-1/2}(\Gamma_\rho^+)}^2 \, dt \\ & \leq & C \rho \|\partial_t^2 \xi\|_{L^2(0,T;H^{-1/2}(\Gamma_\rho^+))}^2. \end{eqnarray*} This proves (\ref{lem:tr-1}). Similarly, one can prove (\ref{lem:tr-2}). \end{proof} Theorem \ref{thm:au} below describes the stability of the solution to the problem (\ref{eq:t-pml-l}) in $\Omega_{PML}^+$. It can be easily proved by combining Lemmas \ref{lem:e-pml} and \ref{lem:pml-stability} together with the Parseval identity. Since the proof is quite similar to \cite[Theorem 4.3]{Chen09}, we omit the detailed proof. \begin{lem} \label{thm:au} Let $s_1=1/T$, $(\phi, \Phi, \phi^*, \Phi^*)$ be the solution of the PML problem (\ref{eq:t-pml-l}) in $\Omega_{PML}^+$. 
Then \begin{eqnarray*} & &\|\partial_t \Phi\|_{L^2(0,T;L^2(\Omega_{PML}^+))}+\|\nabla\cdot \Phi\|_{L^2(0,T;L^2(\Omega_{PML}^+))}\\ &\leq & (1+\sigma T)^2 T \left( \rho \|\partial_t^2 \xi\|_{L^2(0,T;H^{-1/2}(\Gamma_\rho^+))} + \rho^{-1} \|\partial_t \xi\|_{L^2(0,T;H^{1/2}_0(\Gamma_\rho^+))} \right ). \end{eqnarray*} \end{lem} We also need an estimate for the convolution proved in \cite[Lemma 5.2]{Chen09}. \begin{lem} \label{lem:pml-2} Let $g_1$, $g_2$ $\in $ $L^2(0,T)$. For any $\textnormal{Re}(s)=s_1>0$, it holds that \begin{eqnarray} \| g_1 \ast g_2\|_{L^2(0,T)}\leq e^{s_1 t} \left( \max_{-\infty<s_2<+\infty}|\mathscr{L}(g_1)(s_1+is_2)| \right ) \| g_2\|_{L^2(0,T)}. \end{eqnarray} \end{lem} The following result follows directly from the proof of Lemma \ref{lem:t}. \begin{lem} \label{lem:con} Given $t\geq0$ and $\omega\in L^2(0,T;H^{1/2}_0(\Gamma_R^+))$ with the initial condition $\omega(\cdot,0)=0$, it holds that \begin{eqnarray*} -\textnormal{Re}\int_0^t e^{-2s_1 \tau} \Big\langle \mathscr{T} \left(\int_0^\tau \omega(x,\eta) \,d \eta \right) ,\, \omega(x,\eta) \Big \rangle\, d\tau\geq0. \end{eqnarray*} \end{lem} Now, we are ready to verify the exponential convergence of the time-domain PML method. \begin{thm} \label{convergence} Let $(u,p)$ and $(\hat u, \hat p, \hat u^*, \hat p^*)$ be the solution of the problems (\ref{eqs:h-1}) and (\ref{t-pml}) with $s_1=T^{-1}$, respectively. Then \begin{eqnarray*} && \max_{0\leq t \leq T}\left( \|u-\hat u \|_{L^2(\Omega_R^+)}+\|p-\hat p \|_{L^2(\Omega_R^+) }\right )\\ \leq && C\,(1+\sigma T)^2 \rho T^{3/2} e^{-\rho \hat \sigma(\rho)\left(1-\frac{R^2}{\rho^2}\right)} \| \partial_t^2 \hat u\|_{L^2(0,T;H^{-1/2}(\Gamma_R^+))} \\ &&+C\,(1+\sigma T)^2 \rho^{-1}T^{3/2} e^{-\rho \hat \sigma(\rho)\left(1-\frac{R^2}{\rho^2}\right)} \| \partial_t \hat u\|_{L^2(0,T;H^{1/2}_0(\Gamma_R^+))}, \end{eqnarray*} where $C>0$ is a constant. \end{thm} \begin{proof} By (\ref{eqs:h-1}) and (\ref{eq:t-pml-a})-(\ref{eq:t-pml-b}), it follows that \begin{eqnarray} \frac{\partial (u-\hat u)}{\partial t}+\nabla\cdot(p-\hat p)=0 \quad &&\mbox{in} \; \Omega_R^{+}\times(0,T), \label{eq:c-1}\\ \frac{\partial (p-\hat p)}{\partial t}+\nabla (u-\hat u)=0 \quad &&\mbox{in} \; \Omega_R^{+}\times(0,T). \label{eq:c-2} \end{eqnarray} Multiplying both sides of (\ref{eq:c-1}) by a test function $v\in X_R$, using the DtN boundary condition (\ref{bc:dtn-1}) and Green's first formula, we obtain \begin{eqnarray} \label{eq:c-3} \begin{aligned} &\Big (\frac{\partial (u-\hat u)}{\partial t},\,v \Big)_{\Omega_R^{+}}- (p-\hat p,\, \nabla v )_{\Omega_R^{+}}-\Big\langle \mathscr{T}\Big(\int_0^t (u-\hat u)\, d\tau\Big),\, v \Big \rangle_{\Gamma_R^+} \\ =&\Big\langle \hat p \cdot \hat x+\mathscr{T}\Big(\int_0^t \hat u\, d\tau\Big),\, v \Big\rangle_{\Gamma_R^+}. \end{aligned} \end{eqnarray} Define \begin{eqnarray*} \omega:=u-\hat u,\quad \omega^*:=\int_0^t u-\hat u\,d\tau. \end{eqnarray*} Taking $v=\omega$ in (\ref{eq:c-3}) and applying (\ref{eq:c-3}) with $p-\hat p|_{t=0}=0$, we have \begin{eqnarray} \label{eq:c-4} \frac{1}{2}\frac{d}{dt}\Big( \| \omega \|_{L^2(\Omega_R^+)}^2 + \| \nabla \omega^* \|_{L^2(\Omega_R^+)}^2 \Big)-\langle \mathscr{T}(\omega^*),\, \omega \rangle_{\Gamma_R^+}= \Big\langle \hat p \cdot \hat x+\mathscr{T}\Big(\int_0^t \hat u\, d\tau\Big),\, \omega \Big\rangle_{\Gamma_R^+}. 
\end{eqnarray} Denote the spaces \begin{eqnarray*} &&X(0,T;\Omega_R^+):=\Big\{v\in L^{\infty}(0,T;L^2(\Omega_R^+)),\, v^*=\int_0^t v\, dt \in L^{\infty}(0,T;H^1(\Omega_R^+))\Big\},\\&&Y(0,T;\Gamma_R^+):=\Big\{\phi:\int_0^T \langle \phi,\,v \rangle_{\Gamma_R^+}\,dt<\infty, \forall\; v\in X(0,T;\Omega_R^+) \Big\}. \end{eqnarray*} It is clear that $X(0,T;\Omega_R^+)$ and $Y(0,T;\Gamma_R^+)$ are Banach spaces with the norms, respectively \begin{eqnarray} &&\|v\|_{X(0,T;\Omega_R^+)}=\sup_{0\leq t\leq T}\Big( \| v\|_{L^2(\Omega_R^+)}^2 + \| \nabla v^*\|_{L^2(\Omega_R^+)}^2 \Big)^{1/2},\\ &&\|\phi \|_{Y(0,T;\Gamma_R^+)}=\sup_{v\in X(0,T;\Omega_R^+)}\frac{\Big| \int_0^T \langle \phi,\, v \rangle_{\Gamma_R^+} \,dt \Big| }{\|v\|_{X(0,T;\Omega_R^+)}}. \label{eq:con-y} \end{eqnarray} Multiplying both sides of (\ref{eq:c-4}) by $e^{-2s_1 t}$ and then integrating from $0$ to $t$. Since $\omega|_{t=0}=\omega^*|_{t=0}=0$, taking the real part of the resulting identity and using Lemma \ref{lem:con} and trace theorem, we obtain \begin{eqnarray*} \|e^{-s_1 t} \omega\|_{{X(0,T;\Omega_R^+)}}^2\leq& C \Big\| e^{-s_1 t} \Big (\hat p \cdot \hat x+\mathscr{T}\Big(\int_0^t \hat u\, d\tau \Big) \Big )\Big \|_{Y(0,T;\Gamma_R^+)} \, \|e^{-s_1 t} \omega\|_{Y(0,T;\Gamma_R^+)}\\ \leq & C \| e^{-s_1 t} \Big (\hat p \cdot \hat x+\mathscr{T}\Big(\int_0^t \hat u\, d\tau\Big)\Big ) \Big\|_{Y(0,T;\Gamma_R^+)} \, \|e^{-s_1 t} \omega\|_{X(0,T;\Omega_R^+)}. \end{eqnarray*} Hence, by taking $s_1\rightarrow 0$ \begin{eqnarray} \label{eq:con-4} \sup_{0\leq t\leq T}\Big( \| \omega\|_{L^2(\Omega_R^+)}^2 + \| \nabla \omega^*\|_{L^2(\Omega_R^+)}^2 \Big) \leq C\Big \| \hat p \cdot \hat x+\mathscr{T}\Big(\int_0^t \hat u\, d\tau \Big) \Big\|_{Y(0,T;\Gamma_R^+)}. \end{eqnarray} It is clear that $\mathscr{T}\Big(\int_0^t \hat u\, d\tau \Big) =-\tilde {\hat p}\cdot \hat x$ on $\Gamma_R^+$, where $\tilde {\hat p}$ defines the PML extension of $\hat p$. Hence, in order to estimate $\Big\| \hat p \cdot \hat x+\mathscr{T}\Big(\int_0^t \hat u\, d\tau \Big) \Big\|_{Y(0,T;\Gamma_R^+)}$, it suffices to estimate $\|(\hat p-\tilde{\hat p})\cdot \hat x\|_{Y(0,T;\Gamma_R^+)}$. Since any function $v\in X(0,T;\Omega_R^+)$ can be extended into $\Omega_{PML}^+\times(0,T)$ such that $v=0$ on $\Gamma_\rho^+\times(0,T)$ and $\|v\|_{X(0,T;\Omega_R^+)}\leq C \|v\|_{X(0,T;\Omega_{PML}^+)}$, it follows by (\ref{eq:con-y}) that \begin{eqnarray} \begin{aligned} \label{eq:con-5} \|(\hat p-\tilde{\hat p})\cdot \hat x\|_{Y(0,T;\Gamma_R^+)} =&& \sup_{v\in X(0,T;\Omega_R^+)}\frac{\Big| \int_0^T \langle \phi,\, v \rangle_{\Gamma_R^+} \,dt \Big| }{\|v\|_{X(0,T;\Omega_R^+)}}\\ \leq && \sup_{v\in X(0,T;\Omega_R^+)}\frac{\Big| \int_0^T \langle \phi,\, v \rangle_{\Gamma_R^+} \,dt \Big| }{\|v\|_{X(0,T;\Omega_{PML}^+)}}. \end{aligned} \end{eqnarray} For any $v\in X(0,T;\Omega_{PML}^+)$ it has that $v=0$ on $\Gamma_\rho^+$, and then, by divergence theorem, \begin{eqnarray} \label{eq:con-6} \int_0^T \langle (\hat p- \tilde{\hat p})\cdot \hat x,\, v \rangle_{\Gamma_R^+}\, dt= \int_0^T \Big[(\nabla\cdot(\hat p-\tilde{\hat{p}}),\,v)_{\Omega_{PML}^+}+(\hat p-\tilde{\hat{p}},\,\nabla v)_{\Omega_{PML}^+} \Big]\,dt. 
\end{eqnarray} Now, it follows that, for any $v\in X(0,T;\Omega_{PML}^+)$, by the definition of $v^*$ and the initial condition $\hat p-\tilde{\hat{p}}|_{t=0} =0$, \begin{eqnarray} \label{eq:con-7} \begin{aligned} \Big | \int_0^T (\hat p- \tilde{\hat p},\, \nabla v)_{\Omega_{PML}^+}\, dt \Big | = &&(\hat p- \tilde{\hat p},\,\nabla v^*)_{\Omega_{PML}^+}\Big|_{0}^T-\int_0^T (\partial_t(\hat p- \tilde{\hat p}),\, \nabla v^*)_{\Omega_{PML}^+}\, dt \\ \leq && 2 \max_{0\leq t\leq T}\| \nabla v^*\|_{L^2(\Omega_{PML}^+)} \int_0^T \| \partial_t(\hat p- \tilde{\hat p})\|_{L^2(\Omega_{PML}^+)}\, dt. \end{aligned} \end{eqnarray} Combining (\ref{eq:con-5}), (\ref{eq:con-6}), (\ref{eq:con-7}) and using the Cauchy-Schwartz inequality, we have \begin{eqnarray*} \|(\hat p-\tilde{\hat p})\cdot \hat x\|_{Y(0,T;\Gamma_R^+)} \leq C \int_0^T \| \nabla \cdot (\hat p- \tilde{\hat p})\|_{L^2(\Omega_{PML}^+)}+ \| \partial_t(\hat p- \tilde{\hat p})\|_{L^2(\Omega_{PML}^+)} \,dt. \end{eqnarray*} This together with (\ref{eq:con-4}) leads to \begin{eqnarray*} &&\sup_{0\leq t\leq T}\Big( \| \omega\|_{L^2(\Omega_R^+)}^2 + \| \nabla \omega^*\|_{L^2(\Omega_R^+)}^2 \Big) \\ \leq && C \int_0^T \| \nabla \cdot (\hat p- \tilde{\hat p})\|_{L^2(\Omega_{PML}^+)}+ \| \partial_t(\hat p- \tilde{\hat p})\|_{L^2(\Omega_{PML}^+)} \,dt. \end{eqnarray*} Let $(\tilde{\hat u}, \tilde{\hat p}, \tilde{\hat u}^*, \tilde{\hat p}^*)$ be the PML extension of $(\hat u, \hat p, \hat u^*, \hat p^*)$. Then, $(\hat u-\tilde{\hat{u}}, \hat p- \tilde{\hat p}, \hat u^*-\tilde{\hat{u}}^*, \hat p^*- \tilde{\hat p}^*)$ satisfies the problem (\ref{eq:t-pml-l}) with $\xi=-\tilde{\hat u}|_{\Gamma_\rho^+}$. It follows by Theorem \ref{thm:au} and Cauchy-Schwartz inequality that \begin{eqnarray} \begin{aligned} \label{eq:con-8} &\sup_{0\leq t\leq T}\Big( \| \omega\|_{L^2(\Omega_R^+)}^2 + \| \nabla \omega^*\|_{L^2(\Omega_R^+)}^2 \Big)\\ \leq& \quad C (1+\sigma T)^2 T^{3/2} \left( \rho \|\partial_t^2 \tilde{\hat{u}}\|_{L^2(0,T;H^{-1/2}(\Gamma_\rho^+))} + \rho^{-1} \|\partial_t \tilde{\hat{u}}\|_{L^2(0,T;H^{1/2}_0(\Gamma_\rho^+))} \right ). \end{aligned} \end{eqnarray} Now we estimate each term on the right hand side of the above inequality. Since $\tilde{\hat u}$ is the PML extension of $\hat u$ in the time-domain for $r>R$, it can be written as \begin{eqnarray*} \tilde{\hat u}(r,\theta, t) =\sum_{n=1}^{\infty} \left [ \mathscr{L}^{-1}\Big(\frac{K_n'(s\tilde{r})}{K_n(sR)}\Big)*\hat u_n(R,t)\right]\,\sin{n\theta}, \end{eqnarray*} where $\hat u_n(R,t)=\frac{2}{\pi}\int_0^{\pi} \hat u(R,\theta,t)\sin n\theta \,d\theta$. Since $\hat u_n(R,0)=0$, we have \begin{eqnarray*} \partial_t \tilde{\hat u}=\sum_{n=1}^{\infty} \left [ \mathscr{L}^{-1}\Big(\frac{K_n'(s\tilde{r})}{K_n(sR)}\Big)*\partial_t \hat u_n(R,t)\right]\,\sin{n\theta}. 
\end{eqnarray*} Then, since $s\tilde{\rho}=s\rho+\hat\sigma\rho$, by Lemmas \ref{lem:pml-1} and \ref{lem:pml-2}, we know that for any $s_1>0$ \begin{eqnarray*} &&\|\partial_t \tilde {\hat u}\|_{L^2(0,T;H^{1/2}_0(\Gamma_\rho^+))}^2=\int_0^T \|\partial_t \tilde u \|_{H^{1/2}_0(\Gamma_\rho^+))}^2\,dt \\ =&& \int_0^T\frac{\pi}{2} \rho \sum_{n=1}^{\infty} (1+n^2)^{\frac{1}{2}} \left [ \mathscr{L}^{-1}\left(\frac{K_n'(s\tilde{\rho})}{K_n(sR)}\right)*\partial_t \hat u_n(R,t)\right]^2 \,dt \\ =&& \frac{\pi}{2} \rho \sum_{n=1}^{\infty} (1+n^2)^{\frac{1}{2}} \Big \| \mathscr{L}^{-1}\left(\frac{K_n'(s\tilde{\rho})}{K_n(sR)}\right)*\partial_t \hat u_n(R,t)\Big \|^2_{L^2(0,T)}\\ \leq && \frac{\pi}{2} \rho\, e^{2s_1 T} \sum_{n=1}^{\infty} (1+n^2)^{\frac{1}{2}} \max_{-\infty<s_2<+\infty}\left|\frac{K_n'(s\tilde{\rho})}{K_n(sR)}\right|^2\| \partial_t \hat u_n(R,t)\|^2_{L^2(0,T)}\\ \leq && \frac{\rho}{R}\, e^{2s_1 T} \max_{-\infty<n<+\infty} \max_{-\infty<s_2<+\infty}\left|\frac{K_n'(s\tilde{\rho})}{K_n(sR)}\right|^2\| \partial_t \hat u\|^2_{L^2(0,T;H^{1/2}_0(\Gamma_R^+))}\\ \leq && \frac{\rho}{R}\, e^{2s_1 T} e^{-2\rho \hat \sigma(\rho)\left(1-\frac{R^2}{\rho^2}\right)} \| \partial_t \hat u\|^2_{L^2(0,T;H^{1/2}_0(\Gamma_R^+))}.\\ \end{eqnarray*} This implies that \begin{eqnarray} \|\partial_t \tilde{ \hat u}\|_{L^2(0,T;H^{1/2}_0(\Gamma_\rho^+))} \leq Ce^{-\rho \hat \sigma(\rho)\left(1-\frac{R^2}{\rho^2}\right)} \| \partial_t \hat u\|_{L^2(0,T;H^{1/2}_0(\Gamma_R^+))}. \label{eq:con-9} \end{eqnarray} Analogously, we obtain \begin{eqnarray} \|\partial_t^2 \tilde {\hat u}\|_{L^2(0,T;H^{-1/2}_0(\Gamma_\rho^+))} \leq C e^{-\rho \hat \sigma(\rho)\left(1-\frac{R^2}{\rho^2}\right)} \| \partial_t^2 \hat u\|_{L^2(0,T;H^{-1/2}(\Gamma_R^+))}. \label{eq:con-10} \end{eqnarray} Combining (\ref{eq:con-8}) with (\ref{eq:con-9}) and (\ref{eq:con-9}), we complete the proof. \end{proof} \begin{rem} Theorem \ref{convergence} illustrates that the exponential convergence of error between the PML solution and the original solution can be achieved by enlarging the absorbing parameter $\sigma$ or the thickness $\rho-R$ of the PML layer . \end{rem} \begin{rem} We remark that the results in this paper can be easily extended to the Neumann boundary condition imposed on $\Gamma_0$. In the Neumann case, one should expand the solutions in terms of cosine functions in the Laplace domain and change correspondingly the solution spaces. However, we don't know how to extend the approach to the case of the impedance boundary condition. \end{rem} \section{Numerical implementation} In this section, we will present two numerical examples to demonstrate the convergence of the PML method. The PML equations are discretized by the finite element method in space and finite difference in time. The computations are carried out by the software FreeFEM. Multiply (\ref{eq:t-pml-a})-(\ref{eq:t-pml-c}) with test functions $v\in H^1_{0}(\Omega_\rho^+)$, $q\in L^2(\Omega_\rho^+)$, $v^*\in H^1_{0}(\Omega_\rho^+)$, $q^*\in L^2(\Omega_\rho^+)$, respectively. 
The weak formulation of system (\ref{eq:t-pml}) reads as follow: find $u\in L^2(0,T; H^1_{0}(\Omega_\rho^+))$, $p\in L^2(0,T; L^2(\Omega_\rho^+))$, $u^*\in L^2(0,T; L^2(\Omega_\rho^+))$, $p^*\in L^2(0,T; L^2(\Omega_\rho^+))$ such that \begin{subequations} \label{eq:t-pml-weak} \begin{align} &\left(\partial_t \tilde u,v\right)_{\Omega_\rho^+}+((\sigma +\hat \sigma)\tilde u, v)_{\Omega_\rho^+} +(\sigma \tilde u^{*}, v)_{\Omega_\rho^+}- ( \tilde p, \nabla v)_{\Omega_\rho^+} =(f,v)_{\Omega_\rho^+}, \\ &(\partial_t \tilde{p},q)_{\Omega_\rho^+}+(\Lambda_1 \tilde p,q)_{\Omega_\rho^+}-( \partial_t \tilde{p}^{*},q)_{\Omega_\rho^+}-(\Lambda_2 \tilde p^{*},q)_{\Omega_\rho^+}=0, \\ &(\partial_t \tilde{u}^{*}, v^{*})_{\Omega_\rho^+}-(\sigma \tilde u, q^{*})_{\Omega_\rho^+}=0, \\ &(\partial_t \tilde{p}^{*},q^{*})_{\Omega_\rho^+} + (\nabla \tilde u, q^{*})_{\Omega_\rho^+}=0,\\ &\tilde u =0 \quad \mbox{on}\; \partial D \cup \Gamma_0 \times (0,T),\\ &\tilde u =0 \quad \mbox{on}\; \Gamma_\rho^+ \times (0,T),\\ &\tilde u|_{t=0} =\tilde p|_{t=0} =\tilde u^{*}|_{t=0}=\tilde p^{*}|_{t=0} =0\quad \mbox{in}\; \Omega_\rho^+. \end{align} \end{subequations} Solutions of the weak form (\ref{eq:t-pml-weak}) will be numerically solved by an implicit finite difference in time and a finite element method in space. Let $\{t_0, t_1,...,t_N\}$ be a partition of the time interval $[0, T]$ and $\delta_ t^n = t_{n+1}-t_n$ be the $n$-th time-step size. Let $\mathcal M_h$ be a regular triangulation of $\Omega_\rho^+$. We assume the elements $K \in \mathcal M_h$ may have one curved edge align with $\Gamma_0\cup\Gamma_\rho^+$ so that $\Omega_\rho^+ = \cup_{ K\in \mathcal M_h }K$. As usual, we shall use the most simple continuous finite elements in the computation. The solutions $u$, $p$, $u^{*}$ and $p^{*}$ will be approximated in the finite element space $P_1$ for piecewise linear functions, namely, the standard Taylor-Hood finite element for the velocity-pressure variables, satisfying the inf-sup condition. Denote by $P_0$ the finite element space for piecewise constant functions. Define the spaces $L_h$, $\tilde V_h$, $V_h$ and $W_h$ as \begin{eqnarray*} L_h=\{\sigma \in L^2 (\Omega_{\rho}^+)\,|\,\forall K\in \mathcal T_h,\, \sigma_{|K}\in P_0\},\\ \tilde V_h=\{u\in H^1_{\star}(\Omega_{\rho}^+)\,|\,\forall K\in \mathcal T_h,\, u_{|K}\in P_1\},\\ V_h=\{u\in H^1(\Omega_{\rho}^+)\,|\,\forall K\in \mathcal T_h,\, u_{|K}\in P_1\},\\ W_h=\{p \in L^2 (\Omega_{\rho}^+)\,|\,\forall K\in \mathcal T_h,\, \sigma_{|K}\in P_1\}. \end{eqnarray*} Let $(u_h^n, p_h^n, u_h^{*n}, p_h^{*n}) \in \tilde V_h\times (W_h)^2\times V_h\times (W_h)^2$ be an approximation of $u(t_n), p(t_n), u^{*}(t_n)$ and $p^{*}(t_n)$ at the time point $t_n$. The approximated solution at $t_{n+1}$, which we denote as $(u_h^{n+1}, p_h^{n+1}, u_h^{*n+1}, p_h^{*n+1}) \in \tilde V_h\times (W_h)^2\times V_h\times (W_h)^2$, will be obtained by the following typical temporal scheme: \begin{figure}[htb] \centering \includegraphics[scale=0.6]{Fig/mesh.png} \caption{Geometry of the computational domain where $\{x\in \Omega: R<|x|<\rho\}$ is the PML layer. 
} \label{fig:mesh} \end{figure} \begin{subequations} \begin{align*} &\left( \frac{\tilde u_h^{n+1}-\tilde u_h^{n}}{\delta_t^n} ,v\right)_{\Omega_\rho^+}+\left((\sigma +\hat \sigma)\tilde u_h^{n+1}, v\right)_{\Omega_\rho^+} +\left(\sigma \tilde u_h^{*n+1}, v\right)_{\Omega_\rho^+} - \left( \tilde p_h^{n+1}, \nabla v\right)_{\Omega_\rho^+} =\left(f^{n+1},v\right)_{\Omega_\rho^+}, \\ &\left(\frac{\tilde{p}_h^{n+1}-\tilde{p}_h^{n}}{\delta_t^n},q\right)_{\Omega_\rho^+}+\left(\Lambda_1 \tilde p_h^{n+1},q\right)_{\Omega_\rho^+}-\left( \partial_t \tilde{p}^{*n+1}_h,q\right)_{\Omega_\rho^+}-\left(\Lambda_2 \tilde p^{*n+1}_h,q\right)_{\Omega_\rho^+}=0, \\ &\left( \frac{\tilde u_h^{*n+1}-\tilde u_h^{*n}}{\delta_t^n}, v^{*}\right)_{\Omega_\rho^+}-\left(\sigma \tilde u_h^{n+1}, q^{*}\right)_{\Omega_\rho^+}=0, \\ &\left(\frac{\tilde{p}_h^{*n+1}-\tilde{p}_h^{*n}}{\delta_t^n},q^{*}\right)_{\Omega_\rho^+} + \left( \nabla \tilde u_h^{n+1}, q^{*}\right)_{\Omega_\rho^+}=0, \end{align*} \end{subequations} where $f^{n+1}=f(t_{n+1})$ and the Dirichlet boundary condition is imposed on $\partial \Omega_\rho^+$. In the following numerical examples, we suppose $D= \emptyset $. The local rough surface is given by \begin{eqnarray*} h(x)= \left\{\begin{array}{lll} 0, && x\in(-\infty, -\frac{\pi}{4}), \\ 0.3\sin(4x), && x\in[ -\frac{\pi}{4}, \frac{\pi}{4}],\\ 0, && x\in(\frac{\pi}{4}, \infty). \end{array}\right. \end{eqnarray*} $\textbf{Example 1}$ We consider a time harmonic source term over the local rough surface. In the computation, we take $R=2$, $\rho=3$ and set $\sigma=\hat\sigma=10$. A mesh of $9245$ vertices, $510$ edges and $18128$ triangles is adopted and the terminal time is set at $t=8$. The time harmonic source is supposed to be given by \begin{eqnarray*} f(x,t):=\frac{e^{\frac{-|x-x_0|^2}{2\eta}}}{\sqrt{2\pi}\;\eta}\, \sin(2t),\; \eta=0.1, \end{eqnarray*} \begin{figure}[htb] \centering \subfigure[$t=2$]{ \includegraphics[scale=0.2]{Fig/t2-sin2t.png} } \subfigure[$t=4$ ]{ \includegraphics[scale=0.2]{Fig/t4-sin2t.png} } \subfigure[$t=8$]{ \includegraphics[scale=0.2]{Fig/t8-sin2t.png} } \caption{Numerical solutions excited by a point source over a local rough surface at time $t=2,4,8$, respectively.} \label{fig:sint} \end{figure} \noindent where the excitation frequency is $\omega=2$ and the location of source is at $x_0=(0,0.5)$. In Figure $\ref{fig:sint}$, we show the numerical solution at $t=2,4,8$, respectively. It is observed that the waves are almost attenuated in the PML layer without reflections from the interface between the physical domain and PML layer. To validate convergence of the PML method, we compute the relative error \begin{eqnarray*} E_{rel}:=\frac{\| u^{num}-u^{exa} \|_{L^{\infty}(0,T;L^\infty(\Omega_\rho^+))}}{\|u^{exa}\|_{L^{\infty}(0,T;L^\infty(\Omega_\rho^+))}}, \end{eqnarray*} where $u^{exa}$ represents the numerical solution with relatively larger absorbing parameters $\sigma$, $\hat \sigma$ and with a larger thickness $\rho-R$ of the PML layer. Note that analytical solutions are not available in general. \begin{figure}[htb] \centering \includegraphics[scale=0.5]{Fig/con_sigma.png} \caption{ Relative error $E_{rel}$ versus PML absorbing parameter $\sigma$ for fixed PML thickness $\rho-R=1$. The vertical axis is logarithmically scaled. } \label{fig:sigma} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.5]{Fig/con_thick_sin2t.png} \caption{ Relative error $E_{rel}$ versus PML thickness $\rho-R$ for fixed PML absorbing parameter $\sigma=25$. 
\begin{figure}[htb] \centering \includegraphics[scale=0.5]{Fig/con_sigma.png} \caption{ Relative error $E_{rel}$ versus PML absorbing parameter $\sigma$ for fixed PML thickness $\rho-R=1$. The vertical axis is logarithmically scaled. } \label{fig:sigma} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.5]{Fig/con_thick_sin2t.png} \caption{ Relative error $E_{rel}$ versus PML thickness $\rho-R$ for fixed PML absorbing parameter $\sigma=25$. The vertical axis is logarithmically scaled. } \label{fig:thick-sin2t} \end{figure} Figures \ref{fig:sigma} and \ref{fig:thick-sin2t} show the decaying behavior of the relative error $E_{rel}$ as the PML absorbing parameter $\sigma$ or the PML thickness $\rho-R$ increases. In Figure \ref{fig:sigma} we take the PML absorbing parameter $\sigma$ varying between $5$ and $25$ and fix the PML layer thickness at $\rho -R=1$. Since the vertical axis is logarithmically scaled, the dashed lines indicate that the relative error $E_{rel}$ decays exponentially as $\sigma$ increases for a fixed layer thickness. In Figure \ref{fig:thick-sin2t} we display the relative error versus the PML thickness $\rho-R$ for the fixed absorbing parameter $\sigma=25$. We take the PML thickness $\rho-R$ ranging from $2.6$ to $4$. It is evident that the relative error $E_{rel}$ decays exponentially as $\rho-R$ increases for a fixed absorbing parameter. $\textbf{Example 2}$ In this example, we consider a non-harmonic source term which is not compactly supported with respect to the time variable. We take $R=2$, $\rho=3.4$ and set $\sigma=\hat \sigma =25$. A mesh of $11502$ vertices, $553$ boundary edges and $22599$ triangles is applied and the final time is set at $t=10$. The source is of the form \begin{eqnarray*} f(x,t):=\frac{e^{\frac{-|x-x_0|^2}{2\eta}}}{\sqrt{2\pi}\;\eta}\, t,\; \eta=0.1, \end{eqnarray*} which is located at $x_0=(0,0.5)$. We present in Figure \ref{fig:t} the numerical solutions at time $t=3, 7, 10$, respectively. The source keeps exciting waves for all time, and hardly any reflection is observed at the interface $\Gamma_R^+$. \begin{figure}[htb] \centering \subfigure[$t=3$]{ \includegraphics[scale=0.2]{Fig/t3.png} } \subfigure[$t=7$ ]{ \includegraphics[scale=0.2]{Fig/t7.png} } \subfigure[$t=10$]{ \includegraphics[scale=0.2]{Fig/t10.png} } \caption{Numerical solutions for a point source over a local rough surface at time $t=3,7,10$, respectively.} \label{fig:t} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.5]{Fig/con_sigma_t.png} \caption{ Relative error $E_{rel}$ versus PML absorbing parameter $\sigma$ for fixed PML thickness $\rho-R=1.4$. The vertical axis is logarithmically scaled. } \label{fig:sigma-t} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.5]{Fig/con_thick_t.png} \caption{ Relative error $E_{rel}$ versus PML thickness $\rho-R$ for fixed PML absorbing parameter $\sigma=25$. The vertical axis is logarithmically scaled. } \label{fig:thick-t} \end{figure} Figures \ref{fig:sigma-t} and \ref{fig:thick-t} show the convergence of the relative error $E_{rel}$ with respect to each of the two PML parameters $\sigma$ and $\rho-R$. In Figure \ref{fig:sigma-t}, we present the relative error $E_{rel}$ versus the PML absorbing parameter $\sigma$ changing from $5$ to $25$ for the fixed PML thickness $\rho-R=1.4$. As in \textbf{Example 1}, we observe that $E_{rel}$ decays exponentially as $\sigma$ increases. In Figure \ref{fig:thick-t} we display the relative error versus the PML thickness $\rho-R$ for a fixed absorbing parameter $\sigma=25$. We take the PML thickness $\rho-R$ changing from $2.4$ to $4$. It is evident that the relative error $E_{rel}$ decays exponentially as $\rho-R$ increases.
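The exponential decay rates visible in these convergence plots can be quantified by a least-squares fit of $\log E_{rel}$ against the varied PML parameter; a short sketch (the input arrays are assumed to hold the sampled parameter values and the measured errors) is:
\begin{verbatim}
import numpy as np

def decay_rate(params, errors):
    """Slope alpha in the model log(E_rel) = c - alpha * param.

    params: sampled values of sigma (or of the thickness rho - R);
    errors: the corresponding measured relative errors E_rel.
    """
    slope, _ = np.polyfit(params, np.log(errors), deg=1)
    return -slope  # a positive value confirms exponential decay
\end{verbatim}
The same fit applies whether the varied parameter is the absorbing parameter $\sigma$ or the layer thickness $\rho-R$.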
\section{Conclusion} In this paper we have studied the PML method for the time-domain acoustic scattering problem over a locally rough surface. We proved well-posedness of the scattering problem and of the PML formulation, and obtained long-time stability of the PML formulation by the energy method. The exponential convergence of the PML solution was proved; it can be realized by enlarging either the PML absorbing parameter or the thickness of the layer. The convergence results were verified by two numerical examples. \newpage \begin{appendices} \section{Laplace transform} For any $s=s_1+is_2$ with $s_1>0$, $s_2\in {\mathbb R}$, we define the Laplace transform of $u$ as \begin{eqnarray*} u_L(s)=\mathscr{L}(u)(s):=\int_0^\infty e^{-st}u(t)\, dt. \end{eqnarray*} Some properties of the Laplace transform and its inversion are listed as follows: \begin{eqnarray} \mathscr{L}(\frac{du}{dt})&=&s u_L-u(0),\\ \mathscr{L}(\frac{d^2u}{dt^2})&=&s^2 u_L- su(0) -\frac{du}{dt}(0),\\ \int_0^t u(\tau)\, d\tau&=&\mathscr{L}^{-1}(s^{-1}u_L(s)). \label{eq:l-3} \end{eqnarray} It can be verified from the inverse Laplace transform that \begin{eqnarray*} u(t)=\mathscr{F}^{-1}(e^{s_1t} \mathscr{L}(u)(s_1+is_2)), \end{eqnarray*} where $\mathscr{F}^{-1}$ denotes the inverse Fourier transform with respect to $s_2$. Below is the Parseval or Plancherel identity for the Laplace transform (see \cite[equation (2.46)]{Cohen}). \begin{lem} \textnormal{(Parseval or Plancherel Identity)} If $u_L=\mathscr{L}(u)$ and $v_L=\mathscr{L}(v)$, then \begin{eqnarray} \label{PI} \frac{1}{2\pi}\int_{-\infty}^{+\infty} u_L(s)\,v_L(s) \,ds_2=\int_0^{\infty} e^{-2s_1 t} u(t)\,v(t)\,dt, \end{eqnarray} for all $s_1>s_0$, where $s_0$ is the abscissa of convergence for the Laplace transforms of $u$ and $v$. \end{lem} The following lemma is quoted from \cite[Theorem 43.1]{Treves}. \begin{lem} \label{lem:a} Let $\breve{\omega}(s)$ denote a holomorphic function in the half complex plane $\textnormal{Re} (s)>\sigma_0$ for some $\sigma_0\in{\mathbb R}$, valued in the Banach space ${\mathcal E}$. The following conditions are equivalent: \begin{itemize} \item[(1)] there is a distribution $\omega \in \mathcal{D}_+^{'}({\mathcal E})$ whose Laplace transform is equal to $\breve\omega(s)$, where $\mathcal{D}^{'}_+({\mathcal E})$ is the space of distributions on the real line, valued in ${\mathcal E}$, which vanish identically in the open negative half-line; \item[(2)] there is a $\sigma_1$ with $\sigma_0\leq\sigma_1<\infty$ and an integer $m\geq0$ such that for all complex numbers $s$ with $\textnormal{Re} (s)>\sigma_1$ it holds that $\|\breve{\omega}(s)\|_{{\mathcal E}}\leq C(1+|s|)^m$. \end{itemize} \end{lem}
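The Laplace transform identities listed above are straightforward to verify numerically. For instance, a quick check of the first property for the toy choice $u(t)=e^{-t}$ at a real transform variable $s$ (an illustration unrelated to the scattering problem) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

s = 2.0
u = lambda t: np.exp(-t)
du = lambda t: -np.exp(-t)   # u'(t)

uL, _ = quad(lambda t: np.exp(-s * t) * u(t), 0, np.inf)
duL, _ = quad(lambda t: np.exp(-s * t) * du(t), 0, np.inf)

# L(du/dt) = s u_L - u(0):  here -1/3 = 2 * (1/3) - 1
assert np.isclose(duL, s * uL - u(0))
\end{verbatim}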
\section{Sobolev spaces} For a bounded domain $D$ with Lipschitz continuous boundary $\partial D$, define the Sobolev space \begin{eqnarray*} H(\mbox{div},\,D):=\{u\in L^2(D)^2: \nabla \cdot u\in L^2(D)\}, \end{eqnarray*} which is a Hilbert space with the norm \begin{eqnarray*} \|u\|_{H(\mbox{div},\,D)}=\left (\|u\|^2_{L^2 (D)}+\|\nabla\cdot u\|^2 _{L^2(D)}\right)^{1/2}. \end{eqnarray*} \noindent The first Green formula takes the form \begin{eqnarray} (\nabla\cdot u,\, v)_{D}+(u,\, \nabla v)_{D}=\langle u\cdot n,\, v \rangle_{\partial D} \quad \mbox{for all}\quad u \in H(\mbox{div},\,D),\; v\in H^1(D), \end{eqnarray} where $(\cdot, \cdot)_{D}$ and $\langle \cdot ,\cdot \rangle_{\partial D}$ denote the $L^2$-inner product on $D$ and the dual pairing between $H^{-1/2}(\partial D)$ and $H^{1/2}(\partial D)$, respectively. Let $\Gamma_R^+$ be defined as in Section 2. For any $u\in C^{\infty}_0(\Gamma_R^+)$, we have the Fourier series expansion \begin{eqnarray*} u(R,\theta)=\sum_{n=1}^{\infty} a_n \sin n \theta, \quad a_n=\frac{2}{\pi}\int_0^\pi u(R,\theta) \sin n \theta\, d\theta. \end{eqnarray*} Define the trace function space $H^p_0(\Gamma_R^+)$ as \begin{eqnarray*} H^p_0(\Gamma_R^+):=\left \{u\in L^2(\Gamma_R^+): \|u\|_{H^p(\Gamma_R^+)}:= \left(\sum_{n=1}^{\infty}(1+n^2)^p|a_n|^2\right)^{1/2}<+\infty \right\}. \end{eqnarray*} \section{Modified Bessel functions} We look for solutions to the modified Helmholtz equation \begin{eqnarray*} \Delta u(x)-s^2u(x)=0 \end{eqnarray*} of the form \begin{eqnarray*} u(x)=y(sr)e^{in\theta}, \quad n=0, \pm1, \pm2, \cdots, \end{eqnarray*} where $(r,\theta)$ are cylindrical coordinates. It is obvious that $y$ satisfies the ordinary differential equation \begin{eqnarray*} \frac{d^2y}{dr^2}+\frac{1}{r}\frac{dy}{dr}-\left(1+\frac{n^2}{r^2}\right)y=0. \end{eqnarray*} The modified Bessel functions of the second kind of order $\nu$, which we denote by $K_{\nu}(z)$, are solutions to the differential equation \begin{eqnarray*} z^2 \frac{d^2y}{dz^2}+z\frac{dy}{dz}-(z^2+\nu^2)y=0. \end{eqnarray*} $K_\nu(z)$ satisfies the following asymptotic behavior as $|z|\rightarrow\infty$: \begin{eqnarray*} K_\nu(z)\sim\left( \frac{\pi}{2z} \right)^{1/2}e^{-z}. \end{eqnarray*} The following estimates for the modified Bessel function $K_\nu(z)$ have been proved in \cite[Lemmas 2.10 and 5.1]{Chen09}. \begin{lem} \label{lem:mbf-1} Let $s=s_1+i s_2$ with $s_1>0$, $s_2\in{\mathbb R}$. It holds that \begin{eqnarray*} -\textnormal{Re}\Big(\frac{K_n^{'}(sR)} {K_n(sR)}\Big)\geq0. \end{eqnarray*} \end{lem} \begin{lem} \label{lem:pml-1} Suppose $\nu \in {\mathbb R}$, $s=s_1+i s_2$ with $s_1>0$, $s_2\in{\mathbb R}$, $\rho_1>\rho_2>0$ and $\tau>0$. It holds that \begin{eqnarray*} \frac{|K_{\nu}(s\rho_1+\tau)|}{|K_{\nu}(s\rho_2)|}\leq e^{-\tau (1-\rho_2^2/\rho_1^2)}. \end{eqnarray*} \end{lem}
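Both the asymptotic behavior of $K_\nu$ and the bound of Lemma \ref{lem:pml-1} are easy to test numerically for real $s=s_1>0$, where \texttt{scipy.special.kv} evaluates $K_\nu$; the parameter values below are arbitrary illustrative choices:
\begin{verbatim}
import numpy as np
from scipy.special import kv

# Large-argument asymptotics K_nu(z) ~ sqrt(pi/(2z)) exp(-z).
nu, z = 3.0, 100.0
assert np.isclose(kv(nu, z), np.sqrt(np.pi / (2 * z)) * np.exp(-z),
                  rtol=1e-1)

# Ratio bound of the second lemma, restricted to real s = s1 > 0.
s1, tau, rho1, rho2 = 1.5, 2.0, 3.0, 2.0
ratio = kv(nu, s1 * rho1 + tau) / kv(nu, s1 * rho2)
assert ratio <= np.exp(-tau * (1 - rho2**2 / rho1**2))
\end{verbatim}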
\end{appendices} \section*{Acknowledgements} G. Hu is partially supported by the National Natural Science Foundation of China (No. 12071236) and the Fundamental Research Funds for Central Universities in China (No. 63213025).
\section{Introduction} Certain quantum field theories are known to possess a continuous set of inequivalent ground states, or in other words, a quantum moduli space. Every point in the moduli space corresponds to a vacuum state with zero energy. Turning on a non-zero temperature $T$ will generically ``lift'' moduli space, leaving a much smaller set of thermal equilibrium states. For some range of temperatures, there may be a unique equilibrium state. For other temperatures, there may be multiple degenerate equilibrium states related by spontaneously broken global symmetries. One may define a thermal effective potential, or free energy functional, using the same coordinates which parametrize the zero temperature moduli space. This will turn the flat $T=0$ zero energy surface into a non-trivial $T>0$ free energy surface. Equilibrium states correspond to the global minima of this free energy surface. Computing this free energy surface in an interacting theory is, of course, non-trivial. Supersymmetry provides little help, since cancellations between bosonic and fermionic particles in virtual processes are spoiled by their different statistics at non-zero temperature. Nevertheless, two extreme limits are interesting and amenable to analytical calculation: arbitrarily low temperatures and asymptotically high temperatures. In the former case, one might expect the free energy surface to be a slight deformation away from the flat surface of moduli space. What does this lift look like, and where do the minima of the free energy surface lie? In the high temperature regime, thermal fluctuations should have a disordering effect on the system. Are spontaneously broken global symmetries restored? At what temperature? In this work, we examine the effects of thermal fluctuations on the equilibrium properties and realization of global symmetries in the simplest asymptotically free supersymmetric gauge theory with a continuous moduli space, $SU(2)$ $\mathcal{N}\,{=}\,2$ supersymmetric Yang-Mills theory. Much about this theory is known from the celebrated work of Seiberg and Witten \cite{SW1,SW2}. The following features make it an attractive model for our purposes: \begin{enumerate}\advance\itemsep -6pt \item [(\textit{i})] The quantum theory has a continuous moduli space of vacua --- it is a one-complex dimensional K\"ahler manifold parametrized by a single complex number, $u$. \item [(\textit{ii})] Each vacuum describes a Coulomb phase where the long distance dynamics is Abelian; there is no vacuum state with long distance non-Abelian dynamics ({\it i.e.}, confinement). \item [(\textit {iii})] Generic ground states spontaneously break a discrete {\it R}-symmetry. \item [(\textit {iv})] Asymptotic freedom guarantees that vacua in the neighborhood of infinity on moduli space have weakly-coupled descriptions in terms of the light elementary fields. \item [(\textit {v})] Two distinguished ``singular'' points in moduli space exist where extra massless states with spin $\leq \tfrac{1}{2}$ appear. The corresponding particles are magnetically charged under the long-distance Abelian gauge group and may be interpreted as magnetic monopoles or dyons. For vacua in neighborhoods of these special points, a low energy effective description is strongly-coupled in terms of the elementary fields. However, a version of electric-magnetic duality provides a weakly-coupled formulation in terms of dual fields. 
\end{enumerate} The combination of asymptotic freedom and electric-magnetic duality enables one to use weak coupling methods to explore the dynamics both near and far from the singular points in moduli space. There is a dynamically generated mass scale $\La$ in the theory (analogous to $\La_{\rm QCD}$). For temperatures much greater than $\La$, the free energy may be computed as an asymptotic expansion in the small effective gauge coupling $g^2(T)$. For temperatures much less than $\La$, one may use appropriate low energy effective descriptions near infinity, or near the special points on moduli space, to compute the free energy as an expansion in the appropriate effective gauge coupling (either $g^2(u)$ or its magnetic dual). Thermal effects in $SU(2)$ $\mathcal{N}\,{=}\,2$ gauge theory at low temperature have previously been studied by Wirstam \cite{Wirstam}. In that work, it was asserted that the free energy density was locally minimized asymptotically far out on moduli space, and on circles of non-zero radius surrounding the singular points. It was not made clear which local minima represented the global minimum. The free energy surface found in Ref.~\cite{Wirstam} is depicted on the left side of Figure~\ref{fig:uflow}. In this figure, arrows depict directions of free energy decrease ({\em i.e.}, minus the gradient). The picture implies non-monotonic behavior as one moves from a singular point to infinity, with a free energy barrier separating the large $u$ domain from the region near the singular points, and some sort of instability at the singular points. Such features are unexpected and surprising. One puzzle is why the free energy surface slopes downward toward infinity. Massive states get heavier as one moves further out on moduli space so, in the absence of interactions, one would expect their contribution to the pressure to decrease (since the associated particle density falls exponentially due to Boltzmann suppression). The free energy density is minus the pressure, so the decoupling of massive states as one approaches the boundary of moduli space should lead to a rising free energy. Do interactions, in an asymptotically free theory, really change this simple behavior? A second puzzle concerns the circle of minima around each singular point. What physical mechanism leads to this? There is no continuous global symmetry whose action on moduli space produces phase rotations around a singular point, and whose spontaneous breaking could explain such a circle of free-energy minima. \FIGURE[t]{ \label{fig:uflow} {\bf \hspace{0.01in} (a) results of Ref.~\cite{Wirstam} \hspace{1.8in} (b) this work} \\[5pt] \centerline{\includegraphics[height=2.7in]{uflow.pdf}} \vspace*{-10pt} \caption{ Qualitative form of the free energy surface in $SU(2)$ $\mathcal{N}\,{=}\,2$ Yang-Mills theory at $T \ll \La$. Arrows indicate directions on moduli space for which free energy decreases. The singular points at $\pm u_0$ are represented by heavy dots. The large dotted circle at infinity serves to guide the eye. Left: Asserted behavior from Ref.~\cite{Wirstam}. The dashed circles surrounding the singular points at $\pm u_0$ represent valleys of stable local minima. Right: Results of our analysis. The singular points are stable minima.} } The main purpose of this paper is to derive the correct behavior of the free energy surface at low temperatures by reconsidering the computations of Ref.~\cite{Wirstam}.
Using effective field theory techniques, we systematically evaluate the contributions of successively longer wavelength fluctuations to the effective scalar potential. The free energy surface, viewed as a functional of the translationally invariant expectation value which parameterizes moduli space, may be identified with this effective potential. Unlike Ref.~\cite{Wirstam}, we find the simple behavior sketched on the right side of Figure \ref{fig:uflow}. In the low temperature regime, the asymptotic region of the free energy surface is locally unstable; the free energy decreases as one moves inward from infinity. The two singular points on moduli space where monopoles or dyons become massless are local minima of the free energy. There is no evidence for any other local minima. If no others exist, then at sufficiently low temperatures there are two distinct equilibrium states, related by a spontaneously broken discrete \textit{R}-symmetry. In contrast, at sufficiently high temperatures there is a unique equilibrium state and the {\it R}-symmetry is unbroken. Hence, the theory undergoes a thermal phase transition. The transition temperature must be a pure number times the strong scale $\La$. Our analysis mirrors that of Ref.~\cite{Wirstam}, but also extends it in several important areas in order to correct the resulting picture of the free energy surface. The problematic interpretation suggested by Figure \ref{fig:uflow}(a) arises from a sign error and a mistreatment of zero-frequency modes. We determine the sign of a crucial next-to-leading term in the momentum expansion of the low energy effective theory using general arguments based on analyticity constraints satisfied by scattering amplitudes in any UV-complete theory \cite{Adams}. An $S$-duality transformation is then used to relate results in different regimes of moduli space \cite{Hen}. When a large hierarchy separates the temperature from smaller momentum scales of interest, we construct appropriate three-dimensional effective theories which implement the Wilsonian procedure of integrating out short distance fluctuations in order to generate effective descriptions for the long distance degrees of freedom \cite{BN1,BN2,ArnYaf,YamYaf}. The remainder of this paper is organized as follows. In Sec.~\ref{sec:review} we review relevant facts about $SU(2)$ $\mathcal{N}\,{=}\,2$ gauge theory at zero temperature, and discuss the formulation of four-dimensional low energy effective theories that will be useful for studying the low temperature regime. In Sec.~\ref{sec:hightemp}, we discuss the high temperature limit and the unique equilibrium state that realizes all {\it R}-symmetries. Portions of this analysis involving the construction of the appropriate high temperature three-dimensional effective theory are relegated to Appendix~\ref{app:hightemp}. Low temperature thermal effects on moduli space are the subject of Sec.~\ref{sec:lowtemp}. We analyze the thermal effective potential for the scalar field that is related to the local coordinate on moduli space. The generalization of our analysis to so-called $\mathcal{N}\,{=}\,2^*$ theory, obtained by adding a single flavor of massive adjoint hypermultiplet to $\mathcal{N}\,{=}\,2$ super-Yang-Mills, is discussed in Sec.~\ref{sec:star}. Finally, in Sec.~\ref{sec:discuss} we summarize our findings and discuss some open questions.
\section{Review of \boldmath $SU(2)$ $\mathcal{N}\,{=}\,2$ gauge theory} \label{sec:review} We consider four dimensional $\mathcal{N}\,{=}\,2$ supersymmetric pure Yang-Mills theory with gauge group $SU(2)$. It is renormalizable and asymptotically free. Consequently, the dimensionless running coupling transmutes into a renormalization group invariant energy scale $\La$. This theory describes the interactions of an $\mathcal{N}\,{=}\,2$ vector multiplet $\mathcal{A}$. In terms of $\mathcal{N}\,{=}\,1$ superfields, the superfield $\mathcal{A}$ consists of a scalar-valued adjoint representation chiral multiplet $\Phi$ and a spinor-valued chiral field strength $W_\alpha$. On-shell, the component fields in $\Phi$ are a complex adjoint scalar $\phi$ and an adjoint Weyl fermion $\psi_\alpha$. The field strength $W_\alpha$ contains an adjoint Weyl fermion $\lambda_\alpha$ and a gauge field $A_\mu$. We shall always work in Euclidean space unless noted otherwise, with an action% \footnote{ To obtain a Minkowski space Lagrange density, $\mathcal{L}_\text{(Mink.)}$, from the Euclidean version, one performs the rotation $x^0 = -ix^0_\text{E}$ and identifies $\mathcal{L}_\text{(Mink.)} = -\mathcal{L}_\text{(Eucl.)}$. } \begin{equation} \label{action} S_\text{(Eucl.)} = \int dx^0_\text{E} \, d^3x \> \mathcal{L}_\text{(Eucl.)}. \end{equation} Using $\mathcal{N}\,{=}\,1$ superspace notation, the Lagrange density is given by \begin{equation} \label{Ls} -g^2\mathcal{L} = \biggl(\int d^2\theta\> \tfrac{1}{2}\text{tr}(W^\alpha W_\alpha) + \text{H.c.}\biggr) + 2\int d^2\theta \> d^2\bar{\theta}\> \text{tr}(\Phi^\dag e^{[2V,\,\cdot\,]} \Phi)\,, \end{equation} where the field strength $W_\alpha$ and vector superfield $V$ are related by% \footnote{ We use the superspace conventions of Ref.~\cite{WB}. The factor of 2 in front of the K\"ahler potential ensures that the kinetic terms of the two Weyl fermions have the same normalization. } \begin{equation} W_\alpha = -\tfrac{1}{8}\bar{D}_{\dot{\alpha}} \bar{D}^{\dot{\alpha}} (e^{-2V}D_\alpha e^{2V}) \,. \end{equation} We employ a matrix notation where all fields are Lie algebra-valued (so, for example, $W_\alpha = W_\alpha^a T^a$ with repeated group indices summed over $a = 1,\dotsc,\dim G$). We take $G = SU(N_{\rm c})$ and will specialize to $N_{\rm c} = 2$ momentarily. The fundamental representation Lie algebra generators $T^a$ are traceless Hermitian $N_{\rm c} \times N_{\rm c}$ matrices satisfying $[T^a, T^b] = if^{abc}T^c$, normalized such that $\text{tr}(T^a T^b) = \delta^{ab}/2$. The structure constants are real and totally antisymmetric. The integrands of the superspace integrals are manifestly invariant under gauge transformations of the form $e^{2V} \to e^{-i\La}e^{2V}e^{i\La}$, where $\La$ is a fundamental representation chiral superfield. Under gauge transformations, the fields $W_\alpha$, $\Phi$, and $e^{[2V,\cdot]}\Phi$ all transform via conjugation by the same group element.
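As a quick concrete check of these conventions for $N_{\rm c}=2$, the following Python sketch verifies the normalization and commutation relations, assuming the standard choice $T^a = \sigma^a/2$ (so that $f^{abc} = \epsilon^{abc}$):
\begin{verbatim}
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sig]

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

for a in range(3):
    for b in range(3):
        # tr(T^a T^b) = delta^{ab} / 2
        assert np.isclose(np.trace(T[a] @ T[b]), 0.5 * (a == b))
        # [T^a, T^b] = i f^{abc} T^c with f^{abc} = eps^{abc}
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * sum(eps[a, b, c] * T[c] for c in range(3))
        assert np.allclose(comm, rhs)
\end{verbatim}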
Starting from Eq.~\eqref{Ls}, it is straightforward to show that the Lagrange density in terms of on-shell component fields is given by \begin{equation} \label{Lc} \begin{split} g^2\mathcal{L} & = 2\, \text{tr}\Bigl\{ \tfrac{1}{4} F_{\mu\nu}F_{\mu\nu} + i\bar{\lambda}\bar{\sigma}_\text{E}^\mu D_\mu\lambda + i\bar{\psi}\bar{\sigma}_\text{E}^\mu D_\mu\psi + (D_\mu\phi)^\dag D_\mu\phi \\ &\qquad - i\sqrt{2}[\lambda,\psi]\phi^\dag - i\sqrt{2}[\bar{\lambda},\bar{\psi}]\phi + \tfrac{1}{2} [\phi^\dag,\phi]^2 \Bigr\}\,. \end{split} \end{equation} The covariant derivative $D_\mu = \partial_\mu + i[A_\mu,\cdot]$, and the field strength $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + i[A_\mu,A_\nu]$. For a Weyl fermion, $\bar{\lambda}_{\dot{\alpha}} = (\lambda_\alpha)^\dag$ where $\alpha,{\dot{\alpha}} = 1,2$. Our spinor conventions follow those of Ref.~\cite{WB} except that the metric is $\delta_{\mu\nu}$ and the matrix $\sigma_\text{E}^0$ is anti-Hermitian. The matrices $(\sigma_\text{E}^\mu)_{\alpha{\dot{\alpha}}}$ form a basis for $2\times 2$ complex matrices: $\sigma_\text{E}^0 = i\bigl(\begin{smallmatrix} -1 & \phantom-0 \\ \phantom-0 & -1 \end{smallmatrix}\bigr)$, and $\sigma_\text{E}^i$ are the standard Pauli matrices. The $\epsilon$ tensor is used to raise spinor indices to obtain $(\bar{\sigma}_\text{E}^\mu)^{{\dot{\beta}}\beta} = \epsilon^{{\dot{\beta}}{\dot{\alpha}}}\epsilon^{\beta\alpha}(\sigma_\text{E}^\mu)_{\alpha{\dot{\alpha}}}$. Numerically, $\bar{\sigma}_\text{E}^0 = \sigma_\text{E}^0$ and $\bar{\sigma}_\text{E}^i = -\sigma_\text{E}^i$, although the index structures are distinct. Note that $[\lambda,\psi] = \lambda^\alpha\psi_\alpha - \psi^\alpha\lambda_\alpha$ and $[\bar{\lambda},\bar{\psi}] = \bar{\lambda}_{\dot{\alpha}}\bar{\psi}^{\dot{\alpha}} - \bar{\psi}_{\dot{\alpha}}\bar{\lambda}^{\dot{\alpha}}$. The Lagrange density \eqref{Lc} is invariant under a global $SU(2)_R \times U(1)_R$ {\it R}-symmetry. The fermions $\bigl(\begin{smallmatrix} \lambda \\ \psi \end{smallmatrix}\bigr)$ transform as a doublet under $SU(2)_R$ while $A_\mu$ and $\phi$ transform as singlets.% \footnote{ Since $\lambda$ and $\psi$ belong to different $\mathcal{N}\,{=}\,1$ multiplets, one may check the consistency of $\mathcal{N}\,{=}\,2$ supersymmetry in Eq.~\eqref{Lc} by testing the invariance of the Lagrange density under $ \bigl(\begin{smallmatrix} \lambda \\ \psi \end{smallmatrix}\bigr) \to \bigl(\begin{smallmatrix} \phantom-\psi \\ -\lambda \end{smallmatrix}\bigr) $. This discrete transformation corresponds to the $\bigl(\begin{smallmatrix} \phantom-0 & 1 \\ -1 & 0 \end{smallmatrix}\bigr)$ element of $SU(2)_R$, which generates a $\mathbb{Z}_4$ subgroup. } The $U(1)_R$ factor is an ordinary $\mathcal{N}\,{=}\,1$ {\it R}-symmetry under which $\Phi$ has charge 2 and $W_\alpha$ has charge 1. Quantum mechanically, the $U(1)_R$ is anomalous and only a $\mathbb{Z}_{4N_{\rm c}}$ subgroup survives.% \footnote{ This anomalous {\it R}-symmetry is the reason why no topological charge density (and associated theta angle) appears in the Lagrange density. Since $\Phi = \phi + \sqrt{2}\theta\psi + \dotsb$, and $W_\alpha = -i\lambda_\alpha + \dotsb$, it follows that both Weyl fermions have {\it R}-charge 1.
If one repackages these Weyl spinors as a single Dirac spinor $\Psi = \bigl(\begin{smallmatrix} \lambda_\alpha \\ \bar{\psi}^{\dot{\alpha}} \end{smallmatrix} \bigr)$, then $U(1)_R$ acts as a continuous chiral transformation, $\Psi \to e^{i\omega\gamma_5}\Psi$, in a basis where $\gamma_5 = \text{diag}(1,1,-1,-1)$. Under this field redefinition, the fermion measure in the functional integral acquires a nontrivial Jacobian involving the exponential of the topological charge. Therefore, an appropriate choice of $\omega$ allows one to cancel any dependence of the theory on the theta angle. } For $N_{\rm c} = 2$, the true global {\it R}-symmetry is thus $(SU(2)_R \times \mathbb{Z}_8)/\mathbb{Z}_2$, where the division by $\mathbb{Z}_2$ is a reminder not to double count the $(-1)^F$ symmetry (with $F$ fermion number) present in both the center of $SU(2)_R$ and $\mathbb{Z}_8$. It will be useful for later purposes to mention another massless representation of $\mathcal{N}\,{=}\,2$ supersymmetry, the hypermultiplet $\mathcal{H}$. In terms of $\mathcal{N}\,{=}\,1$ superfields, $\mathcal{H}$ consists of two scalar-valued chiral multiplets $Q$ and $Q'$ that transform under conjugate representations of the gauge group. On-shell, $Q$ contains a complex scalar $q$ and a Weyl fermion $\psi_q$. Similarly, $Q'$ contains a complex scalar $q'$ and a Weyl fermion $\psi_{q'}$. The scalars $\bigl(\begin{smallmatrix} q \\ q'^\dag \end{smallmatrix}\bigr)$ transform as an $SU(2)_R$ doublet while $\psi_q$ and $\psi_{q'}$ transform as singlets. Both $Q$ and $Q'$ have {\it R}-charge 0. Vacua in this theory may be described classically by the requirements that $F_{\mu\nu} = \lambda = \psi = 0$, $\phi$ is covariantly constant, and $[\phi,\phi^\dag] = 0$. If a diagonalizing gauge transformation is made to write $\phi = a \sigma^3/2$ for some arbitrary complex number $a$, then $\phi$ automatically commutes with its Hermitian conjugate. The two eigenvalues of $\phi$ are $a$ and $-a$. Since we are free to permute them, $\phi = -a\sigma^3/2$ also describes the same vacuum. This permutation freedom is part of the residual gauge invariance (specifically, conjugation by $\bigl(\begin{smallmatrix} \phantom- 0 & 1 \\ -1 & 0 \end{smallmatrix}\bigr)$). A translation invariant and gauge invariant ``order parameter'' parameterizing the space of vacua is $u = \vev{\text{tr}\,(\phi^2)}$. Since $u$ is a complex number, the classical space of vacua is the complex $u$-plane. In the quantum theory $\phi$ (and its eigenvalues $\pm a$) is a fluctuating field. The space of vacua may be changed by quantum effects, but it can never be entirely lifted. One reason is that it is impossible to generate an effective superpotential (and hence F-term contributions to the scalar potential) invariant under $\mathcal{N}\,{=}\,2$ supersymmetry without also including at least one light hypermultiplet. There are no hypermultiplets at weak coupling. For $|\vev{a}| \gg \La$, asymptotic freedom ensures that the theory is weakly-coupled. When quantum fluctuations of the eigenvalues of $\phi$ are small compared to their vacuum expectation value, the fractional difference $\left<{(a-\vev a)^2}\right>/\vev{a}^2 \ll 1$ and hence $u \approx \vev{a}^2/2$. For any vacuum satisfying $|u| \gg \La^2$, a weak-coupling mean field analysis is reliable and shows that the Higgs mechanism reduces the gauge group from $SU(2)$ to $U(1)$.
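The classical statements above are elementary to check explicitly. For a diagonal configuration $\phi = a\,\sigma^3/2$ with an arbitrary (illustrative) complex value of $a$, a short sketch verifying $[\phi,\phi^\dag]=0$ and $\text{tr}\,(\phi^2)=a^2/2$ is:
\begin{verbatim}
import numpy as np

a = 0.7 + 1.3j                      # arbitrary complex eigenvalue
sigma3 = np.diag([1.0, -1.0]).astype(complex)
phi = a * sigma3 / 2

comm = phi @ phi.conj().T - phi.conj().T @ phi
assert np.allclose(comm, 0)                       # [phi, phi^dag] = 0
assert np.isclose(np.trace(phi @ phi), a**2 / 2)  # u = tr(phi^2) = a^2/2
\end{verbatim}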
There are massive $W$ bosons charged under the $U(1)$ photon. The $W$ bosons have masses proportional to the expectation value of $a$, \begin{equation} \label{Wmass} M_W = \sqrt{2}\,|\vev{a}|\,. \end{equation} $\mathcal{N}\,{=}\,2$ supersymmetry requires the $W$ bosons to belong to Abelian vector multiplets, and other components in the multiplet must have the same mass. The dynamics of the resulting theory at momenta much less than $M_W$ is both Abelian and $\mathcal{N}\,{=}\,2$ supersymmetric. The $\mathbb{Z}_8$ {\it R}-symmetry is spontaneously broken since $u$ has {\it R}-charge 4. The unbroken subgroup $\mathbb{Z}_4$ acts trivially on $u$, while the coset $e^{2\pi i/8} \mathbb{Z}_4$ acts as $u \to -u$. In Ref.~\cite{SW1}, it was shown that when quantum effects are taken into account the space of vacua, or moduli space, is precisely the $u$-plane but with three singular points: one at infinity and two at finite values $\pm u_0$. One may choose to renormalize the operator $\text{tr}\,(\phi^2)$ so that $u_0 = \La^2$. The existence of a continuous set of vacua implies that $\vev{\text{tr}\,(\phi^2(x))}$ may have arbitrarily long wavelength fluctuations. Such configurations can have arbitrarily small spatial gradients with negligible cost in energy, implying that there are massless states in the spectrum of the Hamiltonian. These states comprise an $\mathcal{N}\,{=}\,2$ Abelian vector multiplet. The singularity at infinity is a consequence of asymptotic freedom. The singularities at $\pm u_0$ are interpreted as vacua in which extra massless states appear in the spectrum. These massless states have spins 0 and $\tfrac{1}{2}$, and constitute an $\mathcal{N}\,{=}\,2$ Abelian hypermultiplet. Since there are no elementary hypermultiplets in the theory, these extra particles are solitonic excitations. Massless non-Abelian gluons never appear for any choice of $u$. That is, there is no vacuum corresponding to an infrared fixed point with conformal invariance. Every choice of $u$ (even $u = 0$) corresponds to a theory in which the long distance dynamics is Abelian, possibly with extra massless excitations. The theory is always in a Coulomb phase. To discuss the particle spectrum at a given $u$, it is helpful to construct an effective theory describing the dynamics in such a vacuum at arbitrarily low momentum. At a generic point in moduli space, the massless fields comprise a $U(1)$ $\mathcal{N}\,{=}\,2$ vector multiplet which is simply the neutral component $\mathcal{A}^3 = (\Phi^3, W^3_\alpha)$ of the gauge triplet.% \footnote{ This is a direct consequence of the Higgs mechanism for large $|u|$. By analytic continuation in the $u$-plane (avoiding possible singularities or cuts), it must also be true even for $|u| \sim \La^2$ where the dynamics is strongly-coupled. In fact, if this were not true, then the $U(1)$ photon would have to obtain a mass through some type of Higgs mechanism. This cannot happen because there are no charged scalars in the $U(1)$ vector multiplet~\cite{Argy}. } The complex scalar field $\phi^3 = \Phi^3|_{\theta=\bar{\theta}=0}$ is identical to the eigenvalue field $a$ when $\phi$ is diagonal. Abusing notation (but following Ref.~\cite{SW1}), we shall henceforth refer to $\mathcal{A}^3$ as $\mathcal{A}$, its $\mathcal{N}\,{=}\,1$ scalar-valued chiral multiplet $\Phi^3$ as $A$, and its field strength $W^3_\alpha$ as $W_\alpha$. 
The low energy effective theory possesses $\mathcal{N}\,{=}\,2$ supersymmetry, and this is made manifest by constructing a Lagrange density directly in $\mathcal{N}\,{=}\,2$ superspace, \begin{equation} \label{Leffss} \mathcal{L}_\text{eff} = -\tfrac{1}{4\pi}\>\text{Im}\int d^4\theta\, \mathcal{F}(\mathcal{A}) - \int d^4\theta \> d^4\bar{\theta}\> \mathcal{K}(\mathcal{A},\bar{\mathcal{A}}) + O(n \geq 6). \end{equation} The prepotential $\mathcal{F}(\mathcal{A})$ is a mass dimension two holomorphic function of $\mathcal{A}$. The function $\mathcal{K}(\mathcal{A},\bar{\mathcal{A}})$ is dimensionless and non-holomorphic in $\mathcal{A}$.% \footnote{ Henceforth, $\bar{\mathcal{A}} \equiv \mathcal{A}^\dag$. } The terms in Eq.~\eqref{Leffss} are organized as an expansion in the `order in derivatives' $n$, explained in Ref.~\cite{Hen}. The number $n$ is defined such that $\mathcal{A}$ has $n = 0$ and the supercovariant derivative has $n = 1/2$. Ordinary spacetime derivatives have $n = 1$ since they arise from anticommutators of supercovariant derivatives. From the structure of a supercovariant derivative, it immediately follows that Grassmann-valued superspace coordinates $\theta^\alpha_i, \bar{\theta}^{{\dot{\alpha}} i}$ have $n = -1/2$. Gauge fields and scalars have $n=0$ and fermions have $n=1/2$. Based on this counting scheme, the chiral superspace integral has $n = 2$ and the full superspace integral has $n = 4$. Therefore, knowledge of the prepotential completely determines terms in the effective Lagrange density with up to two spacetime derivatives and at most four fermions. Note that Eq.~\eqref{Leffss} is unchanged by a linear shift of $\mathcal{F}$ or the addition of a holomorphic function (and its Hermitian conjugate) to $\mathcal{K}$. When $\mathcal{F}$ and $\mathcal{K}$ are determined through matching calculations at the momentum scale $M_W$, then Eq.~\eqref{Leffss} will correctly reproduce gauge invariant correlators of the light fields for distances $\gg M_W^{-1}$. 
The effective Lagrange density \eqref{Leffss} in $\mathcal{N}\,{=}\,1$ superspace notation is \cite{SW1,Hen} \begin{equation} \label{Leffs} \mathcal{L}_\text{eff} = \mathcal{L}_\text{eff}^{n=2} + \mathcal{L}_\text{eff}^{n=4} + O(n\geq 6), \end{equation} where \begin{equation} \label{Leff2s} \mathcal{L}_\text{eff}^{n=2} = -\tfrac{1}{4\pi}\text{Im}\biggl[\int d^2\theta\,\tfrac{1}{2}\mathcal{F}''(A)\, W^\alpha W_\alpha + \int d^2\theta d^2\bar{\theta}\, \mathcal{F}'(A)\, \Abar\biggr], \end{equation} and \begin{equation} \label{Leff4s} \begin{split} \mathcal{L}_\text{eff}^{n=4} =& -\int d^2\theta \> d^2\bar{\theta}\,\Bigl\{ \mathcal{K}_{A\Abar}(A,\Abar)\Bigl[(D^\alpha D_\alpha A) (\bar{D}_{\dot{\alpha}} \bar{D}^{\dot{\alpha}}\Abar) + 2(\bar{D}_{\dot{\alpha}} D^\alpha A) (D_\alpha \bar{D}^{\dot{\alpha}} \Abar) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + 4(D^\alpha W_\alpha) (\bar{D}_{\dot{\alpha}} \bar{W}^{\dot{\alpha}}) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - 4(D^{(\alpha}W^{\beta)}) (D_{(\alpha}W_{\beta)}) - 2D^\alpha D_\alpha(W^\beta W_\beta) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - 4(\bar{D}_{({\dot{\alpha}}}\bar{W}_{{\dot{\beta}})}) (\bar{D}^{({\dot{\alpha}}}\bar{W}^{{\dot{\beta}})}) - 2\bar{D}_{\dot{\alpha}}\bar{D}^{\dot{\alpha}}(\bar{W}_{\dot{\beta}}\bar{W}^{\dot{\beta}})\Bigr] \\ & ~~~~~~~~~~~~~~~~~~ - 2\mathcal{K}_{AA\Abar}(A,\Abar) W^\alpha W_\alpha D^\beta D_\beta A - 2\mathcal{K}_{A\Abar\Abar}(A,\Abar) \bar{W}_{\dot{\alpha}}\bar{W}^{\dot{\alpha}} \bar{D}_{\dot{\beta}} \bar{D}^{\dot{\beta}} \Abar \\ & ~~~~~~~~~~~~~~~~~~ + \mathcal{K}_{AA\Abar\Abar}(A,\Abar)\Bigl[-8(W^\alpha D_\alpha A) (\bar{W}_{\dot{\alpha}}\bar{D}^{\dot{\alpha}}\Abar) + 4W^\alpha W_\alpha \bar{W}_{\dot{\alpha}} \bar{W}^{\dot{\alpha}}\Bigr] \Bigr\}. \end{split} \end{equation} In expression \eqref{Leff4s}, subscripts on the non-holomorphic function $\mathcal{K}$ indicate derivatives with respect to the indicated arguments. Expression \eqref{Leff4s} is unique up to terms proportional to $D^\alpha W_\alpha - \bar{D}_{\dot{\alpha}}\bar{W}^{\dot{\alpha}}$ which vanish because $W_\alpha$ satisfies the Bianchi identity. A gauge invariant description of moduli space is the $u$-plane, a one-complex dimensional manifold. One may promote the coordinate $u$ to a field $u(x)$, valued in the complex numbers. The dynamics of arbitrarily long wavelength fluctuations of $u(x)$ around a constant value is described by a sigma model action of the form $S_\text{eff} = \int d^4x\bigl[\gamma(u,\bar{u}) \, \partial_\nu u \, \partial_\nu\bar{u} + \dotsb\bigr]$. One can regard the coefficient of the two derivative term as a metric on moduli space. The line element is written as $ds^2 = \gamma(u,\bar{u}) \, du \, d\bar{u}$. The metric on moduli space is easily determined for $|u| \gg \La^2$. In this regime the effective theory is formulated in terms of the scalar field $a(x)$, and its interactions are weakly coupled due to asymptotic freedom. The translationally invariant vacuum expectation value $\vev{a}$ is mapped to $u$ by the approximate formula $u \approx \vev{a}^2/2$. Therefore, the asymptotic region of moduli space may be parametrized by the local coordinate $a$.% \footnote{ Here $a$ is understood to mean $\vev{a}$. Future usage should be clear from context. } The induced metric in field configuration space is obtained from the K\"ahler potential $K(A,\Abar) = \frac{1}{4\pi}\text{Im}(\mathcal{F}'(A)\Abar)$.
The full superspace integral of the K\"ahler potential yields \begin{equation} \mathcal{L}_\text{eff} = \gamma(a,\bar{a})\, \partial_\mu a \, \partial_\mu\bar{a} + \dotsb, \end{equation} with the K\"ahler metric \begin{equation} \gamma(a,\bar{a}) = \partial_a \partial_{\bar{a}} K|_{\theta=\bar{\theta}=0} = \tfrac{1}{4\pi} \text{Im}\,\mathcal{F}''(a)\,. \end{equation} However, the effective theory is more than just a sigma model; it is also an Abelian gauge theory. Define a holomorphic gauge coupling function $\tau(A) = \mathcal{F}''(A)$. The half superspace integral of $\frac{1}{8\pi}\tau(A)\,W^\alpha W_\alpha$ yields \begin{equation} \mathcal{L}_\text{eff} = \tfrac{1}{4} g_\text{eff}^{-2}(a,\bar{a}) F_{\mu\nu}F_{\mu\nu} + \tfrac{1}{32\pi^2}\,\theta_\text{eff}(a,\bar{a}) F_{\mu\nu}\widetilde{F}_{\mu\nu} + \dotsb, \end{equation} where the inverse effective gauge coupling and the effective theta angle are given by \begin{equation} g_\text{eff}^{-2}(a,\bar{a}) = \tfrac{1}{4\pi}\, \text{Im}\,\tau(a), \qquad \theta_\text{eff}(a,\bar{a}) = 2\pi\,\text{Re}\,\tau(a). \end{equation} Note that $\mathcal{N}\,{=}\,2$ supersymmetry requires that the inverse gauge coupling $g_\text{eff}^{-2}$ and the K\"ahler metric $\gamma$ coincide. The prepotential fixes all the effective couplings in $\mathcal{L}_\text{eff}^{n=2}$. It is straightforward to express expression \eqref{Leff2s} in terms of off-shell component fields, \begin{equation} \label{Leff2c} \begin{split} \mathcal{L}_\text{eff}^{n=2} & = g_\text{eff}^{-2}(a,\bar{a})\Bigl\{ |\partial_\mu a|^2 + \tfrac{1}{4} F_{\mu\nu}^2 + \bigl(\tfrac{i}{2}\psi\sigma_\text{E}^\mu D_\mu\bar{\psi} + \tfrac{i}{2}\lambda\sigma_\text{E}^\mu D_\mu\bar{\lambda} + \text{H.c.}\bigr) - |F|^2 - \tfrac{1}{2} D^2 \\ &\quad + \bigl[\G(a,\bar{a})\bigl(\tfrac{1}{2}\psi^2 F^* + \tfrac{1}{2} \lambda^2 F - \tfrac{i}{\sqrt{2}}\lambda\psi D - \tfrac{1}{\sqrt{2}}\lambda\sigma_\text{E}^{\mu\nu}\psi F_{\mu\nu}\bigr)+\text{H.c.}\bigr] \\ &\quad - \bigl[R(a,\bar{a})\tfrac{1}{4}\lambda^2\psi^2 + \text{H.c.}\bigr]\Bigr\} + \tfrac{1}{32\pi^2}\,\theta_\text{eff}(a,\bar{a})\, F_{\mu\nu}\widetilde{F}_{\mu\nu}. \end{split} \end{equation} In expression \eqref{Leff2c}, each fermion bilinear is shorthand for a spinor contraction ({\it e.g.}, $\psi^2 \equiv \psi^\alpha\psi_\alpha$), the Abelian field strength $F_{\mu\nu} \equiv \partial_\mu A_\nu - \partial_\nu A_\mu$ and its Hodge dual $\widetilde{F}^{\mu\nu} \equiv \tfrac{1}{2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}$ (with $\epsilon^{0123} \equiv 1$), and \begin{subequations} \begin{align} D_\mu &= \partial_\mu - \G(a,\bar{a})\, \partial_\mu a \,, \\ \G(a,\bar{a}) &= \gamma(a,\bar{a})^{-1}\, \partial_a \gamma(a,\bar{a}) \,, \\ \nonumber R(a,\bar{a}) &= \partial_a \G(a,\bar{a}) + \G(a,\bar{a})^2 \\ &= \gamma(a,\bar{a})^{-1}\, \partial_a^2 \gamma(a,\bar{a})\,.
\end{align} \end{subequations} The on-shell form of $\mathcal{L}_\text{eff}^{n=2}$ may be obtained by solving the equations of motion for the auxiliary fields, but we will not need that result.% \footnote{ The D-term equation is $D = -\frac{1}{\sqrt{2}}\G(a,\bar{a})i\lambda\psi + \text{H.c.}$ and the F-term equation is $F = \tfrac{1}{2}\G(a,\bar{a})\psi^2 + \tfrac{1}{2}\G(a,\bar{a})^* \bar{\lambda}^2$. Substituting these into expression \eqref{Leff2c} produces four-fermion operators only. We do not need the explicit result since we will carry out perturbative calculations using Feynman rules for off-shell fields. } The prepotential may be determined as follows. The one-loop beta function for the running gauge coupling of $SU(2)$ $\mathcal{N}\,{=}\,2$ gauge theory is \begin{equation} \label{betafn1} \mu \frac{dg^2}{d\mu} = -\frac{1}{2\pi^2} \, g^4\Bigl[1 + O(e^{-8\pi^2/g^2})\Bigr]. \end{equation} Higher order loop corrections vanish due to supersymmetry, but we have included the form of nonperturbative one-instanton corrections. Integrating Eq.~\eqref{betafn1} yields \begin{equation} g^{-2}(\mu) = \frac{1}{4\pi^2}\ln(\mu^2/\La^2) + \text{const.} + O(\La^4/\mu^4) \,, \end{equation} where $\La$ is the conventional definition of the strong scale. This may be matched to the effective gauge coupling of the low energy effective theory at the $W$ mass scale. That is, at $\mu = M_W$ one has $g_\text{eff}^2(a,\bar{a}) = g^2(M_W)$. For asymptotically large $|u|$, the mass formula Eq.~\eqref{Wmass} implies \begin{equation} \text{Im}\,\tau(a) \approx \frac{1}{\pi}\ln\biggl(\frac{|a|^2}{\La^2}\biggr). \end{equation} This leading log is reproduced by a prepotential \begin{equation} \label{prepot} \mathcal{F}(a) \approx \frac{i}{2\pi}\, a^2\ln\biggl(\frac{a^2}{\La^2}\biggr). \end{equation} The line element on moduli space is $ds^2 = \frac{1}{4\pi}\text{Im}\,\tau(a) da d\bar{a}$, where the metric is given explicitly by $\frac{1}{4\pi}\text{Im}\,\tau(a) \approx \frac{1}{4\pi^2}[\ln(|a|^2/\La^2) + 3]$. The metric is single-valued and positive for $|a| \gg \La$. It diverges as $|a|/\La \to \infty$ which means that the effective gauge coupling becomes arbitrarily small. This is just a restatement of asymptotic freedom. For smaller values of $|a|$ there is a difficulty: the metric becomes negative. In fact, $\tau(a)$ is holomorphic so $\text{Im}\,\tau(a)$ must be harmonic, and since it is not constant, it must be unbounded from below. The metric fails to be positive-definite, or equivalently, $g_\text{eff}$ fails to be real. One is forced to concede that $a$ is valid as a local coordinate only asymptotically far out on moduli space. A key observation of Ref.~\cite{SW1} is that the form of the metric, as well as its positivity, can be maintained if an additional coordinate on moduli space is introduced: $a_D = \partial\mathcal{F}(a)/\partial a$. Then the line element on moduli space may be expressed as $ds^2 = \frac{1}{2i}(da_D \, d\bar{a} - da \, d\bar{a}_D)$ which exhibits a symmetry under $\bigl(\begin{smallmatrix} a_D \\ a \end{smallmatrix}\bigr) \to \bigl(\begin{smallmatrix} a \\ -a_D \end{smallmatrix}\bigr)$. This allows one to also use $a_D$ as a local coordinate on moduli space, with a different harmonic function serving as the metric. The region of the $u$-plane in which $a_D$ is a good coordinate will be discussed shortly.
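The consistency of the prepotential \eqref{prepot} with the quoted metric is easy to verify symbolically. The following sketch, restricted to real positive $a$ where $\text{Im}\,\tau$ is simple to isolate, reproduces the metric $\frac{1}{4\pi^2}[\ln(a^2/\La^2)+3]$ and locates the radius $a = \La\, e^{-3/2}$ at which it changes sign:
\begin{verbatim}
import sympy as sp

a, Lam = sp.symbols('a Lambda', positive=True)
F = sp.I / (2 * sp.pi) * a**2 * sp.log(a**2 / Lam**2)
tau = sp.diff(F, a, 2)                      # tau(a) = F''(a)
metric = sp.simplify(sp.im(tau)) / (4 * sp.pi)

print(metric)                 # expect (log(a**2/Lambda**2) + 3)/(4*pi**2)
print(sp.solve(sp.Eq(metric, 0), a))        # expect [Lambda*exp(-3/2)]
\end{verbatim}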
The metric on moduli space is preserved by real linear fractional transformations of the coordinates $\bigl(\begin{smallmatrix} a_D \\ a \end{smallmatrix}\bigr)$. Define a holomorphic vector field $\vec{a}(u) = \bigl(\begin{smallmatrix} a_D(u) \\ a(u) \end{smallmatrix}\bigr)$ over the $u$-plane. The induced metric is given by $ds^2 = -\frac{i}{2}\, \epsilon_{mn} \frac{da^m}{du}\frac{d\bar{a}^n}{d\bar{u}} \, du \, d\bar{u}$, and is preserved by monodromies $M \in SL(2,\mathbb{Z})$ that act on the vector field as $\vec{a} \to M\vec{a}$.% \footnote{ The invariance of the mass formula for BPS-saturated states under the action of monodromies implies that $M$ must be integer-valued, and that no constant vector can be added to the linear transformation $\vec{a}\to M\vec{a}$. The actual monodromy group turns out to be $\G(2) \subset SL(2,\mathbb{Z})$ which consists of matrices congruent to the identity matrix (modulo 2, taken element-wise) \cite{SW1}. } The presence of monodromy and $\mathcal{N}\,{=}\,2$ supersymmetry naturally imply a type of electric-magnetic duality. This fact is uncovered by asking the following question: if monodromies rotate $a$ into $a_D$, but $a$ belongs to a vector multiplet, then what effect does an $SL(2,\mathbb{Z})$ transformation have on $A_\mu$? In particular, consider the monodromy \begin{equation} S = \begin{pmatrix} \phantom-0 & 1 \\ -1 & 0 \end{pmatrix} \end{equation} which fully rotates $a$ into $-a_D$. In Ref.~\cite{SW1}, it was demonstrated that $S$ acts on the gauge fields as a Fourier transformation in field configuration space from $A_\mu$ to a dual gauge field $A_{D\mu}$. The form of the action remains the same in the new variables except that the effective gauge coupling inverts (as a consequence of performing a Gaussian functional integral). In summary, $SL(2,\mathbb{Z})$ acts linearly on the $\mathcal{N}\,{=}\,1$ chiral multiplets $A$ and $A_D = \partial\mathcal{F}(A)/\partial A$, and by electric-magnetic duality on the $\mathcal{N}\,{=}\,1$ chiral field strengths $W_\alpha$ and $W_{D\alpha}$. Just as $\mathcal{L}_\text{eff}^{n=2}$ contains two derivative terms for the effective theory asymptotically far out on moduli space, one can define an analogous Lagrange density for the $S$-dual effective theory, \begin{equation} \mathcal{L}_{D,\,\text{eff}}^{n=2} = -\tfrac{1}{4\pi}\,\text{Im}\biggl[ \int d^2\theta\> \tfrac{1}{2}\,\mathcal{F}_D''(A_D) W^\alpha_D W_{D\alpha} + \int d^2\theta\, d^2\bar{\theta}\> \mathcal{F}_D'(A_D)\Abar_D\biggr]. \end{equation} This description is useful precisely where the original gauge coupling blows up. Thus, the $S$-dual effective theory is valid near the massless monopole point in moduli space. The dual prepotential $\mathcal{F}_D$ may be related to the original prepotential via a Legendre transform \cite{Hen}. In practice, one may obtain $\mathcal{F}_D(A_D)$ using a method similar to the one used to obtain $\mathcal{F}(A)$. Four derivative terms in the $S$-dual effective theory are given by the Lagrange density $\mathcal{L}_{D,\,\text{eff}}^{n=4}$. The explicit form for $\mathcal{L}_{D,\,\text{eff}}^{n=4}$ is identical in structure to expression \eqref{Leff4s} except that dual chiral superfields $A_D$ appear in place of elementary ones, and the dynamics is determined by a non-holomorphic function $\mathcal{K}_D$. A priori, there is no relation between $\mathcal{K}$ and $\mathcal{K}_D$.
However, in Ref.~\cite{Hen} it was proved directly in $\mathcal{N}\,{=}\,2$ superspace that the non-holomorphic function $\mathcal{K}$ is a modular function with respect to $SL(2,\mathbb{Z})$. In particular, under an $S$ transformation (which amounts to a Fourier transformation), \begin{equation} \label{SonK} \mathcal{K}_D(-A_D, -\Abar_D) \equiv \mathcal{K}(A, \Abar)\,. \end{equation} This relation will be important later. The $\mathcal{N}\,{=}\,2$ theory has a BPS bound, $M \geq \sqrt{2}|Z|$, where $Z$ is the central charge in the extended supersymmetry algebra. The lightest states saturate this bound. A state with electric charge $n_e$ (defined by its coupling to the photon $A_\mu$) and magnetic charge $n_m$ has mass \begin{equation} \label{BPSmass} M = \sqrt{2}\,|Z|, \end{equation} where $Z(u) = a(u)\, n_e + a_D(u)\, n_m$. Since $Z$ determines particle masses, it is renormalization group invariant. The singularities in the finite $u$-plane arise from massive $\mathcal{N}\,{=}\,2$ Abelian hypermultiplets that become exactly massless at $u = \pm u_0$ \cite{SW1}. At $u_0$, a magnetic monopole with charges $(n_m,n_e) = (1,0)$ becomes massless. At $-u_0$, a dyon with charges $(n_m,n_e) = (1,-1)$ becomes massless.% \footnote{ In Ref.~\cite{SW1} these charge assignments for the extra massless multiplets were shown to pass several consistency checks. For instance, there is a monodromy around each singularity in the $u$-plane and the set of monodromies should furnish a representation of the fundamental group of the $u$-plane (with singularities deleted) in $SL(2,\mathbb{Z})$. If the monodromies around $u_0$ and $-u_0$ are calculated assuming that the hypermultiplets becoming massless are a $(1,0)$ monopole and $(1,-1)$ dyon, respectively, then the monodromies indeed generate a subgroup of $SL(2,\mathbb{Z})$. As another check, the triangle inequality and conservation of energy ensure that when the ratio $a_D/a$ is not real, the monopole and dyon cannot decay because $n_m$ and $n_e$ are relatively prime. Asymptotically far out on moduli space, there exist field configurations with magnetic charge --- these are the semiclassical monopoles. Moving in from infinity along the real $u$-axis toward $\pm u_0$, one never crosses a curve on which $a_D/a$ becomes real. This implies that whatever stable BPS-saturated states exist at infinity must also appear in the strongly-coupled region.} Let us study the monopole. According to Eq.~\eqref{BPSmass}, the monopole has a mass proportional to the coordinate $a_D$, \begin{equation} \label{Mmass} M_\text{m} = \sqrt{2}\,|a_D|. \end{equation} This mass vanishes at $u_0$ only if $a_D(u_0) = 0$. In the vicinity of the point $u_0$, the low energy effective theory must describe a $U(1)$ $\mathcal{N}\,{=}\,2$ gauge theory coupled to a massive hypermultiplet. This theory is essentially an $\mathcal{N}\,{=}\,2$ version of QED with the light monopoles playing the role of ``electrons.'' It is infrared free and the renormalization of the dual effective gauge coupling is due primarily to one-loop photon self-energy diagrams where the fields of the light hypermultiplet run around the loop. Consequently, the renormalization group implies that each decade of momentum, from a fixed UV cutoff down to the monopole mass, contributes the same amount to the inverse coupling. Since the monopole mass vanishes as $u \to u_0$, it follows that the dual effective gauge coupling $g_D$ vanishes as $u \to u_0$. 
Hence, $a_D$ is a good coordinate on moduli space in a neighborhood of the point $u_0$. To determine the dual prepotential, consider the beta function of $\mathcal{N}\,{=}\,2$ QED with a single hypermultiplet, \begin{equation} \label{betafn2} \mu\frac{dg^2_D}{d\mu} = \frac{1}{4\pi^2}\,g_D^4. \end{equation} Integrating Eq.~\eqref{betafn2} yields $g_D^{-2}(M_\text{m}) = \frac{1}{8\pi^2}\ln(\La^2/M_\text{m}^2) + \text{const.}$ This result holds for $u$ close to $u_0$. The inverse gauge coupling is related to the imaginary part of a holomorphic function $\tau_D(A_D) = \mathcal{F}_D''(A_D)$. Plugging in Eq.~\eqref{Mmass} yields \begin{equation} \text{Im}\,\tau_D(a_D) \approx -\frac{1}{2\pi}\ln\biggl(\frac{|a_D|^2}{\La^2}\biggr). \end{equation} This leading log is reproduced by a dual prepotential \begin{equation} \mathcal{F}_D(a_D) \approx -\frac{i}{4\pi}\,a_D^2\ln\biggl(\frac{a_D^2}{\La^2}\biggr). \end{equation} An explicit expression for the four derivative terms in $\mathcal{L}_\text{eff}^{n=4}$, including a formula for the non-holomorphic function $\mathcal{K}$, is discussed in Sec.~\ref{sec:lowtemp}. Because there are $R$-symmetry transformations which act on the entire vacuum manifold as $u \to -u$, the behavior of the free energy density near the massless monopole and dyon points is identical. Consequently, it is unnecessary to write down a low energy effective theory incorporating light dyons. Our low temperature analysis relies only on the weakly-coupled effective descriptions near $u = \infty$ and $u = u_0$. \section{High temperature behavior} \label{sec:hightemp} $SU(2)$ $\mathcal{N}\,{=}\,2$ gauge theory at asymptotically high temperatures, \begin{equation} \label{hierarchy} T \gg gT \gg g^2T \gg \La\,, \end{equation} is weakly coupled, $g \ll 1$, on length scales small compared to $1/(g^2T)$. [Here and henceforth, $g \equiv g(T)$ stands for the running gauge coupling evaluated at the scale $T$.] Given the hierarchy of scales \eqref{hierarchy}, one may compute the dependence of the free energy density $F/V$ on the (translationally invariant) thermal expectation value of the complex scalar field $\phi$, using effective field theory techniques and perturbation theory. In other words, one may compute the thermal effective potential for $\phi$. Minimizing $F/V$ with respect to $\vev{\phi}$ determines the number and location of equilibrium states, which in turn determine the realization of the discrete {\it R}-symmetry. High temperature perturbation theory, at $\vev\phi = 0$, shows that the non-Abelian $\mathcal{N}\,{=}\,2$ plasma has a positive Debye mass, \begin{equation} \label{staticmass} m_\text{D}^2 = 2g^2T^2 + O(g^4T^2) \,, \end{equation} and that the field $\phi$ also develops an effective thermal mass, \begin{equation} \label{scalarmass} m_\phi^2 = g^2T^2 + O(g^4T^2)\,. \end{equation} Since the curvature of the thermal effective potential for $\phi$, at $\vev\phi = 0$, equals the thermal mass $m_\phi^2$, the positive value \eqref{scalarmass} indicates that $\phi = 0$ is a local minimum of the free energy. To demonstrate that this is, in fact, the global minimum, one must evaluate the effective potential for $\phi$ arbitrarily far away from $\phi = 0$. This we do in Appendix \ref{app:hightemp}. The result is unsurprising: for asymptotically high temperatures there is a unique equilibrium state at $\vev{\phi} = 0$. 
The free energy density in this equilibrium state has the asymptotic expansion \begin{equation} \label{FEhigh3} \begin{split} F/V &= 3T^4\biggl[-\frac{\pi^2}{12} + \frac{g^2}{8} -\frac{1 + \sqrt{2}}{6\pi}\, g^3 + O(g^4)\biggr]. \end{split} \end{equation} The leading term is the ideal gas blackbody contribution. The $O(g^2)$ term comes from two-loop contributions at the momentum scale of $T$, while the $O(g^3)$ term arises from zero frequency contributions on the scale of $gT$. The positive thermal mass (squared) \eqref{scalarmass} and the unique minimum of the thermal effective potential imply that the discrete {\it R}-symmetry, which is spontaneously broken at zero temperature, is restored for sufficiently high temperature, $T \gg \La$.% \footnote {% A similar conclusion was reached in Ref.~\cite{Pastras}. However, this analysis did not include the effects of interactions. } To argue this formally, recall that the gauge invariant order parameter involves the operator $\text{tr}\,(\phi^2)$. To probe spontaneous symmetry breaking one may consider the correlator $D(\vec{x}{-}\vec{y}) \equiv \vev{\text{tr}\,(\phi(\vec{x})^\dag)^2\> \text{tr}\,(\phi(\vec{y})^2)}$, where $\vev{\dotso}$ represents an expectation in an {\it R}-symmetry invariant thermal equilibrium state (which might be a statistical mixture of two noninvariant pure states). A non-vanishing large distance limit, $ D(\vec{x}{-}\vec{y}) \mathop{\rlap{\:\:/}{\longrightarrow}} 0 $ as $|\vec{x}{-}\vec{y}|\to\infty$, would indicate breakdown of cluster decomposition and consequent spontaneous symmetry breaking. However, the positive thermal mass \eqref{scalarmass} implies that scalar correlators, evaluated at the $\vev{\phi} = 0$ global minimum of the effective potential, fall exponentially fast at distances large compared to $m_\phi^{-1}$. Consequently, $D(\vec{x}-\vec{y})$ approaches zero at large distances and does not contain a disconnected part, signaling unbroken {\it R}-symmetry. \section{Low temperature effective theory} \label{sec:lowtemp} The spectrum and low energy dynamics of $SU(2)$ $\mathcal{N}\,{=}\,2$ gauge theory varies drastically over moduli space. Far out on moduli space, near the asymptotically free vacuum, charged particle masses become arbitrarily large and the low energy dynamics contains only an Abelian massless vector multiplet. Near the strongly-coupled vacua there are, in addition to the massless vector multiplet, charged hypermultiplets consisting of light magnetic monopoles or dyons. Whether or not this extra matter affects the low energy dynamics is determined by the ratio of the monopole (or dyon) mass to the temperature. We will focus attention on three distinct regions of the free energy surface where weakly-coupled effective theories may be constructed. Using these effective theories, our goal is to compute the free energy density as a function of the moduli space coordinate $u$. \subsection{Near the singularity at infinity} \label{subsec:infty} Consider the asymptotically free region of moduli space. Near $u = \infty$, the spectrum of the theory includes very heavy BPS states satisfying $M_\text{m} \gg M_W \gg \La$. Suppose the temperature $T \ll M_W$. In this regime, it is valid to use an effective description of the low energy physics in terms of the scalar field $a(x)$ discussed in Sec.~\ref{sec:review}. The appropriate effective theory contains only the $U(1)$ vector multiplet $\mathcal{A}$ and is described by the Lagrangian $\mathcal{L}_\text{eff}$ given in Eq.~\eqref{Leffs}.
Although it is possible to construct an effective theory incorporating the additional massive charged vector multiplets (see, for instance, Ref.~\cite{SW1}), the $W$ bosons have mass $M_W = \sqrt{2}|a|$ which, by assumption, is large compared to $T$. The Boltzmann weight of these $W$ bosons exponentially suppresses their contribution to the free energy density relative to the contributions of the massless particles. Since the $W$ bosons and their superpartners form a dilute gas, it is straightforward to obtain% \footnote{ Each charged degree of freedom in the plasma generates a contribution to the free energy density equal to $f_\pm = \pm \tfrac{1}{2} T\sum_{n\in\mathbb{Z}} \int\frac{d^3p}{(2\pi)^3} \> \ln\bigl[(\omega_n^\pm)^2 + \vec{p}^{\,2} + M_W^2\bigr]$, where $\omega_n^+ \equiv 2n\pi T$ for bosons, and $\omega_n^- \equiv (2n{+}1)\pi T$ for fermions. The mass $M_W$ is the same for all particles belonging to a given multiplet. Note that $M_W$ cuts off the infrared divergence in the spatial momentum integral of the bosonic zero-frequency contribution. The temperature-dependent part of the free energy contribution $f_\pm$ is finite, and is a standard integral in statistical mechanics, $f_\pm = (T\text{-indep.}) \pm \frac{T^4}{2\pi^2} \int_0^\infty dx\, x^2 \ln\bigl(1 \mp e^{-\sqrt{x^2 + M_W^2/T^2}}\bigr)$. In a charged vector multiplet, the on-shell bosonic fields (gauge field and complex scalar) account for four real degrees of freedom. By supersymmetry the on-shell fermionic fields (two Weyl fermions) must also account for four real degrees of freedom. There are two such charged multiplets for $SU(2)$ broken to $U(1)$. Thus, $(F/V)_\text{charged} = 8\,(f_+ + f_-)$. Evaluating the leading contribution from the saddle-point at $x=0$ immediately gives the result \eqref{FEcharged}. } \begin{equation} \label{FEcharged} (F/V)_\text{charged} \approx -16 T^4\Bigl(\frac{M_W}{2\pi T}\Bigr)^{3/2}e^{-M_W/T}. \end{equation} Since the pressure is just minus the free energy density (when all chemical potentials vanish), the negative sign in this result shows that the dilute gas of heavy particles exerts a positive pressure, as it must. Eq.~\eqref{FEcharged} also indicates that the pressure decreases as $M_W/T$ grows large. This may be achieved, at fixed temperature, by moving toward large $|u|$ in the free energy surface. Also, note that the contributions of monopoles and dyons to the free energy will have even greater exponential suppression since these excitations, for large $|u|$, are much heavier than $W$ bosons. This follows from the mass ratio of electrically and magnetically charged BPS-saturated states near infinity on moduli space, $M_\text{m}/M_W = |a_D|/|a| \sim |\ln(u/\La^2)|$. Thus, as noted in Ref.~\cite{Wirstam}, one may ignore massive charged fields altogether and focus on the interactions of the massless neutral fields. To compute the effective potential $V_\text{eff}$ for the scalar field $a$, one may first integrate out thermal fluctuations of the vector multiplet fields in perturbation theory. As usual, one considers the functional integral representation for the partition function, with a constant source coupled to the scalar field, and performs a saddle point expansion. Let $a_0$ represent the translationally invariant expectation value for the scalar field in the presence of the source. By construction, $a_0$ is a solution to the Euler-Lagrange equations for a fixed value of the source.
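As an aside, the saddle-point evaluation quoted in the footnote above is easily made explicit. When $M_W/T \gg 1$ one may replace $\ln(1 \mp e^{-E}) \to \mp e^{-E}$ and expand $\sqrt{x^2 + r^2} \approx r + x^2/(2r)$ about $x = 0$, with $r \equiv M_W/T$, giving
\begin{equation}
f_+ + f_- \approx -\frac{T^4}{\pi^2}\, e^{-r} \int_0^\infty dx\; x^2\, e^{-x^2/(2r)}
= -\frac{T^4}{\pi^2}\,\frac{\sqrt{\pi}}{4}\,(2r)^{3/2}\, e^{-r}\,,
\end{equation}
so that $(F/V)_\text{charged} = 8\,(f_+ + f_-)$ reproduces Eq.~\eqref{FEcharged}. We now return to the expansion around the translationally invariant background.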
Let \begin{equation} \label{expansion} a(x) = a_0 + \tilde{a}(x). \end{equation} Substituting this decomposition into the Lagrange density $\mathcal{L}_\text{eff}^{n=2}$ [given by expression \eqref{Leff2c}] and using the prepotential \eqref{prepot} leads to \begin{equation} \label{Leff2cexpanded} \mathcal{L}_\text{eff}^{n=2} = \mathcal{L}^{(0)}_\text{free} + \mathcal{L}^{(1)}_\text{int} + \mathcal{L}^{(2)}_\text{int} + \dotsb. \end{equation} The coupling constant $g_0^2 = 4\pi^2/(\ln|a_0/\La|^2+3)$ is small in the asymptotic regime under consideration. The perturbative expansion for $V_\text{eff}$ will be a series controlled by $g_0$. The free Lagrange density is \begin{equation} \mathcal{L}^{(0)}_\text{free} = |\partial_\mu\tilde{a}|^2 + \tfrac{1}{4} F_{\mu\nu}^2 + \bigl(\tfrac{i}{2}\psi\sigma^\mu_\text{E}\partial_\mu\bar{\psi} + \tfrac{i}{2}\lambda\sigma^\mu_\text{E}\partial_\mu\bar{\lambda} + \text{H.c.}\bigr) - |F|^2 - \tfrac{1}{2} D^2. \label{eq:L0free} \end{equation} The interactions cubic and quartic in fluctuations are given by \begin{subequations} \begin{align} \frac{4\pi^2}{g_0^3}\,\mathcal{L}^{(1)}_\text{int} &= \Bigl(\frac{\tilde{a}}{a_0}+\frac{\tilde{a}^*}{\bar{a}_0}\Bigr) \mathcal{L}^{(0)}_\text{free} + \frac{i}{4} \Bigl(\frac{\tilde{a}}{a_0}-\frac{\tilde{a}^*}{\bar{a}_0}\Bigr)F_{\mu\nu} \widetilde{F}_{\mu\nu} + \Bigl(\frac{1}{a_0}\,\mathcal{O}_\text{fermi} + \text{H.c.}\Bigr), \\ \frac{4\pi^2}{g_0^{4}}\, \mathcal{L}^{(2)}_\text{int} &= - \frac{1}{2}\Bigl(\frac{\tilde{a}^2}{a_0^2}+\frac{\tilde{a}^{*2}}{\bar{a}_0^2}\Bigr) \mathcal{L}^{(0)}_\text{free} - \frac{i}{8} \Bigl(\frac{\tilde{a}^2}{a_0^2}-\frac{\tilde{a}^{*2}}{\bar{a}_0^2}\Bigr) F_{\mu\nu}\widetilde{F}_{\mu\nu} + \Bigl(-\frac{\tilde{a}}{a_0^2}\,\mathcal{O}_\text{fermi} + \frac{1}{4a_0^2}\,\lambda^2\psi^2 + \text{H.c.}\Bigr), \end{align} \label{eq:Lint}% \end{subequations} where \begin{equation} \mathcal{O}_\text{fermi} \equiv -\tfrac{i}{2}\bigl(\psi\sigma_\text{E}^\mu\bar{\psi} + \lambda\sigma_\text{E}^\mu\bar{\lambda}\bigr)\partial_\mu\tilde{a} + \tfrac{1}{2}\psi^2 F^* + \tfrac{1}{2} \lambda^2 F - \tfrac{i}{\sqrt{2}} \lambda\psi D - \tfrac{1}{\sqrt{2}}\lambda\sigma_\text{E}^{\mu\nu}\psi F_{\mu\nu} \end{equation} is a dimension five operator composed of fermion bilinears. To obtain Eqs.~\eqref{eq:L0free} and \eqref{eq:Lint}, all fluctuating fields have been rescaled by a common factor of $g_0$. In general, the contribution $\mathcal{L}_\text{int}^{(p)}$ to the interaction Lagrange density involves $2+p$ factors of fluctuating fields.
Each such term has an overall factor of $g_0^{p+2}/|a_0|^p$ multiplying operators of dimension $4+p$.% \footnote { The reason $\mathcal{L}_\text{int}^{(p)}$, for $p>0$, contains an overall factor of $g_0^{p+2}$ (instead of just $g_0^p$) is due to the fact that in the original expression \eqref{Leff2c} for $\mathcal{L}_\text{eff}^{n=2}$, the K\"ahler connection $\Gamma(a,\bar a)$ and $R(a,\bar a)$ contain an inverse power of the K\"ahler metric $\gamma(a,\bar a)$ which (because $\gamma$ coincides with $g_{\rm eff}^{-2}$) cancels the overall factor of $g_{\rm eff}^{-2}$. } A schematic expression for the effective potential is \begin{equation} \label{Veffseries} \begin{split} V_\text{eff} &= -\frac{\pi^2}{12}\,T^4 + \Bigl\langle \mathcal{L}_\text{int}^{(1)} \Bigr\rangle_0^\text{1PI} + \Bigl\langle \mathcal{L}_\text{int}^{(2)} + \bigl(\mathcal{L}_\text{int}^{(1)}\bigr)^2 \Bigr\rangle_0^\text{1PI} + \Bigl\langle \mathcal{L}_\text{int}^{(3)} + \mathcal{L}_\text{int}^{(1)}\,\mathcal{L}_\text{int}^{(2)} + \bigl(\mathcal{L}_\text{int}^{(1)}\bigr)^3 \Bigr\rangle_0^\text{1PI} \\ &\quad + \Bigl\langle \mathcal{L}_\text{int}^{(4)} + \mathcal{L}_\text{int}^{(1)}\,\mathcal{L}_\text{int}^{(3)} + \bigl(\mathcal{L}_\text{int}^{(2)}\bigr)^2 + \bigl(\mathcal{L}_\text{int}^{(1)}\bigr)^2\mathcal{L}_\text{int}^{(2)} + \bigl(\mathcal{L}_\text{int}^{(1)}\bigr)^4 \Bigr\rangle_0^\text{1PI} + \dotsb. \end{split} \end{equation} The first term in Eq.~\eqref{Veffseries} is the blackbody contribution from a massless $\mathcal{N}\,{=}\,2$ vector multiplet.% \footnote{ Note that there are no background-dependent terms in $\mathcal{L}^{(0)}_{\rm free}$. Consequently, the one-loop contribution to $V_\text{eff}$ (involving the logarithm of a functional determinant) simply gives the blackbody result. } In the remaining terms, we have omitted spacetime integrals and combinatorial coefficients. A generic term of the form $\vev{\prod_{i=1}^m\mathcal{L}_\text{int}^{(p_i)}}_0^\text{1PI}$ represents the expectation value of $m$ spacetime integrals of the interaction Lagrange densities in the unit-normalized Gaussian measure. If this term is expressed diagrammatically, then each diagram will have $m$ vertices (representing insertions of the $\mathcal{L}_\text{int}^{(p_i)}$), joined by propagators arising from $\mathcal{L}_\text{free}^{(0)}$. Only the one-particle irreducible (1PI) portion of these correlators contributes to the effective potential. The effective potential admits a double series expansion in the dimensionless coupling $g_0$ and the ratio of scales $T/|a_0|$. To this end, the expansion in Eq.~\eqref{Veffseries} has been organized by operator dimension, with the dimension 5, 6, 7, and 8 terms shown explicitly. The Gaussian measure is invariant under independent $U(1)$ phase rotations for $\tilde{a}$, $\psi$, $\lambda$, and $F$, and $\mathbb{Z}_2$ parity transformations for $A_\mu$ and $D$. These symmetries immediately imply that the dimension 5 and 7 terms in Eq.~\eqref{Veffseries} (which come with odd powers of $g_0$), and the term $\bigl\langle\mathcal{L}_\text{int}^{(2)}\bigr\rangle_0^\text{1PI}$, vanish identically. From $\bigl\langle\bigl(\mathcal{L}_\text{int}^{(1)}\bigr)^2\bigr\rangle_0^\text{1PI}$ there are eight basic diagrams that must be considered.
These ``basketball diagrams'' are shown in Figure \ref{fig:holoinfty}.% \footnote{ Although auxiliary field propagators are momentum-independent, the diagrams containing $F$ and $D$ propagators have two independent loop momenta. This is clear if the diagrams are constructed using position-space Feynman rules. Diagrams \ref{fig:holoinfty}c and \ref{fig:holoinfty}d may be thought of as originating from four-fermion interactions in the on-shell formalism. The auxiliary field is a constraint that causes these graphs to ``pinch,'' yielding a graph with a figure-eight topology. Such an on-shell diagram has two fermion loops which contribute two overall factors of $-1$; this is matched in the off-shell diagram by one fermion loop and one central auxiliary field propagator that each contribute a factor of $-1$. } All of these diagrams vanish because they reduce to the sum-integrals% \footnote{ Sum-integrals are defined as $\inlinesumint{\ell}{\pm} = T \sum_{\ell_0}\int\!\!\frac{d^3\ell}{(2\pi)^3},$ where the sum is over even ($+$) or odd ($-$) integer multiples of $\pi T$. } \begin{subequations} \label{sumints}% \begin{align} \label{sumintsA} \sumint{p}{\pm}\sumint{q}{\pm} & \> \frac{p\cdot q}{p^2 \, q^2} = 0\,, \\ \noalign{\noindent\text{or}} \sumint{p}{\pm} & 1 = 0\,. \label{sumintsB} \end{align} \end{subequations} Expression \eqref{sumintsA} vanishes by Euclidean time reflection and spatial parity invariance. To justify this, one may choose to regulate the theory by dimensional continuation, which preserves spacetime symmetries. [Or one may ignore the issue of regulation, since that is part of defining the theory at zero temperature, and focus only on the temperature-dependent part of the sum-integral. Each discrete frequency sum may be recast as a pair of contour integrals just above the real axis: one temperature-independent and the other temperature-dependent. The latter contains the appropriate statistical distribution function which dies off exponentially fast in the upper half complex plane. The temperature-dependent piece is finite and vanishes by the symmetry arguments.] Expression \eqref{sumintsB} has a scale-free spatial momentum integral that vanishes in dimensional continuation. [Alternatively, in the contour integral method, the temperature-dependent integrand is analytic in the upper half plane, so closing the contour there produces zero.] Thus, there is no $O(g_0^6 \, T^2/|a_0|^2)$ contribution to $V_\text{eff}/T^4$. This is consistent with the analysis of Ref.~\cite{Wirstam}. \FIGURE[t]{ \label{fig:holoinfty} \centerline{\includegraphics[width=4.0in]{holoinfty.pdf}} \vspace*{-20pt} \caption{Two-loop 1PI diagrams that could contribute to the effective potential at $O(g_0^6 T^6/|a_0|^2)$. Solid lines represent complex scalar fields, dashed lines represent either type of Weyl fermion, wavy lines represent Abelian gauge fields, and dotted lines represent auxiliary fields.} } Continuing on to the subsequent terms in the expansion \eqref{Veffseries} for the effective potential, one finds that contributions to $V_\text{eff}/T^4$ from dimension 8 operators first appear at $O(g_0^8\, T^4/|a_0|^4)$. The local dimension 8 piece $\langle\mathcal{L}_\text{int}^{(4)}\rangle_0$ displayed in Eq.~\eqref{Veffseries} has an overall factor of $g_0^6$, but (just as for $\langle\mathcal{L}_\text{int}^{(2)}\rangle_0$) this piece vanishes due to phase rotation symmetries.
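The split of a frequency sum into a temperature-independent piece plus an exponentially damped thermal piece, used repeatedly in these arguments, is illustrated by two standard (and easily checked) examples:
\begin{equation}
T\sum_{n\in\mathbb{Z}} \frac{1}{(2n\pi T)^2 + E^2} = \frac{1}{2E}\biggl[1 + \frac{2}{e^{E/T}-1}\biggr],
\qquad
T\sum_{n\in\mathbb{Z}} \frac{1}{\bigl((2n{+}1)\pi T\bigr)^2 + E^2} = \frac{1}{2E}\biggl[1 - \frac{2}{e^{E/T}+1}\biggr].
\end{equation}
In each case the first term is the vacuum piece (divergent once integrated over spatial momenta), while the statistical distribution function in the second term dies off exponentially for $E \gg T$; this is what renders the temperature-dependent parts of the sum-integrals finite, so that they may be analyzed purely by symmetry.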
It was pointed out in Ref.~\cite{Wirstam} that the first non-vanishing correction to the effective potential does not come from the $n\,{=}\,2$ terms in the low energy effective action, but rather from the higher derivative $n\,{=}\,4$ terms. In other words, $\mathcal{L}_\text{eff}^{n=4}$ and the non-holomorphic function $\mathcal{K}$ are more important than $\mathcal{L}_\text{eff}^{n=2}$ and the holomorphic prepotential $\mathcal{F}$, when it comes to understanding the leading dependence of the effective potential on the expectation value of the scalar field. To see this, one must use the known form of $\mathcal{K}$ \cite{deWit,Lind}, \begin{equation} \label{K} \mathcal{K}(A,\Abar) \approx \frac{c}{64} \ln\Bigl(\frac{A^2}{\La^2}\Bigr) \ln\Bigl(\frac{\Abar^2}{\La^2}\Bigr), \end{equation} with $c$ a constant. This is the leading approximation for the non-holomorphic function $\mathcal{K}$ when $a \equiv A|_{\theta = \bar{\theta} = 0} \gg \La$.% \footnote {% This term is responsible for producing four derivative terms in the effective action, such as $F^4$. (It is simpler to discuss perturbative corrections to $\int d^4\theta \, d^4\bar{\theta}\>\mathcal{K}$, rather than $\mathcal{K}$ itself, since the former is a Lagrange density and thus a direct output of renormalizing the short distance theory.) It has been shown, at least in the case of $\mathcal{N}\,{=}\,2$ QED, that there is a nonvanishing quantum correction at two-loop order to the supersymmetric completion of the $F^4$ term \cite{KuzMc1}. In our case, the $F^4$ term is given by $-4C\int d^2\theta \, d^2\bar{\theta}\> \mathcal{K}_{AA\Abar\Abar}\,W^2\bar{W}^2$, and the corresponding statement about quantum corrections to this operator may be expressed as $C = 1 + O(g_0^2)$. The loop expansion parameter in the short distance theory is the running coupling $g_0^2$, evaluated at the scale $a$, or in other words, the inverse logarithm $(\ln |a/\La|^2)^{-1}$. Far out in moduli space, this coupling is small. Such two-loop corrections lead to $O(g_0^6 T^8/|a_0|^4)$ contributions to the effective potential which will turn out to be subleading. } A couple of points regarding the formula \eqref{K} for $\mathcal{K}$ are worth mentioning. First, the concise form \eqref{K} is due to the fact that the low energy gauge group is Abelian.% \footnote{ The leading form of $\mathcal{K}$ for non-Abelian gauge group $SU(2)$ was originally determined in Ref.~\cite{deWit}. On the Coulomb branch of $SU(2)$ $\mathcal{N}\,{=}\,2$ theory, the leading form of $\mathcal{K}$ simplifies considerably, as noted in Ref.~\cite{Lind}. } Second, both $U(1)_R$ and scale transformations (of the form $\La \to b\La$) change Eq.~\eqref{K} additively by terms that are (anti)holomorphic in the chiral superfield. Such terms vanish under the full superspace integration, and therefore have no effect on the dynamics. For simplicity, we compute the contribution to the free energy density from the purely scalar terms in $\mathcal{L}_\text{eff}^{n=4}$. Such terms originate from the piece \begin{equation} \label{nonholoscalar} \mathcal{L}_\text{eff}^{n=4} \supset -\int d^2\theta \, d^2\bar{\theta}\> \mathcal{K}_{A\Abar}(A,\Abar)\left[ (D^\alpha D_\alpha A)(\bar{D}_{\dot{\alpha}} \bar{D}^{\dot{\alpha}} \Abar) + 2(\bar{D}_{\dot{\alpha}} D^\alpha A)(D_\alpha \bar{D}^{\dot{\alpha}} \Abar) \right].
\end{equation} We write expression \eqref{nonholoscalar} in components, perform the expansion \eqref{expansion}, and rescale fluctuating fields by $g_0$.% \footnote{ When chiral superfields occur in denominators, we factor out the lowest (scalar) component and expand the remaining function as a power series in $\theta$ and $\bar{\theta}$. } The resulting purely scalar terms that are both invariant under phase rotations of the fluctuation $\tilde a$, and suppressed by no more than $|a_0|^{-4}$, are given by% \footnote { Contributions from multi-point correlators of terms which are not individually invariant under phase rotations of $\tilde a$ first contribute to the effective potential at $O(g_0^6 T^8/|a_0|^4)$. This is smaller by two powers of $g_0$ than the contributions which will result from the unperturbed expectation of the $U(1)$-invariant terms \eqref{purescalar}. } \begin{equation} \label{purescalar} \begin{split} \mathcal{L}_\text{eff}^{n=4} \supset -c \, \biggl[ \frac{g_0^2}{|a_0|^2}\bigl(|\partial_\mu\partial_\nu\tilde{a}|^2 + |\partial^2\tilde{a}|^2\bigr) +\frac{g_0^4}{|a_0|^4} & \Bigl\{ (|\partial_\mu\tilde{a}|^2)^2 + |\tilde{a}|^2 |\partial_\mu\partial_\nu\tilde{a}|^2 + |\tilde{a}|^2|\partial^2\tilde{a}|^2 \\ &+ \bigl[ \tilde{a}\,(\partial_\mu\partial_\nu\tilde{a})(\partial_\mu\tilde{a}^*)(\partial_\nu\tilde{a}^*) + \text{H.c.} \bigr] \Bigr\} \biggr] \,, \end{split} \end{equation} up to total spacetime derivatives. It will turn out that only the first two terms inside the curly braces generate a non-zero contribution to $V_\text{eff}/T^4$ of $O(g_0^4 T^4/|a_0|^4)$. \FIGURE[t]{ \label{fig:feynrule} \centerline{\includegraphics{feynrule.pdf}} \vspace*{-20pt} \caption{Vertices representing leading four-derivative scalar self-interactions.} } The interaction terms \eqref{purescalar} generate quadratic and quartic interaction vertices, illustrated in Fig.~\ref{fig:feynrule}, with momentum-dependent vertex factors. Explicitly, the resulting (Euclidean space) vertex factors are \begin{subequations} \begin{align} V_1 &= \frac{2c g_0^2}{|a_0|^2} \, (p^2)^2, \\ V_2 &= \frac{c g_0^4}{|a_0|^4} \Bigl[2(p_1\cdot p_2)(p_3\cdot p_4) + 2(p_1\cdot p_4)(p_2\cdot p_3) + p_1^2\, p_2^2 + p_2^2\, p_3^2 + p_3^2\, p_4^2 + p_4^2\, p_1^2 \nonumber \\ &\qquad + (p_1\cdot p_2)^2 + (p_2\cdot p_3)^2 + (p_3\cdot p_4)^2 + (p_4\cdot p_1)^2 \label{quarticrule} \\ &\qquad + 2(p_1\cdot p_2)(p_1\cdot p_4) + 2(p_2\cdot p_3)(p_3\cdot p_4) + 2(p_1\cdot p_2)(p_2\cdot p_3) + 2(p_1\cdot p_4)(p_3\cdot p_4) \Bigr]. \nonumber \end{align} \end{subequations} The momenta are taken along the arrows (which distinguish $\tilde{a}^*$ from $\tilde{a}$). The overall signs of both vertex factors are positive because there is a minus sign from the definition of the Euclidean path integral weight, a minus sign from the Lagrange density itself, and two factors each of $i$ and $-i$ from Fourier transforms of derivatives. \FIGURE[t]{ \label{fig:nonholoinfty} \centerline{\includegraphics[width=2.5in]{nonholoinfty.pdf}} \vspace*{-20pt} \caption{Leading scalar contributions to free energy density up to $O(g_0^4 T^8/|a_0|^4)$.} } These quadratic and quartic vertices generate the bubble diagrams shown in Figure \ref{fig:nonholoinfty}.
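To see concretely how such vertex factors arise, consider the quadratic term in expression \eqref{purescalar}. In Euclidean momentum space both four-derivative operators reduce to the same structure,
\begin{equation}
|\partial_\mu\partial_\nu\tilde{a}|^2 \;\longrightarrow\; (p_\mu p_\nu)(p_\mu p_\nu)\,|\tilde{a}(p)|^2 = (p^2)^2\,|\tilde{a}(p)|^2 ,
\qquad
|\partial^2\tilde{a}|^2 \;\longrightarrow\; (p^2)^2\,|\tilde{a}(p)|^2 ,
\end{equation}
and the sign conventions just described combine to give the positive two-point insertion $V_1 = 2c\,g_0^2\,(p^2)^2/|a_0|^2$. Closing the two legs of this insertion with a single propagator then produces a sum-integral proportional to $\inlinesumint{p}{+} (p^2)^2/p^2 = \inlinesumint{p}{+} p^2$, whose spatial momentum integral is scale-free and vanishes in dimensional continuation; this is essentially the mechanism behind the vanishing of the diagrams discussed next.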
Diagrams $I_1$ and $I_2$ vanish, since their spatial momentum integrals are scale-free. The last diagram, $I_3$, reduces to the square of a nontrivial one-loop sum-integral,% \footnote{ The last eight terms in the Feynman rule for $V_2$ in Eq.~\eqref{quarticrule} contribute terms to the expression for $I_3$ that integrate to zero. These eight terms originate from the operators $|\tilde{a}|^2|\partial^2\tilde{a}|^2$ and $\tilde{a}(\partial_\mu\partial_\nu\tilde{a})(\partial_\mu\tilde{a}^*)(\partial_\nu\tilde{a}^*) + \text{H.c.}$ in expression \eqref{purescalar}. Since these operators have no bearing on the calculation, they are omitted in Eq.~(3.6) of Ref.~\cite{Wirstam}. } \begin{equation} \begin{split} \label{I3} I_3 &= \beta V \, \frac{1}{2} \sumint{p}{+}\sumint{q}{+} \frac{c\, g_0^4}{|a_0|^4} \biggl[\frac{4(p\cdot q)^2}{p^2q^2}\biggr] = \beta V \, \frac{8 c\, g_0^4}{3|a_0|^4} \biggl(\sumint{p}{+}\frac{\vec{p}^{\,2}}{p^2}\biggr)^2, \end{split} \end{equation} where we have used rotational symmetry to replace $\frac{(p\cdot q)^2}{p^2 q^2}$ by $(1+\frac{1}{d-1})\frac{\vec{p}^{\,2}\vec{q}^{\,2}}{p^2q^2}$ inside the integrals, where $d$ is the spacetime dimension (continued infinitesimally away from $4$). In Eq.~\eqref{I3} the sum-integral in parentheses $J_+ \equiv \inlinesumint{p}{+}\frac{\vec{p}^{\,2}}{p^2}$ evaluates to $\pi^2 T^4/30$. The scalar self-interactions contribute $-(\beta V)^{-1}I_3$ to the effective potential, so \begin{equation} V_\text{eff}(a_0)\bigr\vert_\text{scalar-scalar} = -c\,\frac{2\pi^4}{675} \frac{g_0^4 T^4}{|a_0|^4} \,T^4. \end{equation} Obviously, the sign of the coefficient $c$ is of the utmost importance for interpreting the effective potential --- the sign determines whether local equilibrium is directed toward smaller or larger values of $|a_0|$. An attempt has been made to derive the non-holomorphic function $\mathcal{K}$ for $SU(2)$ $\mathcal{N}\,{=}\,2$ gauge theory and fix the value of the overall constant \cite{Ketov}. The method involves technical superfield calculations and is difficult to check for errors.% \footnote{ In the case of $SU(2)$ $\mathcal{N}\,{=}\,4$ gauge theory, three independent superfield calculations of the non-holomorphic function $\mathcal{K}$ are known, and they completely agree \cite{GonzRey,Buchbind,Tseytlin}. The calculations for the superconformal theories discussed in these works may be readily extended to $SU(2)$ $\mathcal{N}\,{=}\,2$ gauge theory. For example, Eq.~(4.9) in Ref.~\cite{Tseytlin} represents the one-loop effective action for $\mathcal{N}\,{=}\,2$ gauge theory with four fundamental hypermultiplets and gauge group $SU(2)$ Higgsed to $U(1)$. Of direct relevance to this work is the sum of the second and third terms on the right hand side of Eq.~(4.9)---this coincides with the non-holomorphic contribution to the effective action for $\mathcal{N}\,{=}\,2$ theory without matter. The $F^4$ term may be extracted by Taylor expanding the functions $\zeta(t\Psi,t\bar{\Psi})$ and $\omega(t\Psi,t\bar{\Psi})$ to zeroth order around $\Psi = \bar{\Psi} = 0$. This amounts to focusing on just the leading term in a derivative expansion. Since $\zeta(0,0) = 1/12$ and $\omega(0,0) = 0$, it follows that $c$ is positive. A positive value for $c$ agrees with the conclusion in this work. We thank the JHEP referee for explaining this to us.
} Our goal, in the next couple of pages, is to find an independent determination of the sign of $c$, since that is the crucial information needed to understand low temperature thermodynamics in this theory. A simple and physical method that fixes the sign of the coefficient $c$ is provided by studying the forward amplitude for scalar scattering \cite{Adams}. There are spinless, one-particle states in the spectrum of the theory that arise from quantum fluctuations of the field $\tilde{a}$. Let us denote the particle excitation by $\varphi$ and its antiparticle by $\bar{\varphi}$. Consider the scattering process $\varphi\bar{\varphi} \to \varphi\bar{\varphi}$. At center-of-momentum energies far below $M_W$, one may use the low energy effective action to reliably compute the scattering amplitude in a momentum expansion. The tree-level diagram for this process is derived entirely from the vertex $V_2$ shown in Figure \ref{fig:feynrule}, remembering that the Minkowski space vertex gets an extra factor of $i$ relative to its Euclidean counterpart.% \footnote{ For scattering we take all momenta as incoming (rather than following the arrows) and label them as $p_i$, $i = 1,\dotsc,4$ starting at the upper left corner and continuing counterclockwise. This amounts to flipping the signs of $p_2$ and $p_4$ in Eq.~\eqref{quarticrule}, which clearly does nothing to the whole expression. We use a Minkowski space with $-+++$ signature, and define the usual Mandelstam variables $s = -(p_1+p_2)^2$ and $t = -(p_2+p_3)^2$. The mass-shell condition is $p_i^2 = 0$. } It is worth noting that, from the point of view of the expanded low energy effective Lagrange density, the operators appearing in expression \eqref{purescalar} comprise only a subset of the irrelevant interactions. Moreover, they are not even the ones with lowest dimension. Nevertheless, the quartic operators in expression \eqref{purescalar} generate the leading contribution to the $\varphi\bar{\varphi} \to \varphi\bar{\varphi}$ scattering process. Contributions from other terms in the effective theory are suppressed by additional factors of the dimensionless coupling $g_0$. The resulting Lorentz-covariant scattering amplitude is% \footnote{ Recall that the LSZ reduction formula relates the Fourier transform of $i$ times time-ordered correlation functions to the scattering amplitude $\mathcal{M}$. Thus, $-i\mathcal{M}$ is the object to which diagrammatic rules apply. } \begin{equation} -i\,\mathcal{M}(s,t) = i\,\frac{c g_0^4}{|a_0|^4}\,(s+t)^2 + O(g_0^6). \end{equation} In the forward scattering limit, \begin{equation} \label{forward} \mathcal{A}(s) = \lim_{t \to 0^-}\mathcal{M}(s,t) = -c \, g_0^4 \frac{s^2}{|a_0|^4} + O(g_0^6). \end{equation} Now consider the contour integral \begin{equation} I = \oint_\gamma\frac{ds}{2\pi i}\> \frac{\mathcal{A}(s)}{s^3}\,. \end{equation} The analytic structure of the exact forward scattering amplitude in the complex $s$-plane is shown in Figure \ref{fig:contour}. $\mathcal{A}(s)$ must have a branch cut along the positive, real $s$-axis with a branch point that corresponds to the threshold for pair production. Since $\mathcal{N}\,{=}\,2$ gauge theory has excitations with arbitrarily low momentum, there is no mass gap and the branch point sits at the origin. Following Ref.~\cite{Adams}, one may modify the theory in the deep IR by giving a small regulator mass $m_\text{gap}$ to the $\tilde{a}$ fields. The cut then extends only down to $(2m_\text{gap})^2$.
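Incidentally, the $(s+t)^2$ structure quoted above follows from Eq.~\eqref{quarticrule} by elementary kinematics. With all momenta incoming, momentum conservation and the massless mass-shell condition give $p_1\cdot p_2 = p_3\cdot p_4 = -s/2$ and $p_2\cdot p_3 = p_1\cdot p_4 = -t/2$, while the $p_i^2\,p_j^2$ terms drop out on shell. The bracket in Eq.~\eqref{quarticrule} then collapses to
\begin{equation}
2\,\frac{s^2}{4} + 2\,\frac{t^2}{4}
+ \Bigl(\frac{s^2}{4} + \frac{t^2}{4} + \frac{s^2}{4} + \frac{t^2}{4}\Bigr)
+ 4\,\frac{st}{2}
= s^2 + t^2 + 2st = (s+t)^2 .
\end{equation}
Since $s + t + u = 0$ for massless external states, the tree amplitude is proportional to $u^2$; at this order the symmetry under $s \leftrightarrow t$, and the absence of symmetry under $s \leftrightarrow u$, are manifest.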
There is no cut along the negative real axis, since the amplitude $\mathcal{M}$ is only symmetric under interchange of $s$ and $t$, not under interchange of $s$ and $u$. By construction, the integrand $\mathcal{A}(s)/s^3$ also has a pole at the origin. \FIGURE[t]{ \label{fig:contour} \centerline{\includegraphics{contour.pdf}} \vspace*{-20pt} \caption{Analytic structure of the forward amplitude $\mathcal{A}(s)$ as a function of (complexified) center-of-momentum energy squared. $\mathcal{N}\,{=}\,2$ gauge theory is recovered by sending $m_\text{gap} \to 0$.} } The integral $I$ may be evaluated by deforming the contour $\gamma$ usefully in one of two ways: in a tight circle around the origin, yielding the residue of the pole there, or around infinity. In the latter scenario the integral along the large circular portion of the contour vanishes since $SU(2)$ $\mathcal{N}\,{=}\,2$ gauge theory is UV-complete and thus the forward amplitude (when the theory has a mass gap) grows more slowly than $s^2$ at high energies. This leaves only the contour wrapping the cut, which measures the integrated discontinuity of $\mathcal{A}(s)$ across the cut. Since $\mathcal{A}(s)$ is real along part of the real $s$-axis, the Schwarz reflection principle relates the discontinuity to the imaginary part of $\mathcal{A}(s)$ just above the cut. Consequently, \begin{equation} \label{contoureq} \tfrac{1}{2} \, \mathcal{A}''(0) = \frac{1}{\pi}\int_{4m_\text{gap}^2}^\infty ds \> \frac{\text{Im}[\mathcal{A}(s+i\epsilon)]}{s^3}\, . \end{equation} On the left-hand side of Eq.~\eqref{contoureq}, one may approximate the forward amplitude at weak coupling with the tree-level formula. Since the introduction of a gap modifies the mass-shell condition, the tree-level amplitude is given by Eq.~\eqref{forward} with additional terms of $O(m_\text{gap}^4)$ or $O( m_\text{gap}^2\, s)$. However, these additional terms have no effect on the final result, since only the unmodified $s^2$ behavior is extracted by the residue theorem. Because the tree-level amplitude Eq.~\eqref{forward} is analytic at the origin, we find $\tfrac{1}{2} \mathcal{A}''(s=0) = -c\, g_0^4/|a_0|^4$. The right-hand side of Eq.~\eqref{contoureq} involves an integral of a negative-definite quantity, as unitarity of the $S$-matrix requires that $\text{Im}[\mathcal{A}(s+i\epsilon)] < 0$.% \footnote{ One may phrase the fact that the imaginary part of the forward amplitude must be negative in terms of the optical theorem, as done in Ref.~\cite{Adams}. Ultimately, the negative sign reflects $S$-matrix unitarity, since $S = 1 - i\mathcal{M}$ and $S S^\dag = 1$ imply that $\text{Im}(\mathcal{M}) = -\tfrac{1}{2}|\mathcal{M}|^2 < 0$. See, for example, problem 17 in Ch.~3 of Ref.~\cite{Brown}. } Hence, $c$ must be positive.% \footnote{ As discussed in Ref.~\cite{Adams}, this constraint on the sign of $c$ is a special case of a more general scenario. The forward amplitude $\mathcal{A}(s)$, away from the real axis and probing $m_\text{gap}^2 \ll |s| \ll M_W^2$, has a Taylor expansion around any point $s_0$ in this region that begins as $(s-s_0)^2$ with a coefficient that is negative, up to corrections which scale as $O(|s_0|^2/M_W^2, m_\text{gap}^2/M_W^2)$. In our case, the low energy theory is weakly-coupled and this permits an analytic expansion for $\mathcal{A}(s)$ at the origin. } We conclude that the scalar self-interaction provides a negative contribution to the effective potential at $O(g_0^4\, T^8/|a_0|^4)$.
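Before assembling the remaining two-loop contributions, we record, for convenience, how the basic sum-integral $J_+$ quoted earlier is evaluated. Writing $\vec{p}^{\,2}/p^2 = 1 - \omega_n^2/(\omega_n^2 + \vec{p}^{\,2})$, using Eq.~\eqref{sumintsB}, and performing the spatial integral in dimensional continuation, $\int\!\frac{d^3p}{(2\pi)^3}\,(\vec{p}^{\,2}+\omega^2)^{-1} = -|\omega|/(4\pi)$, one finds
\begin{equation}
J_+ = -\,T\sum_{n\in\mathbb{Z}} \omega_n^2 \int\!\frac{d^3p}{(2\pi)^3}\,\frac{1}{\vec{p}^{\,2}+\omega_n^2}
= \frac{T}{4\pi}\sum_{n\in\mathbb{Z}} |2n\pi T|^3
= \frac{(2\pi T)^3\, T}{2\pi}\,\zeta(-3)
= \frac{\pi^2 T^4}{30}\,,
\end{equation}
where the divergent frequency sum is defined by analytic continuation, $\zeta(-3) = \tfrac{1}{120}$. The analogous manipulation with odd frequencies yields the fermionic sum-integral introduced below.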
There are, of course, additional interactions from $\mathcal{L}_\text{eff}^{n=4}$ that should also be considered. The local operators whose thermal expectation values lead to $O(g_0^4 T^8/|a_0|^4)$ corrections to the effective potential are all dimension eight and involve four fields. They give rise to the set of two-loop diagrams shown in Figure \ref{fig:twoloop}. The scalar-scalar contribution $I_{\text{ss}}$ in Fig.~\ref{fig:twoloop} is the just-discussed $I_3$. The other diagrams all have the same figure-eight topology and involve some pairing of scalars, fermions, and vectors running in the two loops. They were calculated in Ref.~\cite{Wirstam}, and were found to all have the same relative sign. However, the sign of these contributions asserted in Ref.~\cite{Wirstam} is opposite to our conclusion for $I_{\text{ss}}$, stemming from the fact that the single overall coefficient $c$ is negative in Ref.~\cite{Wirstam}.% \footnote{ \label{cneg}% The negativity of $c$ in Ref.~\cite{Wirstam} may be traced back to one of the references cited in that paper. The value of $c$ may be obtained from Eq.~(3.11) of Ref.~\cite{Ketov}; expressing their equation in the form of our Eq.~\eqref{Leffss} implies that $c = -1/(8\pi^2)$, which has the wrong sign. } We now review the calculations of these diagrams and adjust the conclusions in light of the fact that $c$ is actually positive. \FIGURE[t]{ \label{fig:twoloop} \centerline{\includegraphics{twoloop.pdf}} \vspace*{-20pt} \caption{Two-loop diagrams that contribute to the effective potential at $O(g_0^4 T^8/|a_0|^4)$. Solid lines represent complex scalar fields, dashed lines represent either type of Weyl fermion, and wavy lines represent Abelian gauge fields.} } The $F^4$ terms of the low energy effective Lagrange density are readily obtained from the last term in expression \eqref{Leff4s} involving four copies of the spinor-valued field strength. The purely gauge interactions that are suppressed by no more than $|a_0|^{-4}$ are \begin{equation} \mathcal{L}_\text{eff}^{n=4} \supset - \frac{c g_0^4}{16|a_0|^4}\Bigl[(F_{\mu\nu}F_{\mu\nu})^2 - (F_{\mu\nu}\widetilde{F}_{\mu\nu})^2\Bigr]. \end{equation} The corresponding diagram in Figure \ref{fig:twoloop} is $I_\text{gg}$. We find that, for an arbitrary Lorentz gauge-fixing parameter, \begin{equation} I_\text{gg} = \frac{d}{2}\, \frac{c g_0^4}{|a_0|^4} \, \beta V \, \Omega_{++}\,, \end{equation} where $\Omega_{++} \equiv \inlinesumint{p}{+}\inlinesumint{q}{+} \frac{(p\cdot q)^2}{p^2 q^2} = \frac{4}{3}J_+^2$ and $d = 4$. This is the same double sum-integral that we found in our evaluation of the scalar self-interactions. Note that $I_\text{gg} = I_\text{ss}$. The four-fermion interaction with all $\psi$'s is easily derived from the same superfield integral used to obtain the purely scalar interactions, namely expression \eqref{nonholoscalar}. The purely $\psi$ interactions that are suppressed by $|a_0|^{-4}$ are \begin{equation} \mathcal{L}_\text{eff}^{n=4} \supset -\frac{c g_0^4}{|a_0|^4} \bigl[(\bar{\psi}\bar{\sigma}_\text{E}^\mu\partial_\mu\psi) (\partial_\nu\bar{\psi}\bar{\sigma}_\text{E}^\nu\psi) - (\psi\partial_\mu\psi)(\bar{\psi}\partial_\mu\bar{\psi})\bigr]. \end{equation} The four-fermion interaction with all $\lambda$'s must have exactly the same form by $SU(2)_R$ symmetry. In Figure \ref{fig:twoloop}, the corresponding diagrams are $I_{\psi\psi}$ and $I_{\lambda\lambda}$.
We find \begin{equation} I_{\psi\psi} = I_{\lambda\lambda} = 2\,\frac{c g_0^4}{|a_0|^4}\,\beta V \,\Omega_{--}\,, \end{equation} where $\Omega_{--} \equiv \inlinesumint{p}{-} \inlinesumint{q}{-} \frac{(p\cdot q)^2}{p^2 q^2} = \frac{4}{3}J_-^2$ and $J_- \equiv \inlinesumint{p}{-} \frac{\vec{p}^{\,2}}{p^2} = -7\pi^2 T^4/240$. It is straightforward, but tedious, to find the interactions that mix scalars and vectors, or mix different types of Weyl fermion. To save ourselves some trouble we rely on the component field expression of $\mathcal{L}_\text{eff}^{n=4}$ given in Eq.~(3.6) of Ref.~\cite{Wirstam}. Based on our discussion in footnote \ref{cneg}, we shall factor out an overall $-1/(8\pi^2)$ and replace it by the constant $c$. According to Ref.~\cite{Wirstam}, the scalar-gauge and $\psi$-$\lambda$ interactions that are suppressed by $|a_0|^{-4}$ and contribute to finite temperature effects are given by% \footnote{ Our $\sigma$-matrix conventions differ from those of Ref.~\cite{Wirstam}. In particular, $\sigma_\text{E}^0(\text{ours})= -\sigma_\text{E}^0(\text{theirs})$, with similar relations for the spatial matrices. The two conventions are related by a spatial parity transformation which has the effect of conjugating $\sigma$-matrices by $\sigma^0$ (or $\bar{\sigma}^0$ as appropriate). We have done a parity transformation in order to write the mixed $\psi$-$\lambda$ interactions in our convention. } \begin{equation} \mathcal{L}_\text{eff}^{n=4} \supset -\frac{2c g_0^4}{|a_0|^4}\bigl[ (\partial_\mu\tilde{a}^*)(\partial_\nu\tilde{a}) F_{\mu\rho} F_{\nu\rho} -(\bar{\lambda}\partial_\mu\bar{\psi})(\psi\sigma_\text{E}^{\mu\nu}\partial_\nu\lambda) -(\lambda\partial_\mu\psi)(\bar{\psi}\bar{\sigma}_\text{E}^{\mu\nu}\partial_\nu\bar{\lambda})\bigr]. \end{equation} In Figure \ref{fig:twoloop}, the corresponding diagrams are $I_\text{sg}$ and $I_{\psi\lambda}$. We find \begin{equation} I_\text{sg} = 2(d{-}2)\frac{c g_0^4}{|a_0|^4}\,\beta V \,\Omega_{++}, \qquad I_{\psi\lambda} = 4\,\frac{c g_0^4}{|a_0|^4}\,\beta V \, \Omega_{--}. \end{equation} (Reassuringly, the gauge dependence cancels completely in $I_\text{sg}$.) It remains to compute the diagrams for the scalar-fermion and gauge-fermion interactions. In Figure \ref{fig:twoloop}, these contributions are $I_{\text{s}\psi}$, $I_{\text{s}\lambda}$, $I_{\text{g}\psi}$, and $I_{\text{g}\lambda}$. Since each vertex involves one type of Weyl fermion paired with its Hermitian conjugate, $SU(2)_R$ symmetry requires equivalent interactions for $\psi$ and $\lambda$. It follows that $I_{\text{s}\psi} = I_{\text{s}\lambda}$ and $I_{\text{g}\psi} = I_{\text{g}\lambda}$. The latter relation means that it is impossible for there to be any nontrivial gauge dependence in the diagrams involving gauge fields since there are no other diagrams left to cancel it. Since the complex scalar and gauge fields belong to the same supersymmetry multiplet, they have equal numbers of propagating degrees of freedom. Hence, all four diagrams must be equal. To determine their common value, consider computing the index $\text{tr}\,((-1)^F e^{-\beta H})$ at weak coupling.
In perturbation theory, the $O(g_0^4)$ contribution to the index comes from the class of diagrams shown in Figure \ref{fig:twoloop}, but with periodic temporal boundary conditions for all fields. This means that all frequency sums are taken over even integer multiples of $\pi T$. Effectively, this changes all instances of $\Omega_{--}$ and $\Omega_{+-}$ to $\Omega_{++}$. The index, which must be an integer, cannot change as the coupling $g_0$ is varied. Therefore, the $O(g_0^4)$ part of the index must be identically zero, \begin{equation} \begin{split} 0 &= \bigl[ I_\text{ss} + I_\text{gg} + I_{\psi\psi} + I_{\lambda\lambda} + I_\text{sg} + I_{\psi\lambda} + I_{\text{s}\psi} + I_{\text{s}\lambda} + I_{\text{g}\psi} + I_{\text{g}\lambda}\bigr]\Big|_\text{p.b.c.} \\ &= 16\,\frac{c g_0^4}{|a_0|^4}\, \beta V \, \Omega_{++} + 4 I_{\text{s}\psi}\big|_\text{p.b.c.}. \end{split} \end{equation} Consequently, \begin{equation} \label{Ispsi} I_{\text{s}\psi}\big|_\text{p.b.c.} = -4\,\frac{c g_0^4}{|a_0|^4}\, \beta V\, \Omega_{++}\, . \end{equation} This is a contribution to the index, but what we really want is the contribution to the partition function $\text{tr}\,(e^{-\beta H})$. The functional representation of the trace requires antiperiodic temporal boundary conditions for fermions. Since each diagram involves a single fermion loop, we only need to turn one of the frequency sums in Eq.~\eqref{Ispsi} into a sum over odd integer multiples of $\pi T$. Thus, the thermal contributions are given by \begin{equation} I_{\text{s}\psi} = I_{\text{s}\lambda} = I_{\text{g}\psi} = I_{\text{g}\lambda} = -4\,\frac{c g_0^4}{|a_0|^4} \, \beta V \, \Omega_{+-} \,, \end{equation} where $\Omega_{+-} \equiv \inlinesumint{p}{+} \inlinesumint{q}{-} \frac{(p\cdot q)^2}{p^2 q^2} = \frac{4}{3}J_+ J_-$. Having deduced the values of the diagrams in Figure \ref{fig:twoloop}, and knowing that $c$ is positive, we can now understand how $\mathcal{N}\,{=}\,2$ gauge theory equilibrates at low temperature. The free energy density, viewed as a functional of $a_0$, is the effective scalar potential after having integrated out all thermal fluctuations. Adding the blackbody and (undetermined) higher-order contributions yields \begin{equation} \label{FEneutral} (F(a_0)/V)_\text{neutral} = \biggl[-\frac{\pi^2}{12} - \frac{\pi^4}{24}\frac{c g_0^4 T^4}{|a_0|^4} + O\Bigl(\frac{g_0^6 \, T^4}{|a_0|^4}\Bigr)\biggr]T^4. \end{equation} The `neutral' subscript is just a reminder that this is the contribution from the neutral degrees of freedom described by the low-energy effective Abelian theory; the heavy charged degrees of freedom add the Boltzmann suppressed contribution \eqref{FEcharged}. A key feature of the result \eqref{FEneutral} is that the free energy density decreases (becomes more negative) as one moves toward smaller values of $|a_0|$. The subleading $-T^8/|a_0|^4$ behavior of the free energy density has a three-loop origin in the microscopic description of the theory. Analogous behavior is also observed in IIB supergravity calculations for the semiclassical region of $SU(2)$ $\mathcal{N}\,{=}\,2$ theory \cite{deMello} and in the $SU(N_{\rm c})$ $\mathcal{N}\,{=}\,4$ theory with $N_{\rm c} \to \infty$ and strong 't Hooft coupling \cite{Tseyt}.% \footnote{ In Ref.~\cite{Tseyt}, the supergravity interaction potential between a stack of $N_{\rm c}$ coincident non-extremal D3-branes and a single ``probe'' D3-brane was computed.
This potential was interpreted as arising from the Wilsonian effective action for the massless modes obtained by integrating out the massive modes of $\mathcal{N}\,{=}\,4$ theory on its Coulomb branch. The potential was found to be attractive. This implies that the leading dependence of the free energy on the scalar expectation value also comes with a minus sign. Indeed, Eq. (3.6) of Ref.~\cite{Tseyt} shows this explicitly. It was also argued that the weak-coupling expansion for the free energy density contains a nontrivial $T^8/|a_0|^4$ term. } It is also worth noting that the $F^4$ interaction perturbs the free Hamiltonian by \begin{equation} -\frac{c \, g_0^4}{64|a_0|^4} \int d^3x\> (F_{\mu\nu}+\widetilde{F}_{\mu\nu})^2 (F_{\rho\sigma}-\widetilde{F}_{\rho\sigma})^2 \end{equation} which is negative semi-definite since $c$ is positive. Therefore, the $F^4$ terms lower the classical energy. [This does not mean that the spectrum is unbounded below --- the signs of higher powers of $F^2$ become important for large field strengths.] Reassuringly, similar behavior is found in other effective theories with Abelian gauge fields ({\it e.g.}, the Born-Infeld action for a $U(1)$ gauge field localized to a D-brane, or the Euler-Heisenberg action for QED) \cite{Adams}. As a further consistency check, the sum of the free energy density contributions at low temperature from charged and neutral fields matches the high temperature result. More precisely, the expressions for $F/V$ far out on the free energy surface at $T \ll M_W$, and at $T \gg M_W$, match to leading order in the coupling. This follows from applying the asymptotic formula $h(\rho) \sim -\frac{4}{\pi^2}(2\rho)^{3/2}e^{-\pi\rho}$ (derived in Ref.~\cite{YamYaf}) to Eq.~\eqref{FEhigh2}, and comparing that to the sum of Eqs.~\eqref{FEcharged} and \eqref{FEneutral}. Lastly, let us make explicit why instanton contributions to the prepotential may be ignored compared to the non-holomorphic function $\mathcal{K}$. Far out on moduli space, one-instanton corrections to $\mathcal{F}(a)$ take the form $f_1\, a^2(\La/a)^4$ with $f_1 \neq 0$ \cite{SW1}. The off-shell form of $\mathcal{L}_\text{eff}^{n=2}$ given in expression \eqref{Leff2c} involves only the real and imaginary parts of $\mathcal{F}''(a)$ and its derivatives. After inserting Eq.~\eqref{expansion}, suitably redefining the coupling constant to include one-instanton effects, and expanding in powers of $\tilde{a}/a_0$, the terms containing $f_1$ and at least one field $\tilde{a}$ begin at $O(\La^4/a_0^5)$. This leads to a subleading power correction relative to the leading result \eqref{FEneutral}. \subsection{Near the massless monopole singularity} We now switch attention to the strongly-coupled region of moduli space. Vacua near $u = u_0$ have spectra that include two types of BPS states: electrically charged $W$ bosons with mass $M_W/\La \sim O(1)$ and magnetically charged monopoles with mass $M_\text{m}/\La \sim O((u/u_0)-1)$. We assume the temperature is far below the strong scale, $T \ll \La$, but this leaves the freedom to consider two distinct regimes: (\textit{i}) $T \gg M_\text{m}$ (hot monopoles), or (\textit{ii}) $T \ll M_\text{m}$ (cold monopoles). In terms of the monopole dynamics, case (\textit{i}) is a high temperature regime, so it is natural to construct a three-dimensional effective theory as in Appendix \ref{app:hightemp}.
Recall that the low energy theory near $u = u_0$ is an Abelian gauge theory of $\mathcal{A}_D = (A_D, W_{D\alpha})$ with hypermultiplet matter $\mathcal{H} = (Q,Q')$ of mass $M_\text{m} = \sqrt{2}\,|a_D|$. The monopole couples locally to the dual photon, so this is simply an $\mathcal{N}\,{=}\,2$ generalization of QED in four dimensions. The effective theory is infrared free with a coupling $g_D = g_D(M_\text{m}) \ll 1$. In $\mathcal{N}\,{=}\,1$ superspace, \begin{equation} \begin{split} -g_D^2 \, \mathcal{L}_\text{QED} & = \biggl(\int d^2\theta\> \tfrac{1}{4} W_D^\alpha W_{D\alpha} + \text{H.c.}\biggr) + \int d^2\theta\, d^2\bar{\theta}\> A_D^\dag A_D \\ & + \int d^2\theta \, d^2\bar{\theta}\> \bigl( Q^\dag e^{2V_D} Q + Q'^\dag e^{-2V_D} Q' \bigr) + \biggl(-i\sqrt{2}\int d^2\theta\> Q' A_D Q + \text{H.c.}\biggr). \end{split} \end{equation} The chiral multiplets $Q$ and $Q'$ are oppositely charged under the magnetic $U(1)$ gauge group, and under an ordinary $U(1)_\text{f}$ flavor symmetry. The superpotential is uniquely fixed by $\mathcal{N}\,{=}\,2$ supersymmetry. Under $U(1)_R$ transformations, $A_D$ has charge 2, $W_{D\alpha}$ has charge 1, and $Q$ and $Q'$ are neutral. It follows that both Weyl fermions from the hypermultiplet have {\it R}-charge $-1$, so the $U(1)_R$ current is anomalous at the one-loop level. One may determine the residual symmetry from the fact that $\tau_D \sim -\frac{i}{\pi}\ln(a_D)$ must be $2\pi$-periodic under shifts of the effective theta angle. The global symmetry is $SU(2)_R \times (\mathbb{Z}_4)_R \times U(1)_\text{f}$. In components, \begin{equation} \label{QED} \begin{split} g_D^2\, \mathcal{L}_\text{QED} &= \tfrac{1}{4} F_D^{\mu\nu}F_D^{\mu\nu} + i\bar{\lambda}_D\bar{\sigma}_\text{E}^\mu\partial_\mu\lambda_D + i\bar{\psi}_D\bar{\sigma}_\text{E}^\mu\partial_\mu\psi_D + |\partial_\mu a_D|^2 \\ &\quad + |D^+_\mu q|^2 + |D^-_\mu q'|^2 + i\bar{\psi}_q \bar{\sigma}_\text{E}^\mu D_\mu^+ \psi_q + i\bar{\psi}_{q'} \bar{\sigma}_\text{E}^\mu D_\mu^- \psi_{q'} \\ &\quad + \Bigl[ i\sqrt{2}\bigl(q\bar{\lambda}_D\bar{\psi}_q + q'^*\lambda_D\psi_{q'} - q\psi_D\psi_{q'} - q'\psi_D\psi_q - a_D \psi_q \psi_{q'}\bigr) + \text{H.c.}\Bigr] \\ &\quad + 2|a_D|^2(|q|^2 + |q'|^2) + \tfrac{1}{2}\bigl(|q|^2 + |q'|^2\bigr)^2, \end{split} \end{equation} where $F_D^{\mu\nu} = \partial^\mu A_D^\nu - \partial^\nu A_D^\mu$ and $D_\mu^\pm = \partial_\mu \pm iA_{D\mu}$. The mass term $M_\text{m}$ for the hypermultiplet components appears when $a_D$ attains a translationally invariant expectation value. This $\mathcal{N}\,{=}\,2$ QED theory is valid below the momentum scale $\La$. The next most relevant energy scale is the temperature $T$. Integrating out thermal fluctuations produces a three-dimensional effective theory which we denote as ``QED$_3$.'' By construction it will reproduce gauge invariant correlators for distances large compared to $T^{-1}$.
It is given by \begin{equation} Z = \int \mathcal{D} A_D^i\, \mathcal{D} A_D^0\, \mathcal{D} a_D\, \mathcal{D} q\, \mathcal{D} q'\; \exp\Bigl[-\frac{1}{g_{D,3}^2}\int_V d^3x\, \mathcal{L}_{\text{QED}_3}\Bigr]\,, \end{equation} with \begin{equation} \begin{split} \mathcal{L}_{\text{QED}_3} &= f + \tfrac{1}{4} (F_D^{ij})^2 + \tfrac{1}{2}(\partial_i A_D^0)^2 + \tfrac{1}{2} m_\text{E}^2 (A_D^0)^2 + |\partial_i a_D|^2 + m_\text{s}^2|a_D|^2 \\ &\quad + |D_i^+ q|^2 + |D_i^- q'|^2 + \bigl(m_\text{h}^2 + (A_D^0)^2 + 2|a_D|^2\bigr) \bigl(|q|^2 + |q'|^2\bigr) + \tfrac{1}{2}\bigl(|q|^2 + |q'|^2\bigr)^2 \\ &\quad +\delta U_\text{thermal}(F_D^{ij}, A_D^0, a_D, q, q'). \end{split} \end{equation} The construction is similar in spirit to that for ESYM discussed in Appendix \ref{app:hightemp}. The fields are all mass dimension one bosonic zero-frequency modes that have been rescaled so that their kinetic terms have canonical normalization. To leading order in the dual coupling $g_D$, $g_{D,3}^2 = g_D^2 T$. The covariant derivative acting on charged scalar components (originally from the hypermultiplet) is $D^\pm_i = \partial_i \pm i A_{Di}$ and the field strength is $F_D^{ij} = \partial^i A_D^j - \partial^j A_D^i$. The effective theory QED$_3$ has $U(1)$ gauge invariance, plus translation and rotation symmetry. The global symmetries are realized as follows: $\bigl(\begin{smallmatrix} q \\ q'^*\end{smallmatrix}\bigr)$ transforms as a doublet of $SU(2)_R$, $a_D \to -a_D$ under the $(\mathbb{Z}_4)_R$ generator, and $q \to e^{i\omega}q$ and $q' \to e^{-i\omega}q'$ under $U(1)_\text{f}$ for arbitrary real $\omega$. Gauge invariance allows mass terms for the various three-dimensional scalar fields. {\it R}-symmetry requires that the hypermultiplet scalars appear in the invariant combination $|q|^2 + |q'|^2$, and that there are no operators cubic in $a_D$. All other local, gauge invariant operators of mass dimension 4 or higher are lumped into $\delta U_\text{thermal}$. The electrostatic mass $m_\text{E}$, hypermultiplet mass $m_\text{h}$, and dual scalar mass $m_\text{s}$ are fixed via matching calculations. One finds \begin{equation} m_\text{E}^2 = m_\text{h}^2 = 2m_\text{s}^2 = g_D^2 T^2 + O(g_D^4 T^2)\,. \end{equation} Terms in $\delta U_\text{thermal}$ may be calculated using background field methods. Note that the tree-level scalar potential in the QED Lagrange density, given by the last line of Eq.~\eqref{QED}, vanishes when $q = q' = 0$, regardless of the value of $a_D$. So in the four-dimensional action one may expand around a saddle point $a_D = a_{D0}$ (constant) and all other fields zero. Integrating out Gaussian fluctuations around this background leads to the following effective potential from non-static modes% \footnote{ The mean-field-dependent quadratic forms in the shifted Lagrange density involve only the hypermultiplet component fields, so one does not need to fix a gauge at leading order. } \begin{multline} (T/g_{D,3}^2) U_\text{thermal}(a_{D0})\Bigr\vert_\text{all other fields zero} \\ = -\frac{\pi^2}{6}T^4 + \frac{\pi^2}{4} \biggl[\frac{M_\text{m}^2}{\pi^2 T^2} + \ln 2\biggl(\frac{M_\text{m}^2}{\pi^2 T^2}\biggr)^2+ \sum_{n=3}^\infty c_n \biggl(\frac{M_\text{m}^2}{\pi^2 T^2}\biggr)^n\biggr]T^4 + O(g_D^2T^4) \,, \end{multline} where the effective monopole mass is given by \begin{equation} M_\text{m}^2 = 2|a_{D0}|^2. \end{equation} An expression for the coefficients $c_n$ is given in Appendix \ref{app:hightemp}.
In $U_\text{thermal}$, the constant term represents the blackbody radiation from an Abelian vector and hypermultiplet, and the coefficient of the term quadratic in $a_{D0}$ agrees with $m_\text{s}^2$; everything else constitutes $\delta U_\text{thermal}$. Consider the momentum hierarchy $T \gg M_\text{m} \gg g_D T$.% \footnote { The other regime, $M_\text{m} \ll g_D T$, is unremarkable since, for sufficiently small $a_D$, the $O(g_D T)$ screening mass provides a large curvature at the origin of field space. } To integrate out massive monopole fields in QED$_3$, expand the dual scalar field around its expectation value as $a_D = \vev{a_D} + \sigma$ with $\vev{a_D} = a_{D0}$. Then \begin{equation} \begin{split} \mathcal{L}_{\text{QED}_3} &= a_D\, (-\nabla^2 + m_\text{s}^2) \, a_D^* + \tfrac{1}{2} A_D^0 \, (-\nabla^2 + m_\text{E}^2) \, A_D^0 \\ &\quad + \begin{pmatrix}q^* & q'\end{pmatrix} \begin{pmatrix}-\nabla^2 + m_\text{h}^2 + M_\text{m}^2 & 0 \\ 0 & -\nabla^2 + m_\text{h}^2 + M_\text{m}^2\end{pmatrix} \begin{pmatrix}q \\ q'^*\end{pmatrix} + \dotsb \,, \end{split} \end{equation} where the ellipsis indicates terms cubic and higher order in fluctuations. The static hypermultiplet contribution to the effective potential is \begin{equation} (T/g_{D,3}^2) U_\text{static} = 4\, I(M_\text{m}^2)T \bigl[1 + O(m_\text{h}^2/M_\text{m}^2)\bigr], \end{equation} where the function $I$ is given in Eq.~\eqref{loop}. The overall factor of 4 accounts for the four real degrees of freedom in $q$ and $q'$. The new lowest energy effective theory, valid for distances large compared to $M_\text{m}^{-1}$, is a three-dimensional $U(1)$ gauge theory with coupling $g_{D,3}^2$ which also includes a neutral real scalar $A_D^0$ with mass $m_\text{E}$ and a neutral complex scalar $a_D$ with mass $m_\text{s}$. The free energy density is obtained from the sum of $U_\text{thermal}$ and $U_\text{static}$, \begin{equation} \label{FEstrong1} \begin{split} F(a_{D0})/V &= T^4\biggl\{ -\frac{\pi^2}{6} + \frac{\pi^2}{4}\biggl[\frac{M_\text{m}^2}{\pi^2 T^2} + \ln 2\biggl(\frac{M_\text{m}^2}{\pi^2 T^2}\biggr)^2+ \sum_{n=3}^\infty c_n \biggl(\frac{M_\text{m}^2}{\pi^2 T^2}\biggr)^n\biggr] + O(g_D^2)\biggr\} \\ &\quad + M_\text{m}^3T\biggl[-\frac{1}{3\pi} + O(g_D^2T^2/M_\text{m}^2)\biggr] \\ &\quad + O((g_D T)^3T)\,. \end{split} \end{equation} Each line in Eq.~\eqref{FEstrong1} displays a contribution from one of the three momentum scales: $T$, $M_\text{m}$, and $g_D T$ (in that order). Defining a dimensionless mass ratio, \begin{equation} \rho = \frac{M_\text{m}}{\pi T} \,, \end{equation} we have \begin{equation} \label{FEstrong2} F(a_{D0})/V = \Bigl[-\frac{\pi^2}{12} + \frac{\pi^2}{4}\,h(\rho) + O(g_D^2)\Bigr]T^4, \end{equation} where $h(\rho)$ is a function that increases monotonically for all $\rho$ and is given explicitly by Eq.~\eqref{eq:h}. In the interval $1 \gg \rho \gg g_D$, where Eq.~\eqref{FEstrong2} is valid, the free energy density is minimized as the monopole mass approaches zero.% \footnote{ Ref.~\cite{Wirstam} claims that a nontrivial minimum of the free energy density exists at $M_\text{m} \sim O(g_D^2 T)$. Since $M_\text{m} \propto |u-u_0|$, this would imply an entire circle of minima in the $u$-plane centered around the massless monopole point. A continuous set of degenerate equilibrium states suggests a spontaneously broken continuous symmetry. What is it? There are only two possibilities: $U(1) \subset SU(2)_R$ or $U(1)_\text{f}$, but $a_D$ is not charged under either group.
The resolution to this puzzle lies in properly handling the contributions of bosonic zero frequency modes to the effective scalar potential. The assertion of Ref.~\cite{Wirstam} follows from balancing the positive $O(M_\text{m}^2 T^2)$ term in Eq.~\eqref{FEstrong1} against the negative $O(g_D^2 M_\text{m} T^3)$ term, and this seems to imply that a minimum occurs at $M_\text{m}/T \sim O(g_D^2)$. However, this is outside the range of validity of the calculation and is a misuse of the renormalization group. Recall that Eq.~\eqref{FEstrong1} is valid in the regime $1 \gg M_\text{m}/T \gg g_D$. For $M_\text{m}/T \ll g_D$, the Wilsonian approach requires integrating out fluctuations whose correlation lengths are set by the Debye screening length $(g_D T)^{-1}$, not $M_\text{m}^{-1}$. Therefore, it is impossible to generate effective interactions which are non-analytic in $M_\text{m}^2$. In particular, there is no term linear in $M_\text{m}$ in the effective potential. } Near the monopole singularity, $a_{D0}$ is mapped back to the gauge invariant coordinate $u$ via the linear relation $a_{D0} \approx c_0(u- u_0)$, where $c_0 = i/(2\La)$ may be determined from the elliptic curve solution \cite{SW1}. Since the free energy density decreases as $u$ approaches $u_0$, the point $u_0$ at which monopoles become massless must be a local equilibrium state. The effective theory at $u = u_0$ is infrared free, so the free energy density at this particular point is simply the sum of the blackbody contributions from a massless vector multiplet and hypermultiplet, up to corrections suppressed by the strong coupling scale, \begin{equation} F/V = -\tfrac{\pi^2}{6}\, T^4 \left(1 + O(T/\La)\right) . \end{equation} Using the discrete $R$-symmetry, this formula for $F/V$ must also hold at $u = -u_0$, the point where dyons become massless. Hence, there are two degenerate local minima of the free energy surface. Finally, let us consider case (\textit{ii}), where $T \ll M_\text{m} \ll \La$, so the monopoles are cold and heavy and must be integrated out before considering the effects of thermal fluctuations. The resulting effective theory is given to next-to-leading order in the derivative expansion by a Lagrange density \begin{equation} \mathcal{L}_{D,\,\text{eff}} = \mathcal{L}_{D,\,\text{eff}}^{n=2} + \mathcal{L}_{D,\,\text{eff}}^{n=4} + \dotsb\,, \end{equation} where the ellipsis denotes terms with $n \geq 6$. This describes the interactions of a massless $U(1)$ $\mathcal{N}\,{=}\,2$ vector multiplet $\mathcal{A}_D = (A_D, W_{D\alpha})$ for distances large compared to $M_\text{m}^{-1}$. As in Sec.~\ref{subsec:infty}, one may expand around a translationally invariant background $a_D(x) = a_{D0}$, define the small coupling $g_{D0}^2 = 8\pi^2/(\ln|\La/a_{D0}|^2 - 3)$, and compute the free energy density perturbatively in $g_{D0}$ and as an expansion in inverse powers of $a_{D0}$. Since the leading logs in $\mathcal{F}(a)$ and $\mathcal{F}_D(a_D)$ only differ in functional form by a multiplicative factor, it follows that operators from $\mathcal{L}_{D,\,\text{eff}}^{n=2}$ do not contribute to $F/V$ (aside from the trivial blackbody terms) until possibly $O(g_{D0}^8 T^8/|a_{D0}|^4)$ \cite{Wirstam}.% \footnote{ Corrections to $\mathcal{F}_D$ of the form $\La^2 \sum_{n=1}^\infty c_n (a_D/\La)^n$ arise from integrating out infinitely many massive BPS states \cite{Lerche}. This can lead to $O(T/\La)$ corrections in the free energy. We assume a separation of scales $T \ll M_\text{m} \lll \La$ so that $T/\La$ is still smaller than $(T/M_\text{m})^4$.
} The leading correction to $F/V$ comes from $\mathcal{L}_{D,\,\text{eff}}^{n=4}$. No new work is needed to find the correction because electric-magnetic duality in the form \eqref{SonK} implies that $\mathcal{K}_D$ and $\mathcal{K}$ have identical formulas. Hence, we simply adapt the result from Eq.~\eqref{FEneutral}. The free energy density functional is \begin{equation} F(a_{D0})/V = \biggl[-\frac{\pi^2}{12} - \frac{\pi^4}{24}\frac{c g_{D0}^4 T^4}{|a_{D0}|^4} + O\Bigl(\frac{g_{D0}^6 T^4}{|a_{D0}|^4}\Bigr)\biggr]T^4. \end{equation} This expression decreases as one moves toward the massless monopole point, and crosses over into the form \eqref{FEstrong2} when the monopole mass drops below $T$. \section{Mass-deformed \boldmath $SU(2)$ $\mathcal{N}\,{=}\,4$ gauge theory} \label{sec:star} A simple generalization of pure $\mathcal{N}\,{=}\,2$ gauge theory is the addition of a single massive elementary hypermultiplet in the adjoint representation. This theory (often referred to as $\mathcal{N}\,{=}\,2^*$) is controlled in the far UV by a fixed point (the conformal $\mathcal{N}\,{=}\,4$ gauge theory), but the relevant mass deformation induces running in the coupling, so that in the deep IR the theory is again pure $\mathcal{N}\,{=}\,2$ gauge theory. The Lagrange density can be obtained by adding to Eq.~\eqref{Ls} the K\"ahler term for the hypermultiplet and the superpotential \begin{equation} W = -i\frac{2}{g^2}\text{tr}\bigl(\sqrt{2}\Phi[Q,Q'] + mQQ'\bigr). \end{equation} By a field redefinition, $m$ may be chosen real. One is free to specify the value of an exactly marginal coupling in the UV. Let $q_0 = e^{2\pi i\tau_0}$, where $\tau_0 = \theta_0/(2\pi) + i 4\pi/g_0^2$ is any complex number in the upper half plane. A choice of $q_0$ defines a scale $\La_0 \sim |q_0^{1/4}m|$ where the theory evolves into the strongly-coupled pure $\mathcal{N}\,{=}\,2$ theory \cite{SW2}. We consider the limit of weak coupling, $|q_0| \ll 1$, so that a large hierarchy exists between $m$ and $\La_0$. Classically, moduli space is given by $Q = Q' = 0$ and $[\Phi,\Phi^\dag] = 0$, so once again, vacua may be described as points in the $u$-plane. Far from the origin, at $|u| \gg \La_0^2$, the weak coupling permits a mean field analysis. After applying the Higgs mechanism (for $\phi = a\,\sigma^3/2$), one may read off the spectrum from the hypermultiplet F-term contributions to the scalar potential and the Higgs kinetic term. In $\mathcal{N}\,{=}\,2$ language, the vector multiplet splits into a massless photon $\mathcal{A}^3$ plus charged $W$ bosons $\mathcal{A}^\pm$ with masses $\sqrt{2}|a|$. The hypermultiplet splits into a neutral component $\mathcal{H}^3$ with mass $m$ and charged components $\mathcal{H}^\pm$ with masses $|m \mp \sqrt{2}a|$. A novel feature of this spectrum is that one of the electrically charged hypermultiplets (call it an ``electron'') can become massless at $a = \pm m/\sqrt{2}$. Therefore, in addition to the singularities where either a magnetic monopole or dyon goes massless, there is an additional singularity where an electron becomes massless \cite{SW2}. This third singularity is located at $u \approx \frac 14 m^2$. A simple QED-like effective theory can be constructed near this point valid on distances $\gg m^{-1}$. 
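(As a quick consistency check on the location of this singularity, a sketch using only the classical weak-coupling relation between the gauge invariant coordinate and the Higgs expectation value, and neglecting instanton corrections: with $u = \vev{\text{tr}\,\phi^2}$ and $\phi = a\,\sigma^3/2$, \begin{equation} u \approx \text{tr}\Bigl(a\,\frac{\sigma^3}{2}\Bigr)^{\!2} = \frac{a^2}{2} \,, \qquad\text{so}\qquad u\Big|_{a = m/\sqrt{2}} \approx \frac{1}{2}\cdot\frac{m^2}{2} = \frac{m^2}{4} \,.\end{equation})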
By matching the $SU(2)$ gauge coupling onto the one-loop QED coupling at the scale $M_W \sim m$, then running the QED coupling down to the mass scale of the light electron, one obtains the prepotential for an effective Abelian theory \cite{BPP}, \begin{equation} \label{prepotstar} \mathcal{F}(a) \sim \tfrac{1}{2} \tau_0 \, a^2 + \frac{i}{4\pi}\,a^2\ln\biggl(\frac{a^2}{\La_0^2}\biggr) - \frac{i}{4\pi}\,(a-m/\sqrt{2})^2\ln\biggl(\frac{(a-m/\sqrt{2})^2}{\La_0^2} \biggr). \end{equation} The perturbative analysis of Sec.~\ref{subsec:infty} may be repeated for the prepotential \eqref{prepotstar} by expanding around a point near the third singularity, $a_0 = m/\sqrt{2} + \Delta a_0$. A similar conclusion is reached: the prepotential cannot contribute to the free energy density until at least $O(\bar{g}_0^8 \, T^8/|\Delta a_0|^4)$, where $1/\bar{g}_0^2 = 1/g_0^2 + \frac{1}{4\pi^2} \ln\bigl|\frac{m/\sqrt{2}}{\Delta a_0}\bigr|$. Instead of studying in detail four derivative terms in the effective action determined by a non-holomorphic function $\mathcal{K}$, one may argue that the third singularity must be a local minimum of the effective potential as follows. Since the electron becomes massless at $u \approx \frac 14 m^2$, the effective coupling $\bar{g}_0$ vanishes at this point and the low energy theory is free. For low temperatures, $T \ll \La_0$, the free energy density at the third singularity is simply $F/V = -\frac{\pi^2}{6}\,T^4(1 + O(T/\La_0, T/m))$.% \footnote{ Recall that free $\mathcal{N}\,{=}\,2$ vector and hypermultiplets each contribute $-\frac{\pi^2}{12}\,T^4$ to the free energy density. } Consider turning off the temperature and choosing a vacuum close to the third singularity. The spectrum still includes a massless photon, but the electron will have some small non-zero mass $m_\text{e}$. Now turn on a temperature $T \ll m_\text{e}$. It is difficult to thermally excite electrons so their contribution to $F/V$ will be exponentially suppressed. Since the effective theory near this point is weakly-coupled, we may trust the blackbody approximation to the free energy, but now this comes from a {\it single} type of $\mathcal{N}\,{=}\,2$ multiplet rather than two. Hence, the third singularity must lie deeper in the free energy surface than nearby points. Finally, we must understand the behavior of the free energy density for $|u| \gg m^2$. One cannot simply adapt the $\mathcal{N}\,{=}\,2$ result since $\mathcal{N}\,{=}\,2^*$ is UV conformal, rather than asymptotically free. The non-holomorphic function $\mathcal{K}$ for $\mathcal{N}\,{=}\,4$ theory is believed to be exactly of the form given in Eq.~\eqref{K} without suffering renormalization \cite{DS,KuzMc2,Kuz}. The coefficient is known to be $c = 1/\pi^2$ \cite{Peri}. With positive $c$, thermal fluctuations again make the asymptotic region of the free energy surface locally unstable. \section{Discussion} \label{sec:discuss} The non-trivial moduli space of $SU(2)$ $\mathcal{N}\,{=}\,2$ supersymmetric Yang-Mills theory leads to a rich variety of dynamics on multiple length scales. Analyzing the thermodynamics of the theory requires careful application of effective field theory techniques to disentangle contributions from different types of fluctuations. At high temperature, $T \gg \La$, we found a unique $\mathbb{Z}_2$-invariant equilibrium state with a free energy density given by Eq.~\eqref{FEhigh3}. At low temperatures, the flat zero-temperature ground state energy surface deforms into a non-trivial free energy surface. 
Far from the origin of moduli space, where $M_W \gg \La$, we found that an arbitrarily small temperature, $T \ll M_W$, causes the free energy surface to rise asymptotically. This corrects previously reported results \cite{Wirstam} on the thermodynamics of this theory, and implies that minima of the free energy surface must lie in the portion of moduli space where the gauge coupling is strong. By using the dual description of the theory near the massless monopole (or dyon) points, we were able to analyze the thermodynamics when $M_{\rm m} \ll T \ll \La$ and monopoles (or dyons) are hot, as well as when $T \ll M_{\rm m} \ll \La$ and monopoles (or dyons) are cold. We found that the free energy surface has degenerate local minima at the massless monopole and dyon points. Our results are summarized in Figure \ref{fig:uflow}(b). As there are no points in moduli space with enhanced gauge symmetry, the simplest scenario consistent with the above observations is to assume that there are no other local minima of the free energy density, and that the free energy surface smoothly decreases toward the massless monopole and dyon points throughout the intermediate regions in which no weak coupling description is applicable. This gives a simple, consistent picture in which the discrete $R$-symmetry is spontaneously broken at low temperatures, with two co-existing equilibrium states. The restoration of $R$-symmetry at high temperature, combined with its spontaneous breakdown at low temperature, implies that there must be a genuine thermodynamic phase transition. The transition temperature must be some pure number times $\La$, which is the only intrinsic scale in the theory. The spontaneous symmetry breaking, and consequent change in the number of distinct equilibrium states, ultimately arises from the existence of multiple special points in moduli space where equal numbers of massless states appear in the low energy spectrum. It is instructive to contrast this with $SU(N_{\rm c})$ $\mathcal{N}\,{=}\,4$ gauge theory. The moduli space of this theory is locally flat and corresponds to the orbifold $\mathbb{R}^{6(N_{\rm c}-1)}/S_{N_{\rm c}}$. Only the single vacuum state at the origin is a fixed point of the entire permutation group. At this point the theory is superconformal and has the maximal number of massless gluons in its low energy spectrum. At weak coupling, these gluons provide the largest possible order one contribution to the free energy density in the form of blackbody radiation. At any non-zero temperature, there is a unique equilibrium state. We also examined weakly-coupled $SU(2)$ $\mathcal{N}\,{=}\,2^*$ theory. At low temperature, we found that the additional hypermultiplet leads to the appearance of a third local minimum in the free energy surface. This suggests the possibility of three distinct thermal equilibrium states. They correspond to points in the $u$-plane where a hypermultiplet (either solitonic or elementary) becomes massless. All three local minima on the free energy surface have the same free energy density, $F/V = -\frac{\pi^2}{6} \, T^4$, up to corrections suppressed by the ratio of temperature to the strong coupling scale $\La_0$. As only two of the local minima are related by the discrete {\it R}-symmetry, we are unable to determine, based on our effective theory analysis, whether the three local minima are exactly degenerate, and, if not, which are the true global minima.
An obvious extension of this work would be to generalize the low temperature analysis to $SU(N_{\rm c})$ gauge groups with any number of colors. In particular, it would be interesting to study thermodynamics in the weakly-coupled $SU(N_{\rm c})$ $\mathcal{N}\,{=}\,2^*$ theory. If an $R$-symmetry phase transition exists in this theory, one could parametrize the transition temperature as $T_c = m f(N_{\rm c}, \La_0/m)$ for some dimensionless function $f$. An understanding of the large $N_{\rm c}$ limit of this formula might shed light on recent results obtained from the supergravity dual for the strongly-coupled version of the theory at finite temperature \cite{BDKL}. In the strong-coupling limit, the scale $\La_0$ is the same as the mass deformation $m$. Therefore, the critical temperature of the transition may be parametrized as $T_c = m \tilde{f}(N_{\rm c})$ for some unknown function $\tilde{f}$. A phase transition can be detected, in the large $N_{\rm c}$ limit, by finding a zero of the free energy density as a function of $m/T$. However, numerical calculations do not show any zero-crossing behavior in the interval $0 \leq m/T \lesssim 12$ \cite{Buch}. This is somewhat unexpected, as one might have expected qualitatively different behavior in the regimes $T \ll m$ and $T \gg m$ \cite{BL}. One possible resolution would be for the function $\tilde{f}(N_{\rm c})$ to scale as some (positive) power of $1/N_{\rm c}$, forcing $m/T_c$ to move off to infinity as $N_{\rm c} \to\infty$. Our work was originally motivated by the desire to better understand the nature of phase transitions in non-conformal gauge theories with supergravity duals. One of the basic questions for the thermodynamics of such theories is understanding how the large $N_{\rm c}$ limit affects the number and location of thermal equilibrium states. \acknowledgments S.P. is indebted to Andreas Karch, Ann Nelson, M\aa ns Henningson, and Sergei Kuzenko for helpful conversations and correspondence. We thank the JHEP referee for pointing out how the non-holomorphic function $\mathcal{K}$ in $\mathcal{N}=2$ theory may be obtained from existing superfield calculations in $\mathcal{N}\,{=}\,4$ theory. Comments on the manuscript from Ethan Thompson, Carlos Hoyos, and Alex Buchel are much appreciated. This work was supported in part by the U.S. Department of Energy under grant DE-FG02-96ER40956.
\section{Introduction} Ontology Matching (OM) or Ontology Alignment (OA) is the process of finding correspondences between the entities of two ontologies for the purpose of unifying data from different sources and reducing heterogeneity, making them more viable for research and development. The correspondences could be associated with element-level matching or structure-level matching, among other categories, as discussed in \cite{neutel2021towards}. In this work, we focus on the element level, where each class in the source ontology is matched to classes in the target ontology. Based on previous works, we can further categorize OM into contextual and non-contextual approaches. Non-contextual methods capture lexical similarity but fail to capture textual semantics, thereby resulting in ambiguity. With contextual approaches, the objective is to match complex pairs which are lexically different but semantically similar, and vice-versa. For example, ``Encephalopathy'' and ``Disorder of brain'' are lexically different but are used in the same context. However, ``Structure of permanent maxillary right second molar tooth'' and ``Structure of permanent mandibular right first molar tooth'' are lexically similar but semantically different. Recently, transformer-based models~\cite{vaswani2017attention} achieved state-of-the-art (SOTA) performance for several tasks in natural language processing, such as machine translation as in \cite{johnson2017google,xu2021editor,liu2021re} and question answering as in \cite{clark2019boolq}, thanks to their ability to learn textual contexts. In the field of OM, a transformer-based framework using BERT \cite{devlin2018bert} has been proposed in \cite{he2022bertmap}, which showed promising results when compared to other OM systems. Motivated by the potential of transformer models for understanding textual semantic context, the present work proposes Truveta Mapper (TM), a novel zero-shot sequence-to-sequence multi-tasking transformer-based framework for OM, with the capability of learning both the structure and semantics of the ontology graphs. OM is treated here as a translation task, where the source ontology class is translated to the path of the matching class in the target ontology. Both pre-training and fine-tuning are performed, where the pre-trained model learns the structure of the ontology graph and the semantics of each node, and the fine-tuned model learns the downstream translation tasks for OM in a zero-shot manner. In contrast to the existing OM methods, the proposed approach: \renewcommand{\theenumi}{\roman{enumi}}% \begin{enumerate} \item Supports multi-tasking, i.e., a single model is capable of matching different biomedical ontologies such as SNOMED to FMA, SNOMED to NCIT, and so on, and thereby takes advantage of transfer learning, \item Is based on zero-shot learning/prediction and performs end-to-end mapping from the source to the target, \item Reduces the time complexity for OM from quadratic to log-linear without reducing the search space of target ontologies, \item Learns the overall graph structure of all the ontologies, \item Uses a byte-level tokenizer, making it more robust towards the bias introduced by the tokenization scheme.
\end{enumerate} Primary steps in the proposed mapping solution include: SmartID and corpus generation for pre-training and fine-tuning based on the hierarchy of the ontologies, starting from a model pre-trained on a large public corpus (C4, Raffel et al.~\shortcite{raffel2020exploring}), further pre-training it on the full ontologies using Masked Language Modeling (MLM), and then fine-tuning it for the OM translation task. Empirical comparison is made with the state-of-the-art lexical matching approaches and the recent contextual models presented in \cite{OAEI22,he2022machine} on the Unified Medical Language System (UMLS) datasets as part of the New Bio-ML track for OAEI 2022. Our solution surpasses the state-of-the-art LogMap and AML models, Edit-Similarity, and the recently proposed BERTMap, AMD, LogMap-Lite, BERTMap-Lite, LSMatch, Matcha, and ATMatcher, and offers log-linear complexity in contrast to the quadratic complexity of many existing approaches. The remainder of this paper is organized as follows. Section \ref{Relwork} reviews the recent SOTA-related works on OM/OA; Section \ref{Methodology} defines the problem statement and provides a high-level understanding of our proposed approach and the ontologies used; Section \ref{TM} describes TM in detail, elaborating on pre-training, fine-tuning, zero-shot learning, and predictions; Section \ref{TM:results} presents the evaluation criteria and results, and gives insight into the overall model performance; and lastly, Section \ref{Conc} provides a detailed discussion and conclusions on the framework, and outlines our potential future work. \section{Related Work}\label{Relwork} Classical approaches are primarily based on non-contextual matching. Some notable works in this direction include Edit-Similarity \cite{deeponto}, LSMatch \cite{sharma2022lsmatch}, LogMap \cite{jimenez2011logmap}, and AgreementMakerLight (AML) \cite{faria2013agreementmakerlight}, among others. Edit-Similarity is a naïve lexical matching approach based on normalized edit similarity scores. LSMatch is another lexical matching approach based on string similarity match. LogMap and AML are two classical OM systems with leading performance in many equivalence matching tasks. These two approaches are based on lexical matching, mapping extension (adding new mappings for semantically related classes of the current mappings), and mapping repair (removing mappings that can lead to logical conflicts). However, these lexical approaches do not consider contextual semantics, resulting in ambiguity for complex scenarios. Recently, several OM systems, such as OntoEmma \cite{wang2018ontology}, DeepAlignment \cite{kolyvakis2018deepalignment}, and VeeAlign \cite{iyer2020veealign}, leveraged dense word embeddings, in which words are projected into a vector space. Word pairs with smaller Euclidean distances in the vector space have closer semantic meanings. Different techniques are used to generate these embeddings. OntoEmma and \cite{zhang2014ontology} use word2vec \cite{mikolov2013efficient}, which is trained on Wikipedia; \cite{tounsi2019ontology} uses FastText \cite{bojanowski2017enriching}; LogMap-ML \cite{chen2021augmenting} uses OWL2Vec* \cite{chen2021owl2vec}, which is a word2vec model trained on corpora extracted from the ontology with different kinds of semantics; DeepAlignment uses refined word embeddings obtained through counter-fitting; VeeAlign proposes dual embeddings using class labels.
These are primarily traditional non-contextual word embedding methods and do not consider word-level contexts, resulting in ambiguity. Some of these approaches, such as VeeAlign, are based on supervised training, which requires high-quality labeled mappings that can be challenging to obtain. Recent developments in the field have shown the potential of using context-based matching for OM. For example, \cite{neutel2021towards} employed contextual BERT embeddings to match two domain ontologies associated with occupations. Each sentence is embedded using BERT, and similarity is applied to get the scores for OM. More recently, \cite{he2022bertmap} proposed the BERTMap model, which is obtained by fine-tuning the already pre-trained BERT model. Concatenated classes/strings from inter/intra ontologies with some auxiliary data were used as the training input for the binary classification task to predict synonym/non-synonym pairs. BERTMap also uses mapping extension and repair to refine the output mappings further. The BERTMap model often outperformed non-contextual approaches such as LogMap, AML, and LogMap-ML. However, it requires quadratic time complexity, which is challenging for large ontologies. AMD \cite{wang2022amd} is another recent context-based matching approach that uses a BERT-based model to generate mappings and then filters these mappings using graph embedding techniques. Other related ontology matching systems that participated in OAEI 2022 \cite{OAEI22} are LogMap-Lite, BERTMap-Lite, Matcha, and ATMatcher. \section{Methodology}\label{Methodology} \subsection{Problem statement}\label{Methodology:ProbState} Ontology Matching (OM) or Ontology Alignment (OA) is the process of finding correspondence between the entities/classes of two ontologies \cite{he2022machine}. In this work, a new perspective is presented by treating OM as a translation task, which can be mathematically presented as $f(o_1, \mathbf{O_2}, \mathbf{A})$, where the function $f$ gives the matching target ontology class $o_2 \in \mathbf{O_2}$ in the target ontology $\mathbf{O_2}$, given a source class $o_1 \in \mathbf{O_1}$ in the input ontology $\mathbf{O_1}$, the target ontology $\mathbf{O_2}$, and the alignment task $\mathbf{A}$, where the alignment task could be equivalence or subsumption matching. The present work focuses on equivalence matching, where classes having the same semantic meaning in different ontologies are matched with each other. In Figure \ref{fig1}, we illustrate our high-level solution, where the target class $o_2 \in \mathbf{O_2}$ is obtained as a path in the target ontology graph, for a given input node representing class $o_1 \in \mathbf{O_1}$ in the input ontology. \begin{figure*}[h] \centering\includegraphics[width=12cm]{Fig1_1.png}\caption{The equivalence matching between the SNOMED class ID 78904004 – ``Chest Wall Structure'' and two FMA concepts, ``Wall of thorax'' with ID of fma10428 and ``Chest wall'' with ID of fma50060, is illustrated in this figure. TM translates from the source node encoding ``Chest Wall Structure'' in the SNOMED graph to the highlighted path ``A \ldots C \ldots F'' (representing Chest Wall) and ``A \ldots B \ldots E'' (Thoracic Wall) in the FMA ontology.
While the SNOMED graph's ``Chest Wall Structure'' node and the FMA graph's ``Chest Wall'' node have children, the FMA ontology's ``Thoracic Wall'' is considered a leaf in this graph (no children).} \label{fig1} \end{figure*} \subsection{Ontologies}\label{Methodology:Ontologies} The Ontology Alignment Evaluation Initiative (OAEI) organizes yearly ontology evaluation campaigns on different ontology matching tasks. In this work, as a part of the New Bio-ML track \cite{OAEI22}, we focus on three UMLS equivalence matching tasks: SNOMED to FMA (Body), SNOMED to NCIT (Neoplas), and SNOMED to NCIT (Pharm). Pharm, Neoplas, and Body are associated with the semantic types of ``Pharmacologic Substance'', ``Neoplastic Process'', and ``Body Part, Organ, or Organ Components'' in UMLS, respectively. Based on these semantic types, subset ontologies are also provided in \cite{OAEI22}, given as SNOMED (Body), SNOMED (Neoplas), SNOMED (Pharm), FMA (Body), NCIT (Neoplas), and NCIT (Pharm), where the first three are the source and the last three are the target ontologies in our matching task (Table \ref{tab1}). For each of the classes/nodes present in the given ontologies, the class ID is provided along with its associated label and possible synonyms (class descriptions). For example, in Figure \ref{fig1}, for SNOMED ID 78904004, the class label is ``Chest Wall Structure'', and its synonyms could be ``Thoracic Wall'' and ``Chest Wall'', among others. The UMLS tasks for these ontologies are particularly selected as they follow a hierarchical graph-type structure, which is vital for our formulation. In this work, we focus on the unsupervised setting provided for these three UMLS tasks. \begin{table} \centering \begin{tabular}{lllll} \hline Ontologies & Version & \#Classes & Subsets & \#Classes\\ \hline SNOMED & US.2021. & 358,222 & Body & 24,182 \\ & 09.01 & & Pharm & 16,045\\ & & & Neoplas & 11,271 \\ \hline FMA & V4.14.0 & 104,523 & Body & 64,726\\ \hline NCIT & V21.02d & 163,842 & Pharm & 15,250\\ & & & Neoplas & 13,956 \\ \hline \end{tabular} \caption[tab1]{Full and subset ontologies \cite{OAEI22}, same version as \cite{he2022machine}. SNOMED subsets are the source ontologies, while FMA and NCIT are the target ontologies.} \label{tab1} \end{table} \begin{figure*}[h] \includegraphics[clip, trim={0cm 0cm 0cm 3cm},scale=.5]{Training_Architecture.pdf} \caption{Training Architecture. Starting from a language model pre-trained on the C4 dataset, further pre-training is done using MLM on the full ontology graphs. The pre-trained model is then fine-tuned on downstream tasks, translating from the class descriptions (label/synonyms) to the target node path (SmartIDs). The pre-training and fine-tuning are done in a multi-task manner. The pre-training is performed on both source and target inner-ontologies, and fine-tuning is done on target subset ontologies.}\label{fig: train_arch} \end{figure*} \paragraph{Unsupervised setting.} For predictions, we are using the unsupervised setting provided in \cite{OAEI22}, where the matching pairs are divided into validation (10\%) and testing (90\%) sets. The purpose of the split is to use the validation set for hyperparameter tuning and the test set for the final evaluation. \begin{figure}[h] \includegraphics[width=8cm]{treeids.png} \caption{SmartID generation. This diagram illustrates SmartID and SynonymID generation for the Enzyme concept in the SNOMED ontology. The Enzyme node has four paths because it has multiple parents.
The shortest ID (highlighted) is chosen as the SmartID, and the others are SynonymIDs for this concept.}\label{fig: smartids} \end{figure} \section{Truveta Mapper (TM): Proposed approach for OM}\label{TM} Pre-training, fine-tuning, and zero-shot predictions are the three main steps in the proposed approach, as shown in Figures \ref{fig: train_arch} and \ref{fig: pred}. Starting from a language model pre-trained on the C4 dataset, the model is further pre-trained on the full ontologies, learning each ontology's semantics and hierarchical structure. During the fine-tuning stage, the model is trained on the downstream task structure and adapted to the domain of the subsets using the subset ontology data. The pre-training and fine-tuning steps are done in a multi-task manner on inner-ontologies, which enables extensive transfer learning (Figure \ref{fig: train_arch}). In the prediction step, given a source ontology class, the output is predicted in a zero-shot manner (Figure \ref{fig: pred}). More details are provided for each step in the subsequent subsections. \subsection{Pre-training}\label{TM:pre-training} \paragraph{SmartID generation.} An ontology can be represented in the form of a graph where each node represents a class, and the parent and child relations of the ontology serve as connections between classes. Based on this graph structure of each full ontology, SmartIDs and SynonymIDs are generated for all the classes. These are constructed by starting from the root node and traversing down through the nodes at each hierarchy level, with levels separated by ``-'', as shown in Figure \ref{fig: smartids}. Following this method, a unique ID is generated for each path traversed. As such, for ontologies like SNOMED, where there are multiple paths between the root and a given class, there could be multiple IDs for that node. In such cases, the shortest ID is considered the SmartID of that node (highlighted in yellow in Figure \ref{fig: smartids}), while the other path IDs are considered its SynonymIDs. Each node ID inherently captures the information of all its ancestors. This enables the model to trace from a broader class, starting from the root and getting more granular at each level, thus simplifying the translation task. \paragraph{Training.} After generating the SmartIDs, multi-task pre-training is done on the full ontologies using MLM by randomly masking the nodes, enabling the model to learn the hierarchy and semantics. For each ontology, multiple tasks are included in the pre-training step to learn the relations between child and parent nodes, between SmartIDs and SynonymIDs, and between labels, definitions, and synonyms and their associated SmartIDs (Figure \ref{fig: train_arch}). The ByT5 model \cite{xue2022byt5}, a token-free variation of mT5 \cite{xue2020mt5} which supports multi-task training, is used as the model structure for both pre-training and fine-tuning. Without preprocessing, ByT5 operates directly on the raw text, converting it into UTF-8 bytes. ByT5 does not suffer from tokenization bias and can thereby represent medical terminology faithfully. The pre-training dataset has 2,406,456 instances, with 52.5\% SNOMED, 38.4\% NCIT, and 9.1\% FMA ontologies. The model is trained for 3 epochs, with a masking percentage that increases linearly over time, starting at 10\% and reaching 35\% in the final batch. The pre-training is done on 8 V100 32GB Nvidia GPUs with a batch size of 20, using a learning rate of 1e-3 with a linear decay scheduler and the AdamW optimizer.
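To make the SmartID construction concrete, a minimal Python sketch is given below. It is an illustrative stand-in rather than our production code: the per-level numbering scheme (\texttt{child\_index}) and the root index ``0'' are assumptions, while the ``-'' separator, the handling of multiple parents, and the shortest-path rule follow the description above.

\begin{verbatim}
def all_path_ids(node, parents, child_index):
    """Enumerate every root-to-node path as a '-'-joined ID string.

    parents:     dict mapping a node to the list of its parents ([] for root).
    child_index: dict mapping (parent, child) to the child's position under
                 that parent (the exact numbering scheme is an assumption).
    """
    if not parents[node]:                    # reached the root
        return ["0"]                         # root's own index (assumed)
    ids = []
    for p in parents[node]:                  # multiple parents => multiple paths
        for prefix in all_path_ids(p, parents, child_index):
            ids.append(prefix + "-" + str(child_index[(p, node)]))
    return ids

def smart_and_synonym_ids(node, parents, child_index):
    """The shortest path ID is the SmartID; the remaining are SynonymIDs."""
    ids = sorted(all_path_ids(node, parents, child_index),
                 key=lambda s: (len(s), s))
    return ids[0], ids[1:]
\end{verbatim}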
\subsection{Fine-tuning}\label{TM:fine-tuning} The fine-tuning step aims to train the model on the downstream OM tasks. Only the target subset ontologies, i.e., NCIT (Pharm), NCIT (Neoplas), and FMA (Body), are used for fine-tuning, using the same SmartIDs generated in the pre-training step. The training data of each target sub-ontology is augmented using the exact matches present in the labels and synonyms of other subset ontologies. We also take advantage of older ontology versions to add more synonyms to each target label. This expands the training corpus, enriches the data with minimal processing, and helps to perform more comprehensive learning. After the data augmentation for all the target sub-ontologies, fine-tuning is performed only on these target sub-ontology corpora, i.e., NCIT (Pharm), NCIT (Neoplas), and FMA (Body). \begin{figure*}[h] \includegraphics[clip, trim={4cm 6cm 7cm 5cm}, scale=0.7]{Prediction.pdf} \caption{Zero-shot predictions. Given a source term and the assigned translation task (e.g., SNOMED to FMA), the output is generated in two steps: a Prediction step and a Validation step. In the Prediction step, a potential target candidate is generated along with the embeddings associated with the source term. In the Validation step, the target candidate class is again passed through our translation model to generate embeddings. Based on the source and target term embeddings, a similarity score between the source and target candidate is obtained. This is done in a zero-shot manner with a time complexity of $O(\log(n))$.}\label{fig: pred} \end{figure*} For each node in the target ontologies, translation is performed from the target node label and all its synonyms to the target SmartIDs. The 462,789 samples that made up the fine-tuning data included 33.6\% Pharm, 13\% Neoplas, and 53.4\% Body subsets. Using 8 Nvidia V100 32GB GPUs with a batch size of 20, the fine-tuning took around 21 epochs. For the fine-tuning, a learning rate of 1e-3 with a linear decay scheduler and a warm-up of 1.5 epochs is used, together with the AdamW optimizer with an eps of 1e-8 and a weight decay of 1e-2. For the validation set, 10\% of the test data is used, with 46.6\% Pharm, 18.5\% Neoplas, and 34.9\% Body. \subsection{Zero-shot Predictions}\label{TM:predictions} TM is a multi-task model with the capability to translate between multiple ontologies, from the input source class labels/synonyms to target SmartIDs. For inference, we have performed zero-shot predictions from the source ontology classes to target ontology SmartIDs. One of the main advantages of our proposed TM is that, given an input term with a specified task identifier, it is able to predict the best possible match from the target ontology with $O(\log(n))$ complexity, where $n$ corresponds to the size of the target ontology. As such, even without taking the confidence score of the prediction into account, TM offers high accuracy with much less inference time compared to the existing methods. For confidence scoring, two decoding techniques are typically used: greedy and beam search. However, to make the TM predictions more robust and improve model precision, we leverage semantic similarity using embeddings of source terms and predicted target candidates.
As such, the output is generated in two steps: (i) Prediction step: Given a source term, the model predicts the potential candidate in the target ontology graph, and (ii) Validation step: The embeddings are generated for the target candidate using the same model, and a similarity score is obtained between the source term and predicted target term embeddings (Figure \ref{fig: pred}). Scores are generated across all the source and predicted class/node labels and synonyms, where the synonyms are also augmented by singularizing the terms in the descriptions. The maximum generated score is considered as the similarity score. The source and target candidates are considered a valid mapping pair if their similarity score exceeds a selected threshold. As such, the proposed model takes advantage of both graph search and semantic matching. If exact match output is available, we combine it with the model predictions using the maximum similarity score. This is done since the operation can be performed in constant time, and embedding similarity would also generate a similarly maximal score. More specifically, the exact match logic is defined as follows: Given a source ontology class $o_1 \in \mathbf{O_1}$ and a target ontology $\mathbf{O_2}$, if any of the synonyms or the label of class $o_1$ matches a synonym/label of $o_2 \in \mathbf{O_2}$, then $o_1$ and $o_2$ are considered an exact match. Mathematically, the score $S$ is calculated as: \begin{equation} S = \begin{cases} 1, & \text{if } \Omega(o_1) \cap \Omega(o_2) \neq \emptyset \\ \max(Sim(\Omega(o_1),\Omega(o_2))), & \text{otherwise} \end{cases} \end{equation} where $o_2$ is the predicted ontology node/class for $o_1$, $\Omega(o_1)$ and $\Omega(o_2)$ are the sets of all labels and synonyms for nodes $o_1$ and $o_2$, respectively, and $\max(Sim(\Omega(o_1),\Omega(o_2)))$ selects the maximum cosine similarity score across all the labels and synonyms of the two nodes $o_1$ (source) and $o_2$ (predicted). \section{Results}\label{TM:results} \subsection{Evaluation criteria} The metrics commonly used for evaluating OM systems \cite{he2022machine}, Precision (P), Recall (R), and F-score, are used as the global evaluation metrics. Mathematically, \begin{equation} \begin{aligned} & P=\frac{\left| M_{out} \cap M_{ref} \right|}{\left| M_{out}\right|} \; , \;\;\;\; R=\frac{\left| M_{out} \cap M_{ref} \right|}{\left| M_{ref}\right|} \\ & F_{\beta} = (1+\beta^2)\frac{P\cdot R}{\beta^2\cdot P+R} \end{aligned} \label{Eq:global} \end{equation} where $M_{ref}$ denotes the reference mappings, consisting of matching pairs $m=(c,c')$ such that $c$ and $c'$ are two classes from the to-be-aligned ontologies $\mathbf{O_1}$ and $\mathbf{O_2}$, $M_{out}$ denotes the mappings computed by OM systems, and $\beta=1$.
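As a concrete illustration, these global metrics reduce to simple set arithmetic over the output and reference mapping sets. The following minimal Python sketch (an illustrative helper, independent of any particular OM system) computes them:

\begin{verbatim}
def global_metrics(m_out, m_ref, beta=1.0):
    """Precision, recall and F-score over sets of (source, target) pairs."""
    m_out, m_ref = set(m_out), set(m_ref)
    hits = len(m_out & m_ref)          # |M_out intersect M_ref|
    p = hits / len(m_out) if m_out else 0.0
    r = hits / len(m_ref) if m_ref else 0.0
    f = (1 + beta**2) * p * r / (beta**2 * p + r) if p + r > 0 else 0.0
    return p, r, f
\end{verbatim}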
Local evaluation metrics, $Hits@K$ and Mean Reciprocal Rank ($MRR$), introduced in \cite{he2022machine}, are also used for the current evaluation and can be represented as: \begin{equation} \begin{aligned} &Hits@K=\frac{\left| \{m \in M_{ref} | Rank(m) \leq K \} \right|}{\left| M_{ref}\right|} \\ &MRR=\frac{ \sum_{m \in M_{ref}} Rank(m)^{-1} }{\left| M_{ref}\right|} \end{aligned} \label{Eq:local} \end{equation} where $Rank(m)$ returns the ranking position of $m$ among $M_m \cup \{m\}$ according to their scores, and $M_m$ represents a set of negative mapping pairs for each source term $c$ in $M_{ref}$, such that $(c,c''_i) \in M_m$ with $i \in \{1,2,...,100\}$, where $c''_i$ are the 100 negative output candidates from the target ontologies for each source term $c$ in $M_{ref}$. As such, the Hits and MRR values would differ for different selections of the 100 negative samples. We have published the results of our model based on the $M_m$ set provided in \cite{he2022machine} for a fair comparison. To provide a more robust measure of local metrics, we also report the overall accuracy, although this is not provided for any of the other models. Accuracy here can be mathematically presented as: \begin{equation} Accuracy=\frac{\left| \{m \in M_{ref} | Pred(c) = c' \} \right|}{\left| M_{ref}\right|} \label{Eq:local_new} \end{equation} where $m=(c,c')$ represents matching pairs in the $M_{ref}$ set, and $Pred(c)$ refers to the target candidate predicted by the model, given an input term $c$. \paragraph{Baselines. } Results are compared with the SOTA approaches: Edit-Similarity, LogMap, AML, BERTMap \cite{he2022machine}, and the recently published results in \cite{OAEI22}. To be consistent, the evaluation for P, R, F-score, Hit@1, and MRR is done using the \cite{deeponto} library. \subsection{Prediction Results} Prediction results are shown in Tables \ref{tab:res1}--\ref{tab:res3} for the three equivalence OM tasks, from SNOMED to FMA (Body), SNOMED to NCIT (Pharm), and SNOMED to NCIT (Neoplas), respectively. The results show the precision, recall, F-score, Hit@1, MRR, and accuracy for TM and the baseline approaches presented in \cite{he2022machine} and \cite{OAEI22} on the 90\% of test data in the unsupervised setting. The highest numbers for each of these metrics are highlighted in the tables to emphasize which model outperforms the others in each category. The overall results illustrate that TM outperforms all the baselines for all three OM tasks in F-score, Hit@1, and MRR. A high threshold is selected to generate the most confident cross-ontology matching pairs. Note that a single unified model is trained and leveraged here to predict all the results in the form of a source class to target SmartIDs, using a unique task identifier for each task.
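For completeness, the local metrics defined earlier in this subsection admit an equally short sketch (an illustrative helper; it assumes the 1-based rank of each true target among its candidate set $M_m \cup \{m\}$ has already been computed from the model scores):

\begin{verbatim}
def local_metrics(true_ranks, k=1):
    """Hits@K and MRR given, for each reference mapping, the 1-based
    rank of the true target among the 100 negatives plus itself."""
    n = len(true_ranks)
    hits_at_k = sum(1 for r in true_ranks if r <= k) / n
    mrr = sum(1.0 / r for r in true_ranks) / n
    return hits_at_k, mrr
\end{verbatim}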
\begin{table*} \centering \begin{tabular}{lllllll} \hline Task & Precision & Recall & F-score & MRR & Hit@1 &Accuracy\\ \hline TM(Ours)$^1$ & 0.947 & 0.738 & \textbf{0.830} & \multirow{2}{*}{\textbf{0.960}} & \multirow{2}{*}{\textbf{0.942}} &\multirow{2}{*}{\textbf{0.801}}\\ TM(Ours)$^2$ & 0.960 & 0.720 & 0.823 & & &\\ \hline Edit-Similarity$^*$ &0.976 &0.660 &0.787 &0.895 &0.869 &NA\\ LogMap$^*$ &0.702 &0.581 &0.636 &0.545 &0.330 &NA\\ AML$^*$ &0.841 &0.776 &0.807 &NA &NA &NA\\ BERTMap$^*$&0.997 &0.639 &0.773 & 0.954 &0.930 &NA\\ LogMap-Lite$^{**}$ &0.967 &0.543 &0.695 &NA &NA&NA \\ AMD $^{**}$ &0.890 &0.704 &0.786 &NA &NA&NA \\ BERTMap-Lite$^{**}$ &0.976 &0.660 &0.787 &0.895 &0.869 &NA\\ Matcha$^{**}$ &0.875 &0.594 &0.707 &NA &NA &NA\\ ATMatcher $^{**}$&0.264 &0.226 &0.244 &NA &NA &NA\\ LSMatch$^{**}$ &0.809 &0.072 &0.132 &NA &NA &NA\\ \hline \multicolumn{7}{c}{\scriptsize{$^{1,2}$ are based on our proposed TM model, where the former is based on the similarity score and the latter on the greedy search score}}\\ \multicolumn{7}{c}{\scriptsize{$^*$ These numbers are based on \cite{he2022machine} and we used the same evaluation metrics for TM}}\\ \multicolumn{7}{c}{\scriptsize{$^{**}$ These numbers are based on the recently published results in \cite{OAEI22}.}}\\ \hline \end{tabular} \caption{Results for equivalence matching – SNOMED (Body) to FMA (Body).} \label{tab:res1} \end{table*} \begin{table*} \centering \begin{tabular}{lllllll} \hline Task & Precision & Recall & F-score & MRR & Hit@1 &Accuracy\\ \hline TM(Ours)$^1$& 0.972 & 0.929 & \textbf{0.950} & \multirow{2}{*}{\textbf{0.987}} & \multirow{2}{*}{\textbf{0.982}} & \multirow{2}{*}{\textbf{0.946}}\\ TM(Ours)$^2$ & 0.977 & 0.872 & 0.922 & & & \\ \hline Edit-Similarity$^*$ &0.979 &0.432 &0.600 &0.836 &0.760 &NA\\ LogMap$^*$ &0.915 &0.612& 0.733 &0.820& 0.695&NA \\ AML$^*$ &0.940 &0.615 &0.743 &NA &NA &NA\\ BERTMap$^*$&0.966 &0.606 &0.745 &0.919 &0.876 &NA\\ LogMap-Lite$^{**}$ &0.995 & 0.598 & 0.747 &NA &NA &NA\\ AMD $^{**}$ &0.962 & 0.745 & 0.840 &NA &NA &NA\\ BERTMap-Lite$^{**}$ &0.979 & 0.432 & 0.600 & 0.836 & 0.760 &NA\\ Matcha$^{**}$ &0.941 & 0.613 & 0.742 &NA &NA &NA\\ ATMatcher $^{**}$&0.937 & 0.566 & 0.706 &NA &NA &NA\\ LSMatch$^{**}$ &0.982 & 0.551 & 0.706 &NA &NA &NA\\ \hline \multicolumn{7}{c}{\scriptsize{$^{1,2}$ are based on our proposed TM model, where the former is based on the similarity score and the latter on the greedy search score}}\\ \multicolumn{7}{c}{\scriptsize{$^*$ These numbers are based on \cite{he2022machine} and we used the same evaluation metrics for TM}}\\ \multicolumn{7}{c}{\scriptsize{$^{**}$ These numbers are based on the recently published results in \cite{OAEI22}.}}\\ \hline \end{tabular} \caption{Results for equivalence matching – SNOMED (Pharm) to NCIT (Pharm).} \label{tab:res2} \end{table*} \begin{table*} \centering \begin{tabular}{lllllll} \hline Task & Precision & Recall & F-score & MRR & Hit@1 &Accuracy\\ \hline TM(Ours)$^1$& 0.809 & 0.795 & \textbf{0.802} & \multirow{2}{*}{\textbf{0.962}} & \multirow{2}{*}{\textbf{0.944}} & \multirow{2}{*}{\textbf{0.802}}\\ TM(Ours)$^2$& 0.812 & 0.773 & 0.792 & & & \\ \hline Edit-Similarity$^*$ &0.815 & 0.709 & 0.759 & 0.900 & 0.876 &NA \\ LogMap$^*$ &0.823 & 0.547 & 0.657 & 0.824 & 0.747&NA \\ AML$^*$ &0.747 & 0.554 & 0.636 &NA &NA&NA \\ BERTMap$^*$&0.655 & 0.777 & 0.711 & 0.960 & 0.939&NA\\ LogMap-Lite$^{**}$ &0.947 & 0.520 & 0.671 &NA &NA&NA \\ AMD $^{**}$ &0.836 & 0.534 & 0.652 &NA &NA &NA\\ BERTMap-Lite$^{**}$ &0.815 & 0.709 & 0.759 & 0.900 & 0.876&NA \\ Matcha$^{**}$ &0.754 & 0.564
& 0.645 &NA &NA &NA\\ ATMatcher $^{**}$&0.866& 0.284 & 0.428 &NA &NA &NA\\ LSMatch$^{**}$ &0.902 & 0.238 & 0.377 &NA &NA&NA \\ \hline \multicolumn{7}{c}{\scriptsize{$^{1,2}$ are based on our proposed TM model, where the former is based on the similarity score and the latter on the greedy search score}}\\ \multicolumn{7}{c}{\scriptsize{$^*$ These numbers are based on \cite{he2022machine} and we used the same evaluation metrics for TM}}\\ \multicolumn{7}{c}{\scriptsize{$^{**}$ These numbers are based on the recently published results in \cite{OAEI22}.}}\\ \hline \end{tabular} \caption{Results for equivalence matching – SNOMED (Neoplas) to NCIT (Neoplas).} \label{tab:res3} \end{table*} There are two TM results presented in the given tables, based on different scoring schemes. TM$^2$ is based on greedy graph search scores with softmax probabilities using temperature scaling. TM$^1$ is based on the prediction scheme shown in Figure \ref{fig: pred} and described previously in Subsection \ref{TM:predictions}, taking advantage of both graph search and semantic similarity. It can be seen that both of our methods surpass the SOTA for all the tasks, but TM$^1$ is more robust and shows significant improvements compared to any of the existing methods. To be precise, TM$^1$ improves the F-score by 2.3\% over the second-best result (AML) on Body, by 11.0\% on Pharm (compared to AMD), and by 4.3\% on Neoplas (compared to BERTMap-Lite and Edit-Similarity). Also, it should be noted that, TM aside, none of these methods is SOTA on all the tasks; TM is thus the only single model that is SOTA for all tasks. For generating the local metrics Hit@1 and MRR, TM is used to generate the embedding similarity score of input terms in the test set and their corresponding candidates in the $M_m \cup \{m\}$ set, using the appropriate task identifier. We also outperform all the models on MRR and Hit@1. In addition to the above metrics, we also report the accuracy metric, which is consistent, robust, and applicable not only to the proposed approach but also to other machine learning models. For this metric, the TM predictions are obtained across the entire target ontology without using any smaller subset of negative samples from the test set. This is achieved with $O(k\log(n))$ time complexity, where $k$ is the size of the test set, and $n$ is the size of the target ontology. \section{Conclusions and Discussions}\label{Conc} This work presents a new approach to OM by treating the OM process as a translation task and performing multi-task pre-training, fine-tuning, and predictions in a zero-shot, unified, and end-to-end manner. The proposed approach takes advantage of transfer learning across different ontologies and does not require manual annotations for training. Additionally, the trained model understands the semantics of the text as well as the structure of the ontologies. We show that our proposed method outperforms Edit-Similarity, LogMap, AML, BERTMap, and the OM frameworks recently proposed in the OM22 conference \cite{OAEI22} in all the tasks.
Our approach provides several advantages: (1) It reduces the time complexity to log-linear, as opposed to quadratic in the existing approaches\footnote{Note that BERTMap reduces the time complexity from $O(n^2)$ in traditional approaches to $O(kn)$, where $k\ll n$, with an additional preprocessing step: it considers only a small portion of target subset ontology classes with at least one subword token in common with the source class candidate. This adds dependency on the tokenization hyperparameters and could be error-prone, since some semantically matching cases with lexical variations could get filtered out in this process. Such a limitation does not exist in TM, since it performs matching from source to target without reducing the target corpora size. The time complexity of TM is $O(n\log(n))$, where $n$ represents the number of nodes in the target ontology graph (the same as the number of classes), noting that a single search in a tree structure with $n$ nodes can be performed in $O(\log(n))$ time.}, (2) It does not require additional post-processing, as we do not employ mapping extension or mapping repair, in contrast to the other methods, (3) It does not require any manually labeled cross-ontology matching pairs, due to zero-shot learning, (4) One unified framework is used as a result of multi-tasking, which makes it easier to productionize these large transformer-based models, (5) It is robust toward different tokenization schemes as it uses byte-level tokenization, (6) It learns the complete ontology graphs, using the SmartIDs, which provide a more natural path for translation and would be significantly helpful for subsumption mappings. In the future, we will pre-train the starting checkpoint with a more domain-related corpus (e.g., PubMed, MIMIC-III, clinical notes) instead of the C4 dataset. Another interesting direction can be ensemble learning of existing SOTA models with TM. \bibliographystyle{named}
\section{Introduction} \label{sec:1} {In the search for laboratory analogs of black-hole radiation (cf. \citet{Barcelo2019} for an updated review), Schutzhold and Unruh \cite{schutzhold2002gravity} theoretically demonstrated how surface gravity waves, in the presence of a counter--current flow in a shallow basin, can be used to simulate phenomena around black holes (BH) in the laboratory. \citet{rousseaux2008observation} reported the first successful analog gravity experiment mimicking white hole (WH) horizons by surface gravity waves. \citet{weinfurtner2011measurement} used a localized obstacle to block the upstream propagation of a long wave, converting it into a pair of short waves with opposite-signed energy, one with positive and the other with negative energy. This experiment successfully demonstrated the thermal nature of the stimulated Hawking process at an analog WH horizon. Hawking radiation in analog wave-current systems has been further established experimentally and numerically in recent years, see \cite{euve2016observation,robertson2016scattering,euve2020scattering}. Specifically, \citet{euve2016observation} established analog quantum Hawking radiation using correlations of the randomly fluctuating free surface downstream of the obstacle. } { The objective in this paper is more modest. It aims to propose a minimal water wave analog of pairs of virtual particles with equal and opposite energy, created out of near-horizon vacuum fluctuations, where the particle with the positive energy escapes to infinity, and the one with negative energy falls into the BH, leading to BH evaporation \cite{hawking1974black,hawking1975particle}. As this phenomenon by itself is not necessarily related to wave scattering, it is enough to assume here a flow system with a constant mean counter--current over a flat bathymetry (i.e., constant water depth, see Fig.\,\ref{fig:1}).} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{fig1.png} \caption{Schematic diagram of the black-hole analog set-up. For details about the various symbols, see text.} \label{fig:1} \end{figure} \section{Pseudo-energy and pseudo-momentum} \label{sec:2} {Consider for simplicity a rectangular quasi-2D domain $(x,z)$ of size $(0,L)\times (-H,\eta')$, filled with water (assumed here to be inviscid and incompressible), where $L$ is the horizontal length, $H$ is the mean fluid depth, and $\eta'(x,t)$ denotes the free surface elevation about the mean depth (e.g. Fig.\,~\ref{fig:1}). For this setup the continuity and Euler momentum equations read: \renewcommand{\theequation}{\arabic{equation}a,b} \begin{equation} \nabla\cdot{\bf u} = 0; \qquad \DDt{\bf u} \equiv \left ( \der{}{t} + {\bf u}\cdot\nabla \right ){\bf u} = -{\nabla p\over \rho} +{\bf g}\, .
\label{eq:NSE} \end{equation} Here $\nabla \equiv (\partial/\partial x, \partial/\partial z)$ is the 2D gradient operator, ${\bf u} = (u,w)$ denotes velocity, $p$ denotes pressure, $\rho$ is the density of water (assumed constant), and ${\bf g} = -g{\hat {\bf z}}$ is the gravity vector pointing downwards.} \renewcommand{\theequation}{\arabic{equation}} { Assuming periodic boundary conditions at $x=0$ and $L$, it is straightforward to show that both the domain-integrated momentum in the $x$--direction ($P$) and the total fluid energy ($E$): \begin{subequations} \begin{align} P & = \rho \int_{x=0}^L \int_{z=-H}^{\eta'} u dx dz\,, \label{eq:PM1}\\ E & = {\rho\over 2}\int_{x=0}^L \left[ \left ( \int_{z=-H}^{\eta'} |{\bf u}|^2 dz \right) + g\left (\eta'^2-H^2 \right) \right] dx \,, \label{eq:PE1} \end{align} \end{subequations} are conserved \cite{Buhler2009}. The two terms on the RHS of Eq.\,~\eqref{eq:PE1} are respectively the fluid kinetic and potential energy. Consider a steady mean current in the negative $x$ direction: ${\bf u} = (-\ol{U},0)$ with $\ol{U}>0$, and a constant mean height $H$ satisfying hydrostatic balance. This flow is a solution of Eq.\,~\eqref{eq:NSE} and possesses the domain-integrated momentum and energy \renewcommand{\theequation}{\arabic{equation}a,b} \begin{equation} \ol P = -\rho L H \ol{U}\,,\qquad \ol E = {\rho L H \over 2} \left ( \ol{U}^2 - gH \right ) \,. \end{equation} \renewcommand{\theequation}{\arabic{equation}} } { Now suppose that on top of this steady base state we add a perturbation that is composed of surface gravity waves of the form $\eta'(x,t)=a{\rm e}^{{\rm i}(kx-\omega t)}$+ c.c., where $a$ and $k$ respectively denote amplitude and wavenumber (defined positive here), $\omega = k c_p$ denotes frequency, $c_p$ is the phase speed, and c.c. denotes the complex conjugate. Then \begin{equation} \omega = {\hat \omega} -k\ol{U} = k({\hat c}_p - \ol{U}) = k\, c_p \,, \label{eq:om1} \end{equation} where the intrinsic surface gravity wave frequency and phase speeds (denoted by hats) are given by the familiar dispersion relation: \begin{equation} {\hat \omega} = k{\hat c}_p = \pm \sqrt{gk\tanh{kH}}\, . \label{eq:disp_rel} \end{equation} Denoting the wave fields by primes so that ${\bf u} = (-\ol{U}+u',w')$, we obtain \begin{subequations} \begin{align} P & = \ol P +\delta P\, , \hspace{0.5cm} \delta P = \rho \int_{x=0}^L \int_{z=0}^{\eta'} u' dx dz\,, \label{eq:P1} \\ E & = \ol E +\delta E\, , \hspace{0.5cm} \delta E = E' -\ol{U} \delta P\, , \hspace{0.5cm} E' = {\rho\over 2} \int_{x=0}^L \left ( \int_{z=-H}^{\eta'} {|{\bf u}'|}^2 dz + g{\eta'}^2 \right )dx \,. \label{eq:E1} \end{align} \end{subequations} } { The quantities $\delta P$ and $\delta E$ are known, respectively, by the somewhat confusing terms pseudo-momentum and pseudo-energy. As is evident from Eqs.\,\eqref{eq:P1}--\eqref{eq:E1}, they are simply the momentum and energy contributions of the waves to the system. Since $\ol P$ and $\ol E$ are constant, $\delta P$ and $\delta E$ are also conserved (in Appendix \ref{sec:App} we explicitly show that $\delta E$ in the shallow water limit is equivalent to the energy density integral in \citet[Eqs.~(67--68)]{schutzhold2002gravity}). Note that $E'$ -- the positive definite wave eddy energy -- is only one of the contributions by the surface waves to the total change in the energy (as will be clarified further in the next section).
Hence, neither the pseudo-momentum nor the pseudo-energy is sign definite; negative pseudo-energy implies that the addition of linear waves to the base flow reduces the energy of the system below its mean value $\ol E$, whereas positive pseudo-energy increases the energy above its mean value. } \section{Pairs of zero-sum Pseudo-energy wave packets} \label{sec:3} { The essential idea in this analogy is that confined surface gravity wave packets represent virtual particles. Therefore we aim to choose superposition pairs of wave packets with equal and opposite values of pseudo-energy $\delta E$ in a way that the sign of their group velocity (in the rest frame) will be equal to the sign of their pseudo-energy. When this is achieved, the wave packet with the positive pseudo-energy manages to overcome the leftward counter--current $-{\ol U}$ and escapes rightward (from the BH horizon into the outer space), whereas the negative pseudo-energy wave packet is drifted leftward with the base flow (into the BH). Consequently, the energy in the left region (inside the BH) is reduced on average and becomes ${\ol E} -|\delta E|$. Eventually, when the leftward wave packet dissipates, it is expected to reduce the mean energy of the BH, so that the new mean energy ${\ol E}_{new} \approx {\ol E} -|\delta E|$.} {Next we wish to suggest how to choose excited pairs of oppositely signed pseudo-energy wave packets based on their physical properties. We first note that for surface waves it can be shown, after some algebra, that the wave eddy energy satisfies: \begin{equation} E' = {1 \over 2} \rho g L a^2 = {\hat c}_p\, \delta P\,, \label{eq:E'} \end{equation} implying that ${\hat c}_p$ and $\delta P$ are of the same sign. This sign agreement can be understood from Fig. \ref{fig:2}. The mechanism of surface wave propagation is such that horizontal convergence (divergence) results in upward (downward) motion that translates the vertical height anomaly $\eta'$. Hence, for rightward or positive propagation, ${\hat c}_p > 0$ (Fig. \ref{fig:2}(a)), and $u'$ is in phase with $\eta'$. Therefore the vertical integration of positive $u'$ from the bottom to the wave crests exceeds the vertical integration of negative $u'$ from the bottom to the wave troughs, and consequently $\delta P$ is positive, in agreement with Eq.\,\eqref{eq:P1}. By the same argument it follows that $\delta P$ is negative when ${\hat c}_p$ is negative (Fig. \ref{fig:2}(b)). Equations \eqref{eq:om1}, \eqref{eq:E1}, and \eqref{eq:E'} then imply the following relations: \begin{equation} \delta E = ({\hat c}_p - \ol U)\delta P = c_p \delta P = \left (1 - {\ol U \over {\hat c}_p} \right ) E' \, . \label{eq:PE_def} \end{equation}} \begin{figure} \centering \includegraphics[width=\textwidth]{fig2_new.png} \caption{Schematic description of the fact that (a) rightward propagating surface waves have a positive pseudo-momentum, while (b) leftward propagating surface waves have a negative pseudo-momentum.} \label{fig:2} \end{figure} Consider then two waves with different wavenumbers $k^+$ and $k^-$ (both defined positive), where both waves have a positive ${\hat c}_p$ (and hence a positive $\delta P$). Thus both waves are ``trying'' to propagate to the right (in the positive $x$ direction) against the mean current $-\ol U$, see Fig. \ref{fig:1}. If we assume a situation such that \begin{equation*} {\hat c}_p^- < \ol U < {\hat c}_p^+, \end{equation*} then Eq. \eqref{eq:PE_def} implies that $\delta E^+ > 0$ while $\delta E^- < 0$.
In other words, the wave that manages to counter--propagate against the current with a positive phase speed in the rest frame ($c_p^+ > 0$) carries a positive pseudo-energy, whereas the wave whose intrinsic phase speed is not large enough to match the opposed current ($c_p^- < 0$) carries a negative pseudo-energy and consequently propagates to the left in the rest frame (even though the pseudo-momentum of both waves is positive), as shown in Fig. \ref{fig:1}. This statement can be written in terms of frequency and wave-action. Defining the wave-action as $\delta A \equiv \delta P /k$, we obtain from Eq. \eqref{eq:PE_def} that $\delta E = \omega \delta A$. Considering $\delta A$ as an analog of $\hbar$, for positive $\delta A$ the sign of the pseudo-energy is determined by the sign of its frequency $\omega$. This suggests that we can set a perturbation of \emph{zero} pseudo-energy composed of two waves ($\delta E = \delta E^+ + \delta E^- = 0$) with the same positive value of wave-action $\delta A^+ = \delta A^- > 0$. These in combination yield: \begin{subequations} \begin{align} \Omega^+ = - \Omega^- > 0 &\implies \hat{\Omega}^++\hat{\Omega}^-= \alpha^+ + \alpha^-, \label{eq:omega_nondim}\\ \bigg(\frac{a^-}{a^+}\bigg)^2 ={ {\hat \Omega}^- \over {\hat \Omega}^+ } & = \sqrt{\alpha^-\tanh{\alpha^-} \over \alpha^+\tanh{\alpha^+}}. \label{eq:ampratio_nondim} \end{align} \end{subequations} Here we have used the following non-dimensionalizations: $\alpha^{+(-)} \equiv k^{+(-)}H $, $\hat{\Omega}^{+(-)} \equiv \hat{\omega}^{+(-)} H/\ol U$ and $\Omega^{+(-)} \equiv \omega^{+(-)} H/\ol U$. Additionally, Eq. \eqref{eq:om1} has been used, from which we obtain $\Omega^{+(-)}=\hat{\Omega}^{+(-)}-\alpha^{+(-)}$, where $\hat{\Omega}^{+(-)}=Fr^{-1}\sqrt{\alpha^{+(-)}\tanh{\alpha^{+(-)}}}$, in which the Froude number $Fr \equiv \ol U/\sqrt{gH}$. According to Eq. \eqref{eq:omega_nondim}, the waves have equal and opposite frequencies. Hence, in the rest frame, the ``$+$'' wave will propagate to the right against the mean current whereas the ``$-$'' wave will be drifted to the left, following the scenario depicted in Fig. \ref{fig:1}. Furthermore, Eq.~\eqref{eq:ampratio_nondim} provides a direct relation for the amplitude ratio of the ``$+$'' and ``$-$'' waves. An interesting point to notice from Eq.~\eqref{eq:ampratio_nondim} is that the condition of zero pseudo-energy superposition does \emph{not} imply that the free surface should be initially flat. { While the pseudo-momentum of a monochromatic sinusoidal wave is perfectly well defined, its position is obviously not. Therefore, in order to generate an initial zero pseudo-energy perturbation whose position and momentum are both reasonably well defined, we should construct pairs of narrow wave packets rather than pairs of monochromatic waves. Hence, the positive (negative) pseudo-energy wave packet should propagate with a positive (negative) group speed $c_g$ (or in non-dimensional terms, ${C_g}^{+(-)} \equiv c_g^{+(-)}/\ol U$), satisfying: \begin{equation} C_g^{+(-)}\equiv \frac{\partial \Omega^{+(-)}}{\partial \alpha^{+(-)}}=-1+\frac{1}{2Fr}\sqrt{\frac{1 }{\alpha^{+(-)}}\tanh \alpha^{+(-)}}\Bigg[1+\frac{2\alpha^{+(-)}}{\sinh 2\alpha^{+(-)}} \Bigg]. \label{eq:c-g} \end{equation} Furthermore, the centroid group and phase speeds of each wave packet should possess the same sign.
This is because the sign of $c_p$ (or in non-dimensional terms, ${C_p}^{+(-)} \equiv c_p^{+(-)}/\ol U$) determines the sign of $\delta E$ whereas the sign of $c_g$ determines the wave packet's direction of propagation.} \begin{figure} \centering \includegraphics[width=\textwidth]{figure3.png} \caption{Dispersion curves: (a) $\Omega$ versus $\alpha$, and (b) $C_p$ versus $\alpha$. The blue, yellow and green curves respectively denote $Fr=0.4$, $0.6$ and $0.8$. The short red lines in (a) denote the slope of the blue curve, which equals the group speed. The ``\,*\,''s of the same color denote a pair-wave; the one above the zero-line has $\delta A>0$ and $\delta E>0$, while that below the zero-line has $\delta A>0$ and $\delta E<0$. } \label{fig:3} \end{figure} { Consider the positive branch of $\Omega$ and address only sub-critical flows, i.e. $Fr <1$, in order to enable counter-propagation of the waves. The variations of $\Omega$ and $C_p$ with $\alpha$ for different $Fr$ values are respectively plotted in Figs.\, \ref{fig:3}(a) and \ref{fig:3}(b). Two wave packets with equal wave-action, and equal and opposite pseudo-energy, constitute a ``pair-wave'' (denoted by same-colored ``\,*\,''s), and therefore satisfy Eqs.\, \eqref{eq:omega_nondim}--\eqref{eq:ampratio_nondim}. The ``$+$'' (``$-$'') wave packet's frequency, phase and group speeds are all positive (negative), and hence it escapes into space (falls into the BH), in analogy with Hawking radiation. Notice that for sub-critical flows, this condition fails in the shallow-water limit (since the pseudo-energy is always positive); see Appendix \ref{sec:App}.} {Figure \ref{fig:4} shows a pair of wave-packets (both having positive wave-action but equal and opposite pseudo-energy) in a counter-current flow over a flat bathymetry. This configuration is numerically simulated using an in-house High-order Spectral code, detailed in \citet{raj_guha_2019}. As already mentioned, a sum-zero pseudo-energy does not necessarily imply that the superposition of the wave packet pair would render the free surface flat, as clearly shown in Fig.\,\ref{fig:4}(a), which is the configuration at $t=0$. The background flow is sub-critical with $Fr=0.7$. The ``$+$'' wave packet (centroid wavenumber $\alpha^+=0.8$) escapes as Hawking radiation while the ``$-$'' wave packet (centroid wavenumber $\alpha^-=2.47$) falls inside the BH; the wave pair has the same magnitude of centroid frequency as per Eq.\,\eqref{eq:omega_nondim}. Here the definition of the event-horizon is arbitrary; however, it must be located to the left of the superposed wave packets at $t=0$. The fact that $\alpha^->\alpha^+$ is evident from the dispersion curve in Fig.\,\ref{fig:3}(a). A consequence of $\alpha^->\alpha^+$ is that $a^->a^+$ as per Eq.\,\eqref{eq:ampratio_nondim}, which is also clear from Fig.\,\ref{fig:4}(b). } \begin{figure} \centering \includegraphics[width=1.0\textwidth]{snapshots.png} \caption{Simulation of a sum-zero pseudo-energy wave packet pair for $Fr=0.7$. 
(a) Configuration at $t=0$, and (b) configuration at a later time when the ``$+$'' wave packet escapes the BH while the ``$-$'' wave packet falls inside it.} \label{fig:4} \end{figure} \section{Parallels with the ratio of Bogoliubov coefficients and low-frequency mode amplification} { The study of classical and quantum fields around BHs reveals that a wave pair created with a temporal frequency $\Omega$ satisfies \cite{hawking1974black,schutzhold2002gravity}: \begin{equation} \bigg(\frac{\beta^-}{\beta^+}\bigg)^2 = \exp{\bigg(-\frac{\Omega}{T}\bigg)}, \label{eq:Bog} \end{equation} where $\beta^{+(-)}$ are referred to as the positive (negative) norm amplitudes (also known as the Bogoliubov coefficients), and $T$ denotes an effective temperature proportional to the surface gravity of a BH. According to Hawking's prediction $(\beta^-)^2= [\exp{(\Omega/T)}-1]^{-1}$, which implies divergence as $\Omega \rightarrow 0$ since in this limit, $(\beta^-)^2 \approx T/\Omega$.} { In analog gravity experiments with surface waves in a counter-current flow over a localized obstacle, parallels between Eq.~\eqref{eq:Bog} and scattering coefficients were first established in \citet{weinfurtner2011measurement}, and then in subsequent studies, e.g.\, see Refs.\, \cite{euve2016observation,robertson2016scattering}. Although we have \emph{not} solved a scattering problem here, it is interesting to see how the ratio of a conserved norm compares with Eq. \eqref{eq:Bog}. The scattering coefficients in analog-gravity experiments correspond to the wave-action of the ``$+$'' and ``$-$'' waves \cite{weinfurtner2011measurement}, which in our case are equal by construction (i.e. $\delta A^+=\delta A^-$). Hence the $\Omega \rightarrow 0$ limit of Eq. \eqref{eq:Bog} is always satisfied. Furthermore, noting that \begin{equation} \delta A^{+(-)}=\frac{\rho g L}{2}\,\frac{ \{a^{+(-)}\}^2 }{{\omega}^{+(-)}+k^{+(-)}\overline{U}}\,\,\,, \label{eq:wa} \end{equation} we readily find that $\delta A^+ \rightarrow \infty$ when $\hat{\omega}^+ \rightarrow 0$, leading to both $k^+ \rightarrow 0$ and ${\omega}^+ \rightarrow 0$ (c.f. Fig.\, \ref{fig:3}(a)). Hence by construction $\delta A^- \rightarrow \infty$; however, in this case the denominator in Eq.\,\eqref{eq:wa} does not vanish, rather $a^- \rightarrow \infty$. This fact can also be clearly observed from Eq.\,\eqref{eq:ampratio_nondim}. } { In summary, the aspect of low-frequency mode amplification in Hawking's prediction is satisfied by this minimal model. } \section{Discussion} \label{sec:4} The aim of this paper is to characterize the properties of zero-sum energy pair wave packets in the hydrodynamic analogy of Hawking radiation. First we clarified the somewhat non-intuitive physical meaning of positive and negative energy norms (pseudo-energy), how they are related to the wave propagation mechanism, and how the general energy norm converges to the one suggested by \citet{schutzhold2002gravity} in the shallow water limit. Next we considered a simple setup consisting of a constant sub-critical counter-current flow over a flat bathymetry; this setup was enough to demonstrate the analog phenomena where positive (negative) energy wave packets escape from (are drifted into) the black hole. The combined requirements of a wave packet pair with equal (and positive in our case) wave action, and equal and oppositely signed pseudo-energy, determine their centroid wavenumbers as well as their surface elevation amplitude. 
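These pair conditions are easy to evaluate numerically. The following minimal Python sketch (our own illustrative code, not the in-house spectral code) solves Eq.~\eqref{eq:omega_nondim} for the ``$-$'' partner of a given ``$+$'' centroid wavenumber and evaluates the amplitude ratio of Eq.~\eqref{eq:ampratio_nondim}; for $Fr=0.7$ and $\alpha^+=0.8$ it reproduces $\alpha^-\simeq 2.47$, assuming $\alpha^+$ lies on the positive-$\Omega$ branch.
\begin{verbatim}
# Minimal sketch (illustrative only): given Fr and alpha_plus, find the
# zero pseudo-energy partner alpha_minus from Omega(alpha_m) = -Omega(alpha_p),
# then the amplitude ratio a_m/a_p from Eq. (ampratio_nondim).
import numpy as np
from scipy.optimize import brentq

def Omega(alpha, Fr):
    # rest-frame frequency: Omega = Fr^{-1} sqrt(alpha tanh alpha) - alpha
    return np.sqrt(alpha * np.tanh(alpha)) / Fr - alpha

Fr, alpha_p = 0.7, 0.8
target = -Omega(alpha_p, Fr)
# bracket: Omega decreases without bound for large alpha
alpha_m = brentq(lambda a: Omega(a, Fr) - target, alpha_p, 50.0)

# (a_m/a_p)^2 = sqrt(alpha_m tanh alpha_m / (alpha_p tanh alpha_p))
amp_ratio = (alpha_m * np.tanh(alpha_m)
             / (alpha_p * np.tanh(alpha_p))) ** 0.25

print(alpha_m, amp_ratio)   # ~2.47 and a_m/a_p ~ 1.46 > 1
\end{verbatim}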
While forming such pairs of wave packets in the laboratory might not be a simple task, it is straightforward to numerically simulate stochastic generation of such zero-sum energy pairs, mimicking near-horizon vacuum fluctuations. The nonlinear effects of wave dissipation and wave-mean flow interaction, which feed back into the counter-current and shift the horizon position, are under ongoing numerical investigation and will be published in a follow-up paper.
\section{Introduction} In the design of a NEO survey there is a possible trade-off between covering less sky down to a deeper (fainter) limiting magnitude or more sky down to a shallower one, as described by \citet{T11}. In the present paper we denote the first strategy as ``Deep Survey'', and the second one as ``Wide Survey''. In the literature, the basic idea of a Deep Survey is in \citet{M92}, while the idea of a Wide Survey is described in \citet{H95}. The choice between the two observing strategies is driven by the goals of the survey \citep{S02}. According to \citet{M92}, Deep Surveys such as the present American ones are more effective in reaching the completeness of the NEO population as they scan larger volumes of the Near Earth space for a fixed absolute magnitude. As a matter of fact, \citet{Ma11} claim that more than 90\% of objects larger than 1 km (first Spaceguard goal) have been discovered so far, predominantly by US surveys. Nevertheless, these surveys are not optimized for detecting imminent, relatively small impactors from 10 to 160 m in diameter, which may still cause significant damage and losses on the ground. As shown by \citet[Fig. 4]{BR02}, the energy released by such impactors ranges from 50 to $10^5$ kT, and \citet{M92} already discusses the substantial local damage that a Tunguska-sized impactor can inflict on a populated area. The reason why deep surveys are not suited for imminent impactors is that their observing strategy is to revisit the same area of the sky only after a few days, and to take only a minimum number of images. This impairs the successful identification of objects that are going to impact within a few days. For instance, \citet{V09} prove that Pan-STARRS would not have been able to collect enough detections to compute an orbit for $2008TC_3$, a $\sim$ 5 m asteroid which impacted the Earth in October 2008. To deal with imminent impacts, a more effective strategy is the Wide Survey, as demonstrated by \citet{H95} and \citet{T11}. This kind of survey provides a more responsive NEO impact warning system and thus could nicely complement the current NEO discovery and cataloging strategy of the US programs. In the present paper we measure the performance of an assumed Wide Survey design through a simulation of a 100 yr time span of operations. We deal with small impactors, i.e., those with absolute magnitude between 22 and 28, and measure the time it takes to reach a 50\% threshold for the fraction of objects discovered with a warning time sufficient to undertake proper mitigation actions. \section{Blind time} An impactor can arrive from almost anywhere in the sky. In particular, if the impactor comes from the direction of the Sun, it will most likely not be detectable in the last days before its fall. This implies that such an object should be discovered at a previous apparition, if at all possible. Thus, we are led to the possibility that, after the beginning of the operations of a survey, there is no chance for the potential impactor to be discovered before its fall; i.e., the survey is ``blind'' for this specific impactor. Such a situation depends on several factors, including the orbit and absolute magnitude of the impactor, and the parameters characterizing the survey (limiting magnitude, sky coverage, cadence, etc.). Given an impactor, we define the ``lead time'' as the interval of time between the first orbit determination and the time of impact. 
According to the size of the impactor, the lead time should be large enough to undertake the required mitigation actions, i.e., the larger the impactor, the more time is necessary for mitigation. In the absence of specific information about the albedo and the shape of an imminent impactor, the size can be inferred from its absolute magnitude $H$. Thus, we define the minimum required lead time as a function of $H$, using the following constraints: \begin{itemize} \item a minimum lead time of 30 days for objects of $H=22$; \item a minimum lead time of about one week for Tunguska-sized impactors ($H=24.5$). \end{itemize} We adopt the following function, which fulfills the above constraints: \begin{equation}\label{eq:lead_time} t(\text{d})= c_1 e^{-c_2(H-22)}\ ,\ c_1=30\ \text{d}\ ,\ c_2=0.5\ . \end{equation} For instance, a Tunguska-sized impactor with $H=24.5$ yields $t=30\,e^{-1.25}\simeq 8.6$ d, i.e., about one week. Figure~\ref{fig:lead} shows $t$ as a function of $H$. It is important to point out that this simple law is tailored to the population used in our simulations, with $H$ ranging between 22 and 28 (see Sec.~\ref{s:impactors}). For these objects, which are the target of the Wide Survey described in this paper, the mitigation actions to be undertaken are essentially orbit improvement and evacuation. Dealing with bigger objects requires a different approach and mitigation strategy, thus Eq.~(\ref{eq:lead_time}) should be replaced with a different model, possibly involving a larger number of parameters. \begin{figure}[t] \begin{center} \includegraphics[width=12cm]{lead} \end{center} \caption{Minimum lead time as a function of the absolute magnitude $H$.} \label{fig:lead} \end{figure} Given an NEO population, we define the ``blind time'' for a given survey and a given absolute magnitude $\bar H$ as the time between the start of operation of the survey and the moment at which 50\% of the impactors with magnitude $\bar H$ have a lead time larger than the minimum threshold defined by Eq.~(\ref{eq:lead_time}). The blind time can be used as an indicator of the performance of a given survey. As a metric, the blind time is a variant of the time required for a survey to discover 90\% of a defined population. The usefulness of this definition is that, when dealing with small but numerous NEOs, the time scale for a 90\% completion is very long and uncertain, due to the poor modeling of the small object population. \section{The simulation} Hereafter we describe our assumptions on the optical network, on the impactor population, and on the orbit determination process used in the simulation. \subsection{Optical network} For the optical sensors we assume the use of the innovative fly-eye telescope design, having the following main characteristics: \begin{itemize} \item an equivalent aperture of 1 m; \item a FoV of 45 deg$^2$ ($6.66^\circ \times 6.66^\circ$); \item high efficiency CCDs (80-90\%) with very fast read-out times ($\simeq 2$s) and very good cosmetics ($\simeq 99\%$); \item a fill-factor $\simeq 1$, i.e., the ratio between the effectively detected area and the full FoV; \item a minimum elevation of 15$^\circ$ above the horizon. \end{itemize} The above assumptions on the sensor hardware require a significant effort in both technological development and resources. The concept design of the assumed telescope is described in \citet{C11}. However, we are aware that these assumptions need to be validated by further studies when the telescope is actually available, at least in a prototype phase. A discussion on this is beyond the scope of this paper. 
For the telescope network we assume: \begin{itemize} \item One equivalent dedicated survey telescope in the northern and one in the southern hemisphere. \item The northern telescope covers the northern hemisphere of the celestial sphere, while the southern telescope covers the southern hemisphere. \item One dedicated follow-up telescope in the northern and one in the southern hemisphere, typically 30$^\circ$ West of the survey telescopes. \item The images are processed locally in real time, including the astrometric reduction, and the data are made available to the scientific community in less than two hours. Therefore, the dedicated follow-up telescopes can be triggered to follow the newly discovered objects. \end{itemize} With these assumptions each equivalent telescope can take about 766 images during an average 10 hour night ($\simeq 36000$ s of dark time at $\simeq 47$ s per image, i.e., exposure plus read-out). This corresponds to a total of about $34450$ deg$^2$, which is equivalent to $17225$ deg$^2$ of the celestial sphere taken twice per night. For the observing strategy we assume: \begin{itemize} \item Observations that cover $36400$ deg$^2$ ($\simeq 88\%$ of the celestial sphere), corresponding to all the visible sky except the regions with solar elongation less than $40^\circ$. \item The regions of the sky within $30^\circ$ of the Moon or within $15^\circ$ of the galactic plane are not covered by the telescopes due to the increase of the sky background. Therefore, the effective visible sky ranges between 22987 deg$^2\simeq 56\%$ of the celestial sphere (when the forbidden regions around the Sun, the Moon and the galactic plane do not overlap) and 34348 deg$^2\simeq 83\%$ (when the intersection between the forbidden regions is maximized). On average, each telescope covers between $11500$ and $17200$ deg$^2$. \item A limiting magnitude $V_{lim}=21.5$, corresponding to $\simeq 45$ s of exposure time, for the survey mode, and $V_{lim}=23$ for the follow-up mode. \item Coverage of the visible sky at least two times per night. \end{itemize} In a real system the number of telescopes has to be increased and could be between 5 and 6. Indeed, to deal with cloud coverage and meteorological correlations we need a minimum of two survey telescopes per hemisphere widely spread in longitude. Furthermore, to increase the detection efficiency, a higher number of detections may be necessary, and this too can be achieved with a higher number of survey telescopes. \subsection{Impactor population} \label{s:impactors} In our simulation we use the population of 4950 synthetic impactors described by \citet{C04}, which impact within a time frame of 100 yr starting from July 2009. This impactor population is selected within the population model by \citet{B02}. Figure~\ref{fig:aei} shows the distribution of semimajor axis, eccentricity and inclination. The majority (68\%) of the objects have a perihelion between 0.8 and 1 AU. \begin{figure}[h] \centerline{\includegraphics[height=5.5cm]{a_e} \includegraphics[height=5.5cm]{i}} \caption{Left: scatter plot in the $(a,e)$ plane of the impactor population by \protect{\citet{C04}}. The two solid lines enclose the region of Earth crossing orbits. Right: the distribution of inclinations of the same population.} \label{fig:aei} \end{figure} We assign a fixed value of the absolute magnitude $H$ to all the asteroids and we repeat the simulation for integer values of $H$ ranging between 22 and 28, roughly corresponding to diameters between 160 and 10 m. 
We choose this simulation strategy to measure the performance of the proposed network as a function of the size range of the asteroids. \begin{figure}[t] \begin{center} \includegraphics[width=12cm]{radiants} \end{center} \caption{The radiant distribution of the simulated impactors of \citet{C04}. The radiants are shown in an equal area projection of the sky centered on the opposition; the angular coordinates are ecliptic longitude minus the longitude of the Sun, and ecliptic latitude. The bold lines refer to 40$^\circ$ of solar elongation.} \label{sky} \end{figure} To obtain the sky distribution of the impactors shortly before the event, it is useful to plot their radiants (see Appendix~\ref{s:rad}). Figure~\ref{sky} shows the distribution of the radiants for 4\,465 Earth impactors in such a representation\footnote{The analytical procedure to compute the radiants assumes a circular orbit for the Earth. 485 among the objects of \protect{\citet{C04}} have either $a(1-e)>1$ or $a(1+e)<1$, so that they are excluded from the analytical computation.}. The sky distribution of impactor radiants is far from uniform and is a consequence of the $a$-$e$-$i$ distribution of the impactor population. The fraction of radiants with a solar elongation larger than $40^\circ$ ---the minimum elongation from the Sun at which the assumed survey can observe--- is $80.1\%$ of the whole sample. It is worth noticing that, for the impactors with radiant within 40$^\circ$ of the Sun, a detection at an apparition before the one corresponding to the impact is the only chance to have a long lead time. \subsection{Methodology} We split the impactor population into 10 bins according to the impact epoch with respect to the beginning of the simulation. For example, the first bin contains objects impacting within 10 yr, the second one objects impacting between 10 and 20 yr, and so on. Such a binning allows us to measure the performance as a function of the time from the start of the survey. For each object we generate a list of observations according to the assumed configuration, the performance of the optical network, and the visibility constraints. The simulation provides one tracklet (see below) per night for survey telescopes and up to two tracklets per night for follow-up telescopes. The follow-up observations are triggered once the object has been detected by the survey telescope(s), with a minimum delay time of two hours. For each simulated observation Gaussian noise with a standard deviation of 0.3 arcsec is added to the astrometric position. Similarly a Gaussian noise is added to the magnitude estimate for the detection, but in this case the noise is split into a correlated component (for the light curve effect) of 0.2 magnitudes and a random component, again of 0.2 magnitudes. A tracklet is the atomic unit of information for a moving asteroid, consisting of a small number (2-5) of detections in different images of the same field, taken at moderately short intervals of time (15 min to 2 hours). A tracklet normally provides an amount of information which can be described by 4 scalar quantities (two angles and two angular rates); therefore such detections do not imply discovery \citep{M07}. We consider as discovered a moving object belonging to the solar system only when enough information has been accumulated to establish its dynamical properties, that is by means of a heliocentric orbit, for which at least 6 scalar quantities are required. 
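As a minimal illustration of the per-detection noise model described above, the following Python sketch (our own naming and structure, not the actual simulation code) generates one noisy tracklet; the correlated light-curve component is drawn once per tracklet and shared by its detections, while the random component is drawn per detection. The cosine-declination factor on the right-ascension noise is our assumption of an on-sky error of 0.3 arcsec.
\begin{verbatim}
# Minimal sketch of the assumed per-detection noise model
# (illustrative only; our own naming, not the simulation code).
import numpy as np

rng = np.random.default_rng(0)

def noisy_detection(ra, dec, mag, lightcurve_offset):
    # ra, dec in degrees; lightcurve_offset is the correlated
    # 0.2 mag component shared by the whole tracklet
    sigma = 0.3 / 3600.0                       # 0.3 arcsec in degrees
    ra_obs = ra + rng.normal(0.0, sigma) / np.cos(np.radians(dec))
    dec_obs = dec + rng.normal(0.0, sigma)
    mag_obs = mag + lightcurve_offset + rng.normal(0.0, 0.2)
    return ra_obs, dec_obs, mag_obs

# one tracklet = 2-5 detections sharing the correlated offset
lc = rng.normal(0.0, 0.2)
tracklet = [noisy_detection(10.0, 5.0, 21.0, lc) for _ in range(3)]
\end{verbatim}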
The orbit determination process starts by selecting n-tuples of tracklets which could belong to the same object. Then, for each of them a preliminary orbit compatible with all the tracklets is computed, using the methods described in \citet{M04} and \citet[Chapters 7 and 8]{M10}. Thereafter, the preliminary orbit is used as a first guess in a differential correction procedure, which usually converges to a least squares fit orbit. If the orbital fit satisfies suitable quality control conditions, it can be considered a real object. To perform the simulation we set up a data center architecture, ingesting observational data day by day. Each time new observations become available we update the previously known orbits and compute the new ones, corresponding to newly discovered objects. It is important to point out that a more realistic simulation should take into account the presence of Main Belt background asteroids, which increases the computational load and the rate of occurrence of false identifications. However, observations of known Main Belt asteroids should be filtered before looking for new objects, as discussed in \citet{M12}. For completeness, our procedure could include the risk assessment for each simulated impactor including the explicit computation of an impact probability. Such a procedure would require a very large amount of CPU time and was performed in a previous study \citep{F11} using a subsample of the present population over the first 20 yr of survey operations. That paper shows that, in more than $99.5\%$ of the cases, the availability of an orbit involving at least 4 tracklets for a newly discovered potential impactor is accompanied by the successful computation of an impact probability by using the CLOMON2 software robot \citep{M05}. Such a percentage is so high that running the entire impact monitoring chain would not be cost effective in the present case. Thus, we stipulate that an impactor is considered discovered when an orbit from at least 4 tracklets is computed. \section{Results and Discussion} The main outcome of the simulation is shown in Fig.~\ref{efficiency_diff}. \begin{figure}[h] \centerline{\includegraphics[width=7cm]{disc_eff_diff} \includegraphics[width=7cm]{lead_eff_diff}} \caption{Left: differential discovery completeness as a function of the impact date and absolute magnitude. Right: same as left panel, considering a discovery successful only if the lead time is greater than the minimum threshold defined by Eq.~(\protect{\ref{eq:lead_time}}).} \label{efficiency_diff} \end{figure} The left panel shows the differential discovery completeness as a function of time from the survey beginning and absolute magnitude. For each 10 yr bin, the differential discovery completeness is defined as the ratio between the number of impactors discovered and those impacting in that time frame. For example, the differential discovery completeness for $H=24$ and the bin from 20 to 30 yr is given by the fraction of objects discovered before the impact among those impacting between 20 and 30 yr with respect to the beginning of the survey operations. For impactors with $H=23$ a 70\% discovery completeness is achieved in the first decade, while the 90\% threshold is reached after two more decades. For impactors with $H=25$ a 60\% discovery completeness is achieved after two decades, while the 90\% threshold is almost reached at the end of the simulation. 
For $H=28$ the discovery completeness starts slightly below 40\%, and increases slowly during the next decades as expected, due to the small size of the objects. Notice that the completeness is greater than $50\%$ already from the start for $H=26$. The right panel shows the differential completeness for a lead time given by Eq.~(\protect{\ref{eq:lead_time}}). This means that a discovery is considered successful only if it takes place sufficiently early, allowing the mitigation actions appropriate for the size of the impactor. For impactors with $H=23$ a $\simeq60\%$ completeness is achieved in the first decade, while the 90\% threshold is reached after 30 yr. For $H=25$ a $\simeq 50\%$ discovery completeness is achieved in the second decade, while the 90\% threshold is not yet reached at the end of the simulation. For smaller asteroids the discovery completeness is larger than 25\% from the beginning of the operations and exceeds the 50\% threshold after more than 70 yr. We can sum up the results in terms of blind time, that is, the intersection between the 50\% completeness and the $H=\mathrm{const}$ curves in Fig.~\ref{efficiency_diff}. The survey simulated in this paper would have a blind time of about 20 yr for imminent impactors with $H=25$. Note that a Tunguska-sized ($H\simeq24.5$) object impacting within 10 yr from the start of the survey would have a $>60\%$ probability of being discovered and would have a lead time larger than 1 week with a probability $\simeq45\%$. For smaller impactors the blind time increases up to $\sim$60 yr for $H=28$. \begin{figure}[ht] \centerline{\includegraphics[width=7cm]{wt_22} \includegraphics[width=7cm]{wt_24}} \centerline{\includegraphics[width=7cm]{wt_26} \includegraphics[width=7cm]{wt_28}} \caption{Histogram of the lead times for different values of $H=$22 (top left), 24 (top right), 26 (bottom left), 28 (bottom right). The vertical lines denote, from left to right, 1 day, 1 week, 1 month, and 1 yr. Note that the scales are different.} \label{wt_det} \end{figure} Figure~\ref{wt_det} shows the distribution of the lead time for different values of $H$. As expected, the lead time strongly depends on the value of the absolute magnitude. A clear trimodality is visible in all the panels: either the object impacts without being discovered (left bar), or is discovered during its last apparition (central peak), or at a previous apparition (right peak). Table~\ref{peaks} details the fractions of objects in each peak for a fixed value of $H$. Most of the impactors with $H=22$ are discovered during a previous apparition with respect to the impact. As $H$ increases there are more and more cases of objects either discovered during the last apparition or not discovered at all. \begin{figure}[h] \centerline{\includegraphics[width=7cm]{disc_eff_int} \includegraphics[width=7cm]{lead_eff_int}} \caption{Left: integral discovery completeness as a function of the impact date and absolute magnitude. Right: same as left panel, considering a discovery successful only if the lead time is greater than the minimum threshold defined by Eq.~(\protect{\ref{eq:lead_time}}).} \label{efficiency_int} \end{figure} To conclude this discussion we report the integral completeness achieved by our simulated survey. The integral completeness is computed by the weighted sum \[ \text{Comp}(H\leq \bar H)=\sum_{22\leq H_i\leq\bar H}w_{H_i} \text{Comp}(H_i)\ \big{/} \sum_{22\leq H_i\leq\bar H}w_{H_i} \] where $\text{Comp}(H)$ is the completeness for a fixed absolute magnitude $H$. 
To get a more realistic result we take into account a power-law distribution for the number of asteroids at a given absolute magnitude. Consistent with \citet{B02} and \citet{S04}, the completenesses computed for a fixed absolute magnitude $H$ are given the weights: \[ w_H=10^{0.37(H-28)}\,. \] The results are summarized in Fig.~\ref{efficiency_int}. \begin{table}[t] \begin{center} \begin{tabular}{r|r|r|r} $H$ & undiscovered & last apparition & previous apparition\\ \hline 22 & 4.2\% & 8.9\% & 86.9\%\\ 23 & 8.1\% & 13.6\% & 78.4\%\\ 24 & 15.0\% & 18.5\% & 66.5\%\\ 25 & 22.3\% & 25.8\% & 51.9\%\\ 26 & 31.4\% & 31.0\% & 37.6\%\\ 27 & 41.3\% & 33.0\% & 25.7\%\\ 28 & 52.1\% & 31.1\% & 16.8\% \end{tabular} \end{center} \caption{Percentages of impactors not discovered (2nd column), discovered during the last apparition (3rd column) or discovered at a previous apparition (4th column), as a function of $H$ (1st column).} \label{peaks} \end{table} \section{Conclusions} We simulated the operations, over a time span of 100 yr, of a Wide Survey capable of covering all the sky at solar elongation larger than $40^\circ$, down to apparent magnitude 21.5, with a nightly cadence. The survey includes follow-up operations with a limiting magnitude of 23.0. The goal of the simulation was to compute the ``blind time'' of the survey, i.e., the time between the start of survey operations and the moment at which 50\% of the impactors at a given magnitude are discovered and their orbits determined, with an advance large enough to allow undertaking the appropriate mitigation actions. In fact, our modeling allowed us to compute a realistic distribution of the lead time, i.e., the interval of time between the first orbit determination and the time of impact. The distribution shows a trimodality corresponding to 1) undiscovered objects, 2) objects discovered at the impact apparition, and 3) objects discovered during a previous apparition. The survey discussed in this paper can efficiently deal with Tunguska-sized impactors, for which the blind time is about 10 yr. This means that, already in the first ten years of survey operations, there would be a 50\% probability of discovering such an impactor at least one week before impact. The pre-impact discovery of Tunguska-sized impactors was not an original goal of past NEO surveys, but the case of $2008TC_3$ and the present paper show that it is coming within reach. In a future paper, we plan to compare the performance of the discussed Wide Survey with that of a state-of-the-art Deep Survey, thus allowing us to quantitatively measure the contribution of the Wide Survey to the current NEO search programs. \section*{Acknowledgements} This study was partly supported by ESA Contract n. 22929/09/ML/GLC and by the PRIN INAF ``Near Earth Objects''. The authors wish to thank A. Milani for useful discussions during the development of this work, S.~R. Chesley for providing us with the impactor population. We thank S.~R. Chesley and an anonymous referee for their constructive comments.
\section{Introduction} The study of integrable structures in planar perturbative $\mathcal{N}=4$ supersymmetric Yang--Mills theory following the works \cite{Lipatov:1997vu,Minahan:2002ve,Beisert:2003yb} has led to the discovery of an exciting integrable spin chain model. It displays several unusual and novel features with respect to the established integrable spin chains: First of all, the spin chain is perturbatively long-ranged \cite{Beisert:2003tq}. In other words, the Hamiltonian not only acts on nearest-neighbouring spins, but also on longer blocks of adjacent spins. The range is controlled by the perturbative order in a coupling constant $g\approx 0$. Moreover the chain is dynamic \cite{Beisert:2003ys}, that is, the Hamiltonian may dynamically change the number of spin sites of the chain. Finally, the Hamiltonian is an inseparable part of the symmetry algebra. Consequently, all the above features of the Hamiltonian apply to the symmetry generators as well. In addition it can be remarked that the spin module is non-compact and graded into bosons and fermions. Despite these complications, it appears that the Hamiltonian is completely integrable \cite{Lipatov:1997vu,Minahan:2002ve,Beisert:2003yb,Beisert:2003tq,Beisert:2003ys,Serban:2004jf}. Because it is homogeneous and acts locally, one can apply the asymptotic coordinate Bethe ansatz \cite{Sutherland:1978aa,Staudacher:2004tk}. The form of the asymptotic Bethe equations \cite{Beisert:2005fw} is fully constrained by symmetry considerations \cite{Beisert:2007ty}; merely one phase function remains undetermined. Imposing a further crossing symmetry \cite{Janik:2006dc,Arutyunov:2006iu} together with inspiration from the dual superstring theory on $AdS_5\times S^5$ \cite{Maldacena:1998re} and its integrable structure \cite{Bena:2003wd} one arrives at a viable proposal for the phase \cite{Beisert:2006ib,Beisert:2006ez} which has since passed several highly non-trivial tests \cite{Bern:2006ew,Benna:2006nd,Basso:2007wd,Roiban:2007dq}. Note well that the above mentioned asymptotic Bethe equations describe the spectrum only up to certain finite-size corrections, see \cite{Fiamberti:2007rj,Fiamberti:2008sh} and references therein, yet to be understood from the integrable model point of view. A conceivable path towards the exact finite-size spectrum is to fully understand the algebraic structure underlying the integrable spin chain model. One of the obstacles is posed by the dynamic effects, for which the conventional algebraic structures appear to be inapplicable. In this note we consider the prototypical dynamic sector of the $\mathcal{N}=4$ SYM spin chain with $\alg{su}(2|3)$ symmetry \cite{Beisert:2003ys}.% \footnote{The $\mathcal{N}=6$ superconformal Chern--Simons theory \cite{Aharony:2008ug} with $\alg{osp}(6|4,\mathbb{R})$ symmetry has an analogous $\alg{su}(2|3)$ sector \cite{Minahan:2008hf}. The results of \cite{Beisert:2003ys} and of this note are general and they also apply to this model with some minor modifications regarding, e.g.\ the coupling constant and the embedding.} We shall propose an undynamic reformulation where length fluctuations are absent for a large part of the algebra including the Hamiltonian. This is meant to facilitate an eventual algebraic treatment of the model. We will start with a review of the $\alg{su}(2|3)$ sector, then propose the undynamic reformulation and finally discuss the implications and potential pitfalls. 
\section{Dynamic Chain} Let us start by reviewing the (apparently) integrable $\alg{su}(2|3)$ dynamic spin chain constructed in \cite{Beisert:2003ys}. \subsection{Hilbert Space.} The spin at each site can be in three bosonic states $\state{\phi^a}$ with $a=1,2,3$, and two fermionic states $\state{\psi^\alpha}$ with $\alpha=1,2$. Thus the graded spin module $\mathcal{V}$ is spanned by the five states \[ \mdl{V}= \bigvspan{\phi^1,\phi^2,\phi^3\mathpunct{\big|}\psi^1,\psi^2}. \] The Hilbert space $\mdl{H}$ of the spin chain model is given by the direct sum of cyclic chain spaces $\mdl{H}_L$ of arbitrary length $L$ \[\label{eq:Hilbert} \mdl{H}=\bigoplus_{L=1}^\infty \mdl{H}_L,\qquad \mdl{H}_L=\bigeval{\mdl{V}^{\otimes L}}\indup{cyclic}. \] The space $\eval{\mdl{V}^{\otimes L}}\indup{cyclic}$ represents the subspace of $\mdl{V}^{\otimes L}$ on which the graded cyclic shift operator acts as the identity. The dynamic nature of the model consists in the fact that the Hamiltonian (as well as the other symmetry generators) acts as an endomorphism of $\mdl{H}$ and not of the individual $\mdl{H}_L$'s; in other words, the length of the spin chain is a dynamic quantity. Furthermore our spin chain is homogeneous, which entails the restriction to cyclic states: Homogeneous operators commute with the graded cyclic shift, whose spectrum $\exp(2\pi i\mathbb{Z}/L)$ crucially depends on the length. The only common eigenvalue on chains of $L$ and $L+1$ is $1$, and thus dynamic homogeneous models must be based on cyclic states. \subsection{Symmetry Algebra.} The symmetry of the dynamic chain is assumed to be $\alg{su}(2|3)$. This algebra is spanned by the $\alg{su}(3)$ generators $\gen{R}^a{}_b$ ($\gen{R}^a{}_a=0$), the $\alg{su}(2)$ generators $\gen{L}^a{}_b$ ($\gen{L}^a{}_a=0$), the fermionic generators $\gen{Q}^\alpha{}_b$ and $\gen{S}^a{}_\beta$ and finally the Hamiltonian $\gen{H}$. The Lie superalgebra is given by the canonical Lie brackets for $\alg{su}(3)$ and $\alg{su}(2)$ and the supercharges transform in (anti)fundamental representations, e.g.\ \[ \comm{\gen{R}^a{}_b}{\gen{Q}^\gamma{}_d}= -\delta^a_d\gen{Q}^\gamma{}_b +\sfrac{1}{3}\delta^a_b\gen{Q}^\gamma{}_d. \] The non-trivial brackets among the supercharges are given by \[ \acomm{\gen{Q}^\alpha{}_b}{\gen{S}^c{}_\delta}= \delta^\alpha_\delta\gen{R}^c{}_b +\delta^c_b\gen{L}^\alpha{}_\delta +\delta^\alpha_\delta\delta^c_b\gen{H}. \] Finally, the weights of the supercharges with respect to the Hamiltonian read \[\label{eq:QSeng} \comm{\gen{H}}{\gen{Q}^\alpha{}_b}=+\sfrac{1}{6} \gen{Q}^\alpha{}_b, \qquad \comm{\gen{H}}{\gen{S}^a{}_\beta}=-\sfrac{1}{6} \gen{S}^a{}_\beta. \] \subsection{Representation.} We want to construct a family of representations $\gen{J}(g)$ of $\alg{su}(2|3)$ on the Hilbert space $\mdl{H}$ parametrised by a coupling constant $g$. 
The coupling constant $g$ is assumed to be small and we shall treat the representation as a perturbation series around $g=0$ \[ \gen{J}(g)=\gen{J}_0+g \gen{J}_1+g^2 \gen{J}_2+\ldots \] At leading order the representation $\gen{J}_0$ is given by the standard tensor product of fundamental representations of $\alg{su}(2|3)$ \[ \begin{array}{rcl} (\gen{R}_0)^a{}_b\earel{=} \PTerm{a}{b}-\sfrac{1}{3}\delta^a_b\PTerm{c}{c}, \\[3pt] (\gen{L}_0)^\alpha{}_\beta\earel{=} \PTerm{\alpha}{\beta}-\sfrac{1}{2}\delta^\alpha_\beta\PTerm{\gamma}{\gamma}, \end{array}\quad \begin{array}{rcl} (\gen{Q}_0)^\alpha{}_b\earel{=} \PTerm{\alpha}{b}, \\[3pt] (\gen{S}_0)^a{}_\beta\earel{=} \PTerm{a}{\beta}, \end{array}\quad \gen{H}_0= \sfrac{1}{3}\PTerm{a}{a} +\sfrac{1}{2}\PTerm{\alpha}{\alpha}. \] The interaction symbols $\PTerm{\cdot}{\cdot}$ have the following meaning: For example, $\PTerm{\beta}{a}$ picks any boson $\phi^a$ from the chain and replaces it by a fermion $\psi^\beta$. Here Latin and Greek indices refer to bosons and fermions, respectively. A homogeneous sum over all sites with proper grading is implicit in this notation. The $\alg{su}(3)$ and $\alg{su}(2)$ representations are finite-dimensional and cannot be deformed continuously \[ \gen{R}^a{}_b(g)=(\gen{R}_0)^a{}_b,\qquad \gen{L}^\alpha{}_\beta(g)=(\gen{L}_0)^\alpha{}_\beta. \] The representation of supercharges is deformed at all orders in $g$; the first correction reads \[\label{eq:QS1} (\gen{Q}_1)^\alpha{}_b= \varepsilon^{\alpha\gamma}\varepsilon_{bde} \PTerm{de}{\gamma}, \qquad (\gen{S}_1)^a{}_\beta= \varepsilon^{acd}\varepsilon_{\beta\epsilon} \PTerm{\epsilon}{cd}. \] Symbols $\PTerm{\cdots}{\cdots}$ with more than two indices refer to more complex interactions. For example, $\PTerm{\epsilon}{cd}$ replaces a sequence of two bosons $\phi^c\phi^d$ by a single fermion $\psi^\epsilon$. In the model the range of interactions is bounded by the perturbative order: At order $g^n$ the interactions may consist of no more than $2+n$ spins (incoming plus outgoing), i.e.\ three in this case. In fact, this is the leading appearance of dynamic effects within the model. The restriction to cyclic states simplifies the specification of interaction symbols: In cyclic states only the sequence of spins matters but not their overall position along the chain. Thus there is no need to specify how the final spins ($\psi^\epsilon$) are aligned with respect to the initial spins ($\phi^c\phi^d$), e.g.\ left, right or centred. These first corrections to the supercharges preserve the algebra. The possibility of such corrections is in fact very remarkable and related to a compatibility of the representation theory of cyclic chains of length $L$ and $L+1$. \subsection{Hamiltonian.} The role of the Hamiltonian is somewhat special. It is a Cartan generator of $\alg{su}(2|3)$, but unlike the others its representation does receive corrections. Without loss of generality \cite{Beisert:2003ys} we may assume that \eqref{eq:QSeng} holds for $\gen{H}_0$ instead of $\gen{H}(g)$ \[ \comm{\gen{H}_0}{\gen{Q}^\alpha{}_b(g)}=+\sfrac{1}{6} \gen{Q}^\alpha{}_b(g), \qquad \comm{\gen{H}_0}{\gen{S}^a{}_\beta(g)}=-\sfrac{1}{6} \gen{S}^a{}_\beta(g). \] and consequently for $\delta\gen{H}(g)=\gen{H}(g)-\gen{H}_0$ \[ \comm{\delta\gen{H}(g)}{\gen{Q}^\alpha{}_b(g)}=0, \qquad \comm{\delta\gen{H}(g)}{\gen{S}^a{}_\beta(g)}=0. \] In other words, the quantum corrections to the Hamiltonian are invariant under the full representation of $\alg{su}(2|3)$. 
In particular, the leading correction to $\gen{H}(g)$ must be invariant under the undeformed $\alg{su}(2|3)$ representation. The simplest non-trivial such term is a graded permutation of two sites, which can first appear at order $g^2$. Together with a two-site identity operator the second order contribution reads \[\label{eq:H2} \gen{H}_2= \PTerm{ab}{ab} +\PTerm{\alpha b}{\alpha b} +\PTerm{a\beta}{a\beta} +\PTerm{\alpha\beta}{\alpha\beta} -\PTerm{ba}{ab} -\PTerm{b\alpha}{\alpha b} -\PTerm{\beta a}{a\beta} +\PTerm{\beta\alpha}{\alpha\beta} . \] The next correction to the Hamiltonian appears at order $g^3$ \[\label{eq:H3} \gen{H}_3= -\varepsilon^{abc}\varepsilon_{\delta\epsilon}\PTerm{\delta\epsilon}{abc} -\varepsilon^{\alpha\beta}\varepsilon_{cde}\PTerm{cde}{\alpha\beta} . \] It is compatible with the first corrections to the supercharges $\gen{Q}_1$ and $\gen{S}_1$. To some extent one can say that the Hamiltonian generally is shifted by two orders in $g$ with respect to the remainder of the algebra. \subsection{Beyond.} The higher orders of the Hamiltonian and the algebra have been constructed at orders $\order{g^6}$ and $\order{g^4}$, respectively, in \cite{Beisert:2003ys}. The concrete expressions are lengthy and not particularly enlightening, but they appear to preserve integrability. A dynamic charge which commutes with the whole algebra has been derived in \cite{Agarwal:2005jj} at order $\order{g^1}$, providing evidence for the compatibility of integrability with dynamic effects. To make integrability rigorous one could construct the bi-local Yangian generators and show that they commute properly with the algebra and among themselves. The Yangian generators $\gen{\hat J}$ are expected to take the generic form \cite{Serban:2004jf,Agarwal:2004sz,Zwiebel:2006cb,Beisert:2007jv} \[\gen{\hat J}^I{}_J\sim \{J^I{}_K|J^K{}_J\} -\{J^K{}_J|J^I{}_K\} +\mbox{local}, \] where the vertical bar stands for arbitrarily many intermediate sites and the local terms represent a local regularisation of the bi-local insertions. For example, the Yangian generator $\gen{\hat Q}$ corresponding to the supercharge $\gen{Q}$ reads at leading order \[(\gen{\hat Q}_0)^\alpha{}_b\sim \PYTerm{\alpha}{c}{c}{b} -\PYTerm{c}{b}{\alpha}{c} +\PYTerm{\alpha}{\gamma}{\gamma}{b} -\PYTerm{\gamma}{b}{\alpha}{\gamma}. \] The first correction is expected to take the form \[ (\gen{\hat Q}_1)^\alpha{}_b\sim \varepsilon^{\alpha\gamma} \varepsilon_{def} \bigbrk{ \PYTerm{de}{\gamma}{f}{b} -\PYTerm{f}{b}{de}{\gamma} } +\varepsilon^{\gamma\delta}\varepsilon_{bde} \bigbrk{ \PYTerm{\alpha}{\gamma}{de}{\delta} -\PYTerm{de}{\delta}{\alpha}{\gamma} } , \] where in both expressions the local regularisation terms are very restricted and can merely be proportional to $\gen{Q}_0$ and $\gen{Q}_1$, respectively. It may be interesting to treat the realisation of the Yangian algebra explicitly. In particular, there may be complications \cite{Zwiebel:2006cb} due to the fact that the Hamiltonian is part of the algebra itself and because it is well-known that the Yangian is conserved only up to boundary terms. \section{Undynamic Chain} Dynamic spin chains as presented in the previous section have not been explored to a large extent yet. In this section we present an alternative formulation in terms of a chain with an undynamic Hamiltonian. The reformulation will show that the difficulties of this particular model cannot be attributed to the dynamic effects. They are rather due to the long-range nature of the interactions. 
\subsection{Hilbert Space.} The dynamic effects are essentially due to the degeneracy of quantum numbers for $\phi_{[1}\phi_2\phi_{3]}$ and $\psi_{[1}\psi_{2]}$. The trick of freezing out the dynamic effects consists in moving one of the bosons into the ``background'' and thus balancing the number of spins. Let us single out one of the three bosons \[ \mathcal{Z}:=\phi^3 \] and restrict Latin indices to the range $a,b=1,2$ for the remainder of the paper. We now introduce composites as the fundamental spin degrees of freedom \[\label{eq:newspin} \phi^a_n:=\phi^a\underbrace{\mathcal{Z}\cdots\mathcal{Z}}_{n}\,, \qquad \psi^\alpha_n:=\psi^\alpha\underbrace{\mathcal{Z}\cdots\mathcal{Z}}_{n}\,, \qquad \mdl{V}= \bigoplus_{n=0}^\infty\, \bigvspan{\phi^1_n,\phi^2_n\mathpunct{\big|}\psi^1_n,\psi^2_n}. \] Every state of the above dynamic Hilbert space can obviously be translated to a state of an undynamic Hilbert space defined analogously to \eqref{eq:Hilbert}. One simply counts the number of $\mathcal{Z}$'s following any of the $\phi^a$ or $\psi^\alpha$ and attaches it as an additional index to the spin.% \footnote{The only exceptions are the states made from $\mathcal{Z}$ alone. These states cannot be represented, but luckily they are trivial and can be ignored to a large extent.} Note that by this redefinition we trade in the dynamic effects for infinitely many spin degrees of freedom. \subsection{Algebra Decomposition.} Clearly the new notation breaks the manifest $\alg{su}(3)$ symmetry of the bosons down to $\alg{su}(2)$. Together with the other $\alg{su}(2)$ and some of the fermionic generators the residual symmetry algebra reduces to $\alg{u}(2|2)$. This subalgebra is characterised by preserving the number of spin sites and it includes the Hamiltonian. The remaining generators are actually still dynamic but in a controlled way: They either add or take away one site. Let us decorate the residual $\alg{u}(2|2)$ generators by a tilde. Their embedding into $\alg{su}(2|3)$ is given by \[\label{eq:res} \begin{array}[b]{rcl} \gen{\tilde R}^a{}_b\earel{=}\gen{R}^a{}_b+\sfrac{1}{2} \delta^a_b \gen{R}^3{}_3, \\[3pt] \gen{\tilde L}^\alpha{}_\beta\earel{=}\gen{L}^\alpha{}_\beta, \end{array} \quad \begin{array}[b]{rcl} \gen{\tilde Q}^\alpha{}_b\earel{=}\gen{Q}^\alpha{}_b, \\[3pt] \gen{\tilde S}^a{}_\beta\earel{=}\gen{S}^a{}_\beta, \end{array} \quad \begin{array}[b]{rcl} \gen{\tilde B}\earel{=}\sfrac{3}{2}\gen{R}^3{}_3, \\[3pt] \gen{\tilde C}\earel{=}\gen{H}-\sfrac{1}{2} \gen{R}^3{}_3. \end{array} \] We shall call the remaining generators dynamic and distinguish them by a hat. Their embedding into $\alg{su}(2|3)$ reads \[\label{eq:dyn} \begin{array}[b]{rcl} \gen{\hat R}^a\earel{=}\gen{R}^a{}_3, \\[3pt] \gen{\hat Q}^\alpha{}\earel{=}\gen{Q}^\alpha{}_3, \end{array} \quad \begin{array}[b]{rcl} \gen{\hat R}_a\earel{=}\gen{R}^3{}_a, \\[3pt] \gen{\hat S}_\alpha{}\earel{=}\gen{S}^3{}_\alpha. 
\end{array} \] The residual $\alg{u}(2|2)$ algebra is determined by the following brackets \[\label{eq:rescomm} \begin{array}{rcl} \comm{\gen{\tilde B}}{\gen{\tilde Q}^\alpha{}_b}\earel{=} +\sfrac{1}{2}\gen{\tilde Q}^\alpha{}_b, \\[3pt] \comm{\gen{\tilde B}}{\gen{\tilde S}^a{}_\beta}\earel{=} -\sfrac{1}{2}\gen{\tilde S}^a{}_\beta, \end{array} \quad \acomm{\gen{\tilde Q}^\alpha{}_b}{\gen{\tilde S}^c{}_\delta}= \delta^\alpha_\delta\gen{\tilde R}^c{}_b +\delta^c_b\gen{\tilde L}^\alpha{}_\delta +\delta^\alpha_\delta\delta^c_b \gen{\tilde C}, \] along with the obvious brackets of $\alg{su}(2)\times\alg{su}(2)$ generators and trivial brackets for the central charge $\gen{\tilde C}$. The dynamical generators form two irreducible multiplets of $\alg{u}(2|2)$: $(\gen{\hat R}^a,\gen{\hat Q}^\alpha)$ and $(\gen{\hat R}_a,\gen{\hat S}_\alpha)$. The non-obvious mixed brackets for the first multiplet take the form \[\label{eq:mixedcomm} \begin{array}[b]{rcl} \comm{\gen{\tilde Q}^\alpha{}_b}{\gen{\hat R}^c}\earel{=} \delta^c_b\gen{\hat Q}^\alpha, \\[3pt] \acomm{\gen{\tilde S}^a{}_\beta}{\gen{\hat Q}^\gamma}\earel{=} \delta^\gamma_\beta\gen{\hat R}^a, \end{array} \quad \begin{array}[b]{rcl} \comm{\gen{\tilde B}}{\gen{\hat R}^a}\earel{=}-\sfrac{3}{2}\gen{\hat R}^a, \\[3pt] \comm{\gen{\tilde B}}{\gen{\hat Q}^\alpha}\earel{=}-\gen{\hat Q}^\alpha, \end{array} \quad \begin{array}[b]{rcl} \comm{\gen{\tilde C}}{\gen{\hat R}^a}\earel{=}+\sfrac{1}{2}\gen{\hat R}^a, \\[3pt] \comm{\gen{\tilde C}}{\gen{\hat Q}^\alpha}\earel{=}+\sfrac{1}{2}\gen{\hat Q}^\alpha. \end{array} \] The brackets for the conjugate multiplet essentially follow by conjugation. Finally, the non-trivial brackets between the dynamic generators yield \[\label{eq:dynacomm} \begin{array}[b]{rcl} \comm{\gen{\hat R}^a}{\gen{\hat R}_b}\earel{=}\gen{\tilde R}^a{}_b -\delta^a_b \gen{\tilde B}, \\[3pt] \comm{\gen{\hat R}^a}{\gen{\hat S}_\beta}\earel{=}\gen{\tilde S}^a{}_\beta, \end{array} \quad \begin{array}[b]{rcl} \comm{\gen{\hat Q}^\alpha}{\gen{\hat R}_b}\earel{=}\gen{\tilde Q}^\alpha{}_b, \\[3pt] \acomm{\gen{\hat Q}^\alpha}{\gen{\hat S}_\beta}\earel{=} \gen{\tilde L}^\alpha{}_\beta +\delta^\alpha_\beta(\gen{\tilde B}+\gen{\tilde C}). \end{array} \] \subsection{Representation of the Residual Algebra.} With the above decomposition relations it is straightforward to convert the representation of the previous section to the new basis. The leading order $\alg{u}(2|2)$ algebra reads \[ \begin{array}[b]{rcl} \gen{R}^a{}_b\earel{=} \PTerm{a(n)}{b(n)}-\sfrac{1}{2}\delta^a_b\PTerm{c(n)}{c(n)}, \\[3pt] \gen{L}^\alpha{}_\beta\earel{=} \PTerm{\alpha(n)}{\beta(n)}-\sfrac{1}{2}\delta^\alpha_\beta\PTerm{\gamma(n)}{\gamma(n)}, \end{array}\quad \begin{array}[b]{rcl} (\gen{Q}_0)^\alpha{}_b\earel{=} \PTerm{\alpha(n)}{b(n)}, \\[3pt] (\gen{S}_0)^a{}_\beta\earel{=} \PTerm{a(n)}{\beta(n)}, \end{array}\quad \begin{array}[b]{rcl} \gen{\tilde C}_0\earel{=} \sfrac{1}{2}\PTerm{I(n)}{I(n)}, \\[3pt] \gen{\tilde B}\earel{=} n\PTerm{I(n)}{I(n)} -\sfrac{1}{2}\PTerm{a(n)}{a(n)}. \end{array} \] Here we have extended the notation for interaction symbols in a hopefully evident way to the new states \eqref{eq:newspin}, where $n$ stands for the number of trailing $\mathcal{Z}$'s. A repeated upper and lower index $n$ is implicitly summed over all integers starting from $0$. A capital Latin letter represents either a boson or a fermion. For example, the symbols $\PTerm{I(n)}{I(n)}$ and $n\PTerm{I(n)}{I(n)}$ count the length of the new chain and the number of $\mathcal{Z}$'s, respectively. 
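To make the change of basis \eqref{eq:newspin} concrete, the following short Python sketch (with our own naming conventions) translates a cyclic dynamic-chain state over $\{\phi^1,\phi^2,\mathcal{Z},\psi^1,\psi^2\}$ into the undynamic composite basis by counting trailing $\mathcal{Z}$'s. Leading $\mathcal{Z}$'s are rotated to the end, which is permitted precisely because we work with cyclic states; the pure-$\mathcal{Z}$ states are rejected, as they have no undynamic image.
\begin{verbatim}
# Minimal sketch (our own naming) of the change of basis (eq:newspin):
# a dynamic state over {phi1, phi2, Z, psi1, psi2} becomes a list of
# composites (letter, n), with n the number of trailing Z's.
def to_undynamic(spins):
    if all(s == 'Z' for s in spins):
        raise ValueError("pure-Z states have no undynamic image")
    # rotate so the chain starts on a non-Z spin (allowed: cyclic states)
    start = next(i for i, s in enumerate(spins) if s != 'Z')
    spins = spins[start:] + spins[:start]
    out, i = [], 0
    while i < len(spins):
        letter, n = spins[i], 0
        i += 1
        while i < len(spins) and spins[i] == 'Z':
            n += 1
            i += 1
        out.append((letter, n))
    return out

# e.g. phi1 Z Z psi2 Z  ->  [('phi1', 2), ('psi2', 1)]
print(to_undynamic(['phi1', 'Z', 'Z', 'psi2', 'Z']))
\end{verbatim}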
The leading correction to the supercharges reads \< (\gen{\tilde Q}_1)^\alpha{}_b\earel{=} \varepsilon^{\alpha\gamma}\varepsilon_{bd} \lrbrk{\PTerm{d(n+1)}{\gamma(n)}-\PTerm{I(k+1),d(n)}{I(k),\gamma(n)}}, \nonumber\\ (\gen{\tilde S}_1)^a{}_\beta\earel{=} \varepsilon^{ac}\varepsilon_{\beta\delta} \lrbrk{\PTerm{\delta(n)}{c(n+1)}-\PTerm{I(k),\delta(n)}{I(k+1),c(n)}}. \> While in \eqref{eq:QS1} all interactions were one-to-two or two-to-one site, here we get one-to-one site or two-to-two site operators. In the case of the two-to-two site contributions the second site is merely needed to account for the change of leading $\mathcal{Z}$'s, which cannot be represented otherwise. A careful conversion of the leading interacting Hamiltonian \eqref{eq:H2} yields the new representation \< \gen{\tilde C}_2 \earel{=} \PTerm{I(k),J(n+1)}{I(k),J(n+1)} -\PTerm{I(k+1),J(n)}{I(k),J(n+1)} -\PTerm{I(k),J(n+1)}{I(k+1),J(n)} +\PTerm{I(k+1),J(n)}{I(k+1),J(n)} \nl +\PTerm{I(0),J(n)}{I(0),J(n)} -\PTerm{a(0),b(n)}{b(0),a(n)} -\PTerm{\alpha(0),b(n)}{b(0),\alpha(n)} -\PTerm{b(0),\alpha(n)}{\alpha(0),b(n)} +\PTerm{\beta(0),\alpha(n)}{\alpha(0),\beta(n)}. \> Fortunately, this is still a nearest-neighbour spin chain Hamiltonian. Note that the terms on the two lines above have a somewhat different meaning: The terms on the first row represent propagation terms of the magnons along the original chain, while the terms on the second row represent spin interactions of two adjacent magnons. The first correction to the interacting Hamiltonian \eqref{eq:H3} marks the leading appearance of dynamic effects. In the new basis, however, the length remains fixed \< \gen{\tilde C}_3\earel{=} \varepsilon_{cd}\varepsilon^{\alpha\beta} \lrbrk{ -\PTerm{c(0),d(n+1)}{\alpha(0),\beta(n)} +\PTerm{c(1),d(n)}{\alpha(0),\beta(n)} -\PTerm{I(k+1),c(0),d(n)}{I(k),\alpha(0),\beta(n)} } \nl +\varepsilon^{cd} \varepsilon_{\alpha\beta} \lrbrk{ -\PTerm{\alpha(0),\beta(n)}{c(0),d(n+1)} +\PTerm{\alpha(0),\beta(n)}{c(1),d(n)} -\PTerm{I(k),\alpha(0),\beta(n)}{I(k+1),c(0),d(n)} }. \> \subsection{Representation of Dynamic Generators.} Note that $\gen{\tilde C}_0$ measures half the length of the undynamic chain and thus the two brackets in \eqref{eq:mixedcomm} imply that the generators $\gen{\hat R}^a$ and $\gen{\hat Q}^\alpha$ add one site while $\gen{\hat R}_a$ and $\gen{\hat S}_\alpha$ remove one site. The leading-order representation takes the form \[ \begin{array}{rcl} \gen{\hat R}^{a}\earel{=} \PTerm{I(k),a(n-1-k)}{I(n)}, \\[3pt] (\gen{\hat Q}_0)^{\alpha}\earel{=} \PTerm{I(k),\alpha(n-1-k)}{I(n)}, \end{array}\quad \begin{array}{rcl} \gen{\hat R}_{a}\earel{=} \PTerm{I(n)}{I(k),a(n-1-k)}, \\[3pt] (\gen{\hat S}_0)_{\alpha}\earel{=} \PTerm{I(n)}{I(k),\alpha(n-1-k)}, \end{array} \] These generators change the length by one unit because they replace a background spin $\mathcal{Z}$ by something else or vice versa. Despite the length fluctuation, these generators close onto the one-to-one generators of the residual $\alg{u}(2|2)$ representation. For example, the non-manifest $\alg{su}(3)$ brackets can be performed easily \< \comm{\gen{\hat R}^{a}}{\gen{\hat R}_{b}} \earel{=} \sum_{m=0}^\infty \sum_{n=0}^\infty \sum_{k=0}^{n-1} \sum_{j=0}^{m-1} \lrcomm{\PTerm{I(k),a(n-1-k)}{I(n)}}{\PTerm{J(m)}{J(j),b(m-1-j)}} \nonumber\\\earel{=} \sum_{k=0}^{\infty} \sum_{n=0}^\infty \PTerm{I(k),a(n)}{I(k),b(n)} -\sum_{n=0}^\infty \delta^a_b n\PTerm{I(n)}{I(n)} =\gen{\tilde R}^{a}{}_{b} -\delta^a_b\gen{\tilde B}, \> as it should according to \eqref{eq:dynacomm}. 
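This closure can also be spot-checked numerically. The following self-contained Python sketch (a toy implementation of ours, using the conventions above) realises $\gen{\hat R}^a$ and $\gen{\hat R}_b$ on cyclic, purely bosonic undynamic states, so that no grading signs arise, and verifies $\comm{\gen{\hat R}^a}{\gen{\hat R}_b}=\gen{\tilde R}^a{}_b-\delta^a_b\gen{\tilde B}$ on a few test states; single-site states are excluded since there $\gen{\hat R}_b$ would produce the unrepresentable pure-$\mathcal{Z}$ state.
\begin{verbatim}
# Toy check (ours) of [Rhat^a, Rhat_b] = Rtilde^a_b - delta^a_b Btilde
# on cyclic bosonic states; a site is a pair (letter, n) with n
# trailing Z's, e.g. phi1 Z Z phi2 <-> (('phi1', 2), ('phi2', 0)).
from collections import defaultdict
from itertools import product

def canon(st):
    # canonical representative of the cyclic equivalence class
    return min(st[i:] + st[:i] for i in range(len(st)))

def Rhat_up(a, vec):
    # Rhat^a: turn one trailing Z of any site into a new site (a, .)
    out = defaultdict(float)
    for st, c in vec.items():
        for i, (let, n) in enumerate(st):
            for k in range(n):
                out[canon(st[:i] + ((let, k), (a, n-1-k)) + st[i+1:])] += c
    return out

def Rhat_dn(b, vec):
    # Rhat_b: absorb an adjacent site (b, m) into its left neighbour
    out = defaultdict(float)
    for st, c in vec.items():
        L = len(st)
        for i in range(L):
            j = (i + 1) % L
            if L >= 2 and st[j][0] == b:
                let, k = st[i]
                new = tuple((let, k + st[j][1] + 1) if x == i else st[x]
                            for x in range(L) if x != j)
                out[canon(new)] += c
    return out

def rhs(a, b, vec):
    # (Rtilde^a_b - delta^a_b Btilde)|v> = [phi^b -> phi^a] - delta N_Z
    out = defaultdict(float)
    for st, c in vec.items():
        for i, (let, n) in enumerate(st):
            if let == b:
                out[canon(st[:i] + ((a, n),) + st[i+1:])] += c
        if a == b:
            out[canon(st)] -= c * sum(n for _, n in st)
    return out

def diff(u, w):
    out = defaultdict(float, u)
    for k, c in w.items():
        out[k] -= c
    return {k: c for k, c in out.items() if abs(c) > 1e-12}

for st in [(('phi1', 1), ('phi2', 0)),
           (('phi1', 0), ('phi2', 2), ('phi1', 1))]:
    v = {canon(st): 1.0}
    for a, b in product(('phi1', 'phi2'), repeat=2):
        lhs = diff(Rhat_up(a, Rhat_dn(b, v)), Rhat_dn(b, Rhat_up(a, v)))
        assert diff(lhs, rhs(a, b, v)) == {}, (a, b)
print("bracket closes on the test states")
\end{verbatim}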
The first correction to the dynamic supercharges reads \[ (\gen{\hat Q}_1)^\alpha{}= \varepsilon^{\alpha\beta}\varepsilon_{cd} \PTerm{c(0),d(n)}{\beta(n)}, \qquad (\gen{\hat S}_1){}_\alpha= \varepsilon^{cd}\varepsilon_{\alpha\beta} \PTerm{\beta(n)}{c(0),d(n)}. \] Actually, it is not necessary to specify either of the pairs $\gen{\hat Q},\gen{\hat S}$ or $\gen{\tilde Q},\gen{\tilde S}$ explicitly because according to \eqref{eq:mixedcomm} and \eqref{eq:dynacomm} one pair can simply be obtained from the other by commutation with the exact generators $\gen{\hat R}$. Dynamic (super)symmetries which relate conventional nearest-neighbour spin chain models at lengths differing by one unit are not unheard of: In particular they have appeared in various sectors of AdS/CFT \cite{Beisert:2004ry,Beisert:2005fw,Zwiebel:2005er,Beisert:2007sk}. They also exist for the XXX$_1$ chain \cite{Schoutens:2008aa}, the XXZ$_{1/2}$ chain with $q=e^{\pm 2\pi i/3}$ \cite{Fendley:2003je,Fendley:2006mj} (or more generally XXZ$_s$ with $q=e^{n\pi i/(1\pm s)}$), and a more exotic model \cite{Santachiara:2005aa}. They all share the feature that Bethe roots at rapidity $0$ induce the symmetry and that the symmetry can only exist for cyclic closed chains or for open chains. \section{Comments} In this final section I would like to comment on the reformulation performed in the previous section and on the possibility of extending such a reformulation to the whole AdS/CFT spin chain with $\alg{psu}(2,2|4)$ symmetry. \subsection{Algebraic Formulation.} It is fair to say that the picture presented in the previous section does not constitute an improvement of the situation per se. For example, the construction of \cite{Beisert:2003ys} would not simplify in the new basis. In fact it would be somewhat worse, because the range of the interactions changes drastically between the pictures: The perturbative construction is expected to follow the range of the original spin chain, while the range in the new basis represents the number of magnon excitations involved in the interaction. Moreover, the manifest $\alg{su}(3)$ symmetry reduces to merely $\alg{su}(2)\times \alg{u}(1)$. Finally, there is an unaesthetic asymmetry between leading and trailing background spins $\mathcal{Z}$. Nevertheless, there is a one-to-one map of interactions, and thus essentially nothing is lost by the change of picture. The potential advantages of the new basis are of a more formal nature. There is some hope that the absence of dynamic effects for a large part of the algebra will make the model more accessible to conventional algebraic methods such as a Hopf algebra treatment. For example, representing the action of symmetry generators through a coproduct appears reasonable only if the representation is undynamic. However, the problems introduced by long-range interactions certainly remain to be overcome. The remaining dynamic symmetry generators change the length by exactly one unit, which is much better than the arbitrariness in the original picture. In fact the various symmetry enhancements discussed in \cite{Fendley:2003je,Santachiara:2005aa,Fendley:2006mj} and \cite{Beisert:2004ry,Beisert:2005fw,Beisert:2007sk} are of this form, or can be brought to it, and they call for a more general story that is yet to be understood. 
\subsection{Excitation Picture.} The picture advertised here closely resembles what one obtains by performing the coordinate Bethe ansatz in \cite{Beisert:2005tm}.% \footnote{A similar picture also underlies the NLIE approach, see e.g.\ \cite{Bombardelli:2007ed,Freyhult:2007pz}.} Namely, the spin $\mathcal{Z}$ is treated as a background spin and the magnons become sites of the reduced chain with $2|2$ spin orientations per site. The only difference is that magnons carry a definite momentum while here the spin orientation also specifies the distance between two adjacent excitations.% \footnote{\label{fn:1}It might also be worthwhile to investigate an absolute (instead of relative) position space picture, where, however, length fluctuations may become difficult to handle.} In that sense, these two pictures are essentially related by Fourier transformation. In fact, the residual $\alg{u}(2|2)$ algebra acting on the new basis coincides with the $\alg{su}(2|2)$ algebra of the coordinate Bethe ansatz. In this context the difference between the pictures is that here UV effects, i.e.\ what happens when two magnons come close (in the original model), can be honestly represented. This may be crucial for understanding finite-size effects. In the asymptotic coordinate Bethe ansatz such effects are largely ignored and collectively accounted for by the S-matrix. Conversely, here it is not possible to represent gauge transformations in a consistent manner. In the coordinate Bethe ansatz the gauge transformations, alias the central extensions, were crucial for the success of the construction. Representing the $\alg{su}(2|2)$ algebra within the coordinate Bethe ansatz is particularly simple because one only has to understand the single-magnon representation and how to assemble multi-magnon representations from that. The latter is achieved by a coproduct \cite{Gomez:2006va,Plefka:2006ze} within the Hopf algebra framework. One might actually do the same here, at least to some approximation: namely, find a representation of $\alg{u}(2|2)$ on the infinite-dimensional spin module. A similar proposal has appeared recently for the closely related exceptional superalgebra $\alg{d}(2,1;\alpha)$ in \cite{Matsumoto:2008ww}. \subsection{Complete AdS/CFT Spin Chain.} It would be desirable to represent the complete AdS/CFT spin chain with $\alg{psu}(2,2|4)$ symmetry. However, the generalisation is not straight-forward: the spin module $\mdl{V}$ contains not only the background spin and single excitations, but also multiple excitations. The decomposition of $\mdl{V}$ in terms of the subalgebras $\alg{psu}(2|2)\times\alg{psu}(2|2)$ reads \cite{Beisert:2006qh} \[ \mdl{V}=\bigoplus_{n=0}^\infty \mdl{V}^{}_n\otimes\mdl{V}'_n, \qquad \mdl{V}^{}_n=\lreval{\lrbrk{\mdl{V}^{}_1}^{\otimes n}}\indup{antisym}. \] Here $\mdl{V}_0$ is the trivial module spanned by the background spin $\mathcal{Z}$ and $\mdl{V}_n$ is the $n$-fold graded anti-symmetric tensor product of $\mdl{V}_1=\vspan{\phi^1,\phi^2\mathpunct{|}\psi^1,\psi^2}$. The $\mdl{V}'_n$ denote the corresponding modules of the second $\alg{psu}(2|2)$. There are now two ways in which one could attempt to proceed: as before, one could dress each of the components $\mdl{V}^{}_n\otimes\mdl{V}'_n$ for $n\neq 0$ by an arbitrary number of background spins $\mathcal{Z}$. However, this would not freeze the spin chain because only the overall number of excitations $n$ is conserved. For example, two single excitations ($n=1$) can be mapped to one double excitation ($n=2$). 
Instead, one should work only with $\mdl{V}^{}_1\otimes\mdl{V}'_1$ trailed by arbitrarily many background spins $\mathcal{Z}$. The higher excitations would be represented by gluing together single excitations. For example, the double excitation $\bar{\mathcal{Z}}$ can be thought of as being composed from $\phi^b\otimes\phi^{\dot a}$ and $\phi^d\otimes\phi^{\dot c}$: \[ \bar{\mathcal{Z}}_n \to \varepsilon_{bd}\varepsilon_{\dot a\dot c}\, (\phi^b\otimes\phi^{\dot a})_{-1} (\phi^d\otimes\phi^{\dot c})_n. \] Here the number ``$-1$'' of trailing $\mathcal{Z}$'s is meant to indicate that the two consecutive single excitations reside on a single site (i.e.\ $-1$ sites in between) and thus form a double excitation. The problem with this representation is the graded anti-symmetrisation implicit for multiple excitations. Consequently, the Hilbert space $\mdl{H}_L$ of the model contains additional unphysical states.% \footnote{The same problem arises in an absolute position space picture, cf.\ footnote \ref{fn:1} on page \pageref{fn:1}, when the excitations are not well-ordered.} Therefore one has to ensure that the Hamiltonian and the symmetry generators do not map physical states to unphysical states. One could project out unphysical states from the Hilbert space from the start. This would lead to potential problems with the definition of interactions (they have to be compatible with the projection). Alternatively, one could adjoin the Hamiltonian with a projection onto physical states. Unfortunately, the latter are defined in a long-ranged fashion (an arbitrary number of adjacent spins has to be symmetrised). This apparently makes even the leading-order Hamiltonian long-ranged. \subsection{Conclusions.} In conclusion, I have presented a reformulation of the $\alg{su}(2|3)$ dynamic spin chain constructed in \cite{Beisert:2003ys} where the dynamic effects are frozen out for a $\alg{u}(2|2)$ subalgebra including the Hamiltonian. The other generators remain dynamic, but they merely change the length by precisely one unit as in \cite{Fendley:2003je,Fendley:2006mj,Santachiara:2005aa,Beisert:2004ry,Beisert:2005fw,Zwiebel:2005er,Beisert:2007sk}. The reformulation is intended to make the chain more accessible to a conventional algebraic treatment; it is merely the first step. The change of picture works nicely for $\alg{su}(2|3)$ where only single excitations of the ferromagnetic vacuum exist. A similar treatment of the complete AdS/CFT spin chain with $\alg{psu}(2,2|4)$ symmetry and infinite-dimensional spin representations requires further insight due to the existence of multiple coincident excitations. However, if the proposed undynamic reformulation leads to a better understanding of the $\alg{su}(2|3)$ model, then there may well be a way to generalise those results to $\alg{psu}(2,2|4)$. \subsection{Acknowledgements.} I would like to thank the Galileo Galilei Institute (workshop ``Non-Per\-tur\-ba\-tive Methods in Strongly Coupled Gauge Theories''), Institut des Hautes \'Etudes Scientifiques and in particular Kyoto University (conference ``30 Years of Mathematical Methods in High Energy Physics'') for hospitality while part of this work was performed.
\section*{High-resolution information retrieval} \medskip \begin{figure} \begin{center} \includegraphics[width=15 cm]{figure_02.pdf} \caption{The data analysis procedure on a single sub-data-cube. \textbf{a}: Data-cube $I(\Delta x,\Delta y,\delta x,\delta y)$. \textbf{b}: The standard resolution sub-image $J_m(\Delta x, \Delta y)$ obtained by summing the sub-data-cube shown by the square non-greyed out area over $\delta x$ and $\delta y$. \textbf{c}: The speckle-scan matrix $K_m(\delta x, \delta y)$ obtained by summing the sub-data-cube shown by the square non-greyed out area over $\Delta x$ and $\Delta y$. \textbf{d}: The intensity of the Fourier components of $J_m(\Delta x, \Delta y)$. \textbf{e}: The phase of the Fourier components of $J_m(\Delta x, \Delta y)$. \textbf{f}: The intensity of the Fourier components of $K_m(\delta x, \delta y)$. \textbf{g}: The phase of the Fourier components of $K_m(\delta x, \delta y)$.} \label{fig:procedure} \end{center} \end{figure} In Fig. \ref{fig:procedure} we show the data analysis procedure. We divide the data-cube (Fig. \ref{fig:procedure}a) into $N$ sub-data-cubes by applying $N$ square window functions $W_m(x,y)$, each with a width and a height equal to half of the speckle-scan range (1 \textmu m); each sub-data-cube can be processed in parallel. We construct a standard resolution sub-image $J_m(\Delta x,\Delta y)$ (Fig. \ref{fig:procedure}b) and a speckle-scan matrix $K_m(\delta x, \delta y)$ (Fig. \ref{fig:procedure}c) from the corresponding sub-data-cube as follows: We sum our sub-data-cube over $\delta x$ and $\delta y$, and obtain the standard resolution sub-image $J_m(\Delta x, \Delta y)$. In our approach, it is useful to represent $J_m(\Delta x,\Delta y)$ in the Fourier domain, where its spatial information is given by the intensity and the phase of the Fourier components (Fig. \ref{fig:procedure}d,e). To obtain the speckle-scan matrix $K_m(\delta x,\delta y)$, we calculate the following summation \begin{align} K_m(\delta x,\delta y)&= \sum\limits_{\Delta x, \Delta y}I(\Delta x,\Delta y,\delta x,\delta y)W_m(\Delta x,\Delta y) \nonumber\\ &= \sum\limits_{\Delta x, \Delta y}O(\Delta x,\Delta y)S(\Delta x - \delta x,\Delta y - \delta y)\,W_m(\Delta x,\Delta y) \nonumber\\ &=[(O\cdot W_m)\ast S](\delta x, \delta y), \label{eq:convolution} \end{align} where the symbol $\ast$ denotes a convolution product and where in the last step we assumed that the scan range stays within the optical memory effect range. In Figs. \ref{fig:procedure}f and \ref{fig:procedure}g we represent the speckle-scan matrix $K_m(\delta x, \delta y)$ in the Fourier domain. We obtain the intensity of the high-frequency Fourier components of the object from its speckle-scan matrix as follows: \begin{align} \vert \Fourier{K_m}\vert &=\vert \Fourier{O\cdot W_m}\vert\cdot \vert \Fourier{S}\vert \nonumber\\ &= C\vert\Fourier{O\cdot W_m}\vert, \label{eq:Fourier_modulus} \end{align} where $C$ is the autocorrelation of the amplitude transfer function of our scattering lens, and $\Fourier{}$ denotes a Fourier transform. Here we use the approximation that within the NA of the GaP scattering lens, the absolute value of the spatial spectrum of the field is constant for a fully developed speckle pattern \cite{Goodman2000}. Equation \ref{eq:Fourier_modulus} shows that the intensity of the high-frequency Fourier components of the object is retained behind the scattering layer (Fig. \ref{fig:procedure}f). 
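For concreteness, the construction of $J_m$ and $K_m$ from a windowed sub-data-cube can be sketched in a few lines of Python. This is a minimal illustration rather than our analysis code; the axis ordering of the data-cube array is an assumption of the example.
\begin{verbatim}
import numpy as np

def project_subcube(I, W):
    """I: 4-D data cube I[DX, DY, dx, dy] with camera axes
    (Delta x, Delta y) first and speckle-scan axes
    (delta x, delta y) last; W: 2-D window W_m over the
    camera plane. Returns the standard resolution sub-image
    J_m and the speckle-scan matrix K_m."""
    sub = I * W[:, :, None, None]   # apply W_m on the camera axes
    J_m = sub.sum(axis=(2, 3))      # sum over (delta x, delta y)
    K_m = sub.sum(axis=(0, 1))      # sum over (Delta x, Delta y)
    return J_m, K_m
\end{verbatim}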
The phase information of the object's Fourier components is lost due to the random and unknown phase of the speckle pattern (Fig. \ref{fig:procedure}g). Fortunately, it is often possible to infer the lost phase information using an iterative phase retrieval algorithm \cite{Fienup1978_OL, Fienup1982_AO, Millane1990_JOSAA, segev_2012_natmater}. In essence, our approach relies on reducing the light scattering problem to a phase retrieval problem. \section*{Image reconstruction} \medskip \begin{figure} \begin{center} \includegraphics[width=8.8 cm]{figure_03.pdf} \caption{Phase retrieval in the Fourier domain. \textbf{a}: The phase of the Fourier components of the object. \textbf{b}: The intensity of the Fourier components of the object. \textbf{c}: The Gerchberg-Saxton-type algorithm. \textbf{d}: The retrieved phase of the high-frequency Fourier components of the object. (The phase data comes from Fig. \ref{fig:procedure}e and the intensity data comes from Fig. \ref{fig:procedure}f. Colourbars are as in Fig. \ref{fig:procedure}.)} \label{fig:phase_retrieval} \end{center} \end{figure} We have developed a new Gerchberg-Saxton-type algorithm that uniquely retrieves the high-frequency phase information of the Fourier components of our object, using the low-frequency phase information of the Fourier components of the object as a constraint. In general, a Gerchberg-Saxton-type algorithm retrieves the phase of the Fourier components of an image from the intensity of its Fourier components, with some constraints on the image, such as its consisting of real and positive values. In a Gerchberg-Saxton-type algorithm, using only the intensity of the Fourier components gives ambiguities in the solution \cite{Millane1990_JOSAA, segev_phase_retrieval_2014}. These ambiguities are flips or translations of the reconstructed intensity object. In our Gerchberg-Saxton-type algorithm, we use the phase of the low-frequency Fourier components of a standard resolution image of our object as additional information to obtain a unique solution. We use constraints both in the object domain and in the Fourier domain. In the object domain we use the information that the measured intensity of our fluorescent object is real and positive. In the Fourier domain, we use the phase of the low-frequency Fourier components. Combining these two types of information, the algorithm converges to a unique solution which gives us the shape, the position and the orientation of the object. This is a major improvement over previous approaches \cite{bertolotti_nature_2012, psaltis_seethrough_2014, katz_seethrough_2014} that do not provide position and orientation information. In Fig. \ref{fig:phase_retrieval} the phase retrieval procedure for the high-frequency Fourier components is shown for a single sub-data-cube. First, we Fourier transform both the standard resolution sub-image $J_m(\Delta x, \Delta y)$ and the corresponding speckle-scan matrix $K_m(\delta x, \delta y)$. We discard the intensity of the Fourier components of $J_m(\Delta x, \Delta y)$ and the phase of the Fourier components of $K_m(\delta x, \delta y)$. We input the phase information of the low-frequency Fourier components of $J_m(\Delta x, \Delta y)$ and the intensity information of the high-frequency Fourier components of $K_m(\delta x, \delta y)$ into our Gerchberg-Saxton-type algorithm. The algorithm outputs the phase information of the high-frequency Fourier components. 
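A minimal sketch of such an iteration is given below. It is an illustration rather than the implementation used for our results: it assumes that $K_m$ has been resampled onto the same pixel grid as $J_m$, and it glosses over the smooth hand-over between the trusted low-frequency disc and the rest of the spectrum.
\begin{verbatim}
import numpy as np

def gs_phase_retrieval(J_m, K_m, lowfreq_radius, n_iter=200):
    """Recover high-frequency Fourier phase from the Fourier
    magnitude of K_m, keeping the low-frequency phase of J_m
    fixed and enforcing a real, positive object."""
    F = lambda g: np.fft.fftshift(np.fft.fft2(g))
    Finv = lambda G: np.fft.ifft2(np.fft.ifftshift(G))

    phase_low = np.angle(F(J_m))      # trusted low-frequency phase
    magnitude = np.abs(F(K_m))        # measured Fourier magnitude

    ky, kx = np.indices(J_m.shape)
    cy, cx = J_m.shape[0] // 2, J_m.shape[1] // 2
    low = (kx - cx)**2 + (ky - cy)**2 <= lowfreq_radius**2

    g = np.array(J_m, dtype=float)    # initial object estimate
    for _ in range(n_iter):
        G = F(g)
        phase = np.angle(G)
        phase[low] = phase_low[low]   # Fourier-domain constraint
        g = np.real(Finv(magnitude * np.exp(1j * phase)))
        g[g < 0] = 0.0                # object-domain constraint
    return g
\end{verbatim}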
Finally, we combine and inverse Fourier transform all available phase and intensity information of the Fourier components to obtain the high-resolution sub-image. To acquire a wide-field image, we apply our phase retrieval procedure (see Fig. \ref{fig:phase_retrieval}) to every sub-data-cube (see Fig. \ref{fig:procedure}) in parallel. Each reconstructed overlapping high-resolution sub-image is windowed by a smooth window function to minimize edge effects. We tile the reconstructed high-resolution sub-images to yield a wide-field image of the complete object. The field of view of the reconstructed image is wider than the speckle-scan range and spans the field of view of the detection optics. \section*{Discussion} \medskip \begin{figure} \begin{center} \includegraphics[width=15 cm]{figure_04.pdf} \caption{Wide-field images of fluorescent nanospheres with a diameter of 100 nm. \textbf{a}: The wide-field image by conventional microscopy. \textbf{b}: A zoomed image of \textbf{a}. \textbf{c}: A cross section of \textbf{b} along the white line. \textbf{d}: The wide-field image by SCORE microscopy. \textbf{e}: A zoomed image of \textbf{d}. \textbf{f}: A cross section of \textbf{e} along the white line. In \textbf{c}, a single nanosphere is apparent, while in \textbf{f} two smaller nanospheres are resolved with a center-to-center distance of 146 nm.} \label{fig:result} \end{center} \end{figure} To experimentally test our new imaging method, we use a collection of fluorescent nanospheres with a diameter of 100 nm as test objects. Fig. \ref{fig:result}a shows an image of a collection of many fluorescent nanospheres taken with conventional high-NA microscopy in a field of view of $10\times10$ \textmu m$^2$. The zoom-in in Fig. \ref{fig:result}b reveals five separate nanospheres. Fig. \ref{fig:result}c shows a cross-section of two nanospheres from Fig. \ref{fig:result}b that have a full-width-half-maximum of about 450 nm. We now turn to the high-resolution SCORE results. Fig. \ref{fig:result}d shows the same area as in Fig. \ref{fig:result}a. In Fig. \ref{fig:result}d the nanospheres are sharper compared to the image in Fig. \ref{fig:result}a. The zoom-in in Fig. \ref{fig:result}e shows the same area as in Fig. \ref{fig:result}b: we see that the nanospheres are much sharper compared to Fig. \ref{fig:result}b and we see six separate nanospheres, whereas fewer nanospheres were discernible in Fig. \ref{fig:result}b. Notably, at the left center two nanospheres are distinguished that were observed as one blob in Fig. \ref{fig:result}b. Fig. \ref{fig:result}f shows a cross-section of three nanospheres from Fig. \ref{fig:result}e. A clear demonstration of the enhanced resolution is given in Fig. \ref{fig:result}f, where we clearly resolve two nanospheres with a center-to-center distance of 146 nm and an edge-to-edge distance of 46 nm. A numerical deconvolution of the image of a single nanosphere with the known shape of the object reveals that we have a resolution of 130 nm according to Sparrow's criterion. The deconvolved full-width-half-maximum of our point spread function is 140 nm, which is slightly larger than the full-width-half-maximum of $r = 116$ nm expected for the given illumination beam width. The difference between the expected and the demonstrated resolutions may be due to sample drift during the experiment and pointing noise of the laser. 
Our results demonstrate that regardless of the range of the optical memory effect, speckle correlations enhance the resolution of an optical microscope without any restriction on its field of view. In summary, we experimentally demonstrate a new method to obtain high-resolution and wide-field fluorescence images. In combination with a gallium phosphide scattering lens, speckle correlation resolution enhancement (SCORE) has the ability to acquire very high-resolution images with a field of view that is much wider than the speckle-scan range. SCORE is thus excellently suited for imaging a two-dimensional slice of an object as large as a few hundred micrometers with subcellular resolution. Characterization of the scattering medium by methods such as wavefront shaping \cite{vanPutten2011_PRL}, digital optical phase conjugation \cite{Hsieh2010aa} or transmission matrix measurement \cite{Popoff2010ab, Choi2011_PRL} is not needed. The resolution of our current proof-of-principle experiment is limited by the signal-to-noise ratio and stage drift. A higher illumination power, a wider beam, and a shorter excitation wavelength can be used to approach the resolution limit of $\lambda_{ill}/2n = 80$ nm in GaP, where $n$ = 3.45 for $\lambda_{ill}$ = 550 nm. Without any additional hardware, the resolution of SCORE can be improved up to $(2n/\lambda_{ill}+2\text{NA}/\lambda_{flu})^{-1} = 64$ nm by using the resolution information of the conventional microscope objective in detection, as in SIM \cite{1999_proc_SIM, SIM_2000, 2005_NSIM, 2012_bSIM_sentenac}. \section*{Methods} \medskip \textbf{Parallel detection:} Speckle-scan matrices contain high-resolution information about the imaged object. In order to measure a speckle-scan matrix $K_m(\delta x,\delta y)$, the speckle pattern has to stay correlated over the resolution $R = \lambda_{flu}/2\text{NA}$. This constraint is met when $R < \pi nL/(\lambda_{ill} d)$, where $n$ is the refractive index of the GaP substrate, $L$ the thickness of the GaP substrate, $\lambda_{ill}$ the wavelength of the incident light, and $d$ the thickness of the porous GaP layer. In our GaP substrate $\pi nL/(\lambda_{ill} d)$ is on the order of 2 \textmu m. Our detection optics has a resolution ($R$ = 322 nm) that is high enough to fulfill this condition. The average speckle grain size of a GaP scattering lens is $r = \lambda/[2n\sin(\tan^{-1}(W/2L))]$, where $W$ is the beam width. In our case, the average speckle grain size is $r = 116$ nm. We scan the speckle pattern with steps of 40 nm over a range of 2 \textmu m in two dimensions, requiring 2500 measurements. For each measurement the full camera image is stored, which allows us to retrieve the object at any position of the captured field of view.
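As a quick numerical check of the resolution figures quoted above (a back-of-the-envelope sketch using only the numbers given in the text), the diffraction limit in GaP and the combined limit can be reproduced as follows:
\begin{verbatim}
lam_ill = 550.0     # illumination wavelength (nm)
n_GaP = 3.45        # refractive index of GaP at 550 nm
R_det = 322.0       # detection resolution lambda_flu/(2 NA) (nm)

R_ill = lam_ill / (2 * n_GaP)             # ~79.7 nm, quoted as 80 nm
R_combined = 1 / (1 / R_ill + 1 / R_det)  # ~63.9 nm, quoted as 64 nm
\end{verbatim}
Note that $1/R_{combined} = 2n/\lambda_{ill} + 2\text{NA}/\lambda_{flu}$, since $R_{det} = \lambda_{flu}/2\text{NA}$.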
\section*{INTRODUCTION.} The theorem about the Moon in a puddle provides the simplest meaningful example of a local-to-global theorem which is mainly what differential geometry is about. Yet, the theorem is surprisingly not well-known. This paper aims to remedy this omission by calling attention to the result and applying it to a well-known theorem. \section*{MOON IN A PUDDLE.} The following question was initially asked by Abram Fet and solved by Vladimir Ionin and German Pestov \cite{pestov-ionin}. \begin{theorem}\label{thm:moon-orginal} Assume $\gamma$ is a simple closed smooth regular plane curve with curvature bounded in absolute value by~1. Then the region surrounded by $\gamma$ contains a unit disc. \end{theorem} We present the proof from our textbook \cite{petrunin-zamora} which is a slight improvement of the original proof. Both proofs work under the weaker assumption that the signed curvature is at most one, assuming that the sign is chosen suitably. A more general statement for a barrier-type bound on the curvature was given by Anders Aamand, Mikkel Abrahamsen, and Mikkel Thorup~\cite{aamand-abrahamsen-thoru}. There are other proofs. One is based on the curve-shortening flow; it is given by Konstantin Pankrashkin \cite{pankrashkin}. Another one uses cut locus; it is sketched by Victor Toponogov \cite[Problem 1.7.19]{toponogov}; see also \cite{petrunin-2020}. Let us mention that an analogous statement for surfaces does not hold --- there is a solid body $V$ in the Euclidean space bounded by a smooth surface whose principal curvatures are bounded in absolute value by 1 such that $V$ does not contain a unit ball; moreover, one can assume that $V$ is homeomorphic to the 3-ball. Such an example can be obtained by inflating a nontrivial contractible 2-complex in $\mathbb{R}^3$ (Bing's house constructed in \cite{bing} would do the job). This problem is discussed by Abram Fet and Vladimir Lagunov \cite{lagunov-2,lagunov-fet}; see also \cite{petrunin-zamora}. \medskip A path $\gamma\colon [0,1]\z\to \mathbb{R}^2$ such that $\gamma (0) = \gamma (1)$ will be called a \emph{loop}; the point $\gamma (0)$ is called the \emph{base} of the loop. A loop is \emph{smooth}, \emph{regular}, and \emph{simple} if it is smooth and regular in $[0,1]$, and injective in the open interval $(0,1)$. Let us use the term \emph{circline} as a shorthand for a \emph{circle or line}. Note that the osculating circline of a smooth regular curve is defined at each of its points --- there is no need to assume that the curvature does not vanish. Suppose that $\gamma$ is a closed simple smooth plane loop. We say that a circline $\sigma$ \emph{supports} $\gamma$ at a point $p$ if the point $p$ lies on both $\sigma$ and $\gamma$, and the circline $\sigma$ lies in one of the closed regions that $\gamma$ cuts from the plane. If furthermore this region is bounded, then we say that $\sigma$ \emph{supports} $\gamma$ \emph{from inside}. Otherwise, we say that $\sigma$ \emph{supports} $\gamma$ \emph{from outside}. \begin{keylemma}\label{thm:moon} Assume $\gamma$ is a simple smooth regular plane loop. Then at one point of $\gamma$ (distinct from its base), its osculating circle $\sigma$ supports $\gamma$ from inside. \end{keylemma} Spherical and hyperbolic versions of this lemma were given in \cite[Lemma 8.2]{panov-petrunin} and \cite[Proposition 7.1]{alexakis-mazzeo} respectively. 
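Before turning to the proofs, let us give a concrete illustration of the key lemma; it is not used below. Consider the ellipse $\gamma(t)=(a\cdot\cos t, b\cdot \sin t)$ with $a>b>0$. A direct computation gives
\[\kappa(t)=\frac{a{\cdot} b}{(a^2{\cdot}\sin^2 t+b^2{\cdot}\cos^2 t)^{3/2}},\]
so the curvature attains its maximum $a/b^2$ at the ends of the major axis and its minimum $b/a^2$ at the ends of the minor axis. The osculating circles at the ends of the major axis lie inside the ellipse and support it from inside, as the key lemma predicts; the osculating circles at the ends of the minor axis contain the ellipse and support it from outside. These four points are exactly the vertices of the ellipse, in accordance with the four-vertex theorem discussed below.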
\medskip\noindent\textit{Proof of the theorem modulo the key lemma.} Since $\gamma$ has absolute curvature at most~1, each osculating circle has radius at least 1. According to the key lemma, one of the osculating circles $\sigma$ supports $\gamma$ from inside. In this case, $\sigma$ lies inside $\gamma$, whence the result. \qed \medskip\noindent\textit{Proof of the key lemma.} Denote by $F$ the closed region surrounded by $\gamma$, and denote by $p_0$ the base of the loop $\gamma$. Arguing by contradiction, assume that the osculating circle at each point $p\ne p_0$ on $\gamma$ does not lie in~$F$. Given such a point $p$, let us consider the maximal circle $\sigma$ that lies entirely in $F$ and is tangent to $\gamma$ at $p$. The circle $\sigma$ will be called the \emph{incircle} of $F$ at $p$. \begin{figure}[!ht] \vskip-3mm \centering \includegraphics{pic-32} \vskip0mm \end{figure} Note that the curvature of the incircle $\sigma$ has to be strictly larger than the curvature of $\gamma$ at $p$, hence there is a neighborhood of $p$ in $\gamma$ that intersects $\sigma$ only at $p$. Further note that the circle $\sigma$ has to touch $\gamma$ at at least one other point; otherwise, we could increase $\sigma$ slightly while keeping it inside $F$. Choose a point $p_1\ne p_0$ on $\gamma$, and let $\sigma_1$ be the incircle at $p_1$. Choose an arc $\gamma_1$ of $\gamma$ from $p_1$ to a first point $q_1$ on $\sigma_1$. Denote by $\hat\sigma_1$ and $\check\sigma_1$ the two arcs of $\sigma_1$ from $p_1$ to $q_1$ such that the cyclic concatenation of $\hat\sigma_1$ and $\gamma_1$ surrounds~$\check\sigma_1$. Let $p_2$ be the midpoint of $\gamma_1$, and $\sigma_2$ be the incircle at $p_2$. Note that $\sigma_2$ cannot intersect $\hat\sigma_1$. Otherwise, if $s$ is a point of the intersection, then $\sigma_2$ must have two more common points with $\check\sigma_1$, say $x$ and $y$ --- one for each arc of $\sigma_2$ from $p_2$ to $s$. Therefore $\sigma_1\z=\sigma_2$ since these two circles have three common points: $s$, $x$, and $y$. On the other hand, by construction, $p_2\in \sigma_2$ and $p_2\notin \sigma_1$ --- a contradiction. \begin{wrapfigure}{r}{37 mm} \vskip-2mm \centering \includegraphics{pic-64} \caption*{Two ovals pretend to be circles.} \vskip0mm \end{wrapfigure} Recall that $\sigma_2$ has to touch $\gamma$ at another point. From the above it follows that it cannot touch $\gamma \setminus \gamma_1$, and therefore we can choose an arc $\gamma_2$ in $\gamma_1$ that runs from $p_2$ to a first point $q_2$ on $\sigma_2$. Since $p_2$ is the midpoint of $\gamma_1$, we have that \[\mathop{\rm length}\nolimits \gamma_2< \tfrac12\cdot\mathop{\rm length}\nolimits\gamma_1.\leqno({*})\] Repeating this construction recursively, we obtain an infinite sequence of arcs $\gamma_1\supset \gamma_2\supset\dots$; by $({*})$, we also get that \[\mathop{\rm length}\nolimits\gamma_n\to0\quad\text{as}\quad n\to\infty.\] Therefore the intersection $\gamma_1\cap\gamma_2\cap\dots$ contains a single point; denote it by $p_\infty$. Let $\sigma_\infty$ be the incircle at $p_\infty$; it has to touch $\gamma$ at another point, say $q_\infty$. The same argument as above shows that $q_\infty\in\gamma_n$ for any $n$. It follows that $q_\infty =p_\infty$ --- a contradiction. \qed \begin{exercise}\label{ex:moon-rad} Assume that a closed smooth regular curve (possibly with self-intersections) $\gamma$ lies in a figure $F$ bounded by a closed simple plane curve. Suppose that $R$ is the maximal radius of a disc contained in $F$. 
Show that the absolute curvature of $\gamma$ is at least $\tfrac1R$ at some parameter value. \end{exercise} \section*{FOUR-VERTEX THEOREM.} Recall that a \emph{vertex} of a smooth regular curve is defined as a critical point of its signed curvature; in particular, any local minimum (or maximum) of the signed curvature is a vertex. For example, every point of a circle is a vertex. The classical four-vertex theorem says that \emph{any closed smooth regular plane curve without self-intersections has at least four vertices}. It has many different proofs and generalizations. A very transparent proof was given by Robert Osserman \cite{osserman}; his paper contains a short account of the history of the theorem. Note that if an osculating circline $\sigma$ at a point $p$ supports $\gamma$, then $p$ is a vertex. The latter can be checked by direct computation, but it also follows from the Tait--Kneser spiral theorem \cite{ghys-tabachnikov-timorin}. It states that the \emph{osculating circlines of a curve with monotonic curvature are disjoint and nested}; in particular, none of these circlines can support the curve. Therefore the following theorem is indeed a generalization of the four-vertex theorem: { \begin{wrapfigure}{r}{33 mm} \vskip-4mm \centering \includegraphics{pic-63} \vskip0mm \end{wrapfigure} \begin{theorem}\label{thm:4-vert} Any smooth regular simple plane curve is supported by its osculating circlines at four distinct points; two from inside and two from outside. \end{theorem} \medskip\noindent\textit{Proof.} According to the key lemma, there is a point $p\in\gamma$ such that its osculating circle supports $\gamma$ from inside. The curve $\gamma$ can be considered as a loop with $p$ as its base. Therefore the key lemma implies the existence of another point $q\ne p$ with the same property. This shows the existence of two osculating circles that support $\gamma$ from inside; it remains to show the existence of two osculating circles that support $\gamma$ from outside. Let us apply to $\gamma$ an inversion with respect to a circle whose center lies inside~$\gamma$. Then the obtained curve $\gamma_1$ also has two osculating circles that support $\gamma_1$ from inside. } Note that these osculating circlines are inverses of the osculating circlines of $\gamma$. Indeed, the osculating circline at a point $x$ can be defined as the unique circline that has second order of contact with $\gamma$ at $x$. It remains to note that inversion, being a local diffeomorphism away from the center of inversion, does not change the order of contact between curves. Note that the region lying inside $\gamma$ is mapped to the region outside $\gamma_1$ and the other way around. Therefore these two new circlines correspond to the osculating circlines supporting $\gamma$ from outside. \qed \begin{advancedexercise}\label{ex:curve-crosses-circle} Suppose $\gamma$ is a closed simple smooth regular plane curve and $\sigma$ is a circle. Assume $\gamma$ crosses $\sigma$ at the points $p_1,\dots,p_{2{\cdot}n}$ and these points appear in the same cyclic order on $\gamma$ and on $\sigma$. Show that $\gamma$ has at least $2{\cdot}n$ vertices. \end{advancedexercise} \begin{figure}[!ht] \begin{minipage}{.48\textwidth} \centering \includegraphics{pic-65} \end{minipage}\hfill \begin{minipage}{.48\textwidth} \centering \includegraphics{pic-305} \end{minipage} \end{figure} The order of the intersection points is important. 
An example with only four vertices and arbitrarily many intersection points can be guessed from the diagram on the right. \begin{acknowledgment}{Acknowledgments.} We wish to thank anonymous referees for thoughtful reading and insightful suggestions. This work was supported by the National Science Foundation under Grant DMS-2005279; Simons Foundation under Grant \#584781. \end{acknowledgment}
\section{Introduction}\label{sec:intro} In various real-life modeling problems, we have limited prior information regarding which model family is more suitable for the problem. In such cases, a method that would allow one to choose between different model families on the fly would be useful, eliminating the need for modeling with each candidate model class separately and comparing. This provides computational gains, especially when the number of parameters and candidate model classes is high. An example is the choice between different \textit{probability density function} (pdf) models for noise or signals. The pdf estimation problem is frequently encountered in signal processing and statistics, and in their application fields such as image processing and telecommunications. In communication systems, channel modelling has been an important issue for characterizing the whole system. However, in most cases, deterministic channel modelling is impossible, and statistical channel models are very important for representing real-life systems. In addition, in noise reduction applications in image processing, power-line communication systems, etc., having a suitable statistical model beforehand is also important for the methods to be developed. Despite this importance, estimating the correct (or suitable) probability distribution along with its parameters among a number of generic distribution models may necessitate testing each candidate in order to choose the best possible model for the observed data/noise. The general tendency is to model noise/data with a Gaussian process, especially in communications, network modeling and digital images, due to its analytical ease. In the case of non-Gaussian impulsive noise/data, various model families exist, for example, Middleton Class A, Bernoulli-Gaussian, $\alpha$-Stable, Generalized Gaussian (GG), Student's t, etc. It has been reported in the literature that noise exhibits non-Gaussian and impulsive characteristics in application areas such as wireless communications \cite{bhatti2009impulsive,blackard1993measurements}, \textit{power line communications} (PLC) \cite{lin2013impulsive,alsusa2013dynamic}, \textit{digital subscriber lines} (xDSL) \cite{al2011impulsive,fantacci2010impulse}, image processing \cite{simoncelli1997statistical, achim2003sar} and seismology \cite{yue2015validation}. \textit{Reversible jump Markov chain Monte Carlo} (RJMCMC) is a Bayesian model determination method which has had success in a vast range of applications since its introduction by Peter Green \cite{green1995}. Unlike the widespread MCMC algorithm, \textit{Metropolis-Hastings} (MH), RJMCMC allows one to search in solution spaces of different dimensions, which has been the main motivation for its use to date. Classical applications of RJMCMC are model selection in regression and mixture processes \cite{troughton1997reversible, ehlers2004bayesian, eugri2010bayesian, richardson1997bayesian, viallefont2002bayesian, salas2009finite}. Unlike the classical applications in the literature, the original formulation of RJMCMC in \cite{green1995} permits a wider interpretation than just exploring models with different dimensions. 
As an example of the applicability of RJMCMC beyond model dimension selection, it was utilized to learn polynomial autoregressive (PAR) \cite{karakus2015PAR}, polynomial moving average (PMA) \cite{karakus2016PMA} and polynomial autoregressive moving average (PARMA) \cite{karakus2017PARMA} processes and for the identification of Volterra system models \cite{karakucs2017bayesian} by exploring linear and nonlinear model spaces in preliminary work by the authors. This paper contributes to the literature with a new interpretation of RJMCMC beyond trans-dimensional sampling, which we call \textit{trans-space RJMCMC}. The proposed method uses RJMCMC in an unorthodox way and reveals its potential to be a general estimation method by performing the reversible jump mechanism between spaces of different model classes. To demonstrate this potential, we focus our attention on a more special but generic problem of choosing between different probability distribution families. In this paper, we propose a Bayesian statistical modeling study of impulsive noise/data by estimating the probability distribution among three conventional impulsive distribution families: symmetric $\alpha$-Stable (S$\alpha$S), GG and Student's $t$. Other than identifying the distribution family, the proposed method estimates the shape and scale parameters of the distribution. These distributions are the most popular statistical models in applications covering diverse areas such as wireless channel modeling, financial time series analysis, seismology and radar imaging. We study the algorithm extensively on synthetic data, providing statistical significance tests. In addition, as case studies, we look into two statistical modeling problems of actual interest: impulsive noise on PLC channels and \textit{2-D discrete wavelet transform} (2-D DWT) coefficients. Particularly, the PLC impulsive noise measurements in \cite{cortes2010analysis,lopes2013dealing} have been utilized in the simulations. Apart from this, statistical modeling of 2-D DWT coefficients has been performed on different kinds of images such as Lena, \textit{synthetic aperture radar} (SAR) \cite{SAR3}, \textit{magnetic resonance imaging} (MRI) \cite{MRI} and mammogram \cite{mammogram1} images. The rest of the paper is organized as follows: general definitions for trans-dimensional RJMCMC and the proposed method are discussed in Section \ref{sec:RJMCMC}. Section \ref{sec:RJMCMCinImp} reviews three distribution families and describes the impulsive data modeling scheme of the proposed method. Experimental studies for synthetically generated noise processes and for real applications are explained in Section \ref{sec:sim}. Section \ref{sec:conclusion} draws conclusions from the results. \section{Reversible jump MCMC}\label{sec:RJMCMC} RJMCMC was first introduced by Peter Green in \cite{green1995} as an extension of MCMC to a model selection method. Green first derives the condition for the satisfaction of detailed balance requirements in terms of the Borel sets to which the candidate models belong. In the continuation of the derivation, he specializes his discussion to moves between spaces which differ only in dimension, and the general discussion is abandoned. In the follow-up literature, to the best of our knowledge, almost all publications have utilized RJMCMC for model dimension selection. 
Popular uses of RJMCMC are in linear parametric models such as \textit{autoregressive} (AR) \cite{troughton1997reversible}, \textit{autoregressive integrated moving average} (ARIMA) \cite{ehlers2004bayesian} and \textit{fractional} ARIMA (ARFIMA) \cite{eugri2010bayesian} models, and in mixture models such as Gaussian mixtures \cite{richardson1997bayesian}, Poisson mixtures \cite{viallefont2002bayesian} and $\alpha$-stable mixtures \cite{salas2009finite}. Apart from the popular applications above, RJMCMC has been used in various other applications such as detection of clusters in disease maps \cite{knorr2000bayesian}, graphical-model-based variable selection and automatic curve fitting \cite{lunn2009generic}, log-linear model selection \cite{dellaportas1999markov}, non-parametric drift estimation \cite{van2014reversible}, delimiting species using multilocus sequence data \cite{rannala2013improved}, random effect models \cite{oedekoven2016using}, and generation of lane-accurate road network maps from vehicle trajectory data \cite{roeth2017extracting}. In this study, our motivation is to propose a new interpretation of the classical RJMCMC beyond trans-dimensionality. The classical trans-dimensional RJMCMC of \cite{green1995} and the proposed method, \textit{trans-space} RJMCMC, are discussed in the sequel. \subsection{Trans-dimensional RJMCMC} The standard MH algorithm \cite{Hastings70} accepts a transition from Markov chain state $x\in\mathcal{X}$ to $y\in\mathcal{X}$ with a probability of: \begin{align}\label{equ:mh} A(x\rightarrow y) = \min \left\{ 1, \dfrac{\pi(y)q(x,y)}{\pi(x)q(y,x)} \right\} \end{align} where $\pi(\cdot)$ represents the target distribution and $q(y,x)$ refers to the proposal distribution from state $x$ to $y$. RJMCMC, in the sense of trans-dimensional MCMC, generalizes the MH algorithm by defining multiple parameter subspaces $\zeta_k$ of different dimensionality \cite{green1995}. This can only be achieved by defining different types of moves between subspaces, provided that detailed balance is attained. For this condition to hold, a reverse move from state $y$ to $x$ should be defined, and dimension matching should be satisfied between parameter subspaces. Assume that we propose a move $m$ with probability $p_m$ from a Markov chain state $\kappa$ to $\kappa'$, which have parameter vectors $\theta\in\zeta_1$ and $\theta'\in\zeta_2$ of different dimensions, respectively. The move $m$ is reversible and its reverse move $m^{\text{R}}$ is proposed with probability $p_{m^{\text{R}}}$. The general detailed balance condition can be stated as: \begin{align}\label{equ:detbal} \pi(\kappa) q(\kappa', \kappa) A(\kappa \rightarrow \kappa') = \pi(\kappa') q(\kappa, \kappa') A(\kappa' \rightarrow \kappa), \end{align} where the proposal distribution $q(\cdot)$ is directional and includes the probabilities of both the move itself and the proposed parameters. Then, the general expression for the acceptance ratio in (\ref{equ:mh}) turns into \cite{green1995}: \begin{dmath}\label{equ:rjmcmc_alpha} A(\kappa \rightarrow \kappa') = \min \left\{ 1, \dfrac{\pi(\kappa') p_{m^{\text{R}}} \chi_2(\mathbf{u'})}{\pi(\kappa) p_m \chi_1(\mathbf{u})} \left| \dfrac{\partial (\theta', \mathbf{u'})}{\partial (\theta, \mathbf{u})} \right| \right\}, \end{dmath} where $\chi_1(\cdot)$ and $\chi_2(\cdot)$ are the distributions of the auxiliary variable vectors $\mathbf{u}$ and $\mathbf{u'}$, respectively, which are required to provide dimension matching for the moves $m$ and $m^{\text{R}}$. 
The term $\left| \frac{\partial (\theta', \mathbf{u'})}{\partial (\theta, \mathbf{u})} \right|$ is the magnitude of the Jacobian. In each RJMCMC run, the standard Metropolis-Hastings algorithm is applied for moves within models of the same dimension, which is called the \textit{life} move. Sampling is performed in a single parameter space and there is no dimension change in the life move. For trans-dimensional transitions between models, moves such as \textit{birth}, \textit{death}, \textit{split} and \textit{merge} are performed, which require the creation or deletion of variables corresponding to the increased or decreased dimension. Green handles the dimension-changing moves as variable transformations and defines a dummy variable to match dimensions, which provides a square Jacobian matrix that can be used to update the acceptance ratio easily. \subsection{Trans-space RJMCMC} In spite of RJMCMC's use in trans-dimensional cases, the original formulation in \cite{green1995} admits a wider interpretation than just sampling between spaces of different dimensions. In this beyond-trans-dimensional point of view, the main requirements of RJMCMC stated by Green are still valid, with one exception, namely a change in the parameter space definition. As noted above, Green derives the detailed balance condition in terms of the Borel sets to which the candidate models belong, before specializing to moves between spaces that differ only in dimension. However, the parameter vectors in (\ref{equ:detbal}) may belong to Borel sets which differ not only in their dimensions but also in the generic models they belong to. Thus, the algorithm can be used for much more generic implementations. Notwithstanding, this general interpretation should be taken with caution in order to obtain a useful method. Particularly, the Borel sets should be \emph{related} somehow, which can be conveniently arranged by \emph{matching a common property (e.g. a norm)} in defining the spaces. Defining proposals in this way will provide more efficient candidate samples and help the algorithm converge faster. As an example, model transitions can be designed to preserve fixed first-order moments between spaces. Thus, this moment-based approach provides a more efficient way to explore all the candidate models within the combined space. Carrying the trained information to a new generic model space is crucial in this framework. Otherwise, the algorithm would start to train from scratch repeatedly each time it changes states, and sampling across unrelated spaces would not give us a computational advantage. In that case, one could solve for different spaces separately and compare the final results to choose the best model. Two immediate examples are: \begin{enumerate} \item $\kappa$ might correspond to a linear parametric model such as AR while $\kappa'$ might correspond to a nonlinear model such as Volterra AR. \item $\kappa$ might correspond to a pdf $p_A$ with certain distribution parameters while $\kappa'$ might correspond to another pdf $p_B$ with some other distribution parameters. \end{enumerate} To this end, we define a combined parameter space $\varphi=\bigcup_k \varphi_k$ for $k > 1$. 
Assume that a move $M$ from Markov chain state $x\in\varphi_1$ to $x'\in\varphi_2$ is defined, and that Borel sets $A\subset\varphi_1$ and $B\subset\varphi_2$ are related by a set of functions, each of which is invertible. Particularly, for any Borel sets in both of the spaces $\varphi_1$ and $\varphi_2$, functions $h_{12}:A\to B$ and $h_{21}:B\to A$ can be defined by matching a common property of the spaces. For generality, if the proposed move requires matching the dimensions, auxiliary variables $\mathbf{u}_1$ and/or $\mathbf{u}_2$ can be drawn from proper densities $Q_1(\cdot)$ and $Q_2(\cdot)$, respectively. Otherwise, one can set $\mathbf{u}_1$ and $\mathbf{u}_2$ to $\emptyset$. Please note that the dimensions of the parameter spaces on both sides of the transition can be different or the same, and the reversible jump mechanism is still applicable. Consequently, although the candidate spaces are of different classes, since the Borel sets are defined so as to be related, Green's assumption still holds for a symmetric measure $\xi_m$, and densities for the joint proposal distributions, $\pi(\cdot)q(\cdot, \cdot)$, can be defined with respect to this symmetric measure by satisfying the equilibrium in (\ref{equ:detbal}). Thus, the acceptance ratio can be written as: \begin{dmath}\label{equ:rjmcmc_alpha2} A(x \rightarrow x') = \min \left\{ 1, \dfrac{\pi(x') p_{M^{\text{R}}} Q_2(\mathbf{u}_2)}{\pi(x) p_M Q_1(\mathbf{u}_1)} \left| \dfrac{\partial h_{12}(\theta_1, \mathbf{u}_1)}{\partial (\theta_1, \mathbf{u}_1)} \right| \right\}, \end{dmath} where $M^{\text{R}}$ is the reverse move of $M$, and $p_M$ and $p_{M^{\text{R}}}$ represent the probabilities of the moves. The Jacobian term appears in the equation as a result of the change of variables between spaces. Here we recall that in our previous works \cite{karakus2015PAR,karakus2016PMA, karakus2017PARMA, karakucs2017bayesian}, we performed model estimation studies with RJMCMC for the Volterra-based nonlinear models PAR, PMA and PARMA, as well as an identification study of Volterra system models. In these studies, RJMCMC was utilized to explore the model spaces of linear and nonlinear models in the polynomial sense instead of performing a model order selection study in a single linear model space. We conclude this section with a few remarks. \begin{remark}[Remark 1.] We name this new utilization of RJMCMC \emph{trans-space} rather than \textit{trans-dimensional}. Trans-space RJMCMC reveals a general framework for exploring the spaces of different generic models whether or not their parameter spaces are of different dimensionality. Consequently, the trans-dimensional case is a subset of trans-space transitions. \end{remark} \begin{remark}[Remark 2.] Trans-space RJMCMC requires defining new types of moves, owing to the need for more elaborate operations than, e.g., just the birth, death, split and merge of parameters. These moves will be named \textit{between-space moves} and may include both \textit{birth} and \textit{death} of parameters at the same time, or a norm-based mapping between the parameter spaces. The \textit{switch} move (first proposed in a Volterra system identification study \cite{karakucs2017bayesian}) will be proposed as a between-space move, which performs a switching between the candidate spaces of the generic model classes. \end{remark} \begin{remark}[Remark 3.] As a special case of trans-space sampling, the proposed method can be used to explore the spaces of different distribution families. 
Therefore, this special case will be named \textit{trans-distributional}. \end{remark} \section{Trans-distributional RJMCMC for Impulsive Distributions}\label{sec:RJMCMCinImp} In this study, we apply RJMCMC to problems in which a stochastic process $\mathbf{x}$ is given whose impulsive distribution is to be found. For this purpose, we define a reversible jump mechanism which estimates the distribution family among three impulsive distribution families, namely S$\alpha$S, GG and Student's $t$. These three families cover many different noise modeling studies, as stated in the sections above. All of them include the Gaussian distribution as a special member, and many real-life noise measurements can be modelled with these distribution families. For example, the S$\alpha$S family has various demonstrated application areas such as PLC \cite{laguna2015use}, SAR imaging \cite{achim2003sar}, near-optimal receiver design \cite{kuruoglu1998near}, modelling of contourlet transform subbands \cite{sadreazami2014study}, seismic amplitude data modelling \cite{yue2015validation}, noise modelling for molecular communication \cite{farsad2015stable}, and reconstruction of non-negative signals \cite{tzagkarakis2010greedy} (please see \cite{nolan2010bibliography} and references therein for detailed applications). GG distributions have found applications in wavelet-based texture retrieval \cite{do2002wavelet}, image modelling in terms of Markov random fields \cite{bouman1993generalized}, multicomponent texture discrimination in color images \cite{verdoolaege2011geodesics}, wheezing sound detection \cite{le2009wheezing}, and modelling sea-clutter data \cite{novey2010complex}. Student's $t$ distribution is an alternative to the Gaussian distribution especially for small populations, where the validity of the central limit theorem is questionable. Student's $t$ distribution has been used in applications in finance \cite{patton2006modelling, engle1986modelling}, full-waveform inversion of seismic data \cite{aravkin2011robust}, independent vector analysis for speech separation \cite{liang2013independent}, medical image segmentation \cite{nguyen2012robust}, and growth curve modelling \cite{zhang2013bayesian}. One might argue that training separate MCMC samplers for each of the seemingly unrelated distribution families and comparing their modelling performances afterwards would be computationally more advantageous. However, in cases when the number of candidate models is not known or is dramatically large, implementing a single Markov chain via RJMCMC could be simpler. In addition, when the number of models is small, one cannot conclude that a parallel MCMC approach would be a better choice than RJMCMC; this requires analysis. By efficiently choosing the proposal distributions, the advantage of incorporating the reversible jump mechanism can be extended to searching several distribution families, as will be described in the sequel. In the literature, RJMCMC usage for this problem has been limited, and it has been used as an example of the trans-dimensional approach deciding between two specific distributions \cite{hastie2012model, barker2013bayesian}. Particularly, when modelling count data, the reversible jump mechanism has been applied to choose between Poisson and negative binomial distributions in \cite{hastie2012model}. That study deals with the question of whether the count data are over-dispersed relative to the Poisson distribution. 
In \cite{barker2013bayesian}, an approach combining a Gibbs sampler and RJMCMC has been used to decide between Poisson and geometric distributions by using a universal parameter space called a ``palette''. Both of these studies have utilized RJMCMC in distribution estimation; however, the number of candidate distributions was limited to two. Moreover, in both of the studies, the Poisson distribution is a special member of the distribution families in question (or there is a direct relation between the Poisson and the negative binomial or geometric distributions); hence, the methods in these studies can be handled with a single family search. The proposed method, \emph{trans-distributional} RJMCMC, is much more general than the examples above and aims to fit a distribution to a given process $\mathbf{x}$ among various distributions by identifying the distribution's family and estimating its shape and scale parameters. Two types of between-class moves have been defined, namely \emph{intra-class-switch} and \emph{inter-class-switch}. These moves propose model class changes \textit{within} and \textit{between} probability distribution families, respectively. \subsection{Impulsive Distribution Families} \label{sec:distfamilies} \subsubsection{Symmetric $\alpha$-Stable Distribution Family} There is no closed-form expression for the pdf of S$\alpha$S distributions except for the special cases of Cauchy and Gaussian. However, its characteristic function, $\varphi(x)$, can be expressed explicitly as: \begin{align}\label{equ:aS_CF} \varphi(x) = \exp(j\delta x - \gamma|x|^{\alpha}) \end{align} where $0<\alpha \leq 2$ is the characteristic exponent, \emph{a.k.a. the shape parameter}, which controls the impulsiveness of the distribution. The special cases of the Cauchy and Gaussian distributions occur for $\alpha=1$ and $\alpha=2$, respectively. $-\infty<\delta<\infty$ represents the \emph{location parameter}. The parameter $\gamma>0$ provides a measure of the dispersion; it is the \emph{scale parameter} expressing the spread of the distribution around $\delta$. \subsubsection{Generalized Gaussian Distribution Family} The univariate GG pdf can be defined as: \begin{align}\label{equ:GG_pdf} f(x) = \dfrac{\alpha}{2\gamma\Gamma(1/\alpha)} \exp\left(-\left(\dfrac{|x-\delta|}{\gamma}\right)^{\alpha}\right) \end{align} where $\Gamma(\cdot)$ refers to the gamma function, $\alpha>0$ is the shape parameter, $-\infty<\delta<\infty$ represents the location parameter and $\gamma>0$ is the scale parameter. The GG family has well-known members such as the Laplace, Gaussian and uniform distributions for $\alpha$ values of 1, 2 and $\infty$, respectively. \subsubsection{Student's $t$ Distribution Family} The univariate symmetric Student's $t$ distribution family is an impulsive distribution family with parameters $\alpha>0$, the number of degrees of freedom, \emph{a.k.a.\ the shape parameter}, the location parameter $-\infty<\delta<\infty$, and the scale parameter $\gamma>0$. Its pdf can be defined as: \begin{align}\label{equ:t_pdf} f(x) = \dfrac{\Gamma\left(\dfrac{\alpha+1}{2}\right)}{\Gamma(\alpha/2) \gamma \sqrt{\pi \alpha}} \left( 1+\dfrac{1}{\alpha} \left( \dfrac{x - \delta}{\gamma} \right)^2 \right)^{-((\alpha+1)/2)}. \end{align} Special members of the symmetric Student's $t$ distribution family are the Cauchy and Gaussian distributions, obtained for shape parameter values of $\alpha=1$ and $\alpha=\infty$, respectively. 
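For concreteness, the three likelihoods can be evaluated as sketched below. This is an illustrative sketch rather than the implementation used in this paper; note in particular that SciPy's \texttt{levy\_stable} parameterizes the S$\alpha$S family by a scale $c$ related to the dispersion in (\ref{equ:aS_CF}) by $\gamma = c^{\alpha}$, hence the conversion in the code.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln
from scipy.stats import levy_stable

def loglike(x, k, alpha, gam):
    """Log-likelihood of data x under family k (1: SaS, 2: GG,
    3: Student's t), with location delta = 0."""
    n = x.size
    if k == 1:   # SaS: numerical pdf; dispersion gam = scale**alpha
        return levy_stable.logpdf(x, alpha, 0.0,
                                  scale=gam ** (1.0 / alpha)).sum()
    if k == 2:   # generalized Gaussian pdf, Eq. (GG_pdf)
        return (n * (np.log(alpha) - np.log(2 * gam)
                     - gammaln(1.0 / alpha))
                - np.sum((np.abs(x) / gam) ** alpha))
    if k == 3:   # Student's t pdf with scale gam, Eq. (t_pdf)
        return (n * (gammaln((alpha + 1) / 2) - gammaln(alpha / 2)
                     - np.log(gam) - 0.5 * np.log(np.pi * alpha))
                - 0.5 * (alpha + 1)
                * np.sum(np.log1p((x / gam) ** 2 / alpha)))
    raise ValueError("k must be 1, 2 or 3")
\end{verbatim}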
\subsection{Parameter Space}\label{sec:paramspace} The RJMCMC construction for impulsive data modeling begins by defining the parameter space. The parameter space is defined over the parameters common to all three distribution families. These are the \emph{shape}, \emph{scale} and \emph{location} parameters ($\alpha, \gamma$ and $\delta$, respectively). In addition to them, the \emph{family identifier}, $k$, which defines the estimated distribution family, is added to the parameter space. The $k$ values for the distributions S$\alpha$S, $\text{GG}$ and Student's $t$ are 1, 2 and 3, respectively. Therefore, the parameter vector $\theta$ can be formed as: $\theta = \{ k, \alpha, \delta, \gamma \}$. In this study, the observed data from all three families are assumed to be symmetric around the origin for simplicity. Therefore, $\delta$ is set to 0 and its effect will be invisible in the simulations. Consequently, the parameter vector $\theta$ reduces to: $\theta = \{ k, \alpha, \gamma \}$. \subsection{Hierarchical Bayesian Model}\label{sec:bayeshier} The target distribution, $f(\theta|\mathbf{x})$, can be decomposed into likelihood times priors via Bayes' theorem as: \begin{align}\label{postlhdpri} f(\theta|\mathbf{x}) \propto f(\mathbf{x}|k, \alpha, \gamma) f(\alpha|k) f(k) f(\gamma), \end{align} where $f(\mathbf{x}|k, \alpha, \gamma)$ represents the likelihood and $f(\alpha|k), f(k)$, and $f(\gamma)$ are the priors. \subsection{Likelihood}\label{sec:lhd} We assume that the stochastic process $\mathbf{x}$ of length $n$ comes from one of the distributions in the candidate families (S$\alpha$S, GG and Student's $t$). Then, the likelihood corresponds to a pdf from one of these distributions: \begin{align}\label{equ:likelihood} f(\mathbf{x}|k, \alpha, \gamma) &= \left\{ \begin{array}{ll} \prod_{i=1}^{n} \text{S$\alpha$S}(\gamma), &k=1 \\ \prod_{i=1}^{n} \text{GG}_{\alpha}(\gamma), &k=2 \\ \prod_{i=1}^{n} t_{\alpha}(\gamma), &k=3 \end{array} \right. \end{align} where each factor denotes the corresponding pdf with shape parameter $\alpha$ and scale parameter $\gamma$, evaluated at the sample $x_i$. \subsection{Priors}\label{sec:priors} The priors have been selected as follows: \begin{align}\label{equ:priors} f(\gamma) &= \mathcal{IG}(a, b),\\ f(k) &= \mathbb{I}_{\{1/3, 1/3, 1/3\}} \quad \text{for } k = 1, 2, 3,\\ f(\alpha|k) &= \left\{ \begin{array}{ll} \mathcal{U}(0, 2) & k=1, \\ \mathcal{U}(0, \alpha_{\text{max,GG}}) & k=2, \\ \mathcal{U}(0, \alpha_{\text{max},t}) & k=3, \end{array} \right. \end{align} where $a$ and $b$ represent the hyperparameters of the scale parameter prior; they are generally selected to take small values such as 1 or 0.1 in the literature. The upper bounds for the shape parameters of the $\text{GG}$ and Student's $t$ distributions are denoted by $\alpha_{\text{max,GG}}$ and $\alpha_{\text{max},t}$, respectively. Choosing an inverse gamma prior for the scale parameter is common practice, especially for Gaussian problems. Due to the lack of information about conjugate priors for distributions other than the Gaussian case, and since the Gaussian distribution is common to all three families, an inverse gamma prior for the scale parameter has been chosen for simplicity. Furthermore, all families are equiprobable \emph{a priori} and the shape parameter is uniformly distributed between its lower and upper bounds. \subsection{Model Moves}\label{sec:moves} Two RJMCMC model moves have been defined in order to perform the trans-distributional transitions discussed in the previous sections. These are the \emph{life} and \emph{switch} moves. The life move performs the classical MH algorithm to update $\gamma$. 
\subsection{Model Moves}\label{sec:moves} Two RJMCMC model moves have been defined in order to perform the trans-distributional transitions discussed in the previous sections: the \emph{life} and \emph{switch} moves. The life move performs a classical MH update of $\gamma$. The switch moves explore the spaces of the other candidate distributions; for this purpose, two types of switch moves have been defined: \textit{intra-class-switch} and \textit{inter-class-switch}. The intra-class-switch explores distributions within the same family, while the inter-class-switch explores the spaces of different families. At each RJMCMC iteration, one of the moves is chosen with probability $P_{\text{life}}, P_{\text{intra-cl-sw}}$ or $P_{\text{inter-cl-sw}}$, respectively. The flow diagram of the proposed method is depicted in Figure \ref{fig:flow}, where the parameter $N$ refers to the maximum number of iterations. The details of the selected moves are discussed in the sequel. \begin{figure}[ht!] \centering \includegraphics[width=.8\linewidth]{flow15.pdf} \caption{Flow diagram of the proposed method.}\label{fig:flow} \end{figure}

\subsubsection{Life Move} The \emph{life} move defines a transition from the parameter space $(k, \alpha, \gamma)$ to $(k', \alpha', \gamma')$ which only proposes a candidate for the scale parameter, $\gamma$ ($\alpha'=\alpha$ and $k'=k$). The proposal distribution for the candidate scale parameter $\gamma'$ has been chosen as:
\begin{align}\label{equ:lifeproposal} q(\gamma'|\gamma) = \mathcal{TN}(\gamma, \xi_{scale}) \quad \text{on the interval } (0, \gamma+1] \end{align}
where $\mathcal{TN}(\gamma, \xi_{scale})$ refers to a Gaussian distribution whose mean is the most recent value of the scale parameter, $\gamma$, and whose variance is $\xi_{scale}$, truncated to the interval $(0, \gamma+1]$ by rejecting samples outside this interval. The truncation enforces the condition $\gamma>0$ and keeps candidate proposals from lying far from the most recent value of $\gamma$. The resulting acceptance ratio for the life move is:
\begin{align}\label{equ:lifeaccratio} A_{\text{life}} = \min \left\{ 1, \dfrac{f(\mathbf{x}|k', \alpha', \gamma')}{f(\mathbf{x}|k, \alpha, \gamma)} \dfrac{f(\gamma')}{f(\gamma)} \dfrac{q(\gamma|\gamma')}{q(\gamma'|\gamma)} \right\}.
\end{align}

\subsubsection{FLOM Based Proposals for $\gamma$ Transitions}\label{sec:FLOMBased} As mentioned earlier in this paper, basing a transition on a feature common to the candidate model spaces yields efficient proposals and is important for linking the subspaces of different classes. Assume we have two candidate families whose parameter vectors belong to Borel sets $\mathcal{A}$ and $\mathcal{B}$, respectively. Provided a norm of fixed order is available on both Borel sets, a transition $h:\mathcal{A}\mapsto\mathcal{B}$ from one set to the other carries along the information that has already been learned in the most recent set. Such an approach is very important for the convergence and mixing of the algorithm when designing transitions between generic distribution models, whether within a family or between families. In distribution estimation problems, moments of various orders $p$ are available for all the distribution families considered here. The moments of the GG family are defined for any order $p>0$; the moments of the Student's $t$ family exist for orders $p<\alpha$, and the moments of the S$\alpha$S family are likewise defined subject to the constraint $p < \alpha$. This constraint makes it natural to use absolute \textit{fractional lower order moments (FLOMs)}, which have also been used in parameter estimation methods for the S$\alpha$S family.
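As a quick illustration before the formal development below, the absolute FLOM of order $p$ can be estimated empirically from the data by a sample mean:
\begin{verbatim}
import numpy as np

def empirical_flom(x, p):
    """Empirical absolute FLOM: E(|x|^p) estimated by mean(|x_i|^p)."""
    return np.mean(np.abs(x) ** p)
\end{verbatim}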
Given that absolute FLOM expressions are available for all of the impulsive families, and given their success in parameter estimation studies for S$\alpha$S distributions, an absolute FLOM-based approach helps to construct a reversible jump sampler between different impulsive families by linking the candidate distributions through their absolute FLOMs. In the impulsive data modelling problem at hand, the absolute FLOM-based approach is used for the proposals of the $\gamma$ parameter. In particular, to sample between related subspaces and generate efficient proposals for the scale parameter $\gamma$, the newly proposed scale parameter, $\gamma'$, is calculated via a reversible function, $g(\cdot)$ (or $w(\cdot)$), which equates the absolute FLOMs of order $p$ of the most recent and the candidate distribution spaces. Thus, proposals on $\gamma$ carry the learned information over to the candidate space via absolute FLOMs. For S$\alpha$S distributions, absolute FLOMs are defined only for $p$ values lower than $\alpha$. Moreover, several studies suggest near-optimum values of the FLOM order $p$ for estimating the scale parameter of S$\alpha$S distributions: \cite{tsihrintzis1996fast} suggests $p=\alpha/4$ and \cite{ma1995parameter} suggests $p=0.2$. However, \cite{kuruoglu2001density} states that decreasing $p$ for a fixed value of $\alpha$ (i.e. increasing $\alpha/p$) increases the estimation performance for $\gamma$, and suggests the choice $p=\alpha/10$. We use the value $p=\alpha/10$ in our simulations for all the distribution families. For given data $\mathbf{x}$, in order to perform a transition from the parameter space $\{ k, \alpha, \gamma\}$ to $\{ k', \alpha', \gamma'\}$, we require the absolute FLOMs of the most recent and candidate distribution spaces to be equal. In particular,
\begin{align}\label{equ:FLOM1} E_k(|\mathbf{x}|^p) = E_{k'}(|\mathbf{x}|^p) \end{align}
where the absolute FLOMs for the three candidate families are defined as:
\begin{align}\label{equ:FLOM2} E_k(|\mathbf{x}|^p) &= \left\{ \begin{array}{ll} C_{\alpha}(p, \alpha) \gamma^{p/\alpha} & k=1, \\ C_{\text{GG}}(p, \alpha) \gamma^{p} & k=2, \\ C_{t}(p, \alpha) \gamma^{p} & k=3, \end{array} \right. \end{align}
where
\begin{align} C_{\alpha}(p, \alpha) &= \dfrac{\Gamma\left(\dfrac{p+1}{2}\right) \Gamma\left(\dfrac{-p}{\alpha}\right)}{\alpha \sqrt{\pi} \Gamma\left(\dfrac{-p}{2}\right)}2^{p+1}, \\ C_{\text{GG}}(p, \alpha) &= \dfrac{\Gamma \left( \dfrac{p+1}{\alpha} \right)}{\Gamma(1/\alpha)}, \\ \label{equ:FunctionCs}C_{t}(p, \alpha) &= \dfrac{\Gamma\left(\dfrac{p+1}{2}\right) \Gamma\left(\dfrac{\alpha-p}{2}\right)}{\sqrt{\pi} \Gamma\left(\dfrac{\alpha}{2}\right)}\alpha^{p/2}. \end{align}
The candidate proposal, $\gamma'$, is calculated via reversible functions derived from the relations in (\ref{equ:FLOM1})-(\ref{equ:FunctionCs}) for each transition. These functions have been derived for both of the switch moves and are shown in Tables \ref{tab:LpIntra} and \ref{tab:LpInter}.
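The constants in (\ref{equ:FLOM2})-(\ref{equ:FunctionCs}) translate directly into code; a sketch (the function names are ours):
\begin{verbatim}
import numpy as np
from scipy.special import gamma as G   # Euler gamma function

def C(k, p, alpha):
    """FLOM constants C_k(p, alpha) of Eq. (FLOM2)."""
    if k == 1:   # SaS; requires p < alpha
        return (G((p + 1) / 2) * G(-p / alpha)
                / (alpha * np.sqrt(np.pi) * G(-p / 2))) * 2 ** (p + 1)
    if k == 2:   # GG
        return G((p + 1) / alpha) / G(1 / alpha)
    if k == 3:   # Student's t; requires p < alpha
        return (G((p + 1) / 2) * G((alpha - p) / 2)
                / (np.sqrt(np.pi) * G(alpha / 2))) * alpha ** (p / 2)

def flom(k, p, alpha, gamma):
    """Absolute FLOM E_k(|x|^p); gamma enters as gamma^(p/alpha) for SaS."""
    return C(k, p, alpha) * gamma ** (p / alpha if k == 1 else p)

# Example (Table LpIntra, GG row): the FLOM-matching proposal is
#   gamma_new = (C(2, p, alpha) / C(2, p, alpha_new)) ** (1 / p) * gamma
\end{verbatim}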
\subsubsection{Intra-Class-Switch Move} When an intra-class-switch move is proposed, RJMCMC performs a transition on the shape and scale parameters within the same distribution family ($k'=k$). The proposed shape parameter $\alpha'$ is sampled from a proposal distribution $q(\alpha'|\alpha)$, and the candidate scale parameter $\gamma'$ is then defined as a function $g(\alpha, \alpha', p, \gamma)$. Since the $\gamma$ transition in this move depends on the newly proposed $\alpha'$, a step is first performed on the shape parameter $\alpha$ to propose $\alpha'$, which is then used to calculate the candidate scale parameter $\gamma'$. For the proposal distribution $q(\alpha'|\alpha)$ we require, first, a distribution symmetric around the most recent $\alpha$ value and, second, heavier tails than the Gaussian, so that candidates can be sampled farther from the most recent $\alpha$ than Gaussian samples would allow. Since the Laplace distribution satisfies both conditions, it has been chosen as the proposal distribution. Due to numerical problems arising when $\alpha$ and $\alpha'$ are close to each other (i.e. $|\alpha - \alpha'| \leq 0.03$), we use a finite number of candidate distributions (i.e. a finite number of $\alpha$ values) and discretize the space of $\alpha$ with increments of 0.05. Therefore, a discretized Laplace distribution, $\mathcal{D}\mathcal{L}(\alpha, \Gamma)$, whose location parameter equals the most recent shape parameter $\alpha$ and whose scale parameter is $\Gamma$, has been utilized (a numerical sketch of this proposal is given after Table \ref{tab:LpInter}). An example of the proposal distribution $q(\alpha'|\alpha)$ is shown in Figure \ref{fig:proposaldist}. Importantly, our choice of the proposal distribution $q(\alpha'|\alpha)$ is not restrictive; any distribution other than the Laplace (e.g. a Gaussian-like one) can be selected, although different selections will make the algorithm perform differently. The candidate scale parameter $\gamma'$ is calculated via the reversible functions, $g(\cdot)$, derived for the intra-class-switch move using the method in Section \ref{sec:FLOMBased}. The functions for each family are shown in Table \ref{tab:LpIntra}. \begin{figure}[ht!] \centering \subfigure[]{\label{fig:proposaldist} \includegraphics[width=.4\linewidth]{discLap-eps-converted-to}} \centering \subfigure[]{\label{fig:mappingfunc} \includegraphics[width=.4\linewidth]{mapping-eps-converted-to}} \caption{(a) - Proposal distribution, $q(\alpha'|\alpha)$ for the intra-class-switch move $(\gamma=1, \Gamma=0.4)$. (b) - Mapping functions on the shape parameter for the inter-class-switch move.} \end{figure} \begin{table}[ht!] 
\centering \caption{Intra-Class-Switch Details [$(k, \alpha, \gamma)\rightarrow(k', \alpha', \gamma')$]}\label{tab:LpIntra} \scriptsize \resizebox{.7\linewidth}{!}{\begin{tabular}{ccll} \hline Family & Degree, $p$ & $\gamma'=g(\alpha, \alpha', p, \gamma)$ & Jacobian, $|J|$\\ \hline S$\alpha$S & $\alpha'/10$ & $\left(\dfrac{C_{\alpha}(p, \alpha)}{C_{\alpha}(p, \alpha')}\right)^{\alpha'/p} \gamma^{\alpha'/\alpha}$ & $\left(\dfrac{C_{\alpha}(p, \alpha)}{C_{\alpha}(p, \alpha')}\right)^{\alpha'/p} \dfrac{\alpha'}{\alpha} \gamma^{(\alpha' - \alpha)/\alpha}$ \\ \hline $\text{GG}$ & $\alpha'/10$ & $\left(\dfrac{C_{\text{GG}}(p, \alpha)}{C_{\text{GG}}(p, \alpha')}\right)^{1/p} \gamma $ & $\left(\dfrac{C_{\text{GG}}(p, \alpha)}{C_{\text{GG}}(p, \alpha')}\right)^{1/p}$\\ \hline $t$ & $\alpha'/10$ & $\left(\dfrac{C_{t}(p, \alpha)}{C_{t}(p, \alpha')}\right)^{1/p} \gamma$ & $\left(\dfrac{C_{t}(p, \alpha)}{C_{t}(p, \alpha')}\right)^{1/p}$\\ \hline \end{tabular}} \centering \vspace{0.3cm} \caption{Inter-Class-Switch Details [$(k, \alpha, \gamma)\rightarrow(k', \alpha', \gamma')$]}\label{tab:LpInter} \scriptsize \resizebox{.7\linewidth}{!}{\begin{tabular}{ccll} \hline ($k \rightarrow k'$) & Degree, $p$ & $\alpha'=\psi(\alpha, k, k')$ & $\gamma'=w(\alpha, \alpha', p, \gamma)$\\ \hline $1\rightarrow2$ & $\alpha'/10$ & $f_1(\alpha)=\dfrac{\alpha^2}{2}$ & $\left(\dfrac{C_{\alpha}(p, \alpha)}{C_{\text{GG}}(p, \alpha')}\right)^{1/p} \gamma^{1/\alpha}$\\ \hline $1\rightarrow3$ & $\alpha'/10$ & $f_2(\alpha)=\operatorname{logit}\left(\dfrac{\alpha+2}{4}\right)$ & $\left(\dfrac{C_{\alpha}(p, \alpha)}{C_{t}(p, \alpha')}\right)^{1/p} \gamma^{1/\alpha}$\\ \hline $2\rightarrow1$ & $\alpha'/10$ & $f_1^{-1}(\alpha)$ & $\left(\dfrac{C_{\text{GG}}(p, \alpha)}{C_{\alpha}(p, \alpha')}\right)^{\alpha'/p} \gamma^{\alpha'}$\\ \hline $2\rightarrow3$ & $\alpha'/10$ & $f_2(f_1^{-1}(\alpha))$ & $\left(\dfrac{C_{\text{GG}}(p, \alpha)}{C_{t}(p, \alpha')}\right)^{1/p} \gamma$\\ \hline $3\rightarrow1$ & $\alpha'/10$ & $f_2^{-1}(\alpha)$ & $\left(\dfrac{C_{t}(p, \alpha)}{C_{\alpha}(p, \alpha')}\right)^{\alpha'/p} \gamma^{\alpha'}$\\ \hline $3\rightarrow2$ & $\alpha'/10$ & $f_1(f_2^{-1}(\alpha))$ & $\left(\dfrac{C_{t}(p, \alpha)}{C_{\text{GG}}(p, \alpha')}\right)^{1/p} \gamma$\\ \hline \end{tabular}} \end{table}

Consequently, the proposals for the intra-class-switch move are:
\begin{align}\label{equ:inswtransitions} q(\alpha'|\alpha) &= \mathcal{D}\mathcal{L}(\alpha, \Gamma), \\ \gamma' &= g(\alpha, \alpha', p, \gamma). \end{align}
As a result of the details explained above, the acceptance ratio for the RJMCMC intra-class-switch move can be expressed as:
\begin{align}\label{equ:inswaccratio} A_{\text{intra-cl-sw}} = \min \left\{ 1, \dfrac{f(\mathbf{x}|k', \alpha', \gamma')}{f(\mathbf{x}|k, \alpha, \gamma)} \dfrac{f(\gamma')}{f(\gamma)} |J| \right\}, \end{align}
where $|J|$ is the magnitude of the Jacobian (see Table \ref{tab:LpIntra}).
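For illustration, the discretized Laplace proposal $q(\alpha'|\alpha)$ can be sampled by evaluating the Laplace kernel on the $0.05$-spaced grid and normalizing. The sketch below is our own reading of the construction; in particular, excluding the current grid point is an assumption (proposing $\alpha'=\alpha$ would be a null move):
\begin{verbatim}
import numpy as np

def sample_alpha_discrete_laplace(alpha, Gamma, alpha_max,
                                  step=0.05, rng=None):
    """Draw alpha' from DL(alpha, Gamma) on the grid {step, ..., alpha_max}."""
    rng = np.random.default_rng() if rng is None else rng
    grid = np.arange(step, alpha_max + step / 2, step)
    grid = grid[np.abs(grid - alpha) > 1e-9]   # exclude the current value
    w = np.exp(-np.abs(grid - alpha) / Gamma)  # Laplace kernel
    return rng.choice(grid, p=w / w.sum())

# e.g. a GG-family proposal around alpha = 1.3:
# alpha_new = sample_alpha_discrete_laplace(1.3, 0.4, alpha_max=2.0)
\end{verbatim}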
\subsubsection{Inter-Class-Switch Move} Unlike the intra-class-switch move, the inter-class-switch move changes the distribution family ($k'\neq k$) as well as the scale and shape parameters. The candidate distribution families are equiprobable over the candidate set $\{ 1, 2, 3\}\backslash\{k\}$, and we use the functions below to propose the candidate parameters $\alpha'$ and $\gamma'$:
\begin{align}\label{equ:outswtransitions} \alpha' &= \psi(\alpha, k, k') \\ \gamma' &= w(\alpha, \alpha', p, \gamma) \end{align}
For the intra-class transitions discussed in the section above, the knowledge about the scale $\gamma$ learned in the previous algorithm steps was carried to the next step via FLOM-based functions. The same approach is utilized for the $\gamma$ transitions in the inter-class-switch move, and the functions $w(\cdot)$ are derived accordingly; this time, however, the two sides of the transition belong to different families. Details are shown in Table \ref{tab:LpInter}. In order to make efficient proposals for $\alpha$ in the inter-class-switch move, instead of using a random move we perform a mapping, $\psi(\cdot)$, from one family to another which takes into consideration the special members common to both families. For example, to derive an invertible mapping function on $\alpha$ for a transition from S$\alpha$S to Student's $t$, we use the fact that the Cauchy and Gaussian distributions are common to both families: Cauchy corresponds to $\alpha=1$ in both families, whereas Gaussian corresponds to $\alpha=2$ for S$\alpha$S and $\alpha=\infty$ for Student's $t$. The invertible function $f_2(\alpha)$ performs the mapping for a transition from S$\alpha$S to Student's $t$. Similarly, the Gaussian distribution is common to S$\alpha$S and $\text{GG}$ at the $\alpha$ value of 2; thus, we derive another invertible function, $f_1(\alpha)$, to move from S$\alpha$S to $\text{GG}$. Both mapping functions are depicted in Figure \ref{fig:mappingfunc}. The $\text{GG}$ and Student's $t$ families share only the Gaussian distribution, at $\alpha$ values of 2 and $\infty$, respectively. Because there is only one common distribution and the range of $\alpha$ is infinite, instead of deriving a direct invertible mapping for transitions between these two families we perform a two-stage mapping: $\alpha$ is first mapped from the most recent family to S$\alpha$S, and this value is then mapped to the candidate family using $f_1(\cdot)$ or $f_2(\cdot)$. The mapping from $\text{GG}$ to Student's $t$ thus becomes $\alpha' = f_2(f_1^{-1}(\alpha))$, and it is straightforward to show that the reverse transition from Student's $t$ to GG is $\alpha' = f_1(f_2^{-1}(\alpha))$. The mapping functions for all transitions are shown in Table \ref{tab:LpInter}. The acceptance ratio for the inter-class-switch move can then be expressed as:
\begin{align}\label{equ:outswaccratio} A_{\text{inter-cl-sw}} = \min \left\{ 1, \dfrac{f(\mathbf{x}|k', \alpha', \gamma')}{f(\mathbf{x}|k, \alpha, \gamma)} \dfrac{f(\gamma')}{f(\gamma)} \dfrac{f(\alpha|k)}{f(\alpha'|k')} |J| \right\} \end{align}
where $|J| = \dfrac{\partial \gamma'}{\partial \gamma} \dfrac{\partial \alpha'}{\partial \alpha}$.

\section{Experimental Study}\label{sec:sim} We study three cases experimentally: synthetically generated noise, impulsive noise on PLC channels and 2-D DWT coefficients. Without loss of generality, the distribution of the data $\mathbf{x}$ is assumed to be symmetric around zero ($\delta=0$). The algorithm starts from a Gaussian distribution model with initial values $k^{(0)}=2$ and $\alpha^{(0)}=2$. The initial value of the scale parameter $\gamma$ is selected as half of the interquartile range of the given data $\mathbf{x}$, and the upper bounds $\alpha_{\text{max,S}\alpha\text{S}}, \alpha_{\text{max,GG}}$ and $\alpha_{\text{max},t}$ are selected as 2, 2 and 5, respectively.
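In code, this initialization might read as follows (a sketch; the variable names are ours):
\begin{verbatim}
import numpy as np

def initialize(x):
    """Initial state: Gaussian model (GG with alpha = 2), IQR-based scale."""
    k0, alpha0 = 2, 2.0
    q75, q25 = np.percentile(x, [75, 25])
    gamma0 = 0.5 * (q75 - q25)   # half of the interquartile range
    return k0, alpha0, gamma0
\end{verbatim}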
The remaining parameters have been selected intuitively. The intra-class-switch and inter-class-switch moves are assumed equally likely during the simulations, and the life move, which is the coefficient update move, is chosen slightly more likely in order to speed up the convergence of the distribution parameter estimates. Thus, the model move probabilities are selected as $P_{\text{life}}=0.4, P_{\text{intra-cl-sw}}=0.3$ and $P_{\text{inter-cl-sw}}=0.3$. The hyperparameters of the prior distribution of $\gamma$ are set to $a=b=1$, and the variance of the proposal distribution for $\gamma$ in the life move is set to $\xi_{scale} = 0.01$. The scale parameter $\Gamma$ of the discretized Laplace distribution for the intra-class-switch move is selected as 0.4. RJMCMC performs 5000 iterations in a single run, and half of the iterations are discarded as the burn-in period when estimating the distribution parameters. Random numbers from all the families have been generated using Matlab's Statistics and Machine Learning Toolbox (for details please see\footnote{\url{https://www.mathworks.com/help/stats/continuous-distributions.html}}). The performance comparison has been carried out with two statistical significance measures, namely the \textit{Kullback-Leibler} (KL) divergence and the \textit{Kolmogorov-Smirnov} (KS) statistic. The KL divergence has been utilized to measure the fit between the estimated pdf and the data histogram. The two-sample KS test compares the empirical CDF of the data with the estimated CDF; it quantifies the distance between the CDFs and performs a hypothesis test under the null hypothesis that the two samples are drawn from the same distribution. Details about the KL divergence and the KS test are discussed in Appendix \ref{appendixSSTests}.
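Both measures are readily computed with SciPy; a sketch, reusing the hypothetical \texttt{candidate\_pdf} helper of Section \ref{sec:distfamilies} (the binning and the size of the model sample are our assumptions):
\begin{verbatim}
import numpy as np
from scipy.stats import levy_stable, gennorm, t as student_t, ks_2samp

def sample_candidate(k, alpha, gamma, size):
    """Draw from family k, matching candidate_pdf's parameterization."""
    if k == 1:
        return levy_stable.rvs(alpha, 0.0,
                               scale=gamma ** (1.0 / alpha), size=size)
    if k == 2:
        return gennorm.rvs(alpha, scale=gamma, size=size)
    return student_t.rvs(alpha, scale=gamma, size=size)

def kl_and_ks(x, k, alpha, gamma, bins=100, n_model=10000):
    """KL divergence (data histogram vs fitted pdf) and two-sample KS."""
    hist, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pdf = candidate_pdf(centers, k, alpha, gamma)
    ok = (hist > 0) & (pdf > 0)
    kl = np.sum(hist[ok] * np.log(hist[ok] / pdf[ok]) * np.diff(edges)[ok])
    ks_stat, p_value = ks_2samp(x, sample_candidate(k, alpha, gamma, n_model))
    return kl, ks_stat, p_value
\end{verbatim}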
\begin{table}[ht!] \centering \caption{Modeling results for synthetically generated processes.}\label{tab:syn} \small \resizebox{.7\linewidth}{!}{\begin{tabular}{ccccccc} \hline \textbf{Distribution} & \textbf{Est.} & \textbf{Est.} & \textbf{Est.} & \textbf{KL Div.} & \multicolumn{1}{l}{\textbf{KS}} & \multicolumn{1}{l}{\textbf{KS}}\\ & \textbf{Family}& \textbf{Shape ($\hat{\alpha}$)} & \textbf{Scale ($\hat{\gamma}$)} & & \textbf{Score} & $p$\textbf{-value}\\ \hline S$1.5$S$(2)$ & S$\alpha$S &1.4769 & 1.9162 & 0.0169 & 0.0125 & 1.0000\\ S$1$S$(0.75)$ & $t$ &0.9970 &0.7300 &0.0454 &0.0489 & $>0.9999$\\ GG$_{0.5}(0.5)$ & $\text{GG}$ &0.4990 &0.5199 &0.0229 &0.0152 & 1.0000\\ GG$_{1.7}(1.4)$ & $\text{GG}$ &1.6456 &1.3374 &0.0221 &0.0202 & 1.0000\\ $t_{3}(1)$ & $t$ &2.9303 &1.0039 &0.0251 &0.0203 & 1.0000\\ $t_{0.6}(3)$ & $t$ &0.6197 &2.9869 &0.0465 &0.0452 & $>0.9999$\\ \hline \end{tabular}}% \centering \vspace{0.2cm} \caption{Modeling results for PLC impulsive noise.}\label{tab:PLC} \small \resizebox{.7\linewidth}{!}{\begin{tabular}{ccccccc} \hline \textbf{Data} & \textbf{Est.} & \textbf{Est.} & \textbf{Est.} & \textbf{KL Div.} & \multicolumn{1}{l}{\textbf{KS}} & \multicolumn{1}{l}{\textbf{KS}}\\ & \textbf{Family}& \textbf{Shape ($\hat{\alpha}$)} & \textbf{Scale ($\hat{\gamma}$)} & & \textbf{Score} & $p$\textbf{-value}\\ \hline \textit{PLC-1} &S$\alpha$S &1.2948 &5.6969 &0.0086 &0.0112 & 1.0000\\ \textit{PLC-2} &S$\alpha$S& 0.7042 &0.1799 &0.0441 &0.0486 & $>0.9999$ \\ \textit{PLC-3} &S$\alpha$S&1.3140 &1.3488 &0.0046 &0.0132 & 1.0000\\ \hline \end{tabular}}% \centering \vspace{0.2cm} \caption{Modeling results for 2D-DWT coefficients.}\label{tab:wave} \small \resizebox{.7\linewidth}{!}{\begin{tabular}{lcccccc} \hline \textbf{Image} &\textbf{Est.} & \textbf{Est.} & \textbf{Est.} & \textbf{KL Div.} & \multicolumn{1}{l}{\textbf{KS}} & \multicolumn{1}{l}{\textbf{KS}}\\ & \textbf{Family}& \textbf{Shape ($\hat{\alpha}$)} & \textbf{Scale ($\hat{\gamma}$)} & & \textbf{Score} & $p$\textbf{-value}\\ \hline Lena (V)&$\text{GG}$ &0.5002 &1.7415 &0.0271 &0.0465 & $>0.9999$\\ Lena (H)& $t$ &1.0958 &2.2422 &0.0094 &0.0349 & $>0.9999$\\ Lena (D)& $t$ & 1.1628 &1.7735 &0.0145 & 0.0271 & 1.0000\\ SAR(V) &S$\alpha$S & 1.5381 &7.7395 &0.0025 &0.0123 & 1.0000\\ SAR(H) &S$\alpha$S &1.4500 & 8.6249 & 0.0043 & 0.0221 & 1.0000\\ SAR(D) & S$\alpha$S & 1.7500 & 6.3710 & 0.0062 & 0.0125 & 1.0000 \\ MRI(V) & $\text{GG}$ & 0.3913 & 0.2693 & 0.0365 & 0.1152 & 0.8744 \\ MRI(H) & $\text{GG}$ & 0.3527 & 0.1039 & 0.0305 & 0.0548 & $>0.9999$\\ MRI(D) & S$\alpha$S & 0.8504 & 0.5184 & 0.0245 & 0.0659 & $0.9998$ \\ Mammog.(V) & $t$ & 1.6325 & 1.6411 & 0.0363 & 0.0907 & 0.9816\\ Mammog.(H) & $\text{GG}$ & 0.7501 & 1.5154 & 0.0121 & 0.0555 & $>0.9999$\\ Mammog.(D) & $t$ & 1.6430 & 0.4851 & 0.0073 & 0.0117 & 1.0000 \\ \hline \end{tabular}}% \end{table}% \begin{figure}[htbp] \centering \subfigure[S$1.5$S$(2)$]{% \includegraphics[width=.8\linewidth]{stbl_alpha2-eps-converted-to}} \centering \hfil \subfigure[GG$_{1.7}(1.4)$]{% \includegraphics[width=.8\linewidth]{gg_alpha-eps-converted-to}} \hfil \centering \subfigure[$t_{3}(1)$]{% \includegraphics[width=.8\linewidth]{stu_alpha-eps-converted-to}} \centering \hfil \subfigure[S$1.5$S$(2)$]{ \includegraphics[width=.3\linewidth]{stbl_gamma3-eps-converted-to}} \centering \hfil \subfigure[GG$_{1.7}(1.4)$]{ \includegraphics[width=.3\linewidth]{gg_gamma2-eps-converted-to}} \centering \hfil \subfigure[$t_{3}(1)$]{ \includegraphics[width=.3\linewidth]{stu_gamma2-eps-converted-to}} \hfil \caption{Synthetically generated 
noise modeling - parameter estimation results in a single RJMCMC run. (a),(b),(c): Instantaneous $\alpha$ estimates. (d),(e),(f): Estimated posterior distributions for $\gamma$ after the burn-in period.} \label{fig:SynAll1} \end{figure}

\subsection{Case Study 1: Synthetically Generated Noise Modeling} In order to test the proposed method on modeling synthetically generated impulsive noise processes, six different distributions have been chosen (2 distributions from each family). In a single RJMCMC run, data with a length of 1000 samples are generated from one of the example distributions. The example distributions are S$1$S$(0.75)$, S$1.5$S$(2)$, GG$_{0.5}(0.5)$, GG$_{1.7}(1.4)$, $t_{3}(1)$ and $t_{0.6}(3)$. 40 RJMCMC runs have been performed for each distribution, and the estimated families with shape and scale parameters for each example distribution are shown in Table \ref{tab:syn}. In Figure \ref{fig:SynAll1}, the instantaneous estimate of the shape parameter $\alpha$ and the estimated posterior distribution of the scale parameter $\gamma$ are shown for three example distributions. The results represent the estimates obtained in one randomly selected RJMCMC run out of the 40. The burn-in period is not removed in subfigures (a)-(c) in order to show the transient characteristics of the algorithm; these plots show that the proposed method converges to the correct shape parameters. In subfigures (d)-(f), the vertical dashed lines with $\nabla$ markers mark the $\pm\sigma$ \emph{confidence interval} (CI); examining these subfigures shows that the correct scale parameters lie within the $\pm\sigma$ CI of the posteriors. Estimated pdfs and CDFs for three example distributions are depicted in Figure \ref{fig:SynAll2}, presenting the fitting performance of the algorithm visually in addition to the statistical significance values in Table \ref{tab:syn}. As can be seen in Figure \ref{fig:SynAll2}, the estimated pdfs are very similar to the data histograms, and the fitting performance for all example distributions stays within a KL divergence of at most 0.0465. Moreover, the KS scores of the estimated CDFs are also very low and the $p$-values are close to 1.0000. Please note that the estimation result in the second line of Table \ref{tab:syn} is meaningful for the example Cauchy distribution S$1$S$(0.75)$, since the Cauchy distribution is a special member of both the S$\alpha$S and Student's $t$ families. \begin{figure*}[htbp] \centering \subfigure[S$1.5$S$(2)$]{% \includegraphics[width=.4\linewidth]{stbl_pdf3-eps-converted-to}} \centering \subfigure[S$1.5$S$(2)$]{% \includegraphics[width=.4\linewidth]{stbl_cdf3-eps-converted-to}} \centering \subfigure[GG$_{1.7}(1.4)$]{ \includegraphics[width=.4\linewidth]{gg_pdf2-eps-converted-to}} \centering \subfigure[GG$_{1.7}(1.4)$]{ \includegraphics[width=.4\linewidth]{gg_cdf2-eps-converted-to}} \centering \subfigure[$t_{3}(1)$]{% \includegraphics[width=.4\linewidth]{stu_pdf2-eps-converted-to}} \centering \subfigure[$t_{3}(1)$]{% \includegraphics[width=.4\linewidth]{stu_cdf2-eps-converted-to}} \caption{Synthetically generated noise modeling results. (a)-(c): Estimated pdfs, (d)-(f): Estimated CDFs.} \label{fig:SynAll2} \end{figure*}

\subsection{Case Study 2: Modelling Impulsive Noise on PLC Systems} \begin{figure*}[ht!] 
\centering \subfigure[PLC-1]{\label{fig:PLCtime1} \includegraphics[width=.31\linewidth]{plc2-eps-converted-to}} \subfigure[PLC-2]{\label{fig:PLCtime2} \includegraphics[width=.31\linewidth]{plc4-eps-converted-to}} \subfigure[PLC-3]{\label{fig:PLCtime3} \includegraphics[width=.31\linewidth]{plc6-eps-converted-to}} \hfil\\ \centering \subfigure[PLC-1]{\label{fig:PLCfit1} \includegraphics[width=.31\linewidth]{plc1_pdf2-eps-converted-to}} \subfigure[PLC-2]{ \includegraphics[width=.31\linewidth]{plc2_pdf2-eps-converted-to}} \subfigure[PLC-3]{ \includegraphics[width=.31\linewidth]{plc3_pdf2-eps-converted-to}} \hfil\\ \subfigure[PLC-1]{% \includegraphics[width=.31\linewidth]{plc1_cdf2-eps-converted-to}} \subfigure[PLC-2]{ \includegraphics[width=.31\linewidth]{plc2_cdf2-eps-converted-to}} \subfigure[PLC-3]{\label{fig:PLCfit2} \includegraphics[width=.31\linewidth]{plc3_cdf2-eps-converted-to}} \hfil \caption{PLC impulsive noise modeling results. (a)-(c): Time plots, (d)-(f): Estimated pdfs, (g)-(i): Estimated CDFs.} \label{fig:PLCAll} \end{figure*} PLC is an emerging technology which utilizes power lines to carry telecommunication data. Telecommunication speeds of up to 200 Mb/s with a good quality of service can be achieved on PLC systems. In addition, PLC offers a physical medium for indoor multimedia data traffic without additional cables \cite{laguna2015use}. A PLC system suffers from various types of noise arising from the electrical devices connected to the power line and from external effects such as electromagnetic radiation. These noise sequences are generally non-Gaussian, and they are classified into three groups, namely: i) impulsive noise, ii) narrowband noise, iii) background noise \cite{cortes2010analysis}. Among these, impulsive noise is the most common cause of decoding (or communication) errors in PLC systems due to its high amplitudes of up to 40 dB \cite{andreadou2010modeling}. In this case study, we use 3 different PLC noise measurements. The first measurement (named \emph{PLC-1}) was performed during the project with number PTDC/EEA-TEL/67979/2006; for details of the measurement scheme and the other measurements please see \cite{lopes2013dealing}. The data utilized in this study (\emph{PLC-1}) is an amplified impulsive noise measurement from a PLC system with a sampling rate of 200 Msamples/s. The measurements last for 5 ms and there are 100K samples in the data set. In order to reduce the computational load, the data is downsampled by a factor of 50 and the resulting 2001 samples have been used in this study. In Figure \ref{fig:PLCAll}-(a) a time plot of the utilized downsampled data is depicted (for a detailed description of the data please see\footnote[2]{http://sips.inesc-id.pt/$\sim$pacl/PLCNoise/index.html}). The remaining two data sets are periodic synchronous and asynchronous impulsive noise measurements (named \emph{PLC-2} and \emph{PLC-3}, respectively), both of which were performed during the project with number TIC2003-06842 (for details please see \cite{cortes2010analysis}). The periodic synchronous measurements last for 4$\mu$s and contain 226 noise samples; the periodic asynchronous measurements contain 1901 noise samples and last for 35$\mu$s. In Figures \ref{fig:PLCAll} (b) and (c), time plots are depicted for the synchronous and asynchronous noise sequences, respectively (for a detailed description of the data please see\footnote[3]{http://www.plc.uma.es/channels.htm}). RJMCMC has been run 40 times for all three data sets.
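In code, the decimation step described above amounts to simple subsampling; a one-line sketch, with \texttt{x} denoting the raw record:
\begin{verbatim}
x_ds = x[::50]   # keep every 50th sample of the 100K-sample record
\end{verbatim}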
In Table \ref{tab:PLC}, the estimated distribution families and the resulting scale and shape parameters are reported together with the significance test results. The estimated scale and shape parameters correspond to the average values over the 40 repetitions. Examining the results in Table \ref{tab:PLC}, we can state that all three considered PLC noise processes follow S$\alpha$S distribution characteristics. In the literature, there are studies \cite{laguna2015use, tran2013plc} which model the impulsive noise in PLC systems by using stable distributions. In particular, those studies assume a stable model directly, whereas the proposed method has estimated the distribution among three impulsive distribution families; thus, our estimation results for impulsive noise in PLC systems provide experimental support for these studies. According to the KL and KS statistics on the estimated pdfs and CDFs shown in Table \ref{tab:PLC} and in Figures \ref{fig:PLCfit1} to \ref{fig:PLCfit2}, RJMCMC fits the real data with remarkable performance. The KS $p$-values are all approximately 1 ($>0.9999$), so the hypothesis that the data and the estimated distribution are of the same kind cannot be rejected. \begin{figure*}[ht!] \centering \subfigure[Lena]{ \includegraphics[width=.25\linewidth]{lena-128x128.jpg}} \hfil \centering \subfigure[Lena - Coefficient H]{ \includegraphics[width=.35\linewidth]{LenaH_pdf2-eps-converted-to}} \hfil \centering \subfigure[Lena - Coefficient H]{% \includegraphics[width=.35\linewidth]{LenaH_cdf2-eps-converted-to}} \hfil\\ \centering \subfigure[SAR \cite{SAR3}]{ \includegraphics[width=.25\linewidth]{sar3.jpg}} \hfil \centering \subfigure[SAR - Coefficient D]{ \includegraphics[width=.35\linewidth]{sar3D_pdf3-eps-converted-to}} \hfil \centering \subfigure[SAR - Coefficient D]{ \includegraphics[width=.35\linewidth]{sar3D_cdf3-eps-converted-to}} \hfil \caption{2D-DWT coefficients modeling results for Lena and SAR images. (a),(d): Images, (b)-(e): Estimated pdfs, (c)-(f): Estimated CDFs.} \label{fig:Images21} \end{figure*}

\subsection{Case Study 3: Statistical Modeling for Discrete Wavelet Transform (DWT) Coefficients} The DWT, which provides a multiscale representation of an image, is a very important tool for recovering local and non-stationary features in an image, and the resulting representation is closely related to the processing performed by the human visual system. The DWT obtains this multiscale representation by decomposing the image into a low resolution approximation and three detail images capturing horizontal, vertical and diagonal details. It has been observed by several researchers that the detail coefficients have heavier tails and sharper peaks than the Gaussian distribution \cite{simoncelli1997statistical, achim2003sar}. In this study, the proposed method has been utilized to model the coefficients (i.e. subbands) of the 2D-DWT, namely vertical (V), horizontal (H) and diagonal (D). Four different images have been used to test the performance of the algorithm under the statistical significance tests: Lena, a \textit{synthetic aperture radar} (SAR) image \cite{SAR3}, a \textit{magnetic resonance imaging} (MRI) image \cite{MRI} and a mammogram \cite{mammogram1}, which are shown in the first columns of Figures \ref{fig:Images21} and \ref{fig:Images22}. 40 RJMCMC runs have been performed, and the estimated distribution families and their parameters ($\alpha$ and $\gamma$) are reported in Table \ref{tab:wave} as averages over the 40 runs.
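The subbands themselves can be obtained with a standard wavelet library; a sketch using PyWavelets (the choice of the \texttt{db4} wavelet is our assumption, made only for illustration):
\begin{verbatim}
import numpy as np
import pywt

img = np.random.rand(128, 128)            # stand-in for a grayscale image
cA, (cH, cV, cD) = pywt.dwt2(img, 'db4')  # approximation + H, V, D subbands
x = cD.ravel()                            # e.g. model the diagonal subband
\end{verbatim}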
The estimated distributions for the wavelet coefficients of the images in Table \ref{tab:wave} show different characteristics. The SAR subbands follow S$\alpha$S characteristics, the Lena and mammogram subbands are generally modeled as $\text{GG}$ or Student's $t$, and the MRI subbands are split between $\text{GG}$ and S$\alpha$S. Moreover, despite being modeled by different distribution families, all the coefficients of all the images have been modeled successfully according to the KL and KS test scores and $p$-values. The estimated pdfs and CDFs in Figures \ref{fig:Images21} and \ref{fig:Images22} show remarkably good fits and support the numerical results in Table \ref{tab:wave}. \begin{figure*}[ht!] \centering \subfigure[MRI \cite{MRI}]{% \includegraphics[width=.25\linewidth]{mri2.jpg}} \centering \subfigure[MRI - Coefficient V]{% \includegraphics[width=.35\linewidth]{mri2V_pdf2-eps-converted-to}} \hfil \centering \subfigure[MRI - Coefficient V]{% \includegraphics[width=.35\linewidth]{mri2V_cdf2-eps-converted-to}} \hfil\\ \centering \subfigure[Mammogram \cite{mammogram1}]{% \includegraphics[width=.25\linewidth]{mamogram1.jpg}} \hfil \centering \subfigure[Mammogram - Coefficient D]{% \includegraphics[width=.35\linewidth]{mammog1D_pdf2-eps-converted-to}} \hfil \centering \subfigure[Mammogram - Coefficient D]{% \includegraphics[width=.35\linewidth]{mammog1D_cdf2-eps-converted-to}} \hfil \caption{2D-DWT coefficients modeling results for MRI and Mammogram. (a),(d): Images, (b)-(e): Estimated pdfs, (c)-(f): Estimated CDFs.} \label{fig:Images22} \end{figure*}

\subsection{Graphical Evaluation by Q-Q Plots for Data Estimated as S$\alpha$S} A quantile-quantile (Q-Q) plot is a graphical representation of the sorted quantiles of one data set against the sorted quantiles of another. Suppose we have two samples of length $n$, $X_1, X_2, \ldots, X_n$ and $Y_1, Y_2, \ldots, Y_n$. In terms of the Q-Q plot, the two samples come from the same distribution as long as their ordered sequences $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ and $Y_{(1)}, Y_{(2)}, \ldots, Y_{(n)}$ satisfy $X_{(i)} \approx Y_{(i)}$ for $i = 1, 2, \ldots, n$. Q-Q plots are used to compare the distributions of two populations, or to compare the distribution of one population to a reference distribution, and they reveal differences in location, shape and scale between the two populations. Figure \ref{fig:Appqq1} shows Q-Q plots for the data sets estimated to be S$\alpha$S. Examining the figures clearly shows a remarkable match between the estimated distributions and the data samples. The Q-Q plots for the PLC-2 and MRI-D results in Figure \ref{fig:Appqq1}-(c) and (h), respectively, show relatively lower performance than the others. This is expected, because the numerical estimation results in terms of KL and KS scores for these two data sets are already higher than the others (KS scores of 0.0486 and 0.0659, respectively), yet still acceptable given the $p$-values of 0.9999 and 0.9998, respectively.
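Such a Q-Q plot against the fitted model can be produced by plotting the sorted data against an equal number of sorted draws from the fitted distribution; a sketch reusing the hypothetical \texttt{sample\_candidate} helper above:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def qq_plot(x, k, alpha, gamma):
    """Sorted data quantiles vs. sorted quantiles of model draws."""
    data = np.sort(np.asarray(x))
    model = np.sort(sample_candidate(k, alpha, gamma, size=len(data)))
    plt.plot(model, data, '.', ms=3)
    lims = [min(model[0], data[0]), max(model[-1], data[-1])]
    plt.plot(lims, lims, 'r--')    # 45-degree reference line
    plt.xlabel('model quantiles'); plt.ylabel('data quantiles')
    plt.show()
\end{verbatim}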
\begin{figure}[ht!] \centering \subfigure[S1.5S(2)]{% \includegraphics[width=0.32\linewidth]{qqplot100-eps-converted-to}} \centering \subfigure[\textit{PLC-1}]{ \includegraphics[width=0.32\linewidth]{qqplot2-eps-converted-to}} \centering \subfigure[\textit{PLC-2}]{% \includegraphics[width=0.32\linewidth]{qqplot3-eps-converted-to}} \centering \subfigure[\textit{PLC-3}]{ \includegraphics[width=0.32\linewidth]{qqplot4-eps-converted-to}} \centering \subfigure[SAR-V]{% \includegraphics[width=0.32\linewidth]{qqplot5-eps-converted-to}} \centering \subfigure[SAR-H]{ \includegraphics[width=0.32\linewidth]{qqplot6-eps-converted-to}} \centering \subfigure[SAR-D]{% \includegraphics[width=0.32\linewidth]{qqplot7-eps-converted-to}} \centering \subfigure[MRI-D]{ \includegraphics[width=0.32\linewidth]{qqplot8-eps-converted-to}} \caption{Q-Q plots for the data sets estimated as S$\alpha$S.} \label{fig:Appqq1} \end{figure}

\section{Conclusion}\label{sec:conclusion} In this study, we have utilized RJMCMC beyond the framework of trans-dimensional sampling, in a setting which we call trans-space RJMCMC. By defining a combined parameter space over the current and target parameter subspaces of possibly different classes or structures, we have shown that the original formulation of RJMCMC offers more general applications than just estimating the model order. This enables users to perform model selection between different classes or structures. In particular, exploring the solution spaces of linear and nonlinear models, or of various distribution families, is possible using RJMCMC. Greater benefits from trans-space RJMCMC, compared to considering different model classes separately, can be expected when the different model class spaces have intersections to exploit. For the trans-distributional RJMCMC considered in this paper, the intersections are the common distributions of the impulsive noise families. They made it possible to use mapping functions benefiting from the FLOMs of the observed data. These functions, in turn, enable the information learned while searching in one family to be transferred to the subsequent search after an inter-class-switch move. The candidate distribution space covers various impulsive densities from three popular families, namely S$\alpha$S, $\text{GG}$ and Student's $t$. For synthetically generated noise processes, real PLC noise measurements and wavelet transforms of images alike, the proposed method shows remarkable modeling performance, which the simulation studies verify in terms of both visual and numerical tests. The KL and KS tests show that the numerical results are statistically significant, with $p$-values generally close to 1.0000 (at least 0.85) for all the example data sets. Moreover, the algorithm indicated S$\alpha$S distributions for the 2D-DWT coefficients of SAR images and for the noise on PLC channels, which is in accordance with other studies in the literature and confirms the success of the algorithm. We would like to underline that the ideas presented in this paper are not limited to sampling across distribution families but can be extended to any class of models.

\section*{Acknowledgement} Oktay Karakuş is funded as a visiting scholar at ISTI-CNR, Pisa, Italy by The Scientific and Technological Research Council of Turkey (TUBITAK) under grant program 2214/A.
\section{Introduction} A breakthrough in information theory was made by Shannon in 1948\cite{sh1,sh2}. Classical information theory has practical manifestations in communication systems, computing devices, gaming, imaging and countless other real-world applications, and the current state of technology is unimaginable without it. Shortly before the establishment of information theory there was another major development, the invention of the transistor in 1947\cite{tr}, which changed the whole electronics industry. The transistor and information theory together became the building blocks of revolutionary changes in science and technology. The recently emerging quantum information theory is finding applications in many domains, such as quantum computation, quantum communication, quantum cryptography, quantum imaging and quantum gaming. The formulation of quantum information theory is based on the postulates of quantum mechanics, together with its fundamental ingredients of superposition and entanglement. Efforts to develop a quantum computer are under way using several physical techniques. Many companies in the market are eagerly pushing this area towards commercialization and developing quantum computers with different physical approaches such as NMR, Bose-Einstein condensation, superconducting circuits, ion traps, ultracold atoms, Majorana fermions, etc. There is thus a race among the big companies to demonstrate a quantum computer, but the universal quantum computer is still missing. For practical applications, we need physical systems to store and process information, and the microscopic world of various types of qubits is the basic physical resource supporting quantum information processing. One may ask a natural question: can we store and process information in these physical systems more efficiently than classically? And what are the physical constraints on executing quantum information tasks\cite{info1,info2}? Efficient information storage and processing in the microscopic world is governed by the principles of quantum mechanics. It is always interesting to discover the feasible and infeasible physical situations which play an important role in executing quantum information tasks and designing quantum protocols; investigating the situations which are not possible is therefore also an important paradigm. Such impossible physical situations are expressed by no-go theorems\cite{nogo1,nogo2}. Before investing effort in developing any quantum application, it is good to keep in view the structure of the no-go theorems, which helps in judging the feasibility or infeasibility of physical situations. On the other hand, towards the development of a quantum computer, there are persistent challenges in manipulating and controlling qubits and protecting them from decoherence. The phenomenon of decoherence kills superposition in quantum systems and prevents perfect quantum computation, but gradual efforts in quantum information are under way to tackle the problem of efficient quantum information manipulation in a variety of physical systems. Recent development in quantum computation has been far more rapid than the historical developments of the past.
Therefore, in this direction it is very important to understand and track the gradual milestone developments, which may be useful for making further progress in the future. In the following sections, we present the major developments in their theoretical aspects, touching upon the experimental discussions as well. We organize the timeline into slots: the first time slot covers 10 years and the remaining slots cover 9 years each to maintain continuity, and in each slot we present the milestone developments of the corresponding period. We emphasize the milestone developments only; there may be many other developments in parallel during each era. The sections begin with the era of the 1970s, when the emergence of computation was modeled with the concept of reversible computation, and the article tours the important major developments up to 2018.

\section{Duration (1970-1980)} This period is most significant for theoretical developments in quantum information and is known for producing the idea of reversible computation by C. H. Bennett\cite{rv} and the famous Holevo theorem\cite{hv}. The inspiring idea of reversible computation was carried forward by Toffoli, who invented a universal reversible gate, now known as the Toffoli gate (a controlled-controlled-NOT, generalizing the reversible XOR, or CNOT, gate); this development became the foundation of the quantum circuit model\cite{tofo} in quantum computation. Another milestone development of the same era is the Holevo bound: Alexander Holevo established an upper bound on the amount of information that can be extracted from a quantum system prepared in a particular ensemble\cite{hv}. Three years after the publication of the Holevo bound, one of the first attempts to create a quantum information theory was made by Roman Stanislaw Ingarden, a Polish mathematical physicist, in a seminal 1976 paper entitled ``Quantum information theory''\cite{qi1}. This work generalizes Shannon's information theory\cite{th1} to the formalism of the quantum mechanics of open systems. With the progress of quantum information theory, the idea of quantum computing was proposed by Yuri Manin\cite{qc} in 1980 in his book ``Computable and Uncomputable''. The work of Yuri Manin opened further research avenues in quantum computation.

\section{Duration (1981-1990)} The time period (1981-1990) is marked by the milestone development of the no-cloning theorem. In 1982, this major result was discovered by William Wootters and Wojciech Zurek\cite{noclone1} and independently by Dennis Dieks\cite{noclone2}. The no-cloning theorem states that it is not possible to clone an unknown quantum state. It became a milestone for quantum information; we discuss the theorem with its proof in Section 7.2. Alongside this major development, Paul Benioff proposed a first theoretical model for quantum computation based on a quantum Hamiltonian\cite{qm}; he made the first attempt to quantize the Turing machine, and the framework of the quantum Turing machine took shape. The concept of entanglement had already taken birth during 1935 and 1936 with the debate between Albert Einstein and Erwin Schrödinger\cite{entan1,entan2}. The advantages of entanglement and the no-cloning theorem together enabled the discovery of entanglement-based quantum cryptography by Artur Ekert in 1991\cite{AkEkert1991}.
The development of quantum cryptography opened the new field of secure quantum communication, which remains very promising to this date.

\section{Duration (1991-2000)} This period contributed extensively to the development of entanglement-based quantum algorithms. In 1992, David Deutsch and Richard Jozsa proposed a deterministic quantum algorithm to test whether a black-box function is balanced or constant\cite{drj}. Continuing this line of work, a first milestone quantum algorithm was formulated by Peter Shor at Bell Labs, New Jersey in 1994 and published in 1997\cite{fact}. The algorithm allows a quantum computer to factor an integer in polynomial time, and it can in principle break public-key cryptographic schemes such as the RSA scheme\cite{rsa}. In parallel to the developments on quantum algorithms, Peter Shor and Andrew Steane proposed schemes for quantum error correction in 1995\cite{e1,e2}. Quantum error correction protocols are used to protect quantum information from decoherence and are essential for quantum computation. After the discovery of Shor's algorithm, Lov Grover invented the quantum database search algorithm in 1996\cite{dd} at Bell Labs, which provides a quadratic speedup over classical unstructured search and is a landmark in quantum computation. Here we mention that the period 1990-1997 has been recognized as a golden period for theoretical as well as experimental developments in quantum computation. Besides the development of quantum algorithms, another important protocol, quantum teleportation, was proposed by C. H. Bennett et al. in 1993\cite{tele1} and experimentally verified in 1997\cite{tele2}. From 1997 onwards the scientific community strongly focused on experimental manifestations of quantum information around the world. The first experimental realization of quantum gates using the nuclear magnetic resonance (NMR) technique was performed by Neil Gershenfeld and Isaac L. Chuang in 1997\cite{q2}. NMR proved to be a useful resource for fruitful experimental manifestations of quantum computation. In 1998, the first execution of the Deutsch-Jozsa algorithm was performed using the NMR technique by Jonathan A. Jones and Michele Mosca at Oxford University, and shortly after by Isaac L. Chuang at IBM's Almaden Research Center together with co-workers at Stanford University and MIT\cite{dj}. In the same year, Grover's algorithm was also experimentally verified with NMR quantum computation\cite{mi}. These experimental developments encouraged further investigations. Besides the theoretical and experimental manifestations, there was also major interest in physical situations which are not feasible, as in the no-cloning theorem. In this direction, in 2000 the important quantum no-deleting theorem was proved by Arun K. Pati and Samuel L. Braunstein; it states that, given two copies of an arbitrary qubit, one cannot delete a copy of the unknown qubit. This theorem has its own important implications in quantum information\cite{ng}; we discuss it in Section 7.3.

\section{Duration (2001-2010)} This period is well known for the role of quantum optics in quantum information, in parallel with other major developments towards the implementation of quantum networks.
In 2001, the first experimental execution of Shor's algorithm was implemented at IBM's Almaden Research Center and Stanford University using the NMR technique\cite{ed}; the number $15$ was factored using $10^{18}$ identical molecules in NMR. In the same year, the era of optical quantum computing began. Emanuel Knill, Raymond Laflamme and Gerard Milburn showed that optical quantum computing is possible with single-photon sources, linear optical elements and single-photon detectors\cite{rs}. They also showed that quantum teleportation can be performed with beam splitters using photonic qubits. Their contribution opened the avenues for the use of optics in quantum information, and the role of optics is very promising nowadays for establishing long-distance quantum communication. The implementation of quantum gates with optics is an essential requirement for quantum computation. In this direction, quantum controlled-NOT gates using linear optical elements were developed by Todd D. Pittman and collaborators at the Applied Physics Laboratory, Johns Hopkins University in 2003\cite{op1}; similar results were produced independently by Jeremy L. O'Brien and collaborators at the University of Queensland\cite{op2}. Quantum optics found applications beyond quantum cryptography: the DARPA Quantum Network became operational using optical fibers supporting the transmission of entangled photons\cite{dar}. Quantum networks use a protocol called the quantum repeater for long-distance quantum communication to overcome decoherence; quantum repeaters transmit quantum states to the receiver with the help of quantum memories. The well-established framework of quantum optics with atom-photon interaction proved successful and assisted in developing quantum memories\cite{qm1}, which are essential for establishing the quantum Internet\cite{qint}. In 2005, researchers at Harvard University and the Georgia Institute of Technology succeeded in transferring quantum information between ``quantum memories'', from atoms to photons and back again\cite{in}. Along with the advancement of quantum networks, the concept of distributed quantum computing\cite{dist} took shape, and a protocol called quantum telecloning was proposed by M. Murao et al.\cite{tele}, in which optimal clones of an unknown quantum state are created and distributed over distant parties. Samuel L. Braunstein at the University of York, along with the University of Tokyo and the Japan Science and Technology Agency, gave the first experimental demonstration of quantum telecloning in 2006\cite{qtc}. Since quantum networks and quantum repeaters attracted much attention in the quantum community, along this line of research the concept of entanglement swapping in the context of quantum repeaters\cite{qrepet} was developed by Stefano Pirandola et al. in 2006. Besides the optical approaches to quantum memories, there was also interest in developing them using condensed matter approaches; this was done in 2007 using Bose-Einstein condensation\cite{bed}. By 2007, experimental manifestations of two-qubit entanglement had been performed successfully, but entanglement in hybrid systems also attracted the attention of the quantum community, and much progress was made in 2008 on photonic qubit-qutrit entanglement\cite{gr}.
In the direction of implementing quantum networks and making the quantum Internet a reality, logic gates were implemented in optical fibers by Prem Kumar, which became a foundation for quantum networks\cite{qn}. Apart from quantum networks, the quantum community has also shown interest in developing quantum processors, mainly using two approaches: solid state and quantum optics. Along the line of research on quantum processors, a breakthrough was achieved in the development of spin-based electronics in silicon and a model of a quantum transistor\cite{sed}, inspired by the work entitled ``Single atom transistor'' done in 2004\cite{satm}. Another important proposal towards quantum processors came in 2007 from D-Wave Systems, which proposed a 28-qubit quantum computer based on quantum annealing\cite{quann}; the quantum annealer has now been experimentally realized and is commercially available. The race to develop quantum processors also started with optical techniques: ions were confined in optical traps, and a two-photon optical chip was developed in 2009\cite{pr}. Along the line of research on quantum optics and its applications in quantum information, experimental manifestations of quantum algorithms continued with the newly emerging quantum techniques. With the advancement of the photonic chip in 2009, the scientific community implemented Shor's quantum factoring algorithm on a photonic chip in the same year\cite{sp}. The first use of Deutsch's algorithm in a cluster-state quantum computer was achieved in 2007\cite{djf}. Until 2000, entanglement was the major quantum correlation used to execute quantum information tasks; in 2001, another important quantum correlation, called quantum discord, was discovered by H. Ollivier and W. H. Zurek\cite{qd1}. Quantum discord is a measurement-based quantum correlation which also has a role in quantum information, and further experimental investigations of its applications are under way.

\section{Duration (2011-2018)} The continuity of past developments in quantum information and its experimental manifestations was maintained in this era, with two major centers of interest: how to develop efficient quantum processors, and how to increase the coherence time in quantum systems. A few past records were also broken in this era. With this continuity, in 2011 the von Neumann architecture was employed in quantum computing with the superconducting approach\cite{von}. This work contributed to developing a quantum central processing unit that exchanges data with a quantum random-access memory integrated on a chip. There was a breakthrough in 2014, when scientists transferred data by quantum teleportation over a distance of 10 feet with a zero percent error rate, a vital step towards a feasible quantum Internet\cite{tp}. In the same year, Nike Dattani and Nathan Bryans broke the record for factoring the largest number, 56153, by NMR using only 4 qubits on a quantum device, beating the record established in 2012 for factoring the number 143\cite{fc1}. After a long journey of development in quantum computation, there are still many theoretical and experimental open problems inherent in the essence of quantum information. One of the major issues is controlling and manipulating entanglement in many quantum systems and protecting it from decoherence\cite{dec}.
There have been gradual efforts to increase the coherence time: in 2015, the coherence time was increased up to six hours in nuclear spins\cite{dc1}. With the advancement of quantum processors, there was a breakthrough in 2017 by D-Wave Systems: the company developed a commercially available quantum-annealing-based quantum processor, which is fully functional now, has been used for a variety of optimization problems\cite{qa}, and has applications in quantum machine learning. In connection with improving coherence times and deeper theoretical investigations of entanglement, we mention here that entanglement is a fragile phenomenon, very sensitive to quantum measurements and environmental interactions. It may die out for a finite time in a quantum system and revive again as time advances; this phenomenon is called entanglement sudden death (ESD) and was investigated by Yu and Eberly\cite{yu1,yu2,ed1,ed2,ed3,ed4,ed5,ed6}. The phenomenon of ESD is a threat to quantum applications, so overcoming it is an open issue that needs fruitful solutions. During the period 2011-2018, there has been vast research on entanglement and related aspects such as distillable entanglement and bound entanglement in quantum information theory\cite{db}. The quantum community has investigated various mathematical tools for entanglement detection and quantification, distillation protocols and the monogamy of entanglement\cite{m1,m2,m3,m4}; however, these aspects are still lacking for higher-dimensional quantum systems. The constant effort of the quantum community is to search for quantum systems which can sustain long coherence times, which is an important topic of research.

\section{Development of No-Go Theorems} In this section we review the development of important no-go theorems in quantum information. A no-go theorem expresses the impossibility of a particular physical situation. These theorems have had a major impact on the experimental development of quantum information, and all of them are derived using the linearity of quantum mechanics. In the following subsections we discuss important no-go theorems with their corresponding proofs.

\subsection{Bell's theorem} Bell's theorem\cite{be1} is a no-go theorem which has its own beauty in quantum mechanics. The theorem states that ``\textit{no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics}''. Bell's theorem has a strong connection with the EPR paradox. According to Einstein\cite{be2}, a particle must have a separate reality independent of measurements: an electron has a spin, a location and so forth even when it is not being measured. This is called the realistic point of view. On the contrary, there is another point of view, the orthodox view, which states that the measurement creates the attributes of the particle, which do not exist before the measurement. Here we illustrate the EPR paradox with an example which points towards the elements of reality. Consider the decay of a neutral pi meson into an electron (e) and a positron (p). Conservation of angular momentum demands that the total spin be zero, so the composite wave function of the electron and positron must be in the singlet configuration:
\begin{equation}
|\psi\rangle=\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle)\label{sn}
\end{equation}
where $|0\rangle$ is the spin-down state and $|1\rangle$ is the spin-up state of the particles.
Suppose one performs measurements on the electron (e) and the positron (p) with their respective detectors $D^{e}$ and $D^{p}$, both oriented along the same direction. The measured state of one particle is always opposite to that of the other, and vice versa. The measurement on one particle thus appears to influence the state of the other instantaneously; the locality principle, however, states that no influence can travel faster than the speed of light, and Einstein famously called this apparent influence ``spooky action at a distance". Faced with this phenomenon, EPR expected that there are hidden variables $(\lambda)$ associated with the wave function $(\psi)$, which form the ``elements of reality"\cite{be3}. But we have no prescription for calculating or measuring these hidden variables. Bell looked into this problem in a different way: rather than leaving the reasonableness of hidden variables as a matter of opinion without proof, he showed that the existence of local hidden variables imposes specific, testable requirements. Bell established his famous inequality\citep{be4}, the so-called Bell inequality, formulated here for spin measurements. He generalized the measurement tests of the EPR experiment by orienting the detectors $(D^{e},D^{p})$ in different directions rather than a fixed common direction for both particles. He allowed them to rotate independently and studied the average value of the product of the spins as a function of the orientation angle between the detectors. Let us set up the detectors $(D^{e},D^{p})$ along the directions of unit vectors $(m,n)$, respectively. We collect the measurement data $(d_{i}^{e},d_{i}^{p})$ for each measurement $(1\leq i\leq l)$ and calculate the product $(d_{i}^{e}\cdot d_{i}^{p})$. The average of this product is \begin{equation} P(m,n)=\frac{\sum_{i=1}^{l}(d_{i}^{e}\cdot d_{i}^{p})}{l} \end{equation} where the measurement data take the values \begin{equation} (d_{i}^{e},d_{i}^{p})=(\pm1,\pm1). \end{equation} Suppose both detectors are parallel, i.e., $(m=n)$; then the average of the product is \begin{equation} P(m,m)=-1. \end{equation} If the detectors are antiparallel $(m=-n)$, the average of the product is \begin{equation} P(m,-m)=+1. \end{equation} For arbitrary orientations of the detectors, \begin{equation} P(m,n)=-m\cdot n\label{eq:g} \end{equation} where $(\cdot)$ is the dot product between the unit vectors $(m,n)$. Eq.~\ref{eq:g} is the general prediction of quantum mechanics. As shown below, this prediction is incompatible with Bell's inequality, which every local hidden variable theory must satisfy. To prove Bell's inequality, assume there exist hidden variables $(\lambda)$, which may vary in a way that cannot be controlled.
There then exist functions of the measurement directions $(m,n)$ taking the values \[ M(m,\lambda)=\pm1,\qquad N(n,\lambda)=\pm1, \] with the conditions \begin{equation} [M(m,\lambda)]^{2}=[N(n,\lambda)]^{2}=1. \end{equation} When both detectors are aligned, the results are anti-correlated, hence \begin{equation} M(m,\lambda)=-N(m,\lambda), \quad \forall \lambda \end{equation} and, in particular, \begin{equation} M(n,\lambda)=-N(n,\lambda), \quad \forall \lambda.\label{eq:1} \end{equation} The average of the product of the measurements can be written as \begin{equation} P(m,n)=\int g(\lambda)M(m,\lambda)N(n,\lambda)d\lambda\label{eq:ee} \end{equation} where $g(\lambda)$ is the probability density distribution of the hidden variables $(\lambda)$, which satisfies the normalization condition \begin{equation} \int g(\lambda)d\lambda=1. \end{equation} By using Eq.~\ref{eq:1}, Eq.~\ref{eq:ee} can be rewritten as \begin{equation} P(m,n)=-\int g(\lambda)M(m,\lambda)M(n,\lambda)d\lambda.\label{eq:11} \end{equation} Introducing another arbitrary unit vector $r$, we can write the analogous average \begin{equation} P(m,r)=-\int g(\lambda)M(m,\lambda)M(r,\lambda)d\lambda.\label{eq:22} \end{equation} Subtracting Eq.~\ref{eq:22} from Eq.~\ref{eq:11}, we get \begin{equation} P(m,n)-P(m,r)=-\int g(\lambda)[M(m,\lambda)M(n,\lambda)-M(m,\lambda)M(r,\lambda)]d\lambda \end{equation} \begin{equation} =-\int g(\lambda)[M(m,\lambda)M(n,\lambda)-M(m,\lambda)([M(n,\lambda)]^{2})M(r,\lambda)]d\lambda \end{equation} \begin{equation} =-\int g(\lambda)[1-M(n,\lambda)M(r,\lambda)]M(m,\lambda)M(n,\lambda)d\lambda. \end{equation} Here the factors take the values \begin{equation} -1\leq M(m,\lambda)M(n,\lambda)\leq1 \end{equation} and \begin{equation} g(\lambda)[1-M(n,\lambda)M(r,\lambda)]\geq0. \end{equation} Hence, \begin{equation} |P(m,n)-P(m,r)|\leq\int g(\lambda)[1-M(n,\lambda)M(r,\lambda)]d\lambda \end{equation} or \begin{equation} |P(m,n)-P(m,r)|\leq1+P(n,r).\label{eq:b} \end{equation} Eq.~\ref{eq:b} is the famous Bell inequality. The quantum mechanical prediction given in Eq.~\ref{eq:g} is incompatible with this inequality. Assume all three vectors $(m,n,r)$ lie in the same plane, such that the vector $(r)$ makes a $45^{\circ}$ angle with each of the vectors $m$ and $n$. Applying Eq.~\ref{eq:g} we get \begin{eqnarray} P(m,n)=0,\\ P(m,r)=P(n,r)=-0.707, \end{eqnarray} whereas Bell's inequality, Eq.~\ref{eq:b}, then requires \begin{equation} 0.707\nleqslant 0.293, \end{equation} a contradiction. This indicates that Einstein's radical idea of ``elements of reality", which supplements the wave function with hidden variables, cannot reproduce quantum mechanics. Bell's theorem is a landmark theorem in quantum mechanics, but it does not imply the existence of any nonlocality in quantum mechanics itself. \subsection{No-Cloning theorem} The no-cloning theorem states that one cannot create an identical copy of an arbitrary unknown quantum state; the theorem is formulated for pure states. The no-cloning theorem was first presented by Park in 1970 and was rediscovered in 1982 by Wootters and Zurek, and independently by Dieks\cite{nc1,nc2}. This was the same period in which the development of quantum computing models was very active. The theorem is easy to prove.
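Before the formal proofs, a small numerical illustration (our own sketch, not taken from the cited references) makes the obstruction concrete: a CNOT gate copies the basis states $|0\rangle$ and $|1\rangle$ into a blank qubit, in analogy with the basis-state relations used in Proof 2 below (the auxiliary state is omitted for brevity), yet it fails for a superposition:
\begin{verbatim}
import numpy as np

# CNOT copies |0> and |1> into a blank target qubit initialized to |0>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def clones(psi):
    # compare U(|psi>|0>) with the desired output |psi>|psi>
    out = CNOT @ np.kron(psi, np.array([1, 0], dtype=complex))
    return np.allclose(out, np.kron(psi, psi))

print(clones(np.array([1, 0])))     # True : |0> is copied
print(clones(np.array([0, 1])))     # True : |1> is copied
plus = np.array([1, 1]) / np.sqrt(2)
print(clones(plus))                 # False: the output is entangled, not |+>|+>
\end{verbatim}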
Here we give two proofs of this theorem, one using the unitarity of quantum operations and another using the linearity of quantum mechanics. \subsubsection{Proof 1} Here we present the proof of the no-cloning theorem using the property of unitary operations. Consider two pure states $|\psi\rangle$, $|\phi\rangle$ and a blank state $|b\rangle$. Combine each pure state with the blank state and apply a unitary operation whose goal is to copy the pure state into the blank state: \begin{equation} U(|\psi\rangle\otimes|b\rangle)=|\psi\rangle\otimes|\psi\rangle\label{nc1} \end{equation} \begin{equation} U(|\phi\rangle\otimes|b\rangle)=|\phi\rangle\otimes|\phi\rangle\label{nc22} \end{equation} Taking the Hermitian conjugate of Eq.~\ref{nc1}, we get \begin{equation} (\langle\psi|\otimes\langle b|)U^{\dagger}=\langle\psi|\otimes\langle\psi|\label{nc3} \end{equation} Multiplying Eq.~\ref{nc3} and Eq.~\ref{nc22} side by side, we get \begin{equation} (\langle\psi|\otimes\langle b|)U^{\dagger}U(|\phi\rangle\otimes|b\rangle)=(\langle\psi|\otimes\langle\psi|)(|\phi\rangle\otimes|\phi\rangle) \end{equation} Since $U^{\dagger}U=I$, this leads to \begin{equation} \langle\psi|\phi\rangle=\langle\psi|\phi\rangle^{2} \end{equation} This equation can hold in only two cases: either $\langle\psi|\phi\rangle=0$ or $|\psi\rangle=|\phi\rangle$. These conditions reveal that there is no unitary operation which can clone an arbitrary quantum state (hence proved). \subsubsection{Proof 2} Here we present the proof using the linearity of quantum mechanics. Suppose there exists a perfect cloning machine which can copy an unknown pure quantum state $|\psi\rangle$; it can be defined as \begin{equation} |\psi\rangle |\Sigma \rangle |A\rangle\longmapsto|\psi\rangle|\psi\rangle|M(\psi)\rangle \label{p21} \end{equation} where $|\Sigma\rangle$ is the blank state into which the state $|\psi\rangle$ is to be copied, and $|A\rangle$ is the auxiliary state, transformed into $|M(\psi)\rangle$ by the cloning process. If the state $|\psi\rangle$ is prepared in $|0\rangle$ or $|1\rangle$, the machine acts as \begin{eqnarray} |0\rangle|\Sigma\rangle|A\rangle\longmapsto|0\rangle|0\rangle|M(0)\rangle \\ |1\rangle|\Sigma\rangle|A\rangle\longmapsto|1\rangle|1\rangle|M(1)\rangle \end{eqnarray} Consider now the pure state $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. By linearity, the cloning machine acts as \begin{equation} (\alpha|0\rangle+\beta|1\rangle)|\Sigma\rangle|A\rangle\longmapsto\alpha|00\rangle |M(0)\rangle+\beta|11\rangle |M(1)\rangle \label{p22} \end{equation} On the other hand, applying the definition Eq.~\ref{p21} directly to $|\psi\rangle$ gives \begin{equation} (\alpha|0\rangle+\beta|1\rangle)|\Sigma\rangle|A\rangle\longmapsto(\alpha^{2}|00\rangle+\beta^{2}|11\rangle+\alpha\beta|01\rangle+\alpha\beta|10\rangle)|M(\psi)\rangle \label{p23} \end{equation} Eqs.~\ref{p22} and \ref{p23} are not the same; hence cloning of arbitrary pure states is not possible. In case the quantum state is mixed rather than pure, the generalization of the no-cloning theorem is treated by the no-broadcasting theorem, with which we deal in the next subsection. \subsection{No-Broadcast theorem} The no-broadcast theorem is the generalized, mixed-state counterpart of the pure-state no-cloning theorem. The first proof that noncommuting mixed states cannot be broadcast was given by Barnum et al.\cite{br1}.
Further extensions of the no-broadcast theorem have been made by many authors; for a broad view of no-broadcasting and its different paradigms, we suggest that the reader look into Refs.~\cite{br1,nbb1,br2,br3,br4,br5}. Here we present the idea of Barnum et al.: a set of quantum states $A=\{\rho_{s}\}$ from a source can be broadcast to a target $(\Sigma)$ if and only if the states in the set $A$ commute. The composite system of source and target $(S\otimes T)$ can go through a physical process to broadcast the quantum state to the target with the following broadcasting machine, \begin{equation} (\rho_{s}\otimes \Sigma) \longmapsto P(\rho_{s}\otimes \Sigma)=\rho_{out} \end{equation} where $(P)$ is a physical process and $(\rho_{out})$ is the output state, which satisfies the conditions \begin{eqnarray} \mathrm{Tr}_{s}{(\rho_{out})}=\rho_{s} \\ \mathrm{Tr}_{t}{(\rho_{out})}=\rho_{s} \end{eqnarray} Here $\mathrm{Tr}_{s}$ and $\mathrm{Tr}_{t}$ denote the partial traces over the subsystems $s$ and $t$, respectively. We recall that the core result shown by Barnum et al. is that the set $A$ can be broadcast if and only if the states in the set $A$ commute. \subsection{No-Deleting theorem} The no-deleting theorem states that, given two copies of an arbitrary quantum state\cite{nd1}, it is impossible to delete one of them. The process of quantum deletion is different from quantum erasure. Let us define the quantum deleting machine as follows, \begin{equation} U(|\psi_{A}\rangle|\psi_{B}\rangle|A_{C}\rangle)=|\psi_{A}\rangle|0_{B}\rangle|A_{C}^{x}\rangle \end{equation} On the left-hand side of the equation, $U$ is the unitary operation on the composite system $(ABC)$. On the right-hand side, the term $|0_{B}\rangle$ signifies the deletion of the state $|\psi_{B}\rangle$, and $|A_{C}^{x}\rangle$ is the transformed auxiliary qubit. If both qubits are prepared in the same basis state, the deleting machine acts as \begin{equation} U(|0_{A}\rangle|0_{B}\rangle|A_{C}\rangle)=|0_{A}\rangle|0_{B}\rangle|A_{C}^{x1}\rangle\label{de1} \end{equation} and \begin{equation} U(|1_{A}\rangle|1_{B}\rangle|A_{C}\rangle)=|1_{A}\rangle|0_{B}\rangle|A_{C}^{x2}\rangle\label{de2} \end{equation} Now let the two arbitrary unknown qubits be in the same state, $|\psi_{A}\rangle=\alpha|0_{A}\rangle+\beta|1_{A}\rangle$ and $|\psi_{B}\rangle=\alpha|0_{B}\rangle+\beta|1_{B}\rangle$, respectively.
Implementing the deleting machine and expanding by linearity, we get \begin{eqnarray} U(\alpha|0_{A}\rangle+\beta|1_{A}\rangle)(\alpha|0_{B}\rangle+\beta|1_{B}\rangle)|A_{C}\rangle=\alpha^{2}U(|0_{A}0_{B}\rangle|A_{C}\rangle)+\beta^{2}U(|1_{A}1_{B}\rangle|A_{C}\rangle) \nonumber \\ +\alpha\beta U(|0_{A}1_{B}\rangle|A_{C}\rangle)+\beta\alpha U(|1_{A}0_{B}\rangle|A_{C}\rangle) \end{eqnarray} Using Eqs.~\ref{de1} and \ref{de2}, we get \begin{equation} =\alpha^{2}|0_{A}\rangle|0_{B}\rangle|A_{C}^{x1}\rangle+\beta^{2}|1_{A}\rangle|0_{B}\rangle|A_{C}^{x2}\rangle+(\sqrt{2}\alpha\beta)\,U\!\left[\frac{1}{\sqrt{2}}(|0_{A}1_{B}\rangle+|1_{A}0_{B}\rangle)|A_{C}\rangle\right] \end{equation} \begin{equation} =\alpha^{2}|0_{A}\rangle|0_{B}\rangle|A_{C}^{x1}\rangle+\beta^{2}|1_{A}\rangle|0_{B}\rangle|A_{C}^{x2}\rangle+\sqrt{2}\alpha\beta\, U|\zeta_{AB}\rangle \label{df1} \end{equation} where \begin{equation} |\zeta_{AB}\rangle=\frac{1}{\sqrt{2}}(|0_{A}1_{B}\rangle+|1_{A}0_{B}\rangle)|A_{C}\rangle \end{equation} According to the deleting machine, the output should instead be \begin{equation} U(|\psi_{A}\rangle|\psi_{B}\rangle|A_{C}\rangle)=(\alpha|0_{A}\rangle+\beta|1_{A}\rangle)|0_{B}\rangle|A_{C}^{x}\rangle\label{df2} \end{equation} The output in Eq.~\ref{df1} and the output in Eq.~\ref{df2} are not equal for arbitrary $\alpha$ and $\beta$ (linearity forces the $\alpha\beta$ cross term to survive); hence the machine does not delete an arbitrary unknown qubit. \subsection{No-Teleportation Theorem} In quantum information, the no-teleportation theorem\cite{nt1} states that an arbitrary quantum state can neither be converted into a sequence of classical bits, nor can classical bits recreate the original quantum state. This theorem is a consequence of the no-cloning theorem: if an arbitrary quantum state could be converted into a sequence of classical bits, then, since classical bits can always be copied, the quantum state could also be copied, which would violate the no-cloning theorem. So the conversion of an arbitrary quantum state into a sequence of classical bits is not possible. The theorem is simple to prove. Two quantum states $\rho_{1}$ and $\rho_{2}$ are identical if the measurement of any physical observable has the same expectation value for $\rho_{1}$ and $\rho_{2}$. Prepare an arbitrary mixed quantum state $\rho_{input}$, perform measurements on the state, and record the classical measurement results. If one attempts to reconstruct the original quantum state from these classical measurement results as $\rho_{output}$, the input and output states are in general not equal, i.e., \begin{equation} \rho_{input}\neq \rho_{output}\label{r} \end{equation} This result holds irrespective of the state preparation process and the measurement outcomes. Hence Eq.~\ref{r} shows that one cannot faithfully convert an arbitrary quantum state into a sequence of classical bits. Despite its name, the theorem has no bearing on the quantum teleportation protocol. \subsection{No-communication Theorem} The no-communication theorem\cite{ncc1,ncc2} is also known as the no-signaling principle. The theorem captures the fact that a measurement performed at Alice's end is not detectable by Bob at his end.
This is true whether the composite state of Alice and Bob is separable or entangled. Let us first consider the case when the composite state is separable. Assume the composite state of Alice and Bob is $\rho$, and Alice performs a measurement at her end. The measurements performed by Alice can be modeled by Kraus operators, which need not commute among themselves. Let the Kraus operators at Alice's end be $\{A_{xm}\}$. The probability of the measurement outcome $x$ can be written in the Kraus formalism as \begin{eqnarray} p_{x}=\sum_{m} Tr(A_{xm} \rho A^{\dagger}_{xm}) =Tr[\rho V_{x}] \end{eqnarray} where \begin{equation} V_{x}=\sum_{m}A^{\dagger}_{xm} A_{xm} \end{equation} \begin{equation} \sum_{x}V_{x}=1 \end{equation} Let the Kraus operators at Bob's end be $\{B_{yn}\}$. The probability of the measurement outcome $y$ at Bob's end, irrespective of what Alice has found, is given by \begin{equation} p_{y}=\sum_{x}Tr(\sum_{mn}B_{yn}A_{xm}\rho A^{\dagger}_{xm}B^{\dagger}_{yn})\label{ncm1} \end{equation} The order of measurements on the composite system does not matter, since Alice's and Bob's operators act on different subsystems; the following commutation relation is therefore satisfied, \begin{equation} [A_{xm},B_{yn}]=0\label{ncm2} \end{equation} Using Eq.~\ref{ncm2}, Eq.~\ref{ncm1} can be written as \begin{equation} p_{y}=\sum_{x}Tr(\sum_{mn}A_{xm}B_{yn}\rho B^{\dagger}_{yn} A^{\dagger}_{xm}) \end{equation} Using the cyclic property of the trace and carrying out the summations over $x$ and $m$, we obtain \begin{equation} p_{y}=Tr(\sum_{n}B_{yn}\rho B^{\dagger}_{yn}) \end{equation} All of Alice's operators disappear from this equation, so Bob is not able to detect which measurement Alice has performed at her end. Hence, the statistics of the measurements at Bob's end are unaffected by whatever Alice does. The no-communication theorem is trivial in the separable case, but it also holds if the composite state is entangled. Consider an entangled composite state prepared in the singlet state given in Eq.~\ref{sn}. Alice and Bob perform measurements at their respective ends using the detectors $(D^{A},D^{B})$. Following Bell's experiment, the detectors are oriented initially along the $z$ axis and then rotated independently at the ends of Alice and Bob. Let the difference between the angles of the detectors be $(\alpha-\beta)$; then quantum mechanics gives the following joint probabilities of the measurement outcomes, \begin{eqnarray} \{A(0), B(0)\}, \quad p_{00}=\frac{1}{2}\sin^{2}\left(\frac{\alpha-\beta}{2}\right)\\ \{A(0), B(1)\}, \quad p_{01}=\frac{1}{2}\cos^{2}\left(\frac{\alpha-\beta}{2}\right)\\ \{A(1), B(0)\}, \quad p_{10}=\frac{1}{2}\cos^{2}\left(\frac{\alpha-\beta}{2}\right)\\ \{A(1), B(1)\}, \quad p_{11}=\frac{1}{2}\sin^{2}\left(\frac{\alpha-\beta}{2}\right) \end{eqnarray} These probabilities satisfy the normalization condition \begin{equation} p_{00}+p_{01}+p_{10}+p_{11}=1 \end{equation} Calculating the probabilities of obtaining spin up $(|1\rangle)$ and spin down $(|0\rangle)$ at Alice's end, \begin{eqnarray} p^{A}_{1}=p_{10}+p_{11}=\frac{1}{2}\\ p^{A}_{0}=p_{00}+p_{01}=\frac{1}{2} \end{eqnarray} Similarly, for Bob we find $(p^{B}_{1}=p^{B}_{0}=\frac{1}{2})$. We observe that the single-party outcome probabilities are totally independent of the difference between the angles $(\alpha-\beta)$.
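These joint and marginal probabilities are easy to verify numerically; the following sketch (our own, in plain NumPy, using the singlet state of Eq.~\ref{sn}) confirms that Alice's marginal probabilities remain $1/2$ no matter how Bob orients his detector:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projectors(theta):
    # spin projectors along a direction at angle theta in the x-z plane
    vals, vecs = np.linalg.eigh(np.cos(theta) * sz + np.sin(theta) * sx)
    return [np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(2)]

# singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def alice_marginals(alpha, beta):
    PA, PB = projectors(alpha), projectors(beta)
    p = np.array([[np.real(np.trace(rho @ np.kron(PA[i], PB[j])))
                   for j in range(2)] for i in range(2)])
    return p.sum(axis=1)            # sum over Bob's outcomes

print(alice_marginals(0.3, 1.1))    # [0.5 0.5]
print(alice_marginals(0.3, 2.7))    # [0.5 0.5] : independent of Bob's angle
\end{verbatim}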
Measurement actions performed at Alice's end therefore cannot be detected at Bob's end, and vice versa. This is the essence of the no-communication theorem. \subsection{No-Hiding Theorem} The no-hiding theorem\cite{nhd1} is an important theorem in quantum information, which also expresses a conservation principle for quantum information. The idea of the no-hiding theorem comes from the one-time pad protocol, which is used to send a message by adding a random key to the real information. Shannon proved that the original information resides neither in the encoded message nor in the key alone; so where has the information gone? In the one-time pad method, the information is hidden in the correlations between the original information and the key. One can consider the same scenario in a quantum mechanical sense. Teleportation can be regarded as a quantum analogue of the one-time pad method: two parties, Alice and Bob, share an entangled state; Alice performs a joint measurement at her end and sends the measurement results to Bob; Bob applies the corresponding unitary operations and recovers the information. In this whole process, decoherence is neglected, although quantum systems are elusive and always prone to decoherence. If one considers decoherence in the teleportation process, then Alice's system interacts with the environment. When a quantum system decoheres through its interaction with an environment, the environment appears to destroy the information. So a natural question arises: where has the information lost from the original system gone? In the quantum mechanical case it does not reside in correlations. This idea leads to the ``no-hiding theorem": the original information resides in a subspace of the environmental Hilbert space and is not part of the correlations between the system and the environment. To prove the no-hiding theorem, consider an arbitrary input quantum state $\rho_{I}$ encoded into a larger Hilbert space. For a hiding process, there exists an output state $\sigma_{O}$ in a subspace $O$; both subspaces $(I,O)$ are parts of the larger Hilbert space. The remaining portion of the larger Hilbert space is called the ancilla space $A$. The hiding process performs the following mapping, \begin{equation} \rho_{I}\longmapsto \sigma_{O}, \quad (\sigma \quad \text{fixed}\quad \forall \rho) \end{equation} Here we assume the input state in the subspace $(I)$ is a pure state. A hiding process of this pure state can be described by taking into account the subspaces $(O,A)$; hence it can be written simply as the map \begin{equation} |\psi_{I}\rangle\longmapsto\sum_{i=1}^{n}\sqrt{p_{i}}\, |i\rangle_{O}\otimes |A_{i}(\psi)\rangle_{A} \end{equation} The right-hand side of the above equation is a Schmidt decomposition. Here $p_{i}$ are the positive eigenvalues of the state $\sigma_{O}$ and $(|i\rangle)$ are its eigenvectors. The sets $(|i\rangle)$ and $(|A_{i}\rangle)$ are orthonormal bases. By imposing this restriction on the ancilla and taking into account linearity, we can write \begin{equation} |\psi_{I}\rangle\longmapsto\sum_{i=1}^{n}\sqrt{p_{i}}\, |i\rangle_{O}\otimes (|q_{i}\rangle\otimes |\psi\rangle\oplus 0)_{A} \end{equation} Since we may swap the state $|\psi\rangle$ with any other state in the ancilla using purely ancilla-local operations, we conclude that any information about $|\psi\rangle$ that is encoded globally is in fact encoded entirely within the ancilla.
The information about $|\psi\rangle$ is encoded neither in system-ancilla correlations nor in system-system correlations. \section{Conclusions} In this article, we discussed milestone developments in quantum information, starting from the beginnings of quantum information and computation based on the ideas of reversible computing and the quantum Turing machine. We captured the experimental manifestations of the theoretical developments as well. In addition, we discussed the physical situations ruled out by no-go theorems, together with their mathematical proofs. These theorems have important consequences for the execution of quantum information tasks and for the design of quantum protocols. We hope that covering the broad aspects of these milestone developments in this article will be useful for the quantum community.
\section{INTRODUCTION}\label{sec:intr} The unexpected excess of high-energy positrons ($\geq 10$ GeV) observed by PAMELA \citep{2013PhRvL.111h1102A}, AMS-02 \citep{2014PhRvL.113l1101A}, VERITAS \citep{2015ICRC...34..411S}, \textit{Fermi}-LAT \citep{2012PhRvL.108a1103A,2017PhRvD..95h2007A}, and DAMPE \citep{2017Natur.552...63D} has been explored extensively by many researchers, yet its origin is still a matter of debate. While dark matter particles are believed to be a possible positron source, pulsars and pulsar wind nebulae (PWNe) are more natural astrophysical contributors owing to their high pair production capability \citep{2009JCAP...01..025H}. Pulsars inside PWNe produce an ultra-relativistic wind of electron-positron pairs (hereafter, ``electrons'' refers to both electrons and positrons). A shock forms as the wind interacts with the slow-moving supernova ejecta, and the electrons are thermalized and/or reaccelerated by the shock \citep{2006ARA&A..44...17G}. The resulting high-energy electrons may escape from PWNe and propagate in the nearby interstellar medium (ISM). Considering the radiative energy loss experienced by high-energy electrons while propagating, TeV positron sources must be located within hundreds of parsecs from Earth if they are to contribute to the positron excess. However, this scenario is questioned by the recent detection by HAWC of extended TeV $\gamma$-rays from the Geminga PWN and PSR B0656+14. Although this observation confirms the existence of TeV electrons in PWNe, the diffusion coefficients, constrained by the $\gamma$-ray surface brightness profile, are too small to account for the positron flux observed at Earth \citep{2017Sci...358..911A}. Two-zone diffusion models seem to partially remedy this issue \citep{2017PhRvD..96j3013H,2018ApJ...863...30F}. Nevertheless, the local PWNe origin of the positron excess is challenged by the non-detection of the Geminga PWN in \textit{Fermi} observations \citep{2018arXiv181010928X}, and the nature of the positron excess is still under debate \citep{2019MNRAS.tmp..670S}. The Vela pulsar, lying at a distance of 287\,pc from Earth \citep{2003ApJ...596.1137D}, is one of the candidate positron contributors in the TeV energy range. Thanks to its proximity, the Vela X PWN is spatially resolved in multiwavelength observations from the radio to $\gamma$-rays. The extended radio nebula (ERN) of Vela X, which is spatially correlated with the GeV nebula seen by \textit{Fermi}, is much larger than the X-ray ``cocoon" \citep{1995Natur.375...40M,1997ApJ...475..224F}. A counterpart of the cocoon was later revealed in TeV $\gamma$-rays \citep{2006A&A...448L..43A}. Despite extensive observational and theoretical studies, the formation of the complex frequency-dependent morphology of the PWN remains to be addressed \citep{2018A&A...617A..78T}. Two electron components have been assumed to explain the frequency-dependent morphology \citep{2008ApJ...689L.125D}. However, challenges arise when TeV emission beyond the cocoon (extended to $\sim 1.2^\circ$) is detected \citep{2012A&A...548A..38A,2017SSRv..207..175R}. We hereafter refer to the TeV-$\gamma$-ray-emitting plasma as the TeV nebula. \citet{2011ApJ...743L...7H} argued that the diffusive escape of high-energy electrons must be taken into consideration to explain the soft GeV spectrum of the ERN, while the TeV nebula formed over the past few hundred years, so that high-energy electrons are still trapped in a cocoon structure.
The escaped electrons, if diffused to Earth, should contribute to the local cosmic ray positron spectrum, suggesting that the Vela X PWN may be one of the positron contributors \citep{2011ApJ...743L...7H}. However, the AMS-02 measurement shows that the positron fraction saturates at $\sim 300$ GeV and likely drops above $\sim 500$ GeV \citep{2018arXiv181107551R,2019PhRvL.122d1102A}, disfavoring pulsars and PWNe as the origin of the positron excess. \citet{2018A&A...617A..78T} found that the TeV nebula of the Vela X PWN has a very hard spectrum above 100 GeV. This spectrum suggests that the emission is dominated by electrons at tens of TeV. Such a distribution is a natural consequence of a hard electron spectrum with a power-law index of less than 2 being subjected to synchrotron and IC radiative energy losses. The spatial offset of the cocoon from the pulsar suggests that it was likely caused by interaction with the reverse shock of the supernova remnant (SNR) thousands of years ago \citep{2018ApJ...865...86S}. We therefore propose that the TeV nebula formed via impulsive injection of a hard power-law distribution of energetic electrons before the PWN was displaced by interaction with the reverse shock several thousand years ago. The consequent parallel diffusion and radiative energy loss of these high-energy particles in a large-scale magnetic field can naturally account for the spatial and spectral properties of the TeV nebula \citep{2003MNRAS.343..116D}. Based on the escape model proposed by \citet{2011ApJ...743L...7H}, \citet{2018ApJ...866..143H} argued that the diffusion coefficient in the Vela X PWN should be less than $10^{28}$ cm$^2$ s$^{-1}$ for 10\,TeV electrons to avoid an excessive cosmic ray electron flux above 10\,TeV. Here we suggest that the extended TeV emission beyond the X-ray cocoon arises from the diffusion of high-energy electrons, and we use the spatial and spectral properties of the TeV nebula to constrain the diffusion coefficient in this nebula directly. The electrons are injected as a power-law function of energy, and the diffusion-loss equation can be solved analytically. Since the effective magnetic field strength for radiative energy loss can be constrained by the high-energy cutoff of the TeV spectrum and the $\gamma$-ray emission is dominated by electrons at tens of TeV, the diffusion coefficient is essentially the only key parameter one can adjust to fit the TeV $\gamma$-ray radial brightness profile and the spectra of the inner region ($0^\circ$--$0.8^\circ$, slightly larger than the X-ray cocoon) and an outer ring ($0.8^\circ$--$1.2^\circ$, diffuse emission beyond the cocoon; \citeauthor{2012A&A...548A..38A}\,2012). Our model is described in \S 2. The application of this model to the TeV nebula of the Vela pulsar is presented in \S 3. The discussion and conclusions are provided in \S 4. In a follow-up paper (\citeauthor{paperII}\ 2019, hereafter Paper II), we explain the spectrum of the ERN and the associated GeV $\gamma$-ray emission. \section{MODEL DESCRIPTION} We assume that the TeV surface brightness profile stems from the diffusion and radiation losses of high-energy electrons. For simplicity, a simple diffusion-loss model is adopted to constrain the diffusion coefficient within the TeV nebula of the Vela pulsar.
The transport of electrons can be described by \begin{eqnarray} \label{eq:dif} \frac{\partial}{\partial t}f(\gamma,r,t)=\frac{D(\gamma)}{r^{2}}\frac{\partial}{\partial r}r^{2}\frac{\partial}{\partial r}f(\gamma,r,t)+\frac{\partial}{\partial \gamma}\left(Pf\right)+Q_\textup{inj}(\gamma,t), \end{eqnarray} where $f(\gamma,r,t)$ is the electron distribution function, $r$ is the radial distance to the center of the TeV nebula, $D(\gamma)=D_0\left(\gamma/\gamma_{_\textup{10\,TeV}}\right)^\delta$ is an energy-dependent diffusion coefficient (with $\gamma_{_\textup{10\,TeV}}$ the Lorentz factor of 10\,TeV electrons), $P$ is the radiative energy loss rate, and $Q_\textup{inj}$ is the electron injection rate associated with the pulsar's spin-down. Since the slow-diffusion regions ($\sim$ 30--50\,pc in radii) in the two-zone diffusion models \citep{2018ApJ...863...30F,2018ApJ...866..143H} are apparently larger than the Vela SNR ($\sim 20$\,pc in radius), we assume a spatially independent diffusion coefficient. The $\gamma$-ray spectral indices are almost constant throughout the cocoon \citep{2012A&A...548A..38A}; therefore, the diffusion coefficient is expected to have a weak energy dependence, and we thus fix $\delta=1/3$ (Kolmogorov diffusion, as is assumed in \citeauthor{2017Sci...358..911A}\ 2017). Although a broken power law is usually used to explain the broadband spectral energy distribution (SED) of PWNe, a single power law is sufficient to explain the broadband spectrum of the Vela X PWN (see Paper II); therefore, the injection rate is assumed to be \begin{equation} Q_\textup{inj}(\gamma,t)=Q_0\delta(t-\tau_\textup{s})\left(\frac{\gamma}{10^7}\right)^{-\alpha}, \label{eq:Q} \end{equation} where $Q_0$ is the injection constant for the electrons with $\gamma=10^7$. Here we assume that most of the plasma injected before the reverse shock-PWN interaction is compressed into the ERN, and we approximate the injection to be impulsive for the following reasons. The total energy in the cocoon is estimated to be $E_\textup{cocoon}\sim 1.5 \times 10^{46}$\,erg \citep{2010ApJ...713..146A}, much lower than the energy in the ERN ($5 \times 10^{48}$\,erg; \citeauthor{2010ApJ...713..146A}\,2010). Hydrodynamic simulations have revealed that the cocoon was formed $\sim 4$\,kyr ago \citep{2018ApJ...865...86S}, when the SNR reverse shock disrupted the PWN, creating a tail (the cocoon) to the south, which contains a small fraction of fresh plasma released by the pulsar. Meanwhile, the relatively faint TeV emission and bright X-ray emission in the very vicinity of the Vela pulsar imply a strong magnetic field formed after the reverse shock interacted with the PWN \citep[see, e.g.,][]{2011ApJ...743L...7H} that can cool the electrons (injected after the X-ray cocoon was created) to low energy in a short time ($\sim 10^2(B/400\ \mu\textup{G})^{-2}(E/700\ \rm GeV)^{-1} $\,yr). Therefore, injection after the cocoon was formed does not contribute to TeV electrons. The cooling timescale of 70\,TeV electrons is $\sim 4.4 \times 10^3(B/6\,\mu\textup{G})^{-2}$\,yr \citep{2011ApJ...743L...7H}; hence, the very high-energy electrons injected $\sim 4$\,kyr ago have been cooled down, giving rise to the observed TeV spectral cutoff. Given the very hard GeV spectrum of the TeV nebula \citep{2012A&A...548A..38A,2018A&A...617A..78T}, $\alpha$ needs to be less than 2, giving rise to an energy distribution piling up where the radiative cooling timescale is equal to the age of the injected electrons (see \autoref{fig:elec}).
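As a sanity check of the timescales quoted above (a back-of-the-envelope sketch of our own, using the loss coefficient $p_2$ defined below), the radiative cooling time $t_\textup{cool}=1/(p_2\gamma)$ at $B_\textup{eff}= 6\,\mu$G indeed falls at a few kyr for 70\,TeV electrons:
\begin{verbatim}
import numpy as np

YEAR = 3.156e7          # seconds per year
MEC2 = 0.511e6          # electron rest energy in eV

def t_cool_yr(E_eV, B_eff_muG):
    # cooling time for dgamma/dt = -p2*gamma^2,
    # with p2 = 3.23e-20 (B_eff/5 muG)^2 s^-1 (see the text)
    p2 = 3.23e-20 * (B_eff_muG / 5.0) ** 2
    gamma = E_eV / MEC2
    return 1.0 / (p2 * gamma) / YEAR

print(t_cool_yr(70e12, 6.0))   # ~5e3 yr: comparable to the ~4 kyr cocoon age
print(t_cool_yr(10e12, 6.0))   # ~3.5e4 yr: 10 TeV electrons survive (Section 4)
\end{verbatim}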
To obtain the structure of the nebula, we assume that the nebula is spherically symmetric\footnote{Although the X-ray cocoon and its TeV counterpart are hardly spherically symmetric in the central region ($\sim 0.8^\circ$ in length), spherical symmetry could be a good approximation for the spatially integrated spectra and the larger extension of the TeV emission out to $2^\circ$ in radius. On the other hand, the asymmetry in the central region would only affect the first two data points in the surface brightness profile. Also, the radial TeV brightness profile is given from circular annuli \citep{2012A&A...548A..38A}, so spherical symmetry could be a convenient technical treatment to fit the data. Moreover, the diffusion coefficient is derived from fitting the relative surface brightness; therefore, the absolute magnitude of the injected energy is of less importance. The asymmetry does not affect the constraint on the diffusion coefficient obtained here. The SED, however, depends on the total number of the $\gamma$-ray-emitting electrons injected in the TeV nebula after the interaction with the reverse shock, which can be adjusted by changing the injection constant $Q_0$.}. Here we only consider the synchrotron and inverse Compton (IC) losses, and the loss rate can be described by $d\gamma/dt=-p_2\gamma^2$, where $p_2=3.23 \times 10^{-20} (B/5\,\mu G)^{2}\ \textup{s}^{-1}$. Following the procedure described in \citet{1995PhRvD..52.3265A}, we define $\mathscr{F}=(d\gamma/dt)rf(\gamma, r, t)$, hence \autoref{eq:dif} can be written as \begin{equation} \frac{\partial\mathscr{F}}{\partial z}=D_1(z)\frac{\partial^2\mathscr{F}}{\partial r^2} \end{equation} (see \citeauthor{1995PhRvD..52.3265A}\,1995 for the definition of $z$ and $D_1(z)$); consequently, the electron distribution function $f(\gamma, r, t)$ can be obtained: \begin{eqnarray} \label{eq:solution} f(\gamma,r,t)=\left \{ \begin{array}{ll} \frac{\gamma^2_tQ_\textup{inj}(\gamma_t,t)}{\gamma^2\pi^{3/2}r^3_\textup{dif}}\exp\left({-\frac{r^2}{r^2_\textup{dif}}}\right) &\ \ \gamma \le \gamma_{_\textup{max}},\\ 0 & \ \ \gamma > \gamma_{_{\rm max}},\\ \end{array} \right .\\ r_\textup{dif}(\gamma,t)=2\sqrt{D(\gamma)t\frac{1-(1-{\gamma}/{\gamma_{_\textup{max}}})^{1-\delta}}{(1-\delta){\gamma}/{\gamma_{_\textup{max}}}}}, \end{eqnarray} where $\gamma_\textup{max}=1/(p_2t)$ and $\gamma_t=\gamma/(1-p_2t\gamma)$ is the initial energy of the electrons that are cooled down to $\gamma$ after time $t$. \citet{1995PhRvD..52.3265A} showed that $r_\textup{dif}\approx2\sqrt{D(\gamma)t}$ for $\gamma<0.5\gamma_{_{\rm max}}$. We then integrate the electron distribution function along the line of sight $l$ (noting that $l^2+R^2=r^2$, with $R$ being the projected distance): \begin{equation}\label{eqn:LO} F_\textup{LS}=\int^\infty_{-\infty} f dl=\int^\infty_{-\infty} \frac{\gamma^2_tQ_\textup{inj}(\gamma_t)}{\gamma^2\pi^{3/2}r^3_\textup{dif}}\exp\left({-\frac{l^2+R^2}{r^2_\textup{dif}}}\right) dl=\frac{\gamma^2_tQ_\textup{inj}(\gamma_t)}{\gamma^2\pi r^2_\textup{dif}}\exp\left({-\frac{R^2}{r^2_\textup{dif}}}\right). \end{equation} According to \citet{2012A&A...548A..38A}, the radial brightness profile is extracted from annuli with the same width of 12'.
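To make the scales explicit, the diffusion radius $r_\textup{dif}$ and the projected profile of \autoref{eqn:LO} can be evaluated directly; the following sketch (our own illustration, using the best-fit parameters of \autoref{tab:par} presented below) gives $r_\textup{dif}\sim 2.5$\,pc, i.e. $\sim0.5^\circ$ at 287\,pc, for 10\,TeV electrons:
\begin{verbatim}
import numpy as np

PC, YEAR = 3.086e18, 3.156e7        # cm, s
D0, DELTA = 1e26, 1.0 / 3.0         # cm^2/s at 10 TeV; Kolmogorov index
P2 = 3.23e-20 * (6.0 / 5.0) ** 2    # loss coefficient for B_eff = 6 muG, s^-1
G10TEV = 10e12 / 0.511e6            # Lorentz factor of a 10 TeV electron

def r_dif(gamma, t):
    # diffusion radius including the radiative-loss correction factor
    x = gamma * P2 * t               # x = gamma / gamma_max
    corr = (1.0 - (1.0 - x) ** (1.0 - DELTA)) / ((1.0 - DELTA) * x)
    return 2.0 * np.sqrt(D0 * (gamma / G10TEV) ** DELTA * t * corr)

t = 4500 * YEAR                      # T_age - tau_s
print(r_dif(G10TEV, t) / PC)         # ~2.5 pc

# normalized line-of-sight profile, exp(-R^2/r_dif^2)
R = np.linspace(0.0, 6.0, 7) * PC
print(np.exp(-R**2 / r_dif(G10TEV, t)**2))
\end{verbatim}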
To obtain the $\gamma$-ray fluxes observed now, $F_\textup{LS}$ is further integrated over each annulus and over the pulsar lifetime: \begin{equation} \int^{T_\textup{age}}_0\int_{R_\textup{in}}^{R_\textup{out}} F_\textup{LS} 2\pi R\,dR\,dt=\frac{\gamma^2_t Q_0 \left(\gamma/10^7\right)^{-\alpha}}{\gamma^2}\left[\exp\left({ -\frac{D^2_\textup{in}}{r^2_\textup{dif}(\gamma,T_\textup{age}-\tau_s) }}\right)-\exp\left({ -\frac{D^2_\textup{out}}{r^2_\textup{dif}(\gamma,T_\textup{age}-\tau_s)} }\right) \right], \label{eq:flux} \end{equation} where $R_\textup{in}$ and $R_\textup{out}$ represent the inner and the outer radius of an annulus, respectively, and $T_\textup{age}$ represents the age of the Vela PWN. \section{APPLICATION TO VELA X} The Vela pulsar and its associated SNR have been extensively studied thanks to the pulsar's short distance of 287\,pc obtained using VLBI parallax \citep{2003ApJ...596.1137D}. The age of the pulsar derived from its proper motion is $9000$--$27000$ yr \citep{1995Natur.373..587A}. It is the first H.E.S.S. TeV source showing a prominent high-energy spectral cutoff \citep{2012A&A...548A..38A}. \citet{2006A&A...448L..43A} showed that the magnetic field strength in the X-ray cocoon is $\sim$ 4 $\mu$G based on its X-ray flux. The formation of the X-ray cocoon is attributed to the reverse shock-PWN interaction thousands of years ago \citep{2018ApJ...865...86S}, and the TeV counterpart can be explained by electron diffusion out of the cocoon. We focus on the $\gamma$-ray data and consider the cosmic microwave background radiation (CMB) and the far infrared radiation (FIR) as the background photons for the calculation of the $\gamma$-rays via the IC process. We assume a blackbody spectrum with a temperature of 25\,K and an energy density of 0.2 eV cm$^{-3}$ to approximate the FIR radiation field presented in \citet{2008ApJ...682..400P}. We fit the $\gamma$-ray spectra of both the inner and outer regions as well as the surface brightness profile given in \citet{2012A&A...548A..38A}. The model parameters are listed in \autoref{tab:par}, where $d_\textup{Vela}$ is the distance to the Vela pulsar and $B_\textup{eff}$ is the effective magnetic field strength ($B_\textup{eff}^2=B^2+8\pi u_{_{\textup{CMB}}}+ 8\pi u_{_{\textup{FIR}}}$, with $u_{_\textup{CMB}}$ and $u_{_{\textup{FIR}}}$ being the energy densities of the CMB and FIR photons, respectively). Because the energy losses are dominated by synchrotron radiation and IC scattering off low-energy photons, the energy losses are assumed to be quadratic in energy. However, the Klein-Nishina effect is incorporated into our calculation of the SEDs. Because the electron spectrum is very hard and unbroken below 70\,TeV (see \autoref{fig:elec}), IC scattering off starlight is severely suppressed by the Klein-Nishina effect. Meanwhile, the energy density of starlight is as low as 0.3 eV cm$^{-3}$, with a temperature of $\sim 3000$ K \citep{2008ApJ...682..400P}. IC scattering off starlight contributes $\sim 1\%$ of the $\gamma$-ray emission and is thus neglected. \autoref{fig:elec} shows the spatially integrated electron distribution of the TeV nebula for the parameters given in \autoref{tab:par}. The spectrum is cut off at $\sim 70$ TeV due to radiative cooling. \autoref{fig:Rings} shows the spectral fit to the inner and outer rings (left panel) and the fit to the normalized brightness profile (right panel). The SED can be fit well with $\alpha=1.7$, which is in agreement with the radio spectrum of the ERN (Paper II).
Since the magnetic field is weak in the X-ray cocoon, the synchrotron lifetime of 10\,TeV electrons is $ \approx 3\times 10^4(B/6\,\mu\textup{G})^{-2}$\,yr \citep{2011ApJ...743L...7H}, larger than the age of the Vela SNR; therefore, the corresponding \textit{Fermi} $\gamma$-ray spectrum \citep{2018A&A...617A..78T} is a single power law, consistent with theoretical expectations for $\gamma$-ray-emitting electrons with a power-law distribution. The spectrum of the outer ring is slightly harder than that of the inner ring due to the increase of the diffusion coefficient with energy. In \autoref{fig:indices}, we show the spatial distribution of the $\gamma$-ray indices for different $\delta$. The $\gamma$-ray indices are calculated for nine annuli centered on the central position of the cocoon, with the same width of $0.1^\circ$. It can be seen that, for $\delta=1$, the $\gamma$-ray indices have an apparent spatial variation, whereas for $\delta=1/3$, the spatial distribution of the $\gamma$-ray indices is consistent with the indices extracted from sectors along the cocoon. The dependence of the TeV $\gamma$-ray brightness profile and spectra on the diffusion coefficient is plotted in \autoref{fig:general}. It shows that the diffusion coefficient is well constrained ($D_0 \approx 1 \times10^{26}$\,cm$^{2}$\,s$^{-1}$) by the observed spectra and brightness profile. \begin{center} \begin{deluxetable}{p{2.5cm}cc} \tabletypesize{\footnotesize} \tablecaption{Fitting Parameters\label{tab:par}} \tablewidth{0pt} \tablehead{ \colhead{Parameter} & \colhead{Value} } \startdata $T_\textup{age}$ (yr) & 12000 \\ $Q_\textup{0}$ (cm$^{-3}$ s$^{-1}$) & $6.7 \times 10^{36}$\\ $d_\textup{Vela}$ (pc) & 287\\ $\tau_\textup{s}$ (yr) & 7500 \\ $B_\textup{eff}$ ($\mu$G) & 6\\ $D_0$ (cm$^2$ s$^{-1}$) & $10^{26}$\\ \hline $\alpha$ & 1.7 \\ \enddata \end{deluxetable} \end{center} \begin{center} \begin{figure}[H] \includegraphics[scale=0.48]{ringelec.eps} \caption{Spatially integrated electron distribution function of the annuli for $\alpha=1.7$. There are 10 annuli with a width of $12'$ each.} \label{fig:elec} \end{figure} \end{center} \begin{center} \begin{figure}[H] \includegraphics[scale=0.48]{17Fermi.eps} \includegraphics[scale=0.48]{17SB.eps} \caption{Fit to the $\gamma$-ray spectra of the inner and outer regions (left), and the normalized surface brightness profile (right). The H.E.S.S. data are taken from \citet{2012A&A...548A..38A}; the \textit{Fermi} data are from \citet{2018A&A...617A..78T}.} \label{fig:Rings} \end{figure} \end{center} \begin{center} \begin{figure}[H] \includegraphics[scale=0.48]{indices.eps} \caption{Spatial distribution of $\gamma$-ray indices. The data are taken from \citet{2012A&A...548A..38A}.} \label{fig:indices} \end{figure} \end{center} \begin{center} \begin{figure}[H] \includegraphics[scale=0.48]{SB.eps} \includegraphics[scale=0.48]{SED.eps} \caption{Dependence of the model fit on the diffusion coefficient for $\alpha=1.7$.
The solid lines in the right panel represent the spectra of the inner region, and the dashed lines represent the spectra of the outer region.} \label{fig:general} \end{figure} \end{center} \section{DISCUSSION AND CONCLUSIONS} The discovery of a high-energy spectral cutoff, in combination with the recent detection of a very hard GeV spectrum, indicates that the $\gamma$-ray emission of the TeV nebula of the Vela X PWN is dominated by electrons with energies of tens of TeV via the IC process. The weak magnetic field strength derived from X-ray observations and the high-energy cutoff are consistent with the radiative cooling of TeV electrons injected upon the formation of the cocoon, and the small total energy in the TeV nebula is consistent with the impulsive injection. In this paper, with a simple diffusion and radiative loss model, we demonstrate that in the scenario in which the TeV cocoon formed via interaction of the PWN with the reverse shock several thousand years ago, the $\gamma$-ray spectra and radial brightness profile of the TeV nebula can be reproduced. Since the magnetic field strength is constrained by the injection time and the electron index $\alpha$ can be constrained by the radio SED of the ERN, the diffusion coefficient $D_0$ is the only key parameter that can be well constrained by the brightness profile and radial spectral change of the TeV nebula. The diffusion coefficient of TeV electrons and positrons is thus determined to be $1 \times 10^{26}$\,cm$^{2}$\,s$^{-1}$ for 10\,TeV electrons, which is more than three orders of magnitude lower than the typical value in the ISM. Our results, in combination with an earlier result from HAWC observations of the Geminga PWN, suggest that slow diffusion might be common in TeV PWNe. At 10\,TeV, the diffusion coefficient is almost the same as that of Bohm diffusion ($8.3 \times 10^{25}$ cm$^2$ s$^{-1}$) in a magnetic field of $4\,\mu$G, which implies sub-Bohm diffusion at even higher energies. Since $r_\textup{dif} \sim 2\sqrt{DT_\textup{age}}\sim6$\,pc for 100\,TeV electrons, such slow diffusion could hardly transport the positrons below 100\,TeV out of the Vela SNR. One possible explanation for such slow diffusion in the cocoon is efficient trapping of electrons in a magnetic mirror. Such trapping would need to extend to the whole TeV nebula. Multiwavelength observations can be used to probe the magnetic field structure \citep{2012A&A...548A..38A}. A more detailed exploration of such a scenario will be presented in a future paper. The energy of high-energy electrons inside the cocoon is estimated to be $E_\textup{cocoon}\sim 1.5 \times 10^{46}$\,erg \citep{2010ApJ...713..146A}, which is extremely low, but much higher than the energy of $\sim 10^{44}$\,erg carried by the magnetic field. The cocoon is expected to form due to the reverse shock-PWN interaction \citep{2018ApJ...865...86S}; hence, the bulk of the pulsar spin-down energy (and the bulk of the TeV electrons) is deposited into the ERN. The SED of the ERN can be explained by intense radiation losses in the reverberation phase during the interaction of the nebula with the reverse shock; the interpretation of the broadband spectrum of the ERN is delegated to an ensuing paper (Paper II). Moreover, if the magnetic field strength in the ERN is high, the soft GeV spectrum may be attributed to hadronic processes. Slow diffusion around SNRs has been suggested for years \citep[see e.g.,][]{2009ApJ...707L.179F,2010sf2a.conf..313G,2010MNRAS.409L..35L,2012ApJ...745..140Y}.
Theoretical works also show that cosmic rays may diffuse slowly due to self-generated waves and wave-wave turbulent cascading from a scale comparable to the size of the SNR \citep[see e.g.,][]{2012PhRvL.109f1101B}; this mechanism may also work inside PWNe with weak magnetic fields. \acknowledgments We thank the anonymous referee for the helpful comments. This work is supported by the National Key R\&D Program of China under grants 2018YFA0404203, 2015CB857100, and 2017YFA0402600, NSFC under grants 11773014, 11633007, 11851305, U1738122, and 11761131007, and the International Partnership Program of Chinese Academy of Sciences under grant 114332KYSB20170008.
\section{Introduction} In 1991, Cohen, Kaplan and Nelson proposed a mechanism of ``Spontaneous Baryogenesis'' for producing baryons at the electroweak phase transition in the adiabatic limit of thick, slowly moving bubble walls\cite{Cohen:1991iu}. Their original idea\cite{Cohen:1987vi,Cohen:1988kt} uses an effective chemical potential for biasing the baryon number, where the effective chemical potential is brought in by considering a time-dependent parameter. The mechanism avoids the ``out of thermal equilibrium'' condition among Sakharov's famous three conditions\cite{Sakharov:1967dj}, since the time-dependent background violates CPT. The mechanism has been employed in many models of baryogenesis, since it is quite useful for constructing scenarios that generate the baryon number of the Universe. On the other hand, it has been suggested that the effective chemical potential may disappear from the Hamiltonian formalism when the field equation of the time-dependent parameter is taken into account\cite{Arbuzova:2016qfh}. For us, this point is one of the primary reasons for considering the (complex) fundamental equations, instead of using the (useful) effective theory. In this paper, we are not considering thermal equilibrium, but the basic idea relies on the spontaneous baryogenesis scenario. In past studies, such as Ref.\cite{Pearce:2015nga, Adshead:2015jza,Adshead:2015kza}, baryogenesis with non-perturbative particle production has been discussed with a chemical potential. To show clearly the purpose of this paper, we first explain how a ``chemical potential'' affects non-perturbative particle production. Let us start with the simplest scenario of bosonic preheating given by the action\cite{Enomoto:2017rvc} \begin{eqnarray} S_0&=&\int d^4 x\sqrt{-g}\left[\partial_\mu\phi^*\partial^{\mu}\phi -m^2 |\phi|^2+\xi R|\phi|^2 \right]. \end{eqnarray} Using conformal time $\eta$, one can write the metric $g_{\mu\nu}=a^2(\eta){\rm diag}(1,-1,-1,-1)$ and $R=-6\ddot{a}/a^3$, where $a$ is the cosmological scale factor and the dot denotes the derivative with respect to conformal time. A convenient definition of a new field is $\chi\equiv a\phi$, which gives the simple form \begin{eqnarray} S_0&=&\int d^4 x \left[|\dot{\chi}|^2 -\omega^2 |\chi|^2\right], \end{eqnarray} where \begin{eqnarray} \omega^2&\equiv& a^2m^2 + \left(-\Delta + \frac{\ddot{a}}{a}(6\xi-1)\right). \end{eqnarray} Here $\Delta$ is the Laplacian.
Annihilation ($a,b$) and creation ($a^\dagger,b^\dagger$) operators of ``particle'' and ``antiparticle'' appear in the decomposition \begin{eqnarray} \chi&=& \int \frac{d^3 k}{(2\pi)^{3/2}}\left[ h(\eta) a(\bm{k}) e^{i \bm{k}\cdot \bm{x}} +g^*(\eta) b^\dagger(\bm{k}) e^{-i \bm{k}\cdot \bm{x}}\right].\nonumber\\ \end{eqnarray} For our calculation, we introduce conjugate momenta $\Pi^\dagger\equiv\dot{\chi}$, which can be decomposed as \begin{eqnarray} \Pi^\dagger&=& \int \frac{d^3 k}{(2\pi)^{3/2}}\left[ \tilde{h}(\eta) a(\bm{k}) e^{i \bm{k}\cdot \bm{x}} +\tilde{g}^*(\eta) b^\dagger(\bm{k}) e^{-i \bm{k}\cdot \bm{x}}\right].\nonumber\\ \end{eqnarray} Following Ref.\cite{ZS-original}, we expand $h, \tilde{h}$ (particles) and $g, \tilde{g}$ (antiparticles) as \begin{eqnarray} h&=&\frac{e^{-i\int^\eta \omega d\eta'}}{\sqrt{2\omega}}A_h +\frac{e^{i\int^\eta \omega d\eta'}}{\sqrt{2\omega}}B_h,\nonumber\\ \tilde{h}&=&\frac{-i\omega e^{-i\int^\eta \omega d\eta'}}{\sqrt{2\omega}}A_h +\frac{i\omega e^{i\int^\eta \omega d\eta'}}{\sqrt{2\omega}}B_h, \end{eqnarray} and \begin{eqnarray} g&=&\frac{e^{-i\int^\eta \omega d\eta'}}{\sqrt{2\omega}}A_g +\frac{e^{i\int^\eta \omega d\eta'}}{\sqrt{2\omega}}B_g,\nonumber\\ \tilde{g}&=&\frac{-i\omega e^{-i\int^\eta \omega d\eta'}}{\sqrt{2\omega}}A_g +\frac{i\omega e^{i\int^\eta \omega d\eta'}}{\sqrt{2\omega}}B_g, \end{eqnarray} where $A$ and $B$ are known as the Bogoliubov coefficients. For further simplification, we introduce $\alpha$ and $\beta$, which are defined as \begin{eqnarray} \alpha_{h,g}&\equiv& e^{-i\int^\eta\omega d\eta'}A_{h,g}\\ \beta_{h,g}&\equiv& e^{i\int^\eta\omega d\eta'}B_{h,g}. \end{eqnarray} Now the equation of motion can be written as \begin{eqnarray} \dot{h}-\tilde{h}&=&0\\ \dot{\tilde{h}}+\omega^2 h&=&0, \end{eqnarray} which are solved for $\dot{\alpha}$ and $\dot{\beta}$ as \begin{eqnarray} \dot{\alpha}_h&=&-i\omega \alpha_h +\frac{\dot{\omega}}{2\omega}\beta_h\nonumber\\ \dot{\beta}_h&=&i\omega \beta_h +\frac{\dot{\omega}}{2\omega}\alpha_h. \end{eqnarray} Let us see what happens when a constant chemical potential is introduced. After adding a chemical potential \begin{eqnarray} {\cal L}&=&\dot{\chi}\dot{\chi}^* -\omega^2 |\chi|^2 -i\mu_\chi \left(\chi \dot{\chi}^*-\chi^* \dot{\chi}\right), \end{eqnarray} we find \begin{eqnarray} \label{eq-of-mo-boson} \ddot{\chi}-2i\mu_\chi\dot{\chi}+(\omega^2-i\dot{\mu}_\chi)\chi&=&0. \end{eqnarray} There are two terms which might cause differences. One is $-2i\mu_\chi\dot{\chi}$, and the other is $-i\dot{\mu}_\chi \chi$. If one assumes a constant chemical potential, only the first term will remain. {\bf Rather surprisingly, a constant chemical potential does not generate asymmetry. The reason will become very clear when the EWKB formalism is introduced, but here we will follow the standard formalism.} Then the equation of motion can be written as \begin{eqnarray} \dot{h}-\tilde{h}-i\mu_\chi h&=&0\nonumber\\ \dot{\tilde{h}}+\omega^2h -i\mu_\chi\tilde{h}&=&0, \end{eqnarray} where a complex parameter ($\sim i\mu_\chi$) appears. One can solve these equations for $\dot{\alpha}$ and $\dot{\beta}$ to find \begin{eqnarray} \dot{\alpha}_h&=&-i(\omega-\mu_\chi)\alpha_h +\frac{\dot{\omega}}{2\omega}\beta_h\nonumber\\ \dot{\beta}_h&=&\frac{\dot{\omega}}{2\omega}\alpha_h+i(\omega+\mu_\chi)\beta_h. \end{eqnarray} and \begin{eqnarray} \dot{\alpha}_g&=&-i(\omega+\mu_\chi)\alpha_g +\frac{\dot{\omega}}{2\omega}\beta_g\nonumber\\ \dot{\beta}_g&=&\frac{\dot{\omega}}{2\omega}\alpha_g+i(\omega-\mu_\chi)\beta_g. 
\end{eqnarray} One could naively claim that the shift $\omega\pm \mu_\chi$ is the source of the asymmetry. {\bf However, this naive speculation fails in the present model.} One can calculate the behavior of $|\beta|^2$ (both numerically and analytically\cite{Enomoto:2017rvc}\footnote{The ``constant'' chemical potential just affects the phase rotation of $\alpha_{h,g}$ and $\beta_{h,g}$, and thus it does not appear in physical quantities. Indeed, one can easily find that the equation of motion for $|\beta_{h,g}|^2$ does not depend on $\mu_\chi$.}) to find that the evolutions of $|\beta_h|^2$ and $|\beta_g|^2$ are identical in this case, resulting in no asymmetry production. From this simple model, one can understand why $\dot{\mu}\ne 0$ (i.e., a time-dependent chemical potential) is needed for asymmetry production. Using the simplest model, we have seen that a constant chemical potential may not source the asymmetry. Although the result may depend on the details of the model, what is important here is that the meaning of ``chemical potential'' becomes vague in the non-perturbative particle production scenario. This is why we have introduced mathematical tools for analyzing the asymmetry. Of course, in reality the above scenario should be considered with a time-dependent chemical potential, since $\mu$ is usually defined using a time-dependent parameter, and such a parameter normally evolves during the cosmological evolution. Therefore, the numerical calculation of a phenomenological model will usually generate the asymmetry, but still the meaning of ``chemical potential'' is vague. On the other hand, if the chemical potential is considered for a system of Boltzmann equations, the complexities discussed above for the non-perturbative particle production do not appear. In this sense, arguments about the chemical potential must be distinguished between non-perturbative particle production and a system of Boltzmann equations. See also the recent arguments on the Higgs relaxation in Ref.\cite{Kusenko:2014lra,Yang:2015ida,Wu:2019ohx}. In this paper, we analyze cosmological particle production by a time-dependent interaction. We analytically explain the reason for, and the requirements of, asymmetric particle production in typical situations. To avoid confusion, here we note that normally such ``asymmetric particle production'' is explained in two stages: (symmetric) production of heavy particles and asymmetric decay of the heavy particles, where the asymmetry is usually due to interference. In this sense, our strategy is not common, as we are considering direct asymmetry production from the time-dependent scalar field. Although not very common, direct asymmetry production has a long history. Dolgov et al.\cite{Dolgov:1994zq, Dolgov:1996qq} calculated the baryon asymmetry created by the decay of a pseudo-Nambu-Goldstone boson (PNGB), whose interactions violate baryon number conservation of fermions. The calculation of Ref.\cite{Dolgov:1996qq} considers the Bogoliubov transformation after a perturbative expansion. We take their calculation as a reference model; compared with it, our calculation is rather technical, and the differences will be clearly described in this paper. For scalar fields, Funakubo et al.\cite{Funakubo:2000us} and Rangarajan and Nanopoulos\cite{Rangarajan:2001yu} calculated asymmetric particle production.
See also recent developments in this direction in Refs.\cite{Kusenko:2014uta, Adshead:2015jza, Adshead:2015kza, Enomoto:2017rvc, Enomoto:2018yeu, Enomoto:2020lpf}. The original scenario of spontaneous baryogenesis uses the rather moderate motion of a background field to source the effective chemical potential in the thermal background. On the other hand, our focus in this paper is rapid motion, which (itself) can cause efficient particle production. In this direction, the most famous scenario in cosmology would be the preheating scenario, which discusses non-perturbative particle production before reheating\cite{Dolgov:1989us, Kofman:1997yn}. Besides the preheating scenario, there are many papers considering the famous Schwinger mechanism in cosmology. The Schwinger mechanism\cite{Schwinger:1951nm}, named after Schwinger, who first derived the exponential formula for pair production, is still an active research topic\cite{Shakeri:2019mnt, Kitamoto:2020tjm, Taya:2020dco}. We also consider phase transitions or the decay of unstable domain wall networks for our scenario, which are also expected to cause similar particle production. Since the configuration of scalar fields during the evolution of the Universe may develop a domain wall structure, and such a configuration has to decay before nucleosynthesis, it would be interesting if decaying domain walls could generate a baryon number.\footnote{A natural mechanism of generating safe (unstable) domain walls in supersymmetric theory has been advocated in Ref.\cite{Matsuda:1998ms}. See also Ref.\cite{Dolgov:2015gqa} for matter-antimatter asymmetry and safe domain walls.} Since the conventional $Z_n$-domain wall interpolates between vacua with different phases, the phase of the field becomes the primary time-dependent parameter in such a scenario. Besides the particle production, the scattering of fermions by the walls could be asymmetric\cite{Nelson:1991ab, Funakubo:1996gi}. This idea has been used for baryogenesis at the electroweak scale\cite{Nelson:1991ab}. Although there are many scenarios of cosmology in which asymmetric particle production could be important, we will not discuss the phenomenological details and will focus on the technical aspects of asymmetry production. To avoid confusion, we first explain the crucial difference between the conventional preheating scenarios and our approach. Since the ``symmetry-violating interaction'' inevitably requires multiple fields, our original equations have to be multicomponent differential equations. Although the typical single-field equation of the conventional preheating scenario can be solved using special functions, it is impossible to obtain such a solution in general. Therefore, we need to develop mathematical methods to get an analytical estimation of the asymmetry generated from the equations. This includes sensible approximations and methods of calculating the transfer matrix between asymptotic solutions when the exact solutions cannot be written in terms of special functions. To avoid this problem, previous approaches\cite{Dolgov:1989us, Funakubo:2000us, Rangarajan:2001yu} sometimes use perturbative expansion before the non-perturbative analysis, where special functions can be used for the unperturbed solution. However, such an expansion may drastically change the structure of the Stokes lines of the original theory. To avoid the problem, one has to understand the Stokes lines of the original theory first. Some concrete examples will be shown in this paper.
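Before turning to the Landau-Zener model, the statement of the previous section---that a constant chemical potential drops out of $|\beta_{h,g}|^2$---can be checked directly by integrating the $(\alpha,\beta)$ system. The following minimal Python sketch is an illustration written for this presentation (not part of the original analysis); the frequency $\omega^2=k^2+(gvt)^2$ and the values $k=g=v=1$, $\mu_\chi=0.3$ are arbitrary illustrative assumptions. It evolves the particle ($+\mu_\chi$) and antiparticle ($-\mu_\chi$) equations from the adiabatic vacuum and compares $|\beta_h|^2$ with $|\beta_g|^2$, which come out identical.
\begin{verbatim}
import numpy as np

k, g, v, mu = 1.0, 1.0, 1.0, 0.3     # illustrative parameters

def rhs(t, y, mu):
    # y = (alpha, beta); omega(t)^2 = k^2 + (g v t)^2
    w  = np.sqrt(k**2 + (g*v*t)**2)
    dw = (g*v)**2*t/w                 # d(omega)/dt
    a, b = y
    return np.array([-1j*(w - mu)*a + dw/(2*w)*b,
                      dw/(2*w)*a + 1j*(w + mu)*b])

def beta2(mu, t0=-40.0, t1=40.0, n=200000):
    h = (t1 - t0)/n
    y, t = np.array([1.0+0j, 0.0+0j]), t0   # adiabatic vacuum
    for _ in range(n):                      # fixed-step RK4
        k1 = rhs(t, y, mu); k2 = rhs(t+h/2, y+h/2*k1)
        k3 = rhs(t+h/2, y+h/2*k2); k4 = rhs(t+h, y+h*k3)
        y += h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return abs(y[1])**2

print("particle     |beta_h|^2 :", beta2(+mu))
print("antiparticle |beta_g|^2 :", beta2(-mu))   # identical
\end{verbatim}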
In this paper, we consider the Landau-Zener model and the Exact WKB analysis (EWKB) for understanding the Stokes phenomena of the particle creation\footnote{In Ref.\cite{Enomoto:2020xlf}, we have applied the EWKB to cosmological particle production (without asymmetry). See Refs.\cite{Enomoto:2020xlf, Sueishi:2020rug, Taya:2020dco, Sueishi:2021xti} for more references.}. As we will show in this paper, the combination of these methods is very useful in understanding the origin of the asymmetry. Theoretically, the extension of the EWKB calculation to a higher Landau-Zener model is straightforward\cite{Virtual:2015HKT}, but because of the complexity of the analytical result (it contains solutions of a higher-order algebraic equation), we reduce the equations to the conventional two-component model. For multiple Dirac fermions, we take the relativistic and the non-relativistic limits. For fermions, our equations can be regarded as a generalized Landau-Zener model\cite{Zener:1932ws}. This analogy is sometimes very useful for understanding the origin of the asymmetry. Although the original Landau-Zener model mainly considers time-dependent diagonal elements, our focus is the rotational motion of the off-diagonal elements. We consider such models since the off-diagonal elements are supposed to come from the interaction (i.e., symmetry violation) required for asymmetry production. Mathematically, the time-dependence of the off-diagonal elements can be moved into the diagonal elements using some transformation. To explain the basic ideas of our strategy, we start with the solution of the original Landau-Zener model in the next subsection, in which the transition between states is calculated when the diagonal elements are time-dependent. Since the ``adiabatic states'' diagonalize the Hamiltonian, particle production can be calculated from the Landau-Zener transition, which is very convenient. It will become clear below why the transfer matrix of the Landau-Zener model explains the Bogoliubov transformation of the cosmological particle production. See appendices \ref{app-srevEWKB} and \ref{app-reviewEWKB} for more technical details of the EWKB and Landau-Zener transformation applied to cosmological particle production. \subsection{The Landau-Zener model and particle creation in cosmology} First, we review the original Landau-Zener model and explain how it can be related to cosmological particle production. The Landau-Zener model uses a couple of ordinary differential equations given by \begin{eqnarray} i\hbar\frac{d}{dt}\left( \begin{array}{c} \psi_1\\ \psi_2 \end{array} \right)&=&\left( \begin{array}{cc} -\frac{v}{2}t& \Delta \\ \Delta& +\frac{v}{2}t \end{array} \right) \left( \begin{array}{c} \psi_1\\ \psi_2 \end{array} \right), \end{eqnarray} where the ``velocity'' satisfies $v>0$ and the off-diagonal element $\Delta$ is supposed to be real. These equations can be decoupled to give \begin{eqnarray} \left[\hbar^2\frac{d^2}{dt^2}+\left(\Delta^2-i\hbar\frac{v}{2}\right)+\frac{1}{4}v^2t^2\right]\psi_1&=&0\\ \left[\hbar^2\frac{d^2}{dt^2}+\left(\Delta^2+i\hbar\frac{v}{2}\right)+\frac{1}{4}v^2t^2\right]\psi_2&=&0. \end{eqnarray} Following Refs.\cite{Virtual:2015HKT, EWKB}, we are going to rewrite the equations in the standard EWKB form.
In this form, a ``large'' parameter $\eta\equiv \hbar^{-1}$ is introduced to give the ``Schr\"odinger equation'' \begin{eqnarray} \left[-\frac{d^2}{dx^2}+\eta^2 Q(x) \right]\psi(x,\eta)&=&0, \end{eqnarray} where \begin{eqnarray} Q(x)&\equiv&V(x)-E \end{eqnarray} is given by the ``potential'' $V$ and the ``energy'' $E$. For the Landau-Zener model (identifying the variable $x$ with $t$), we have \begin{eqnarray} Q(t,\eta)&=&-\left(\Delta^2+\frac{1}{4}v^2t^2\right)\pm i\eta^{-1}\frac{v}{2}\\ Q_0(t)&\equiv&-\left(\Delta^2+\frac{1}{4}v^2t^2\right)\\ Q_{-1}(t)&\equiv&\pm i\frac{v}{2}, \end{eqnarray} where the upper (lower) sign corresponds to $\psi_1$ ($\psi_2$). Due to the formal structure of the EWKB\cite{Enomoto:2020xlf,Virtual:2015HKT}, the Stokes lines are drawn using only $Q_0$\footnote{See also Appendix \ref{app-srevEWKB} to find the difference between the conventional WKB analysis and the EWKB.}. Therefore, in the EWKB formulation, $\psi_1$ and $\psi_2$ have the same Stokes lines. (A careful reader will understand that this statement does not mean that the solutions are identical.) Finally, we have \begin{eqnarray} V&=&-\frac{1}{4}v^2t^2\\ E&=&\Delta^2 \end{eqnarray} for the conventional quantum scattering problem with an inverted quadratic potential. See also Appendix \ref{app-reviewEWKB} and Ref.\cite{Enomoto:2020xlf} for more details about the EWKB and the Stokes lines for cosmological particle production. If one wants to consider (explicitly) the exact solution instead of the Stokes lines of the EWKB, it will be convenient to consider $z=i\sqrt{v} e^{i\pi/4}t$ ($z^2=-ivt^2$) to find\footnote{Here we temporarily set $\hbar=1$.} \begin{eqnarray} \left[\frac{d^2}{dz^2}+\left(n+\frac{1}{2}-\frac{1}{4}z^2\right)\right]\psi_1(z)&=&0\\ \left[\frac{d^2}{dz^2}+\left(n-\frac{1}{2}-\frac{1}{4}z^2\right)\right]\psi_2(z)&=&0. \end{eqnarray} Here we set \begin{eqnarray} n&\equiv&i\frac{\Delta^2}{v}. \end{eqnarray} Since these equations take the standard form of the Weber equation, their solutions are given by a couple of independent combinations of $D_n(z), D_n(-z),D_{-n-1}(iz), D_{-n-1}(-iz)$. Using the asymptotic forms of the Weber function, one can easily get the transfer matrix given by \begin{eqnarray} \left( \begin{array}{c} \psi_1^+\\ \psi_2^+ \end{array} \right)&=&\left( \begin{array}{cc} e^{-\pi \kappa}& -\sqrt{1-e^{-2\pi\kappa}} \\ \sqrt{1-e^{-2\pi\kappa}} & e^{-\pi \kappa} \end{array} \right) \left( \begin{array}{c} \psi_1^-\\ \psi_2^- \end{array} \right),\nonumber\\ \end{eqnarray} where phase parameters are disregarded for simplicity. The $\pm$ signs of $\psi^\pm$ refer to $t\rightarrow \pm \infty$. We introduced $\kappa$, which is the imaginary part of $n$ and given by \begin{eqnarray} \kappa&\equiv&\frac{\Delta^2}{v}. \end{eqnarray} For the EWKB, this factor appears from the integral connecting the two turning points of the MTP (Merged pair of simple Turning Points)\cite{Enomoto:2020xlf}. (Here, ``turning point'' denotes solutions of $Q_0=0$.) For the cosmological particle production, $\kappa$ determines the number density. Note that the above transfer matrix is not written for the ``adiabatic states'', whose ``adiabatic energies'' are \begin{eqnarray} E_\pm&=&\pm\sqrt{\Delta^2+v^2t^2/4}. \end{eqnarray} Since these adiabatic states diagonalize the Hamiltonian and are identified with the asymptotic WKB solutions, the transfer matrix for these (adiabatic) states gives the Bogoliubov transformation of the cosmological particle production.
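The exponential factor can also be confirmed numerically. The sketch below is an illustration written for this presentation (the values $v=1$, $\Delta=0.6$ are arbitrary); it integrates the original two-component system from the diabatic state $\psi^-=(1,0)$ and compares $|\psi_1^+|^2$ with $e^{-2\pi\kappa}$.
\begin{verbatim}
import numpy as np

v, Delta = 1.0, 0.6                 # illustrative; kappa = Delta^2/v
T, n = 40.0, 200000
h = 2*T/n

def rhs(t, psi):                    # i d(psi)/dt = H psi, hbar = 1
    H = np.array([[-v*t/2, Delta],
                  [ Delta,  v*t/2]])
    return -1j*H.dot(psi)

psi, t = np.array([1.0+0j, 0.0+0j]), -T
for _ in range(n):                  # fixed-step RK4
    k1 = rhs(t, psi); k2 = rhs(t+h/2, psi+h/2*k1)
    k3 = rhs(t+h/2, psi+h/2*k2); k4 = rhs(t+h, psi+h*k3)
    psi += h/6*(k1 + 2*k2 + 2*k3 + k4); t += h

kappa = Delta**2/v
print("numerical |psi_1^+|^2    :", abs(psi[0])**2)
print("analytic  exp(-2 pi kappa):", np.exp(-2*np.pi*kappa))
\end{verbatim}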
If one writes the transfer matrix for the ``adiabatic states'' $\Psi_{1,2}$ instead of the original states $\psi_{1,2}$, one will have \begin{eqnarray} \left( \begin{array}{c} \Psi_1^+\\ \Psi_2^+ \end{array} \right)&=&\left( \begin{array}{cc} \sqrt{1-e^{-2\pi\kappa}} &e^{-\pi \kappa}\\ e^{-\pi \kappa} &-\sqrt{1-e^{-2\pi\kappa}} \end{array} \right) \left( \begin{array}{c} \Psi_1^-\\ \Psi_2^- \end{array} \right),\nonumber\\ \end{eqnarray} where we have omitted the phase parameter. Compare the transfer matrix with the one obtained for bosonic preheating in Ref.\cite{Kofman:1997yn}. For Dirac fermions, one can find the calculation based on the Landau-Zener model in Ref.\cite{Enomoto:2020xlf}, which can be compared with the standard calculation of Ref.\cite{Greene:1998nh, Peloso:2000hy}. The off-diagonal elements of the transfer matrix give $\beta_k^+$ of the Bogoliubov transformation\cite{Kofman:1997yn} if $\alpha_k^-=1, \beta_k^-=0$ is considered for the initial condition. Comparing the original equation of the Landau-Zener model and the decoupled equations, one can see that $D_1\equiv - vt/2, D_2\equiv +vt/2$ in the (original) diagonal elements are transferred into the ``potential'' $-\frac{1}{4}v^2t^2$ in the decoupled equations\cite{Enomoto:2020xlf}. In this paper, both approaches (the Landau-Zener model and the EWKB Stokes lines of the decoupled equations) are used to understand the cosmological particle production and the origin of the asymmetry. \section{Asymmetry in cosmological particle production and the decay process} First, we introduce a helicity-violating interaction (mass term) for the Majorana fermion and examine asymmetry production when the mass is time-dependent. Because of the simplicity of the equations, our idea of asymmetric particle production will be examined first for the helicity asymmetry of the Majorana fermions. The result can be regarded as asymmetric decay of the $\theta(t)$ field (PNGB), where the asymmetry is determined by the sign of $\dot{\theta}$. Unlike the usual scenario of asymmetric decay, interference does not play an important role in our scenario. Although a perturbative expansion performed before the non-perturbative analysis could be very useful, it may drastically change the structure of the EWKB Stokes lines of the model and is therefore sometimes very dangerous. Some useful examples will be shown. \subsection{Majorana fermion with time-dependent mass (Basic Calculation)} \label{subsec-basic} For the Majorana fermion, we consider $\Psi^t_R\equiv(\psi_R,\psi_R^\dagger)$ and write the Majorana mass term as \begin{eqnarray} {\cal L}_m&=&\bar{\Psi}_R \left( \begin{array}{cc} 0 & m_R \\ m_R^* & 0 \end{array} \right)\Psi_R. \end{eqnarray} The Lagrangian density becomes \begin{eqnarray} \label{eq-majorana-Lag} {\cal L}&=& \bar{\psi}_Ri\bar{\sigma}^\mu\partial_\mu\psi_R -\frac{1}{2} \left(m_R\psi_R^2+m_R^*\psi_R^{\dagger 2}\right), \end{eqnarray} which gives the equation of motion \begin{eqnarray} (i\bar{\sigma}^0\partial_t+i\bar{\sigma}^i\partial_i)\psi_R&=& -m_R^*\psi_R^\dagger.
\end{eqnarray} We consider the expansion \begin{eqnarray} (\psi_R)_\alpha&=&\int\frac{d^3k}{(2\pi)^3}e^{i\mathbf{k\cdot x}}\sum_{s=\pm} (e^s_{\boldsymbol k})_\alpha \nonumber\\ &&\times \left[ u^s_k(t) a^s_{\boldsymbol k} +v^{s*}_{k}(t)\cdot e^{-i\theta_{\boldsymbol k}}a_{-\boldsymbol k}^{s\dagger}\right], \label{eq_expansion_MF} \end{eqnarray} where $e_{\mathbf{k}}^s$ denotes the helicity eigenstate and we have \begin{equation} -k^i \bar{\sigma}^i e_{\mathbf{k}}^s = s|\mathbf{k}| \bar{\sigma}^0 e_{\mathbf{k}}^s \qquad (s=\pm) \end{equation} and the orthogonalities \begin{equation} e_{\boldsymbol k}^{s\dagger}\bar{\sigma}^0e_{\boldsymbol k}^{s'} = \delta^{ss'}, \qquad e_{\boldsymbol k}^se_{- \boldsymbol k}^{s'}=se^{i\theta_{\boldsymbol k}}\delta^{ss'}, \end{equation} and a phase determined by the momentum direction \begin{equation} e^{i\theta_{\boldsymbol k}}\equiv \frac{k^1+ik^2}{\sqrt{(k^1)^2+(k^2)^2}}. \end{equation} From the above expansion and the equation of motion, one will find \begin{eqnarray} \label{eq-EOMofMajo} (i\partial_t+s|\boldsymbol k|)u^s_{k}=s m_R^*v^{s}_{k},\nonumber\\ (i\partial_t+s|\boldsymbol k|)v^{s*}_{k}=-s m_R^* u^{s*}_{k}. \end{eqnarray} One can write these equations using a matrix as \begin{eqnarray} i\frac{d}{dt}\Psi&=&H\Psi,\nonumber\\ \left( \begin{array}{cc} H_{11} & H_{12}\\ H_{21} & H_{22} \end{array} \right)&=& \left( \begin{array}{cc} -s|\boldsymbol k| & s m_R^*(t)\\ s m_R(t) & s|\boldsymbol k| \end{array} \right), \end{eqnarray} where $\Psi^t\equiv (v^s_{k}, u^s_{k})$. Note that this formalism introduces ``time-dependent off-diagonal elements'' to the theory. This is what we need for asymmetry production in this paper. We are going to write the above equations in a versatile form. Introducing $D(t)=-s|{\boldsymbol k}|$ and $\Delta(t)=sm_R(t)$, we have \begin{eqnarray} \label{eq-simpleoriginalLZ} i\hbar \frac{d}{dt}\left( \begin{array}{c} X\\ Y \end{array} \right)&=&\left( \begin{array}{cc} D(t) & \Delta(t)^*\\ \Delta(t) & -D(t) \end{array} \right) \left( \begin{array}{c} X\\ Y \end{array} \right), \end{eqnarray} where we recovered $\hbar$ for later arguments. Decoupling the equations, one will have the equations given by \begin{eqnarray} \ddot{X}-\frac{\dot{\Delta}^*}{\Delta^*}\dot{X}+ \left(-\frac{iD\dot{\Delta}^*}{\hbar\Delta^*} +\frac{i\dot{D}}{\hbar} +\frac{|\Delta|^2+D^2}{\hbar^2} \right)X=0.\\ \ddot{Y}-\frac{\dot{\Delta}}{\Delta}\dot{Y}+ \left(\frac{iD\dot{\Delta}}{\hbar\Delta} -\frac{i\dot{D}}{\hbar} +\frac{|\Delta|^2+D^2}{\hbar^2} \right)Y=0. \end{eqnarray} To obtain the standard form of the EWKB, one has to introduce new functions $\hat{X}$ and $\hat{Q}... \hat{Y}$ defined by \begin{eqnarray} \label{eq-normalEWKBtrans} \hat{X}&=&\exp\left(-\frac{1}{2}\int^t \frac{\dot{\Delta}^*}{\Delta^*}dt'\right)X\nonumber\\ \hat{Y}&=&\exp\left(-\frac{1}{2}\int^t \frac{\dot{\Delta}}{\Delta}dt'\right)Y. \end{eqnarray} Alternatively, one can apply this transformation to the original equation (before the decoupling) to remove the time-dependence of the off-diagonal elements in (\ref{eq-simpleoriginalLZ}). In that case, one will have \begin{eqnarray} \Psi&=&U \hat{\Psi},\\ \hat{H}&=&U^{-1} H U-i\hbar U^{-1}\dot{U}, \end{eqnarray} where $U$ defines the transformation given in Eq.(\ref{eq-normalEWKBtrans}). Of course, after decoupling the equations, one will have identical results.
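The decoupled equation quoted above can be verified symbolically. The following Python sketch is a consistency check written for this presentation ($\Delta$ and $\Delta^*$ are treated as formally independent functions, an assumption that is harmless for the algebra); it eliminates $Y$ from Eq.(\ref{eq-simpleoriginalLZ}) and confirms that the stated second-order equation for $X$ is recovered.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
hbar = sp.symbols('hbar', positive=True)
D   = sp.Function('D')(t)        # diagonal element D(t)
Dl  = sp.Function('Delta')(t)    # off-diagonal Delta(t)
Dlc = sp.Function('Deltac')(t)   # Delta^*(t), treated independently
X   = sp.Function('X')(t)

# eliminate Y using  i hbar X' = D X + Delta^* Y
Y  = (sp.I*hbar*sp.diff(X, t) - D*X)/Dlc
# remaining equation:  i hbar Y' = Delta X - D Y
eq = sp.I*hbar*sp.diff(Y, t) - Dl*X + D*Y

# the claimed decoupled equation for X
claim = sp.diff(X, t, 2) - sp.diff(Dlc, t)/Dlc*sp.diff(X, t) \
      + (-sp.I*D*sp.diff(Dlc, t)/(hbar*Dlc)
         + sp.I*sp.diff(D, t)/hbar + (Dl*Dlc + D**2)/hbar**2)*X

print(sp.simplify(eq*(-Dlc/hbar**2) - claim))   # prints 0
\end{verbatim}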
For the decoupled equations, the equations can be written in the standard EWKB form \begin{eqnarray} \label{Eq_2ndorder-Majorana} \ddot{\hat{X}}&+&\left(\frac{-iD\dot{\Delta}^*}{\hbar \Delta^*} +\frac{i\dot{D}}{\hbar}+\frac{|\Delta|^2+D^2}{\hbar^2}\right.\nonumber\\ &+&\left.\frac{\ddot{\Delta}^*}{2\Delta^*}-\frac{3(\dot{\Delta}^*)^2}{4(\Delta^*)^2}\right)\hat{X}=0\\ \ddot{\hat{Y}}&+&\left(\frac{iD\dot{\Delta}}{\hbar \Delta} -\frac{i\dot{D}}{\hbar}+\frac{|\Delta|^2+D^2}{\hbar^2}\right.\nonumber\\ &+&\left.\frac{\ddot{\Delta}}{2\Delta}-\frac{3(\dot{\Delta})^2}{4(\Delta)^2}\right)\hat{Y}=0. \end{eqnarray} Seeing the $\hbar$-dependence, the EWKB Stokes lines of the above equation coincide with those of the trivial equation \begin{eqnarray} \label{eq-triv} \ddot{\hat{X}}&+&\frac{|\Delta|^2+D^2}{\hbar^2}\hat{X}=0, \end{eqnarray} which cannot generate asymmetry. Mathematically, this result is true if no extra $\hbar$ appears from $\dot{\Delta}^*$. We are going to examine this model further to understand the source of the asymmetry. \subsection{Majorana fermion with time-dependent mass (constant rotation)} \label{subsec-simplept} We start with a typical example of quantum mechanics. In cosmology, trapping of an oscillating field\cite{Kofman:2004yc, Enomoto:2013mla} or the Affleck-Dine baryogenesis\cite{Affleck:1984fy, Matsuda:2002jv} can generate similar rotation, which can be seen in a local domain\cite{Lee:1991ax,Matsuda:2002jx}. Here we consider the equation given by\footnote{Just for simplicity, we temporarily set $\hbar=1$.} \begin{eqnarray} i\frac{d}{dt}\Psi&=&(H^{(0)}+H^{(1)})\Psi,\nonumber\\ H^{(0)}&=& \left( \begin{array}{cc} D& 0\\ 0& -D\\ \end{array} \right)\\ H^{(1)}&=& \left( \begin{array}{cc} 0 & \Delta^*(t)\\ \Delta(t)& 0\\ \end{array} \right), \end{eqnarray} where $D= \omega_0$. Then we have the solution given by \begin{eqnarray} \Psi(t)&=&c_1(t) e^{-i\omega_0 t} \left( \begin{array}{c} 1\\ 0 \end{array} \right) +c_2(t) e^{i\omega_0 t} \left( \begin{array}{c} 0\\ 1 \end{array} \right). \end{eqnarray} To find the time-dependent coefficients $c_{1}(t)$ and $c_2(t)$, we substitute $\Psi(t)$ to find \begin{eqnarray} i\frac{d c_1}{dt}&=& \Delta^* e^{2i\omega_0 t}c_2(t)\\ i\frac{d c_2}{dt}&=& \Delta e^{-2i\omega_0 t}c_1(t). \end{eqnarray} It is very difficult to solve this equation exactly for general $\Delta(t)$, but one can use a numerical calculation to understand the transition. For $\Delta(t)\equiv Ae^{i\omega t}= Ae^{2i\omega_0 t}$, one can easily find the exact solution, which gives $c_{1,2}(t)\sim \sin (At+\theta_0)$. Here $A$ ($=|\Delta|$) determines the rapidity of the transition and the maximum transition is possible for any $A$, although it takes a long time for small $A$. Away from the resonance frequency at $\omega=2\omega_0$, the transition amplitude decreases. Here, what is important for our discussion about asymmetry is inverse rotation. If the off-diagonal element is replaced by $\Delta(t)= Ae^{-2i\omega_0t}$, the equations become \begin{eqnarray} i\frac{d c_1}{dt}&=& A e^{4i\omega_0 t}c_2(t)\\ i\frac{d c_2}{dt}&=& A e^{-4i\omega_0 t}c_1(t), \end{eqnarray} which ruins the resonance. The situation becomes very clear if one introduces the transformation of Eq.(\ref{eq-normalEWKBtrans}). For $\Delta(t)= Ae^{2i\omega_0t}$, the two states are shifted together to make a pair of degenerate states ($\hat{D}=0$) in $\hat{H}$. If this transition corresponds to the Bogoliubov transformation, particle production is possible in this case.
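Both statements are easy to see numerically. The sketch below is an illustration written for this presentation (the values $\omega_0=1$, $A=0.05$ are arbitrary; $A\ll\omega_0$ is assumed so that the resonance is sharp). It integrates the coefficient equations for both rotations and records the maximal transition probability; the contrast with the inverse rotation, discussed next, appears immediately.
\begin{verbatim}
import numpy as np

w0, A = 1.0, 0.05                  # illustrative parameters
T, n = 100.0, 200000
h = T/n

def max_transition(sign):
    # Delta(t) = A e^{sign * 2 i w0 t}; sign=+1 is the resonant rotation
    def rhs(t, c):
        ph = np.exp(2j*(1 - sign)*w0*t)
        return np.array([-1j*A*ph*c[1], -1j*A*np.conj(ph)*c[0]])
    c, t, pmax = np.array([1.0+0j, 0.0+0j]), 0.0, 0.0
    for _ in range(n):             # fixed-step RK4
        k1 = rhs(t, c); k2 = rhs(t+h/2, c+h/2*k1)
        k3 = rhs(t+h/2, c+h/2*k2); k4 = rhs(t+h, c+h*k3)
        c += h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
        pmax = max(pmax, abs(c[1])**2)
    return pmax

print("co-rotating  max |c_2|^2 :", max_transition(+1))  # ~ 1
print("inverse      max |c_2|^2 :", max_transition(-1))  # suppressed
\end{verbatim}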
On the other hand, for the inverse rotation $\Delta(t)= Ae^{-2i\omega_0t}$, the two states are ``shifted away'' to give $\hat{D}=2\omega_0$. Then the particle production is suppressed. Although the diagonal elements of $\hat{H}$ coincide, the ``adiabatic'' energy splits because of the remaining off-diagonal elements (i.e., the radial part $|\Delta(t)|$), which affects the rapidity of the transition. The situation is shown in Fig.\ref{fig-rot-simplest}. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{simplest.eps} \caption{Top: rotation of the off-diagonal elements. Middle: New states of $\hat{H}$. Bottom: The original states of $H$. In this case, the resonance is possible only for the anti-clockwise rotation.} \label{fig-rot-simplest} \end{figure} Although this model is very useful for understanding the origin of the asymmetry, there is a problem in defining the number density. In this model, there is no non-perturbative transition (i.e., the Stokes phenomena) between the ``adiabatic states''. On the other hand, the adiabatic states cannot keep diagonalizing the Hamiltonian, which causes oscillation of the number density. To avoid such a problem, one has to define the creation and the annihilation operators for the asymptotic states, but in this model the asymptotic states are not defined.\footnote{In reality, the period of the periodic function $n_k$ depends on $k$. Therefore, the total number density after $k$-integration is not simple. Since the particles could decay during the process, it could be possible to claim that asymmetric particle production is possible in this model.} To introduce a non-adiabatic transition, we have to extend the model. \subsection{Majorana fermion with linear time-dependence ($\Delta=s g (\epsilon+i v t), D=-s|\boldsymbol k|$)} \label{subsec-lin} For the cosmological preheating scenario, particle production near the enhanced symmetry point (ESP) with $\phi(t)=\epsilon+ivt$ is very important. Using the generation of the asymmetry, we also explain a crucial discrepancy between the conventional WKB expansion and the EWKB. We start with the decoupled equation \begin{eqnarray} \ddot{\hat{X}}&+&\left[\frac{g^2 \left(\epsilon^2+v^2t^2 \right)+ |\boldsymbol k|^2}{\hbar^2}+\frac{s|\boldsymbol k| v}{\hbar (\epsilon-ivt)}\right.\nonumber\\ &&\left. +\frac{3v^2}{4(\epsilon-ivt)^2}\right]\hat{X}=0. \end{eqnarray} Naively for the EWKB, this equation gives \begin{eqnarray} Q_0(t)&=&-g^2 \left(\epsilon^2+v^2t^2 \right)- |\boldsymbol k|^2 \end{eqnarray} and the asymmetry seems to disappear from the non-perturbative calculation. This result agrees with our numerical calculation. However, seeing the trajectory, one will find that this model has a rotational motion around the origin (i.e., the ESP). Is it true that the rotational asymmetry considered in Sec.\ref{subsec-simplept} disappears in this model? If the asymmetry disappears, what is the crucial condition?\footnote{In Ref.\cite{Dolgov:1989us}, by considering perturbation, the asymmetry is related to the interference between terms. We are arguing this topic from another viewpoint.} To understand more about the situation, we show the naive Stokes lines\footnote{The Stokes lines are called ``naive'', since careful people will not use these Stokes lines for their calculation, even if they are considering the conventional WKB expansion.
The problems of $O(\hbar)$ terms and their poles are widely known\cite{Berry:1972na} for the conventional WKB expansion.} in Fig.\ref{fig_stokesdoublepoll}, which have a double pole at $t=-i\epsilon /v$ and four turning points.\footnote{Note that the Stokes lines in Fig.\ref{fig_stokesdoublepoll} are obtained from $Q$ itself, not from $Q_0$. Therefore the Stokes lines in Fig.\ref{fig_stokesdoublepoll} do not represent the true Stokes lines.} Seeing the Stokes lines, one can understand the situation. The EWKB Stokes lines appear after gluing the double pole and two turning points together at the origin. In this limit (i.e., $\hbar \rightarrow 0$), the Stokes lines give the simple ``scattering problem with an inverted quadratic potential''\cite{Enomoto:2020xlf}. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{linearD.eps} \caption{Stokes lines for $\epsilon=0.1, v=1, k=1, \hbar=0.1, g=1, s=\pm1$ are given for $Q=Q_0+\hbar Q_{-1}$, not for $Q_0$. One can see four turning points (circles) and a double pole (triangle) near the origin, which are distributed asymmetrically, but this structure disappears in the EWKB.} \label{fig_stokesdoublepoll} \end{figure} Alternatively, using the numerical calculation explained in Appendix \ref{sec_distribution_MF}, we find that the net number $n_k^+-n_k^-$ cannot evolve for the straight motion $\ddot{m}_R=0$ as long as the initial state begins with the zero particle state. In reality, however, the backreaction is important since the particle production is significant in this model, and its backreaction can bend the trajectory to give $\ddot{m}_R\ne 0$.\footnote{In the most significant case, the oscillating field can be ``trapped'' near the ESP\cite{Kofman:2004yc}, which may happen also for higher-dimensional interaction suppressed by the Planck scale\cite{Enomoto:2013mla}.} Therefore, without considering the backreaction, this model cannot generate asymmetry; but the particle production always introduces the backreaction, which is required for the asymmetry, so in reality the asymmetry production cannot be neglected in this model. Below, we consider only the rotational motion of the off-diagonal elements, which depends only on the phase $\theta(t)$. In such models, the Landau-Zener transition is very useful. We are using the EWKB to support the analysis. \subsection{Majorana fermion with time-dependent mass (perturbative expansion)} \label{sec-pert} We consider perturbative expansion of the model considered in Sec.\ref{subsec-simplept}, which is given by \begin{eqnarray} \Delta&=&A e^{i\theta}\simeq A (1+i\theta+...), \end{eqnarray} where $\theta(t)=\omega t$ is the simplest example. The expansion\footnote{The EWKB is an expansion, but it takes the Borel sum to get the non-perturbative result.} is valid only for $\theta\ll 1$. Alternatively, one can choose $\theta(t)=A \cos \omega t$ with $A\ll 1$. In light of the calculation above, this perturbative expansion might have changed the mechanism of the non-perturbative process. The significant discrepancy appears when $\theta = \omega t$. Indeed, the original theory does not allow a non-adiabatic transition, while in the perturbed theory the transition is solved as the quantum scattering by a potential (after decoupling the equations). Therefore, in this example, one has to conclude that the essential mechanism of the transition has been changed by taking the perturbative expansion.
In the light of the EWKB, this is because the structure of the Stokes lines, which determines the transfer matrix, is changed by the perturbation. Therefore, to avoid such a discrepancy, one has to consider an expansion that does not change the essential property of the Stokes lines. We show some typical examples in appendix \ref{app-reviewEWKB}. If the particle production is of the Landau-Zener type, one has to check first the global structure of the Stokes lines of the EWKB, and the local (linear) expansion has to be taken around the points where the Stokes lines cross the real-time axis. Indeed, the original Landau-Zener model considers local expansion around the state-crossing point, where the global structure of the Stokes lines has the separable form of the MTP (Merged pair of simple Turning Points). In this paper, we often consider similar expansions. Note however that the particle production is not always explained by the Landau-Zener type transition. We sometimes compare the Landau-Zener type calculation with the EWKB Stokes lines. In our analysis, the Landau-Zener type transition is very useful for understanding the asymmetry. \subsection{Majorana fermion with time-dependent mass \label{sec-majcos} (The EWKB for $\Delta(t)=m_0 e^{i\theta}$, $\theta(t)\equiv A\cos(\omega_a t/\hbar)$)} \label{sec-oscphase} In this section, we consider \begin{eqnarray} \theta(t)&\equiv& A\cos(\omega_a t/\hbar)\nonumber\\ \Delta(t)&=&m_0 e^{i\theta} \end{eqnarray} and draw the Stokes lines of Eq.(\ref{Eq_2ndorder-Majorana}). Note that \begin{eqnarray} \frac{\dot{\Delta}^*}{\Delta^*}&=&-i\dot{\theta}\nonumber\\ &=&\frac{iA\omega_a}{\hbar}\sin\left(\frac{\omega_a t}{\hbar}\right)\nonumber\\ &\sim&O(\hbar^{-1}) \end{eqnarray} is important for the EWKB calculation.\footnote{In the EWKB, analytic evaluation of the integral is usually very difficult, despite the transparency of the qualitative analysis based on the Stokes lines. In this paper, we are using the EWKB for qualitative analysis.} What is important for the asymmetry is the term proportional to $s$. Terms proportional to $\dot{D}$ will disappear since $D$ is constant. Therefore, the $s$-dependence comes from the factor \begin{eqnarray} -iD\frac{\dot{\Delta}^*}{\hbar\Delta^*}&=& -\frac{i(-s|\boldsymbol k|)}{\hbar}\frac{iA\omega_a}{\hbar}\sin\left(\frac{\omega_a t}{\hbar}\right)\nonumber\\ &=& -\frac{sA|\boldsymbol k|\omega_a}{\hbar^2} \sin\left(\frac{\omega_a t}{\hbar}\right). \end{eqnarray} This term is suppressed in the non-relativistic limit ($|\boldsymbol k|\rightarrow 0$). Therefore, the asymmetry is not significant in the non-relativistic limit. This result may seem inconsistent with intuition, since in the non-relativistic limit, violation of the helicity could be significant. We will explain the reason later in this section. To understand more about the origin of the asymmetry, we have calculated the Stokes lines for a typical $Q_0$, which is shown in Fig.\ref{fig_exactpm} together with the ``potentials''. We have excluded the imaginary part for simplicity. See Appendix \ref{app-reviewEWKB} for more details. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{exactpm.eps} \caption{The Stokes lines and the potential are shown for $Q(t)= -1.5 - 0.8\,s \sin(2t) - \sin^2(2t)$ and $V(t)=- 0.8\,s \sin(2t) - \sin^2(2t)$.
The upper panel is for $s=+1$ and the lower panel is for $s=-1$.} \label{fig_exactpm} \end{figure} Seeing Fig.\ref{fig_exactpm} and the solutions of $Q_0=0$, one can understand that the basic structures of the MTPs\footnote{In terms of the EWKB, each MTP gives a transfer matrix similar to the scattering by an independent inverted quadratic potential, which is calculable\cite{Enomoto:2020xlf,Sueishi:2020rug, Taya:2020dco, Sueishi:2021xti}. In this sense, each MTP is separable. See also Appendix \ref{app-reviewEWKB} for more details.} are completely the same but their positions are shifted by changing the sign of $s$. Therefore, for an eternal oscillation, the production becomes symmetric on average, but for a damped oscillation, the asymmetry appears because the particle production is not simultaneous for different $s=\pm1$. The time-dependent amplitude of the oscillation generates different number densities at each MTP. The asymmetry is therefore determined by the time-dependence of the amplitude. Particle production is exclusive at each MTP.\footnote{When $s=1$ is generated, the other ($s=-1$) is not generated. Simultaneous production is possible only when $k=0$, where the asymmetry vanishes. Of course, particle production is not instant in reality.} One thing that has to be clarified is the vanishing asymmetry in the limit of $|{\boldsymbol k}|\rightarrow 0$. This phenomenon can be understood easily by considering the original equation (Landau-Zener type). In Fig.\ref{fig_cosk}, we show the motion of the two states for $\hat{H}$. In the left panel, $s=\pm 1$ states are distinguishable when $|{\boldsymbol k}|\ne 0$, while in the right panel, $s=\pm 1$ states are not distinguishable when $|{\boldsymbol k}|= 0$. Simultaneous particle production is possible only when $k=0$, which corresponds to vanishing asymmetry. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{smallkcos.eps} \caption{After using Eq.(\ref{eq-normalEWKBtrans}), the time-dependence appears only in the diagonal elements. Left: As long as $|\boldsymbol k|\ne 0$, one can discriminate $s=1$ and $s=-1$. Right: For $|\boldsymbol k|= 0$, particle production cannot discriminate the helicity.} \label{fig_cosk} \end{figure} One can compare the above result with the earlier calculation given in Ref.\cite{Dolgov:1996qq, Enomoto:2018yeu}. In Ref.\cite{Dolgov:1996qq}, perturbative expansion has been used to expand $e^{i\theta}$ to calculate non-perturbative particle production, where the asymmetry appears from the interference between terms. In our analysis, the asymmetry appears because the particle production is exclusive.\footnote{Note however that the asymmetry depends on $k$. One can see from Fig.\ref{fig_cosk} that states are symmetric for $k\simeq 0$. Therefore, the total number density after $k$ integration may not be completely exclusive due to the symmetric particle production near $k\sim 0$.} In Ref.\cite{Enomoto:2018yeu}, it has been shown for a single Dirac fermion that a crucial cancellation prevents asymmetry production. The reason is very simple. For a single Dirac fermion, the complex mass has to be introduced as \begin{eqnarray} m_D \overline{\psi_L}\psi_R +m_D^* \overline{\psi_R}\psi_L, \end{eqnarray} which flips the rotational direction under the exchange $L\leftrightarrow R$. More precisely, since the exchange $L\leftrightarrow R$ reverses the rotational motion of the complex Dirac mass, ``matter production'' of the left-handed fermion occurs simultaneously with ``antimatter production'' of the right-handed fermion, and vice versa.
To avoid this cancellation, one has to introduce more than one Dirac fermion, whose off-diagonal elements (interactions) are given by \begin{eqnarray} \left[m_{\Delta} (\overline{\psi_{Lj}}\psi_{Ri} +\overline{\psi_{Rj}}\psi_{Li})+h.c.\right]. \end{eqnarray} or \begin{eqnarray} \left[m_{\Delta} \overline{\psi_{Lj}}\psi_{Ri} +h.c.\right]. \end{eqnarray} We will go back to this topic in Sec.\ref{sec-concex}. \subsection{Majorana fermion with time-dependent mass ($\theta(t)=A/(1+e^{-at/\hbar})$)} \label{subsec-onetime} In this section, we consider the simple rotational motion of the phase. As we will see later in this section, the model clearly describes the origin of the asymmetry in particle production. We consider \begin{eqnarray} \theta(t)=\frac{A}{1+e^{-at/\hbar}}, \end{eqnarray} which gives a transition of the phase from $\theta_i=0$ to $\theta_e=A$ around $t=0$. We show the motion of $\theta(t)$ and $\dot{\theta}(t)$ in Fig.\ref{fig_trans-easy1}. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{easy.eps} \caption{$\theta(t)$ and $\dot{\theta}(t)$ for $\theta(t)=A/(1+e^{-at/\hbar})$.} \label{fig_trans-easy1} \end{figure} We take \begin{eqnarray} m_R(t)&=&g\varphi (t)\equiv g\varphi_0 e^{i\theta(t)}\nonumber\\ \theta(t)&\equiv&\frac{A}{1+e^{-at/\hbar}}, \end{eqnarray} where $\varphi_0$ is real. Again, one can remove the time-dependence of the off-diagonal elements using the transformation\footnote{See Eq.(\ref{eq-normalEWKBtrans}). This transformation was originally introduced to get the standard form of the EWKB formulation from the decoupled equations. At the same time, it removes the time dependence of the off-diagonal elements.} \begin{eqnarray} \hat{\Psi}&\equiv& U_T^{-1} \Psi,\nonumber\\ U_T&\equiv& \left( \begin{array}{cc} e^{-i\theta(t)/2} & 0\\ 0 & e^{i\theta(t)/2} \end{array} \right). \end{eqnarray} The Hamiltonian after the transformation becomes \begin{eqnarray} \hat{H}&\equiv& U_T^{-1}H U_T -i\hbar U_T^{-1}\dot{U}_T\nonumber\\ &=& \left( \begin{array}{cc} -s|\boldsymbol k|-\frac{1}{2} \hbar\dot{\theta} & s g\varphi_0\\ s g^* \varphi_0 & s|\boldsymbol k|+\frac{1}{2} \hbar\dot{\theta} \end{array} \right). \end{eqnarray} Note that the new terms in the diagonal elements do not have ``$s$'' in their coefficients. The new Hamiltonian is quite useful for our discussion. One can see in Fig.\ref{fig_trans-easy} that the new term ($\propto \dot{\theta}$) makes a bump around $t=0$ and the intersection (in the sense of the original Landau-Zener model) appears only for $s=-1$, because of the signs in front of $\dot{\theta}$. This means that for a certain range of $k$, only $s=-1$ particles can experience the Landau-Zener type transition. The particle production seems exclusive in this case, at least for the Landau-Zener type particle production.\footnote{In appendix \ref{app-reviewEWKB}, we discuss the possibility of particle production without crossing (not of the Landau-Zener type).} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{easyplus.eps} \includegraphics[width=0.9\columnwidth]{easyminus.eps} \caption{Upper: Time-dependence of the diagonal elements of $\hat{H}$ for $s=+1$ and $k\ne 0$, in which one can see no crossing. Lower: Time-dependence of the diagonal elements of $\hat{H}$ for $s=-1$ and $k\ne 0$, in which each state crossing causes a Landau-Zener transition.} \label{fig_trans-easy} \end{figure} In this model, one can see state crossing of $s=-1$ for $k<k_*\equiv \frac{aA}{8}$.
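This threshold is straightforward to check. The short sketch below was written for this presentation (the values $a=1$, $A=\pi/3$, $\hbar=1$ are the illustrative choices also used in our numerics); it evaluates $\dot{\theta}(t)$ for the kink profile and tests the $s=-1$ crossing condition $|k|<\hbar\,{\rm Max}[\dot{\theta}]/2=aA/8$.
\begin{verbatim}
import numpy as np

a, A, hbar = 1.0, np.pi/3, 1.0     # illustrative parameters
t = np.linspace(-20.0, 20.0, 40001)

# theta(t) = A/(1 + exp(-a t/hbar))  ->  its time derivative:
theta_dot = (a*A/hbar)*np.exp(-a*t/hbar)/(1.0 + np.exp(-a*t/hbar))**2

k_star = a*A/8.0
print("max hbar*theta_dot/2 :", hbar*theta_dot.max()/2, "  aA/8 :", k_star)

for k in [0.5*k_star, 0.9*k_star, 1.5*k_star]:
    # s = -1 upper diagonal: |k| - hbar*theta_dot/2 ;
    # a sign change signals the Landau-Zener state crossing
    crossing = (k - hbar*theta_dot/2).min() < 0.0
    print("k/k_* = %.1f : crossing = %s" % (k/k_star, crossing))
\end{verbatim}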
Since $A$ defines the amplitude of $\theta(t)$, we found $n\propto k_*^3\propto A^3$ in this model, which supports the claim given in Ref.\cite{Dolgov:1996qq}.\footnote{The required condition for the state crossing is calculated from the maximum of $\dot{\theta}$ (i.e., the height of the bumps). For $|k|>\hbar \mathrm{Max}[\dot{\theta}]/2 =|\hbar \dot{\theta}(0)|/2=a A/8$, the states are always apart.} Considering the original Landau-Zener model, the probability of the transition (i.e., the magnitude of the particle production) is determined by the velocity at each crossing point ($t_{1,2}$); significant production requires \begin{eqnarray} \frac{|g\varphi_0|^2}{v(t_i)}&<&1\\ v(t_i)&\equiv& \dot{\hat{D}}(t_i)\equiv\hbar\frac{\ddot{\theta}(t_i)}{2}. \end{eqnarray} Note that in this formulation, the $k$-dependence appears through $t_i$. In this argument, the radius of the Fermi sphere is important for estimating the number density. For larger $k$, the tops of the two bumps meet at \begin{eqnarray} k_*&\equiv&\frac{aA}{8}, \end{eqnarray} where the velocity of the state vanishes ($\ddot{\theta}\simeq 0$). See Fig.\ref{fig_trans-noteasy}. Here the conventional Landau-Zener model, which uses the linear expansion at the state-crossing point, does not give the correct answer. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{fourth.eps} \caption{The upper and the lower states start to fall apart for larger $|k|$, and the crossing disappears near $k\sim k_*$. At this moment $\ddot{\theta}$ vanishes and a higher-order calculation becomes important\cite{Enomoto:2020xlf}.} \label{fig_trans-noteasy} \end{figure} The transition of this kind has already been calculated in Ref.\cite{Enomoto:2020xlf}, in which the scattering problem with an inverted quartic potential has been considered. Near the top of the bumps, we considered in Ref.\cite{Enomoto:2020xlf} the equation given by \begin{eqnarray} i\hbar\frac{d}{dt}\left( \begin{array}{c} \psi_1\\ \psi_2 \end{array} \right)&=&\left( \begin{array}{cc} -(a t^2+\epsilon)& \Delta \\ \Delta& at^2+\epsilon \end{array} \right) \left( \begin{array}{c} \psi_1\\ \psi_2 \end{array} \right), \end{eqnarray} which, after decoupling, gives the following equation \begin{eqnarray} \left[\hbar^2 \frac{d^2}{dt^2}+\left(\Delta^2-i(2at)\hbar\right)+\left(at^2+\epsilon\right)^2\right]\psi_1&=&0. \end{eqnarray} In terms of the EWKB, $Q_0$ of the equation is \begin{eqnarray} \label{eq-fermion1} Q_0&=&-\Delta^2-\left(at^2+\epsilon\right)^2, \end{eqnarray} where the ``potential'' is \begin{eqnarray} V(t)&=&-a^2t^4-2a\epsilon t^2<0 \end{eqnarray} and the ``energy'' is $E=\Delta^2+\epsilon^2>0$. We thus find \begin{eqnarray} Q_0(t)&=&-\Delta^2-\epsilon^2- a^2 t^4-2a\epsilon t^2. \end{eqnarray} For the model considered in this section, we have \begin{eqnarray} \dot{\theta}(t)&\simeq& 2k_*+\frac{1}{2}\dddot{\theta}(0)t^2+...\\ \dddot{\theta}(0)&\equiv& -4a_2, \end{eqnarray} which gives the Hamiltonian for $s=-1$ \begin{eqnarray} \hat{H} &=& \left( \begin{array}{cc} \Delta k +a_2t^2 & - g\varphi_0\\ - g^* \varphi_0 & -\Delta k -a_2t^2 \end{array} \right), \end{eqnarray} where we introduced $\Delta k \equiv |k|-k_*$. There is no state crossing for $|k|>k_*$ ($\Delta k>0$), but solving the scattering problem of the quartic potential, one can see that the transition can still be significant\cite{Enomoto:2020xlf}. Transforming the above equation into the form $Q(x)=-\kappa_4 -x^4$, one finds that significant particle production requires $\kappa_4\lesssim 1$, which determines the radius of the Fermi sphere.
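To illustrate this point, the sketch below (with illustrative parameters $g\varphi_0=0.3$, $a_2=1$, $\hbar=1$; it is not a reproduction of the numerics of Ref.\cite{Enomoto:2020xlf}) integrates the two-level system with the quadratic diagonal elements for several values of $\Delta k$, starting from the lower adiabatic state, and prints the non-adiabatic transition probability; the transition need not vanish for $\Delta k>0$, where there is no state crossing.
\begin{verbatim}
import numpy as np

g_phi0, a2 = 0.3, 1.0              # illustrative parameters (hbar = 1)
T, n = 10.0, 150000
h = 2*T/n

def transition(dk):
    def rhs(t, psi):
        d = dk + a2*t**2
        H = np.array([[ d,      -g_phi0],
                      [-g_phi0, -d     ]])
        return -1j*H.dot(psi)
    psi, t = np.array([0.0+0j, 1.0+0j]), -T   # lower state at t = -T
    for _ in range(n):             # fixed-step RK4
        k1 = rhs(t, psi); k2 = rhs(t+h/2, psi+h/2*k1)
        k3 = rhs(t+h/2, psi+h/2*k2); k4 = rhs(t+h, psi+h*k3)
        psi += h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return abs(psi[0])**2          # non-adiabatic transition probability

for dk in [-0.2, 0.0, 0.2, 0.5, 1.0]:
    print("Delta k = %+.1f :  P = %.4f" % (dk, transition(dk)))
\end{verbatim}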
Although rather crude, one can see that when the tops of the bumps coincide at $\Delta k=0$, the Fermi sphere can be larger than $k_*$ if $a \ge (g^2\varphi_0^2+\Delta k)^{1/2} A^{1/3}$ is satisfied. Here, $A$ represents the variation (amplitude) of $\theta(t)$ and $a$ determines the speed of the transition (or the width of the domain wall). The above condition shows that particle production is more efficient when the variation of $\theta$ is larger and the speed of the transition is higher. This result coincides with intuition. For $A\sim O(1)$, $a\sim g\varphi_0$ gives the Fermi sphere $k_F=k_*\simeq g\varphi_0 /8$. Variation from this point will change the Fermi sphere from the naive estimation $k_F\sim k_*$. We show our numerical results\footnote{In our numerical calculation, the wave functions $u_k^s$ and $v_k^s$ appearing in (\ref{eq_expansion_MF}) are solved numerically, and they are converted into the distribution function. In Appendix \ref{sec_distribution_MF} we show how to calculate the distribution function from the wave functions.} in Fig.\ref{fig_majorana_numerical}, which depict the shapes of the distributions (upper panel) and the time evolution of the number density for each state (lower panel). One can see clearly that $s=+1$ production is suppressed compared with $s=-1$. (Note that in the lower panel the number densities are given on a log scale.) \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{majorana_distribution2.eps} \includegraphics[width=1.0\columnwidth]{majorana_density.eps} \caption{The results of the numerical calculation with the parameters $|m_R|=a=1.0, A=\pi/3.$ Upper: shapes of the distributions for each helicity state after the production ($t=20$). Lower: time evolution of the number densities (log scale).} \label{fig_majorana_numerical} \end{figure} In this model, the MTP structure always appears twice during particle production. When one calculates the Fermi sphere of particle production, this double MTP structure causes a less trivial modification of the distribution, as long as the particles do not decay between the two MTPs. The typical distributions are shown in Fig.\ref{fig_double-cross} for $\kappa(\Delta)\propto \Delta^2$. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{doubleMTP.eps} \caption{Typical distribution functions for $\kappa(\Delta)$ are shown for the single and the double MTP models.} \label{fig_double-cross} \end{figure} Let us explain why this distribution is very important for our scenario. Since the off-diagonal elements represent the symmetry-violating interaction, the asymmetry in the $g\varphi_0\rightarrow 0$ limit is very important. For the Majorana fermion, since the off-diagonal elements represent the Majorana mass, small $|\Delta|$ seems to make the particle production easier. This speculation is true for a single MTP scenario, but for our double MTP scenario, one has to consider Fig.\ref{fig_double-cross} for the distribution. In the $g\varphi_0\rightarrow 0$ limit, the particle production vanishes, as one can see from Fig.\ref{fig_double-cross}. Therefore, a tiny interaction suppresses particle production (and asymmetry). From the above results, one can see that motion with constant $\dot{\theta}$ (i.e., $\ddot{\theta}=0$ at all times) gives no state crossing in this model. Although very crude, one can estimate the particle production when cosmological domain walls decay.
For the typical cosmological domain wall of tension $\sigma \sim \Lambda^3$ and width $\delta_w\sim \Lambda^{-1}$ decaying at the density $\rho\sim \sigma^2/M_p^2$, the number density of the produced particles can be estimated as $k_F\sim 0.1 \Lambda$ for a fast-moving wall. However, since efficient particle production causes friction to the domain wall motion, the maximum number density should be estimated as $n\lesssim \rho/\Lambda\sim \Lambda^5/M_p^2$ when the temperature of the Universe is $T_D\sim \sqrt{\sigma/M_p}$. We conclude that asymmetric particle production can be efficient for typical cosmological domain walls. \section{Dirac fermions} \label{sec-concex} As we have mentioned previously, the asymmetry will be canceled for a single Dirac fermion. Let us see more details of the cancellation mechanism. For a single Dirac fermion, one may find complex mass terms as \begin{eqnarray} \left[m_{\Delta} \overline{\psi_{L}}\psi_{R} +m_{\Delta}^* \overline{\psi_{R}}\psi_{L}\right]. \end{eqnarray} Following the strategies we have considered in this paper, one can explicitly calculate particle production for four species: left- and right-handed matter and antimatter. Our observation is very simple. Since the $L\leftrightarrow R$ exchange flips the rotational direction, one can immediately find that $L$ and $R$ productions are exclusive. Since matter-antimatter production is also exclusive, L-matter production and R-antimatter production are simultaneous. Regarding this particle production as the decay process, one can say that depending on the sign of $\dot\theta$, the field $\theta$ ``selectively'' decays into L-matter and R-antimatter (or L-antimatter and R-matter for the opposite sign). Therefore, for a single Dirac fermion, direct asymmetry production is impossible in total. Although a single Dirac fermion is excluded for the (direct) baryogenesis, there is hope in multiple Dirac fermions. To show an example, let us consider a fictitious ``lepton'' $\psi^l$ and a ``quark'' $\psi^q$ for the calculation and suppose that each fermion is massless (i.e., we are considering a naive relativistic limit) but they have the Yukawa interaction \begin{eqnarray} \label{eq-QLint} &&g\phi \overline{\psi^l}\psi^q + h.c\nonumber\\ &=&g\phi(\overline{\psi^l_R}\psi^q_L+\overline{\psi^l_L}\psi^q_R)+h.c. \end{eqnarray} Since in this case the $L\leftrightarrow R$ exchange does not flip the rotational direction, $\psi^q_L$-matter production and $\psi^q_R$-matter production occur simultaneously, and they are exclusive to $\psi^q_L$-antimatter and $\psi^q_R$-antimatter production. We thus find a net baryon number from the cosmological particle production. On the other hand, considering the $l\leftrightarrow q$ exchange, $\psi^q_{L,R}$-matter production is exclusive to $\psi^l_{L,R}$-matter production. We thus find that a negative lepton number has to be generated simultaneously with a positive baryon number, and vice versa. This result is consistent with effective charge conservation. The trick of this baryogenesis scenario is very simple in the massless limit. If one rearranges the pairs as $\psi_1=(\psi_{1L},\psi_{1R})\equiv(\psi^q_{L},\psi^l_{R}$) and $\psi_2=(\psi_{2L},\psi_{2R})\equiv(\psi^l_{L},\psi^q_{R}$), the Yukawa interaction can be written as \begin{eqnarray} &&\left[g\phi\overline{\psi^l_R}\psi^q_L+h.c. \right]+ \left[g\phi\overline{\psi^l_L}\psi^q_R+h.c. \right]\nonumber\\ &=& \left[g\phi\overline{\psi_{1R}}\psi_{1L}+h.c. \right]+ \left[g^*\phi^*\overline{\psi_{2R}}\psi_{2L}+h.c.
\right], \end{eqnarray} which represents the complex Dirac mass terms for two independent Dirac fermions $\psi_{1,2}$. In the original $\psi^q$ and $\psi^l$, the ``exclusive pairs'' are rearranged to avoid the cancellation of the net B and L numbers. Remember that in the previous (Majorana) models, the asymmetry vanishes for $\boldsymbol k\rightarrow 0$. Therefore, we are going to discuss the non-relativistic limit, where the original mass terms \begin{eqnarray} \label{eq-Diracmass-QL} m_{q}\overline{\psi^q}\psi^q- m_{l}\overline{\psi^l}\psi^l \end{eqnarray} are not negligible but the momentum $\boldsymbol k\simeq 0$ is negligible for both fermions. Here, the minus sign in front of $m_l$ is for later convenience. We also assume $m_q\simeq m_l$ for simplicity. First, consider the conventional decomposition \begin{eqnarray} \psi&=&\int\frac{d^3k}{(2\pi)^3} e^{-i\boldsymbol k\cdot \boldsymbol x}\sum_s\left[ u^s_{\boldsymbol k}(t)a^s_{\boldsymbol k} +v^s_{\boldsymbol k}(t)b^{s\dagger}_{-\boldsymbol k}\right] \end{eqnarray} and the single-field Dirac equation \begin{eqnarray} (i\slashed{\partial}-m_D)\psi&=&0. \end{eqnarray} Carefully following the formalism given in Ref.\cite{Peloso:2000hy}, one will find \begin{eqnarray} \dot{u}_\pm&=&ik u_{\mp}\mp i m_D u_\pm, \end{eqnarray} which can be written in matrix form as \begin{eqnarray} i\frac{d}{dt}\left( \begin{array}{c} u_+\\ u_- \end{array} \right)&=&\left( \begin{array}{cc} m_D& -k \\ -k & -m_D \end{array} \right) \left( \begin{array}{c} u_+\\ u_- \end{array} \right). \end{eqnarray} Diagonalizing this equation for constant matrix elements, one will find the adiabatic states (the WKB solutions) as \begin{eqnarray} i\frac{d}{dt}\left( \begin{array}{c} \tilde{u}_+\\ \tilde{u}_- \end{array} \right)&=&\left( \begin{array}{cc} \sqrt{m_D^2+k^2}& 0 \\ 0 & -\sqrt{m_D^2+k^2} \end{array} \right) \left( \begin{array}{c} \tilde{u}_+\\ \tilde{u}_- \end{array} \right).\nonumber\\ \end{eqnarray} The original $u_\pm$ and $\tilde{u}_\pm$ coincide at $k\sim 0$. Let us introduce the interaction given in Eq.(\ref{eq-QLint}) and take $\boldsymbol k \rightarrow 0$. Assuming $m_q$ and $m_l$ are constant, we find \begin{eqnarray} \label{eq-q+} i\frac{d}{dt}\left( \begin{array}{c} u_{+}^q\\ u_{+}^l \end{array} \right)&=&\left( \begin{array}{cc} m_q & m_\Delta^* \\ m_\Delta & -m_l \end{array} \right) \left( \begin{array}{c} u_{+}^q\\ u_{+}^l \end{array} \right). \end{eqnarray} and \begin{eqnarray} i\frac{d}{dt}\left( \begin{array}{c} u_{-}^q\\ u_{-}^l \end{array} \right)&=&\left( \begin{array}{cc} -m_q & -m_\Delta^* \\ -m_\Delta & +m_l \end{array} \right) \left( \begin{array}{c} u_{-}^q\\ u_{-}^l \end{array} \right). \end{eqnarray} At this moment, the cancellation may not be clear from the above equations. Decoupling the equations, we find \begin{eqnarray} \ddot{u}_{+}^q+\left[-\frac{\dot{m}_\Delta^*}{m_\Delta^*}+i(m_q-m_l)\right]\dot{u}_{+}^q&&\nonumber\\ +\left[|m_\Delta|^2+m_q m_l-i\frac{\dot{m}_\Delta^*}{m_\Delta^*} m_q\right]u_{+}^q&=&0. \end{eqnarray} Redefining the interaction as $m_\Delta=m_0 e^{i\theta(t)}$, we have \begin{eqnarray} \frac{\dot{m}_\Delta^*}{m_\Delta^*}&=&-i\dot{\theta}.
\end{eqnarray} Also defining $\delta M\equiv m_q-m_l\ge0$, the equation becomes \begin{eqnarray} &&\ddot{u}_{+}^q+i\left[\dot{\theta}+\delta M\right]\dot{u}_{+}^q\nonumber\\ &&+\left[|m_\Delta|^2+ m_q m_l-\dot{\theta} m_q\right]u_{+}^q=0 \end{eqnarray} The standard form of the EWKB will be obtained using \begin{eqnarray} \label{eq-st-ql} \hat{u}_{+}^q&\equiv&\exp \left(\frac{i}{2}\int^t\left[\dot{\theta}+\delta M\right]dt\right)u_{+}^q. \end{eqnarray} This also removes the time-dependence of the off-diagonal elements of the Landau-Zener formalism. Finally, we have \begin{eqnarray} &&\ddot{\hat{u}}_{+}^q+\left[ -\frac{i}{2}\ddot{\theta}+\frac{(\dot{\theta}+\delta M)^2}{4}\right.\nonumber\\ &&\left.+m_0^2+m_l m_q-\dot{\theta} m_q \right]\hat{u}_{+}^q\nonumber\\ &=&0. \end{eqnarray} It seems difficult to understand the situation from the decoupled equations. Therefore, we are going back to the original matrix form. Applying Eq.(\ref{eq-st-ql}) to Eq.(\ref{eq-q+}) (together with the analogous phase redefinition of $u_{+}^l$), one finds \begin{eqnarray} i\frac{d}{dt}\left( \begin{array}{c} \hat{u}_{+}^q\\ \hat{u}_{+}^l \end{array} \right)&=&\left( \begin{array}{cc} M_{1/2} -\frac{\dot{\theta}}{2} & m_0 \\ m_0 & -M_{1/2}+\frac{\dot{\theta}}{2} \end{array} \right) \left( \begin{array}{c} \hat{u}_{+}^q\\ \hat{u}_{+}^l \end{array} \right), \end{eqnarray} and \begin{eqnarray} i\frac{d}{dt}\left( \begin{array}{c} \hat{u}_{-}^q\\ \hat{u}_{-}^l \end{array} \right)&=&\left( \begin{array}{cc} -M_{1/2} -\frac{\dot{\theta}}{2} & -m_0 \\ -m_0 & M_{1/2}+\frac{\dot{\theta}}{2} \end{array} \right) \left( \begin{array}{c} \hat{u}_{-}^q\\ \hat{u}_{-}^l \end{array} \right), \end{eqnarray} where $M_{1/2}\equiv (m_q+m_l)/2$. This form is by now very familiar in this paper. From the above equations, we found that the situation is quite similar to the single Dirac fermion, and the matter-antimatter asymmetry is indeed canceled (i.e., net baryon and lepton numbers vanish) in the non-relativistic limit. For completeness, let us consider a generalization of the scenario. Since the left-handed and the right-handed fermions are independent particles, one can also consider the Yukawa interactions written as \begin{eqnarray} \left[m_{\Delta ij} \overline{\psi_{Lj}}\psi_{Ri} +h.c.\right]+\left[m_{\Delta ji} \overline{\psi_{Li}}\psi_{Rj} +h.c.\right], \end{eqnarray} where $m_{\Delta ij}\ne m_{\Delta ji}$ is possible. \section{Conclusions and discussions} As we have seen explicitly, the origin of the asymmetry in the particle production from the rotational motion is essentially the exclusive particle production. This property has been found by using the Landau-Zener model and the EWKB Stokes lines for the decoupled equations. We also pointed out that the perturbative expansion may sometimes change the Stokes lines. To avoid such a discrepancy, one has to draw the Stokes lines of the original theory first and consider perturbation at the points where the Stokes lines cross the real-time axis. For the Dirac fermion, because of the left- and right-handed components, the total asymmetry cancels for the single-field model. For multiple Dirac fermions, we have seen that the cancellation can be avoided in the relativistic limit. In all cases, the asymmetry vanishes for $k\rightarrow 0$. Perturbative expansion is sometimes used before a non-perturbative calculation. Our concern was that such simplification may change the structure of the Stokes lines of the original theory, so that the theory after the perturbative expansion cannot reproduce the physics of the original theory.
For physics, the two theories need not always be identical, but the difference has to be under control. To avoid the problem, one has to first draw the Stokes lines of the original theory to find a separable structure of the Stokes lines, where local expansion is applicable. Indeed, the idea described in Zener's original paper\cite{Zener:1932ws} follows this prescription. This strategy is crucial for a damped oscillation, as is shown in Fig.\ref{fig_exactpm}. We started with the simplest example (i.e., simple rotation $\Delta(t)=Ae^{i\omega t}$), and carefully examined the essential mechanism of the asymmetry production for typical situations, comparing the structure of the Stokes lines and the original equations. \section{Acknowledgment} The authors would like to thank Nobuhiro Maekawa for collaboration in the very early stages of this work. SE was supported by the Sun Yat-sen University Science Foundation.
\section{Introduction} The low Mach number limit for classical solutions to the full compressible Navier-Stokes equations was studied notably by {\sc T. Alazard} in \cite{Al06}. When the large temperature variations and thermal conduction are taken into account, the limit system reads (see (1.7) page 7 in \cite{Al06}) \begin{equation}\label{Lim} \begin{array}{c} \gamma P_0 \Div \vu = -(\gamma -1) \kappa \Div\vc{Q},\\ \vr \lr{\pt \vu+ \vu\cdot \nabla \vu} +\nabla \pi =- \delta \Div \vc{S} ,\\ \vr C_P \lr{\partial_t T + \vu \cdot\nabla T} =- \kappa \Div\vc{Q}, \end{array} \end{equation} where the unknowns $\vr,\vu,\pi,T$ denote the fluid density, velocity vector field, pressure and temperature, respectively, and $\vc{S}$ and $\vc{Q}$ denote the viscous tensor and the heat flux.\\ The two dimensionless parameters distinguished in \cite{Al06}, $$ \delta \in [0,1], \qquad \kappa \in [0,1],$$ are the inverses of the {\it Reynolds} number and the {\it P\'eclet} number, respectively, measuring the importance of the viscosity and the heat conduction.\\ The density and temperature of the fluid are related by \eq{\vr = P_0/(RT)\label{press},} where $P_0$ denotes the constant pressure at spatial infinity, $C_P=\gamma C_V$, $C_V = R/(\gamma-1)$, $\gamma>1$ and $R>0$ are two constants. Given relation \eqref{press}, the set of unknowns for system \eqref{Lim} can be reduced to $\{\vr,\vu,\pi\}$ or to $\{\vu,\pi,T\}$ equivalently.\\ System \eqref{Lim} is complemented by the Newton rheological law for the viscous tensor, namely $$\vc{S} = -2 \mu D(\vu) -\lambda \Div \vu \, \vc{I} $$ and the Fourier law for the heat flux \eq{\vc{Q}=-k\nabla T,\label{Q_form}} where $D(\vu) = \frac{1}{2}(\nabla \vu + \nabla^t \vu)$, $\vc{I}$ is the identity matrix, $\mu$ and $\lambda$ are the viscosity coefficients and $k$ is the heat conductivity coefficient, all assumed to depend smoothly on the temperature and thus, by relation \eqref{press}, on the density.\\ Using \eqref{press} and \eqref{Q_form}, system \eqref{Lim} may be replaced by \begin{equation}\label{Limref} \begin{array}{c} \partial_t \vr + \Div (\vr \vu) = 0, \\ \pt (\vr \vu) + \Div (\vr \vu \otimes \vu) +\nabla \pi = -\delta \Div\vc{S} ,\\ \Div \vu =\displaystyle \frac{(\gamma -1) }{\gamma R} \kappa \Div\lr{{k}\Grad \lr{ \frac{1}{\vr}}}. \end{array} \end{equation} Assuming $\delta =1$ and including the dependence on the temperature in the coefficient $k$, the following system is obtained \begin{equation}\label{Limref2} \begin{array}{c} \partial_t \vr + \Div (\vr \vu) = 0, \\ \pt (\vr \vu) + \Div (\vr \vu \otimes \vu) +\nabla \pi = 2 \Div (\mu(\vr) D(\vu)) + \nabla (\lambda(\vr) \Div \vu) ,\\ \Div \vu = - 2 \kappa \Delta \varphi(\vr), \end{array} \end{equation} with $\varphi$--an increasing function of $\vr$. Depending on the nonlinearity $\varphi$, such systems are used to model various phenomena like the motion of mixtures and avalanches, salt and pollutant spreading, or combustion. In the recent paper \cite{GoVa} a more complex system is derived to model a flow of mixture in the multi-dimensional setting and investigated in the one-dimensional domain. \medskip Several authors, such as {\sc H. Beir\~{a}o Da Veiga} or {\sc P. Secchi} \cite{Ve, Se2}, have considered the problem of existence of local strong solutions to system \eqref{Limref2}. The interested reader is referred to the recent interesting work by {\sc R. Danchin} and {\sc X.
Liao} \cite{DaLi12}, where the existence of global solutions in homogeneous Besov spaces with critical regularity is proven assuming the initial density close to a constant and the initial velocity small enough. Concerning the global in time existence, {\sc A.~Kazhikhov, S.~Smagulov} showed in \cite{KaSm77} that system \eqref{Limref2} with a modified convective term and small $\kappa$ possesses a global in time generalized solution that is unique in a two-dimensional domain. {\sc P. Secchi} in \cite{Se1} proved the existence of a (unique) global solution for two-dimensional flows when the diffusion coefficient $\kappa$ is small. He also considered the convergence (as $\kappa \to 0$) towards the corresponding solutions of the nonhomogeneous Navier--Stokes system in the two- and three-dimensional cases. In \cite{DaLi12}, the authors proved the global existence result in critical spaces if the density is close to a constant and if the initial velocity is small enough. In \cite{PLL} {\sc P.--L.~Lions} showed, in the two-dimensional case, that for a positive conductivity coefficient and $\varphi= {-1/\vr}$, a small perturbation of a constant density provides global existence of weak solutions without restriction on the initial velocity. He left the generalization of this result to the three-dimensional case as an open problem. \medskip The first global existence result of weak solutions without smallness assumption was obtained by {\sc D. Bresch}, {\sc E.H. Essoufi} and {\sc M. Sy} in \cite{BrEsSy07} when a certain algebraic relation between $\mu$ and $\kappa \varphi$ is assumed, namely \eq{\varphi'(s) = \mu'(s)/s \quad \text{and} \quad \kappa=1,\label{change1}} for which the third equation in \eqref{Limref2} becomes $$\Div\vu=-2\Div(\vr^{-1}\Grad\mu(\vr)).$$ Later on, {\sc X. Cai, L. Liao, Y. Sun} proved the uniqueness of this solution in the two-dimensional case \cite{CLS12}. Recently, this algebraic relation was also used by {\sc X. Liao} \cite{X.Liao} to show existence of weak solutions; in the two-dimensional case, she also showed uniqueness of this solution in the critical non-homogeneous Besov spaces. \medskip In the first part of this paper, we show how to relax relation \eqref{change1}. More precisely, we prove existence of global weak solutions assuming only \begin{equation}\label{rel1} \varphi'(s) = \mu'(s)/s \quad\text{ and } \quad 0 < \kappa < 1 \end{equation} which implies \eq{\Div\vu=- 2\kappa\Div(\vr^{-1}\Grad\mu(\vr)).\label{non_phi}} This result may be viewed as a generalization of the particular case $\kappa=1$ studied by {\sc D.~Bresch} $\&$ {\it al.} in \cite{BrEsSy07} to the case when $\kappa$ is any constant from the interval $(0,1)$. It is based on the estimate of a new mathematical entropy \eqref{hypo}, whose prototype for $\kappa=1$ was proposed in \cite{BrEsSy07}. Another interesting point of this paper is the construction of approximate solutions. The general relation \eqref{rel1} leads to the presence of higher-order terms in the momentum equation, and thus the complexity of the construction is significantly higher than the one from \cite{BrEsSy07, X.Liao}. It uses an original augmented regularized system \eqref{regularized} of parabolic type. Let us also announce that the ideas developed in this paper will be adapted to handle the case of compressible Navier--Stokes equations in the continuation of this paper (see \cite{BrDeZa}).
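\begin{rmk} For the reader's convenience, let us record the elementary computation behind \eqref{non_phi}. Under \eqref{rel1}, the chain rule gives \eqh{\Grad\mu(\vr)=\mu'(\vr)\Grad\vr=\vr\,\vp'(\vr)\Grad\vr=\vr\,\Grad\vp(\vr),} so that the constraint $\Div\vu=-2\kappa\lap\vp(\vr)$ from \eqref{Limref2} may indeed be rewritten as $\Div\vu=-2\kappa\Div(\vr^{-1}\Grad\mu(\vr))$. \end{rmk}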
\medskip In the second part of the paper we use the existence result and the uniform estimates for $\kappa\in(0,1)$ to prove convergence of weak solutions towards the corresponding solutions of the non-homogeneous incompressible Navier--Stokes system (as $\kappa$ tends to $0$) and solutions of the {\it Kazhikhov--Smagulov}-type system (as $\kappa$ tends to $1$).\\ The existence of global weak solutions for the non-homogeneous incompressible Navier-Stokes equations ($\kappa=0$) has been investigated: by {\sc S. N. Antontsev, A. V. Kazhikhov and V. N. Monakhov} \cite{AnKaMo90} for a constant viscosity and an initial density bounded and far away from vacuum, by {\sc R. Danchin} and {\sc P.B. Mucha} for discontinuous initial density \cite{DaMu}, by {\sc J. Simon} \cite{Sim90} for constant viscosity and bounded density with possible vacuum, and by {\sc P.--L.~Lions} \cite{PLLI} for non-degenerate density dependent viscosity coefficients with an $L^\infty(\Omega)$ bound for the density and possible vacuum. Here, the limit passage $\kappa\to0$ is performed in the case when the initial density is far away from zero and bounded. \bigskip The third part of the paper is devoted to the study of a more general form of nonlinearity than \eqref{non_phi}. We show how to relax the algebraic relation \eqref{rel1}; more precisely, we consider the following system \begin{equation}\label{Limref1} \begin{array}{c} \partial_t \vr + \Div (\vr \vu) = 0, \\ \pt (\vr \vu) + \Div (\vr \vu \otimes \vu) +\nabla \pi = 2 \Div (\mu(\vr) D(\vu)) + \nabla (\lambda(\vr) \Div \vu) ,\\ \Div \vu = - \Delta \tilde \varphi(\vr), \end{array} \end{equation} with $\tilde \varphi$ an increasing function of $\vr$. Then, defining a new function $\tilde\mu(s)$ such that $\tilde \mu'(s) = s \tilde \varphi'(s)$, we propose three inequalities relating $\mu$ and $\tilde \mu$ (see \eqref{c_gen}) which yield the global existence of weak solutions. These inequalities allow us to generalize the mathematical entropy introduced in the first part to \eqref{integ1}. As a corollary of these inequalities, we prove that a small perturbation of a constant initial density provides global existence of weak solutions. This was observed already by {\sc P.--L.~Lions} in \cite{PLL} for a two-dimensional domain, but thanks to our observation it also holds in the three-dimensional case, with no smallness assumption on the initial velocity. Hence, our result sheds new light on two open questions given in the book by {\sc P.--L.~Lions} \cite{PLL}. Firstly, we prove the global in time existence of solutions with no smallness assumption and without the severe restriction on the form of the nonlinearity \eqref{change1}, which we replace by a relaxed one, namely \eqref{rel1}. Secondly, we provide existence of global weak solutions with a restriction on the initial density but {\it not on the initial velocity} for $d = 2$ and $d=3$. \bigskip In the last two sections, we present applications of our result to gaseous mixture and ghost effect systems. More precisely, in Section \ref{S:mix}, we show how to extend the result by {\sc E. Embid} or {\sc T. Alazard} (see \cite{Embid87} and \cite{Al1}) about local existence of strong solutions to the framework of weak solutions for a two-component mixture model. In Section \ref{S:ghost}, we discuss the system derived in \cite{LeSuTr} by {\sc C.D. Levermore, W. Sun, K. Trivisa} as a low Mach number limit for classical solutions of the compressible Navier-Stokes equations with dispersive corrections:
\begin{equation*} \begin{array}{c} \vr T=1, \qquad \partial_t \vr + \Div (\vr \vu) = 0, \\ \pt (\vr \vu) + \Div (\vr \vu \otimes \vu) +\nabla P^*= - \Div\vc{\Sigma}-\Div\tilde{\vc{\Sigma}} ,\\ \frac{5}{2}\Div\vu =\Div\lr{k(T)\Grad T}, \end{array} \end{equation*} where $k(T)>0$ is the heat conductivity coefficient, while the two parts of the stress tensor $\vc{\Sigma}$ and $\tilde{\vc{\Sigma}}$ are defined as follows \eqh{ \vc{\Sigma}&=\mu(T)\lr{\Grad\vu+\Grad^t\vu-\frac{2}{3}\Div\vu \vc{I}},\\ \tilde{\vc{\Sigma}}&=\tau_1(\vr,T)\lr{\Grad^2T-\frac{1}{3}\lap T\vc{I}}+\tau_2(\vr,T)\lr{\Grad T\otimes\Grad T-\frac{1}{3}|\Grad T|^2\vc{I}}, } where $\tau_1,\tau_2$ are transport coefficients with $\tau_1>0$. For a specific choice of the physical viscosity, transport coefficients and heat-conductivity coefficient, we show how to apply our theoretical result to get global existence of weak solutions to a ghost effect system. To our knowledge, this gives a first answer to a question about global existence of weak solutions to such kinds of systems. \medskip As the title suggests, the present paper is the first part of a series. We refer the interested reader to part II \cite{BrDeZa} for the extension of the above results to the case of two-velocity hydrodynamics in compressible Navier-Stokes equations with degenerate viscosities: the construction of approximate solutions will follow the same lines as in the present paper but will be given in full for the reader's convenience. \medskip \noindent {\bf Remark about the notation}: In the sequel $c$ denotes a generic positive constant (possibly large) that may change from line to line. \section{Main results} This section is devoted to the presentation of our main results. First, we state the global in time existence result in the case when a special algebraic relation between $\vp$ and $\mu$ is assumed. In the second part we relax this algebraic equality to an inequality and formulate the second existence result covering more general forms of $\vp$ and $\mu$. \subsection{The case of $\varphi$ and $\mu$ related by \eqref{rel1}} \medskip \noindent {\bf Reformulation of the system.} Before formulating our main result we rewrite system \eqref{Limref2} in a different form. The two forms are equivalent provided solutions to \eqref{Limref2} are sufficiently regular. Let us first introduce the following solenoidal vector field \eq{&\vw=\vu+ 2\kappa \Grad \vp(\vr),\\ &\Div\vw=0.\label{def_w}} Using this notation, system \eqref{Limref2} can be rewritten as \begin{equation}\label{main1} \begin{array}{c} \pt\vr+ \Div(\vr \vu) = 0,\\ \pt\lr{\vr \vw}+ \Div (\vr \vu \otimes \vw) - 2 \Div (\mu(\vr) D(\vu)) + 2 \kappa \Div (\mu(\vr) \Grad^t \vu) + \Grad {\pi_1} =\vc{0},\\ \vw=\vu+2\kappa \Grad \vp(\vr),\\ \Div\vw = 0, \end{array} \end{equation} where $$\pi_1= \pi + 2(\mu'(\vr)\vr-\mu(\vr))\Div\vu- \lambda(\vr)\Div\vu.$$ To see this it is enough to multiply the continuity equation by $\mu'(\vr)$ and write the corresponding equation for $\mu(\vr)$ \eq{\pt\mu(\vr) + \Div(\mu(\vr)\vu) + (\mu'(\vr)\vr-\mu(\vr))\Div\vu = 0.\label{mu}} Differentiating it with respect to space and employing \eqref{rel1}, we obtain \eq{\pt\lr{\vr\Grad\vp(\vr)} + \Div(\vr \vu\otimes \Grad \vp(\vr)) + \Div(\mu(\vr) \Grad^t \vu) + \Grad\lr{(\mu'(\vr)\vr-\mu(\vr))\Div\vu}= \vc{0}.\label{vp}} Thus the second equation of \eqref{main1} is obtained using the definition \eqref{def_w} and the momentum equation from \eqref{Limref2}.
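Finally, the constraint $\Div\vw=0$ in \eqref{main1} is a direct consequence of the definition \eqref{def_w} and of the third equation of \eqref{Limref2}: \eqh{\Div\vw=\Div\vu+2\kappa\lap\vp(\vr)=-2\kappa\lap\vp(\vr)+2\kappa\lap\vp(\vr)=0.}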
$\Box$ \medskip \noindent {\bf Hypotheses and definitions of weak solutions.} System \eqref{Limref2} is supplemented by the periodic boundary conditions, i.e. $$\Omega=\mathbb{T}^3,$$ and the initial conditions \eq{\vr|_{t=0} =\vr^0,\quad \vu|_{t=0}=\vu^0, \quad \vw|_{t=0}=\vw^0=\vu^0+2\kappa\Grad\vp(\vr^0) \quad \text{in}\ \Omega. \label{init}} We assume that the initial conditions satisfy \eq{\label{init_cond} \sqrt{(1-\kappa)\kappa}\vr^0 \in H^1(\Omega), \qquad 0< r\leq \vr^0\leq R<\infty, \qquad \vw^0 \in H, } uniformly with respect to $\kappa$, where \eqh{H=\{\vc{z}\in L^2(\Omega); \ \Div\vc{z}=0\}\quad \text{and} \quad V=\{\vc{z}\in W^{1,2}(\Omega);\ \Div\vc{z}=0\}.} We look for generalized weak solutions of system \eqref{main1} in the sense of the following definition. \begin{df}[Global weak solution in terms of $\vw$]\label{Def1} The couple of functions $(\vr,\vw)$ is called a global weak solution to system \eqref{main1} and \eqref{init} if the following regularity properties are satisfied \begin{equation*} \begin{gathered} 0<r\leq \vr\leq R<\infty, \quad a.e. \ in\ (0,T)\times\Omega,\\ \vr\in L^\infty(0,T;H^1(\Omega))\cap L^2(0,T;H^2(\Omega)),\\ \vw\in L^\infty(0,T;H)\cap L^2(0,T;V), \end{gathered} \end{equation*} the $\kappa$-entropy estimate holds \eq{\label{hypo} &\sup_{\tau\in [0,T]} \intO{\vr\lr{\frac{|\vw|^2}{2}+ (1-\kappa)\kappa\frac{|2\Grad\vp(\vr)|^2}{2}}(\tau)} + 2\kappa\intTO{\mu(\vr) |A(\vw)|^2}\\ &+ 2(1-\kappa)\intTO{ \mu(\vr) \left|D(\vw-2\kappa\nabla\varphi(\vr)) + \frac{2\kappa }{d}\lap \varphi(\vr)\, \vc{I}\right|^2} \\ & +2(1-\kappa) \intTO{\lr{\frac{(1-d)}{d}\mu(\vr)+\mu'(\vr)\vr}|2\kappa\lap\vp(\vr)|^2}\\ &\hskip5cm \le \intO{\vr\lr{\frac{|\vw|^2}{2}+ (1-\kappa)\kappa\frac{|2\Grad\vp(\vr)|^2}{2}}(0)}, } where $A(\vw)=\frac{1}{2}(\Grad\vw-\Grad^t\vw)$, and the equations of system \eqref{main1} hold in the sense of distributions.\\ More precisely, the continuity equation \eq{\intTO{\vr\pt\phi}+\intTO{\vr\vw\cdot\Grad\phi}- 2\kappa\intTO{\vr\Grad\vp(\vr)\cdot\Grad\phi}=-\intO{\vr^0\phi(0)}\label{weak_cont}} holds for $\phi\in C^\infty([0,T]\times\Omega)$, s.t. $\phi(T)=0$.\\ The momentum equation written in terms of $\vw$ \eq{ &\intTO{\vr\vw\cdot\pt\vphi_1}+\intTO{\vr(\vw- 2\kappa\Grad\vp(\vr))\otimes\vw:\Grad\vphi_1}\\ &\quad-2\intTO{\mu(\vr)D(\vw):\Grad\vphi_1}+ 2\kappa\intTO{\mu(\vr)\Grad^t\vw:\Grad\vphi_1}\\ &\quad+ 4(1-\kappa)\kappa\intTO{\mu(\vr)\Grad \Grad\vp(\vr):\Grad\vphi_1} =-\intO{\vr^0\vw^0\cdot\vphi_1(0)}\label{weak_mom}} holds for $\vphi_1\in (C^\infty([0,T]\times\Omega))^3$, s.t. $\Div\, \vphi_1=0$ and $\vphi_1(T)=\vc{0}$.\\ The equation for $\nabla\varphi(\vr)$ \eq{ &\intTO{\vr\Grad\vp(\vr)\cdot\pt\vphi_2}+\intTO{\vr(\vw- 2\kappa\Grad\vp(\vr))\otimes\Grad\vp(\vr):\Grad\vphi_2}\\ &\quad - \kappa \intTO{\mu(\vr)\Grad^2\vp(\vr) : \Grad\vphi_2} - \kappa\intTO{(\mu'(\vr)\vr -\mu(\vr)) \Delta \vp(\vr) \Div \vphi_2}\\ &\quad + \intTO{ \mu(\vr) \nabla^{t} \vw :\Grad\vphi_2} =-\intO{\vr^0\Grad\vp(\vr^0)\cdot\vphi_2(0)}\label{weak_graphi}} holds for $\vphi_2\in (C^\infty([0,T]\times\Omega))^3$, s.t. $\vphi_2(T)=0$. \end{df} Let us now specify the notion of a weak solution to the original system \eqref{Limref2}. \begin{df}[Global weak solution in terms of $\vu$]\label{Def2} The couple $(\vr,\vu)$ is called a weak solution to system \eqref{Limref2} and \eqref{init} if the following regularity properties are satisfied \begin{equation*} \begin{gathered} 0<r\leq \vr\leq R<\infty, \quad a.e.
\ in\ (0,T)\times\Omega,\\ \vr\in L^\infty(0,T;H^1(\Omega))\cap L^2(0,T;H^2(\Omega)),\\ \vu\in L^\infty(0,T;L^2(\Omega))\cap L^2(0,T;W^{1,2}(\Omega)), \end{gathered} \end{equation*} and the equations of system \eqref{Limref2} hold in the sense of distributions.\\ More precisely, the mass equation is satisfied in the following sense \eq{\intTO{\vr\pt\phi}+\intTO{\vr\vu\cdot\Grad\phi}=-\intO{\vr^0\phi(0)}\label{weak_contu}} for $\phi\in C^\infty([0,T]\times\Omega)$, s.t. $\phi(T)=0$. \\ The momentum equation is satisfied in the following sense \eq{ \intTO{\vr\vu\cdot\pt\vphi}+\intTO{\vr \vu \otimes\vu:\Grad\vphi} -2\intTO{\mu(\vr)D(\vu):\Grad\vphi}\\ =-\intO{\vr^0\vu^0\cdot\vphi(0)}\label{weak_momu}} for $\vphi\in (C^\infty([0,T]\times\Omega))^3$ s.t. $\Div\vphi=0$ and $\vphi(T)=\vc{0}$.\\ The constraint $$\Div \vu = - 2\kappa \Delta\varphi(\vr)$$ is satisfied in $L^2(0,T;L^2(\Omega))$. \end{df} Defining $\vu = \vw - 2\kappa \Grad\vp(\vr)$, we see that a weak solution in the sense of Definition \ref{Def1} gives a weak solution in the sense of Definition \ref{Def2}. Indeed, using the definition of $\vu$ and the fact that $\vw$ is a divergence-free vector field, the weak formulation of the momentum equation from Definition \ref{Def2} is obtained by choosing $\vphi_1 = \vphi_2 = \vphi$ in \eqref{weak_mom} and \eqref{weak_graphi}, multiplying the second equation by $\kappa$ and subtracting it from the first one. \bigskip \noindent{\bf Formulation of the main results.} The first main result of this paper concerns the global in time existence of weak solutions to system \eqref{main1}. \begin{thm}\label{T_main} Let $0<\kappa < 1$, let $\mu$ be an increasing function of class $C^1([r,R])$ such that $\mu\geq \Un{c}>0$ on $[r,R]$, assume \eqref{rel1} and let $\mu(\vr)$ satisfy the following condition on $[r,R]$ \eq{\label{important} \lr{\frac{1-d}{d}\mu(\vr)+\mu'(\vr)\vr}\geq \Un{c}_1> 0. } If the initial data $(\vr^0,\vw^0)$ satisfy \eqref{init_cond}, then there exists at least one global weak solution $(\vr,\vw)$ of system \eqref{main1}, in the sense of Definition \ref{Def1}.\\ Moreover, this solution satisfies the following estimates \eqh{\sqrt\kappa\|\vr\|_{L^2(0,T;H^1(\Omega))}+\|\vw\|_{L^\infty(0,T;H)}+\sqrt{\kappa}\|\vw\|_{L^2(0,T; V)}\leq c,} \eqh{\sqrt{(1-\kappa)\kappa}\|\vr\|_{L^\infty(0,T;H^1(\Omega))}+ \sqrt{(1-\kappa)}\kappa\|\vr \|_{L^2(0,T;H^2(\Omega))} +\sqrt{(1-\kappa)} \|D(\vu)\|_{L^2(0,T;L^2(\Omega))}\leq c,} uniformly with respect to $\kappa$.\\ If, in addition to \eqref{init_cond}, $\kappa \vr^0\in{H^1(\Omega)}$ uniformly with respect to $\kappa$, then \eq{\label{h_reg_rho} \kappa^{3/2}\|\pt\mu(\vr)\|_{L^2(0,T; L^{3\over2}(\Omega))}+ \kappa\| \mu(\vr)\|_{L^\infty(0,T; H^1(\Omega))}+ \kappa^{3/2}\| \mu(\vr)\|_{L^2(0,T; H^2(\Omega))}\leq c } and $$\sqrt \kappa\| \Grad \vu\|_{ L^2(0,T;L^2(\Omega))}\leq c$$ uniformly with respect to $\kappa$. \end{thm} The regularity of weak solutions follows from the $\kappa$-entropy inequality \eqref{hypo}. It is important to note that $$|\vw|^2 + (1-\kappa)\kappa|2\nabla\varphi(\vr)|^2 = (1-\kappa)|\vu|^2 + \kappa |\vu+2\nabla\varphi(\vr)|^2,$$ therefore \eqref{hypo} reflects a two-velocity hydrodynamics with a mixing ratio $\kappa$ in the spirit of the works by {\sc S.M. Shugrin} (see for instance {\rm \cite{Sh}}) but with an incompressible mean velocity $\vw$.
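\begin{rmk} This identity can be checked by a direct computation. Writing $\vw=\vu+2\kappa\Grad\vp(\vr)$ and expanding the squares, one finds \eqh{(1-\kappa)|\vu|^2 + \kappa |\vu+2\Grad\vp(\vr)|^2 &= |\vu|^2+4\kappa\,\vu\cdot\Grad\vp(\vr)+4\kappa|\Grad\vp(\vr)|^2\\ &=|\vu+2\kappa\Grad\vp(\vr)|^2+4\kappa(1-\kappa)|\Grad\vp(\vr)|^2\\ &= |\vw|^2 + (1-\kappa)\kappa|2\Grad\vp(\vr)|^2.} \end{rmk}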
The interested reader is referred to the second part of this series of papers {\rm \cite{BrDeZa}}, where we make a link between the compressible two-velocity hydrodynamics and the compressible-incompressible two-velocity hydrodynamics studied here. \medskip Having proven this theorem, we investigate the limit $\kappa\to0$ to recover the usual non-homogeneous incompressible Navier-Stokes equations and the limit $\kappa\to 1$ to recover the so-called Kazhikhov-Smagulov system. We have the following theorem. \begin{thm}\label{T_main2} Let $(\vr_\kappa,\vw_\kappa)$ be the sequence of global generalized weak solutions obtained in the previous theorem, emanating from the initial data $(\vr_\kappa^0,\vw_\kappa^0)$. Assume that the initial data satisfy \eqref{init_cond} and \begin{equation*} \begin{gathered} \sqrt{(1-\kappa)\kappa} \vr^0_\kappa\quad\text{ uniformly bounded in } H^1(\Omega),\\ \vr^0_\kappa\to\vr^0\quad\text{strongly in } L^p(\Omega) \hbox{ for all } 1< p<+\infty,\\ \vw^0_\kappa= \vu^0_\kappa + 2\kappa \nabla \varphi(\vr^0_\kappa) \to \vw^0 \quad \hbox{ strongly in } V. \end{gathered} \end{equation*} Then we have two cases: \begin{itemize} \item when $\kappa\to 0$, there exists a subsequence, denoted again by $\kappa$, and limit functions $(\vr,\vu)$ such that \begin{equation*} \begin{gathered} \vr_\kappa\to\vr \hbox{ in } C([0,T]; L^p(\Omega)) \hbox{ for all } 1< p<+\infty,\\ \vw_\kappa \to \vu \quad \text{weakly in } L^2(0,T;V) \text{ and weakly$^*$ in } L^\infty(0,T;H) \end{gathered} \end{equation*} and $(\vr,\vu)$ satisfies the following system \begin{equation}\label{mainINS} \begin{gathered} \pt\vr+ \Div(\vr \vu) = 0,\\ \pt\lr{\vr \vu}+ \Div (\vr \vu \otimes \vu) - 2 \Div (\mu(\vr) D(\vu)) + \Grad {\pi_1} =\vc{0},\\ \Div\vu = 0, \end{gathered} \end{equation} in the sense specified in \eqref{weak_contu} and \eqref{weak_momu}, where $D(\vu)=\frac{1}{2}(\Grad\vu+\Grad^t\vu)$; \item when $\kappa\to 1$, assuming in addition to \eqref{init_cond} that $\kappa \vr^0\in{H^1(\Omega)}$, there exists a subsequence, denoted again by $\kappa$, and limit functions $(\vr,\vu)$ such that \begin{equation*} \begin{gathered} \vr_\kappa\to\vr \hbox{ in } C([0,T];L^p(\Omega)) \hbox{ for all } 1< p<+\infty, \\ (\kappa \mu'(\vr_\kappa))^{1/2} \nabla \vr_\kappa \to (\mu'(\vr))^{1/2} \nabla \vr \hbox{ in } L^2(0,T;L^2(\Omega)),\\ \vw_\kappa\to \vw \quad \text{weakly in } L^2(0,T;V) \text{ and weakly$^*$ in } L^\infty(0,T;H) \end{gathered} \end{equation*} and $(\vr,\vu)$ satisfies the following system \begin{equation}\label{main2} \begin{gathered} \pt\vr+ \Div(\vr \vu) = 0,\\ \pt\lr{\vr \vw}+ \Div (\vr \vu \otimes \vw) - 2 \Div (\mu(\vr) A(\vw)) + \Grad {\pi_1} =\vc{0},\\ \vw = \vu + 2 \nabla\varphi(\vr), \qquad \Div\vw = 0, \end{gathered} \end{equation} in the sense specified in Definition \ref{Def1}, where $A(\vu)=\frac{1}{2}(\Grad\vu-\Grad^t\vu)$. \end{itemize} \end{thm} \begin{rmk} Observe that the following compatibility condition is satisfied for $\kappa=0$: $$\vw^0=\vu^0 \hbox{ with } \Div \vu^0 = 0.$$ \end{rmk} \begin{rmk} Note that for $\kappa=1$, we recover the existence result by {\sc D. Bresch} {\it et al.} in {\rm \cite{BrEsSy07}}. \end{rmk} \subsection{The case of general $\vp$ and $\mu$} In this section, we relax the algebraic relation \eqref{rel1} between $\varphi$ and $\mu$.
More precisely, the diffusion equation $$\Div \vu = - 2 \kappa \Delta\varphi(\vr) \qquad \text{ with} \qquad \varphi'(s)= \mu'(s)/s$$ is replaced by \eq{\Div \vu = -2 \Delta \widetilde\varphi(\vr)\qquad \text{ with} \qquad \widetilde\varphi'(s)= \widetilde\mu'(s)/s\label{gen_pm}} with some new function $\widetilde \mu(s)$ which is related to $\mu(s)$ only through inequalities. \\ We prove the following result. \begin{thm}\label{T_main3} Let $\mu$ be an increasing function of class $C^1([r,R])$ such that $\mu\ge c>0$ on $[r,R]$ and let $\tilde \varphi$ be an increasing function of class $C^1([r,R])$. Assume that the initial data satisfy $\vr^0\in H^1(\Omega)$, $\vu^0\in (L^2(\Omega))^3$ with $$ 0< r\le \vr^0\le R < +\infty, \qquad {\Div} (\vu^0 + 2\nabla \tilde \varphi(\vr^0)) = 0.$$ Moreover assume that there exist positive constants $c,c',\xi$ such that \begin{equation} \begin{gathered} c\leq\min_{\vr\in[r,R]}\lr{\mu(\vr) -\tilde \mu(\vr)},\\ c'\leq\tilde\mu'(\vr)\vr+\frac{1-d}{d}\tilde\mu(\vr),\\ \max_{\vr\in[r,R]} \frac{(\mu(\vr)-\tilde\mu(\vr)-\xi\tilde\mu(\vr))^2}{ 2\lr{\mu(\vr) -\tilde \mu(\vr)}} \leq \xi \min_{\vr\in[r,R]}\lr{\tilde\mu'(\vr)\vr+\frac{1-d}{d}\tilde\mu(\vr)}. \end{gathered} \label{c_gen} \end{equation} Then there exists a global weak solution to System \eqref{Limref1}. \end{thm} For this theorem, we will only prove the estimates, because the construction and stability analysis follow the lines given for Theorem \ref{T_main}. For the sake of completeness, we give an example which shows that this theorem answers a question formulated by {\sc P.--L. Lions} in \cite{PLL}, Chapter 8.8 (see Proposition \ref{Example}): global existence of weak solutions for an initial density close to a constant and a large velocity field in the three-dimensional case. \medskip \section{Proof of Theorem \ref{T_main}} This section is dedicated to the proof of existence of weak solutions to system \eqref{main1}. We first assume that $\vr,\vw$ are smooth enough and present the derivation of the a priori estimates necessary to understand the main idea of the construction of solutions. The latter is presented in Section \ref{SS:Constr}. \subsection{A priori estimates} \noindent {\bf Maximum principle and $H^1$ bounds on the density.} First, applying the standard maximum principle to the continuity equation $$\pt \vr + \vw\cdot\Grad \vr - 2 \kappa \lap\mu(\vr) = 0$$ we deduce that \eq{0<r\leq\vr \leq R<\infty\label{max_vr},} and the basic energy estimate gives \eqh{\|\vr\|_{L^\infty(0,T;L^2(\Omega))}+ \sqrt\kappa \|\vr\|_{L^2(0,T; H^1(\Omega))}\leq c} uniformly with respect to $\kappa$. \bigskip \noindent {\bf The $\kappa$-entropy and its consequences.} Our next goal is to derive an original estimate on $(\vr,\vw,\Grad\varphi(\vr))$. More precisely, we prove that the following mathematical entropy estimate holds. \begin{prop} Let $(\vr,\vw)$ be a sufficiently smooth solution to \eqref{main1}; then $(\vr,\vw)$ satisfies inequality \eqref{hypo}. \end{prop} \noindent {\it Proof.} We rewrite the equation for $\vw$ from \eqref{main1} in a slightly different form \eq{\label{eqw} \pt\lr{\vr \vw}+ \Div (\vr \vu \otimes \vw) - 2(1-\kappa) \Div(\mu(\vr) D(\vu)) - 2\kappa \Div (\mu(\vr) A(\vu)) + \Grad {\pi_1} =\vc{0},} where $A(\vu)=\frac{1}{2}(\Grad\vu-\Grad^t\vu)$.
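To see that \eqref{eqw} agrees with the second equation of \eqref{main1}, it suffices to use the algebraic identity $D(\vu)-\Grad^t\vu=A(\vu)$; indeed, \eqh{-2\Div(\mu(\vr)D(\vu))+2\kappa\Div(\mu(\vr)\Grad^t\vu) &=-2(1-\kappa)\Div(\mu(\vr)D(\vu))-2\kappa\Div\lr{\mu(\vr)(D(\vu)-\Grad^t\vu)}\\ &=-2(1-\kappa)\Div(\mu(\vr)D(\vu))-2\kappa\Div(\mu(\vr)A(\vu)).}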
Multiplying \eqref{eqw} by $\vw$ and integrating by parts over $\Omega$, we obtain \eq{\label{a} \frac{1}{2}\Dt\intO{\vr|\vw|^2}+ 2(1-\kappa)\intO{\mu(\vr) |D(\vu)|^2}+ 2\kappa\intO{\mu(\vr) |A(\vu)|^2}\\ + 4(1-\kappa)\kappa\intO{\mu(\vr)\Grad\vu:\Grad^2\vp(\vr)}=0. } In the case $\kappa= 1$, the higher-order derivative of $\varphi(\vr)$ does not appear in this equality and global existence of weak solutions may be obtained in a simpler way. Such calculations were first performed in \cite{BrDeZa} and may also be found in \cite{X.Liao}. Here, however, we do not assume that $\kappa=1$, thus it is not immediately clear that \eqref{a} provides uniform estimates. The problem is the presence of the last term $\mu(\vr)\Grad\vu:\Grad^2\vp(\vr)$, which does not have a sign. To eliminate it we take advantage of the renormalized continuity equation \eqref{mu}. We first test its gradient \eqref{vp} by $\Grad\vp(\vr)$ and integrate over $\Omega$ to get \eq{\label{b} \Dt\intO{\vr\frac{|\Grad\vp(\vr)|^2}{2}}-\intO{\mu(\vr)\Grad\vu:\Grad^2\vp(\vr)}-\intO{(\mu'(\vr)\vr-\mu(\vr))\Div\vu\lap\vp(\vr)}=0. } Next, we multiply \eqref{b} by $4(1-\kappa)\kappa$ and add it to \eqref{a}; we obtain \eq{\label{aa} \Dt\intO{\vr\lr{\frac{|\vw|^2}{2}+(1-\kappa)\kappa\frac{|2\Grad\vp(\vr)|^2}{2}}} &+ 2(1-\kappa)\intO{\mu(\vr) |D(\vu)|^2}+ 2\kappa\intO{\mu(\vr) |A(\vu)|^2}\\ &+ 2(1-\kappa)\intO{(\mu'(\vr)\vr-\mu(\vr))|2\kappa \lap\vp(\vr)|^2}=0. } To find an optimal condition for $\mu(\vr)$ we apply the following equality \eq{ \intO{ \mu(\vr)|D(\vu)|^2 } = \intO{ \mu(\vr) \left|D(\vu) - \frac{1}{d}\Div \vu\, \vc{I}\right|^2} + \intO{ \frac{\mu(\vr)}{d} |\Div \vu|^2} \label{Ddiv}} to the second integral in \eqref{aa}, and we get \eqref{hypo}. Thus, for the l.h.s. to be nonnegative and to control the second gradient of $\vr$ independently of $\kappa$, one needs to assume \eqh{ \lr{\frac{1-d}{d}\mu(\vr)+\mu'(\vr)\vr}\geq c > 0, } which justifies assumption \eqref{important} from Theorem \ref{T_main}. $\Box$\\ Estimate \eqref{hypo} will be generalized to the case of the compressible Navier-Stokes system in \cite{BrDeZa}. \begin{rmk} For example, for $\mu(\vr)=\vr^\alpha$, condition \eqref{important} reads $\lr{\alpha-\frac{d-1}{d}}\vr^\alpha\geq \Un{c}_1>0$ on $[r,R]$, which holds precisely when $\alpha > 1-\frac{1}{d}$. \end{rmk} Integrating \eqref{aa} with respect to time and using \eqref{Ddiv}, we get \eqh{ &\intO{\vr\lr{\frac{|\vw|^2}{2}+ (1-\kappa)\kappa\frac{|2\Grad\vp(\vr)|^2}{2}}(T)} + 2\kappa\intTO{\mu(\vr) |A(\vu)|^2}\\ &+ 2(1-\kappa)\left[\intTO{ \mu(\vr) \left|D(\vu) - \frac{1}{d}\Div \vu\, \vc{I}\right|^2} + \intTO{\lr{\frac{(1-d)}{d}\mu(\vr)+\mu'(\vr)\vr}|\Div\vu|^2}\right] \\ & \le \intO{\vr\lr{\frac{|\vw|^2}{2}+ (1-\kappa)\kappa\frac{|2\Grad\vp(\vr)|^2}{2}}(0)}. } Using \eqref{max_vr}, we deduce from the above inequality the following estimates \eq{\|\vw\|_{L^\infty(0,T;H)}+ & \sqrt{(1-\kappa)\kappa} \left\|\vr\right\|_{L^\infty(0,T;H^1(\Omega))} +\sqrt{\kappa}\|\vw\|_{L^2(0,T; V)} \\ & \hskip2cm + \sqrt{(1-\kappa)}\|D(\vu)\|_{L^2(0,T; L^2(\Omega))} + \sqrt{(1-\kappa)} \kappa \|\vr \|_{L^2(0,T; H^2(\Omega))} \leq c.\label{v2}} \bigskip \noindent {\bf Additional estimate on $\vr$.} This estimate will be used in order to let $\kappa\to1$ in the proof of the second part of Theorem \ref{T_main2}. Assuming that $\kappa \vr^0 \in H^1(\Omega)$, we show that it is possible to control $\kappa^{3/2}\lap\mu(\vr)$.
To see this, let us first rewrite \eqref{mu} as \eq{\pt\mu(\vr) + \vw\cdot\Grad\mu(\vr)- 2\kappa\mu'(\vr)\lap\mu(\vr) = 0.\label{mu_w}} Multiplying the above equation by $-\kappa^2\lap\mu(\vr)$ and integrating by parts we obtain \eqh{\Dt\intO{\kappa^2|\Grad\mu(\vr)|^2}+2\intO{\kappa^3\mu'(\vr)|\lap\mu(\vr)|^2}\leq \intO{\kappa^2|\Grad\vw||\Grad\mu(\vr)|^2}.} Note that for any $f\in H^2(\Omega)\cap L^\infty(\Omega)$ we have \eq{\|\Grad f\|^2_{L^4(\Omega)}\leq c\|\lap f\|_{L^2(\Omega)}\|f\|_{L^\infty(\Omega)}.\label{G-N}} Therefore, applying this inequality with $f=\mu(\vr)$ and using the uniform bound for $\sqrt\kappa \Grad \vw$ following from \eqref{v2}, we obtain \eq{ \label{est} \kappa \|\Grad\mu(\vr)\|_{L^\infty(0,T; L^2(\Omega))}+\kappa^{3/2}\|\lap\mu(\vr)\|_{L^2((0,T)\times\Omega)}\leq c\lr{\kappa\|\vr^0\|_{H^1(\Omega)},\sqrt{\kappa}\|\vw\|_{L^2(0,T;V)},R}, } and thus, coming back to \eqref{mu_w}, we easily check \eqref{h_reg_rho}. \subsection{Construction of solution}\label{SS:Constr} The construction of a sufficiently smooth approximation is different for $0<\kappa <1$ than for the case $\kappa=1$. The latter can be found for instance in \cite{X.Liao} (an adaptation of the usual process, for example from \cite{PLL}). The difference between these two cases is due to the third term in \eqref{eqw}, which does not vanish and which generates a higher-order term with respect to $\vr$. Here we have to consider the following augmented system with three unknowns $(\vr,\vw,\vv)$: \begin{equation}\label{regularized} \begin{array}{c} \vspace{0.2cm} \pt\vr+ \Div(\vr \vw) - 2 \kappa\lap\mu( \vr) = 0,\\ \hskip-8cm \pt\lr{\vr \vw}+ \Div ((\vr \vw - 2 \kappa \nabla\mu( \vr)) \otimes \vw) \\ \vspace{0.2cm} \hskip2cm - 2(1-\kappa) \Div (\mu(\vr) D(\vw)) - 2\kappa \Div (\mu(\vr) A(\vw)) + \Grad {\pi_1} = - 2 \kappa (1-\kappa) \Div(\mu(\vr) \nabla\vv),\\ \hskip-8cm \partial_t(\vr \vv) + \Div((\vr \vw- 2 \kappa \nabla \mu( \vr))\otimes \vv) \\ \vspace{0.2cm} \hskip2cm -2 \kappa \Div(\mu(\vr)\nabla\vv) - 2 \kappa \nabla ((\mu'(\vr)\vr - \mu(\vr))\Div \vv) = - 2\Div(\mu(\vr) \nabla^{t} \vw), \\ \Div\vw = 0. \end{array} \end{equation} Of course, we will have to prove that $\vv = 2 \nabla\varphi(\vr)$ in order to recover the original system. \bigskip \bigskip \noindent{\bf Full approximation.} Below we present the basic level of the approximation procedure. \bigskip 1. The continuity equation is replaced by its regularized version \eq{&\pt\vr+ \Div(\vr [\vw]_\delta) - 2\kappa\, \Div([\mu'({\vr})]_\alpha \nabla \vr) = 0,\\ &\qquad\qquad \vr(0,x)=[\vr^0]_{\delta}, \label{A_cont}} where $\alpha,\delta$ denote the standard regularizations with respect to time and space. Such a double-index regularization may be found for instance in \cite{X.Liao}. \bigskip 2.
The momentum equation is replaced by its Faedo-Galerkin approximation with an additional regularizing term $\ep\lr{\lap^2\vw-\Div((1+|\Grad\vw|^2)\Grad\vw)}$ \begin{equation}\label{FG} \begin{split} &\intO{\vr\vw(\tau)\cdot\vcg{\phi}} -\inttauO{((\vr [\vw]_\delta -2 \kappa[\mu'({\vr})]_\alpha \nabla \vr) \otimes \vw):\Grad \vcg{\phi}}\\ &\quad + 2(1-\kappa)\inttauO{\mu(\vr)D(\vw):\Grad \vcg{\phi}} +\ep\inttauO{\lr{\lap\vw\cdot\lap\vcg{\phi}+(1+|\Grad\vw|^2) \Grad\vw:\Grad\vcg{\phi}}}\\ &\quad+2 \kappa\inttauO{\mu(\vr)A(\vw): \Grad\vcg{\phi}}- 2\kappa(1-\kappa)\inttauO{\mu(\vr)\Grad\vv:\Grad\vcg{\phi}}\\ &=\intO{(\vr\vw)^{0}\cdot\vcg{\phi}}, \end{split} \end{equation} satisfied for any $\tau\in[0,T]$ and any test function $\vcg{\phi}\in X_{n}$, where $X_{n}=\operatorname{span}\{\vcg{\phi}_{i}\}_{i=1}^{n}$ and $\{\vcg{\phi}_{i}\}_{i=1}^{\infty}$ is an orthonormal basis in $V$ such that $\vcg{\phi}_{i} \in (C^\infty(\Omega))^3$ with $\Div\, \vcg{\phi}_{i}=0$ for all $i\in \mathbb{N}$. \medskip 3. The Faedo-Galerkin approximation for the artificial equation \eq{\label{FGtheta} &\intO{\vr\vv(\tau)\cdot\vcg{\xi}} -\inttauO{((\vr [\vw]_\delta - 2 \kappa [\mu'({\vr})]_\alpha \nabla \vr) \otimes \vv):\Grad \vcg{\xi}} + 2\kappa\inttauO{\mu(\vr)\Grad\vv:\Grad \vcg{\xi}}\\ &\quad+ 2\kappa\inttauO{(\mu'(\vr)\vr - \mu(\vr))\Div \vv\ \Div\vcg{\xi}}- 2\inttauO{\mu(\vr)\Grad^t\vw:\Grad\vcg{\xi}}=\intO{(\vr\vv)^{0}\cdot\vcg{\xi}}, } satisfied for any $\tau\in[0,T]$ and any test function $\vcg{\xi}\in Y_{n}$, where $Y_{n}=\operatorname{span}\{\vcg{\xi}_{i}\}_{i=1}^{n}$ and $\{\vcg{\xi}_{i}\}_{i=1}^{\infty}$ is an orthonormal basis in $W^{1,2}(\Omega)$ such that $\vcg{\xi}_{i}\in (C^\infty(\Omega))^3$ for all $i\in \mathbb{N}$. \bigskip \noindent{\bf Existence of solutions to the continuity equation.} For fixed $\vw\in C([0,T], X_n)$ we solve the continuity equation, which is now a quasi-linear parabolic equation with smooth coefficients. Thus, an application of the classical existence theory of {\sc Lady{\v{z}}enskaja, Solonnikov and Uralceva} \cite{LSU} (see for example Theorem 10.24 from \cite{FN}, which is a combination of Theorems 7.2, 7.3 and 7.4 from \cite{LSU}) yields the following result. \begin{thm}\label{LSU} Let $\nu \in (0,1)$. Suppose that the initial condition $\vr_\delta^0\in C^{2+\nu}(\Ov\Omega)$ is such that $0<r\leq \vr_\delta^0\leq R$ and satisfies the periodic boundary conditions. Then problem \eqref{A_cont} possesses a unique classical solution $\vr$ belonging to the class \begin{equation}\label{regvr} V_{[0,T]}=\left\{\begin{array}{rl} \vr&\in C([0,T];C^{2+\nu}(\Omega))\cap C^1([0,T]\times\Omega),\\ \pt\vr&\in C^{\nu/2}\left([0,T];C(\Omega)\right) \end{array}\right\} \end{equation} and satisfying the classical maximum principle \eq{0< r\leq\vr(t,x)\leq R. \label{max_gal}} Moreover, the mapping $\vw\mapsto\vr(\vw)$ maps bounded sets in $C([0,T];X_{n})$ into bounded sets in $V_{[0,T]}$ and is continuous with values in $C\big([0,T];C^{2+\nu'}(\Omega)\big)$, $0<\nu'<\nu<1$. \end{thm} \bigskip \noindent{\bf Local existence of solutions to the Galerkin approximations.} Having a smooth solution to the continuity equation, we may now solve the integral equations \eqref{FG} and \eqref{FGtheta} on a possibly short time interval via a fixed-point argument. More precisely, we want to prove that there exists a time $T=T(n)$ and $(\vw,\vv)\in C([0,T];X_{n})\times C([0,T]; Y_n)$ satisfying (\ref{FG}, \ref{FGtheta}).
To this purpose, let us rewrite these equations as a fixed-point problem \eq{\label{T} (\vw(t),\vv(t))&=\lr{ {\mathcal{M}}_{\vr(t)}\left[P_{X_n}(\vr\vw)^0+\int_{0}^{t}{{\mathcal{K}}(\vw)(s) {\rm d}s}\right],\ \mathcal{N}_{\vr(t)}\left[P_{Y_n}(\vr\vv)^0+\int_{0}^{t}{{\mathcal{L}}(\vv)(s) {\rm d}s}\right]}\\ &={\mathcal{T}}[\vw,\vv](t) } where $\vr=\vr(\vw)$ is a solution to the continuity equation as explained above, $${\cal{M}}_{\vr(t)}:X_{n}\rightarrow X_{n},\quad\intO{\vr{\cal{M}}_{\vr(t)}[\vcg{\phi}]\cdot\vcg{\psi}}=\langle\vcg{\phi},\vcg{\psi}\rangle, \quad\vcg{\phi},\vcg{\psi}\in X_{n},$$ $${\cal{N}}_{\vr(t)}:Y_{n}\rightarrow Y_{n},\quad\intO{\vr{\cal{N}}_{\vr(t)}[\vcg{\xi}]\cdot\vcg{\zeta}}=\langle\vcg{\xi},\vcg{\zeta}\rangle, \quad\vcg{\xi},\vcg{\zeta}\in Y_{n},$$ $P_{X_n}$, $P_{Y_n}$ denote the projections of $L^2(\Omega)$ on $X_n$, $Y_n$, respectively, and ${\cal K}(\vw), {\cal L}(\vv)$ are the operators defined as \eqh{ {\cal K}: &\quad X_n\to X_n,\\ \langle{\cal K}(\vw),\vcg{\phi}\rangle=&\intO{((\vr [\vw]_\delta - 2 \kappa [\mu'({\vr})]_\alpha \nabla \vr) \otimes \vw):\Grad \vcg{\phi}} - 2(1-\kappa)\intO{\mu(\vr)D(\vw):\Grad \vcg{\phi}}\\ &-2 \kappa\intO{\mu(\vr)A(\vw): \Grad\vcg{\phi}} + 2\kappa(1-\kappa)\intO{\mu(\vr)\Grad\vv:\Grad\vcg{\phi}}\\ &-\ep\intOB{\lap\vw\cdot\lap\vcg{\phi}+(1+|\Grad\vw|^2) \Grad\vw:\Grad\vcg{\phi}}, } \eqh{ {\cal L}: &\quad Y_n\to Y_n,\\ \langle{\cal L}(\vv),\vcg{\xi}\rangle= &\intO{((\vr [\vw]_\delta - 2 \kappa [\mu'({\vr})]_\alpha \nabla \vr) \otimes \vv):\Grad \vcg{\xi}} - 2\kappa\intO{\mu(\vr)\Grad\vv:\Grad \vcg{\xi}}\\ &-2 \kappa\intO{(\mu'(\vr)\vr - \mu(\vr))\Div \vv\ \Div\vcg{\xi}}+2\intO{\mu(\vr)\Grad^t\vw:\Grad\vcg{\xi}}.} First let us observe that since $\vr(t,x)$ is strictly positive we have \eqh{\|{\cal M}_{\vr(t)}\|_{L(X_n,X_n)},\ \|{\cal N}_{\vr(t)}\|_{L(Y_n,Y_n)}\leq\frac{1}{r}.} Moreover \eq{\|{\cal M}_{\vr^1(t)}-{\cal M}_{\vr^2(t)}\|_{L(X_n,X_n)} + \|{\cal N}_{\vr^1(t)}-{\cal N}_{\vr^2(t)}\|_{L(Y_n,Y_n)}\leq c(n,r^1,r^2)\|\vr^1-\vr^2\|_{L^1(\Omega)},\\ \label{XY} } and by the equivalence of norms on the finite dimensional space we prove that \eq{\|{\cal K}(\vw)\|_{X_n}+ \|{\cal L}(\vv)\|_{Y_n}\leq c(n,r,R,\|\Grad\vr\|_{L^2(\Omega)},\|\vw\|_{X_n}, \|\vv\|_{Y_n} ).\label{KL}} Next, we consider a ball ${\cal B}$ in the space $C([0,\tau];X_{n})\times C([0,\tau];Y_{n})$: $${\cal B}_{M,\tau}=\left\{(\vw,\vv)\in C([0,\tau];X_{n})\times C([0,\tau];Y_{n}):\|\vw\|_{C([0,\tau];X_n)}+\|\vv\|_{C([0,\tau];Y_n)}\leq M\right\}.$$ Using estimates \eqref{XY}, \eqref{KL}, \eqref{regvr} and \eqref{max_gal}, one can check that ${\cal T}$ is a continuous mapping of the ball ${\cal B}_{M,\tau}$ into itself and that for sufficiently small $\tau=T(n)$ it is a contraction. Therefore, it possesses a unique fixed point, which is a solution to \eqref{FG} and \eqref{FGtheta} for $T=T(n)$. \bigskip \noindent{\bf Global existence of solutions.} In order to extend the local-in-time solution obtained above to a global-in-time one, we need to find uniform (in time) estimates, so that the above procedure can be iterated. First let us note that $\vw,\vv$ obtained in the previous paragraph have better regularity with respect to time.
It follows, by taking the time derivative of \eqref{T} and using the estimates \eqref{regvr}, \eqref{max_gal}, that $$(\vw,\vv)\in C^1([0,\tau];X_{n})\times C^1([0,\tau];Y_{n}).$$ This is an important feature since now we can take time derivatives of \eqref{FG} and \eqref{FGtheta} and use the test functions $\vcg{\phi}=\vw$ and $\vcg{\xi}=\vv$, respectively. We then obtain \begin{equation}\label{w1} \begin{split} &\Dt\intO{\vr\frac{|\vw|^2}{2}} + 2(1-\kappa)\intO{\mu(\vr)|D(\vw)|^2}+\ep\intOB{|\lap\vw|^2+(1+|\Grad\vw|^2) |\Grad\vw|^2}\\ &\quad+ 2\kappa\intO{\mu(\vr)|A(\vw)|^2}- 2 \kappa(1-\kappa)\intO{\mu(\vr)\Grad\vv:\Grad\vw}=0, \end{split} \end{equation} and \eq{\label{t1} &\Dt\intO{\vr\frac{|\vv|^2}{2}} + 2\kappa\intO{\mu(\vr)|\Grad\vv|^2}\\ &\quad+2 \kappa\intO{(\mu'(\vr)\vr - \mu(\vr))(\Div \vv)^2}- 2\intO{\mu(\vr)\Grad^t\vw:\Grad\vv}=0. } Therefore, multiplying the second equality by $(1-\kappa)\kappa$ and adding it to the first one, we obtain \eq{\label{aa1} \Dt&\intO{\vr\lr{\frac{|\vw|^2}{2} + (1-\kappa)\kappa\frac{|\vv|^2}{2}}}+ 2(1-\kappa)\intO{\mu(\vr) |D(\vw) -\kappa\nabla \vv|^2} \\ &+\ep\intOB{|\lap\vw|^2+(1+|\Grad\vw|^2) |\Grad\vw|^2}\\ &+2 \kappa\intO{\mu(\vr) |A(\vw)|^2}+(1-\kappa)\intO{(\mu'(\vr)\vr-\mu(\vr))(\kappa\Div\vv)^2}=0. } Integrating the above estimate with respect to time, we obtain the uniform estimates for $\vw$ and $\vv$ necessary to repeat the procedure described in the previous paragraph. Thus, we obtain a unique global-in-time solution $(\vr,\vw,\vv)$ satisfying equations (\ref{A_cont}, \ref{FG}, \ref{FGtheta}). \bigskip \noindent{\bf Uniform estimates.} Below we present the uniform estimates that will allow us to pass to the limit with respect to $\alpha$ and $n$, respectively. First observe that multiplying the continuity equation \eqref{A_cont} by $\vr_\alpha$ and integrating by parts with respect to $x$ gives $$\frac{1}{2} \frac{d}{dt} \intO {\vr_\alpha^2} +2 \kappa \intO {[\mu'(\vr_\alpha)]_\alpha |\nabla\vr_\alpha|^2} = 0. $$ Integrating this equality with respect to time provides the following estimates \eq{\|\vr_\alpha\|_{L^\infty(0,T; L^2(\Omega))}+\|\sqrt{[\mu'(\vr_\alpha)]_\alpha }\nabla\vr_\alpha\|_{L^2(0,T;L^2(\Omega))}\leq c.\label{rho1}} Moreover, the standard maximum principle gives boundedness of $\vr_\alpha$ from above and below. Indeed, multiplying equation \eqref{A_cont} by $$\vr_\alpha^-=\max(0,r-\vr_\alpha)\quad\text{and}\quad \vr_\alpha^+=\min(0,R-\vr_\alpha),$$ respectively, we obtain \eq{0< r\leq\vr_\alpha(t,x)\leq R. \label{max}} \begin{rmk} To prove these bounds one needs to know that $\vr_\alpha\in L^2(0,T;W^{1,2}(\Omega))$, which does not follow from \eqref{rho1}. This problem could be solved by adding a small viscosity parameter $\alpha$ and considering $\widetilde{[\mu'(\vr)]_\alpha}=[\mu'(\vr)]_\alpha+\alpha$ in place of $[\mu'(\vr)]_\alpha$. \end{rmk} Next, using \eqref{max} and integrating \eqref{aa1} with respect to time, we see that for $0<\kappa<1$ we have \eq{\|\vw_\alpha\|_{L^\infty(0,T;H)} +\|\vw_\alpha\|_{L^2(0,T;V\cap W^{2,2}(\Omega))}+ \|\Grad\vw_\alpha\|_{L^4(0,T;L^4(\Omega))}\\ +\|\vv_\alpha\|_{L^\infty(0,T;L^2(\Omega))} +\|\vv_\alpha\|_{L^2(0,T;W^{1,2}(\Omega))}\leq c. \label{ueu}} \bigskip \subsection{Passage to the limit with respect to $\alpha,n,\delta$ and $\varepsilon$} \medskip \noindent{\bf {Passage to the limit $\alpha\to0$.}} On the finite dimensional subspace all the norms are equivalent, therefore the space compactness of $\vw_\alpha$ and $\vv_\alpha$ is automatic.
In fact, for $n$ fixed we also know that $\pt\vw_\alpha\in L^2(0,T; X_n)$, thus $\vw_\alpha\to\vw$ strongly in $L^2(0,T;X_n)$. The same can be deduced for $\vv_\alpha$. The biggest problem is thus to pass to the limit in the term \eq{[\mu'(\vr_\alpha)]_\alpha\Grad\vr_\alpha\otimes\vw_\alpha\label{conv_w0}} which requires the strong convergence of the density and the weak convergence of the gradient of the density. Using \eqref{max} we can deduce that there exist $c>0$ and $\alpha_0$ such that for $\alpha<\alpha_0$ one has $[\mu'(\vr_\alpha)]_\alpha>c$ uniformly with respect to $\alpha$. Therefore \eqref{rho1} implies that, up to a subsequence, \eqh{\vr_\alpha\to\vr\quad\text{weakly in }L^2(0,T; W^{1,2}(\Omega));} moreover $\pt\vr_\alpha\in L^2(0,T; W^{-1,2}(\Omega))$ and $\vr_\alpha \in L^\infty((0,T)\times\Omega)$, thus the Aubin-Lions lemma implies strong convergence of $\vr_\alpha$ \eqh{\vr_\alpha\to\vr\quad\text{strongly in }L^p((0,T)\times\Omega), \quad p<\infty.} This justifies the passage to the limit in \eqref{conv_w0}. Therefore, one is able to pass to the limit $\alpha\to 0$ in both velocity equations \eqref{FG} and \eqref{FGtheta}. The limit functions $(\vr,\vw,\vv)=(\vr_n,\vw_n,\vv_n)$ satisfy the following system of equations: \begin{itemize} \item the momentum equation \begin{equation}\label{FG_a} \begin{split} &\langle\pt\lr{\vr_n\vw_n}(t),\vcg{\phi}\rangle_{(X_n^*,X_n)} -\intO{((\vr_n [\vw_n]_\delta - 2 \kappa \nabla \mu(\vr_n)) \otimes \vw_n)(t):\Grad \vcg{\phi}}\\ &\quad+ 2(1-\kappa)\intO{\mu(\vr_n)D(\vw_n)(t):\Grad \vcg{\phi}} + 2 \kappa\intO{\mu(\vr_n)A(\vw_n)(t): \Grad\vcg{\phi}}\\ &\quad-2\kappa(1-\kappa)\intO{\mu(\vr_n)\Grad\vv_n(t):\Grad\vcg{\phi}} +\ep\intO{\lap\vw_n(t)\cdot\lap\vcg{\phi}}\\ &\quad+\ep\intO{(1+|\Grad\vw_n|^2) \Grad\vw_n(t):\Grad\vcg{\phi}} =0, \end{split} \end{equation} satisfied for $\vcg{\phi}\in X_n$, $t\in[0,T]$, \item the auxiliary equation for $\vv_n$ \eq{\label{FGtheta_a} &\langle\pt\lr{\vr_n\vv_n}(t),\vcg{\xi}\rangle_{(Y_n^*,Y_n)} -\intO{((\vr_n [\vw_n]_\delta - 2 \kappa \nabla \mu(\vr_n)) \otimes \vv_n)(t):\Grad \vcg{\xi}}\\ &\quad+2\kappa\intO{\mu(\vr_n)\Grad\vv_n(t):\Grad \vcg{\xi}} + 2\kappa\intO{(\mu'(\vr_n)\vr_n - \mu(\vr_n))\Div \vv_n(t) \Div\vcg{\xi}}\\ &\quad-2 \intO{\mu(\vr_n)\Grad^t\vw_n(t):\Grad\vcg{\xi}}=0, } satisfied for $\vcg{\xi}\in Y_n$, $t\in[0,T]$. \end{itemize} However, so far we only know that the approximate continuity equation \eqref{A_cont} is satisfied in the sense of distributions \eq{\label{weak1} \intT{\langle\pt\vr_n,\phi\rangle_{(W^{-1,2}(\Omega), W^{1,2}(\Omega))}}-\intTO{\vr_n[\vw_n]_\delta\cdot\Grad\phi}+ 2\kappa\intTO{\Grad\mu(\vr_n)\cdot\Grad\phi}=0} for any test function $\phi$ from $L^2(0,T;W^{1,2}(\Omega))$.
But, on the other hand, we know that in the distributional sense $$\psi=\pt\vr_n- 2\kappa\Div\lr{\mu'(\vr_n)\Grad\vr_n}\in L^2((0,T)\times\Omega),$$ provided $\vw_n\in L^\infty(0,T; L^\infty(\Omega))$. But this is the case at the level of the Galerkin approximation, therefore also $$\psi\mu'(\vr_n)\in L^2((0,T)\times\Omega).$$ Taking the product of $\psi$ and $\psi\mu'(\vr_n)$ we obtain \eqh{\intO{\mu'(\vr_n)\lr{\pt\vr_n- 2\kappa\lap\mu(\vr_n)}^2}\leq c} and the above integral gives rise to the estimates \eq{\intO{\mu'(\vr_n)(\pt\vr_n)^2}+ 4 \kappa^2\intO{\mu'(\vr_n)(\lap\mu(\vr_n))^2}-4\kappa\intO{\mu'(\vr_n)\pt\vr_n\lap\mu(\vr_n)}\\ =\intO{\mu'(\vr_n)(\pt\vr_n)^2}+4 \kappa^2\intO{\mu'(\vr_n)(\lap\mu(\vr_n))^2} +2\kappa\Dt\intO{|\Grad\mu(\vr_n)|^2}\leq c.\label{h_reg} } Note that this estimate requires an $L^\infty((0,T)\times\Omega)$ bound for $\vw_n$, which is available only at the level of the Galerkin approximation. Nevertheless, regularity \eqref{h_reg} allows us to repeat (\ref{mu_w}-\ref{est}) to get the uniform (with respect to $n,\ \ep$ and $\delta$) estimate \eq{\|\Grad\vr_n\|_{L^\infty(0,T;L^{2}(\Omega))} +\|\lap\vr_n\|_{L^2((0,T)\times\Omega)}\leq c. \label{high}} \bigskip \bigskip \noindent{\bf Passage to the limit $n\to\infty$.} The biggest problem is again to pass to the limit in the term \eq{\mu'(\vr_n)\Grad\vr_n\otimes\vw_n,\label{conv_w}} which requires the strong convergence of the density and at least weak convergence of the gradient of the density, and in the convective term \eq{\vr_n\vw_n\otimes\vw_n\label{conv_w2}} which requires strong convergence of $\sqrt{\vr_n}\vw_n$. Having obtained estimate \eqref{high}, we can estimate the time derivative of the gradient of $\vr_n$. Indeed, differentiating \eqref{A_cont} with respect to $x$ we obtain \eqh{\pt\Grad\vr_n=-\Grad\lr{[\vw_n]_\delta\cdot\Grad\vr_n}- 2\kappa\Grad\lap\mu(\vr_n)\in L^2(0,T; W^{-1,3/2}(\Omega)).} Note that the above estimate is uniform also with respect to $\ep$. Now, applying the Aubin-Lions lemma to $\Grad\vr_n$ we obtain \eqh{\Grad\vr_n\to\Grad\vr\quad\text{strongly in } L^2(0,T;L^2(\Omega)),} therefore due to \eqref{max} we also have \eq{\vr_n\to\vr\quad\text{and}\quad \frac{1}{\vr_n}\to\frac{1}{\vr}\quad\text{strongly in } L^p(0,T;L^p(\Omega)) \label{s_rho}} for $p<\infty$ and \eq{\vr_n\vw_n\to\vr\vw\quad \text{weakly in } L^{p_1}(0,T; L^{q_1}(\Omega))\cap L^{p_2}(0,T; L^{q_2}(\Omega)),\label{k1}} where $ p_1<2, q_1<6, p_2<\infty,q_2<2.$ These convergences justify the limit passage in \eqref{conv_w}. To justify the passage to the limit in \eqref{conv_w2} we first estimate \eq{\|\Grad\lr{\vr_n\vw_n}\|_{L^2(0,T; L^{\frac{3}{2}}(\Omega))} \leq &\|\Grad\vr_n\|_{L^\infty(0,T;L^2(\Omega))}\|\vw_n\|_{L^2(0,T; L^6(\Omega))}\\ &+\|\Grad\vw_n\|_{L^2(0,T;L^2(\Omega))}\|\vr_n\|_{L^\infty(0,T;L^\infty(\Omega))}. \label{k2}} We can also estimate the time derivative of the momentum; from \eqref{FG_a} we obtain \eq{ &\sup_{\|\vcg{\phi}\|\leq1}\left|\intTO{\pt\lr{\vr_n\vw_n}\cdot\vcg{\phi}}\right|\\ &=\sup_{\|\vcg{\phi}\|\leq 1}\left\{\left|\intTO{((\vr_n [\vw_n]_\delta - 2 \kappa\mu'({\vr_n}) \nabla \vr_n) \otimes \vw_n):\Grad \vcg{\phi}}\right|\right.\\ &\quad\qquad+ 2(1-\kappa)\left|\intTO{\mu(\vr_n)D(\vw_n):\Grad \vcg{\phi}}\right|\\ &\quad\qquad+\ep\left|\intTOB{\lap\vw_n\cdot\lap\vcg{\phi}}\right| +\ep\left|\intTOB{(1+|\Grad\vw_n|^2) \Grad\vw_n:\Grad\vcg{\phi}}\right| \\ &\quad\qquad+\left.
2 \kappa\left|\intTO{\mu(\vr_n)A(\vw_n): \Grad\vcg{\phi}}\right| +2\kappa(1-\kappa)\left|\intTO{\mu(\vr_n)\Grad\vv_n:\Grad\vcg{\phi}}\right|\right\}, \label{ptw}} where $\|\vcg{\phi}\|$ denotes the norm in the space $W_T:=L^2(0,T;V\cap W^{2,2}(\Omega))\cap L^4(0,T;W^{1,4}(\Omega))$. Let us estimate the convective term \eqh{&\left|\intTO{((\vr_n [\vw_n]_\delta - 2 \kappa\mu'({\vr_n}) \nabla \vr_n) \otimes \vw_n):\Grad \vcg{\phi}}\right|\\ &\leq\intT{\|\Grad\vcg{\phi}\|_{L^6(\Omega)}\lr{R\|\vw_n\|^2_{L^\frac{12}{5}(\Omega)}+c(\kappa,R)\|\Grad\vr_n\|_{L^6(\Omega)}\|\vw_n\|_{L^\frac{3}{2}(\Omega)}}}\\ &\leq c(\kappa,R,\ep)\|\vcg{\phi}\|_{L^2(0,T; W^{2,2}(\Omega))}; } for the highest-order terms we have \eqh{\ep\left|\intTO{\lap\vw_n\cdot\lap\vcg{\phi}}\right|\leq \ep\|\vw_n\|_{L^2(0,T; W^{2,2}(\Omega))}\|\vcg{\phi}\|_{L^2(0,T; W^{2,2}(\Omega))}} and \eqh{\ep\left|\intTO{(1+|\Grad\vw_n|^2) \Grad\vw_n:\Grad\vcg{\phi}}\right| \leq \ep\intT{\|\Grad\vcg{\phi}\|_{L^4(\Omega)}\lr{\|\Grad\vw_n\|^3_{L^{4}(\Omega)}+\|\Grad\vw_n\|_{L^{\frac{4}{3}}(\Omega)}}}\\ \leq \ep\|\Grad\vcg{\phi}\|_{L^4(0,T; L^4(\Omega))}\lr{\|\Grad\vw_n\|^3_{L^4(0,T; L^4(\Omega))}+\|\Grad\vw_n\|_{L^{\frac{4}{3}}(0,T; L^{\frac{4}{3}}(\Omega))}} .} The other terms from \eqref{ptw} are less restrictive, therefore \eq{\|\pt\lr{\vr_n\vw_n}\|_{W_T^*}\leq c,\label{k3}} where $W_T^*$ denotes the dual space of $W_T$ defined above. Collecting \eqref{k1}, \eqref{k2}, \eqref{k3} and applying the Aubin-Lions lemma to $\vr_n\vw_n$, we prove that \eqh{\vr_n\vw_n\to\vr\vw\quad \text{strongly in } L^p(0,T;L^p(\Omega))} for some $p>1$, and therefore, thanks to \eqref{s_rho} and \eqref{ueu}, \eq{\Grad\vw_n\to\Grad\vw\quad \text{strongly in }L^p(0,T;L^p(\Omega))\label{strong_gw}} for $1\leq p<4$. In particular, convergence in \eqref{conv_w2} is proved. For future purposes we now estimate the time derivative of $\vr\vv$ in $L^{\frac{4}{3}}(0,T; W^{-1,\frac{4}{3}}(\Omega))$. We use \eqref{FGtheta_a} to obtain \eqh{ &\sup_{\|\vcg{\xi}\|\leq1}\left|\intTO{\pt\lr{\vr_n\vv_n}\cdot{\vcg{\xi}}}\right|\\ &= \sup_{\|\vcg{\xi}\|\leq1}\left\{\left|\intTO{((\vr_n [\vw_n]_\delta - 2 \kappa \mu'({\vr_n}) \nabla \vr_n) \otimes \vv_n):\Grad \vcg{\xi}}\right|\right.\\ &\qquad+ 2 \kappa\left|\intTO{\mu(\vr_n)\Grad\vv_n:\Grad \vcg{\xi}}\right| + 2\kappa\left|\intTO{(\mu'(\vr_n)\vr_n - \mu(\vr_n))\Div \vv_n\ \Div\vcg{\xi}}\right|\\ &\qquad+ 2\left.\left|\intTO{\mu(\vr_n)\Grad^t\vw_n:\Grad\vcg{\xi}}\right|\right\}} where $\|\vcg{\xi}\|$ denotes the norm in the space $L^4(0,T;W^{1,4}(\Omega))$. We will only estimate the convective term since it is the most restrictive one: \eqh{ &\left|\intTO{((\vr_n [\vw_n]_\delta - 2 \kappa \mu'({\vr_n}) \nabla \vr_n) \otimes \vv_n):\Grad \vcg{\xi}}\right|\\ &\leq\intT{\|\Grad\vcg{\xi}\|_{L^4(\Omega)}\lr{R\|\vw_n\|_{L^4(\Omega)}\|\vv_n\|_{L^2(\Omega)}+c(\kappa,R)\|\Grad\vr_n\|_{L^4(\Omega)}\|\vv_n\|_{L^2(\Omega)}}}\\ &\leq c(\kappa,R)\|\vcg{\xi}\|_{L^4(0,T; W^{1,4}(\Omega))}\|\vv_n\|_{L^2(0,T;L^2(\Omega))} \lr{\|\vw_n\|_{L^4(0,T; L^4(\Omega))}+\|\lap\vr_n\|^{\frac{1}{2}}_{L^2(0,T; L^2(\Omega))}}, } thus \eq{\|\pt\lr{\vr_n\vv_n}\|_{L^{\frac{4}{3}}(0,T; W^{-1,\frac{4}{3}}(\Omega))}\leq c.\label{k4}} Hence, the limit functions $(\vr,\vw,\vv)=(\vr_\delta,\vw_\delta,\vv_\delta)$ fulfil \begin{itemize} \item the continuity equation \eq{\pt\vr_\delta+\Div\lr{\vr_\delta[\vw_\delta]_\delta}- 2\kappa\lap\mu(\vr_\delta)=0\label{strong1}} a.e.
in $(0,T)\times\Omega$, \item the momentum equation \begin{equation}\label{FGn} \begin{split} \langle\pt\lr{\vr_\delta\vw_\delta},\vcg{\phi}\rangle_{(W_\tau^*, W_\tau)} -\inttauO{((\vr_\delta [\vw_\delta]_\delta - 2 \kappa \nabla \mu(\vr_\delta)) \otimes \vw_\delta):\Grad \vcg{\phi}}\\ &\quad+ 2(1-\kappa)\inttauO{\mu(\vr_\delta)D(\vw_\delta):\Grad \vcg{\phi}}\\ &\quad+\ep\inttauO{\lr{\lap\vw_\delta\cdot\lap\vcg{\phi}}} +\ep\inttauO{\lr{(1+|\Grad\vw_\delta|^2) \Grad\vw_\delta:\Grad\vcg{\phi}}}\\ &\quad+2 \kappa\inttauO{\mu(\vr_\delta)A(\vw_\delta): \Grad\vcg{\phi}} -2 \kappa(1-\kappa)\inttauO{\mu(\vr_\delta)\Grad\vv_\delta:\Grad\vcg{\phi}}=0, \end{split} \end{equation} for all $\vcg{\phi}\in W_\tau$ with $\tau\in[0,T]$, \item the auxiliary equation for $\vv$ \eq{\label{FGthetan} \langle\pt\lr{\vr_\delta\vv_\delta},\vcg{\xi}\rangle_{(L^{\frac{4}{3}}(0,\tau; W^{-1,\frac{4}{3}}(\Omega)), L^4(0,\tau; W^{1,4}(\Omega)))} \\ &\quad-\inttauO{((\vr_\delta [\vw_\delta]_\delta - 2 \kappa \nabla \mu(\vr_\delta)) \otimes \vv_\delta):\Grad \vcg{\xi}}+ 2\kappa\inttauO{\mu(\vr_\delta)\Grad\vv_\delta:\Grad \vcg{\xi}}\\ &\quad+ 2\kappa\inttauO{(\mu'(\vr_\delta)\vr_\delta - \mu(\vr_\delta))\Div \vv_\delta\ \Div\vcg{\xi}}-2 \inttauO{\mu(\vr_\delta)\Grad^t\vw_\delta:\Grad\vcg{\xi}}=0, } for all $\vcg{\xi}\in L^4(0,\tau; W^{1,4}(\Omega))$ with $\tau\in[0,T]$. \end{itemize} \bigskip \noindent{\bf Passage to the limit $\delta\to 0$ and identification of $\vv_\delta$ with $\Grad\vp(\vr_\delta)$ at the limit.} The aim of this paragraph is to let $\delta\to0$ in the equations \eqref{strong1}, \eqref{FGn} and \eqref{FGthetan}. This limit passage can be performed exactly as the passage $n\to \infty$ presented above. The only difference is that after this step we may drop the additional equation for $\vv$, thanks to the identification $\vv=2 \Grad\vp(\vr)$. Below we present the details of this reasoning. Note that the coefficients of the quasi-linear parabolic equation \eqref{strong1} (i.e. $[\vw_\delta]_\delta$) are sufficiently regular and the maximum principle \eqref{max} holds uniformly with respect to all approximation parameters. Therefore, the classical theory of Lady{\v{z}}enskaja, Solonnikov and Uralceva \cite{LSU} (Theorems 7.2, 7.3 and 7.4 from \cite{LSU}) can be applied to show further regularity of $\vr_\delta$; in particular, we have \eq{\pt\vr_\delta\in C([0,T]; C(\Omega)),\quad \vr_\delta\in C([0,T]; C^2(\Omega)).\label{reg_rg}} Let us now rewrite \eqref{strong1} using \eqref{rel1} as $$ \pt\vr_\delta + \Div\lr{\vr_\delta ([\vw_\delta]_\delta - 2 \kappa\, \nabla\vp(\vr_\delta)) } = 0$$ and therefore, multiplying the above equation by $\mu'(\vr_\delta)$, we obtain $$\partial_t\mu(\vr_\delta) + \Div\lr{\mu(\vr_\delta)([\vw_\delta]_\delta - 2 \kappa\, \Grad\vp(\vr_\delta))} - 2\kappa (\mu'(\vr_\delta)\vr_\delta - \mu(\vr_\delta)) \Delta \vp(\vr_\delta) = 0. $$ Differentiating it with respect to space, one gets in the sense of distributions \eq{ \pt(\vr_\delta \widetilde \vv_\delta) + \Div(\vr_\delta ([\vw_\delta]_\delta -2 \kappa\, \Grad\vp(\vr_\delta))\otimes \widetilde\vv_\delta ) -2\kappa\Grad\lr{ (\mu'(\vr_\delta)\vr_\delta - \mu(\vr_\delta)) \Div \widetilde\vv_\delta }\\ +2 \Div(\mu(\vr_\delta) \Grad^{t} [\vw_\delta]_\delta) - 2 \kappa \Div(\mu(\vr_\delta)\Grad\widetilde \vv_\delta) = 0,\label{test}} where by $\widetilde\vv_\delta$ we denoted $2\Grad\vp(\vr_\delta)$.
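Note that, thanks to \eqref{rel1}, the new unknown satisfies \eqh{\vr_\delta\widetilde\vv_\delta=2\vr_\delta\Grad\vp(\vr_\delta)=2\Grad\mu(\vr_\delta),} so \eqref{test} is nothing but the spatial gradient of the equation for $\mu(\vr_\delta)$ above, written in conservative form; it is the exact analogue of \eqref{vp} at the level of the regularized system.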
Due to the particular case \eqref{G-N} of the Gagliardo-Nirenberg interpolation inequality and to \eqref{max}, we know that $\Grad\vr_\delta$ is bounded, \eqh{\|\nabla \vr_\delta\|_{L^4(0,T; L^4(\Omega))}\leq c,} uniformly with respect to $\delta$. One can thus estimate the convective term of \eqref{test} in $L^2(0,T; W^{-1,2}(\Omega))$ uniformly with respect to $\delta$. Indeed, we know that $\Grad\vw_\delta$ is uniformly bounded in $L^4(0,T;L^4(\Omega))$ with respect to $\delta$, and therefore \eqh{&\sup_{\|\vcg{\xi}\|\leq1}\left|\intTO{ (\vr_\delta ([\vw_\delta]_\delta - 2 \kappa\, \Grad\vp(\vr_\delta))\otimes \widetilde\vv_\delta ):\Grad\vcg{\xi}}\right|\\ &\leq c(R) \|\Grad\vcg{\xi}\|_{L^2(0,T; L^2(\Omega))}\|\nabla \vr_\delta\|_{L^4(0,T; L^4(\Omega))} \lr{\|\nabla [\vw_\delta]_\delta\|_{L^4(0,T; L^4(\Omega))}+\|\nabla \vr_\delta\|_{L^4(0,T; L^4(\Omega))}} } for $\vcg{\xi}\in L^2(0,T; W^{1,2}(\Omega))$ (uniformly with respect to $\delta$), which justifies that \eq{ &\langle \pt(\vr_\delta \widetilde \vv_\delta), \vcg{\xi}\rangle_{(L^2(0,T; W^{-1,2}(\Omega)),L^2(0,T; W^{1,2}(\Omega)))}\\ &-\intTO{\vr_\delta ([\vw_\delta]_\delta - 2 \kappa\, \Grad\vp(\vr_\delta))\otimes \widetilde\vv_\delta :\Grad\vcg{\xi}} + 2\kappa\intTO{ (\mu'(\vr_\delta)\vr_\delta - \mu(\vr_\delta)) \Div \widetilde\vv_\delta \Div\vcg{\xi} }\\ & - 2\intTO{\mu(\vr_\delta) \Grad^{t} [\vw_\delta]_\delta:\Grad\vcg{\xi}} + 2\kappa \intTO{\mu(\vr_\delta)\Grad\widetilde \vv_\delta:\Grad\vcg{\xi}} = 0\label{test2}} is satisfied for any $\vcg{\xi}\in L^2(0,T; W^{1,2}(\Omega))$. \begin{rmk} As we noticed in \eqref{reg_rg}, the regularity of $\vr_\delta$ is in fact much higher and allows us to formulate the equation for $\widetilde\vv_\delta$ in a much stronger sense than merely \eqref{test2}. This formulation, however, will be used when passing to the limit with respect to $\ep$, after passing to the limit with respect to $\delta$. \end{rmk} We now want to show that $\widetilde\vv_\delta -\vv_\delta$ tends to $0$ in an appropriate norm when $\delta$ goes to zero. To this purpose let us expand \eq{ I= & \Dt\intO{\vr_\delta\frac{|\vv_\delta-\widetilde\vv_\delta|^2}{2}} + 2 \kappa \intO{\mu(\vr_\delta)|\nabla (\vv_\delta - \widetilde\vv_\delta)|^2} \\ & + 2\kappa\intO{(\mu'(\vr_\delta)\vr_\delta - \mu(\vr_\delta))(\Div(\vv_\delta -\widetilde \vv_\delta))^2}\\ =& \Dt\intO{\vr_\delta\lr{\frac{|\vv_\delta|^2}{2}-\vv_\delta \cdot\widetilde\vv_\delta+\frac{|\widetilde\vv_\delta|^2}{2}}}\\ &+ 2\kappa\intO{\mu(\vr_\delta)\lr{|\nabla\vv_\delta |^2 +|\nabla \widetilde\vv_\delta|^2 - 2 \nabla \vv_\delta\cdot \nabla \widetilde \vv_\delta}}\\ & +2 \kappa\intO{(\mu'(\vr_\delta)\vr_\delta - \mu(\vr_\delta))\lr{|\Div \vv_\delta|^2 + |\Div \widetilde \vv_\delta|^2 - 2 \Div \vv_\delta \Div \widetilde \vv_\delta}}. \label{expand} } To handle the first term, let us notice that, letting $n\to \infty$ in \eqref{t1}, using the lower semi-continuity of convex functions and the strong convergence of $\Grad\vw_n$ established in \eqref{strong_gw}, we obtain \eq{\label{t11} &\Dt\intO{\vr_\delta\frac{|\vv_\delta|^2}{2}} + 2\kappa\intO{\mu(\vr_\delta)|\Grad\vv_\delta|^2}\\ &\quad+ 2\kappa\intO{(\mu'(\vr_\delta)\vr_\delta - \mu(\vr_\delta))(\Div \vv_\delta)^2}- 2 \intO{\mu(\vr_\delta)\Grad^t\vw_\delta:\Grad\vv_\delta}\leq0.
} Now, the last term in \eqref{expand} can be computed using $\vcg{\xi}=\widetilde\vv_\delta$ in \eqref{test2}; we have \eqh{&\Dt\intO{\vr_\delta\frac{|\widetilde\vv_\delta|^2}{2}} +2 \kappa\intO{ (\mu'(\vr_\delta)\vr_\delta - \mu(\vr_\delta)) (\Div \widetilde\vv_\delta)^2 }\\ & - 2\intO{\mu(\vr_\delta) \Grad^{t} [\vw_\delta]_\delta :\Grad\widetilde\vv_\delta} + 2\kappa \intO{\mu(\vr_\delta)|\Grad\widetilde \vv_\delta|^2}=0. } The middle term in \eqref{expand} equals \eq{\Dt\intO{\vr_\delta\vv_\delta\cdot\widetilde\vv_\delta}=\intOB{\pt\lr{\vr_\delta\vv_\delta}\cdot\widetilde\vv_\delta + \vv_\delta \cdot\pt(\vr_\delta \widetilde\vv_\delta ) - \partial_t\vr_\delta\, \vv_\delta\cdot \widetilde \vv_\delta}\label{der_mixed}} and the first two terms make sense and can be handled using $\vcg{\xi}=\widetilde\vv_\delta$ in \eqref{FGthetan} and $\vcg{\xi} = \vv_\delta$ in \eqref{test2}. Note that $\widetilde \vv_\delta$ and $\partial_t \vr_\delta$ are, due to \eqref{reg_rg}, regular enough to justify the integrability of the last term in \eqref{der_mixed}, and we can write \eqh{&\intO{\pt\vr_\delta\vv_\delta\cdot\widetilde\vv_\delta}\\ &=\intO{\lr{\vr_\delta ([\vw_\delta]_\delta - 2 \kappa\, \nabla\vp(\vr_\delta)) } \otimes\vv_\delta:\Grad\widetilde\vv_\delta} +\intO{\lr{\vr_\delta ([\vw_\delta]_\delta - 2 \kappa\, \nabla\vp(\vr_\delta)) } \otimes\widetilde\vv_\delta:\Grad\vv_\delta}. } Therefore, after summing all expressions together and after some manipulation, we can show that \eqh{I-\intO{\mu(\vr_\delta)\Grad^t[\vw_\delta]_\delta:\Grad\widetilde\vv_\delta} -\intO{\mu(\vr_\delta)\Grad^t\vw_\delta:\Grad\vv_\delta}\\ +\intO{\mu(\vr_\delta)\Grad^t\vw_\delta:\Grad\widetilde\vv_\delta} +\intO{\mu(\vr_\delta)\Grad^t[\vw_\delta]_\delta:\Grad\vv_\delta}\leq 0, } in particular $$I \leq \intO{\mu(\vr_\delta)\Grad^t([\vw_\delta]_\delta - \vw_\delta ): \Grad (\widetilde\vv_\delta - \vv_\delta)}.$$ Note that the r.h.s. of this inequality tends to $0$ when $\delta\to 0$. Indeed, we can bound $ \Grad (\widetilde\vv_\delta - \vv_\delta)$ in $L^2(0,T; L^2(\Omega))$ uniformly with respect to $\delta$, and $[\vw_\delta]_\delta\to \vw$ strongly in $L^p(0,T; W^{1,p}(\Omega))$ for $p<4$. Therefore, using \eqref{expand}, we conclude that $\vv_\delta - \widetilde\vv_\delta$ converges to zero in $L^\infty(0,T;L^2(\Omega))\cap L^2(0,T;H^1(\Omega))$ when $\delta \to 0$. $\Box$ The limit functions $(\vr,\vw)=(\vr_\ep,\vw_\ep)$ fulfil \begin{itemize} \item the continuity equation \eq{\pt\vr_\ep+\Div\lr{\vr_\ep\vw_\ep}- 2\kappa\lap\mu(\vr_\ep)=0\label{cont_ep}} a.e.
in $(0,T)\times\Omega$, \item the momentum equation \begin{equation}\label{FG_ep} \begin{split} & \langle\pt\lr{\vr_\ep\vw_\ep},\vcg{\phi}\rangle_{(W_\tau^*, W_\tau)} -\inttauO{((\vr_\ep\vw_\ep - 2\kappa \nabla \mu(\vr_\ep)) \otimes \vw_\ep):\Grad \vcg{\phi}}\\ &\quad+ 2(1-\kappa)\inttauO{\mu(\vr_\ep)D(\vw_\ep):\Grad \vcg{\phi}}\\ &\quad+\ep\inttauO{\lr{\lap\vw_\ep\cdot\lap\vcg{\phi}}} +\ep\inttauO{\lr{(1+|\Grad\vw_\ep|^2) \Grad\vw_\ep:\Grad\vcg{\phi}}}\\ &\quad+2 \kappa\inttauO{\mu(\vr_\ep)A(\vw_\ep): \Grad\vcg{\phi}}- 4 \kappa(1-\kappa)\inttauO{\mu(\vr_\ep)\Grad^2\vp(\vr_\ep):\Grad\vcg{\phi}}=0, \end{split} \end{equation} for $\vcg{\phi}\in W_\tau$, $\tau\in[0,T]$, \item the auxiliary equation for $\Grad\vp(\vr_\ep)$ \eq{\label{FGtheta_ep} \langle\pt\Grad\mu(\vr_\ep),\vcg{\xi}\rangle_{(L^2(0,\tau; W^{-1,2}(\Omega)),L^2(0,\tau; W^{1,2}(\Omega)))} \\ &\quad-\inttauO{((\vr_\ep\vw_\ep- 2\kappa \nabla \mu(\vr_\ep)) \otimes \Grad\vp(\vr_\ep)):\Grad \vcg{\xi}}+2\kappa\inttauO{\Grad\mu(\vr_\ep):\Grad \vcg{\xi}}\\ &\quad+2\kappa\inttauO{(\mu'(\vr_\ep)\vr_\ep - \mu(\vr_\ep))\lap\vp(\vr_\ep)\ \Div\vcg{\xi}}-\inttauO{\mu(\vr_\ep)\Grad^t\vw_\ep:\Grad\vcg{\xi}}=0, } for $\vcg{\xi}\in L^2(0,\tau; W^{1,2}(\Omega))$, $\tau\in[0,T]$. \end{itemize} \bigskip \noindent {\bf Passage to the limit with respect to $\varepsilon$: existence of global solutions.} Let us start this paragraph by recalling the estimates that are uniform with respect to $\ep$. Passing to the limit $\delta\to 0$ in \eqref{aa1}, using the weak convergence of $\Grad\vr_\delta$ and the strong convergence of $\vr_\delta$, together with the strong convergence of $\Grad\vw_\delta$, the weak convergence of $\lap\vw_\delta$, and a standard argument based on the convexity of the norm, we obtain \eq{\label{aa2} \Dt&\intO{\vr_\ep\lr{\frac{|\vw_\ep|^2}{2} +(1-\kappa)\kappa\frac{|2\nabla\vp(\vr_\ep)|^2}{2}}}+2(1-\kappa)\intO{\mu(\vr_\ep) |D(\vw_\ep) -2\kappa\Grad^2\vp(\vr_\ep)|^2} \\ &+\ep\intOB{|\lap\vw_\ep|^2+(1+|\Grad\vw_\ep|^2) |\Grad\vw_\ep|^2}\\ &+ 2\kappa\intO{\mu(\vr_\ep) |A(\vw_\ep)|^2}+ 2(1-\kappa)\intO{(\mu'(\vr_\ep)\vr_\ep-\mu(\vr_\ep))|2\kappa \lap\vp(\vr_\ep)|^2}\leq0. } We also have \eqref{max} and thus we can deduce from the above inequality that \eq{\|\vw_\ep\|_{L^\infty(0,T;H)}+\|\vw_\ep\|_{L^2(0,T;V)} +\ep^{1/2}\|\vw_\ep\|_{L^2(0,T;V\cap W^{2,2}(\Omega))}+ \ep^{1/4}\|\Grad\vw_\ep\|_{L^4(0,T;L^4(\Omega))}\\ +\|\Grad\vr_\ep\|_{L^\infty(0,T;L^2(\Omega))} +\|\vr_\ep\|_{L^2(0,T;W^{2,2}(\Omega))}\leq c. \label{v3}} Having obtained estimate \eqref{v3}, we are ready to perform the last limit passage and to deduce the existence of weak solutions to the original system \eqref{main1} in the sense of Definition \ref{Def1}. This can in fact be done in almost exactly the same way as the proof of sequential stability of solutions from the paper \cite{BrEsSy07}. The only difference is the passage to the limit in the term $\Grad\mu(\vr)\otimes\Grad\vp(\vr)$ in \eqref{FG_ep} and \eqref{FGtheta_ep}. To this purpose we need the following compensated compactness lemma (see for instance \cite{PLL}, Lemma 5.1). \begin{lemma}\label{lem_compac} Let $g_n$, $h_n$ converge weakly to $g$, $h$ respectively in $L^{p_1}(0,T;L^{p_2}(\Omega))$, $L^{q_1}(0,T;L^{q_2}(\Omega))$ where $1\leq p_1,p_2 \leq \infty$, $\frac{1}{p_1}+ \frac{1}{q_1}=\frac{1}{p_2}+ \frac{1}{q_2}= 1$. We assume in addition that \begin{itemize} \item $\partial_t g_n$ is bounded in $L^1(0,T;W^{-m,1}(\Omega))$ for some $m\geq 0$ independent of $n$.
\item $\|h_n-h_n(t,\cdot+\xi)\|_{L^{q_1}(L^{q_2})} \rightarrow 0$ as $|\xi|\rightarrow 0$, uniformly in $n$. \end{itemize} Then $g_nh_n$ converges to $gh$ in $\mathcal{D}'$. \end{lemma} We will apply this lemma to $g_n=\Grad\mu(\vr_\ep)$ and $h_n=\Grad\vp(\vr_\ep)$. First of all, let us notice that exactly as in the previous sections we have $\vr_\ep\to\vr$ strongly in $L^p((0,T)\times\Omega)$ for any finite $p$, and so we also have that $\mu(\vr_\ep),\ \vp(\vr_\ep)$ converge strongly to $\mu(\vr),\ \vp(\vr)$, respectively. Due to \eqref{v3} both $g_n$ and $h_n$ are bounded in $L^2((0,T)\times\Omega)$ and they converge weakly to $g=\Grad\mu(\vr)$, $h=\Grad\vp(\vr)$. From the same estimate it follows that $\Grad h_n$ is uniformly bounded in $L^2((0,T)\times\Omega)$. Moreover, the convective term in \eqref{FGtheta_ep} is bounded uniformly with respect to $\ep$ for $\vcg{\xi}\in L^2(0,T; W^{1,3}(\Omega))$, therefore we may estimate $\pt g_n$ in $L^2(0,T; W^{-1,\frac{3}{2}}(\Omega))$; thus the product $g_n h_n$ converges to $\Grad\mu(\vr)\otimes\Grad\vp(\vr)$ in the sense of distributions on $(0,T)\times\Omega$. Finally, the passage $\ep\to0$ in all $\ep$-dependent terms of \eqref{FG_ep} gives $0$ due to the uniform estimates from \eqref{v3}. To conclude, let us check in which sense the initial conditions are attained. Since the time derivative of $\Grad\vr_\ep$ is bounded in $L^2(0,T; W^{-1,\frac{3}{2}}(\Omega))$ and $\Grad\vr\in L^\infty(0,T; L^2(\Omega))$, we may use the Arzel\`a-Ascoli theorem to verify that $\Grad\vr_\ep\to\Grad\vr$ in $C([0,T]; L^2_{\rm weak}(\Omega))$ \footnote{$\zeta\in C([0,T]; L^2_{\rm weak}(\Omega))$ iff $\lim_{t\to t_0}|\langle\eta,\zeta(t)-\zeta(t_0)\rangle|=0$, $\forall \eta\in L^2(\Omega)$, $\forall t_0\in[0,T]$}, and also $\vr_\ep\vw_\ep\to\vr\vw$ in $C([0,T]; L^2_{\rm weak}(\Omega))$. Moreover, using a version of the Aubin-Lions lemma we obtain that $\vr$ is strongly continuous, i.e. $\vr_\ep\to\vr$ in $C([0,T]; L^2(\Omega))$. $\Box$ \section{Proof of Theorem \ref{T_main2}} \subsection{Part 1, passage to the limit $\kappa\to 0$} The aim of this section will be to let $\kappa\to 0$ in \eqref{main1} and to show that the sequence of solutions $(\vr_\kappa,\vw_\kappa)$ converges to $(\vr,\vu)$, a weak solution of the non-homogeneous incompressible Navier-Stokes system \eqref{mainINS} \begin{equation}\label{main2} \begin{array}{c} \pt\vr+ \Div(\vr \vu) = 0,\\ \pt\lr{\vr \vu}+ \Div (\vr \vu \otimes \vu) - 2 \Div (\mu(\vr) D(\vu)) + \Grad {\pi_1} =\vc{0},\\ \Div\vu = 0. \end{array} \end{equation} \noindent {\bf Strong convergence of the density.} We start the proof of Theorem \ref{T_main2} by proving that the uniform estimates obtained in the previous section can be used to deduce the strong convergence of the density.
First let us recall that uniformly with respect to $\kappa$ we have \eq{0<r\leq\vr_\kappa \leq R<\infty.\label{max_vr_k}} For any fixed $\kappa$ we have \eqh{\vw_\kappa\in{L^2(0,T; V)}\cap L^\infty(0,T;H).} However, the bounds from \eqref{v2} allow us only to show that \eqh{ &\vu_\kappa\to\vu\quad \text{weakly\ in\ } L^2(0,T;H^1(\Omega))\\ & \kappa\vr_\kappa\to 0 \quad \text{strongly\ in\ }L^\infty(0,T;H^1(\Omega))\cap L^2(0,T; H^2(\Omega)),\\ &\vw_\kappa\to\vw\quad \text{weakly$^*$\ in\ } L^\infty(0,T;H). } Thus it follows that \eq{ & \vw_\kappa=\vu_\kappa+ 2\kappa \Grad \vp(\vr_\kappa)\to \vu \quad \text{weakly\ in\ }L^2(0,T;H^1(\Omega)),\\ &\vw_\kappa\to\vu\quad \text{weakly$^*$\ in\ } L^\infty(0,T;H),\label{weak_w}} in particular \eq{\vw_\kappa\to \vu \quad \text{weakly\ in\ }L^2(0,T;V).} Moreover, we know that the pair $(\vr_\kappa,\vw_\kappa)$ satisfies the equation \eq{\pt \vr_\kappa + \vw_\kappa\cdot\Grad \vr_\kappa - 2\kappa \lap\mu(\vr_\kappa) = 0\label{ca}} in the sense of distributions on $(0,T)\times\Omega$. As a consequence, one may deduce, using the Aubin-Lions lemma and the Arzel\`a-Ascoli theorem, that there exists a subsequence s.t. \eq{\label{conv} \vr_\kappa\to\vr \quad \text{in\ } C([0,T]; W^{-m,p}(\Omega)),\quad\text{and}\quad \vr_\kappa\to\vr \quad \text{in\ } C([0,T]; L^{p}_{\rm{weak}}(\Omega))} for $m>0$, $1\leq p<\infty$. Using this and \eqref{weak_w} we prove that \eqh{\vr_\kappa\vw_\kappa\to \vr\vu,\quad\text{weakly\ in\ } L^p(0,T;(L^p(\Omega))^3)} for some $p>1$. Therefore the limit pair $(\vr,\vu)$ satisfies the continuity equation \eq{\pt \vr + \vu\cdot\Grad \vr = 0\label{lim_cont}} in the sense of distributions on $(0,T)\times\Omega$ and $$\Div\vu=0\quad \text{a.e. in }(0,T)\times\Omega.$$ Now, our aim will be to prove the strong convergence of the density, which is the content of the following lemma. \begin{lemma}\label{L_strong} Let $(\vr_\kappa,\vw_\kappa)_{\kappa>0}$ be a sequence of solutions satisfying the above weak convergences. Then $\vr_\kappa$ converges to $\vr$ strongly in $C([0,T];L^p(\Omega))$ for all $1\leq p<\infty$. Moreover, $\vr$ is the unique solution of \eqref{lim_cont}. \end{lemma} \pf The uniqueness of $\vr$ and the fact that $\vr$ belongs to $C([0,T];L^p(\Omega))$ rely on the fact that if $\vu\in L^2(0,T;V)$, $\vr\in L^\infty((0,T)\times\Omega)$ and \eqref{lim_cont} is satisfied, then for any $\beta\in C(\mathbb{R})$, $\beta(\vr)$ also solves \eqref{lim_cont}. This can be proven exactly as in P.-L.~Lions \cite{PLL}, Theorem 2.4, Step 2; thus the task is only to show the strong convergence of the density. For this purpose, let us observe that since $\vr,\vu$ satisfy \eqref{lim_cont}, taking $\beta(\vr)=\vr^2$ we obtain the equation \eq{\pt \vr^2 + \vu\cdot\Grad\vr^2 = 0,\label{lim_cont_ren}} therefore, integrating over $(0,t)\times\Omega$, we check that \eq{\label{vrlim} \intO{\lr{\vr(t)}^2}=\intO{\lr{\vr^0}^2} } for all $t\in[0,T]$. Next, multiplying the approximate equation \eqref{ca} by $2\vr_\kappa$ and integrating by parts, we obtain \eq{\label{vrkappa} \intO{\lr{\vr_\kappa (t)}^2}=-4\kappa\int_0^t\intO{\mu'(\vr_\kappa)|\Grad\vr_\kappa|^2}\,{\rm d}s+\intO{\lr{\vr_\kappa^0}^2}. } Taking the sup in time, passing to the limit and using the strong convergence of the initial data, we therefore obtain \eqh{\limsup_{\kappa\to0} \sup_{t\in [0,T]} \intO{\lr{\vr_\kappa (t)}^2}\leq \intO{\lr{\vr^0}^2}= \sup_{t\in [0,T]} \intO{\lr{\vr(t)}^2}.
} Note, however, that by the lower semicontinuity of norms we know that $$ \sup_{t\in [0,T]} \intO{(\vr(t))^2} \le \liminf_{\kappa\to 0} \sup_{t\in [0,T]} \intO{(\vr_\kappa(t)) ^2}.$$ Combining these two inequalities, we finally obtain \eqh{\lim_{\kappa\to0} \sup_{t\in [0,T]} \intO{\lr{\vr_\kappa (t)}^2}= \sup_{t\in [0,T]} \intO{\lr{\vr(t)}^2}. } Coming back to \eqref{vrkappa}, we first deduce the strong convergence $\kappa \int_0^t\intO{\mu'(\vr_\kappa) |\nabla \vr_\kappa|^2}\,{\rm d}\tau \to 0$ for all $t \in (0,T]$, and therefore \eqh{\lim_{\kappa\to0} \intO{\lr{\vr_\kappa (t)}^2}= \intO{\lr{\vr(t)}^2} } for all $t\in[0,T]$, on account of \eqref{vrlim}. In view of the convergence in $C([0,T];L^2_{{\rm weak}}(\Omega))$, this implies that $$\vr_\kappa\to \vr\quad \text{in\ } C([0,T]; L^{2}(\Omega))$$ and the statement of Lemma \ref{L_strong} follows by \eqref{conv}. $\Box$ \bigskip \noindent {\bf Passage to the limit in the momentum equation.} On account of \eqref{weak_w} and the strong convergence of the density established above, the passage to the limit in \eqref{weak_mom} requires checking the limit in the nonlinear terms $\vr_\kappa\vw_\kappa\otimes\vw_\kappa$ and $\kappa\Grad\mu(\vr_\kappa)\otimes\Grad\vp(\vr_\kappa)$. The passage to the limit in the first term can be justified by an application of Lemma \ref{lem_compac} with $g_n=\vr_\kappa\vw_\kappa$, $h_n=\vw_\kappa$; the details are left to the reader. To pass to the limit in the second term we only need to show that $\kappa|\Grad\vr_\kappa|^2$ converges to $0$. Note that this does not follow from the estimates obtained in \eqref{v2}, which only provide a uniform $L^1$ bound of the term in question. To solve this problem we recall that we have proved above that \eqh{\lim_{\kappa\to0}\kappa\intTO{\mu'(\vr_\kappa)|\Grad\vr_\kappa|^2}=0,} which due to \eqref{max_vr_k} implies that \eqh{\kappa|\Grad\vr_\kappa|^2\to0\quad\text{strongly in}\ L^1((0,T)\times\Omega).} Therefore, the limit momentum equation reads \eqh{ &\intTO{\vr\vu\cdot\pt\vphi}+\intTO{\vr\vu\otimes\vu:\Grad\vphi}-2\intTO{\mu(\vr)D(\vu):\Grad\vphi}=-\intO{\vu^0\cdot\vphi(0)}} and is satisfied for $\vphi\in (C^\infty((0,T)\times\Omega))^3$, s.t. $\Div\vphi=0$ and $\vphi(T)=\vc{0}$. \subsection{Part 2, passage to the limit $\kappa\to 1$} The aim of this section will be to let $\kappa\to 1$ in \eqref{main1} and to show that the sequence of solutions $(\vr_\kappa,\vw_\kappa)$ converges to the global weak solution $(\vr,\vu)$ of the Kazhikhov-Smagulov type system, i.e. \eqref{main2}. The basic idea is again to use the estimates \eqref{v2}, \eqref{max_vr_k}, \eqref{est} and deduce the convergences \begin{equation}\label{conv2} \begin{gathered} \vw_\kappa\rightharpoonup \vw \quad \text{weakly in } L^2(0,T;V) \text{ and weakly$^*$ in } L^\infty(0,T;H),\\ \vr_\kappa\to\vr\quad \text{weakly\ in\ } L^p((0,T)\times\Omega),\ 1\leq p<\infty,\text{ and weakly in } L^2(0,T;H^1(\Omega)),\\ \vr_\kappa\to\vr \quad \text{in\ } C([0,T]; L^{p}_{\rm{weak}}(\Omega)),\ 1\leq p<\infty,\\ (1-\kappa)\vr_\kappa\to 0 \quad \text{strongly\ in\ }L^\infty(0,T;H^1(\Omega))\cap L^2(0,T; H^2(\Omega)), \\ \vr_\kappa \rightharpoonup \vr \hbox{ weakly$^*$ in } L^\infty(0,T;H^1(\Omega)), \qquad \vr_\kappa \rightharpoonup \vr \hbox{ weakly in } L^2 (0,T;H^2(\Omega)), \end{gathered} \end{equation} and to pass to the limit in the continuity equation. Then the main difference with respect to the limit process in the previous subsection concerns the strong convergence of the density.
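In this respect, let us first record why the time derivative of the density is controlled uniformly in $\kappa$ (a standard observation, spelled out here for the reader's convenience, since it is invoked in the Aubin-Lions argument below): as $\Div\vw_\kappa=0$, equation \eqref{ca} can be rewritten in divergence form as \eqh{\pt\vr_\kappa=-\Div\lr{\vr_\kappa\vw_\kappa- 2\kappa\Grad\mu(\vr_\kappa)},} and both fluxes $\vr_\kappa\vw_\kappa$ and $\Grad\mu(\vr_\kappa)$ are bounded in $L^2((0,T)\times\Omega)$ uniformly with respect to $\kappa$, thanks to \eqref{max_vr_k} and the convergences listed above; hence $\pt\vr_\kappa$ is bounded in $L^2(0,T;H^{-1}(\Omega))$.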
Note that the fourth convergence is of no use here, because it is not uniform with respect to $1-\kappa$. The strong convergence $$\vr_\kappa\to \vr\quad \text{in\ } C([0,T]; L^2 (\Omega))$$ follows directly from a variant of the Aubin-Lions lemma, see for instance Corollary 4 in \cite{S87} (using the uniform estimate of $\vr_\kappa$ in $L^\infty(0,T;H^1(\Omega))$ and the estimate of $\partial_t\vr_\kappa$ in $L^2(0,T;H^{-1}(\Omega))$). Therefore, we conclude that the strong convergence holds in $C([0,T];L^p(\Omega))$ for all $1\le p<+\infty$, using \eqref{conv2}${}_3$. Remark that, coming back to the energy estimate, we deduce $$(\kappa \mu'(\vr_\kappa))^{1/2} \nabla\vr_\kappa \to (\mu'(\vr))^{1/2} \nabla\vr \hbox{ in } L^2(0,T;L^2(\Omega)).$$ It remains to pass to the limit in the fifth term, the most demanding and the only new one in comparison to \cite{BrEsSy07}, in the weak formulation of the momentum equation \eqref{weak_mom}. For this purpose one needs to check that $(1-\kappa)|\Grad\vr_\kappa|^2\to 0$ in $L^1((0,T)\times \Omega)$. But that is an immediate consequence of the fact that $\Grad\vr_\kappa$ is uniformly bounded in $L^2((0,T)\times\Omega)$. We therefore pass to the limit in \eqref{weak_mom} and find that the limit system \eq{ &\intTO{\vr\vw\cdot\pt\vphi}+\intTO{\vr(\vw-\kappa\Grad\vp(\vr))\otimes\vw:\Grad\vphi}\\ &\quad-2\intTO{\mu(\vr)A(\vw):\Grad\vphi} =-\intO{\vw^0\cdot\vphi(0)}\label{weak_mom1}} holds for $\vphi\in (C^\infty((0,T)\times\Omega))^3$, s.t. $\Div\vphi=0$ and $\vphi(T)=\vc{0}$, with $A(\vw)= (\nabla \vw - \nabla^t \vw)/2$. Using the regularity, we can write the weak formulation in terms of $(\vr,\vu)$. This provides the convergence to a global weak solution of the Kazhikhov-Smagulov-type system, for which the global existence of solutions was proved in \cite{BrEsSy07}. \section{Proof of Theorem \ref{T_main3} and an example.} \label{S:gen} Below we present only the proof of the a priori estimate for more general viscosity and conductivity coefficients. The rest of the proof of existence of solutions would require only minor modifications, so we skip this part.\\ Relation \eqref{gen_pm} leads to a more general form of $\vw$; for the purposes of this section we denote \eq{&\vw=\vu+ 2\Grad \tilde\vp(\vr),\\ &\Div\vw=0 \label{def_w2}} and we define a new function $\tilde\mu(\vr)$ that satisfies $$\vr\tilde\vp'(\vr)=\tilde\mu'(\vr).$$ Using this notation, system \eqref{Limref1} reads \begin{equation}\label{mainor} \begin{array}{c} \pt\vr+ \Div(\vr \vu) = 0,\\ \pt\lr{\vr \vw}+ \Div (\vr \vu \otimes \vw) - 2 \Div (\mu(\vr) D(\vu)) + 2 \Div (\tilde\mu(\vr) \Grad^t \vu) + \Grad {\pi_1} =\vc{0},\\ \Div\vw = 0. \end{array} \end{equation} We can again rewrite the momentum equation as \eqh{\pt\lr{\vr \vw}+ \Div (\vr \vu \otimes \vw) - 2 \Div ([\mu(\vr)-\tilde\mu(\vr)] D(\vu)) -2 \Div (\tilde\mu(\vr) A (\vu)) + \Grad {\pi_1} =\vc{0},} and the energy estimate for this form is \eq{\label{22} \frac{1}{2}\Dt&\intO{\vr|\vw|^2}+ 2 \intO{[\mu(\vr)-\tilde\mu(\vr)] |D(\vu)|^2}\\ &+ 2 \intO{\tilde\mu(\vr)|A(\vu)|^2}+ 4 \intO{(\mu(\vr)-\tilde\mu(\vr))\Grad\vu:\Grad^2\tilde\vp(\vr)}=0. } Now, an extra estimate for $\Grad\tilde\vp(\vr)$ can be obtained by mimicking the steps leading to \eqref{b}; we have \eq{\label{bb2} \Dt\intO{\vr\frac{|\Grad\tilde\vp(\vr)|^2}{2}}-\intO{\tilde\mu(\vr)\Grad\vu:\Grad^2\tilde\vp(\vr)}-\intO{(\tilde\mu'(\vr)\vr-\tilde\mu(\vr))\Div\vu\lap\tilde\vp(\vr)}=0.
} Next, we multiply \eqref{bb2} by the constant $4 \xi$ and add it to \eqref{22} to find \eq{\label{kaa} \Dt\intO{\vr\lr{\frac{|\vw|^2}{2}+\xi\frac{|2\Grad\tilde\vp(\vr)|^2}{2}}} + 2 \intO{(\mu(\vr) -\tilde\mu(\vr))|D(\vu)|^2}+ 2\intO{\tilde\mu(\vr)|A(\vu)|^2}\\ + 4 \intO{(\mu(\vr)-\tilde\mu(\vr)-\xi\tilde\mu(\vr))\Grad\vu:\Grad^2\tilde\vp(\vr)} + 2\xi\intO{(\tilde\mu'(\vr)\vr-\tilde\mu(\vr))|\Div\vu|^2}=0. } Therefore, using the equivalence \eqref{Ddiv}, we obtain \eq{\label{kaa2} \Dt&\intO{\vr\lr{\frac{|\vw|^2}{2}+\xi\frac{|2\Grad\tilde\vp(\vr)|^2}{2}}} +2\intO{(\mu(\vr) -\tilde\mu(\vr)) |D(\vu)- \frac{1}{d} \Div\vu\,\vc{I}|^2}\\ &+2\intO{\tilde\mu(\vr)|A(\vu)|^2} +4\intO{(\mu(\vr)-\tilde\mu(\vr)-\xi\tilde\mu(\vr))\Grad\vu:\Grad^2\tilde\vp(\vr)}\\ &+ 2 \intO{\lr{\xi(\tilde\mu'(\vr)\vr-\tilde\mu(\vr))+\frac{\mu(\vr) -\tilde\mu(\vr)}{d}}|\Div\vu|^2}=0. } Using the fact that $\intO{\Grad\vu:\Grad^2\tilde\vp(\vr)}=\intO{D(\vu):\Grad^2\tilde\vp(\vr)}$ we rewrite the fourth term \eq{\label{integ0} \Dt&\intO{\vr\lr{\frac{|\vw|^2}{2}+\xi\frac{|2\Grad\tilde\vp(\vr)|^2}{2}}} + 2\intO{(\mu(\vr) -\tilde\mu(\vr)) |D(\vu)- \frac{1}{d} \Div\vu\,\vc{I}|^2}\\ &+2\intO{\tilde\mu(\vr)|A(\vu)|^2} + 4\intO{(\mu(\vr)-\tilde\mu(\vr)-\xi\tilde\mu(\vr))\lr{D(\vu)-\frac{1}{d}\Div\vu \,\vc{I}}:\Grad^2\tilde\vp(\vr)}\\ &- 2\intO{\frac{\mu(\vr)-\tilde\mu(\vr)-\xi\tilde\mu(\vr)}{d}|\Div\vu|^2}\\ &+ 2 \intO{\lr{\xi(\tilde\mu'(\vr)\vr-\tilde\mu(\vr))+\frac{\mu(\vr) -\tilde\mu(\vr)}{d}}|\Div\vu|^2}=0 } and thus finally \eq{\label{integ1} \Dt&\intO{\vr\lr{\frac{|\vw|^2}{2}+\xi\frac{|2\Grad\tilde\vp|^2}{2}}} + 2\intO{\lr{\mu(\vr) -\tilde\mu(\vr)} |D(\vu)- \frac{1}{d} \Div\vu\,\vc{I}|^2}\\ &+ 2 \intO{\tilde\mu(\vr)|A(\vu)|^2} +4\intO{(\mu(\vr)-\tilde\mu(\vr)-\xi\tilde\mu(\vr))\lr{D(\vu)-\frac{1}{d}\Div\vu\, \vc{I}}:\Grad^2\tilde\vp(\vr)}\\ &+2\intO{\xi\lr{\tilde\mu'(\vr)\vr+\frac{1-d}{d}\tilde\mu(\vr)}|\Div\vu|^2}=0. } Let us now denote $$J_1(\vr)= \tilde\mu'(\vr)\vr+\frac{1-d}{d}\tilde\mu(\vr), \qquad J_2(\vr)= \mu(\vr) -\tilde \mu(\vr). $$ From \eqref{c_gen} we know that there exists a positive constant $c$ such that \eq{J_2(\vr) \ge c > 0, \qquad J_1(\vr)\geq c >0 \quad\hbox{ on } \quad[r,R] \label{j1j2},} therefore the second and the last integrals in \eqref{integ1} are non-negative; in fact we have \eq{\label{integ11} \Dt&\intO{\vr\lr{\frac{|\vw|^2}{2}+\xi\frac{|2 \Grad\tilde\vp(\vr)|^2}{2}}} + 2 \intO{J_2(\vr) |D(\vu)- \frac{1}{d} \Div\vu\, \vc{I}|^2}\\ &+2 \intO{\tilde\mu(\vr)|A(\vu)|^2} +4 \intO{\lr{J_2(\vr)-\xi\tilde\mu(\vr)}\lr{D(\vu)-\frac{1}{d}\Div\vu\, \vc{I}}:\Grad^2\tilde\vp(\vr)}\\ &+2 \intO{\xi J_1(\vr)|\Div\vu|^2}=0. } So, in order to deduce uniform estimates from \eqref{integ1} we need to show that the penultimate term can be controlled by the positive contributions from the l.h.s. To this purpose let us write \eq{\nonumber I= \intO{(J_2(\vr)-\xi\tilde\mu(\vr)) (D(\vu)- \frac{1}{d}\Div \vu\, \vc{I}) :\Grad^2\tilde\vp(\vr)} \\ = \intO{\sqrt{J_2(\vr)}(D(\vu)-\frac{1}{d}{\Div \vu}\,\vc{I}):\frac{J_2(\vr)-\xi\tilde\mu(\vr)}{\sqrt{J_2(\vr)}} \Grad^2 \tilde\vp(\vr) }, } then, by the Cauchy-Schwarz and Young inequalities, $$I \le \intO{\frac{(J_2(\vr)-\xi\tilde\mu(\vr))^2}{2 J_2(\vr)} |\Grad^2\tilde\vp(\vr)|^2}+ \intO{\frac{J_2(\vr)}{2} |D(\vu)-\frac{1}{d}{\Div \vu}\, \vc{I}|^2 } $$ and the last term is absorbed by the l.h.s. of \eqref{integ11}.
Therefore, on account of the equality \eqh{\|\Grad^2 \tilde\vp(\vr)\|_{L^2(\Omega)} = \|\lap \tilde\vp(\vr)\|_{L^2(\Omega)},} it remains to assume that \eqh{\max_{\vr\in[r,R]} \frac{(J_2(\vr)-\xi\tilde\mu(\vr))^2}{ 2J_2(\vr)}\leq \xi \min_{\vr\in[r,R]} J_1(\vr),} which is equivalent to \eqref{c_gen}. $\Box$ \medskip Let us now consider a special case, which corresponds to the physical case of the low Mach number approximation for dense gases. \begin{prop}\label{Example} Assume that all the previous assumptions are satisfied and let \eqref{c_gen} be replaced by $$\mu(\vr)=\vr,\quad \tilde\mu(\vr)=\log\vr.$$ Then there exists a non-empty interval $[\tilde r,\tilde R]$ such that if \eqh{0<\tilde r\leq \vr^0\leq \tilde R<\infty,} there exists a global in time weak solution to \eqref{Limref1}. \end{prop} \pf Conditions \eqref{c_gen} for $\vr=s=const.$ become \begin{equation} \begin{gathered} c\leq s-\log s,\\ \frac{(s-\log s-\xi\tilde\mu(s))^2}{2 \lr{s-\log s}}\leq \xi c_1,\quad\text{where}\quad c_1\leq 1+\frac{1-d}{d}\log\tilde r, \end{gathered} \label{c_s} \end{equation} which means that $\xi\in\left[\xi^-,\xi^+\right]$, where \eqh{ \xi^\pm=\frac{\lr{s-\log s}(\tilde\mu(s)+c_1)\pm \lr{s-\log s}\sqrt{c_1(2\tilde\mu(s)+c_1)}}{(\tilde\mu(s))^2}. } Now if $s$ is such that $\tilde\mu(s)>-\frac{c_1}{2}$, then at least $\xi^+$ is positive. In fact, taking $\xi^0 =J_2(s)(\tilde\mu(s)+c_1)/(\tilde\mu(s))^2$, i.e. the midpoint of this interval, we always know that $\xi^0\in[\xi^-,\xi^+]$, so it satisfies conditions \eqref{c_s}. Let us now note that the second condition in \eqref{c_s} is continuous with respect to $s$. Therefore, from the above considerations it follows that there exists a neighbourhood $(\tilde r,\tilde R)$ of $s$ in which \eqref{c_s} is satisfied. $\Box$\\ \smallskip This example shows that Theorem \ref{T_main3} generalizes the global in time existence of weak solutions shown by {\sc P.--L. Lions} (see \cite{PLL}, Chapter 8.8) for the two-dimensional case to the three-dimensional case. \section{Application to the model of the gaseous mixture}\label{S:mix} As a particular application of the above theory, let us consider the system which describes the flow of a compressible $n$-component fluid in a domain $\Omega \subset \R^d$, $d=2,3$ \begin{equation}\label{1.1} \begin{array}{c} \pt\vr+\Div (\vr \vu) = 0,\\ \pt\lr{\vr\vu}+\Div (\vr \vu \otimes \vu) + \Div \vc{S} + \Grad p =\vc{0},\\ \pt\lr{\vr e}+\Div (\vr e\vu )+p\Div\vu +\Div\bf{Q}+ \vc{S}:\Grad\vu=0,\\ \pt\vr_k+\Div (\vr_k \vu)+ \Div (\vf_{k}) = 0,\quad k\in \{1,\ldots,n\}, \end{array} \end{equation} where the unknowns are the species densities $\vr_k=\vr_k(t,x)$, $k\in \{1,\ldots,n\}$, the total mass density $\vr=\vr(t,x)$, $\vr=\sumkN{\vr_{k}}$, the velocity vector field $\vu=\vu(t,x)$ and the absolute temperature $T=T(t,x)$. Further, $\vc{S}$ denotes the viscous tensor, $p$ the internal pressure of the fluid, $e$ the internal energy, $\vc{Q}$ the heat flux, $m_k$ the molar mass of the $k$-th species, and $\vf_{k}$ the diffusion flux of the $k$-th species. \medskip \noindent{\bf The equation of state.} For simplicity we consider a mixture of ideal gases with constant specific heats, i.e. $$p=\sumkN p_k=\sumkN \frac{R\vr_k T}{m_k},\quad \vr e=\sumkN \vr_k e_k=\sumkN c_{vk}\vr_{k}T,$$ where $c_{vk}$ and $c_{pk}$ are the constant-volume and the constant-pressure specific heats, related by \eq{c_{pk}=c_{vk}+\frac{R}{m_k},\label{defcp}} with $m_k$, as above, the molar mass of the $k$-th species.
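Let us note in passing (an elementary consequence of the above formulas, recorded here since it will reappear below) that summing the partial pressures gives $$p=\sumkN \frac{R\vr_k T}{m_k}=\frac{R\vr T}{\Ov{m}}, \qquad\text{where}\qquad \frac{\vr}{\Ov{m}}=\sumkN\frac{\vr_k}{m_k}$$ defines the mean molar mass $\Ov{m}$ of the mixture; this is precisely the form of the pressure constraint $P_0=R\vr T/\Ov{m}$ appearing in the low Mach number system below.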
\medskip \noindent{\bf The diffusion fluxes.} We consider the so-called multicomponent diffusion without taking into account the Soret effect. The diffusion fluxes may depend on $\vr, T,\vr_1,\ldots,\vr_n$ as follows \begin{equation} \vf_{k}=-C_0\sum_{l=1}^n C_{kl}\vc{d}_l, \label{eq:diff} \end{equation} where $C_{0},\ C_{kl}$ are multicomponent flux diffusion coefficients; by $\vd_k$ we denote the diffusion force of the $k$-th species specified, in the absence of external forces, by the following relation \begin{equation}\label{eq:} \vd_k=\Grad\left({p_{k}\over p}\right)+\left({p_{k}\over p}-{\vr_{k}\over \vr}\right)\Grad\log{p}. \end{equation} We assume that \begin{equation}\label{formD} {C}_{kl}=D_{kl}\vr_{k},\quad k,l\in\{1,\ldots,n\}, \end{equation} where $D_{kl}$ is symmetric and positive definite over the physical hyperplane ${U}^{\bot}$, the orthogonal complement of ${ U}=(1,\ldots,1)^{T}$; we refer the reader to \cite{VG}, Chapters 4 and 7, for more details. \medskip \noindent{\bf The heat flux.} We neglect the transfer of energy due to species molecular diffusion, the so-called Dufour effect. The heat flux thus takes the following form \begin{equation}\label{heat} \vc{Q}=\sumkN c_{pk}T \vf_{k}-k\Grad T, \quad k>0, \end{equation} where the heat conductivity coefficient $k$ may depend smoothly on $\vr$ and $T$. \medskip \subsection{Low Mach system and simplifications} When the Mach number is assumed to be small and for appropriate boundary conditions, system \eqref{1.1} takes the following form \begin{equation}\label{1.3} \begin{array}{c} \pt\vr+\Div (\vr \vu) = 0,\\ \pt\lr{\vr\vu}+\Div (\vr \vu \otimes \vu) + \Div \vc{S} + \Grad \Pi =\vc{0},\\ \sumkN c_{vk}\pt\lr{\vr_k T}+\sumkN c_{vk}\Div\lr{\vr_k T\vu}+P_0 \Div\vu=\Div\lr{k\Grad T}-{\sumkN\Div\lr{c_{pk}T\vf_k}},\\ \pt\vr_k+\Div (\vr_k \vu)+ \Div (\vf_{k}) = 0,\quad k\in \{1,\ldots,n\},\\ P_0=\frac{R\vr T}{\Ov{m}}, \end{array} \end{equation} where $P_0$ denotes the constant pressure, $\Ov{m}$ denotes the mean molar mass of the mixture given by \eqh{ \frac{\vr}{\Ov{m}}=\sumkN\frac{\vr_k}{m_k},} and the diffusion fluxes have the following simplified form \begin{equation} \vf_k=-C_0\sumlN C_{kl}\Grad\lr{\frac{p_k}{P_0 }}\label{eq:1}. \end{equation} Below we propose some hypotheses that are adequate for a low Mach number, constant Lewis number, binary mixture. \begin{enumerate} \item[A1.] The number of species is $n=2$. \item[A2.] The heat capacities at constant volume per unit mole $C_{vk}$ are equal: \eqh{C_{vk}=m_k c_{vk}= C_v, \quad k=1,2. \label{ass1}} \item[A3.] The diffusion matrix coefficients are given by \eqh{C=\frac{1}{\vr}\lr{\begin{array}{rr} \vr_2&-\vr_1\\ -\vr_2& \vr_1 \end{array}},\quad C_0=c_0(\vr)\frac{m_1m_2}{\Ov{m}^2}, \label{Cform}} where $c_0(\vr)>0$. \item[A4.] The heat conductivity coefficient $k$ depends on the concentrations of the species and on $c_0$: \eqh{k=\frac{c_0(\vr)\lr{C_v+R}}{\Ov{m}}=\frac{c_0(\vr) C_p}{\Ov{m}}, \label{ass_symp_2}} which is equivalent to the assumption that the Lewis number is equal to 1.
\end{enumerate} Under these simplifications system (\ref{1.3}) may be rewritten as \begin{equation}\label{symp2} \begin{array}{c} \pt\vr+\Div (\vr \vu) = 0,\\ \pt\lr{\vr\vu}+\Div (\vr \vu \otimes \vu) + \Div \vc{S} + \Grad \Pi =\vc{0},\\ \displaystyle P_0 \Div\vu=\Div\lr{\frac{c_0 R}{\Ov{m}}\Grad T}-\sumkN\Div\frac{RT \vf_k}{m_k},\\ \pt\vr_k+\Div (\vr_k \vu)+ \Div \vf_{k} = 0,\quad k\in \{1,2\},\\ \displaystyle P_0 =\frac{R\vr T}{\Ov{m}}, \end{array} \end{equation} where \eqh{\vf_1=-c_0(\vr)\Grad Y_1,\quad \vf_2=-c_0(\vr)\Grad Y_2,\quad Y_i=\frac{\vr_i}{\vr},\ i=1,2.\label{fF}} Let us now observe that \eq{\Div\vu&=\Div\lr{\frac{c_0(\vr) R}{P_0 \Ov{m}}\Grad T-\sumkN\frac{RT \vf_k}{P_0 m_k}}\\ &=\Div\lr{\frac{c_0(\vr) R}{P_0 \Ov{m}}\Grad T+\frac{c_0(\vr)RT }{P_0 }\Grad\lr{\frac{1}{\Ov{m}}}}\\ &=\Div\lr{c_0(\vr)\Grad\lr{\frac{1}{\vr}}}. \label{1.41}} So, finally, our system can be rewritten as \begin{equation}\label{symp3} \begin{array}{c} \pt\vr+\Div (\vr \vu) = 0,\\ \pt\lr{\vr\vu}+\Div (\vr \vu \otimes \vu) - 2 \Div (\mu(\vr) D(\vu)) - \Grad (\lambda(\vr)\Div\vu) + \Grad \Pi =\vc{0},\\ \Div\vu=\Div(c_0(\vr)\Grad\vr^{-1}),\\ \pt\vr_k+\Div (\vr_k \vu)-\Div(c_0(\vr)\Grad Y_k) = 0,\quad k\in \{1,2\},\\ \displaystyle P_0=\frac{R\vr T}{\Ov{m}}. \end{array} \end{equation} Note that the first three equations of \eqref{symp2} form the low Mach number model obtained by {\sc P.--L. Lions} in \cite{PLL}. The extension of this system to combustion models was presented by {\sc A.~Majda} in \cite{Majda84} for a binary mixture. In \cite{Embid87} {\sc P. Embid} proved the local-in-time existence of a unique regular solution to the zero Mach number equations for a reacting multi-component compressible mixture. Decoupling the first three equations from the subsystem of reaction-diffusion equations we obtain the Kazhikhov-Smagulov type system. The first global existence result for the particular choice of viscosity coefficient $$\mu(\vr)=\frac{c_0}{2}\log\vr$$ and for the initial density bounded away from zero is due to {\sc D. Bresch} {\it et al.} \cite{BrEsSy07}. Concerning the reaction-diffusion equations, the existence analysis for such a system in the case of diffusion (\ref{eq:diff}-\ref{eq:}) and with the diffusion matrix as in \eqref{Cform} was recently performed by {\sc P. B. Mucha}, {\sc M. Pokorn\'y} and {\sc E. Zatorska} in \cite{MPZ}. Their result holds under certain regularity assumptions imposed on $\vr$ and $\vu$. \bigskip \subsection{Existence of solutions} Our strategy is to first resolve the combustion system looking for $\vr,\vu$ and then use them to handle the reaction-diffusion equations. For simplicity, we assume that $\Omega=\mathbb{T}^3$ and we supplement system \eqref{symp3} by the initial conditions \begin{equation} \begin{array}{c} \vr(0,x)=\vr^0(x),\quad \vu(0,x)=\vu^0(x)\quad x\in\Omega,\\ \vr_k(0,x)=\vr_k^0(x),\quad0< r_k\leq \vr_k^0(x)\leq R_k<\infty,\quad k=1,2,\\ \vr_1^0+\vr_2^0=\vr^0. \label{init1} \end{array} \end{equation} \begin{rmk} We restrict ourselves to the case when the initial density $\vr^0$ is bounded away from vacuum. This is due to the nonlinearity in the continuity equation. For example, for a model of a pollutant (studied in {\rm \cite{BrEsSy07}}) there also exists an existence result for $\vr^0\geq0$ {\rm \cite{Sy2005}}.
\end{rmk} Taking $$\kappa=c_0(\vr)/\tilde c_0(\vr),\quad\text{and}\quad \vp'(\vr)=\tilde c_0(\vr)\vr^{-2},$$ we obtain from relation \eqref{rel1} that $$\mu'(\vr)=\tilde c_0(\vr)\vr^{-1}$$ and the first three equations of \eqref{symp3} give exactly the system \eqref{main1} studied at the beginning of this paper. \begin{rmk} For example, for $c_0(\vr)=\kappa$, condition \eqref{important} is satisfied provided the bound from above for the density $R$ is sufficiently small, i.e. $$1-\lr{1-\frac{1}{d}}\log R\geq0. $$ \end{rmk} In general we have to assume that \eqh{ \min_{\vr\in[r,R]} \lr{\frac{1-d}{d}\int_r^\vr \tilde c_0(s)s^{-1}{\rm d}s+\tilde c_0(\vr)}\geq c > 0. } With this information at hand we may combine the elements of the proof from \cite{MPZ1} in order to construct a weak solution to the system of reaction-diffusion equations for the species. The difference is that now we consider a general form of the diffusion matrix $C$ but, on the other hand, the form of the diffusion flux $\vf_k$ is reduced to \eqref{eq:1}. Thanks to this, the species equations read \eqh{\pt(\vr_\ep Y_k)+\Div (\vr_\ep Y_k \vu_\ep)-\Div(c_0(\vr_\ep)\Grad Y_k) = 0,\quad k=1,2,} where $Y_k=\frac{\vr_k}{\vr}$ and $\vr_\ep,\ \vu_\ep$ denote the convolutions of $\vr, \vu$ with a standard regularizing kernel. Due to the maximum principle for $\vr$ \eqref{max_vr}, this is a system of weakly coupled semi-linear parabolic equations. The existence of strong solutions and the limit passage $\ep\to0$ are then straightforward. \section{Application to the ghost effect system}\label{S:ghost} In this section, we give some comments regarding a particular case of the ghost effect system that can be found, for instance, in a recent paper of {\sc C.D. Levermore, W. Sun, K. Trivisa} \cite{LeSuTr} \begin{equation*} \begin{array}{c} \vr T=1, \qquad \partial_t \vr + \Div (\vr \vu) = 0, \\ \pt (\vr \vu) + \Div (\vr \vu \otimes \vu) +\nabla P^*= - \Div\vc{\Sigma}-\Div\tilde{\vc{\Sigma}} ,\\ \frac{5}{2}\Div\vu =\Div\lr{k(T)\Grad T}, \end{array} \end{equation*} where $k(T)>0$ is the heat conductivity coefficient, while the two parts of the stress tensor, $\vc{\Sigma}$ and $\tilde{\vc{\Sigma}}$, are defined as follows \eqh{ \vc{\Sigma}&=\mu(T)\lr{\Grad\vu+\Grad^t\vu-\frac{2}{3}\Div\vu \vc{I}},\\ \tilde{\vc{\Sigma}}&=\tau_1(\vr,T)\lr{\Grad^2T-\frac{1}{3}\lap T\vc{I}}+\tau_2(\vr,T)\lr{\Grad T\otimes\Grad T-\frac{1}{3}|\Grad T|^2\vc{I}}, } where $\tau_1,\tau_2$ are transport coefficients with $\tau_1>0$. Now, let us consider a particular form of the heat-conductivity coefficient, such that $$\frac{2}{5}\frac{k\lr{{\vr}^{-1}}\Grad\vr}{\vr^2}=\kappa\Grad\log\vr,$$ and let us choose $\tau_1=c\vr^2$, $\tau_2=-c\vr^3$; then we obtain \eqh{ \tau_1\Grad^2 T&= c\vr^2\Grad^2\lr{\frac{1}{\vr}}=-c\vr\Grad^2\log\vr+c\vr^{-1}\Grad\vr\otimes\Grad\vr,\\ \tau_1\lap T&=-c\vr\lap\log\vr+c\frac{|\Grad\vr|^2}{\vr}, } and therefore \eqh{\tau_1(\vr, T)\lr{\Grad^2 T-\frac{1}{3}\lap T\vc{I}}&=-c\vr\Grad^2\log\vr +c\vr^{-1}\Grad\vr\otimes\Grad\vr-\frac{c}{3}\frac{|\Grad\vr|^2}{\vr}\vc{I}+\frac{c}{3}\vr\lap\log\vr\,\vc{I},\\ \tau_2(\vr, T)\lr{\Grad T\otimes\Grad T-\frac{1}{3}|\Grad T|^2\vc{I}}&=-c\vr^{-1}\Grad\vr\otimes\Grad\vr+\frac{c}{3}\frac{|\Grad\vr|^2}{\vr}\vc{I}, } and hence \eqh{\tilde{\vc{\Sigma}}=-c\vr\Grad^2\log\vr+\frac{c}{\kappa}\vr\Div\vu\,\vc{I},} and the last term, after taking the divergence, may be absorbed by a part of $\vc{\Sigma}$ or put into the pressure.
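For the reader's convenience, let us record the elementary identities behind the computation above (they follow by direct differentiation): \eqh{\Grad^2\lr{\frac{1}{\vr}}=-\frac{\Grad^2\vr}{\vr^2}+\frac{2\,\Grad\vr\otimes\Grad\vr}{\vr^3},\qquad \Grad^2\log\vr=\frac{\Grad^2\vr}{\vr}-\frac{\Grad\vr\otimes\Grad\vr}{\vr^2},} so that multiplying the first identity by $c\vr^2$ and inserting the second one gives $\tau_1\Grad^2 T=c\vr^2\Grad^2\lr{\vr^{-1}}=-c\vr\Grad^2\log\vr+c\vr^{-1}\Grad\vr\otimes\Grad\vr$; taking the trace yields the formula for $\tau_1\lap T$.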
Finally, the ghost system can be rewritten as follows \begin{equation}\label{Ghost} \begin{array}{c} \partial_t \vr + \Div (\vr \vu) = 0, \\ \pt (\vr \vu) + \Div (\vr \vu \otimes \vu) +\nabla \pi - 2 \Div (\mu(\vr) D(\vu)) - c\, \Div(\vr \nabla\nabla \log\vr) = 0,\\ \Div \vu = - 2 \kappa \Delta \log\vr. \end{array} \end{equation} Let us focus on the specific form of the viscosity coefficient $\mu(\vr)= \bar{\mu}\vr$; thanks to the Bohm identity we have $$\Div(\vr\nabla\nabla \log\vr) = 2\vr \nabla (\frac{1}{\sqrt\vr}\Delta \sqrt\vr).$$ System \eqref{Ghost} with the above relation corresponds to the low Mach number version, with large heat release, of the quantum Navier-Stokes system studied by {\sc A. J\"ungel} in \cite{Ju} and by {\sc M. Gisclon} and {\sc I. Violet} in \cite{GiVi}; see also the work by {\sc B. Haspot} \cite{Ha14}. First we check that system \eqref{Ghost} satisfies the following energy estimate \eq{\label{estimghost} \Dt&\intO{\vr\lr{\frac{|\vw|^2}{2} +[(1-\kappa)\kappa+c] \frac{|2\Grad\log\vr|^2}{2}}} + 2(1-\kappa)\bar{\mu}\intO{\vr |D( \vu)|^2} \\ &+ 2\kappa\bar{\mu}\intO{\vr |A(\vu)|^2} + c\kappa \bar{\mu} \intO{\vr|\Grad^2\log\vr|^2}=0 } with $\vw=\vu + 2 \kappa \bar{\mu} \nabla\log\vr$. Moreover, we have from the maximum principle $$r\le \vr \le R,$$ and the above estimate implies that for $0<\kappa< 1$ we have $\vr \in L^2(0,T;H^2(\Omega)).$ This is the final argument needed to justify the global in time existence of weak solutions to the ghost effect system. System \eqref{Ghost}, in comparison to the system studied in the first part of the paper, has a new term which needs to be handled, namely \eqh{\Div(\vr \nabla\nabla \log\vr)=\Grad\lap\vr- 4\,\Div\lr{\Grad\sqrt{\vr}\otimes\Grad\sqrt{\vr}}.} The difficulty in passing to the limit lies in the last term of this expression, which requires the strong convergence of $\Grad\sqrt{\vr}$ in $L^2((0,T)\times\Omega)$. This is obtained using the regularity provided by \eqref{estimghost} and the mass equation, via the standard Aubin-Lions lemma. \medskip \begin{rmk} A similar regularity argument was used by {\sc I. Kostin, M. Marion, R.~Texier-Picard} and {\sc V.A. Volpert} \cite{KMT-PV} to prove the global existence of solutions to the incompressible Korteweg system, but with an additional diffusion in the mass (concentration) equation. \end{rmk} \begin{rmk} Estimate \eqref{estimghost} is not really helpful for performing the limit $\kappa\to 0$. The global existence of weak solutions to the system with $\kappa=0$ is in fact an open problem due to the lack of convergence in the nonlinear capillary term, namely in the quantity $\nabla\sqrt\vr \otimes\nabla\sqrt\vr$. \end{rmk} \bigskip \noindent {\bf Acknowledgments.} The authors thank the referee for his/her valuable comments which improved the quality of the paper. The first and the third author acknowledge the support from the ANR-13-BS01-0003-01 project DYFICOLTI. The second and the third author acknowledge the Post-Doctoral support of Ecole Polytechnique. The third author was also supported by MN grant IdPlus2011/000661 and by the fellowship START of the Foundation for Polish Science.
\section{Introduction} Magnetic fields are universally present in astronomical bodies ranging from the Earth to the distant quasars, but it is still unknown if magnetic fields permeate the universe as a whole. Astrophysical magnetic fields may arise due to the existence of a primordial magnetic field that grows as galaxies form. The discovery of an extragalactic magnetic field on scales larger than virialized systems (i.e., larger than clusters of galaxies) would reveal the presence of a primordial field. The existence of such a primordial field would help understand the origin of astrophysical magnetic fields and may open a new window into processes occurring in the early universe. In this letter, we show that the study of ultra high energy cosmic rays (UHE CRs) with energies above $\simeq10^{18}{\,\rm eV}$ can probe primordial fields below the current upper bound. Our proposed method is complementary to traditional ones (e.g., Kronberg 1994) and more recent suggestions (e.g., Plaga 1995). At present, the most stringent constraint on large scale extragalactic fields comes from limits on the Faraday rotation of light coming to us from distant quasars. The upper bound on a widespread, all-pervading field is $\sim 10^{-9}$ Gauss (e.g., Kronberg 1994). There are weaker constraints derived from the synchrotron emission from nearby galaxy clusters (Kim et al.~1989) and the cosmic microwave background isotropy. UHE CR nucleons from extragalactic sources are attenuated in energy while propagating through the cosmic microwave background (CMB). Above a few times $10^{19}{\,\rm eV}$, nucleons produce pions on the CMB photons and the energy of the cosmic ray nucleons is degraded rapidly, which is known as the Greisen-Zatsepin-Kuz'min (GZK) effect (Greisen 1966; Zatsepin \& Kuz'min 1966). The pions produced, on the other hand, decay into photons and secondary leptons such as electrons, muons, and neutrinos. Since muons decay to electrons and neutrinos, the final secondary particles are photons, electrons, and neutrinos, among which photons are more readily detectable. Very energetic secondary photons and electrons couple to form an electromagnetic (EM) cascade. The $\gamma$-rays produce electron pairs on the CMB and the universal radio background photons (see Lee \& Sigl 1995, Lee 1995 and references therein for more detailed discussions). The resulting electrons (or positrons) in turn upscatter background photons via inverse Compton scattering (ICS), thus completing a cycle. It is through these two processes that an EM cascade develops in the intergalactic medium. As a result, if one propagates a purely protonic spectrum, one gets a processed nucleon spectrum with the GZK cutoff, and a secondary EM (photons and electrons) spectrum. The extragalactic magnetic field (EGMF) influences the UHE CR flux mainly through their charged components, namely the primary hadrons and the secondary electrons. In the energy range under consideration the hadrons are deflected with negligible energy loss due to synchrotron radiation, whereas the electrons are negligibly deflected before they lose most of their energy. In \S 2, we explore the effect of the electron energy loss due to the EGMF on the secondary EM cascade spectrum. We then discuss the deflection of the hadronic component in the EGMF, in \S 3. Finally, in \S 4, we summarize our findings. \section{Extragalactic Magnetic Field and Ultrahigh Energy {$\gamma$}-Ray Flux} The EGMF plays a crucial role in the development of the EM cascade.
In the presence of the magnetic field, electrons lose energy via synchrotron radiation loss, which is given by \begin{equation} \frac{dE}{dt} = - \frac{e^4 B^2}{24\pi^2 m_e^4} E^2 = -\frac{2}{3} r_0^2 B^2 \left( \frac{E}{m_e} \right)^2, \end{equation} where $B$ is the strength of the large scale EGMF, $r_0$ is the classical electron radius, and $m_e$ is the electron mass. In fig.~1, we show the rates of ICS and synchrotron loss for electrons. Whereas the rate of ICS responsible for EM cascade development decreases with energy, the synchrotron loss rate increases with energy. Therefore, in a narrow energy range a transition occurs between a regime where ICS is dominant and electrons couple to photons efficiently, and another where electrons are rapidly lost due to synchrotron loss and the cascade is suppressed. Below $\simeq 10^{20}{\,\rm eV}$ (the threshold for pair production on the radio background), cascade development in the absence of an EGMF would give rise to a generic power law photon spectrum with index $\simeq-1.5$. The above mentioned transition in the secondary {$\gamma$}-ray spectrum will therefore occur between this generic cascade shape and a synchrotron loss dominated spectrum. As long as synchrotron quanta can be neglected, the latter is given by the photons produced ``directly'' by source injection or from pion production by nucleons, before undergoing pair production in the low energy photon background. In a magnetic field of strength $B$ measured in G (Gauss) the synchrotron spectrum produced by an electron of energy $E_e$ peaks at \begin{equation} E_{\rm syn} \simeq 6.8 \times 10^{13} \left( \frac{E_e}{10^{21} {\,\rm eV}} \right)^2 \left( \frac{B}{10^{-9}\,{\rm G}} \right) {\,\rm eV}\,,\label{synch} \end{equation} and falls off exponentially at higher energies. In the following we assume that the observable EM flux is energetically dominated by {$\gamma$}-rays and electrons with energy $E\la10^{21}{\,\rm eV}$. Then, according to eq.~[\ref{synch}], the contribution of synchrotron radiation to the {$\gamma$}-ray flux above $10^{18}{\,\rm eV}$ can be safely neglected as long as $B\la10^{-6}\,{\rm G}$ everywhere. The energy where the transition in the {$\gamma$}-ray spectrum occurs is in general a function of the magnetic field strength and the background photon spectrum, but {\em not\/} a function of the source distance or the injection spectrum. If we consider only the CMB, the relation between the transition energy $E_{\rm tr}$ and the magnetic field strength is $E_{\rm tr}\propto B^{-1}$. In order to include the less well known diffuse extragalactic radio background into consideration, we adopt its usual description by a power law with an overall amplitude and a lower frequency cutoff as parameters (Clark, Brown, \& Alexander 1970), the latter being the main source of uncertainty. Using a cutoff at 2 MHz as suggested by Clark et al.~(1970), the above relation is modified to \begin{equation} E_{\rm tr} \simeq 10^{19} \left(\frac{B}{10^{-9}\,{\rm G}}\right)^{-1.3}~{\,\rm eV} ~~(B \ga 10^{-10}\,{\rm G}) \ . \end{equation} For the same magnetic field a cutoff at lower frequencies would increase the rate of ICS of the then more abundant low frequency radio photons and thus the value for $E_{\rm tr}$ (see fig.~1). Assuming that the radio cutoff frequency lies somewhere in the range between 0.5 MHz and 3 MHz, for a given $E_{\rm tr}$ the EGMF strength $B$ is uncertain to within about a factor of 5.
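To illustrate this scaling (taking the 2 MHz radio cutoff at face value), the last relation gives \begin{displaymath} E_{\rm tr} \simeq 10^{19}{\,\rm eV} \quad {\rm for} \quad B=10^{-9}\,{\rm G}\,, \qquad E_{\rm tr} \simeq 10^{19}\times10^{1.3}{\,\rm eV} \simeq 2\times10^{20}{\,\rm eV} \quad {\rm for} \quad B=10^{-10}\,{\rm G}\,, \end{displaymath} so that for fields at or below $\sim10^{-10}\,{\rm G}$ the transition moves beyond the pair production threshold on the radio background.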
Therefore, for $B\ga10^{-10}\,{\rm G}$ it is possible to approximately determine the EGMF strength by searching for a dip in the $\gamma$-ray flux below $10^{20}{\,\rm eV}$ which would mark a transition between an ICS and a synchrotron loss dominated regime. For $B \la 10^{-10}\,{\rm G}$, this transition occurs above the pair production threshold on the radio background, where the $\gamma$-ray flux increasingly depends on several unknown factors such as the charged cosmic ray flux above the GZK cutoff. Thus, even though the $\gamma$-ray flux can be comparable to the nucleon flux above $10^{20}{\,\rm eV}$, a discussion of possible magnetic field signatures in its spectrum would presently be too speculative. We developed a numerical code for the propagation of nucleons, photons, and electrons through the intergalactic medium which employs a transport equation formalism, the details of which can be found in Lee (1995). The observed UHE CR flux below $10^{20}{\,\rm eV}$ (see, e.g., Bird et al.~1994; Yoshida et al.~1995) is reproduced quite well by a diffuse distribution of sources injecting protons with a spectrum $\propto E^{-2.3}$ up to some maximal energy considerably beyond the GZK cutoff (Yoshida \& Teshima 1993; Sigl et al.~1995). For the calculations presented here we therefore adopted this proton injection spectrum with a maximal energy of $10^{22}{\,\rm eV}$ (see figures). The EGMF enters the calculation via the synchrotron loss of electrons. The transition between ICS and synchrotron loss domination can be easily seen in fig.~2, which shows the processed nucleon and photon spectra for a single source at a distance of 30 Mpc for a range of EGMF strengths. In general, one expects a distribution of cosmic ray sources rather than a single source at a fixed distance. In fig.~3, we show the diffuse spectrum from a continuous source distribution extending up to 1 Gpc. We assume a flat universe with zero cosmological constant and a Hubble constant of $H_0 = 75$ km sec$^{-1}$Mpc$^{-1}$, and a comoving source density scaling as $(1+z)^2$ in redshift $z$ as in some ``bright phase'' models of CR sources (e.g., Yoshida \& Teshima 1993; Hill \& Schramm 1985 and references therein). The results are not very sensitive to these choices. In the diffuse case, the {$\gamma$}-ray to nucleon flux ratio tends to be smaller than for a single source, and the spectral features are not as pronounced, but still detectable for $B \ga10^{-10}\,{\rm G}$. The ``extragalactic magnetic field'' in this analysis refers to the average component of the EGMF normal to the line of propagation. Primordial magnetic fields are expected to have very little structure on scales below $\sim $ few Mpc (Jedamzik, Katalinic, \& Olinto 1995), but condensed structures such as galaxies and clusters of galaxies can ``pollute'' the intergalactic medium with stronger magnetic fields on smaller scales. Fortunately, the effect of the EGMF on the $\gamma$-ray spectral shape discussed here is most sensitive to the average field on large scales. When the EM cascade goes through a strong field region, the electrons in the cascade lose energy rapidly and drop out of the UHE range. In contrast, the UHE nucleon and {$\gamma$}-ray fluxes are usually hardly affected directly by the radiation field of the intervening object (Stecker et al.~1991; Szabo \& Protheroe 1994; Norman, Melrose, \& Achterberg 1995). After escaping the object, the cascade redevelops and the cascade spectrum recovers quickly at only a slightly smaller amplitude.
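For orientation (a rough numerical estimate based on the loss rate of eq.~(1), not a result of the transport calculation), the synchrotron loss length of an electron is \begin{displaymath} l_{\rm syn}\equiv c\,E\left|\frac{dE}{dt}\right|^{-1} \sim 0.3\,{\rm Mpc}\, \left(\frac{E}{10^{19}{\,\rm eV}}\right)^{-1} \left(\frac{B}{10^{-9}\,{\rm G}}\right)^{-2}, \end{displaymath} so that inside a $10^{-7}\,{\rm G}$ region a $10^{19}{\,\rm eV}$ electron is lost within tens of parsecs, while outside such regions the cascade can develop over Mpc scales.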
The influence of intervening objects decreases with increasing strength of the large scale field and becomes important only when the objects are very close to the observer (e.g., less than $\sim 5\,{\rm Mpc}$ away), or when their linear size is significant compared to the total propagation distance. We can derive conditions for the filling factors of such objects by requiring that their typical separation along the line of sight to the source is much larger than the typical cascade regeneration length $s_c$ (e.g., $\sim$ 5 Mpc). For an average linear size $\bar l_{\rm iv}$ of the intervening objects their filling factor $f_{\rm iv}$ must satisfy \begin{equation} f_{\rm iv} \ll \frac{\bar l_{\rm iv}}{s_c}. \end{equation} For field galaxies the above relation is $f_g \ll 10^{-3}$, and for clusters of galaxies $f_c \ll 0.1$. The actual filling factors for galaxies, $f_g \la 10^{-7}$, and galaxy clusters, $f_c \la 10^{-4}$ (Kolb \& Turner 1992; Nichol, Briel, \& Henry 1994), satisfy these constraints. Therefore, intervening objects do not modify the above discussion substantially. One interesting exception may be the effect of nearby large structures such as the Virgo Cluster. If Virgo has strong magnetic fields, e.g.~of order $10^{-7}\,{\rm G}$ on Mpc scales, the UHE {$\gamma$}-ray flux from background sources might be modified across Virgo's angular extension (see fig.~2). The ability of future detectors to study EGMF features crucially depends on the {$\gamma$}-ray to nucleon flux ratio. For nearby strong sources, the secondary {$\gamma$}-ray flux could be measurable with instruments which are sensitive to ratios down to $\simeq1\%$. This could be achieved by the proposed Pierre Auger Project (see e.g., Boratav et al.~1992), which would also allow an angular resolution of $\simeq1^{\circ}$. The case of a diffuse source distribution is more challenging; the $\gamma$-ray flux is typically smaller and the EGMF feature is less pronounced than for a single source at moderate distances (see figs.~2 \& 3), but the dip in the $\gamma$ spectrum may still be detectable. Thus, measuring the {$\gamma$}-ray flux between $\simeq10^{18}{\,\rm eV}$ and $\simeq10^{20}{\,\rm eV}$ has the potential to either detect or find strong evidence against an EGMF $\ga 10^{-10}\,{\rm G}$. \section{Charged Cosmic Ray Deflection by Extragalactic Magnetic Fields} Here, we discuss the influence of the EGMF on the charged UHE CR flux from discrete sources. We restrict ourselves to the case of small deflection angles (for the opposite limit see, e.g., Wdowczyk \& Wolfendale 1979; Berezinskii, Grigor'eva, \& Dogel' 1989). In this case, the energy spectrum of charged UHE CRs from a given source is not significantly altered as compared to a straight-line propagation. However, if the sources are strong enough to cause an anisotropy in the UHE CR flux, the directional correlation of ``hot spots'' with possible sources will depend on the EGMF. The following discussion relates to this anisotropic component of the charged UHE CR flux. As in \S 2, let us assume that the large scale EGMF can be characterized by a typical field strength $B$ and a coherence length $l_c$. Furthermore, we assume for the moment that the source distance $r$ is smaller than the energy attenuation length $\lambda=E(dE/dr)^{-1}$ for a charged cosmic ray of energy $E$, which can then be treated as approximately constant throughout propagation.
For nucleons, $\lambda\simeq10$ Mpc above the GZK cutoff (at $E\simeq6\times10^{19}{\,\rm eV}$), and $\lambda\simeq1$ Gpc much below the GZK cutoff. A more sophisticated analysis would require a Monte Carlo simulation of both UHE CR propagation and deflection. However, since data on both UHE CRs and the EGMF are so sparse to date, we feel that a qualitative discussion of the principal effects is more appropriate at the moment. We now consider two cases. (i) The source distance is smaller than the coherence length, $r\la l_c$. Then, in vectorial notation, the deflection angle ${\bf\alpha}$ is given by \begin{equation} {\bf\alpha}=-{Ze\over E}\,{\bf r}\times{\bf B}= 5.3^{\circ}\,Z\left({E\over10^{20}{\,\rm eV}}\right)^{-1}\left({r\over10\,{\rm Mpc}} \right)\left({B\over10^{-9}\,{\rm G}}\right)\, \left({\bf\hat r}\times{\bf\hat B}\right)\,,\label{def1} \end{equation} where ${\bf r}$ is the radius vector pointing to the source, $Z$ is the charge of the UHE CR component, ${\bf\hat r}={\bf r}/ \vert{\bf r}\vert$, and ${\bf\hat B}={\bf B}/\vert{\bf B}\vert$. Thus, a correlation between the UHE CR flux of charge $Z$ and energy $E$ and source counterparts, systematically shifted by an angle ${\bf\alpha}$, would indicate that $l_c\ga r$ and, for a known source distance $r$, would allow a measurement of the combination $B({\bf\hat r}\times{\bf\hat B})$. The characteristic $E$- and $Z$-dependence of ${\bf\alpha}$ would provide an additional test for the hypothesis that the deflection is caused by an EGMF. (ii) The source distance is considerably larger than the coherence length, $r\gg l_c$. In this case the deflection angle undergoes a diffusion process during propagation and the source shape in the UHE CR flux will be smeared out over a typical angle \begin{equation} \alpha_{\rm rms}\simeq{2\over\pi}{ZeB\over E}\left(rl_c\right)^{1/2}= 1.1^{\circ}\,Z\left({E\over10^{20}{\,\rm eV}}\right)^{-1} \left({r\over10\,{\rm Mpc}}\right)^{1/2} \left({l_c\over1\,{\rm Mpc}}\right)^{1/2} \left({B\over10^{-9}\,{\rm G}}\right)\,.\label{def2} \end{equation} Therefore, if sources appear spread out in the UHE CR flux of charge $Z$ and energy $E$ by a typical angle $\alpha$, this would indicate that $l_c\la r$ and, for a known source distance $r$, would allow a measurement of the combination $Bl_c^{1/2}$. If the source distance is larger than the energy attenuation length, $r\ga\lambda$, eqs.~[\ref{def1}] and [\ref{def2}] tend to overestimate $\alpha$. In fact, in the limit $r\gg\lambda$ the deflection angle $\alpha$ ``saturates'' as a function of $r$ and, for approximately energy independent $\lambda$, $r$ has to be substituted by $\lambda$ and $\lambda/2$ in eqs.~[\ref{def1}] and [\ref{def2}], respectively. Secondary {$\gamma$}-rays produced by the interactions of the charged UHE CRs are also expected to correlate with the sources. Due to their continuous production they will be smeared out over angles which are typically somewhat smaller than given in eqs.~[\ref{def1}] and [\ref{def2}]. Sources which can act as suitable probes for the EGMF via the effects discussed above have to obey the following conditions, apart from producing a detectable anisotropic UHE CR flux component: Their apparent angular size should be smaller than the deflection angle $\alpha$. The same pertains to the apparent angular radius of a possible high magnetic field region around the source if it can cause deflections in excess of $\alpha$.
For example, a $10^{-6}\,{\rm G}$ field over a scale $\ga100\,{\rm kpc}$ is possible in galaxy clusters (see, e.g., Kronberg 1994) and would completely bend the trajectory of a $10^{20}{\,\rm eV}$ proton. However, as long as a detectable proton flux emerges from such an object and the above conditions are fulfilled, it could still be a suitable probe of the EGMF. Finally, there should be no intervening high magnetic field regions between source and observer which could cause bending by more than $\alpha$. For example, for $r\ga l_c$, $\lambda$, this corresponds to the condition $l_{\rm iv}B_{\rm iv}\la\left(\lambda l_c\right)^{1/2}B$ for linear scale $l_{\rm iv}$ and strength $B_{\rm iv}$ of the intervening field. This condition could well be satisfied along most lines of sight since known objects with high field regions like galaxies and galaxy clusters have a small filling factor $f\la10^{-5}$. In light of these conditions we believe that some of the nearby galaxy clusters and powerful field radio galaxies could well be suitable EGMF probes since they are expected to contribute significantly to the UHE CR flux (Rachen, Stanev, \& Biermann 1993). The accuracy with which the EGMF bending can be determined is limited by the (to date) unknown additional bending by the galactic magnetic field of strength $B_g$ and scale height $l_g$. Thus, according to eq.~[\ref{def1}], the sensitivity of deflection measurements of the EGMF is restricted to field parameters satisfying $Br\ga10^{-9}\,{\rm G}\,{\rm Mpc} \left(B_g/\mu{\rm G}\right)\left(l_g/300\,{\rm pc}\right)$, where the fiducial values are the parameter values usually assumed for the galactic magnetic field. \section{Conclusions} We discussed how the composition, spectrum, and directional distribution of UHE CRs above $\simeq10^{18}{\,\rm eV}$ can be used to gain information about the large scale (a few to tens of Mpc) EGMF. Spectral features in the $\gamma$-ray flux are sensitive to field strengths in the range $\simeq10^{-10}-10^{-9}\,{\rm G}$. In a similar range, correlations between an anisotropic charged UHE CR flux component and possible sources could provide independent information on the EGMF including its polarization. Both effects should yield consistent estimates for the EGMF strength. Strong discrete sources detected in UHE CRs by future instruments with an angular resolution of $1^{\circ}$ or better and a sensitivity to $\gamma$-ray to nucleon flux ratios of $1\%$ or smaller would provide the best conditions for detecting an EGMF in the range $\simeq 10^{-10}-10^{-9}\,{\rm G}$. Since these conditions are not unreasonable, UHE CRs have the potential to provide important information on the properties and origin of the EGMF. \acknowledgments This work was supported by the DoE, NSF and NASA at the University of Chicago, by the DoE and by NASA through grant NAG5-2788 at Fermilab, and by the Alexander-von-Humboldt Foundation. S.L. acknowledges the support of the POSCO Scholarship Foundation in Korea.
\section{Introduction} The SNO+ experiment probes a number of rare physics processes, with the main focus on searching for neutrinoless double beta decay of ${}^{130}$Te during the tellurium loaded scintillator phase. The detector is located at SNOLAB, in Creighton Mine in Sudbury, Canada, under approximately 2\,{\rm km} of rock overburden. The SNO+ detector is an acrylic vessel, 6\,{\rm m} in radius, containing 780\,{\rm tonnes} of LAB PPO liquid scintillator. Scintillation light from low energy interactions is observed by about 9400 PMTs surrounding the acrylic vessel. In order to produce reliable results, a broad calibration system has been developed, including an optical calibration system and deployed radioactive sources, specifically ${}^{60}\mathrm{Co}$, ${}^{57}\mathrm{Co}$, ${}^{48}\mathrm{Sc}$, ${}^{24}\mathrm{Na}$, ${}^{16}\mathrm{N}$ and AmBe, which are gamma-emitters covering an energy scale from 0.1\,{\rm MeV} to 6\,{\rm MeV} \cite{Andringa:2015tza}. AmBe is also a source of neutrons. It was proposed in \cite{JRWilson2011} to add the pure beta-emitter ${}^{90}$Y as a calibration source, which would verify the simulated detector model and the energy reconstruction algorithms for electron-like events. The advantage of ${}^{90}$Y is its relatively short half-life of 64\,{\rm hours}, which reduces the risk of long-term contamination of the detector. The high end-point energy of the decay, 2.24\,{\rm MeV}, allows one to study the energy region close to that of the neutrinoless double beta decay. \section{${}^{90}$Y calibration source development} In order to use a calibration source, its geometry has to be well understood and easily modelled. The goal is to create a point source of beta radiation, which can be achieved using a small cylindrical container with a droplet of ${}^{90}$Y inside it. We found the best option was to use a micro capillary, available from various suppliers in a range of diameters and materials. A Monte Carlo study using the SNO+ RAT software was performed to define an appropriate geometry. The study shows that betas from the decay of ${}^{90}$Y are slightly less attenuated by glass than by quartz. An additional advantage of using glass is that its melting point is much lower than that of quartz. We simulated different capillary diameters, from 0.5\,{\rm mm} to 2\,{\rm mm}, and found that the ${}^{90}$Y contained within an outer diameter of 1.2\,{\rm mm} and an inner diameter of 1\,{\rm mm} still behaved as a point source and suffered minimal attenuation in the glass. At the same time, such a diameter is preferable from a practical point of view. The amount of ${}^{90}$Y, and therefore the height of the droplet, cannot be large: otherwise the source stops being well approximated by a point and, in addition, attenuates the electrons more. After optimization we chose to inject 2\,$\mu$L of the ${}^{90}$Y source. After defining the parameters of the calibration source, we designed the manufacturing procedure, taking into account the safety of personnel and ease of production. Plastic elements required for the production and secure usage of the source were designed and machined at Queen Mary University of London. The production of the calibration source took place in the University of Sussex radiation laboratory. All work with the radioactive source was performed behind a Perspex shield inside a fume cabinet.
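As a simple cross-check of the point source approximation (an estimate on our part, based on the chosen volume and inner diameter), the injected droplet forms a column of height \begin{displaymath} h=\frac{V}{\pi r^2}=\frac{2\,{\rm mm}^3}{\pi\,(0.5\,{\rm mm})^2}\simeq 2.5\,{\rm mm}, \end{displaymath} i.e. a few millimetres of active material, small compared to the dimensions of the test vessels described below.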
\begin{figure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.087]{capillary.png} \caption[]{The capillary with the glued holder } \label{capillary} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.1]{tip.png} \caption[]{The capillary and the end of the pipette} \label{tip} \end{subfigure} \caption{} \end{figure} We used liquid ${}^{90}$Y, separated from ${}^{90}$Sr and purified at the 10$^{-7}$ level by PerkinElmer Inc. for medical treatments. Prior to the injection of ${}^{90}$Y, a plastic holder was glued to the capillary, Figure~\ref{capillary}. In order to safely insert the ${}^{90}$Y droplet, the capillary was placed into a plastic stand supported by the holder. A Gilson pipette was used to inject 2\,{\rm$\mu$L} of the ${}^{90}$Y source. We then wiped the capillary with a dry cotton pad and moved the droplet down with 8\,{\rm$\mu$L} of air, injected from the pipette fitted with a new end. Accuracy in this procedure is very important to avoid accidental breakage, as the diameter of the plastic end of the pipette is only slightly smaller than the inner diameter of the capillary, Figure~\ref{tip}. Inaccurate air pipetting can split the ${}^{90}$Y into small droplets, in which case the source can no longer be considered a point source. After the droplet is placed at a satisfactory position, the edges of the capillary are sealed using a butane gas torch. The seals are visually inspected under magnification; a good seal is clean and has no glass bubbles. The capillary was next soaked in a water bath, which was then tested for contamination using a Geiger counter. Next, the clean capillary was placed into a vacuum canister storage box, evacuated with an air pump. If the droplet either moves or splits into smaller droplets under vacuum, this indicates a poor seal and the sealing must be repeated. We prepared two calibration sources to perform the tests at the University of Sussex and at the University of Pennsylvania. The capillary with the source was shipped in a custom designed thick acrylic container that protected it from damage. \section{Measurements at the University of Sussex} Initial tests were performed in the radiation laboratory at the University of Sussex. The experimental setup consists of two Hamamatsu 1'' H10721-210 PMTs, a disk-shaped vessel with diameter 6\,{\rm cm} filled with LAB PPO inside a dark box, and a Tektronix MSO2024 oscilloscope. \begin{figure} \centering \includegraphics[scale=0.1]{sussex_vessel.png} \caption{Disk-shaped vessel filled with LAB PPO. } \label{sussex_vessel} \end{figure} The capillary is placed inside the scintillator filled vessel, Figure~\ref{sussex_vessel}. Due to the small volume of the vessel and the high decay rate of the ${}^{90}$Y, the contribution from cosmic ray muons is negligible. Signals were read out via the oscilloscope and analysed by custom written programs using LABVIEW and Python. \begin{figure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.35]{rainbow.png} \caption[]{Spectra of ${}^{90}$Y decay over time} \label{rainbow} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.35]{decay.png} \caption[]{Count rates of ${}^{90}$Y decay over time} \label{decay} \end{subfigure} \caption{} \end{figure} Data from several days agree with the theoretically predicted spectrum of ${}^{90}$Y decay, Figure~\ref{rainbow}. During the first day of data taking, when the activity of the source was high, we observed pile-up between multiple decays in the same readout window.
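The size of this pile-up effect can be estimated with a simple Poisson argument: for a decay rate $R$ and a readout window of length $w$, the probability that at least one further decay lands in the same window is $1-e^{-Rw}$. In the sketch below the initial activity and the window length are illustrative assumptions rather than measured values; only the 64\,{\rm hour} half-life is taken from above.
\begin{verbatim}
# Rough Poisson estimate of the pile-up fraction over successive days.
# activity_bq and window_s are illustrative assumptions, not measurements.
import math

activity_bq = 1.0e5    # assumed initial source activity [Bq]
window_s    = 1.0e-6   # assumed oscilloscope readout window [s]
half_life_h = 64.0     # 90Y half-life quoted above [h]

for day in range(5):
    rate = activity_bq * 0.5 ** (24.0 * day / half_life_h)
    pileup = 1.0 - math.exp(-rate * window_s)
    print(f"day {day}: rate {rate:.2e} Bq, pile-up fraction {pileup:.2%}")
\end{verbatim}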
The count rate over time also agrees with the expected half-life, Figure~\ref{decay}, confirming that we can observe ${}^{90}$Y decay betas from the source with minimal attenuation. \section{Measurements at the University of Pennsylvania} Using a large spherical acrylic vessel (diameter 40\,{\rm cm}) at the University of Pennsylvania, we were able to make more precise measurements to study the scintillator cocktail properties. The source capillary was placed in the tank using a custom designed mount. Further data were collected with a ${}^{60}$Co test disk source attached to the spherical vessel from the outside, to stop betas from entering the scintillator volume. The experimental setup includes five PMTs: an ETL-9354KB PMT that was used as the trigger PMT, two Hamamatsu R11780-HQE PMTs, and two Hamamatsu R1408 PMTs that had been used in the SNO experiment. To accurately simulate the PMT response, the single photoelectron distribution was measured and then convolved with the simulated charge distribution from ${}^{90}$Y decay. The final modelled charge distribution was fit to the obtained data using the echidna software, designed by the University of Sussex and Queen Mary University of London for fitting and limit-setting tasks. The parameters of the fit, including a charge scale and an offset, have been applied to simulations of ${}^{60}$Co and compared to the corresponding data. Poor convergence in the fit indicated discrepancies in the scintillator model, triggering further study. \section{Conclusion and future directions} These studies have contributed to improved modelling of the scintillator response and validated this ex-situ source calibration technique. However, these measurements were challenging and, due to the delicate nature of the capillary source, we conclude that deployment of such a source within the SNO+ detector represents too high a risk of detector contamination, despite the short source half-life. One possible approach is to contain the capillary within a secondary scintillator-filled acrylic sphere for deployment into SNO+, but such a study would probe this particular scintillator, rather than the scintillator volume of the detector. \acknowledgments This research was supported by ERC grant 278310 under the FP7 framework. We are grateful to the University of Sussex group for providing access to their radiation laboratory and to the University of Pennsylvania group for helping to perform ${}^{90}$Y source measurements in various scintillator cocktails.
\section{Introduction} \label{S-I} Estimating quantiles has a longstanding history in statistics and probability. Except in parametric models where explicit formulas are available, the estimation of quantiles is a real issue. The most common way to estimate quantiles is to make use of order statistics, see among other references \cite{Bahadur1966,Ghosh1971}. Another strategy is to make use of stochastic approximation algorithms and the pioneering work in this vein is the celebrated paper by Robbins and Monro \cite{RobbinsMonro1951}. \vspace{1ex} \noindent Let $X$ be an integrable continuous random variable with strictly increasing cumulative distribution function $F$ and probability density function $f$. For any $\alpha \in ]0,1[$, the quantile $\theta_{\alpha}$ of order $\alpha$ of $F$ is given by \begin{equation} \label{DEFQ} F(\theta_{\alpha})= \mathbb{P}(X \leq \theta_{\alpha})=\alpha, \end{equation} whereas the superquantile $\vartheta_{\alpha}$ of order $\alpha$ is defined by \begin{equation} \label{DEFSQ} \vartheta_{\alpha} = \mathbb{E}[ X \, \vert \, X \geq \theta_{\alpha}] =\frac{\mathbb{E}[X \mathrm{I}_{\{X \geq \theta_{\alpha}\}}]}{\mathbb{P}(X \geq \theta_{\alpha})} = \frac{\mathbb{E}[X \mathrm{I}_{\{X \geq \theta_{\alpha}\}}]}{1-\alpha}. \end{equation} One can observe that the superquantile provides more information on the tail of the distribution of the random variable $X$. Our goal in this paper is to simultaneously estimate quantiles and superquantiles, also known respectively as the value at risk and the conditional value at risk, which have become increasingly popular as measures of risk in finance \cite{Rockafellar2000,Rockafellar2002}. \vspace{1ex} \noindent The paper is organized as follows. Section \ref{S-O} is devoted to a brief overview of the previous literature on the recursive estimation of quantiles and superquantiles. The main results of the paper are given in Section \ref{S-MR}. We establish the almost sure convergence of two-time-scale stochastic approximation algorithms for superquantile estimation. The quadratic strong law (QSL) as well as the law of iterated logarithm (LIL) of our stochastic algorithms are also provided. Moreover, we establish the joint asymptotic normality of our estimates. Numerical experiments on real data are given in Section \ref{S-NE}. All technical proofs are postponed to Appendices A and B. \section{Overview of existing literature} \label{S-O} A wide range of literature exists already on the recursive estimation of quantiles \cite{RobbinsMonro1951}. However, to the best of our knowledge, only a single paper is available on the recursive estimation of superquantiles \cite{Bardou2009}. In many practical situations where the data are recorded online with relatively high speed, or when the data are simply too numerous to be handled in batch systems, it is more suitable to implement a recursive strategy where quantiles and superquantiles are sequentially estimated with the help of stochastic approximation algorithms \cite{Duflo1997}, \cite{KushnerYin2003}. We also refer the reader to \cite{CCG2017, CCZ2013, Godichon2015, Godichon2019} for the online estimation of geometric medians and variances. \vspace{1ex} \noindent Bardou et al. \cite{Bardou2009} have previously studied the averaged version \cite{PolyakJuditsky1992, Ruppert1988} of a one-time-scale stochastic algorithm in order to estimate $\theta_{\alpha}$ and $\vartheta_{\alpha}$.
Here, we have chosen to investigate a two-time-scale stochastic algorithm \cite{Borkar1997, GPS2018, Konda2004, MokkademPelletier2006} which performs pretty well and offers more flexibility than the one-time-scale algorithm. Let $(X_n)$ be a sequence of independent and identically distributed random variables sharing the same distribution as $X$. We shall extend the statistical analysis of \cite{Bardou2009} by studying the two-time-scale stochastic algorithm given, for all $n \geq 1$, by \begin{equation} \label{TTSALGO1} \left \{ \begin{aligned} &\theta_{n+1} =\theta_{n}-a_{n} \Bigl( \mathrm{I}_{\{X_{n+1} \leq \theta_{n} \}} - \alpha \Bigr), \\ &\widehat{\vartheta}_{n+1}=\widehat{\vartheta}_{n}+b_{n} \Bigl( \frac{X_{n+1}}{1-\alpha}\mathrm{I}_{\{X_{n+1} >\theta_{n}\}} - \widehat{\vartheta}_{n} \Bigr), \end{aligned} \right. \end{equation} where the initial values $\theta_1$ and $\widehat{\vartheta}_1$ are square integrable random variables which can be arbitrarily chosen and the steps $(a_n)$ and $(b_n)$ are two positive sequences of real numbers strictly smaller than one, decreasing towards zero such that \begin{equation} \label{CONDSTEP} \sum_{n=1}^\infty a_n=+\infty, \hspace{0.4cm} \sum_{n=1}^\infty b_n=+\infty \hspace{0.8cm}\text{and}\hspace{0.8cm} \sum_{n=1}^\infty a_n^2<+\infty, \hspace{0.4cm} \sum_{n=1}^\infty b_n^2<+\infty. \end{equation} We shall also investigate the asymptotic behavior of the convexified version of algorithm \eqref{TTSALGO1}, based on the Rockafellar--Uryasev identity \cite{Rockafellar2000} and given, for all $n \geq 1$, by \begin{equation} \label{TTSALGO2} \left \{ \begin{aligned} &\theta_{n+1} =\theta_{n}-a_n \Bigl( \mathrm{I}_{\{X_{n+1} \leq \theta_{n}\}}-\alpha\Bigr)\\ &\widetilde{\vartheta}_{n+1}=\widetilde{\vartheta}_{n}+b_n \Bigl(\theta_{n}+ \frac{(X_{n+1}-\theta_{n})}{1-\alpha}\mathrm{I}_{\{X_{n+1} >\theta_{n}\}} - \widetilde{\vartheta}_{n}\Bigr), \end{aligned} \right.\end{equation} where as before the initial values $\theta_1$ and $\widetilde{\vartheta}_1$ are square integrable random variables which can be arbitrarily chosen. We also refer the reader to the original contribution \cite{Ben-Tal} where this convexification first appeared. The almost sure convergence \begin{equation} \label{ASCVGRM} \lim_{n \rightarrow \infty}\theta_{n}=\theta_{\alpha} \hspace{1cm}\text{a.s.} \end{equation} is a famous result that was established by Robbins and Monro \cite{RobbinsMonro1951} and by Robbins and Siegmund \cite{RobbinsSiegmund1971}. Moreover, the asymptotic normality is due to Sacks, see Theorem 1 in \cite{Sacks1958}. It requires the additional assumption that the probability density function $f$ is differentiable with bounded derivative in every neighborhood of the quantile $\theta_{\alpha}$. More precisely, if the step $a_n=a_1/n$ where $a_1>0$ and $2 a_1 f(\theta_{\alpha})>1$, we have the asymptotic normality \begin{equation} \label{ANRM} \sqrt{n}\bigl(\theta_{n} - \theta_{\alpha} \bigr) \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\Bigl(0, \frac{a_1^2\alpha(1-\alpha)}{2a_1f(\theta_{\alpha}) -1} \Bigr). \end{equation} One can observe that in the special case where the value $f(\theta_{\alpha})>0$ is known, it is possible to minimise the previous limiting variance by choosing $a_1=1/f(\theta_{\alpha})$ and to obtain from \eqref{ANRM} the asymptotic efficiency $$ \sqrt{n}\bigl(\theta_{n} - \theta_{\alpha} \bigr) \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\Bigl(0, \frac{\alpha(1-\alpha)}{f^2(\theta_{\alpha})} \Bigr).
$$ Some useful refinements on the asymptotic behavior of the sequence $(\theta_{n})$ are also well-known. The LIL was first proved by Gaposhkin and Krasulina, see Theorem 1 in \cite{Gaposkin1975} and Corollary 1 in \cite{Kersting1977}. More precisely, if the step $a_n=a_1/n$ where $2a_1 f(\theta_{\alpha})>1$, we have the LIL \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\!\bigl( \theta_{n} - \theta_{\alpha} \bigr) &=& - \liminf_{n \rightarrow \infty} \left(\frac{n}{2 \log\log n}\right)^{1/2} \!\!\!\bigl( \theta_{n} - \theta_{\alpha} \bigr) \notag \\ &=& \left( \frac{a_1^2\alpha(1-\alpha)}{2a_1f(\theta_{\alpha}) -1} \right)^{1/2} \hspace{1cm}\text{a.s.} \label{LILRM} \end{eqnarray} In particular, it follows from \eqref{LILRM} that \begin{equation} \label{LILSUPRM} \limsup_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right) \bigl( \theta_{n} - \theta_{\alpha} \bigr)^2= \frac{a_1^2\alpha(1-\alpha)}{2a_1f(\theta_{\alpha}) -1} \hspace{1cm}\text{a.s.} \end{equation} which is the limiting variance in \eqref{ANRM}. The QSL is due to Lai and Robbins, see Lemma 1 and Theorem 2 in \cite{LaiRobbins1979} as well as Theorem 3 in \cite{Pelletier1998}. More precisely, they proved that \begin{equation} \label{QSLRM} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \bigl( \theta_k - \theta_{\alpha} \bigr)^2= \frac{a_1^2\alpha(1-\alpha)}{2a_1f(\theta_{\alpha}) -1} \hspace{1cm}\text{a.s.} \end{equation} Besides the classical choice $a_n=a_1/n$ where $a_1>0$, slower step-sizes $a_n=a_1/n^a$ where $a_1>0$ and $1/2<a<1$ have been studied in depth. We refer the reader to the pioneering work of Chung \cite{Chung1954} and to Fabian \cite{Fabian1968} who obtained that the asymptotic normality still holds for the Robbins-Monro algorithm. More precisely, if $f(\theta_{\alpha})>0$, they showed that \begin{equation} \label{ANRMStepa} \sqrt{n^a}\bigl(\theta_{n} - \theta_{\alpha} \bigr) \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\Bigl(0, \frac{a_1\alpha(1-\alpha)}{2f(\theta_{\alpha})} \Bigr). \end{equation} In addition, it follows from Lai and Robbins \cite{LaiRobbins1979} or Pelletier \cite{Pelletier1998} that \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{n^a}{2(1-a) \log n} \right)^{1/2} \!\!\!\bigl( \theta_{n} - \theta_{\alpha} \bigr) &=& - \liminf_{n \rightarrow \infty} \left(\frac{n^a}{2(1-a) \log n}\right)^{1/2} \!\!\!\bigl( \theta_{n} - \theta_{\alpha} \bigr) \notag \\ &=& \left( \frac{a_1\alpha(1-\alpha)}{2f(\theta_{\alpha})} \right)^{1/2} \hspace{1cm}\text{a.s.} \label{LILRMStepa} \end{eqnarray} In particular, \begin{equation} \label{LILSUPRMStepa} \limsup_{n \rightarrow \infty} \left(\frac{n^a}{2(1-a) \log n} \right) \bigl( \theta_{n} - \theta_{\alpha} \bigr)^2= \frac{a_1\alpha(1-\alpha)}{2f(\theta_{\alpha})} \hspace{1cm}\text{a.s.} \end{equation} Moreover, we also have from \cite{LaiRobbins1979}, \cite{Pelletier1998} that \begin{equation} \label{QSLRMStepa} \lim_{n \rightarrow \infty} \frac{1}{n^{1-a}} \sum_{k=1}^n \bigl( \theta_k - \theta_{\alpha} \bigr)^2= \frac{a_1\alpha(1-\alpha)}{2(1-a)f(\theta_{\alpha})} \hspace{1cm}\text{a.s.} \end{equation} The restrictive assumption $2a_1 f(\theta_{\alpha})>1$, which involves the knowledge of $f(\theta_{\alpha})$, is no longer needed. However, the convergence rate $n^a$ is always slower than $n$, which means that the choice $a_n=a_1/n$ theoretically outperforms that of $a_n=a_1/n^a$, at least asymptotically.
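For concreteness, the recursions \eqref{TTSALGO1} and \eqref{TTSALGO2} can be transcribed into a few lines of Python. The sketch below follows the definitions verbatim, with steps $a_n=a_1/n^a$ and $b_n=b_1/n^b$; it is meant as an illustration, not as an optimised implementation.
\begin{verbatim}
# Direct transcription of the two-time-scale recursions (TTSALGO1)
# and (TTSALGO2); an illustrative sketch only.
import numpy as np

def two_time_scale(x, alpha, a1=1.0, a=0.75, b1=1.0, b=1.0):
    """Run both recursions on the data stream x and return the final
    iterates (theta_n, vhat_n, vtilde_n)."""
    theta = vhat = vtilde = 0.0
    for n, xn in enumerate(x, start=1):
        an, bn = a1 / n**a, b1 / n**b
        ind = xn > theta                  # indicator uses theta_n
        # quantile step (Robbins-Monro)
        theta_new = theta - an * ((xn <= theta) - alpha)
        # standard superquantile step
        vhat += bn * (xn / (1.0 - alpha) * ind - vhat)
        # convexified step (Rockafellar-Uryasev identity)
        vtilde += bn * (theta + (xn - theta) / (1.0 - alpha) * ind
                        - vtilde)
        theta = theta_new
    return theta, vhat, vtilde
\end{verbatim}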
\vspace{1ex} \noindent In the special case of the one-time-scale stochastic algorithm where $a_n=b_n$, Bardou et al. \cite{Bardou2009} proved the almost sure convergences \begin{equation} \label{ASCVGSQB1} \lim_{n \rightarrow \infty} \theta_{n}= \theta_{\alpha} \hspace{1.5cm}\text{and}\hspace{1.5cm} \lim_{n \rightarrow \infty} \widetilde{\vartheta}_{n}=\vartheta_{\alpha} \hspace{1cm}\text{a.s.} \end{equation} using an extended version of the Robbins-Monro theorem together with the Cesaro and Kronecker lemmas, see e.g. Theorem 1.4.26 in \cite{Duflo1997}. They also state without proof that \begin{equation} \label{ASCVGSQB2} \lim_{n \rightarrow \infty} \widehat{\vartheta}_{n}=\vartheta_{\alpha} \hspace{1cm}\text{a.s.} \end{equation} Yet, other almost sure asymptotic properties for the sequences $(\widehat{\vartheta}_{n})$ and $(\widetilde{\vartheta}_{n})$, such as the LIL and the QSL, are still missing. Bardou et al. also established in Theorem 2.4 of \cite{Bardou2009} the joint asymptotic normality of the averaged version \cite{PolyakJuditsky1992, Ruppert1988} of their one-time-scale stochastic algorithm \begin{equation} \label{ANRMB} \sqrt{n} \begin{pmatrix} \overline{\theta}_n - \theta_{\alpha} \\ \overline{\vartheta}_n - \vartheta_{\alpha} \\ \end{pmatrix} \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\bigl(0, \Sigma \bigr) \end{equation} where the asymptotic covariance matrix $\Sigma$ is explicitly calculated, $$ \overline{\theta}_n = \frac{1}{n}\sum_{k=1}^n \theta_k \hspace{1.5cm}\text{and}\hspace{1.5cm} \overline{\vartheta}_n = \frac{1}{n}\sum_{k=1}^n \widetilde{\vartheta}_{k}. $$ We will show that our two-time-scale stochastic algorithms given by \eqref{TTSALGO1} and \eqref{TTSALGO2} allow us to avoid the Ruppert and Polyak-Juditsky averaging principle. Moreover, they perform pretty well from both a theoretical and a practical point of view and offer more flexibility than the one-time-scale stochastic algorithm. \vspace{-1ex} \section{Main results} \label{S-MR} In order to state our main results, it is necessary to introduce some assumptions. \begin{displaymath} \begin{array}{ll} (\mathcal{A}_1) & \textrm{The probability density function $f$ is differentiable with bounded derivative in}\\ & \textrm{every neighborhood of $\theta_{\alpha}$.} \end{array} \end{displaymath} \vspace{-1ex} \begin{displaymath} \begin{array}{ll} (\mathcal{A}_2) & \textrm{The function $\Phi$ defined, for all $\theta \in \mathbb{R}$, by $\Phi(\theta)=f(\theta)+ \theta f^{\prime}(\theta)$ is bounded in}\\ & \textrm{every neighborhood of $\theta_{\alpha}$.} \end{array} \end{displaymath} Our first result concerns the basic almost sure convergence of the two-time-scale stochastic algorithms \eqref{TTSALGO1} and \eqref{TTSALGO2} to the superquantile $\vartheta_{\alpha}$. \begin{thm} \label{T-ASCVGSQ} Assume that $(\mathcal{A}_1)$ holds and that the random variable $X$ is square integrable. Then, we have the almost sure convergences \begin{equation} \label{ASCVGSQ1} \lim_{n \rightarrow \infty}\widehat{\vartheta}_{n}=\vartheta_{\alpha} \hspace{1cm}\text{a.s.} \end{equation} \begin{equation} \label{ASCVGSQ2} \lim_{n \rightarrow \infty}\widetilde{\vartheta}_{n}=\vartheta_{\alpha} \hspace{1cm}\text{a.s.} \end{equation} \end{thm} \noindent Our proof is slightly different from that of Bardou et al. \cite{Bardou2009} established for the one-time-scale stochastic algorithm where $a_n=b_n$. It can be found in Appendix A for the sake of completeness.
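As a quick numerical illustration of Theorem \ref{T-ASCVGSQ}, separate from the proofs, one can run the sketch of Section \ref{S-O} on simulated data from $X \sim \mathcal{E}(1)$, for which $\theta_{\alpha}=-\log(1-\alpha)$ and, by the memoryless property, $\vartheta_{\alpha}=\theta_{\alpha}+1$.
\begin{verbatim}
# Sanity check on X ~ Exp(1), reusing two_time_scale defined above:
# theta_alpha = -log(1-alpha) and vartheta_alpha = theta_alpha + 1.
import numpy as np

rng = np.random.default_rng(42)
alpha = 0.95
x = rng.exponential(1.0, size=2_000_000)

theta, vhat, vtilde = two_time_scale(x, alpha, a1=1.0, a=0.75,
                                     b1=1.0, b=1.0)
target = -np.log(1.0 - alpha)
print(f"theta  {theta:.4f}  (target {target:.4f})")
print(f"vhat   {vhat:.4f}  (target {target + 1.0:.4f})")
print(f"vtilde {vtilde:.4f}  (target {target + 1.0:.4f})")
\end{verbatim}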
We now focus our attention on the almost sure rates of convergence of the sequences $(\widehat{\vartheta}_{n})$ and $(\widetilde{\vartheta}_{n})$. We divide our analysis into two parts depending on the step size $(b_n)$ in the superquantile recursive procedure. First of all, we shall consider the optimal step $b_n=b_1/n$. Then, we shall study the case where $b_n=b_1/n^{b}$ with $1/2<b<1$. For all $\theta \in \mathbb{R}$, denote \begin{equation} \label{DEFVAR} \sigma_\alpha^2(\theta)=\frac{1}{(1-\alpha)^2} \text{Var}(X \mathrm{I}_{\{X >\theta\}}) \hspace{0.5cm}\text{and}\hspace{0.5cm} \tau_\alpha^2(\theta)=\frac{1}{(1-\alpha)^2} \text{Var}((X-\theta) \mathrm{I}_{\{X >\theta\}}). \end{equation} It follows from straightforward calculation that \begin{equation*} \label{EQVAR} \tau_\alpha^2(\theta_{\alpha}) =\sigma_\alpha^2(\theta_{\alpha}) - \Bigl(\frac{\alpha\theta_{\alpha}}{1-\alpha}\Bigr)(2\vartheta_{\alpha}-\theta_{\alpha}). \end{equation*} Consequently, as soon as $\theta_{\alpha} \geq 0$, we always have $\tau_\alpha^2(\theta_{\alpha}) \leq \sigma_\alpha^2(\theta_{\alpha})$ since $\vartheta_{\alpha} \geq \theta_{\alpha}$. \begin{thm} \label{T-LILQSEQUAL1} Assume that $(\mathcal{A}_1)$ and $(\mathcal{A}_2)$ hold and that the random variable $X$ has a moment of order $>2$. Moreover, suppose that $f(\theta_{\alpha})>0$ and that the step sequences $(a_n)$ and $(b_n)$ are given by \begin{equation*} a_n=\frac{a_1}{n^a} \hspace{1.5cm}\text{and}\hspace{1.5cm} b_n=\frac{b_1}{n} \end{equation*} where $a_1>0$, $b_1> 1/2$ and $1/2<a<1$. Then, $(\widehat{\vartheta}_{n})$ and $(\widetilde{\vartheta}_{n})$ share the same QSL \begin{equation} \label{QSL1} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \bigl( \widehat{\vartheta}_{k} - \vartheta_{\alpha} \bigr)^2= \Bigl(\frac{b_1^2}{2b_1-1} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation} In addition, they also share the same LIL \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\!\bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha} \bigr) &=& - \liminf_{n \rightarrow \infty} \left(\frac{n}{2 \log\log n}\right)^{1/2} \!\!\!\bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha} \bigr) \notag \\ &=& \left( \frac{b_1^2}{2b_1 -1} \right)^{1/2} \tau_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \label{LIL1} \end{eqnarray} In particular, \begin{equation*} \label{LILSUPRM1} \limsup_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right) \bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha} \bigr)^2= \Bigl(\frac{b_1^2}{2b_1 -1}\Bigr) \tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation*} \end{thm} \begin{rem} In the special case where the step sequence $(b_n)$ is given by \begin{equation*} b_n=\frac{1}{n+1}, \end{equation*} it is easy to see that $\widehat{\vartheta}_{n}$ and $\widetilde{\vartheta}_{n}$ both reduce to $$ \widehat{\vartheta}_{n}=\frac{1}{n} \sum_{k=1}^n \Bigl( \frac{X_{k}}{1-\alpha}\Bigr)\mathrm{I}_{\{X_{k} >\theta_{k-1}\}} $$ and $$ \widetilde{\vartheta}_{n}=\frac{1}{n} \sum_{k=1}^n \theta_{k-1} +\frac{1}{n} \sum_{k=1}^n \Bigl( \frac{X_{k} -\theta_{k-1} }{1-\alpha}\Bigr)\mathrm{I}_{\{X_{k} >\theta_{k-1}\}}. 
$$ In this setting, we immediately obtain from Theorem \ref{T-LILQSEQUAL1} that \begin{equation*} \label{BASICQSL} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \bigl( \widehat{\vartheta}_{k} - \vartheta_{\alpha} \bigr)^2= \tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation*} and \begin{equation*} \label{BASICLILSUP} \limsup_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right) \bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha} \bigr)^2= \tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation*} \end{rem} \begin{thm} \label{T-LILQSLESS1} Assume that $(\mathcal{A}_1)$ and $(\mathcal{A}_2)$ hold and that the random variable $X$ has a moment of order $>2$. Moreover, suppose that $f(\theta_{\alpha})>0$ and that the step sequences $(a_n)$ and $(b_n)$ are given by \begin{equation*} a_n=\frac{a_1}{n^a} \hspace{1.5cm}\text{and}\hspace{1.5cm} b_n=\frac{b_1}{n^b} \end{equation*} where $a_1>0$, $b_1>0$ and $1/2<a<b<1$. Then, $(\widehat{\vartheta}_{n})$ and $(\widetilde{\vartheta}_{n})$ share the same QSL \begin{equation} \label{QSL3} \lim_{n \rightarrow \infty} \frac{1}{n^{1-b}} \sum_{k=1}^n \bigl( \widehat{\vartheta}_{k} - \vartheta_{\alpha} \bigr)^2= \Bigl(\frac{b_1}{2(1-b)} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation} In addition, they also share the same LIL \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{n^b}{2 (1-b)\log n} \right)^{1/2} \!\!\!\bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha} \bigr) &=& - \liminf_{n \rightarrow \infty} \left(\frac{n^b}{2 (1-b)\log n}\right)^{1/2} \!\!\!\bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha} \bigr) \notag \\ &=& \left( \frac{b_1}{2} \right)^{1/2} \tau_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \label{LIL3} \end{eqnarray} In particular, \begin{equation*} \label{LILSUPRM3} \limsup_{n \rightarrow \infty} \left(\frac{n^b}{2 (1-b)\log n} \right) \bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha} \bigr)^2= \Bigl(\frac{b_1}{2} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation*} \end{thm} \begin{rem} Similar computations in the case where $1/2<b<a<1$ would lead to the same results for the convexified algorithm $(\widetilde{\vartheta}_{n})$. However, for the standard algorithm $(\widehat{\vartheta}_{n})$, it is necessary to replace the asymptotic variance $\tau_\alpha^2(\theta_{\alpha})$ by $\sigma_\alpha^2(\theta_{\alpha})$. This emphasizes the interest of using the convexified algorithm. \end{rem} We now focus our attention on the asymptotic normality of our two-time-scale stochastic algorithms \eqref{TTSALGO1} and \eqref{TTSALGO2}. \begin{thm} \label{T-AN} Assume that $(\mathcal{A}_1)$ and $(\mathcal{A}_2)$ hold and that the random variable $X$ has a moment of order $>2/a$. Moreover, suppose that $f(\theta_{\alpha})>0$ and that the step sequences $(a_n)$ and $(b_n)$ are given by \begin{equation*} a_n=\frac{a_1}{n^a} \hspace{1.5cm}\text{and}\hspace{1.5cm} b_n=\frac{b_1}{n^b} \end{equation*} where $a_1>0$, $b_1 >0$ and $1/2<a<b \leq 1$ with $b_1>1/2$ if $b=1$. 
Then, $(\widehat{\vartheta}_{n})$ and $(\widetilde{\vartheta}_{n})$ share the same joint asymptotic normality \begin{equation} \label{AN1} \begin{pmatrix} \sqrt{n^a} \bigl(\theta_n - \theta_{\alpha}\bigr) \vspace{1ex} \\ \sqrt{n^b} \bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha}\bigr) \\ \end{pmatrix} \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\left(0, \begin{pmatrix} \Gamma_{\theta_{\alpha}} & 0 \\ 0 & \Gamma_{\vartheta_{\alpha}} \\ \end{pmatrix}\right) \end{equation} where the asymptotic variances are given by \begin{equation*} \Gamma_{\theta_{\alpha}}= \frac{a_1\alpha(1-\alpha)}{2f(\theta_{\alpha})} \end{equation*} and \begin{equation*} \Gamma_{\vartheta_{\alpha}}= \left \{ \begin{array}[c]{ccc} {\displaystyle \frac{b_1^2 \tau^2_{\alpha}(\theta_\alpha)}{2b_1 - 1}} & \text{if} & b=1, \vspace{1ex} \\ {\displaystyle \frac{b_1 \tau^2_{\alpha}(\theta_\alpha)}{2}} & \text{if} & b<1. \end{array} \right. \end{equation*} \end{thm} \begin{rem} One can observe that the asymptotic covariance matrix in \eqref{AN1} is diagonal. It means that, in the limit, the two algorithms for quantile and superquantile estimation are no longer correlated. This is due to the fact that we use two different time scales, in contrast to Bardou et al. \cite{Bardou2009}. Moreover, in the special case where $b=1$, we also recover the same asymptotic variance as the one obtained in \cite{Bardou2009} for the averaged version of their one-time-scale stochastic algorithm. \end{rem} \begin{rem} The asymptotic variance $\tau^2_{\alpha}(\theta_\alpha)$ can be estimated by $$ \tau_n^2= \frac{1}{n}\sum_{k=1}^n \Bigl(\frac{X_k - \theta_{k-1}}{1 - \alpha} \Bigr)^2 \mathrm{I}_{\{X_{k} >\theta_{k-1}\}} - \Bigl( \frac{1}{n}\sum_{k=1}^n \Bigl(\frac{X_k - \theta_{k-1}}{1 - \alpha} \Bigr) \mathrm{I}_{\{X_{k} >\theta_{k-1}\}} \Bigr)^2. $$ Along the same lines as in the proof of the almost sure convergences \eqref{ASCVGSQ1} and \eqref{ASCVGSQ2}, one can verify that $\tau_n^2 \rightarrow \tau^2_{\alpha}(\theta_\alpha)$ a.s. Therefore, using Slutsky's Theorem, we deduce from \eqref{AN1} that $(\widehat{\vartheta}_{n})$ and $(\widetilde{\vartheta}_{n})$ share the same asymptotic normality \begin{equation} \label{AN2} \sqrt{n^b} \Bigl( \frac{\widehat{\vartheta}_{n} - \vartheta_{\alpha}}{\tau_n} \Bigr) \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}(0,\nu^2) \end{equation} where \begin{equation*} \nu^2= \left \{ \begin{array}[c]{ccc} {\displaystyle \frac{b_1^2}{2b_1 - 1}} & \text{if} & b=1, \vspace{1ex} \\ {\displaystyle \frac{b_1}{2}} & \text{if} & b<1. \end{array} \right. \end{equation*} Convergence \eqref{AN2} allows us to construct asymptotic confidence intervals for the superquantile $\vartheta_{\alpha}$. \end{rem} \section{Our martingale approach} \label{S-MA} All our analysis relies on a decomposition of our estimates as a sum of a martingale increment and a drift term. More precisely, it follows from \eqref{TTSALGO1} and \eqref{TTSALGO2} that for all $n \geq 1$, \begin{equation} \label{TTSALGOYZ} \left \{ \begin{array}[c]{ccc} \widehat{\vartheta}_{n+1} & = & (1-b_n)\widehat{\vartheta}_{n}+b_n Y_{n+1} \vspace{2ex} \\ \widetilde{\vartheta}_{n+1} & = & (1-b_n)\widetilde{\vartheta}_{n}+b_n Z_{n+1} \end{array} \right. \end{equation} where $$ Y_{n+1}=\frac{X_{n+1}}{1-\alpha}\mathrm{I}_{\{X_{n+1} >\theta_{n}\}} $$ and $$ Z_{n+1}=\theta_{n}+ \frac{(X_{n+1}-\theta_{n})}{1-\alpha}\mathrm{I}_{\{X_{n+1} >\theta_{n}\}}.
$$ Let $H_\alpha(\theta)$ and $L_\alpha(\theta)$ be the functions defined, for all $\theta \in \mathbb{R}$, by \begin{equation} \label{DEFHL} H_\alpha(\theta)=\frac{1}{1-\alpha} \mathbb{E}[X \mathrm{I}_{\{X >\theta\}}] \hspace{0.5cm}\text{and}\hspace{0.5cm} L_\alpha(\theta)= \theta +\frac{1}{1-\alpha} \mathbb{E}[(X-\theta) \mathrm{I}_{\{X >\theta\}}]. \end{equation} We clearly have that almost surely $$\mathbb{E}[Y_{n+1} | \mathcal{F}_n]=H_\alpha(\theta_{n}) \hspace{1cm}\text{and}\hspace{1cm} \mathbb{E}[Z_{n+1} | \mathcal{F}_n]=L_\alpha(\theta_{n}).$$ It allows us to split $Y_{n+1}$ and $Z_{n+1}$ as a sum of a martingale increment and a drift term, $Y_{n+1}=\varepsilon_{n+1} + H_\alpha(\theta_{n})$ and $Z_{n+1}=\xi_{n+1} + L_\alpha(\theta_{n})$. One can also verify that $\mathbb{E}[\varepsilon_{n+1}^2 | \mathcal{F}_n]=\sigma_\alpha^2(\theta_{n})$ and $\mathbb{E}[\xi_{n+1}^2 | \mathcal{F}_n]=\tau_\alpha^2(\theta_{n})$ where the two variances are given by \eqref{DEFVAR}. Then, we immediately deduce from \eqref{TTSALGOYZ} that for all $n \geq 1$, \begin{equation} \label{TTSALGOE} \left \{ \begin{array}[c]{ccc} \widehat{\vartheta}_{n+1} & = & (1-b_n)\widehat{\vartheta}_{n}+b_n (\varepsilon_{n+1} + H_\alpha(\theta_{n})) \vspace{2ex} \\ \widetilde{\vartheta}_{n+1} & = & (1-b_n)\widetilde{\vartheta}_{n}+b_n (\xi_{n+1}+L_\alpha(\theta_{n})). \end{array} \right. \end{equation} Hereafter, assume for the sake of simplicity that for all $n \geq 1$, $b_n<1$, since this is true for $n$ large enough. Let $(P_n)$ be the increasing sequence of positive real numbers defined by \begin{equation} \label{DEFPN} P_n= \prod_{k=1}^n (1-b_k)^{-1} \end{equation} with the convention that $P_0\!=\!1$. Since $(1-b_n)P_n=P_{n-1}$, we obtain from \eqref{TTSALGOE} that \begin{equation*} \left \{ \begin{array}[c]{ccc} P_n\widehat{\vartheta}_{n+1} & = & P_{n-1}\widehat{\vartheta}_{n}+P_nb_n (\varepsilon_{n+1} + H_\alpha(\theta_{n})) \vspace{2ex} \\ P_n\widetilde{\vartheta}_{n+1} & = & P_{n-1}\widetilde{\vartheta}_{n}+P_n b_n (\xi_{n+1}+L_\alpha(\theta_{n})) \end{array} \right. \end{equation*} which implies the martingale decomposition \begin{equation} \label{DECMART} \left \{ \begin{array}[c]{ccc} \widehat{\vartheta}_{n+1} & = & {\displaystyle \frac{1}{P_{n}}\Bigl( \widehat{\vartheta}_1+M_{n+1} + H_{n+1} \Bigr)} \vspace{1ex}\\ \widetilde{\vartheta}_{n+1} & = &{\displaystyle \frac{1}{P_{n}}\Bigl( \widetilde{\vartheta}_1+N_{n+1} + L_{n+1} \Bigr)} \end{array} \right. \end{equation} where \begin{equation} \label{DEFMART} M_{n+1}=\sum_{k=1}^{n} b_k P_k \varepsilon_{k+1}, \hspace{1.5cm}N_{n+1}=\sum_{k=1}^{n} b_k P_k \xi_{k+1} \end{equation} \begin{equation} \label{DEFADDT} H_{n+1}= \sum_{k=1}^{n} b_k P_k H_\alpha(\theta_k), \hspace{1.5cm}L_{n+1}= \sum_{k=1}^{n} b_k P_k L_\alpha(\theta_k). \end{equation} Our strategy is to establish the asymptotic behavior of the two martingales $(M_n)$ and $(N_n)$ as well as to determine the crucial role played by the two drift terms $(H_n)$ and $(L_n)$. Several results in our analysis rely on the following keystone lemma which concerns the convexity properties of the functions $H_\alpha$ and $L_\alpha$ defined in \eqref{DEFHL}. \begin{lem} \label{L-HL} Assume that $(\mathcal{A}_1)$ and $(\mathcal{A}_2)$ hold.
Then, $L_\alpha$ is a convex function such that $L_\alpha(\theta_{\alpha})=\vartheta_{\alpha}$, $L_\alpha^\prime(\theta_{\alpha})=0$ and that for all $ \theta \in \mathbb{R}$, \begin{equation} \label{TAYLORL} 0 \leq L_\alpha(\theta) - L_\alpha(\theta_{\alpha})\leq \frac{ ||f||_\infty}{2(1- \alpha)} \bigl( \theta - \theta_{\alpha} \bigr)^2. \end{equation} In addition, we also have $H_\alpha(\theta_{\alpha})=\vartheta_{\alpha}$ and that for all $ \theta \in \mathbb{R}$, \begin{equation} \label{TAYLORH} \Bigl| H_\alpha(\theta) - H_\alpha(\theta_{\alpha}) + \frac{\theta_{\alpha} f(\theta_{\alpha})}{1-\alpha} (\theta -\theta_{\alpha} )\Bigr|\leq \frac{ ||\Phi||_\infty}{2(1- \alpha)} \bigl( \theta - \theta_{\alpha} \bigr)^2. \end{equation} \end{lem} \begin{proof} It follows from \eqref{DEFHL} that for all $\theta \in \mathbb{R}$, $$ L_\alpha^\prime(\theta)= \frac{F(\theta)-\alpha}{1-\alpha} \hspace{1cm}\text{and}\hspace{1cm} L_\alpha^{\prime \prime}(\theta)= \frac{f(\theta)}{1-\alpha} . $$ Consequently, $L_\alpha$ is a convex function such that $L_\alpha^\prime(\theta_{\alpha})=0$. Hence, we deduce from a Taylor expansion with integral remainder that for all $ \theta \in \mathbb{R}$, \begin{equation*} L_\alpha(\theta) = L_\alpha(\theta_{\alpha}) + (\theta - \theta_{\alpha})^2 \int_0^1 (1-t) L_\alpha^{\prime \prime}(\theta_t)dt \end{equation*} where $\theta_t=\theta_{\alpha}+t(\theta - \theta_{\alpha})$, which immediately leads to \eqref{TAYLORL} using $(\mathcal{A}_1)$. Unfortunately, $H_\alpha$ is not a convex function. However, we obtain from \eqref{DEFHL} that for all $\theta \in \mathbb{R}$, $$ H_\alpha^\prime(\theta)= -\frac{\theta f(\theta)}{1-\alpha} \hspace{1cm}\text{and}\hspace{1cm} H_\alpha^{\prime \prime}(\theta)= -\frac{\Phi(\theta)}{1-\alpha} $$ where $\Phi(\theta)=f(\theta)+ \theta f^\prime(\theta)$. Finally, \eqref{TAYLORH} follows once again from a Taylor expansion with integral remainder together with $(\mathcal{A}_2)$. \end{proof} \noindent We have just seen that the function $H_\alpha$ is not convex. Consequently, in order to prove sharp asymptotic properties for the sequence $(\widehat{\vartheta}_{n})$, it is necessary to slightly modify the first martingale decomposition in \eqref{DECMART}. For all $\theta \in \mathbb{R}$, let \begin{equation} \label{DEFREMAINDERS} \left \{ \begin{array}[c]{ccc} G_\alpha(\theta) & = & F(\theta) - \alpha -f(\theta_{\alpha})(\theta - \theta_{\alpha}),\vspace{2ex}\\ R_\alpha(\theta) & = & H_\alpha(\theta) - \vartheta_{\alpha} +C_\alpha (\theta - \theta_{\alpha}) \end{array} \right. \end{equation} where $$ C_\alpha=- H_\alpha^\prime(\theta_{\alpha})= \frac{\theta_{\alpha} f(\theta_{\alpha})}{1-\alpha}. $$ We deduce from \eqref{TTSALGO1}, \eqref{TTSALGOE} and \eqref{DEFREMAINDERS} that for all $n \geq 1$, \begin{equation} \label{DECALGON1} \left \{ \begin{array}[c]{ccl} \theta_{n+1} -\theta_{\alpha} \!& \!\!=\! \!&\! (1-a_n f(\theta_{\alpha})) (\theta_{n} - \theta_{\alpha})-a_n \bigl( V_{n+1} + G_\alpha(\theta_n) \bigr), \vspace{2ex}\\ \widehat{\vartheta}_{n+1} -\vartheta_{\alpha} \!& \!\!=\! \!&\! (1-b_n)(\widehat{\vartheta}_{n} -\vartheta_{\alpha})+b_n \bigl(\varepsilon_{n+1} \!+\! R_\alpha(\theta_{n}) \!-\! C_\alpha (\theta_n - \theta_{\alpha})\bigr) \end{array} \right. \end{equation} with $V_{n+1}=\mathrm{I}_{\{X_{n+1} \leq \theta_{n} \}} - F(\theta_n)$. 
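The identities $H_\alpha(\theta_{\alpha})=L_\alpha(\theta_{\alpha})=\vartheta_{\alpha}$ underlying these decompositions are easy to check on a concrete example. For $X \sim \mathcal{E}(1)$, $\mathbb{E}[X \mathrm{I}_{\{X >\theta\}}]=(\theta+1)e^{-\theta}$ and $\mathbb{E}[(X-\theta) \mathrm{I}_{\{X >\theta\}}]=e^{-\theta}$, so both functions are available in closed form; the sketch below is our own numerical illustration of Lemma \ref{L-HL}.
\begin{verbatim}
# Closed-form check of H(theta_a) = L(theta_a) = vartheta_a for Exp(1).
import numpy as np

alpha = 0.9
theta_a = -np.log(1.0 - alpha)   # quantile of Exp(1)

def H(t):   # H_alpha(t) = (t+1) e^{-t} / (1-alpha) for Exp(1)
    return (t + 1.0) * np.exp(-t) / (1.0 - alpha)

def L(t):   # L_alpha(t) = t + e^{-t} / (1-alpha) for Exp(1)
    return t + np.exp(-t) / (1.0 - alpha)

print(H(theta_a), L(theta_a), theta_a + 1.0)   # all three coincide

# L_alpha is convex with a vanishing derivative at theta_alpha:
ts = theta_a + np.linspace(-0.5, 0.5, 5)
print(L(ts) - L(theta_a))                      # nonnegative gaps
\end{verbatim}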
Hereafter, we shall consider a tailor-made weighted sum of our estimates given by $\Delta_1=0$ and, for all $n\geq 2$, \begin{equation} \label{DIFDELTA} \Delta_n = (\widehat{\vartheta}_{n} -\vartheta_{\alpha} ) - \delta_n (\theta_{n} -\theta_{\alpha}) \end{equation} where $(\delta_n)$ is a deterministic sequence, depending on $(a_n)$ and $(b_n)$, which will be explicitly given below. It follows from \eqref{DECALGON1} together with straightforward calculation that for all $n\geq 2$, \begin{equation} \label{DECALGON2} \Delta_{n+1}=(1-b_n) \Delta_n +b_n \Bigl(W_{n+1}+ R_\alpha(\theta_{n}) +\frac{a_n}{b_n}\delta_{n+1} G_\alpha(\theta_n) +\nu_{n+1} (\theta_{n} - \theta_{\alpha}) \Bigr) \end{equation} where \begin{equation} \label{DEFWN} W_{n+1}= \varepsilon_{n+1}+\frac{a_n}{b_n}\delta_{n+1}V_{n+1} \end{equation} and \begin{equation} \label{DEFNUN} \nu_{n+1} = \frac{1}{b_n}\Bigl((1-b_n)\delta_n - C_\alpha b_n-\delta_{n+1}(1-a_n f(\theta_{\alpha}))\Bigr). \end{equation} We have several strategies in order to simplify the expression of $\nu_{n+1}$. A first possibility that cancels several terms in \eqref{DEFNUN} is to choose $$ \delta_{n+1}=\frac{C_\alpha b_n}{f(\theta_{\alpha}) a_n}. $$ It clearly reduces $\nu_{n+1}$ to $$ \nu_{n+1}=\frac{(1-b_n)\delta_n - \delta_{n+1}}{b_n}. $$ Another more sophisticated choice, which only works if $a_n f(\theta_\alpha) -b_n\neq 0$, is to take \begin{equation} \label{DEFDELTAN} \delta_{n+1}= \frac{C_\alpha b_n}{a_n f(\theta_\alpha) -b_n}. \end{equation} It implies that \begin{equation} \label{DEFNUNN} \nu_{n+1}=\frac{(1-b_n)(\delta_n - \delta_{n+1})}{b_n}. \end{equation} The two choices are quite similar in the special case where $b_n=b_1/n$. However, in the case where the step $b_n=b_1/n^b$ with $a<b<1$, the second choice outperforms the first one as $\nu_n$ goes faster towards zero as $n$ grows to infinity. Throughout the sequel, we shall make use of the second choice given by \eqref{DEFDELTAN}. We deduce from \eqref{DECALGON2} the new martingale decomposition \begin{equation} \label{DECALGON3} \Delta_{n+1} = \frac{1}{P_{n}}\Bigl( \mathcal{M}_{n+1} +\mathcal{H}_{n+1} +\mathcal{R}_{n+1}\Bigr) \end{equation} where \begin{equation} \label{DEFMARTNEW} \mathcal{M}_{n+1}=\sum_{k=1}^{n} b_k P_k W_{k+1}, \hspace{1.5cm}\mathcal{H}_{n+1}=\sum_{k=1}^{n} b_k P_k \nu_{k+1}\bigl(\theta_k - \theta_{\alpha}\bigr) \end{equation} and \begin{equation} \label{DEFRNEW} \mathcal{R}_{n+1}= \sum_{k=1}^{n} b_k P_k \Bigl( R_\alpha(\theta_k) + \frac{a_k}{b_k}\delta_{k+1}G_\alpha(\theta_k)\Bigr). \end{equation} \section{Proofs of the almost sure convergence results} \label{S-PRASCVG} \subsection{The basic almost sure properties.} The starting point in our analysis of the almost sure convergence of our estimates is the following lemma. \begin{lem} \label{L-ASCVG} Assume that $(\mathcal{A}_1)$ holds and that the random variable $X$ is square integrable. Then, we have the almost sure convergences \begin{equation} \label{ASCVGMN} \lim_{n \rightarrow \infty} \frac{M_{n+1}}{P_{n}} = 0 \hspace{1cm}\text{and}\hspace{1cm} \lim_{n \rightarrow \infty} \frac{N_{n+1}}{P_{n}} = 0 \hspace{1cm}\text{a.s.} \end{equation} \end{lem} \begin{proof} Let $(\Sigma_n^{\varepsilon})$ and $(\Sigma_n^{\xi})$ be the two locally square integrable martingales $$ \Sigma_n^{\varepsilon}= \sum_{k=1}^{n-1} b_k\varepsilon_{k+1}, \hspace{1.5cm} \Sigma_n^{\xi}= \sum_{k=1}^{n-1} b_k\xi_{k+1}. 
$$ Their predictable quadratic variations \cite{Duflo1997} are respectively given by $$ \langle \Sigma^{\varepsilon} \rangle_n= \sum_{k=1}^{n-1} b_k^2 \sigma_\alpha^2(\theta_k) \hspace{1cm} \text{and} \hspace{1cm} \langle \Sigma^{\xi} \rangle_n=\sum_{k=1}^{n-1} b_k^2 \tau_\alpha^2(\theta_k). $$ It follows from convergence \eqref{ASCVGRM} and the continuity of the variances $\sigma_\alpha^2(\theta)$ and $\tau_\alpha^2(\theta)$ given by \eqref{DEFVAR} that $\sigma_\alpha^2(\theta_{n}) \longrightarrow \sigma_\alpha^2(\theta_{\alpha})$ and $\tau_\alpha^2(\theta_{n}) \longrightarrow \tau_\alpha^2(\theta_{\alpha})$ a.s. Consequently, we get from the right-hand side of \eqref{CONDSTEP} that \begin{equation} \label{ASCVGIPM} \lim_{n \rightarrow \infty} \langle \Sigma^{\varepsilon} \rangle_n < + \infty \hspace{1cm} \text{and} \hspace{1cm} \lim_{n \rightarrow \infty} \langle \Sigma^{\xi} \rangle_n < + \infty \hspace{1cm}\text{a.s} \end{equation} Therefore, we obtain from the strong law of large numbers for martingales given e.g. by theorem 1.3.24 in \cite{Duflo1997} that $(\Sigma_n^{\varepsilon})$ and $(\Sigma_n^{\xi})$ both converge almost surely. The rest of the proof proceeds in a standard way with the help of Kronecker's lemma. As a matter of fact, we can deduce from the left-hand side of \eqref{CONDSTEP} that the sequence $(P_n)$, defined in \eqref{DEFPN}, is strictly increasing to infinity. In addition, we just showed the almost sure convergence of the series $$ \sum_{n=1}^{\infty} b_n\varepsilon_{n+1} \hspace{1cm} \text{and} \hspace{1cm} \sum_{n=1}^{\infty} b_n\xi_{n+1}. $$ Consequently, we immediately deduce from Kronecker's lemma that \begin{equation*} \lim_{n \rightarrow \infty} \frac{1}{P_{n}} \sum_{k=1}^{n} b_k P_k \varepsilon_{k+1}= 0 \hspace{1cm}\text{and}\hspace{1cm} \lim_{n \rightarrow \infty} \frac{1}{P_{n}} \sum_{k=1}^{n} b_k P_k \xi_{k+1}= 0 \hspace{1cm}\text{a.s.} \end{equation*} which is exactly what we wanted to prove. \end{proof} \noindent{\bf Proof of Theorem \ref{T-ASCVGSQ}.} We recall from \eqref{DECMART} that for all $n \geq 1$, \begin{equation*} \left \{ \begin{array}[c]{ccc} \widehat{\vartheta}_{n+1} & = & {\displaystyle \frac{1}{P_{n}}\Bigl( \widehat{\vartheta}_1+M_{n+1} + H_{n+1} \Bigr)} \vspace{1ex}\\ \widetilde{\vartheta}_{n+1} & = &{\displaystyle \frac{1}{P_{n}}\Bigl( \widetilde{\vartheta}_1+N_{n+1} + L_{n+1} \Bigr)}. \end{array} \right. \end{equation*} We have from \eqref{ASCVGRM} together with the continuity of the functions $H_\alpha$ and $L_\alpha$ that \begin{equation} \label{ASCVGH} \lim_{n \rightarrow \infty} H_\alpha(\theta_{n}) = H_\alpha(\theta_{\alpha}) \hspace{1cm}\text{a.s} \end{equation} and \begin{equation} \label{ASCVGL} \lim_{n \rightarrow \infty} L_\alpha(\theta_{n}) = L_\alpha(\theta_{\alpha}) \hspace{1cm}\text{a.s} \end{equation} One can observe that $H_\alpha(\theta_{\alpha})= L_\alpha(\theta_{\alpha}) =\vartheta_{\alpha}$. Moreover, it is easy to see that for all $n\geq 1$, $b_nP_n=P_n-P_{n-1}$. Hence, we obtain by a telescoping argument that \begin{equation} \label{DECPN} \sum_{k=1}^{n} b_k P_k=P_n -P_0, \end{equation} which leads to $$ \lim_{n \rightarrow \infty} \frac{1}{P_n}\sum_{k=1}^{n} b_k P_k=1. 
$$ Therefore, it follows from Toeplitz's lemma that \begin{equation} \label{ASCVGHL} \lim_{n \rightarrow \infty} \frac{H_{n+1}}{P_{n}} = \vartheta_{\alpha} \hspace{1cm}\text{and}\hspace{1cm} \lim_{n \rightarrow \infty} \frac{L_{n+1}}{P_{n}} = \vartheta_{\alpha} \hspace{1cm}\text{a.s.} \end{equation} Finally, we find from \eqref{DECMART}, \eqref{ASCVGMN} and \eqref{ASCVGHL} that \begin{equation} \label{ASCVGVAR} \lim_{n \rightarrow \infty} \widehat{\vartheta}_{n} = \vartheta_{\alpha} \hspace{1cm}\text{and}\hspace{1cm} \lim_{n \rightarrow \infty} \widetilde{\vartheta}_{n} = \vartheta_{\alpha} \hspace{1cm}\text{a.s.} \end{equation} which completes the proof of Theorem \ref{T-ASCVGSQ}. \hfill $\videbox$\\ \subsection{A keystone lemma.} The QSL as well as the LIL for our estimates require the sharp asymptotic behavior of the sequence $(P_n)$ defined in \eqref{DEFPN}. Surprisingly, to the best of our knowledge, the following keystone lemma is new. It involves the famous Euler-Riemann zeta function. \begin{lem} \label{L-CVGPN} Assume that for some $0<b_1<1$, \begin{equation} \label{DEFPN1} P_n=\prod_{k=1}^n \Bigl( 1 - \frac{b_1}{k} \Bigr)^{-1}. \end{equation} Then, we have \begin{equation} \label{CVGPNGAMMA} \lim_{n \rightarrow \infty} \frac{1}{n^{b_1}}P_n= \Gamma(1-b_1) \end{equation} where $\Gamma$ stands for the Euler gamma function. Moreover, suppose that \begin{equation} \label{DEFPNLESS1} P_n=\prod_{k=1}^n \Bigl( 1 - \frac{b_1}{k^b} \Bigr)^{-1} \end{equation} where $1/2<b<1$. Then, we have \begin{equation} \label{CVGPNZETA} \lim_{n \rightarrow \infty} \frac{1}{\exp(cn^{1-b})}P_n= \exp(\Lambda) \end{equation} with $c=b_1/(1-b)$ and the limiting value $$ \Lambda=\sum_{n=2}^\infty \frac{b_1^n}{n} \zeta(b n) $$ where $\zeta$ stands for the Riemann zeta function. \end{lem} \begin{rem} The link between the first case $b=1$ and the second case $1/2<b<1$ is given by the following formula due to Euler. For all $|x|<1$, $$ \log \Gamma(1-x)=\gamma x + \sum_{n=2}^\infty \frac{x^n}{n} \zeta(n) $$ where $\gamma$ is the Euler-Mascheroni constant. \end{rem} \begin{rem} The case $b_1 \geq 1$ can be treated in the same way. For example, concerning the first part of Lemma \ref{L-CVGPN}, it is only necessary to replace $P_n$ defined in \eqref{DEFPN1} by $$ P_n=\prod_{k=1+ \lfloor b_1 \rfloor}^n \Bigl( 1 - \frac{b_1}{k} \Bigr)^{-1} $$ where $\lfloor b_1 \rfloor$ is the integer part of $b_1$. Then, we obtain that \begin{equation*} \lim_{n \rightarrow \infty} \frac{1}{n^{b_1}}P_n= \frac{\Gamma(1-\{b_1\})}{\Gamma(1+ \lfloor b_1 \rfloor)} \end{equation*} where $\{b_1\}=b_1-\lfloor b_1 \rfloor$ stands for the fractional part of $b_1$. \end{rem} \begin{proof} In the first case $b=1$, we clearly have \begin{equation} \label{PGAMMA} P_n=\prod_{k=1}^n \Bigl( 1 - \frac{b_1}{k} \Bigr)^{-1}= \frac{\Gamma(n+1) \Gamma(1-b_1)}{\Gamma(n+1-b_1)}. \end{equation} It is well-known that for any $c>0$, \begin{equation} \label{CVGAMMA} \lim_{n \rightarrow \infty} \frac{\Gamma(n+c)}{\Gamma(n) n^c}= 1. \end{equation} Hence, we obtain from \eqref{PGAMMA} and \eqref{CVGAMMA} that \begin{equation} \label{CVGPN} \lim_{n \rightarrow \infty} \frac{1}{n^{b_1}}P_n= \Gamma(1-b_1). \end{equation} The second case $1/2<b<1$ is much more difficult to handle.
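Before handling this second case, note that the first limit \eqref{CVGPNGAMMA} can be checked numerically in a few lines; the sketch below is a quick illustration, separate from the proof.
\begin{verbatim}
# Numerical check of (CVGPNGAMMA): P_n / n^{b1} -> Gamma(1 - b1)
# for b_n = b1/n with 0 < b1 < 1.
import math

b1 = 0.7
P = 1.0
for n in range(1, 200_001):
    P /= 1.0 - b1 / n
    if n in (10, 1_000, 200_000):
        print(n, P / n**b1)
print("Gamma(1-b1) =", math.gamma(1.0 - b1))
\end{verbatim}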
It follows from the Taylor expansion of the natural logarithm $$ \log(1-x)=-\sum_{\ell=1}^\infty \frac{x^\ell}{\ell} $$ that \begin{eqnarray} \log(P_n) &=&-\sum_{k=1}^n\log\Bigl(1- \frac{b_1}{k^b}\Bigr)=\sum_{k=1}^n \sum_{\ell=1}^\infty \frac{1}{\ell} \Bigl(\frac{b_1}{k^b}\Bigr)^\ell = \sum_{\ell=1}^\infty \sum_{k=1}^n \frac{1}{\ell} \Bigl(\frac{b_1}{k^b}\Bigr)^\ell, \notag \\ &=& b_1 \sum_{k=1}^n \frac{1}{k^b} + \sum_{\ell=2}^\infty \frac{b_1^\ell}{\ell}\sum_{k=1}^n \frac{1}{k^{b\ell}}. \label{LOGPN} \end{eqnarray} It is well-known that $$ \lim_{n \rightarrow \infty}\frac{1}{n^{1-b}}\sum_{k=1}^n \frac{1}{k^b}=\frac{1}{1-b}. $$ In addition, as $b>1/2$, we always have for all $\ell \geq 2$, $b\ell >1$. Consequently, $$ \lim_{n \rightarrow \infty} \sum_{k=1}^n \frac{1}{k^{b\ell}} =\zeta(b \ell) $$ where $\zeta$ is the Riemann zeta function. Therefore, we obtain from \eqref{LOGPN} that \begin{equation} \label{CVGPNLESS1} \lim_{n \rightarrow \infty} \frac{1}{\exp(cn^{1-b})}P_n= \exp(\Lambda) \end{equation} where $c=b_1/(1-b)$ and the limiting value $$ \Lambda=\sum_{\ell=2}^\infty \frac{b_1^\ell}{\ell} \zeta(b \ell). $$ \end{proof} \subsection{The fast step size case.} The proof of Theorem \ref{T-LILQSEQUAL1} relies on the following lemma which provides the QSL and the LIL for the martingales $(\mathcal{M}_n)$ and $(N_n)$. \begin{lem} \label{L-MART1} Assume that the step sequences $(a_n)$ and $(b_n)$ are given by \begin{equation*} a_n=\frac{a_1}{n^a} \hspace{1.5cm}\text{and}\hspace{1.5cm} b_n=\frac{b_1}{n} \end{equation*} where $a_1>0$, $b_1>1/2$ and $1/2<a<1$. Then, $(\mathcal{M}_n)$ and $(N_n)$ share the same QSL \begin{equation} \label{QSLMARTM1} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \Bigl( \frac{\mathcal{M}_k}{P_{k-1}} \Bigr)^2= \Bigl(\frac{b_1^2}{2b_1-1} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation} In addition, they also share the same LIL \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\!\Bigl(\frac{\mathcal{M}_n}{P_{n-1}}\Bigr) &=& - \liminf_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\!\Bigl(\frac{\mathcal{M}_n}{P_{n-1}}\Bigr) \notag \\ &=& \left( \frac{b_1^2}{2b_1 -1} \right)^{1/2} \tau_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \label{LILMARTN1} \end{eqnarray} \end{lem} \begin{proof} We first focus our attention on the martingale $(\mathcal{M}_n)$ defined by $$ \mathcal{M}_{n+1}=\sum_{k=1}^{n} b_k P_k W_{k+1} $$ where $$ W_{n+1}=\varepsilon_{n+1} +\frac{a_n}{b_n}\delta_{n+1}V_{n+1} $$ with $V_{n+1}=\mathrm{I}_{\{X_{n+1} \leq \theta_{n} \}} - F(\theta_n)$. We clearly have $\mathbb{E}[V_{n+1} | \mathcal{F}_n]=0$, $\mathbb{E}[W_{n+1} | \mathcal{F}_n]=0$, and $\mathbb{E}[V_{n+1}^2 | \mathcal{F}_n]=F(\theta_n)(1-F(\theta_n))$, $\mathbb{E}[W_{n+1}^2 | \mathcal{F}_n]=\tau_n^2(\theta_{n})$ where for all $\theta \in \mathbb{R}$, \begin{equation} \label{DEFTAUN} \tau_n^2(\theta)=\sigma_\alpha^2(\theta)+\Bigl(\frac{a_n \delta_{n+1}}{b_n}\Bigr)^2 F(\theta)(1-F(\theta)) -\Bigl(\frac{2 a_n \delta_{n+1} }{b_n}\Bigr)F(\theta)H_\alpha(\theta). \end{equation} We obtain from \eqref{DEFDELTAN} that \begin{equation} \label{CVGRATIO} \lim_{n \rightarrow \infty}\frac{a_n \delta_{n+1}}{b_n}=\frac{\theta_{\alpha}}{1-\alpha}. 
\end{equation} Consequently, we infer from \eqref{ASCVGRM}, \eqref{DEFTAUN} and \eqref{CVGRATIO} that \begin{equation} \label{ASCVGNU} \lim_{n \rightarrow \infty} \tau_n^2(\theta_{n}) = \sigma_\alpha^2(\theta_{\alpha}) - \Bigl(\frac{\alpha\theta_{\alpha}}{1-\alpha}\Bigr)(2\vartheta_{\alpha}-\theta_{\alpha})=\tau_\alpha^2(\theta_{\alpha}) \hspace{1cm}\text{a.s} \end{equation} Hereafter, assume for the sake of simplicity that $1/2<b_1<1$ inasmuch as the proof follows exactly the same lines for $b_1 \geq 1$. On the one hand, the predictable quadratic variation of $(\mathcal{M}_n)$ is given by $$ \langle \cM \rangle_n = \sum_{k=1}^{n-1} b_k^2 P_k^2 \tau^2_k(\theta_k). $$ On the other hand, as $b_1\!>\!1/2$, we obtain from convergence \eqref{CVGPNGAMMA} in Lemma \ref{L-CVGPN} that \begin{equation*} \lim_{n \rightarrow \infty} \frac{1}{n^{2b_1-1}}\sum_{k=1}^n b_k^2 P_k^2 = \Bigl(\frac{b_1^2}{2b_1-1} \Bigr) \Gamma^2(1-b_1). \end{equation*} Then, we deduce from \eqref{ASCVGNU} and Toeplitz's lemma that \begin{equation} \label{CVGIPCMN} \lim_{n \rightarrow \infty} \frac{1}{n^{2b_1-1}} \langle \cM \rangle_n = \Bigl(\frac{b_1^2}{2b_1-1} \Bigr) \Gamma^2(1-b_1) \tau^2_\alpha(\theta_{\alpha}) \hspace{1cm}\text{a.s.} \end{equation} Denote by $f_n$ the explosion coefficient associated with the martingale $(\mathcal{M}_n)$, $$ f_n = \frac{\langle \cM \rangle_n - \langle \mathcal{M} \rangle_{n-1}}{\langle \cM \rangle_n}. $$ We obtain from \eqref{CVGIPCMN} that \begin{equation} \label{CVGFN} \lim_{n \rightarrow \infty} nf_n=2b_1-1 \hspace{1cm}\text{a.s.} \end{equation} It means that $f_n$ converges to zero almost surely at rate $n$. In addition, we already saw from \eqref{ASCVGNU} that $$ \lim_{n \rightarrow \infty} \mathbb{E}[ W_{n+1}^2 | \mathcal{F}_n] = \tau^2_\alpha(\theta_{\alpha}) \hspace{1cm}\text{a.s.} $$ Furthermore, the random variable $X$ has a moment of order $>2$. It implies that for some real number $p>2$, $$ \sup_{n \geq 0} \mathbb{E}[ | W_{n+1} |^p | \mathcal{F}_n] < \infty \hspace{1cm}\text{a.s.} $$ Consequently, we deduce from the QSL for martingales given in theorem 3 of \cite{bercu2004} that \begin{equation} \label{QSLCMN1} \lim_{n \rightarrow \infty} \frac{1}{\log \langle \cM \rangle_n} \sum_{k=1}^n f_k \frac{\mathcal{M}_k^2}{\langle \mathcal{M} \rangle _k} = 1 \hspace{1cm}\text{a.s.} \end{equation} Hence, we obtain from the conjunction of \eqref{CVGIPCMN} and \eqref{QSLCMN1} that \begin{equation} \label{QSLCMN2} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \frac{\mathcal{M}_k^2}{P_{k-1}^2 } = \Bigl(\frac{b_1^2}{2b_1-1} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation} We shall now proceed to the proof of the LIL given by \eqref{LILMARTN1}. 
We find from \eqref{CVGFN} that the explosion coefficient $f_n$ satisfies $$ \sum_{n=1}^\infty f_n^{p/2} < +\infty \hspace{1cm}\text{a.s.} $$ Therefore, we deduce from the LIL for martingales \cite{Stout1970}, see also corollary 6.4.25 in \cite{Duflo1997} that \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{1}{2 \langle \cM \rangle_n \log \log \langle \cM \rangle_n} \right)^{1/2} \!\!\!\mathcal{M}_n &=& - \liminf_{n \rightarrow \infty} \left(\frac{1}{2 \langle \cM \rangle_n \log \log \langle \cM \rangle_n} \right)^{1/2} \!\!\!\mathcal{M}_n \notag \\ &=& 1 \hspace{1cm}\text{a.s.} \label{LILCMN1} \end{eqnarray} Hence, it follows from the conjunction of \eqref{CVGPN}, \eqref{CVGIPCMN} and \eqref{LILCMN1} that \begin{eqnarray*} \limsup_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\!\Bigl(\frac{\mathcal{M}_n}{P_{n-1}}\Bigr) &=& - \liminf_{n \rightarrow \infty}\left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\!\Bigl(\frac{\mathcal{M}_n}{P_{n-1}}\Bigr) \notag \\ &=& \left( \frac{b_1^2}{2b_1 -1} \right)^{1/2} \tau_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{eqnarray*} which is exactly what we wanted to prove. Finally, concerning the martingale $(N_n)$ given by $$ N_{n+1}=\sum_{k=1}^{n} b_k P_k \xi_{k+1}, $$ the only minor change is that $\mathbb{E}[\xi_{n+1}^2 | \mathcal{F}_n]= \tau_\alpha^2(\theta_{n})$. However, we already saw that $\tau_\alpha^2(\theta_{n}) \longrightarrow \tau_\alpha^2(\theta_{\alpha})$ a.s. Consequently, $(\mathcal{M}_n)$ and $(N_n)$ share the same QSL and the same LIL, which completes the proof of Lemma \ref{L-MART1}. \end{proof} \noindent{\bf Proof of Theorem \ref{T-LILQSEQUAL1}.} We shall only prove Theorem \ref{T-LILQSEQUAL1} in the special case where $b_n=b_1/n$ with $1/2<b_1<1$ inasmuch as the proof in the case $b_1 \geq 1$ follows essentially the same lines. First of all, we focus our attention on the standard estimator $\widehat{\vartheta}_{n}$. $\bullet$ Our strategy is first to establish the QSL for the sequence $(\Delta_n)$ given by \eqref{DIFDELTA} and then to come back to $\widehat{\vartheta}_{n}$. We recall from \eqref{DECALGON3} that for all $n \geq 2$, \begin{equation*} \Delta_{n+1} = \frac{1}{P_{n}}\Bigl( \mathcal{M}_{n+1} +\mathcal{H}_{n+1} +\mathcal{R}_{n+1}\Bigr). 
\end{equation*} We claim that the weighted sequence $(\Delta_n)$ satisfies the QSL \begin{equation} \label{QSLDELTAN} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \Delta^2_k= \Bigl(\frac{b_1^2}{2b_1-1} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation} As a matter of fact, we already saw from \eqref{QSLMARTM1} that \begin{equation*} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \Bigl( \frac{\mathcal{M}_k}{P_{k-1}} \Bigr)^2= \Bigl(\frac{b_1^2}{2b_1-1} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation*} Hence, in order to prove \eqref{QSLDELTAN}, it is necessary to show that \begin{equation} \label{PRQSLHRN1} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \Bigl( \frac{\mathcal{H}_k}{P_{k-1}} \Bigr)^2 =0 \hspace{0.5cm}\text{and}\hspace{0.5cm} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \Bigl( \frac{\mathcal{R}_k}{P_{k-1}} \Bigr)^2 =0 \hspace{1cm}\text{a.s.} \end{equation} On the one hand, it follows from \eqref{LILSUPRMStepa} that for $n$ large enough and for all $k\geq n$, \begin{equation} \label{MAJLILStepa} \bigl( \theta_k - \theta_{\alpha} \bigr)^2 \leq 2 D_a \Bigl(\frac{\log k}{k^a}\Bigr) \hspace{1cm}\text{a.s.} \end{equation} where $$ D_a= \frac{2a_1(1-a)\alpha(1-\alpha)}{f(\theta_{\alpha})}. $$ Consequently, we obtain from \eqref{DEFMARTNEW} and \eqref{MAJLILStepa} that \begin{equation} \label{PRCVGHN1} | \mathcal{H}_{n+1} | = O \left( \sum_{k=1}^n \frac{b_kP_k | \nu_{k+1}| \sqrt{\log k}}{k^{a/2}} \right) \hspace{1cm}\text{a.s.} \end{equation} Furthermore, one can easily check from \eqref{DEFDELTAN} and \eqref{DEFNUNN} that $$ \lim_{n \rightarrow \infty} n^{1-a} \nu_{n+1}= \frac{(1-a)C_\alpha}{a_1f(\theta_{\alpha})}. $$ In addition, we also recall from convergence \eqref{CVGPNGAMMA} in Lemma \ref{L-CVGPN} that $$ \lim_{n \rightarrow \infty} \frac{1}{n^{b_1}}P_n= \Gamma(1-b_1). $$ Hence, we deduce from \eqref{PRCVGHN1} that \begin{equation} \label{PRCVGHN2} | \mathcal{H}_{n+1} | = O \left( \sum_{k=1}^n \frac{\sqrt{\log k}}{k^{2-b_1-a/2}} \right) \hspace{1cm}\text{a.s.} \end{equation} It follows from \eqref{PRCVGHN2} that \begin{equation} \label{PRCVGHN3} \sum_{n=1}^\infty \Bigl(\frac{\mathcal{H}_n}{P_{n-1}} \Bigr)^2 <+\infty \hspace{1cm}\text{a.s.} \end{equation} As a matter of fact, let $d=2 - b_1 - a/2$. If $d>1$ that is $b_1<1-a/2$, we obtain from \eqref{PRCVGHN2} that $| \mathcal{H}_{n} | = O\bigl( 1 \bigr)$ a.s. Consequently, as $b_1>1/2$, \eqref{PRCVGHN3} holds true. In addition, if $d=1$ that is $b_1=1-a/2$, we deduce from \eqref{PRCVGHN2} that $| \mathcal{H}_{n} | = O\bigl( (\log n)^{3/2} \bigr)$ a.s. which implies that \begin{equation*} \sum_{n=1}^\infty \Bigl(\frac{\mathcal{H}_n}{P_{n-1}} \Bigr)^2 = O \left( \sum_{n=1}^\infty \frac{(\log n)^3}{n^{2b_1}} \right)=O(1) \hspace{1cm}\text{a.s.} \end{equation*} Moreover, if $d<1$ that is $b_1>1-a/2$, we get from \eqref{PRCVGHN2} that $| \mathcal{H}_{n} | = O\bigl( (\log n)^{1/2}n^{1-d} \bigr)$ a.s. 
leading to \begin{equation*} \sum_{n=1}^\infty \Bigl(\frac{\mathcal{H}_n}{P_{n-1}} \Bigr)^2 = O \left( \sum_{n=1}^\infty \frac{(\log n)}{n^{2b_1+2d-2}} \right)= O \left( \sum_{n=1}^\infty \frac{(\log n)}{n^{2-a}} \right)=O(1) \hspace{1cm}\text{a.s.} \end{equation*} On the other hand, \eqref{DEFRNEW} together with \eqref{TAYLORH} and \eqref{DEFREMAINDERS} implies that \begin{equation} \label{PRCVGRN1} | \mathcal{R}_{n+1} | = O \left( \sum_{k=1}^n b_kP_k\bigl(\theta_k - \theta_{\alpha}\bigr)^2 \right) \hspace{1cm}\text{a.s.} \end{equation} It follows from \eqref{MAJLILStepa} and \eqref{PRCVGRN1} that \begin{equation} \label{PRCVGRN2} | \mathcal{R}_{n+1} | = O \left( \sum_{k=1}^n \frac{\log k}{k^{1+a-b_1}} \right) \hspace{1cm}\text{a.s.} \end{equation} which clearly leads to \begin{equation} \label{PRCVGRN3} \sum_{n=1}^\infty \Bigl(\frac{\mathcal{R}_n}{P_{n-1}} \Bigr)^2 <+\infty \hspace{1cm}\text{a.s.} \end{equation} Therefore, we obtain from \eqref{PRCVGHN3} and \eqref{PRCVGRN3} that the two convergences in \eqref{PRQSLHRN1} hold true, which immediately implies \eqref{QSLDELTAN}. Hereafter, one can notice from \eqref{DIFDELTA} that \begin{equation} \label{PRQSLFIN} \sum_{k=1}^n \bigl( \widehat{\vartheta}_{k} - \vartheta_{\alpha} \bigr)^2=\sum_{k=1}^n \Delta_k^2 + \sum_{k=1}^n \delta_k^2 \bigl( \theta_k - \theta_{\alpha} \bigr)^2 + 2\sum_{k=1}^n \delta_k \Delta_k \bigl( \theta_k - \theta_{\alpha} \bigr). \end{equation} Hence, in order to prove \eqref{QSL1}, it suffices to show that \begin{equation*} \label{PRQSLFIN1} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \delta_k^2 \bigl( \theta_k - \theta_{\alpha} \bigr)^2=0 \hspace{1cm}\text{a.s.} \end{equation*} and to make use of the Cauchy-Schwarz inequality. Denote $$ \Lambda_n=\sum_{k=1}^n \bigl( \theta_k - \theta_{\alpha} \bigr)^2. $$ We have from \eqref{QSLRMStepa} that as soon as $f(\theta_{\alpha})>0$, \begin{equation} \label{PRQSLFIN2} \lim_{n \rightarrow \infty} \frac{1}{n^{1-a}} \Lambda_n= \frac{a_1\alpha(1-\alpha)}{2(1-a)f(\theta_{\alpha})} \hspace{1cm}\text{a.s.} \end{equation} Furthermore, we obtain from a simple Abel transform that \begin{equation} \label{ABELDELTAN} \sum_{k=1}^{n} \delta_k^2 \bigl( \theta_k - \theta_{\alpha} \bigr)^2=\delta_n^2 \Lambda_n + \sum_{k=1}^{n-1}(\delta_k^2 - \delta_{k+1}^2 )\Lambda_k. \end{equation} We obtain from \eqref{DEFDELTAN} that \begin{equation} \label{CVGdelta} \lim_{n \rightarrow \infty} n^{1-a} \delta_n= \frac{b_1 C_\alpha}{a_1f(\theta_{\alpha})}. \end{equation} Then, we deduce from \eqref{PRQSLFIN2} that $$ \lim_{n \rightarrow \infty} n^{1-a} \delta_n^2 \Lambda_n= \frac{b_1^2 C_\alpha^2\alpha(1-\alpha)}{2 a_1(1-a)f^3(\theta_{\alpha})} \hspace{1cm}\text{a.s.} $$ which implies that \begin{equation} \label{PRQSLFIN3} \lim_{n \rightarrow \infty} \frac{1}{\log n} \delta_n^2 \Lambda_n=0 \hspace{1cm}\text{a.s.} \end{equation} In addition, we also have from \eqref{DEFDELTAN} that $$ \lim_{n \rightarrow \infty} n^{3-2a} \bigl( \delta_n^2 - \delta_{n+1}^2 \bigr)= 2(1-a) \Bigl(\frac{b_1 C_\alpha}{a_1 f(\theta_{\alpha})}\Bigr)^2. 
$$ It clearly ensures via \eqref{PRQSLFIN2} that \begin{equation} \label{PRQSLFIN4} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^{n-1}(\delta_k^2 - \delta_{k+1}^2 )\Lambda_k=0 \hspace{1cm}\text{a.s.} \end{equation} Then, it follows from \eqref{ABELDELTAN} together with \eqref{PRQSLFIN3} and \eqref{PRQSLFIN4} that \begin{equation} \label{PRQSLFIN5} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^{n} \delta_k^2 \bigl( \theta_k - \theta_{\alpha} \bigr)^2=0 \hspace{1cm}\text{a.s.} \end{equation} Consequently, we obtain from \eqref{QSLDELTAN} together with \eqref{PRQSLFIN}, \eqref{PRQSLFIN5} and the Cauchy-Schwarz inequality that $(\widehat{\vartheta}_{n})$ satisfies the QSL \begin{equation*} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \bigl( \widehat{\vartheta}_{k} - \vartheta_{\alpha} \bigr)^2= \Bigl(\frac{b_1^2}{2b_1-1} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation*} $\bullet$ The proof of the QSL for the convexified estimator $(\widetilde{\vartheta}_{n})$ is much easier. We infer from \eqref{DECMART}, \eqref{DEFADDT}, \eqref{DECPN} and the identity $L_\alpha(\theta_{\alpha}) =\vartheta_{\alpha}$ that for all $n \geq 1$, \begin{eqnarray} \widetilde{\vartheta}_{n+1} -\vartheta_{\alpha} & = & \frac{1}{P_{n}}\Bigl( \widetilde{\vartheta}_1+N_{n+1} + L_{n+1} -P_n \vartheta_{\alpha}\Bigr), \nonumber \\ & = & \frac{1}{P_{n}}\Bigl(N_{n+1} + \widetilde{R}_{n+1} \Bigr) \label{PRQSLT1} \end{eqnarray} where \begin{equation} \label{RMDT1} \widetilde{R}_{n+1}= \widetilde{\vartheta}_1 -\vartheta_{\alpha} + \sum_{k=1}^{n} b_k P_k \bigl(L_\alpha(\theta_k) - L_\alpha(\theta_{\alpha}) \bigr). \end{equation} It follows from Lemma \ref{L-MART1} that \begin{equation} \label{PRQSLT2} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \Bigl( \frac{N_k}{P_{k-1}} \Bigr)^2= \Bigl(\frac{b_1^2}{2b_1-1} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation} Hence, in order to prove \eqref{QSL1}, it suffices to show that \begin{equation} \label{PRQSLT3} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \Bigl( \frac{\widetilde{R}_k}{P_{k-1}} \Bigr)^2 =0 \hspace{1cm}\text{a.s.} \end{equation} We shall prove the stronger result \begin{equation} \label{SRMDT} \sum_{n=1}^\infty \Bigl( \frac{\widetilde{R}_n}{P_{n-1}} \Bigr)^2 < + \infty \hspace{1cm}\text{a.s.} \end{equation} We obtain from \eqref{TAYLORL} and \eqref{RMDT1} that for all $n \geq 1$, \begin{equation} \label{RMDTT} \widetilde{R}_{n+1}^2 \leq 2\bigl(\widetilde{\vartheta}_1 - \vartheta_{\alpha} \bigr)^2+M_f \Bigl( \sum_{k=1}^{n} b_k P_k \bigl( \theta_k - \theta_{\alpha} \bigr)^2 \Bigr)^2 \end{equation} where $$ M_f=\frac{||f||_\infty^2}{2(1- \alpha)^2}. $$ As before, we obtain from a simple Abel transform that \begin{equation} \label{ABELT} \sum_{k=1}^{n} b_k P_k \bigl( \theta_k - \theta_{\alpha} \bigr)^2=b_nP_n \Lambda_n + \sum_{k=1}^{n-1}(b_kP_k - b_{k+1}P_{k+1} )\Lambda_k. \end{equation} It is easy to see that $$ b_nP_n - b_{n+1}P_{n+1}=b_nP_n \Bigl(\frac{1-b_1}{n+1-b_1}\Bigr). $$ It implies that $ 0<n(b_nP_n - b_{n+1}P_{n+1})<(1-b_1)b_nP_n. $ Hence, as $1/2<a<1$ and $1/2<b_1<1$, we find from \eqref{CVGPN} and \eqref{PRQSLFIN2} that \eqref{SRMDT} holds true. 
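For the reader's convenience, this last step can be made explicit by a short order-of-magnitude computation (a sketch only, using the growth rates $b_nP_n=O(n^{b_1-1})$ and $P_n^{-1}=O(n^{-b_1})$ from \eqref{CVGPN}, together with $\Lambda_n=O(n^{1-a})$ from \eqref{PRQSLFIN2}). Both terms on the right-hand side of \eqref{ABELT} are $O\bigl(n^{b_1-a}+\log n\bigr)$, so that \eqref{RMDTT} leads to
$$
\frac{\widetilde{R}_{n+1}^{\,2}}{P_n^2}=O\Bigl(\frac{1}{n^{2b_1}}\Bigr)+O\Bigl(\frac{n^{2(b_1-a)}+(\log n)^2}{n^{2b_1}}\Bigr)
=O\Bigl(\frac{1}{n^{2a}}\Bigr)+O\Bigl(\frac{(\log n)^2}{n^{2b_1}}\Bigr),
$$
which is summable since $a>1/2$ and $b_1>1/2$.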
Consequently, we deduce from \eqref{PRQSLT1} together with \eqref{PRQSLT2} and \eqref{PRQSLT3} that \begin{equation*} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \bigl( \widetilde{\vartheta}_{k} - \vartheta_{\alpha} \bigr)^2= \Bigl(\frac{b_1^2}{2b_1-1} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation*} which is exactly the QSL given by \eqref{QSL1}. $\bullet$ It only remains to establish the LIL for our estimates $(\widehat{\vartheta}_{n})$ and $(\widetilde{\vartheta}_{n})$. We start by proving the LIL for the sequence $(\Delta_n)$. We immediately obtain from \eqref{DECALGON3} that \begin{equation} \label{PRLILH1} \left(\frac{n}{2 \log \log n} \right)^{1/2}\!\!\Delta_n= \left(\frac{n}{2 \log \log n} \right)^{1/2}\Bigl(\frac{\mathcal{M}_n +\mathcal{H}_n+\mathcal{R}_n }{P_{n-1}}\Bigr). \end{equation} We already saw in Lemma \ref{L-MART1} that the martingale $(\mathcal{M}_n)$ satisfies the LIL given by \eqref{LILMARTN1}. In addition, it is easy to see from \eqref{CVGPN}, \eqref{PRCVGHN2} and \eqref{PRCVGRN2} that $$ \lim_{n \rightarrow \infty} n \Bigl( \frac{\mathcal{H}_n }{P_{n-1}}\Bigr)^2=0 \hspace{1cm}\text{and}\hspace{1cm} \lim_{n \rightarrow \infty} n \Bigl( \frac{\mathcal{R}_n }{P_{n-1}}\Bigr)^2=0 \hspace{1cm}\text{a.s.} $$ It clearly implies that \begin{equation*} \lim_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \Bigl( \frac{\mathcal{H}_n }{P_{n-1}}\Bigr) =0 \hspace{1cm}\text{a.s.} \end{equation*} and \begin{equation*} \lim_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\Bigl( \frac{\mathcal{R}_n }{P_{n-1}}\Bigr) =0 \hspace{1cm}\text{a.s.} \end{equation*} Therefore, we deduce from \eqref{LILMARTN1} and \eqref{PRLILH1} that $(\Delta_n)$ satisfies the LIL \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\Delta_n &=& - \liminf_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\Delta_n \notag \\ &=& \left( \frac{b_1^2}{2b_1 -1} \right)^{1/2} \tau_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \label{LILDELTAN1} \end{eqnarray} Hereafter, one can observe from \eqref{DIFDELTA} that \begin{equation} \label{PRLILH2} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\!\bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha} \bigr)=\left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\!\Delta_n+\left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\!\delta_n \bigl( \theta_{n}- \theta_{\alpha} \bigr). \end{equation} It follows from \eqref{LILRMStepa} and \eqref{CVGdelta} that $$ \left(\frac{n}{2 \log \log n} \right) \!\!\!\ \bigl| \delta_n ( \theta_{n}- \theta_{\alpha}) \bigr|^2=O\left(\frac{\log n}{n^{1-a} \log \log n} \right) \hspace{1cm}\text{a.s.} $$ which clearly leads to \begin{equation} \label{PRLILH3} \lim_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \!\!\delta_n \bigl( \theta_{n}- \theta_{\alpha} \bigr) =0 \hspace{1cm}\text{a.s.} \end{equation} Consequently, we obtain \eqref{LIL1} from \eqref{LILDELTAN1}, \eqref{PRLILH2} and \eqref{PRLILH3}. The proof for the convexified estimator $(\widetilde{\vartheta}_{n})$ is straightforward. We obtain from \eqref{PRQSLT1} that \begin{equation} \label{PRLILT1} \left(\frac{n}{2 \log \log n} \right)^{1/2}\bigl( \widetilde{\vartheta}_{n} - \vartheta_{\alpha} \bigr)= \left(\frac{n}{2 \log \log n} \right)^{1/2}\Bigl(\frac{N_n +\widetilde{R}_n }{P_{n-1}}\Bigr). \end{equation} We already saw in Lemma \ref{L-MART1} that the martingale $(N_n)$ satisfies the LIL given by \eqref{LILMARTN1}. 
In addition, it is easy to see from \eqref{CVGPN}, \eqref{RMDTT} and \eqref{ABELT} that $$ \lim_{n \rightarrow \infty} n \Bigl( \frac{\widetilde{R}_n }{P_{n-1}}\Bigr)^2=0 \hspace{1cm}\text{a.s.} $$ which clearly implies \begin{equation} \label{PRLILT2} \lim_{n \rightarrow \infty} \left(\frac{n}{2 \log \log n} \right)^{1/2} \Bigl( \frac{\widetilde{R}_n }{P_{n-1}}\Bigr) =0 \hspace{1cm}\text{a.s.} \end{equation} Finally, we deduce \eqref{LIL1} from \eqref{LILMARTN1}, \eqref{PRLILT1} and \eqref{PRLILT2}, which completes the proof of Theorem \ref{T-LILQSEQUAL1}. \hfill $\videbox$\\ \vspace{-2ex} \subsection{The slow step size case.} In order to prove Theorem \ref{T-LILQSLESS1}, we first establish the following QSL and LIL for the martingales $(\mathcal{M}_n)$ and $(N_n)$. \begin{lem} \label{L-MARTLESS1} Assume that the step sequences $(a_n)$ and $(b_n)$ are given by \begin{equation*} a_n=\frac{a_1}{n^a} \hspace{1.5cm}\text{and}\hspace{1.5cm} b_n=\frac{b_1}{n^b} \end{equation*} where $a_1>0$, $b_1>0$ and $1/2<a<b<1$. Then, $(\mathcal{M}_n)$ and $(N_n)$ share the same QSL \begin{equation} \label{QSLMARTMLESS1} \lim_{n \rightarrow \infty} \frac{1}{n^{1-b}} \sum_{k=1}^n \Bigl( \frac{\mathcal{M}_k}{P_{k-1}} \Bigr)^2= \Bigl(\frac{b_1}{2(1-b)} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation} In addition, they also share the same LIL \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{n^b}{2 (1-b) \log n} \right)^{1/2} \!\!\!\Bigl(\frac{\mathcal{M}_n}{P_{n-1}}\Bigr) &=& - \liminf_{n \rightarrow \infty} \left(\frac{n^b}{2 (1-b) \log n} \right)^{1/2} \!\!\!\Bigl(\frac{\mathcal{M}_n}{P_{n-1}}\Bigr) \notag \\ &=& \left( \frac{b_1}{2} \right)^{1/2} \tau_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \label{LILMARTMLESS1} \end{eqnarray} \end{lem} \begin{proof} We recall that the martingale $(\mathcal{M}_n)$ and its predictable quadratic variation are given by $$ \mathcal{M}_{n+1}=\sum_{k=1}^{n} b_k P_k W_{k+1} \hspace{1cm}\text{and}\hspace{1cm} \langle \cM \rangle_{n+1} = \sum_{k=1}^{n} b_k^2 P_k^2 \tau^2_k(\theta_k) $$ where, thanks to \eqref{ASCVGNU}, $$ \lim_{n \rightarrow \infty} \tau_n^2(\theta_{n}) = \tau_\alpha^2(\theta_{\alpha}) \hspace{1cm}\text{a.s.} $$ It is not hard to see via a series-integral comparison together with convergence \eqref{CVGPNZETA} in Lemma \ref{L-CVGPN} that \begin{equation} \label{COMPSI} \lim_{n \rightarrow \infty} \frac{1}{b_nP_n^2}\sum_{k=1}^n b_k^2 P_k^2 = \frac{1}{2}. \end{equation} Hence, we deduce from \eqref{COMPSI} and Toeplitz's lemma that \begin{equation} \label{CVGIPCMNLESS1} \lim_{n \rightarrow \infty} \frac{1}{b_{n}P_{n}^2} \langle \cM \rangle_{n+1} = \frac{\tau^2_\alpha(\theta_{\alpha})}{2} \hspace{1cm}\text{a.s.} \end{equation} Denote by $f_n$ the explosion coefficient associated with the martingale $(\mathcal{M}_n)$, $$ f_n = \frac{\langle \cM \rangle_n - \langle \mathcal{M} \rangle_{n-1}}{\langle \cM \rangle_n}. $$ It follows from the very definition of $P_n$ given by \eqref{DEFPN} together with \eqref{CVGIPCMNLESS1} that \begin{equation} \label{CVGFNLESS1} \lim_{n \rightarrow \infty} n^bf_n=2b_1\hspace{1cm}\text{a.s.} \end{equation} This means that $f_n$ converges to zero almost surely at rate $n^{-b}$, where $1/2<b<1$. Furthermore, the random variable $X$ has a moment of order $>2$. 
It implies that for some real number $p>2$, $$ \sup_{n \geq 0} \mathbb{E}[ | W_{n+1} |^p | \mathcal{F}_n] < \infty \hspace{1cm}\text{a.s.} $$ Consequently, we deduce from the QSL for martingales given in theorem 3 of \cite{bercu2004} that \begin{equation} \label{QSLCMNLESS1} \lim_{n \rightarrow \infty} \frac{1}{\log \langle \cM \rangle_n} \sum_{k=1}^n f_k \frac{\mathcal{M}_k^2}{\langle \mathcal{M} \rangle _k} = 1 \hspace{1cm}\text{a.s.} \end{equation} Therefore, we obtain from \eqref{CVGPNLESS1} and \eqref{CVGIPCMNLESS1} together with \eqref{CVGFNLESS1} and \eqref{QSLCMNLESS1} that \begin{equation} \label{QSLCMNLESS2} \lim_{n \rightarrow \infty} \frac{1}{n^{1-b}} \sum_{k=1}^n \frac{\mathcal{M}_k^2}{P_{k-1}^2 } = \Bigl(\frac{b_1}{2(1-b)} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation} Hereafter, we focus our attention on the proof of the LIL given by \eqref{LILMARTMLESS1}. Since $b>1/2$, we obtain from \eqref{CVGFNLESS1} that the explosion coefficient $f_n$ satisfies $$ \sum_{n=1}^\infty f_n^{p/2} < +\infty \hspace{1cm}\text{a.s.} $$ Therefore, we deduce from the LIL for martingales \cite{Stout1970} (see also Corollary 6.4.25 in \cite{Duflo1997}) that \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{1}{2 \langle \cM \rangle_n \log \log \langle \cM \rangle_n} \right)^{1/2} \!\!\!\mathcal{M}_n &=& - \liminf_{n \rightarrow \infty} \left(\frac{1}{2 \langle \cM \rangle_n \log \log \langle \cM \rangle_n} \right)^{1/2} \!\!\!\mathcal{M}_n \notag \\ &=& 1 \hspace{1cm}\text{a.s.} \label{LILCMNLESS1} \end{eqnarray} Hence, we find from \eqref{CVGPNLESS1}, \eqref{CVGIPCMNLESS1} and \eqref{LILCMNLESS1} that \begin{eqnarray*} \limsup_{n \rightarrow \infty} \left(\frac{n^b}{2 (1-b)\log n} \right)^{1/2} \!\!\!\Bigl(\frac{\mathcal{M}_n}{P_{n-1}}\Bigr) &=& - \liminf_{n \rightarrow \infty} \left(\frac{n^b}{ 2 (1-b) \log n} \right)^{1/2} \!\!\!\Bigl(\frac{\mathcal{M}_n}{P_{n-1}}\Bigr) \notag \\ &=& \left( \frac{b_1}{2} \right)^{1/2} \tau_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{eqnarray*} The proof for the martingale $(N_n)$ is left to the reader inasmuch as it follows exactly the same lines as those for the martingale $(\mathcal{M}_n)$. \end{proof} \noindent{\bf Proof of Theorem \ref{T-LILQSLESS1}.} We shall proceed as in the proof of Theorem \ref{T-LILQSEQUAL1}. We already saw from \eqref{QSLMARTMLESS1} that \begin{equation*} \lim_{n \rightarrow \infty} \frac{1}{n^{1-b}} \sum_{k=1}^n \Bigl( \frac{\mathcal{M}_k}{P_{k-1}} \Bigr)^2= \Bigl(\frac{b_1}{2(1-b)} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation*} Our goal is to prove that the sequence $(\Delta_n)$ given by \eqref{DIFDELTA} satisfies the QSL \begin{equation} \label{QSLDELTANLESS1} \lim_{n \rightarrow \infty} \frac{1}{n^{1-b}} \sum_{k=1}^n \Delta^2_k= \Bigl(\frac{b_1}{2(1-b)} \Bigr)\tau^2_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \end{equation} On the one hand, we have from \eqref{DEFMARTNEW} and \eqref{MAJLILStepa} that \begin{equation*} | \mathcal{H}_{n+1} | = O \left( \sum_{k=1}^n \frac{b_kP_k | \nu_{k+1}| \sqrt{\log k}}{k^{a/2}} \right) \hspace{1cm}\text{a.s.} \end{equation*} In addition, one can easily check from \eqref{DEFDELTAN} and \eqref{DEFNUNN} that $$ \lim_{n \rightarrow \infty} n^{1-a} \nu_{n+1}= \frac{(b-a)C_\alpha}{a_1f(\theta_{\alpha})}. 
$$ Hence, we obtain from convergence \eqref{CVGPNZETA} in Lemma \ref{L-CVGPN} together with a series-integral comparison as previously done in the proof of Theorem \ref{T-LILQSEQUAL1} that \begin{equation} \label{PRCVGHNLESS1} | \mathcal{H}_{n+1} | = O \left( \sum_{k=1}^n \frac{P_k \sqrt{\log k}}{k^{1+b-a/2}} \right)=O \left( \frac{P_n\sqrt{\log n}}{n^{1-a/2}} \right) \hspace{1cm}\text{a.s.} \end{equation} Consequently, we deduce from \eqref{PRCVGHNLESS1} that \begin{equation} \label{PRCVGHNLESS2} \sum_{n=1}^\infty \Bigl(\frac{\mathcal{H}_n}{P_{n-1}} \Bigr)^2 <+\infty \hspace{1cm}\text{a.s.} \end{equation} On the other hand, we already saw from \eqref{PRCVGRN1} that \begin{equation*} | \mathcal{R}_{n+1} | = O \left( \sum_{k=1}^n b_kP_k\bigl(\theta_k - \theta_{\alpha}\bigr)^2 \right) \hspace{1cm}\text{a.s.} \end{equation*} which implies that \begin{equation} \label{PRCVGRNLESS1} | \mathcal{R}_{n+1} | = O \left( \sum_{k=1}^n \frac{P_k\log k }{k^{a+b}} \right) = O \left( \frac{P_n \log n }{n^{a}} \right) \hspace{1cm}\text{a.s.} \end{equation} Then, as $a>1/2$, we find from \eqref{PRCVGRNLESS1} that \begin{equation} \label{PRCVGRNLESS2} \sum_{n=1}^\infty \Bigl(\frac{\mathcal{R}_n}{P_{n-1}} \Bigr)^2 <+\infty \hspace{1cm}\text{a.s.} \end{equation} Therefore, we obtain from \eqref{PRCVGHNLESS2} and \eqref{PRCVGRNLESS2} that the QSL \eqref{QSLDELTANLESS1} holds true. In order to prove \eqref{QSL3}, it only remains to show via \eqref{PRQSLFIN} that \begin{equation} \label{PRQSLFINLESS1} \lim_{n \rightarrow \infty} \frac{1}{n^{1-b}} \sum_{k=1}^n \delta_k^2 \bigl( \theta_k - \theta_{\alpha} \bigr)^2=0 \hspace{1cm}\text{a.s.} \end{equation} We recall from \eqref{ABELDELTAN} that \begin{equation*} \sum_{k=1}^{n} \delta_k^2 \bigl( \theta_k - \theta_{\alpha} \bigr)^2=\delta_n^2 \Lambda_n + \sum_{k=1}^{n-1}(\delta_k^2 - \delta_{k+1}^2 )\Lambda_k. \end{equation*} We obtain from \eqref{DEFDELTAN} that \begin{equation} \label{CVGdeltaLESS} \lim_{n \rightarrow \infty} n^{b-a} \delta_n= \frac{b_1 C_\alpha}{a_1 f(\theta_{\alpha})}. \end{equation} Then, it follows from \eqref{PRQSLFIN2} and \eqref{CVGdeltaLESS} that \begin{equation} \label{PRQSLFINLESS2} \lim_{n \rightarrow \infty} \frac{1}{n^{1+a-2b}} \delta_n^2 \Lambda_n= \frac{b_1^2 C_\alpha^2\alpha(1-\alpha)}{2 a_1(1-a)f^3(\theta_{\alpha})} \hspace{1cm}\text{a.s.} \end{equation} Consequently, as $a<b$, we deduce from \eqref{PRQSLFINLESS2} that \begin{equation} \label{PRQSLFINLESS3} \lim_{n \rightarrow \infty} \frac{1}{n^{1-b}} \delta_n^2 \Lambda_n=0 \hspace{1cm}\text{a.s.} \end{equation} By the same token, we also find from \eqref{PRQSLFIN2} and \eqref{CVGdeltaLESS} that \begin{equation} \label{PRQSLFINLESS4} \lim_{n \rightarrow \infty} \frac{1}{n^{1-b}} \sum_{k=1}^{n-1}(\delta_k^2 - \delta_{k+1}^2 )\Lambda_k=0 \hspace{1cm}\text{a.s.} \end{equation} Then, we clearly obtain from \eqref{PRQSLFINLESS3} and \eqref{PRQSLFINLESS4} that convergence \eqref{PRQSLFINLESS1} holds true. As before, the proof of the QSL for the convexified estimator $(\widetilde{\vartheta}_{n})$ is much easier and is left to the reader. We now focus our attention on the LIL for our estimates $(\widehat{\vartheta}_{n})$ and $(\widetilde{\vartheta}_{n})$. We start by proving the LIL for the sequence $(\Delta_n)$ given by \eqref{DIFDELTA}. 
We immediately obtain from \eqref{DECALGON3} that \begin{equation} \label{PRLILHLESS1} \left(\frac{n^b}{2(1-b) \log n} \right)^{1/2}\!\!\Delta_n= \left(\frac{n^b}{2(1-b) \log n} \right)^{1/2}\Bigl(\frac{\mathcal{M}_n +\mathcal{H}_n+\mathcal{R}_n }{P_{n-1}}\Bigr). \end{equation} We already saw in Lemma \ref{L-MARTLESS1} that the martingale $(\mathcal{M}_n)$ satisfies the LIL given by \eqref{LILMARTMLESS1}. In addition, as $b<1<2a$, we get from \eqref{PRCVGHNLESS1} and \eqref{PRCVGRNLESS1} that $$ \lim_{n \rightarrow \infty} n^b \Bigl( \frac{\mathcal{H}_n }{P_{n-1}}\Bigr)^2=0 \hspace{1cm}\text{and}\hspace{1cm} \lim_{n \rightarrow \infty} n^b \Bigl( \frac{\mathcal{R}_n }{P_{n-1}}\Bigr)^2=0 \hspace{1cm}\text{a.s.} $$ which clearly ensures that \begin{equation*} \lim_{n \rightarrow \infty} \left(\frac{n^b}{2(1-b) \log n} \right)^{1/2} \Bigl( \frac{\mathcal{H}_n }{P_{n-1}}\Bigr) =0 \hspace{1cm}\text{a.s.} \end{equation*} and \begin{equation*} \lim_{n \rightarrow \infty} \left(\frac{n^b }{2(1-b) \log n} \right)^{1/2} \!\!\Bigl( \frac{\mathcal{R}_n }{P_{n-1}}\Bigr) =0 \hspace{1cm}\text{a.s.} \end{equation*} Consequently, we find from \eqref{LILMARTMLESS1} and \eqref{PRLILHLESS1} that $(\Delta_n)$ satisfies the LIL \begin{eqnarray} \limsup_{n \rightarrow \infty} \left(\frac{n^b}{2(1-b) \log n} \right)^{1/2} \!\!\Delta_n &=& - \liminf_{n \rightarrow \infty} \left(\frac{n^b}{2 (1-b) \log n} \right)^{1/2} \!\!\Delta_n \notag \\ &=& \left( \frac{b_1}{2} \right)^{1/2} \tau_{\alpha}(\theta_\alpha) \hspace{1cm}\text{a.s.} \label{LILDELTANLESS1} \end{eqnarray} Hereafter, we clearly have from \eqref{DIFDELTA} that \begin{equation} \label{PRLILHLESS2} \left(\frac{n^b}{2 (1-b) \log n} \right)^{1/2} \!\!\bigl( \widehat{\vartheta}_{n} - \vartheta_{\alpha} \bigr)=\left(\frac{n^b}{2 (1-b) \log n} \right)^{1/2} \!\!\bigl( \Delta_n + \delta_n \bigl( \theta_{n}- \theta_{\alpha} \bigr) \bigr). \end{equation} It follows from \eqref{LILRMStepa} and \eqref{CVGdeltaLESS} that $$ \left(\frac{n^b}{2 (1-b) \log n} \right) \!\!\!\ \bigl| \delta_n ( \theta_{n}- \theta_{\alpha}) \bigr|^2=O\left( \frac{1}{n^{b-a}} \right) \hspace{1cm}\text{a.s.} $$ Since $a<b$, it clearly implies that \begin{equation} \label{PRLILHLESS3} \lim_{n \rightarrow \infty} \left(\frac{n^b}{2 (1-b)\log n} \right)^{1/2} \!\!\delta_n \bigl( \theta_{n}- \theta_{\alpha} \bigr) =0 \hspace{1cm}\text{a.s.} \end{equation} Therefore, we obtain \eqref{LIL3} from \eqref{LILDELTANLESS1}, \eqref{PRLILHLESS2} and \eqref{PRLILHLESS3}. The proof of the LIL for the convexified estimator $(\widetilde{\vartheta}_{n})$ is straightforward and left to the reader, which completes the proof of Theorem \ref{T-LILQSLESS1}. \hfill $\videbox$\\ \vspace{-2ex} \section{Proofs of the asymptotic normality results} \label{S-PRAN} The proof of Theorem \ref{T-AN} relies on the central limit theorem for the two-time-scale stochastic algorithm given in Theorem 1 of Mokkadem and Pelletier \cite{MokkademPelletier2006}. It is a sophisticated application of this result for the standard estimator $(\widehat{\vartheta}_{n})$, while it is a direct application for the convexified estimator $(\widetilde{\vartheta}_{n})$. \ \vspace{1ex}\\ \noindent{\bf Proof of Theorem \ref{T-AN}.} We start with the proof for the standard estimator $\widehat{\vartheta}_{n}$. 
As previously done in Section \ref{S-PRASCVG}, our strategy is first to establish the joint asymptotic normality for the couple $(\theta_n,\Delta_n)$ where $\Delta_n$ is given by \eqref{DIFDELTA}, and then to deduce the joint asymptotic normality for the couple $(\theta_n,\widehat{\vartheta}_{n})$. We have from \eqref{TTSALGO1} together with \eqref{DECALGON2} that for all $n \geq 1$, \begin{equation} \label{PRAN1} \left \{ \begin{aligned} &\theta_{n+1} = \theta_{n}+a_n \mathcal{X}_{n+1}\vspace{1ex}\\ &\Delta_{n+1} =\Delta_n+b_n \mathcal{Y}_{n+1} \end{aligned} \right. \end{equation} where \begin{equation*} \left \{ \begin{aligned} &\mathcal{X}_{n+1} = f(\theta_{n},\Delta_n)+\psi_n^{(\theta)}+ \mathcal{V}_{n+1}\\ &\mathcal{Y}_{n+1} = g(\theta_{n},\Delta_n)+\psi_n^{(\Delta)}+ \mathcal{W}_{n+1}\\ \end{aligned} \right. \end{equation*} with $f(\theta, \Delta)=\alpha - F(\theta)$, $\psi_n^{(\theta)}=0$, $\mathcal{V}_{n+1}=F(\theta_n)-\mathrm{I}_{\{X_{n+1} \leq \theta_{n} \}}$ and $g(\theta, \Delta)=- \Delta$, $$ \psi_n^{(\Delta)}=R_\alpha(\theta_{n}) +\frac{a_n}{b_n}\delta_{n+1} G_\alpha(\theta_n) +\nu_{n+1} (\theta_{n} - \theta_{\alpha}), $$ $$ \mathcal{W}_{n+1}= \varepsilon_{n+1}+\frac{a_n}{b_n}\delta_{n+1}V_{n+1}. $$ By denoting $\Delta_\alpha=0$, we clearly have $f(\theta_{\alpha}, \Delta_\alpha)=0$ and $g(\theta_{\alpha}, \Delta_\alpha)=0$. To be more precise, \begin{equation*} \begin{pmatrix} f(\theta, \Delta) \\ g(\theta, \Delta) \end{pmatrix} = \begin{pmatrix} -f^\prime(\theta_{\alpha}) & 0 \\ 0 &-1 \end{pmatrix} \begin{pmatrix} \theta - \theta_{\alpha}\\ \Delta - \Delta_\alpha \end{pmatrix} + \begin{pmatrix} O\bigl( ||\theta - \theta_{\alpha}||^2 \bigr)\\ 0 \end{pmatrix} . \end{equation*} On the one hand, it follows from the conjunction of \eqref{TAYLORH}, \eqref{DEFREMAINDERS} and \eqref{DEFDELTAN} that $$ \psi_n^{(\Delta)}=r_n^{(\Delta)} + O\bigl( ||\theta_n - \theta_{\alpha}||^2 \bigr) $$ where $r_n^{(\Delta)}=\nu_{n+1} (\theta_{n} - \theta_{\alpha})$. On the other hand, we infer from \eqref{LILSUPRMStepa} and \eqref{DEFNUNN} that $$ \bigl| r_n^{(\Delta)} \bigr| = O\Bigl( \frac{\sqrt{n^a \log n}}{n}\Bigr)=o\bigl(\sqrt{b_n}\bigr) \hspace{1cm}\text{a.s.} $$ Furthermore, $\mathbb{E}[\mathcal{V}_{n+1} | \mathcal{F}_n]=0$, $\mathbb{E}[\mathcal{W}_{n+1} | \mathcal{F}_n]=0$, and we already saw in Sections \ref{S-MA} and \ref{S-PRASCVG} that $\mathbb{E}[\mathcal{V}_{n+1}^2 | \mathcal{F}_n]=F(\theta_n)(1-F(\theta_n))$ and $\mathbb{E}[\mathcal{W}_{n+1}^2 | \mathcal{F}_n]=\tau_\alpha^2(\theta_{n})$. One can also check that $$ \mathbb{E}[\mathcal{V}_{n+1} \mathcal{W}_{n+1} | \mathcal{F}_n]=F(\theta_n)\Bigl(H_\alpha(\theta_n) - \frac{a_n}{b_n}\delta_{n+1}\bigl(1-F(\theta_n)\bigr)\Bigr). 
$$ It clearly implies that \begin{equation*} \lim_{n \rightarrow \infty} \begin{pmatrix} \mathbb{E}[\mathcal{V}_{n+1}^2 | \mathcal{F}_n] & \mathbb{E}[\mathcal{V}_{n+1} \mathcal{W}_{n+1} | \mathcal{F}_n] \\ \mathbb{E}[\mathcal{V}_{n+1} \mathcal{W}_{n+1} | \mathcal{F}_n] & \mathbb{E}[\mathcal{W}_{n+1}^2 | \mathcal{F}_n] \end{pmatrix} = \begin{pmatrix} \alpha(1-\alpha) & \alpha ( \vartheta_{\alpha} - \theta_{\alpha} ) \\ \alpha ( \vartheta_{\alpha} - \theta_{\alpha} ) & \tau_\alpha^2(\theta_{\alpha}) \end{pmatrix} \hspace{0.5cm}\text{a.s.} \end{equation*} Consequently, all the conditions of Theorem 1 in \cite{MokkademPelletier2006} are satisfied with \begin{equation*} \Sigma_{\theta_{\alpha}}= \frac{\alpha(1-\alpha)}{2f(\theta_{\alpha})} \end{equation*} and \begin{equation*} \Sigma_{\vartheta_{\alpha}}= \left \{ \begin{array}[c]{ccc} {\displaystyle \frac{b_1 \tau^2_{\alpha}(\theta_\alpha)}{2b_1 - 1}} & \text{if} & b=1, \vspace{1ex} \\ {\displaystyle \frac{ \tau^2_{\alpha}(\theta_\alpha)}{2}} & \text{if} & b<1. \end{array} \right. \end{equation*} Therefore, as $\Delta_\alpha=0$, we obtain from \cite{MokkademPelletier2006} the joint asymptotic normality \begin{equation} \label{PRAN2} \begin{pmatrix} \sqrt{n^a} \bigl(\theta_n - \theta_{\alpha}\bigr) \vspace{1ex} \\ \sqrt{n^b} \Delta_n \\ \end{pmatrix} \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\left(0, \begin{pmatrix} \Gamma_{\theta_{\alpha}} & 0 \\ 0 & \Gamma_{\vartheta_{\alpha}} \\ \end{pmatrix}\right) \end{equation} where $\Gamma_{\theta_{\alpha}}=a_1 \Sigma_{\theta_{\alpha}}$ and $\Gamma_{\vartheta_{\alpha}}=b_1 \Sigma_{\vartheta_{\alpha}}$. Hereafter, in order to prove the joint asymptotic normality for the couple $(\theta_n,\widehat{\vartheta}_{n})$, it is only necessary to show from the very definition of $\Delta_n$ given in \eqref{DIFDELTA} that \begin{equation} \label{PRAN3} \lim_{n \rightarrow \infty} \sqrt{n^b} \delta_n \bigl(\theta_{n} -\theta_{\alpha}\bigr)=0 \hspace{1cm}\text{a.s.} \end{equation} We already saw from \eqref{CVGdelta} and \eqref{CVGdeltaLESS} that \begin{equation} \label{PRAN4} \lim_{n \rightarrow \infty} n^{b-a} \delta_n= \frac{b_1 C_\alpha}{a_1 f(\theta_{\alpha})}. \end{equation} Hence, we deduce from \eqref{LILSUPRMStepa} and \eqref{PRAN4} that \begin{equation*} \sqrt{n^b} \bigl| \delta_n (\theta_{n} -\theta_{\alpha})\bigr|=O\Bigl( \frac{\sqrt{n^a \log n}}{\sqrt{n^b}}\Bigr) \hspace{1cm}\text{a.s.} \end{equation*} which ensures that \eqref{PRAN3} holds true. Consequently, \eqref{AN1} clearly follows from \eqref{PRAN2} and \eqref{PRAN3}. The proof for the convexified estimator $(\widetilde{\vartheta}_{n})$ is much easier to handle. We have from \eqref{TTSALGO2} that for all $n \geq 1$, \begin{equation} \label{PRAN5} \left \{ \begin{aligned} &\theta_{n+1} = \theta_{n}+a_n \mathcal{X}_{n+1}\vspace{1ex}\\ &\widetilde{\vartheta}_{n+1} =\widetilde{\vartheta}_{n}+b_n \mathcal{Y}_{n+1} \end{aligned} \right. \end{equation} where \begin{equation*} \left \{ \begin{aligned} &\mathcal{X}_{n+1} = f(\theta_{n},\widetilde{\vartheta}_{n})+\psi_n^{(\theta)}+ \mathcal{V}_{n+1}\\ &\mathcal{Y}_{n+1} = g(\theta_{n},\widetilde{\vartheta}_{n})+\psi_n^{(\vartheta)}+ \mathcal{W}_{n+1}\\ \end{aligned} \right. 
\end{equation*} with $f(\theta, \vartheta)=\alpha - F(\theta)$, $\psi_n^{(\theta)}=0$, $\mathcal{V}_{n+1}=F(\theta_n)-\mathrm{I}_{\{X_{n+1} \leq \theta_{n} \}}$ and $g(\theta, \vartheta)=\vartheta_{\alpha} - \vartheta$, $\psi_n^{(\vartheta)}=L_\alpha(\theta_n) - \vartheta_{\alpha}$, $\mathcal{W}_{n+1}=Z_{n+1}-L_\alpha(\theta_n)$, where we recall that $\mathbb{E}[Z_{n+1} | \mathcal{F}_n]=L_\alpha(\theta_n)$ with $L_\alpha(\theta)$ given by \eqref{DEFHL}. We clearly have $f(\theta_{\alpha}, \vartheta_{\alpha})=0$ and $g(\theta_{\alpha}, \vartheta_{\alpha})=0$. To be more precise, \begin{equation*} \begin{pmatrix} f(\theta, \vartheta) \\ g(\theta, \vartheta) \end{pmatrix} = \begin{pmatrix} -f^\prime(\theta_{\alpha}) & 0 \\ 0 &-1 \end{pmatrix} \begin{pmatrix} \theta - \theta_{\alpha}\\ \vartheta - \vartheta_{\alpha} \end{pmatrix} + \begin{pmatrix} O\bigl( ||\theta - \theta_{\alpha}||^2 \bigr)\\ 0 \end{pmatrix} . \end{equation*} In addition, we deduce from \eqref{TAYLORL} that $ \psi_n^{(\vartheta)} = L_\alpha(\theta_n) - L_\alpha(\theta_{\alpha}) = O\bigl( ||\theta_n - \theta_{\alpha}||^2 \bigr)$. Furthermore, $\mathbb{E}[\mathcal{V}_{n+1} | \mathcal{F}_n]=0$, $\mathbb{E}[\mathcal{W}_{n+1} | \mathcal{F}_n]=0$, and we already saw in Sections \ref{S-MA} and \ref{S-PRASCVG} that $\mathbb{E}[\mathcal{V}_{n+1}^2 | \mathcal{F}_n]=F(\theta_n)(1-F(\theta_n))$ and $\mathbb{E}[\mathcal{W}_{n+1}^2 | \mathcal{F}_n]=\tau_\alpha^2(\theta_{n})$. One can also verify that $\mathbb{E}[\mathcal{V}_{n+1} \mathcal{W}_{n+1} | \mathcal{F}_n]=F(\theta_n)\bigl(L_\alpha(\theta_n) - \theta_n\bigr)$. It clearly implies that \begin{equation*} \lim_{n \rightarrow \infty} \begin{pmatrix} \mathbb{E}[\mathcal{V}_{n+1}^2 | \mathcal{F}_n] & \mathbb{E}[\mathcal{V}_{n+1} \mathcal{W}_{n+1} | \mathcal{F}_n] \\ \mathbb{E}[\mathcal{V}_{n+1} \mathcal{W}_{n+1} | \mathcal{F}_n] & \mathbb{E}[\mathcal{W}_{n+1}^2 | \mathcal{F}_n] \end{pmatrix} = \begin{pmatrix} \alpha(1-\alpha) & \alpha ( \vartheta_{\alpha} - \theta_{\alpha} ) \\ \alpha ( \vartheta_{\alpha} - \theta_{\alpha} ) & \tau_\alpha^2(\theta_{\alpha}) \end{pmatrix} \hspace{0.5cm}\text{a.s.} \end{equation*} Consequently, our two-time-scale stochastic algorithm satisfies all the conditions of Theorem 1 in \cite{MokkademPelletier2006} where the asymptotic variances $\Sigma_{\theta_{\alpha}}$ and $\Sigma_{\vartheta_{\alpha}}$ have been previously defined. Finally, we obtain the joint asymptotic normality \eqref{AN1} where $\Gamma_{\theta_{\alpha}}=a_1 \Sigma_{\theta_{\alpha}}$ and $\Gamma_{\vartheta_{\alpha}}=b_1 \Sigma_{\vartheta_{\alpha}}$, which completes the proof of Theorem \ref{T-AN}. \hfill $\videbox$\\ \vspace{-2ex} \section{Numerical experiments on real data} \label{S-NE} We briefly illustrate the asymptotic behavior of our two stochastic algorithms $(\widehat{\vartheta}_{n})$ and $(\widetilde{\vartheta}_{n})$ with different tunings of the parameters. Since several parameters can vary, we have chosen typical setups, even though our presentation is not exhaustive. In our synthetic benchmark, we shall consider Exponential and Gamma distributions, for which explicit formulas for the pair $(\theta_{\alpha},\vartheta_{\alpha})$ are available. \ \vspace{1ex} \\ First of all, we wish to point out that our recursive procedure is very fast for both algorithms since a set of $1000$ observations is handled in less than 0.1 second on a standard laptop. 
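To make the implementation concrete, here is a minimal Python sketch of the two recursions. It assumes updates of the form $\theta_{n+1}=\theta_n+a_n\bigl(\alpha-\mathrm{I}_{\{X_{n+1}\leq \theta_n\}}\bigr)$ and $\widetilde{\vartheta}_{n+1}=\widetilde{\vartheta}_n+b_n\bigl(Z_{n+1}-\widetilde{\vartheta}_n\bigr)$ with the Rockafellar-Uryasev integrand $Z_{n+1}=\theta_n+(X_{n+1}-\theta_n)^{+}/(1-\alpha)$; the zero initializations and the default step constants are illustrative placeholders, not recommended tunings.
\begin{verbatim}
import numpy as np

def recursive_superquantile(x, alpha, a1=1.0, a=2/3, b1=3/4, b=1.0):
    # theta tracks the alpha-quantile (fast Robbins-Monro time scale),
    # vartheta tracks the superquantile via the convexified update.
    theta, vartheta = 0.0, 0.0            # illustrative initializations
    for n, xn in enumerate(x, start=1):
        an = a1 / n ** a                  # fast step size a_n = a_1/n^a
        bn = b1 / n ** b                  # slow step size b_n = b_1/n^b
        theta += an * (alpha - (xn <= theta))
        z = theta + max(xn - theta, 0.0) / (1.0 - alpha)
        vartheta += bn * (z - vartheta)
    return theta, vartheta

# Sanity check on 10^5 Exponential(1) draws with alpha = 0.5:
# theta_0.5 = log(2) ~ 0.693 and vartheta_0.5 = 1 + log(2) ~ 1.693.
rng = np.random.default_rng(42)
print(recursive_superquantile(rng.exponential(size=100_000), alpha=0.5))
\end{verbatim}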
Next, Figure \ref{fig:asconvergence} illustrates the good almost sure behavior of the standard and convexified algorithms for both Exponential and Gamma distributions. Here, we consider the $\mathcal{E}(1/10)$ and $\mathcal{G}(4,3)$ distributions. \begin{figure}[h] \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=8cm]{Ascvg-Expo.jpg}} \end{minipage} \hspace{2cm} \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=8cm]{Ascvg-Gamma.jpg}} \end{minipage} \vspace{-1em} \caption{Almost sure convergence of our algorithms for $\alpha=0.5$ and $b_n=1/n$. \label{fig:asconvergence}} \end{figure} Second, one can verify and compare the limiting variance of the asymptotic normality involved in Theorem \ref{T-AN} for several values of $a$ and $b$. Figure \ref{fig:CLT} shows the histograms of the rescaled algorithms for several values of $a$ and $b$. One can check that the convexified algorithm outperforms the standard algorithm as soon as $b<a$. \begin{figure}[h] \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=6.5cm]{CLT-Fig21.jpg}} \end{minipage}\hfill \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=8cm]{CLT-Fig22.jpg}} \end{minipage}\\ \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=8cm]{CLT-Fig23.jpg}} \end{minipage}\\ \vspace{-1em} \caption{ Distribution of the rescaled algorithms in different situations: top-left ($a=2/3<b=4/5<1$), top-right ($b=2/3<a=4/5<1$), bottom ($a=2/3<b=1$). One can verify the asymptotic normality with larger variance for the standard rescaled algorithm (top-right). \label{fig:CLT}} \end{figure} One can also use our method to estimate online $95\%$ confidence intervals for the superquantile $\vartheta_{\alpha}$ as explained in Remark 3.4. This is illustrated in Figure \ref{fig:IC-online} with the Exponential and Gamma distributions with $a=2/3$ and $b=1$. \begin{figure}[h] \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=7cm]{IC-CLT-expo.jpg}} \end{minipage} \hspace{2cm} \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=7cm]{IC-CLT-gamma.jpg}} \end{minipage} \vspace{-1em} \caption{Online confidence interval with the convexified algorithm for the Exponential and Gamma distributions.} \label{fig:IC-online} \end{figure} \subsection{Real data} We finally illustrate, as a proof of concept, the use of our two algorithms on real financial data freely available in the R package \textrm{tseries} (EuStockMarkets dataset). More recent data may also be downloaded from the Yahoo! Finance website. We consider the four time series of the financial stock-markets \textrm{DAX, CAC40, SMI, FTSE} between 2014 and 2018 and compute the CVaR of the weekly log-returns, which are common indicators in the analysis of financial markets. It is commonly accepted as a reasonable approximation that, in non-exceptional situations, the log-returns are close to an independent and identically distributed set of observations. Being of major interest in finance, we compute the negative CVaR at the $10\%$ level, together with $95\%$ confidence intervals. Our results are presented in Figure \ref{fig:finance} for the convexified algorithm tuned with the parameters $a=2/3$, $a_1=5$ and $b=1$, $b_1=3/4$. 
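As an illustration of the preprocessing, the following short Python sketch reproduces this pipeline; the CSV file name and column layout are hypothetical stand-ins for an export of the weekly closing prices, and \texttt{recursive\_superquantile} refers to the sketch given above.
\begin{verbatim}
import numpy as np

# Hypothetical export of weekly closing prices (date, close); the file
# name and column layout are placeholders, not the tseries format.
closes = np.loadtxt("cac40_weekly.csv", delimiter=",",
                    skiprows=1, usecols=1)
log_returns = np.diff(np.log(closes))     # weekly log-returns

# Negative CVaR at the 10% level: run the recursion on the losses -X,
# so the upper 10% tail of the losses is the lower tail of the returns.
theta, vartheta = recursive_superquantile(-log_returns, alpha=0.9)
print("VaR(10%) =", -theta, "  negative CVaR(10%) =", -vartheta)
\end{verbatim}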
\begin{figure}[h] \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[height=3cm,width=10cm]{CAC40.jpg}}\centerline{\includegraphics[height=3cm,width=10cm]{DAX.jpg}} \end{minipage}\hfill \\ \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=6.5cm]{IC-DAX.jpg}} \end{minipage} \begin{minipage}[b]{0.4\linewidth} \centerline{\includegraphics[width=6.5cm]{IC-CAC40.jpg}} \end{minipage}\\ \vspace{-1em} \caption{Convexified algorithm on Yahoo! Finance datasets.\label{fig:finance}} \end{figure} \providecommand{\AC}{A.-C}\providecommand{\CA}{C.-A}\providecommand{\CH}{C.-H}\providecommand{\CJ}{C.-J}\providecommand{\JC}{J.-C}\providecommand{\JP}{J.-P}\providecommand{\JB}{J.-B}\providecommand{\JF}{J.-F}\providecommand{\JJ}{J.-J}\providecommand{\JM}{J.-M}\providecommand{\KW}{K.-W}\providecommand{\PL}{P.-L}\providecommand{\RE}{R.-E}\providecommand{\SJ}{S.-J}\providecommand{\XR}{X.-R}\providecommand{\WX}{W.-X}
\section{Introduction} While many details of planet formation are not fully understood \citep{johansen, raymond14, chabrier, helled}, significant debris is expected to be produced by the planet-building process. These leftovers, such as asteroids and comets, dynamically and collisionally evolve over a planetary system's lifetime, creating a steady source of dust and small grains, which would otherwise be depleted on short timescales \citep{matthews}. Thus, the presence of circumstellar debris around a star is taken as evidence that planet building was at least partially successful in that system. When debris structures are resolved, the morphologies can be used to place constraints on the architecture of putative planets \citep{kuchner,quillen, moro, stark} and to potentially understand the dynamical history of a system \citep{raymond}. Multi-frequency observations can further be used to constrain dust properties \citep{wyatt}, giving a way to explore the debris itself. Among known debris disks, a limited number contain gas, as detected in radio molecular line emission. This includes $\beta$ Pic \citep{zuckerman95, dent}, HD 131835 \citep{moor15}, HD 21997 \citep{moor11, moor13}, and 49 Cet \citep{hughesa} with estimated ages of 12 Myr, 16 Myr, 30 Myr, and 40 Myr, respectively. These systems are older than the typical lifetimes of gaseous disks, as inferred from IR excess and accretion \citep[e.g.,][]{mamajek}. Furthermore, if the gas has a primordial origin (i.e., from the formation of the disk itself), the gas abundances need to be reconciled with photoevaporation rates \citep{alexander} and CO photodissociation timescales \citep{vandishoeck_black_1988, visser}. Photoevaporation rates may not be constant throughout the lifetime of the disk, and the radial distribution of gas is influenced by both UV and X-ray sources \citep[e.g.,][]{gorti}. Instead of being primordial, the gas could be second-generation, produced by the early evolution of a comet reservoir \citep{dent} through impact vaporization or sublimation of impact-generated particulates. It nonetheless remains unclear whether there is sufficient mass in comets to explain the amount of gas detected in these systems \citep{matthews, moor13}. Regardless of the reason, the existence of this gas has implications for planet building. For example, while the measured gas masses are too small to contribute significantly to gas giant planet formation, the gas could still contribute to planetary atmospheres and potentially, for high enough gas masses, continue to affect small-grain dust. If the gas does have a debris origin, then the relative debris and gas morphologies, along with dynamical models of the system, can be used to probe the clearing stages of planet formation and to constrain the disk mass during that evolutionary stage. As such, debris+gas systems can potentially offer significant constraints on planet formation theory \citep{kospal, wyatt_stages}. To this end, HD141569 is of particular interest. HD141569 is a B9.5 Ve star at a distance\footnote{\citet{hipparcos} find a distance of $99\pm8$ pc, whereas the re-analysis of the {\it Hipparcos} data yields a distance of $116\pm8$ pc \citep{van}. Throughout the literature, both distances are used for HD141569. In this manuscript, when reporting linear sizes from other work, we simply use their reported values. For the stellar, dust, and gas masses that we derive here, we will discuss how the results are expected to scale with distance.} of $116\pm8$ pc \citep{van}. 
At an age of about 5 Myr, it is surrounded by a complex dust and gas disk \citep{vandenancker, weinberger, fisher}. At distances $>100$ au from the star, large-scale spiral structure has been detected in optical scattered light, revealing at least two well-defined ring/spiral-like structures \citep{weinberger, clampin}. One spiral is between $\sim 175$ and 210 au, and the other between $\sim 300$ and 400 au. The rings/spirals are bright, with an optical depth $\sim 0.01$ in the outer arm \citep{clampin} and a scattered light flux density of $4.5\pm0.5 ~\rm mJy$ at $1.6 ~\mu$m \citep{mouillet, augereau}. In addition to having a large extended disk, HD141569 also hosts an inner dust system. This disk was first detected by excess emission in the mid-infrared using IRAS \citep{walker, andrillat}. Observations at 12, 25, 60, and 100 $\mu$m wavelengths \citep{walker} led to a calculated disk radius of 47-63 au, based on modeling \citep[see also][]{fisher, marsh}. \citet{thi} used archival VLT data at 8.6 $\mu$m to resolve the inner system out to $\sim50$ au. SED modeling suggests that the inner edge of small grains must be at about 10 au with a likely peak at 15 au \citep{malfait, maaskant}. Select previous continuum observations are summarized in Table 1. If the dust's origin is debris, HD141569 may be viewed as the youngest of the gas-rich debris systems. By ``debris'', we mean that the majority of the (sub)millimeter emission from solids is associated with grains that have already been incorporated into a parent body and re-released into the nebula. If the solids have not already been processed into parent bodies, then they reflect the initial growth stages of grains in planet-forming disks. \begin{table}[H] \begin{center} \caption{Summary of select previous HD 141569 debris disk observations. Uncertainties provided when available. References listed are: (1) \citet{fisher}; (2) \citet{walker}; (3) \citet{marsh}; (4) \citet{mouillet}; (5) \citet{augereau}; (6) \citet{nilsson}; (7) \citet{sylvester01}} \begin{tabular}{ l|l|l|l|l } \hline\hline \textbf{Features} & \textbf{Wavelength [$\mu$m]} & \textbf{Flux Density [Jy]} & \textbf{Instrument} & \textbf{Ref.} \\ \hline Continuum & $10.8$ & $0.318 \pm 0.016$ & Keck OSCIR & (1) \\ Continuum & $18.2$ & $0.646 \pm 0.035$ & Keck OSCIR & (1) \\ Continuum & $12, 25, 60, 100$ & $0.66, 1.99, 5.37, 3.34$ & IRAS & (2) \\ Continuum & $12.5, 17.9, 20.8$ & $0.333, 0.936, 1.19$ & Keck MIRLIN & (3)\\ & & $\pm 0.022, \pm 0.094, \pm 0.16$ & \\ Spiral Structure & $1.6$ & $0.0045 \pm 0.0005$ & HST & (4,5) \\ Total System & $870$ & $0.0126 \pm 0.0046$ & APEX & (6) \\ Total System & $1350$ & $0.0054 \pm 0.001$ & JCMT SCUBA & (7) \label{past} \end{tabular} \end{center} \end{table} The total gas mass has been constrained to be roughly between 13 and 200 M$_{\oplus}$ \citep{zuckerman95, thi, flaherty}, depending on assumed abundance ratios and model fitting. Most of this mass is likely located in the outer system, where CO kinematics suggest that the gas is non-uniformly distributed in radius. Tracers of hot gas such as ro-vibrational CO lines in the near-infrared \citep{brittain02, goto} show that there is a region of tenuous CO gas distributed between 10 and at least 50 au, seemingly commensurate with the inner dust system. HD141569 may be in a stage where the outer gas regions have, at least in part, a primordial component, but the inner region associated with millimeter grains may arise from the collisional evolution of parent bodies. 
We must also entertain whether the outer gaseous disk is dominated by second-generation gas, making the entire system an early-stage debris disk. In this paper we present ALMA band 7 observations of the inner dust and outer gas systems. Section 2 is an overview of the observations and data reduction. The $870 \mu m$ continuum and $^{12}$CO(J = 3-2) (hereafter CO(3-2)) spectral imaging and analysis of the gas disk are given in Section 3. We describe mass calculations and discuss interpretations in Section 4. Section 5 summarizes the results. \section{Observations} The data were acquired on 21 May 2014 as part of the ALMA cycle 1 campaign (project ID 2012.1.00698.S). Observations were made in two execution blocks (EBs), but one EB could not be calibrated due to phase amplitude and water vapor radiometer (WVR) problems. The total integration time for the successful EB was 1.43 hr (0.79 hr on target). A compact configuration was used with 32 antennas; the longest baseline was 650.3 m. Observations were centered on HD141569 using J2000 coordinates RA = 15 hr 49 min 57.73 sec and $\delta = -3^{\circ} 55' 16.62''$. To acquire high S/N data in both continuum and CO(3-2) efficiently, observations were taken in band 7 (at $\sim345$ GHz) with the correlator setup using the Frequency Division Mode (FDM) and dual polarization. Four different spectral windows were used with 1875 MHz bandpasses at rest frequency centers of 335, 337, 345, and 347 GHz. These locations were chosen to maximize continuum sensitivity while also overlapping the CO(3-2) transition. The correlator in FDM gives 3840 channels of width 488 kHz, which corresponds to a velocity resolution of $0.85$ km s$^{-1}$. Titan and quasar J1550+0527 were used for absolute flux and bandpass calibration, respectively. Atmospheric variations at each antenna were monitored continuously using the WVRs. The estimated WVR thermal contribution to path fluctuations is $5.8$ $\mu$m per antenna. Data were reduced using the Common Astronomy Software Applications (CASA) package \citep{casa_reference}. Antenna 14 was flagged during quality assurance (QA), leaving 31 antennas for the final data product. In addition, spectral windows 1 and 3 each exhibited 120 bad channels (1/32 of the bandwidth), which were also flagged. Antenna 14 and the flagged channels were removed from the data prior to reduction and subsequent analyses using the task \textit{split}. The data reduction in CASA included WVR calibration; system temperature corrections; and bandpass, flux, and phase calibrations with Titan and quasar J1550+0527. \section{Results} Table \ref{data} summarizes observed system properties for both the dust and gas. The continuum flux density is determined by fitting a disk model to visibilities (see Sec.\,3.1), while the gas flux density is taken from integrating within the $3\sigma$ contours of the zeroth moment maps (see Sec.\,3.2). The peak intensity and angular size are taken from the CLEANed images, assuming a distance of $116~\rm pc$ for linear scales. The uncertainties for the flux densities and the line fluxes include the $\sigma_{\rm{RMS}}$ of the observations and an absolute flux calibration uncertainty of $\sim10\%$ added in quadrature. The uncertainties in the intensities only include the $\sigma_{\rm{RMS}}$. \begin{table}[H] \caption{Summary of observed values for both gas and dust. The flux densities are determined by fitting a disk model to the visibilities (see Sec.\,3.1). 
The peak intensity and angular size are derived from the CLEANed images. Linear sizes assume a distance of $116$ pc and are measured across the semimajor axes of the continuum and gas. The uncertainties for the flux densities and the line fluxes include the $\sigma_{\rm{RMS}}$ of the observations and an absolute flux calibration uncertainty of $\sim10\%$ added in quadrature. The uncertainties in the intensities only include the $\sigma_{\rm{RMS}}$.} \centering \begin{tabular}{c | c | c} \hline\hline Parameter & Continuum [Debris] & Gas [CO 3-2] \\ \hline Flux Density & $3.8 \pm 0.4 $ mJy & $15.7 \pm 1.6 $ Jy km s$^{-1}$ \\ Peak Intensity & $1.74 \pm 0.24 $ mJy beam$^{-1}$ & $0.90 \pm 0.16 $ Jy beam$^{-1}$ \\ Angular Radius & $0."49$ ($\sim 56$ au) & $1.8"$ ($\sim 210$ au) \\ $\sigma_{\rm{RMS}}$ & $0.070$ mJy beam$^{-1}$ & $0.028$ Jy beam$^{-1}$ \\ Synthesized Beam Area & $0.163$ arcsec$^{2}$ & $0.121$ arcsec$^{2}$ \\ Beam major axis FWHM & $0."42$ & $0."34$ \\ Beam minor axis FWHM & $0."34$ & $0."31$ \\ Beam Position Angle (PA) & $-61.1^{\circ}$& $-77.1^{\circ}$ \label{data} \end{tabular} \end{table} \subsection{Continuum} The dust emission is clearly resolved by the ALMA beam. The continuum (with the CO channels removed) is deconvolved and imaged using CASA's CLEAN algorithm. The average wavelength across the frequency range is $870 \mu m$. A threshold of $\frac{1}{2} \times \sigma_{RMS}$ and a natural weighting are used to produce the final cleaned product in Fig. \ref{clean} (with contours corresponding to $3, 6, 12 $ and $ 21 \times \sigma_{RMS}$). The inner disk around HD 141569 is imaged out to 56 au (assuming a distance of 116 pc). The longest baseline is unable to resolve a central clearing of $< 15\rm~au$, leading to a central peak near the pointing center (the star and inner disk). The peak intensity in the cleaned data is $1.74 \rm~ mJy~ beam^{-1}$, corresponding to an S/N of $\sim 25$. At $870 \mu m$ the thermal emission from the host star's photosphere contributes $<1\%$ to the peak flux per beam, assuming a blackbody with T$_{\rm{Eff}} = 10,500$ K, a radius of $1.7 ~ \rm R_{\odot}$, and a distance of $116$ pc. The star's flux is thus negligible, as long as coronal and chromospheric effects can be ignored. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{cont0527.png} \caption{CLEANed $870~ \mu m$ continuum image of HD 141569. The contours represent $3, 6, 12 $ and $ 21 \times \sigma_{RMS}$ noise ($\sigma_{RMS} = 0.070~\rm mJy~beam^{-1}$). The dashed contour represents $-1\sigma$. The solid ellipse in the bottom left represents the beam size. A $50$ au scale (assuming a system distance of $116$ pc) is given in the bottom right. The peak intensity is $1.74 \pm 0.24 $ mJy beam$^{-1}$. Coordinates are given as offset from the phase center. North is up and East is to the left. \label{clean}} \end{figure} The dust distribution is constrained using CASA's {\it uvmodelfit}, which fits single component models directly to the visibility data and selects the best fit through $\chi^{2}$ minimization. We run this task to fit a uniform disk model to the continuum data (the CO channels are {\it split} out) and list the best-fit model in Table \ref{UV_tab}. Disks with inclinations near $i\sim 55^\circ$ are favored with a major axis of about 0.''85, corresponding to $\sim 85$ au at a distance of $116~\rm pc$. The preferred model has a total continuum flux density of $3.78 \pm 0.23 ~\rm mJy$. 
This is within $15\%$ of the flux density found by summing the total flux from the cleaned image down to the $3\sigma$ contour. The uncertainty in the flux is dominated by the uncertainty in the absolute flux scale, which is taken to be $10\%$. This sets our flux estimate of the inner dust disk to be $3.8 \pm 0.4 ~\rm mJy$. \begin{table}[H] \caption{Summary of CASA's {\it uvmodelfit} results for the debris disk. The data were fit by comparing a simple, uniform disk model to the data visibilities. The fitting uncertainties for parameters other than flux are not included here, but are addressed for the gaseous disk in Section 3.3. } \centering \begin{tabular}{c | c } \hline\hline Parameter & Continuum [Debris] \\ \hline Flux Density & $3.78 \pm 0.23 ~\rm mJy$ \\ X Offset & $-0."032$ \\ Y Offset & $-0."023$ \\ Major Axis & $0."85$ \\ Axis Ratio (inclination) & $0.58$ [$55^{\circ}$] \\ Position Angle & $-8.8^{\circ}$ \label{UV_tab} \end{tabular} \end{table} \subsection{Gas Disk} In addition to the continuum, CO(3-2) emission is kinematically and spatially resolved using the FDM capabilities of the ALMA correlators, with a spectral resolution of $0.85~\rm km~s^{-1}$. The double-horned spectrum is shown as a function of LSRK velocity in Figure \ref{CO}. The previously constrained system velocity of $6~\rm km~s^{-1}$ is shown, as well as the asymmetric emission from the disk \citep{dent}. \begin{figure}[H] \centering \includegraphics[width=.75\textwidth]{vel0309.png} \caption{Continuum subtracted CO(3-2) spectra as a function of LSRK velocity. The dashed line represents the system velocity of $6~\rm km~s^{-1}$. The $\sigma_{\rm RMS}$ of the individual channels is $\sim 6$ mJy, meaning that the dominant source of uncertainty will come from the absolute flux calibration, which we take to be $\sim 10\%$. \label{CO}} \end{figure} The CO is continuum subtracted using the CASA task {\it uvcontsub}. Figure \ref{moms} (left panel) shows the brightness map for the CO line (zeroth moment), in which the $3\sigma$ CO contour extends out to $1.8"$ ($\sim 210$ au). This is compared directly with the continuum emission (contours), which is more centrally concentrated. The right panel shows the velocity map (first moment), with the CO brightness contours overlaid. There are two brightness peaks, each at $\sim 0.9\rm~Jy~km~s^{-1}$. The peaks are separated by $\sim~0."5$ in a morphology that resembles ring ansae and is suggestive of an inner gas cavity. There is only a tenuous CO detection within this $\sim~0."5$ ($\sim50\rm~au$) diameter cavity, which is broadly consistent with previous shorter wavelength observations that find only tenuous CO between about 10 and 50 au in radius \citep{brittain02, goto}. The velocity field map shows clear Keplerian rotation, with the gas south of the star approaching us. The brightness is skewed westward (right in the image), relative to the velocity map, which is discussed in more detail below. Fig.\,\ref{chan} shows maps for 25 velocity channels between $-0.5\rm~km~s^{-1}$ and $11.5\rm~km~s^{-1}$. Contours represent 3, 6, 9, and 24 times the RMS noise of the zeroth moment. The spectral resolution of the velocity, $0.85~\rm km~s^{-1}$, is a factor of 2 larger than the channel width. For Fig.\,\ref{chan}, the velocity channel spacing is chosen to be $0.50~\rm km~s^{-1}$ to include a slight oversampling. 
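As a rough plausibility check of the Keplerian interpretation (a back-of-the-envelope estimate only, adopting a fiducial stellar mass of $2.4~\rm M_{\odot}$ and an inclination of $\sim53^{\circ}$ in anticipation of the fit in Section 3.3), the projected orbital speed at the $r\sim25$ au edge of the tenuous inner region is
$$
v_{\rm los}=\left(\frac{GM_*}{r}\right)^{1/2}\sin i \approx 29.8~{\rm km~s^{-1}}\left(\frac{M_*}{\rm M_{\odot}}\right)^{1/2}\left(\frac{r}{1~{\rm au}}\right)^{-1/2}\sin i\approx 7.4~{\rm km~s^{-1}},
$$
comparable to the $\sim\pm6~\rm km~s^{-1}$ spread of the channel maps about the systemic velocity, which traces gas slightly farther out.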
The total flux density of the CO given in Table \ref{data} is determined by summing the flux in the zeroth moment map down to $3 \times \sigma_{RMS}$ and multiplying by the number of beams. This value is consistent with integrating over all channels of the CO map to within $10\%$. Note again that there is a clear asymmetry in the emission west of the star. The peak flux in the northwestern limb is significantly higher than in the northeastern and southwestern limbs. \begin{figure}[H] \centering \includegraphics[width=1.1\textwidth]{sbs0411.png} \caption{\textbf{Left:} CO zeroth moment map. The contours represent $3, 6, 9 $ and $ 12 \times \sigma_{RMS}$ noise of the continuum ($\sigma_{RMS} = 0.070 ~\rm mJy~beam^{-1}$). The solid ellipse in the bottom left represents the beam size with properties as given in Table 2. A $50$ au scale (assuming a system distance of $116$ pc) is given in the bottom right. \textbf{Right:} CO first moment map (velocity field). The contours represent $3, 6, 12 $ and $ 24 \times \sigma_{RMS}$ noise ($\sigma_{RMS} = 0.028 ~\rm Jy~beam^{-1}$). Coordinates are given as offset from the phase center, as indicated on the left plot. North is up and East is to the left. \label{moms}} \end{figure} \begin{figure} \centering \includegraphics[width=1.1\textwidth]{chan0409.png} \caption{Channel map of the CO(3-2). The 25 subplots step forward in $0.5~\rm km~s^{-1}$ intervals from $-0.5$ to $11.5~\rm km~s^{-1}$ LSRK. The contours represent 3, 9, and 24 times the RMS noise of the intensity weighted map (as seen in Fig.\,\ref{moms}). Coordinates are given as offset from the phase center, as indicated on the bottom left plot. North is up and East is to the left. \label{chan}} \end{figure} \subsection{MCMC Modeling} As shown in Figure \ref{moms}, the high spatial and velocity resolution capabilities of ALMA yield a well-constrained velocity field. These data can thus be compared with a Keplerian disk model to infer system properties. Trial models are generated by first assuming a uniform Keplerian disk. For simplicity, the inner cavity, temperature profile, and line broadening of CO (which is expected to be small) are not factored into the model. Each model is projected to the disk geometry and the LSRK velocity is subtracted. The model is then convolved with a 2D Gaussian beam as given in Table 2. Using Markov Chain Monte Carlo (MCMC) techniques (specifically, Metropolis-Hastings with Gibbs sampling), the posterior distributions are calculated for the disk's inclination, position angle, LSRK system velocity, dynamical center, and mass. We assume flat prior distributions over the ranges given in Table \ref{mcmc_pri}. Model comparison is conducted in the image domain due to the high velocity resolution and signal-to-noise. \begin{table}[H] \caption{Ranges for the flat prior distributions of each parameter. The Gaussian widths are also given for the proposal distributions. The prior is based on the UV model fitting results given in Table \ref{UV_tab}. 
} \centering \begin{tabular}{c | c | c } \hline\hline Parameter & Prior Range & $\sigma$ \\ \hline Mass [M$_{\odot}$] & $[1.0, 4.0]$ & 0.02 \\ Position Angle [$^{\circ}$] & $[-15.0, 5.0]$ & 0.1 \\ Inclination [$^{\circ}$] & $[45.0, 65.0]$ & 0.2\\ System Velocity [km s$^{-1}$] & $[5.0, 7.0]$ & 0.01 \\ X Offset ["] & $[-0.2, 0.2]$ & 0.06 \\ Y Offset ["] & $[-0.2, 0.2]$ & 0.06 \label{mcmc_pri} \end{tabular} \end{table} Parameter space is explored through a random walk directed by Metropolis-Hastings MCMC \citep[e.g.,][]{ford}. For each new trial, two model parameters are randomly chosen and then updated by drawing from a Gaussian proposal distribution centered on the current value (state $i$). The acceptance probability for the new trial model (state $i+1$) is given by \begin{equation} \alpha ={\rm min}(e^{\frac{1}{2}\left(\chi^{2}_{i}-\chi^2_{i+1}\right)},1), \end{equation} where we take \begin{equation} \chi^{2}_{i} = \sum{ \frac{(D - M_{i})^{2}}{\sigma^{2}}}. \end{equation} Here, D are the data from the CO first moment map (see Fig.\,\ref{moms}), M$_{i}$ is the current model, and $\sigma = 0.5 ~\rm km~s^{-1}$ is the velocity channel width. The summation is over all points on the moment map. If $\alpha$ is greater than a random number drawn from a uniform [0,1] distribution, then the new model is accepted and recorded in the Markov chain. If the model is rejected, then the previous model is used again and re-recorded. The MCMC routine is run using 3 chains, each with randomly chosen starting points in the flat prior parameter space. Each chain contains 100 thousand links, of which about 1000 are needed for burn-in. The 3 chains converge on similar parameters, and the distributions are combined to give the resulting posterior distributions in Fig.~\ref{MCMC}. The blue points correspond to the values of highest probability. The most probable parameters (i.e., the mode of the distributions) are given in Table \ref{mcmc_par}. Uncertainties are given by a $95\%$ credible interval unless otherwise stated. The most probable mass is $2.39~\rm M_{\odot}$, for a distance\footnote{The most probable mass scales directly with the assumed distance.} of 116 pc. Since there is a degeneracy between inclination and mass, we give ${\rm M} \sin(i)$ and ${\rm M}$. Previously constrained stellar mass estimates are between 2.0 and 3.1 $\rm M_{\odot}$ \citep[e.g.,][]{merin,wyatt07}. The posterior distributions for both quantities are sampled independently by the MCMC. Ultimately, the uncertainty in the derived mass is dominated by the distance uncertainty. The re-analyzed {\it Hipparcos} Catalog distance with 1-$\sigma$ uncertainty is $116\pm8$ pc \citep{van}. Considering only this 1-$\sigma$ distance uncertainty with our most probable mass yields $2.39^{+.16}_{-.16}\rm~M_{\odot}$. The most probable parameters are used to construct a final disk model, which is shown in Figure~\ref{resid}. The residuals of the model are also shown as percent deviation from the data. The most probable model typically shows agreement with the data to about 10\%, but has larger deviations along the minor axis of the data/model. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{MCMC0411.png} \caption{Posterior probability distribution from MCMC modeling of the CO velocity field for 300 thousand links minus the burn-in. The blue points represent the most probable model parameter. The contours show 0.5, 1, 1.5, and 2$\sigma$. \label{MCMC}} \end{figure}
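For concreteness, the accept/reject loop described above can be sketched in a few lines of Python. This is a schematic rather than the code used for the fits: the data and the Keplerian velocity-field model are replaced by trivial stand-ins, and only the bookkeeping of the sampler (two randomly chosen parameters per trial, Gaussian proposals, flat priors, re-recording on rejection) is meant to mirror the text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins so the sketch runs: in practice `data` is the CO first-moment
# map and `velocity_model` projects a Keplerian disk with the parameters.
data = np.full((64, 64), 6.0)          # placeholder velocity field [km/s]
sigma = 0.5                            # channel width [km/s]
def velocity_model(p):                 # hypothetical placeholder model
    return np.full((64, 64), p['vsys'])

def chi2(p):
    return np.sum((data - velocity_model(p))**2 / sigma**2)

priors = {'mass': (1.0, 4.0), 'pa': (-15.0, 5.0), 'incl': (45.0, 65.0),
          'vsys': (5.0, 7.0), 'x0': (-0.2, 0.2), 'y0': (-0.2, 0.2)}
widths = {'mass': 0.02, 'pa': 0.1, 'incl': 0.2,
          'vsys': 0.01, 'x0': 0.06, 'y0': 0.06}

state = {p: rng.uniform(lo, hi) for p, (lo, hi) in priors.items()}
cur, chain = chi2(state), []
for step in range(10000):              # 100k links per chain in the text
    trial = dict(state)
    for p in rng.choice(sorted(priors), size=2, replace=False):
        trial[p] = rng.normal(trial[p], widths[p])  # Gaussian proposal
    if all(lo <= trial[q] <= hi for q, (lo, hi) in priors.items()):
        new = chi2(trial)
        if new <= cur or rng.random() < np.exp(0.5 * (cur - new)):
            state, cur = trial, new    # accept the trial model
    chain.append(dict(state))          # re-record current state if rejected
\end{verbatim}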
\begin{table}[H] \caption{Summary of MCMC Results with $95\%$ Credible Range.} \centering \begin{tabular}{c | c | c} \hline\hline Parameter & Most Probable & $95\%$ Credible Range \\ \hline Mass [M$_{\odot}$] & $2.39$ & $[2.34,2.43]$ \\ Mass [M$_{\odot}$sin(i)] & $1.92$ & $[1.89,1.95]$ \\ Position Angle [$^{\circ}$] & $-3.36$ & $[-3.78,-2.71]$ \\ Inclination [$^{\circ}$] & $53.4$ & $[52.5, 54.6]$ \\ System Velocity [km s$^{-1}$]& $6.04$ & $[6.01, 6.06]$ \\ X Offset ["] & $-0.049$ & $[-0.060, -0.038]$ \\ Y Offset ["] & $-0.11$ & $[-0.12, -0.10]$ \label{mcmc_par} \end{tabular} \end{table} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{modelA0330.png} \includegraphics[width=.8\textwidth]{resid0330.png} \caption{\textbf{Top:} The left panel shows the first moment map of the data (same as the right panel in Fig.\,\ref{moms}), while the right shows the velocity field of the model. \textbf{Bottom:} The panel shows the residuals presented as a percent difference of the model from the data. All images are shifted to the systemic velocity of $6.04~\rm km~s^{-1}$. The model is consistent with the data to about 10\% or better throughout most of the disk. The largest deviations occur along the minor axis. The black ellipse in the bottom corresponds to the beam with properties given in Table 2. \label{resid}} \end{figure} \section{Discussion} \subsection{Disk Asymmetry} An interesting asymmetry is observed in the CO channel map. Looking at the ``butterfly'' features in the $4-8 \,{\rm km~s^{-1}}$ channels, there is a localized flux enhancement on the northwestern (top right) component of the gas. The east wing of the butterfly has a fairly symmetrical intensity about the system velocity of $6 \,{\rm km~s^{-1}}$, while the west wing is asymmetrical. To explore this feature further, Figure \ref{zoom} shows three of the channel maps (4.5, 6, and 7.5 ${\rm km~s^{-1}}$), along with the continuum using $3, 6, 9 $ and $ 12 \times \sigma_{RMS}$ contours. The southern components of both sides appear to be approximately symmetric, but a strong asymmetry becomes obvious for the 6 and 7.5 ${\rm km~s^{-1}}$ maps, in which the western wing is brighter than the eastern wing by $\sim 40\%$ in each channel. These channel maps also suggest that there is indeed an inner cavity to the CO disk, as noted in other studies \citep{goto, flaherty}. Since the asymmetry is present throughout multiple channels (see Fig.\,\ref{chan}), the feature appears to be real in the data. While the exact source of the flux enhancement is unknown, it may be caused by asymmetries in the inner disk edge, such as vortex formation \citep[e.g.,][]{lyraa,lyrab}, or by perturbations from an unseen companion. \citet{dent} observe a large asymmetry in $\beta$-Pic that is attributed to localized collisions of gas-rich comets. The asymmetry is also in the general direction of the two distant red dwarf companions that orbit at $\sim 1000 ~ \rm au$. Follow-up observations and detailed simulations are required to determine the cause of the CO disk morphology. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{co_zoom0409.png} \caption{The $4.5, 6.0,$ and $7.5 ~\rm km~s^{-1}$ velocity channels of CO. The localized flux enhancement can be seen on the west and northwest components of the gas disk. The velocities given are LSRK and are centered around a system velocity of $6 ~\rm km~s^{-1}$.
The contours represent $3, 6, 12 $ and $ 21 \times \sigma_{RMS}$ noise ($\sigma_{RMS} = 0.070~\rm mJy~beam^{-1}$) of the continuum. North is up and East is to the left. \label{zoom}} \end{figure} \subsection{Debris/Dust Mass \label{sec:debris}} An initial estimate for the dust mass is made by assuming that the emission is optically thin, dominated by mm grains, and spatially concentrated in a thin ring. In this case, \begin{equation} M = \frac{4}{3} \rho_i \pi s^{3} \frac{F_\nu(\text{Obs})}{ B_\nu(R)\Omega_{\text{s}}}, \end{equation} where $F_\nu(\text{Obs})$ is the observed flux density of the continuum, $B_\nu(R)$ is the black body intensity for a single grain placed at a distance $R$ from the star, and $\Omega_s$ is the solid angle of a single grain. The grains are further assumed to be in thermal equilibrium with the host star, to have an internal density $\rho_i = 2.5~\rm g~cm^{-3}$ and size $s=1~\rm mm$, and to be perfect absorbers and radiators (albedo of 0, emissivity of 1). We note that this mass estimate is equivalent to $M = \frac{d^2 F_\nu(\text{Obs})}{\kappa_\nu B_\nu(R)}$ with $\kappa_\nu =3~\rm cm^2~g^{-1}$, for our assumptions. This opacity is within a factor of two of the mm opacity used by \cite{flaherty}. For an approximate lower limit, the ring can be envisaged to be at $R=10$ au, which represents the innermost location for large grains based on SED modeling \citep{malfait}. At this distance and for the noted assumptions, the grains would be at $T\sim200$~K, which yields a mm grain mass\footnote{Adopting a different distance will scale the mass by $(\frac{d}{116~\rm pc})^{2}$.} of $0.04~\rm M_{\oplus}$. Placing grains at larger stellar separations would require additional mass to explain the emission. For example, if all the grains were placed at $R=50$ au ($T\sim90$~K), the mm grain mass would be $\sim0.09~\rm M_{\oplus}$. This simple estimate may only correspond to the actual dust mass if the observed mm grains are leftovers that were never incorporated into planets. Instead, if the grains are produced by the evolution of a nascent debris disk, the total mass can be significantly different. We explore this possibility next using a size distribution of grains spread throughout a disk. For simplicity, we assume that the surface density of material decreases as $r^{-1}$ over disk radii 10-50 au. The dust is assumed to be absent outside of these boundaries. We further take the grains to radiate efficiently as long as their diameter $(2s)$ is equal to or larger than the wavelength of the absorbed/emitted photons \citep[e.g.,][]{wyatt}. For wavelengths larger than the grain's diameter, the emission and absorption coefficients ($\rm Q_\nu({\rm em}) = Q_\nu ({\rm abs}) = Q_\nu$) are inversely proportional to the photon wavelength. Specifically, \begin{equation} {\rm Q}_\nu = \left\{ \begin{array}{lr} 1 & ~ 2s > \lambda\\ \frac{2s}{\lambda} & ~\rm otherwise. \end{array} \right. \end{equation} We only consider a ``total'' debris mass up to some maximum parent body size, which is taken to be $s_{\rm max}=50\rm~km$. This does not mean that $50~\rm km$ is envisaged to be the size of the largest solids in the debris disk; it is only the maximum size we consider in a given size distribution. To get a total debris mass for solids $s<50$ km, a particle size distribution must be assumed. Lacking further constraints, we use a collisional cascade such that the mass per size increment $M_s\propto s^{-0.5}$ \citep[e.g.,][]{dohnayi}.
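As a numerical check of the single-size ring estimate above (before normalizing the cascade model below), the quoted $0.04~\rm M_{\oplus}$ follows directly from the equivalent opacity form of the mass equation. The sketch below is illustrative only; the constants are standard SI values and the inputs are the ones assumed in the text.
\begin{verbatim}
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8        # SI constants

def planck(nu, T):                             # B_nu(T) [W m^-2 Hz^-1 sr^-1]
    return 2*h*nu**3 / c**2 / (np.exp(h*nu/(k*T)) - 1.0)

F_nu  = 3.8e-3 * 1e-26        # 3.8 mJy -> W m^-2 Hz^-1
d     = 116 * 3.086e16        # 116 pc -> m
nu    = 345.8e9               # observing frequency (~870 micron)
kappa = 3.0 * 0.1             # 3 cm^2 g^-1 -> 0.3 m^2 kg^-1

M = d**2 * F_nu / (kappa * planck(nu, 200.0))  # ring at 10 au, T ~ 200 K
print(M / 5.97e24)                             # ~0.04 Earth masses
\end{verbatim}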
Returning to the cascade model, the total mass is then determined by requiring the model continuum flux density to match our observations. In practice, the debris disk is divided into a series of rings (here 100), placed evenly between 10 and 50 au. If each ring has the same mass, then the surface density profile follows $r^{-1}$. A flux density for each ring is then calculated by first deriving a grain temperature, assuming that the grains are dark (albedo $\sim0$) and balancing the received and emitted powers using a black body model with the effects of $Q_\nu$. The grain temperature \citep[e.g.,][]{wyatt} is \begin{equation} T_g = T_{g,BB} \left(\frac{Q_{\rm abs}(T_\text{star})}{Q_{\rm abs}(T_g)}\right)^{1/4}, \end{equation} where $T_\text{star}$ is the host star's surface temperature (assuming it is a black body), $T_{g,BB}$ is the equilibrium grain temperature if the grain were also a perfect black body, and $T_g$ is the actual grain temperature. The equation must be solved iteratively, but converges quickly. For HD141569, $T_g\approx T_{g,BB}$ except for grains smaller than tens of microns. To calculate $Q_{\rm abs}(T)$, $Q_\nu$ is integrated over all frequencies and weighted by a black body of the given temperature, i.e., % \begin{equation} Q_{\rm abs}(T) = \frac{ \int B_{\nu}(T,\nu) Q_\nu d\nu}{ \int B_{\nu}(T,\nu) d\nu}. \end{equation} Taking $T_\text{star} = 10500$ K and the above grain size and spatial distribution, we find that $M(s<{\rm 50~km})\sim 160~M_{\oplus}\frac{\rho_i}{\rm 2.5~g~cm^{-3}}$. This result should be interpreted with caution. A steeper (shallower) solid size distribution can lead to significantly larger (smaller) masses. The result is also dependent on the internal density of the grains, as well as their effective albedo and emissivity. Nonetheless, the results are illustrative that significant debris may be distributed between 10 and 50 au. The total mass of solids would be much larger should debris (at a lower surface brightness) be present at disk radii $r>50$ au, which would be consistent with single dish measurements (see Table 1). \subsection{Gas Mass} The mass of an optically thin gas disk near LTE can be calculated from the integrated line intensity \citep[e.g.,][]{perez}. Given a line flux of F$_{\rm{OBS}} = 15.7 \rm~Jy~km~s^{-1}$, the average line intensity over the source's solid angle $\Omega$ is \begin{equation} \hat{I} = \frac{F_{\rm OBS}}{\lambda \Omega} , \end{equation} where $\lambda = 867~\mu$m is the average wavelength of the observations. The upper transition level column density of CO is given by \begin{equation} N_{3} = \frac{4 \pi \hat{I}}{h \nu A_{32}} , \end{equation} where $\nu = 345.79$ GHz is the frequency of the molecular feature, and $A_{32} = 2.497 \times 10^{-6}~\rm s^{-1}$ is the Einstein $A$ coefficient\footnote{The spectral information for the CO molecule was obtained from the Splatalogue database http://www.splatalogue.net, \citet{splat}.} for spontaneous emission in the transition. In the following, $J=3$ (the upper transition level) unless otherwise noted (such as in the summation). Under the assumption that all $J$ energy levels are populated in LTE, the total column density is given by \begin{equation} N_{\rm{Total}} = N_{J} \frac{Z}{2J+1} e ^{\frac{h B_{e} J(J+1)}{kT}}, \end{equation} and Z is \begin{equation} Z = \sum_{j=0}^{\infty} (2j+1) e ^{-\frac{h B_{e} j(j+1)}{kT}}. \end{equation} Here, $B_{e} = 57.635$ GHz is the rotational constant$^{4}$, T is the gas temperature, and Z is the canonical partition function.
The gas mass is then given by \begin{eqnarray} \rm{M}_{\rm{CO}} &=& m_{\rm{CO}} N_{\rm{Total}} \Omega d^{2},\\\nonumber & = & \frac{4 \pi m_{\rm{CO}} \, d^{2} \,F_{\rm OBS} \, \rm{Z}}{h \nu \lambda A_{32} (2J+1)} ~ e ^{\frac{h B_{e} J(J+1)}{kT}} \end{eqnarray} for a solid angle $\Omega$ and distance to the object $d$. Taking a gas temperature T $= 33$ K, the minimum excitation temperature of the $J = 3$-$2$ line, gives M$_{\rm{CO}} =1.9 \pm 0.2 \times 10^{-3}~\rm M_{\oplus}$, with the uncertainty propagated from the CO flux density uncertainty in Table 2. The corresponding spatially averaged column density, $N_{\rm{Total}}$, is $1.2 \pm 0.1 \times 10^{16} ~\rm cm^{-2}$. This is within the optically thin limit \citep{wyatt_stages}, but should not be taken as independent confirmation, as we assumed the gas to be thin for the mass calculation. If the gas is partly optically thick, then the actual CO gas mass could be larger by a factor of a few \citep{matra}. As such, the CO mass here could be interpreted as a lower limit. Due to the uncertainty in the appropriate treatment of the gas, we will only report the CO mass as M$_{\rm CO}\sim2\times10^{-3}~\rm M_{\oplus}$ to emphasize that the calculation has important unknowns. \citet{flaherty} find a gas model with total mass of $13^{+50}_{-9}~\rm M_{\oplus}$ as constrained by LTE models of gas temperature and density of CO(1-0) and CO(3-2) with CARMA and SMA, respectively. If we assume the $10^4$ ISM number density abundance ratio for H$_{2}$ to CO (as in Flaherty et al.), the inferred H$_2$ gas mass from the ALMA observations is M$_{\rm{H_2}} \sim 1.4 ~\rm M_{\oplus}$. Including additional metals would increase the total inferred gas mass to slightly above $\sim 1.5 ~\rm M_{\oplus}$, which is a factor of a few below the lower bound of the SMA and CARMA based model. The observations and models altogether thus suggest that there is 1 to a few tens of M$_\oplus$ of gas mass, assuming the ISM scaling can be used, which is not obviously the case. Additional caveats for these gas-mass estimates are discussed below. \subsection{What can HD141569 tell us about grain growth, planet formation, and disk evolution?} The morphology of HD141569 shows a dust disk extending out to about 56 au and an extended CO gas component between about 30 and 210 au. This structure alone suggests that the system is an evolved transition disk. However, as discussed below, HD141569 may be better interpreted as a nascent debris system. The distinction is that the dust would be second generation, and any associated size distribution would reflect the clearing stages of planet formation rather than grain growth outcomes. \subsubsection{Primordial v.~Second Generation} While most debris disks are expected to be extremely gas-poor, several younger debris systems (e.g., $\beta$ Pic as discussed in Section 1) have been observed with CO masses $ \rm M_{\rm CO} = 10^{-5} - 10^{-2} ~\rm M_{\oplus}$ \citep{pascucci, hughesa, dent}. HD141569, while younger, has a comparable CO gas mass of $\sim 2\times10^{-3} \rm~M_\oplus$. The total gas mass of $\sim1.5~\rm M_{\oplus}$ ($\sim 5 \times 10^{-3}\rm~M_J$) assumes an ISM H$_2$ to CO abundance ratio. There is ultimately no reason to suspect that this conversion is applicable to HD141569 after 5 Myr of evolution. If the gas disk is not optically thin, as assumed in the calculation above, then using CO as a tracer of total gas could underestimate the actual gas mass \citep{bergin}.
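For concreteness, the optically thin LTE estimate above amounts to a few lines of arithmetic. The sketch below is illustrative (SI constants; molecular data as quoted in the text; the cutoff of the partition-function sum is arbitrary but convergent) and reproduces the $\sim1.9\times10^{-3}~\rm M_{\oplus}$ value.
\begin{verbatim}
import numpy as np

h, k = 6.626e-34, 1.381e-23                  # SI constants
nu, A32, B_e = 345.79e9, 2.497e-6, 57.635e9  # CO(3-2) data from the text
J, T = 3, 33.0                               # upper level, assumed T [K]
d = 116 * 3.086e16                           # 116 pc -> m
m_CO = 28.0 * 1.6605e-27                     # CO molecular mass [kg]

# 15.7 Jy km/s -> W m^-2 (divide by lambda = 867 micron: km/s -> Hz)
F = 15.7 * 1e-26 * 1e3 / 867e-6

Z = sum((2*j + 1) * np.exp(-h*B_e*j*(j + 1) / (k*T)) for j in range(200))
N_J = 4*np.pi * F * d**2 / (h * nu * A32)    # molecules in J = 3
N_tot = N_J * (Z / (2*J + 1)) * np.exp(h*B_e*J*(J + 1) / (k*T))
print(m_CO * N_tot / 5.97e24)                # ~1.9e-3 Earth masses
\end{verbatim}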
The current CO disk should be expected to be depleted by photodissociation on timescales of $\sim120$ yr \citep{visser}, unless significant self-shielding is present. While the derived column density of CO ($\sim 10^{16}\rm~cm^{-2}$) would contribute to some shielding, it is not obviously sufficient to prevent rapid dissociation. Unless the gaseous disk is massive enough to prompt CO formation in rough balance with photodissociation, the low inferred CO mass creates a potentially serious timing problem for a primordial gas interpretation. Instead, if the gas is second-generation, as produced by a planetesimal population \citep[e.g.,][]{moor11}, then the short dissociation timescale may not be problematic. Rather, the problem now becomes whether sufficient mass is available to produce a low-mass gaseous disk, and if so, whether the planetesimal destruction rates would be consistent with the dynamics and the radiation field of the system. First, we note that the debris interpretation is corroborated by recent scattered light imaging \citep{konishi}. The images reveal very small grains present around 50 au, a region co-located with the mm grains observed here. Such small grains should be removed from the system quickly by radiation pressure. The presence of the small dust grains in this region of the disk suggests that significant collisional evolution is indeed taking place. This, by itself, does not establish that the gaseous disk is best described as a debris disk, but it motivates that consideration. If the CO gas is depleted quickly through photodissociation on $\sim120$ yr timescales, then for our estimate of the CO mass, the CO production rate must be $\rm \dot{M}_{\rm CO}\approx 17 ~ M_\oplus ~ Myr^{-1}$. If a typical comet's mass is 10\% CO ice \citep{mumma}, then about 170 $\rm M_\oplus$ of cometary material must be destroyed per Myr to balance photodissociation. This also implies that the total gas mass is within an order of magnitude of the CO gas. Based on cometary compositions, CO can be accompanied by approximately similar abundances of H$_{2}$O and CO$_{2}$ \citep{mumma}. Ultimately, spectroscopic follow-up must be used to determine the gas composition and compare that with cometary abundances to further constrain this scenario observationally. Is the required comet destruction rate plausible? As discussed in Section \ref{sec:debris}, the ALMA continuum emission of 3.8 mJy with a collisional cascade model implies a total solid mass $M\sim160~\rm M_\oplus$ for $s<50~\rm km$ in the inner disk. While the ALMA CO observations are consistent with single-dish observations, the continuum flux measured here is lower than that found in previous studies. For example, single dish observations by \cite{nilsson} find a continuum flux density of $12.6\pm 4.6~ \rm mJy$ at 870 $\mu$m, and the SMA observations measure $8.2\pm 2.4~ \rm mJy$ \citep{flaherty}. The much larger beam in these observations could be biasing the detected flux through contamination, but at face value, this suggests that there may still be considerable dust mass at larger radii whose emission is resolved out by the interferometer or is too low in surface brightness to be detected at the sensitivity of these observations. As such, the true mass in solids may be larger than estimated here.
For example, if we extend the collisional cascade model out to 210 au (the extent of the CO disk) and normalize the mass to 12.6 mJy, the total solid mass is over 360 $\frac{\rho_i}{\rm 1~g~cm^{-3}}\rm M_\oplus$ (for $s<50$ km), where we have used $\rho_i= 1\rm~g~cm^{-3}$ to represent icy bodies. We stress that this estimate is very uncertain, as it depends on the assumed size distribution, planetesimal densities, grain albedos and emissivities, and distance to HD141569\footnote{This estimated debris mass for an extended disk scales as roughly $(\frac{d}{\rm 116~pc})^2$.}. Provided that the estimated mass reservoir is dynamically accessible (which is neither explored here nor obviously met), there is potentially sufficient cometary material to produce the current CO gas, although the system would not maintain this gas abundance for a protracted time without shielding. Why should significant CO gas only appear outside a radius of about 30 au? As noted in the introduction, tenuous, warm CO has been detected interior to the 50 au diameter cavity, but there is a large change in CO abundance exterior to this distance, as revealed here. If the gas is indeed second generation, then the change in CO abundance may reflect where significant CO was incorporated into planetesimals at the time of their formation. In this paradigm, the entire disk is collisionally evolving, but significant CO gas is only released by planetesimals that harbor a large fraction of CO ice. Alternatively, the reduced abundance of CO interior to about 30 au may simply reflect the CO photodissociation environment closer to the star and/or changes in self-shielding. The inner edge of the CO could also be set by a region with a higher rate of stirring by planets and embryos \citep{lissauer}. There is a potential contradiction, however, with this approach. The CO mass was derived assuming that it is optically thin and in LTE. If the gas is indeed second-generation, then it is not obvious whether there will be sufficient collisional partners to populate the rotational levels thermally. In this case, the true CO mass could be significantly different from our estimates, and potentially even orders of magnitude more massive if non-LTE effects do dominate \citep{matra}. To check the degree to which the LTE assumption may be valid, we use the ALMA-measured CO(3-2) integrated line flux to estimate the CO(1-0) integrated line flux under LTE conditions, which is approximately $\sim 0.8\rm~Jy~km~s^{-1}$. The \cite{flaherty} CARMA observations found an integrated line flux for CO(1-0) of $1.6\pm 0.2\rm~Jy~km~s^{-1}$, making the estimate good to about a factor of two. Ultimately, observations of disk chemistry are needed to understand the gas's origin. \section{Summary} We have presented ALMA continuum ($870~\mu$m) and CO(3-2) observations of HD141569. The continuum observations show a dust disk that extends out to $0."49$ with a total continuum flux density of $3.8 \pm 0.4 ~\rm mJy$ (peak flux of $1.74 \pm 0.24 ~\rm mJy~beam^{-1}$). A rough lower limit to the amount of dust mass needed to explain the emission is $0.04~\rm M_{\oplus}$. If the dust is due to the collisional evolution of debris (rather than leftover millimeter grains from planet-building), then the millimeter flux reflects a comet and asteroid reservoir of $\sim 160~\rm M_{\oplus}$ for sizes $s<\rm 50~ km$ (assuming a collisional cascade).
The continuum flux density found here is about a factor of three lower than that derived by single dish observations, suggesting that there is additional dust on larger spatial scales or at a lower surface brightness. The CO disk observations reveal CO extending from roughly the outer edge of the inner dust disk to about $1."8$. The CO(3-2) integrated flux density is $15.7 \pm 1.6~\rm Jy~km~s^{-1}$ (peak flux of $0.90 \pm 0.16 ~\rm Jy~km~s^{-1}~beam^{-1}$), which is consistent with single dish measurements. Assuming that the gas is in LTE and optically thin, the corresponding CO mass is $\sim2 \times 10^{-3}~\rm M_{\oplus}$ for a distance of 116 pc. Based on modeling the velocity field, the disk is constrained to have a Position Angle $=-3.36^{\circ +.65}_{-.42}$, an inclination $=53.4^{\circ +1.2}_{-.9}$, and a system velocity $v_{sys} = 6.04^{+.02}_{-.03}$ km s$^{-1}$. The gas velocities are consistent with orbiting a star of $2.39^{+.04}_{-.05}\rm~M_{\odot}$ for the most probable inclination and a distance of 116 pc. These uncertainties represent the 95\% credible regions computed from the MCMC samples. Alternatively, considering only the 1-$\sigma$ distance uncertainty with our most probable mass yields $2.39^{+.16}_{-.16}\rm~M_{\odot}$. The channel maps show a localized flux enhancement in the western section of the disk. Further detailed modeling of the system and higher resolution imaging are needed to properly constrain the full morphology. Because CO should photodissociate rapidly, the gas may require, in part, replenishment through collisions of comets, making the disk a debris system. While the required mass to do this may be high, it is potentially within plausible limits of the inferred debris field. Observations probing the gas composition can be used to further constrain the origin of the gas, particularly as LTE assumptions may not apply. We thank the anonymous referee for the helpful comments during the review process. J.A.W. and A.C.B. acknowledge support from an NSERC Discovery Grant, the Canadian Foundation for Innovation, The University of British Columbia, and the European Research Council (agreement number 320620). A.M.H. and K.M.F. are supported by NSF grant AST-1412647. E.B.F.'s contribution was supported in part by funding from the Center for Exoplanets and Habitable Worlds. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University, the Eberly College of Science, and the Pennsylvania Space Grant Consortium. A.C.B. and E.B.F. also acknowledge The University of Florida and the NASA Sagan Fellowship program. M.J.P. also acknowledges NASA Origins of Solar Systems Program grant NNX13A124G, NASA Origins of Solar System Program grant NNX10AH40G via award agreement 1312645088477, NASA Solar System Observations grant NNX16AD69G, BSF Grant Number 2012384, as well as support from the Smithsonian 2015 CGPS/Pell Grant Program. This paper makes use of the following ALMA data: ADS/JAO.ALMA[2012.1.00698.S]. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\section{INTRODUCTION} The problem of how to control routing across a network underlies a vast array of real-world problems including internet routing, voice/video communication, traffic flows, etc. In its general form, the problem is how to optimize the flow of certain entities (e.g., information packets, cars) from sources to destinations across a network of routing nodes. Here we are concerned with the version of the problem in which ``optimization'' consists of minimizing aggregate cost incurred by the entities flowing to their destinations. To ground the discussion, we will consider the case where the entities being routed are packets. Currently, many real-world network routing solutions to this particular problem are based on the Shortest Path Algorithm (SPA), in which each routing node in the network maintains estimates of the ``shortest paths'' (i.e., minimal total incurred costs) from it to each of its destinations and at each moment satisfies any routing requests by sending all its packets down that shortest path. Many algorithms exist for efficiently computing the shortest path in the case where the costs for traversing each component of every path at any given time are known. In particular, there exist many such algorithms that can be applied when node-to-node path-cost communication is available and the costs for traversing each component are unvarying in time (e.g., Dijkstra's Algorithm \cite{ahma93,bega92,depa84,dijk59}). Real-world SPA's apply such algorithms to estimated costs for traversing each component of every path to generate their estimated shortest paths. Consider the case where, for all paths from a particular node to a particular destination, the costs that would be incurred by that node's routing all its current traffic along that path are known exactly to that node (the information being stored in that router's ``routing table''). Clearly if a non-infinitesimal amount of traffic is being routed by our node, then in general its sending all that traffic down a single path will not result in minimal cost incurred by that traffic, no matter how that single path is chosen. However if it must choose a single path for all its traffic, then tautologically the SPA chooses the best such path. Accordingly, in the limit of routing an infinitesimally small amount of traffic, with all other nodes' strategies being a ``background'', such a router's running SPA is the optimal (least aggregate incurred cost) routing strategy {\it for that particular routing node considered individually}. One might hope that more generally, if the node must allot all of its traffic to a single path, then its choosing that path via the SPA would be the {\em globally} optimal choice of a single path, at least in the limit of infinitesimally little traffic. This is not the case though, because in using the SPA the node is not concerned with the deleterious side-effects of its actions on the costs to other nodes~\cite{kola97,wotu99a}. In the extreme case, as elaborated below, if all nodes were to try to minimize their personal costs via SPA's, then the nodes would actually {\it all} receive higher cost than would be the case under an alternative set of strategies. This is an instance of the famous Tragedy Of the Commons (TOC)~\cite{hard68}. Deleterious side-effects need not be restricted to extend over space; they can also extend over time.
Indeed, consider the algorithm of having all routers at a given moment make routing decisions that optimize global cost incurred by the traffic {\it currently being routed}, an algorithm often called ``load-balancing'' (LB). By definition, LB avoids the deleterious side-effects over space that can result in the TOC for the costs incurred by the traffic currently being routed. However, due to side-effects over time, even conventional LB is often suboptimal as far as global cost averaged across time is concerned. Intuitively, one would have to use ``load-balancing over time'' to ensure truly optimal performance. In this paper we are concerned with how to address these kinds of deleterious side-effects, and thereby improve performance. In particular, we are interested in ways of doing this that result in better performance than that of the ubiquitous SPA. Now use of the SPA obviously provides no guarantees, even for the personal cost of the router using it, if the path estimates of the nodes are incorrect. Such inaccuracy is the rule rather than the exception in many practical applications. Typically those estimates will be in error because node-to-node communication is not instantaneous, and therefore routing tables may be based on out-of-date information. More generally though, even if that communication were instantaneous, the cost to traverse a component of the network may be different by the time the packet arrives at that component. In this paper we do not wish to investigate such topics, but rather to highlight the issue of side-effects. Accordingly we ``rig the game'' in favor of the SPA by constructing our simulations so that the first potential cause of routing table inaccuracy does not arise, and the second is minimized. We do this in our experiments by using an {\em Ideal} Shortest Path Algorithm (ISPA) which has direct access to the shortest path at each moment. Note that this ISPA provides an upper bound on the performance of any real-world SPA. In general, even without side-effects, determining the optimal solution to a flow problem (e.g., determining what the loads on each link need to be to maximize throughput on a non-cooperative data network) can be intractable~\cite{ahma93,orro93a}. Therefore, we will concern ourselves with providing {\em good} solutions that avoid the difficulties the ISPA has with side-effects. It is not our aim here to present algorithms that find the best possible (``load-balanced over time'') solution. We will base our solutions on the concept of Collective Intelligence. A ``COllective INtelligence'' (COIN) is any pair of a large, distributed collection of interacting goal-driven computational processes among which there is little to no centralized communication or control, together with a `world utility' function that rates the possible dynamic histories of the collection~\cite{wotu99a,wotu99b}. In this paper we are particularly concerned with computational processes that use machine learning techniques (e.g., reinforcement learning~\cite{kali96,suba98,sutt88,wada92}) to try to achieve their goal, conventionally represented as maximizing an associated utility function.
We consider the central COIN design problem: {\em how, without any detailed modeling of the overall system, can one set utility functions for the individual components in a COIN to have the overall dynamics reliably and robustly achieve large values of the provided world utility?} In other words, how can we leverage an assumption that our learners are individually fairly good at what they do? In a routing context, this question reduces to what goals one ought to provide to each router so that each router's greedily pursuing those goals will maximize throughput (``incentive engineering''). For reasons given above, we know that the answer to this question is not provided by SPA's goals --- some new set of goals is needed. In Section~\ref{sec:back} we discuss the SPA's deficiencies and in particular their manifestations in Braess' paradox. We also demonstrate the suboptimality of load-balancing in that section. We then present Collective Intelligence in Section~\ref{sec:coin}, discuss the routing model we will use in our experiments, and show how the theory of COINs can be applied to that model to provide an alternative to shortest path algorithms. In Section~\ref{sec:sim} we present simulation results with that model that demonstrate that in networks running ISPA, the per packet costs can be as much as 32\% higher than in networks running algorithms based on COIN theory. In particular, even though it only has access to imprecise estimates of costs (a handicap that does not hold for ISPA), the COIN-based algorithm almost always avoids Braess' paradox, in stark contrast to the ISPA. Since the cost incurred with ISPA's is presumably a lower bound on that of an SPA not privy to instantaneous communication, the implication is that COINs can outperform such real-world SPA's.\footnote{A brief synopsis of the COIN algorithm discussed here was presented in a space-constrained article~\cite{wotu99a}; this paper presents full details and applies the algorithm to Braess' paradox as an illustration of the suboptimality of the SPA.} \section{Suboptimality of Shortest Path and Load-Balancing} \label{sec:back} In this section we first demonstrate the suboptimality of an SPA when we have multiple nodes making simultaneous routing decisions, where neither node knows ahead of time the other's choice, and therefore does not know ahead of time exactly what the costs will be. We then demonstrate that such suboptimality can hold even when only one node is making a decision, and it knows what decisions the others have previously made. Next we present Braess' paradox, a particularly pointed instance of these effects. (See~\cite{bass92,coke90,coje97,kola99} for other discussion of Braess' paradox in SPA routing.) We end by demonstrating the suboptimality of conventional load-balancing when cost over time is what is of interest. \subsection{SPA when multiple routers are simultaneously making decisions} \label{sec:ispa} Perhaps the simplest example of how individual greed on the part of all nodes can lead to their collective detriment occurs when two nodes determine that their shortest path is through a shared link with a limited capacity, while both have a second option that is slightly less preferable. In such a case, their using the common link degrades the performance of {\em both} parties, since due to limited capacity the performance of that link will quickly fall below that of their second option.
More precisely, consider the case where, given a load $x$, the shared link has a cost given by $x^3$, and where each router has a second option where the cost is given by $2x$. Acting alone, with a single packet to send, they would both send that packet through the shared link (cost of 1). However by both doing so, they incur a larger cost (cost of 8) than if they had both used their second choice (cost of 4). Without knowing what each other will do ahead of time (information not conventionally contained in routing tables), the nodes will necessarily have mistaken cost estimates and therefore make incorrect routing decisions. (Indeed, to have {\it all} nodes know what each other are doing ahead of time requires the use of game theory.) In this, even in the limit of differentially small packets, use of SPA will lead to a wrong routing decision. \subsection{SPA when only one router is making a decision} \label{sec:subopt} Consider the network shown in Figure~\ref{fig:simple}. Two source routers $X$ and $Y$ each send one packet at a time, with $X$ sending to either intermediate router $A$ or $B$, and $Y$ sending to either $B$ or $C$. This type of network may arise in many different topologies as a subnetwork. Accordingly, difficulties associated with this network can also apply to many more complex topologies. \begin{figure} [htb] \begin{picture}(100,140)(-100,-30) \put(50,0){\circle*{10}} \put (50,-15) {\makebox(-1,1)[b]{$X$}} \put(50,0) {\line (-2,3){50}} \put(50,0) {\line (2,3){50}} \put(150,0){\circle*{10}} \put (150,-15) {\makebox(-1,1)[b]{$Y$}} \put(150,0) {\line (-2,3){50}} \put(150,0) {\line (2,3){50}} \put(0,75){\circle*{10}} \put (0,90) {\makebox(-1,1)[t]{$A$}} \put(100,75){\circle*{10}} \put (100,90) {\makebox(-1,1)[t]{$B$}} \put(200,75){\circle*{10}} \put (200,90) {\makebox(-1,1)[t]{$C$}} \end{picture} \caption{Independent decisions at the source} \label{fig:simple} \end{figure} Let $x_A$, $x_B$, $y_B$, and $y_C$ be the packet quantities at a particular fixed time $t$, at $A$, $B$, or $C$, and originating from $X$ or $Y$, as indicated. At $t$, each source has one packet to send. So each of our variables is binary, with $x_A + x_B = y_B + y_C = 1$. Have $V_i(z_i)$ be the cost, per packet, at the single instant $t$, at router $i$, when the total number of packets at that instant on that router is $z_i$. So the total cost incurred by all packets at time $t$, $G(\vec{x}, \vec{y})$, equals $x_A V_A(x_A) + (x_B + y_B) V_B(x_B + y_B) + (y_C) V_C(y_C)$. In an ISPA, $X$ chooses which of $x_A$ or $x_B$ = 1 so as to minimize the cost {\it incurred by X's packet alone}, $g_X(\vec{x}) \equiv x_A V_A(x_A) + x_B V_B(x_B + y_B)$. (Real-world SPA's typically try to approximate this by having $X$ choose either $A$ or $B$ according to whether $V_A(0)$ or $V_B(y_B)$ is smaller, where those two values can be estimated via pings, for example.) In doing this the ISPA ignores the $y_B V_B(x_B + y_B)$ term, i.e., it ignores the ``side effects'' of $X$'s decision. The right thing to do of course is instead have $X$ minimize $G(\vec{x}, \vec{y})$, or more precisely, the components of $G(\vec{x}, \vec{y})$ that depend on $X$. Writing it out for this case, $X$ ought to act to minimize $x_A V_A(x_A) + (x_B + y_B) V_B(x_B + y_B)$.
Due to the constraint that $x_A + x_B = 1$, this means sending down $A$ iff $V_A(1) < (y_B + 1) V_B(y_B + 1) - y_B V_B(y_B)$, which differs from the ISPA result in that $X$ is concerned with the full cost of going through router $B$, not just the portion of that cost that its packet receives. In the context of this example, this $G$-minimizing algorithm constitutes ``load-balancing'' (LB). Note that so long as sgn$[V_A(0) - V_B(y_B) - y_BV'_B(y_B)] \neq$ sgn$[V_A(0) - V_B(y_B)]$, even in the limit of infinitesimally small traffic (so that $x_A + x_B$ equals some infinitesimal $\delta$), ISPA and LB still disagree. \subsection{Braess' Paradox} \label{sec:braess} Braess' paradox~\cite{bass92,coke90,coje97,kola98,kola99} dramatically underscores the inefficiency of the ISPA described above. This apparent ``paradox'' is perhaps best illustrated through a highway traffic example first given by Bass~\cite{bass92}: There are two highways connecting towns S and D. The cost associated with traversing either highway (either in terms of tolls, or delays) is $V_1 + V_2$, as illustrated in Net A of Figure~\ref{fig:hex}. So when $x = 1$ (a single traveler) for either path, total accrued cost is 61 units. If on the other hand, six travelers are split equally among the two paths, they will each incur a cost of 83 units to get to their destinations. Now, suppose a new highway is built connecting the two branches, as shown in Net B in Figure~\ref{fig:hex}. Further, note that the cost associated with taking this highway is not particularly high (in fact for any load higher than 1, this highway has a lower cost than any other highway in the system). The benefit of this highway is illustrated by the dramatically reduced cost incurred by the single traveler: by taking the short-cut, one traveler can traverse the network at a cost of 31 units ($2 \;V_1 + V_3$). Adding a new road has seemingly reduced the traversal cost dramatically. \begin{figure}[bh] \begin{picture}(100,140)(-80,-20) \put(50,0){\circle*{10}} \put (58,0) {\makebox(-1,1)[l]{$S$}} \put(50,0) {\line (-5,3){50}} \put(50,0) {\line (5,3){50}} \put(0,30){\circle*{10}} \put (-8,30) {\makebox(-1,1)[r]{$V_1$}} \put(0,30) {\line (0,1){50}} \put(100,30){\circle*{10}} \put (108,30) {\makebox(-1,1)[l]{$V_2$}} \put(100,30) {\line (0,1){50}} \put(0,80){\circle*{10}} \put (-8,80) {\makebox(-1,1)[r]{$V_2$}} \put(0,80) {\line (5,3){50}} \put(100,80){\circle*{10}} \put (108,80) {\makebox(-1,1)[l]{$V_1$}} \put(100,80) {\line (-5,3){50}} \put(50,110){\circle*{10}} \put (58,110) {\makebox(-1,1)[l]{$D$}} \put(225,0){\circle*{10}} \put (233,0) {\makebox(-1,1)[l]{$S$}} \put(225,0) {\line (5,3){50}} \put(225,0) {\line (-5,3){50}} \put(175,30){\circle*{10}} \put (170,30) {\makebox(-1,1)[br]{$V_1$}} \put(175,30) {\line (0,1){50}} \put(275,30){\circle*{10}} \put (280,30) {\makebox(-1,1)[bl]{$V_2$}} \put(275,30) {\line (0,1){50}} \put(175,80){\circle*{10}} \put (170,80) {\makebox(-1,1)[br]{$V_2$}} \put(175,80) {\line (5,3){50}} \put(275,80){\circle*{10}} \put (280,80) {\makebox(-1,1)[bl]{$V_1$}} \put(275,80) {\line (-5,3){50}} \put(225,110){\circle*{10}} \put (233,110) {\makebox(-1,1)[l]{$D$}} \put(225,55){\circle*{10}} \put (225,65) {\makebox(-1,1)[b]{$V_3$}} \put(175,30){\line(2,1){100}} \put (50,-30) {\makebox(-1,1){Net A}} \put (225,-30) {\makebox(-1,1){Net B}} \end{picture} \caption{Hex network with $V_1 = 10 x \;\; ; \;\; V_2 = 50 + x \;\; ; \;\; V_3 = 10 + x$} \label{fig:hex} \end{figure} However consider what happens when six travelers are on the highways in net B. 
If each node uses an ISPA, then at equilibrium each of the three possible paths contains two travelers.\footnote{We have in mind here the Nash equilibrium for this problem, where no traveler (or equivalently, no router) can gain advantage by changing strategies.} Due to overlaps in the paths, however, this results in each traveler incurring a cost of 92 units, which is higher than what they incurred {\em before} the new highway was built. The net effect of adding a new road is to increase the cost incurred by {\em every} traveler. This phenomenon is known as Braess' paradox. \subsection{The Suboptimality of Load-Balancing} \label{sec:lb} As mentioned before, LB considers side-effects of current routing decisions on other traffic currently being routed. However because it does not consider side-effects of routing decisions on future traffic, even LB may not optimize global cost averaged across all time, depending on the details of the system. Here we present an existence proof of this, by explicitly constructing a situation where conventional LB is suboptimal. Consider a system with discrete time, in which the node $X$ under consideration must route one packet to the (fixed) destination at each time step (cf. Section~\ref{sec:subopt} above). Presume further that no traffic enters any of the nodes $X$ sends to except for $X$. (So that traffic coming from $X$ is the sole source of any costs associated with $X$'s outbound links.) Let $S(t)$ be the number of times our node sent a packet down some link $A$ in the $W$ time steps preceding $t$, and take $s(t)=A,B$ to mean that the router uses link $A$ or $B$, respectively, at time $t$. Model queue backups and the like by having the cost to send a packet down link $A$ at time $t$ be $C_A(S(t) / W)$, and have the cost for our router to instead send the packet down link $B$ be $C_B(1 - S(t) / W)$. For simplicity we assume that both $C_A(.)$ and $C_B(.)$ are monotonically increasing functions of their arguments. Restrict attention to nodes that work by having $s(t) = A$ iff $S(t) \le k$ for some real-valued threshold $k$. The LB algorithm will choose $s(t) = A$ iff $C_A(S(t) / W) \le C_B(1 - S(t) / W)$. So the LB algorithm's behavior is indistinguishable from this kind of threshold algorithm, with $k$ set so that $C_A(k / W) = C_B(1 - k/W)$. (We implicitly assume that $C_A(.)$ and $C_B(.)$ are chosen so that such a solution exists for $1<k<W-1$.) The question is what $k$ should be to optimize total averaged cost across time, and in particular if that $k$ is the same as $k_{LB}$, the $k$ that LB uses. Now as we go from one time step to the next, the routing decision made $W$ time steps ago drops out of the computation of $S(t)$, while the routing decision just made is newly included. In general, $S(t+1)= S(t) + 1$ if the router just used $A$ at time $t$ and used link $B$ at the time $W$ time steps into the past. On the other hand, $S(t+1)=S(t) -1$ if the router just used $B$ and used $A$ $W$ time steps ago, while $S(t+1)=S(t)$ if the routing decision just made is the same as the routing decision $W$ time steps ago. So in general, $S(t)$ can only change by -1, 0, or +1 as we go from one time step to the next. Consider cases where $1 < k < W-1$, so that eventually the router must choose an $A$, and at some subsequent time $t^*$ the router switches from $A$ to $B$. At that time $s(t^*-1)=A$ and $s(t^*)=B$. This implies that $S(t^* -1) \le k, S(t^*) > k$. Define the value $S(t^* - 1)$ as $k^*$. Note that $S(t^*) = k^* + 1$, and $k- 1 < k^* \le k$.
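These window dynamics are straightforward to simulate directly. The sketch below uses an arbitrary illustrative window size and threshold (the cost functions do not enter once $k$ is fixed); after a transient, $S(t)$ settles onto two adjacent values, as the next paragraphs establish analytically.
\begin{verbatim}
from collections import deque

# Illustrative values only; the threshold k plays the role of k_{LB}.
W, k = 1000, 618
window = deque(['B'] * W, maxlen=W)   # the W most recent routing decisions
S, visited = 0, set()                 # S(t): number of A's in the window
for t in range(20 * W):
    s = 'A' if S <= k else 'B'        # threshold rule for s(t)
    dropped = window[0]               # s(t - W), about to leave the window
    window.append(s)                  # deque discards `dropped` automatically
    S += (s == 'A') - (dropped == 'A')
    if t > 5 * W:                     # ignore the initial transient
        visited.add(S)
print(sorted(visited))                # two adjacent values: {k*, k*+1}
\end{verbatim}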
Now for any time $t'$, if $S(t') = k^*+1$, then $s(t') = B$, and the only possible next values are $S(t'+1) = k^*$ or $S(t'+1) = k^*+1$, depending on the old decision $s(t'-W)$ that gets dropped out of the window. Similarly, if $S(t') = k^*$, then $s(t') = A$, and the only possible next values are $S(t'+1) = k^*$ or $S(t'+1) = k^* + 1$, again depending on the old decision being dropped. So we see that once $S(t') \in \{k^*, k^*+1\}$, it stays there forever. This means that because of the relationship between $k$ and $k^*$, in any interval of $W$ consecutive time steps subsequent to $t^*$, the number of packets sent along $A$ by router $X$ must be $\in (k-1, k+1]$. (Note that it is possible to send $k+1$ packets along $A$, but not $k-1$ packets.) Therefore the number sent along $B$ must be $\in [W - (k+1), W - (k-1))$. Each time that a packet is sent along $A$ the cost incurred is the cost of link $A$ with average traffic level $S(t)/ W$, $C_A(S(t)/W)$. Similarly, each time the link $B$ is chosen, the cost incurred is $C_B(1 - S(t) / W)$. Since $S(t) \in \{k^*, k^*+1\}$, and both $C_A(.)$ and $C_B(.)$ are monotonically increasing, the cost for sending the packet down link $A \in (C_A((k-1)/W), C_A((k+1)/W)]$, and that for sending it down link $B$ is contained in $[C_B(1 - (k+1)/W), C_B(1 - (k-1)/W))$. Now we know that the choice of $A$ must have average frequency (across all time) between $k^*/W$ and $(k^*+1)/W$. Similarly, $B$ will have average frequency between $(1 - (k^*+1)/W)$ and $1-k^*/W$. Accordingly, the average cost is bounded above by \begin{eqnarray} \frac{k^*+1}{W} C_A\left(\frac{k+1}{W}\right) \;\;+\;\; \left(1- \frac{k^*}{W}\right) C_B\left(1 - \frac{k-1}{W}\right) \;, \label{eq:upper1} \end{eqnarray} where the first term provides the maximum possible average cost for using link $A$, while the second term independently provides the maximum possible average cost for using link $B$. (Note that the actual cost will be lower since the two frequencies in this bound, one for $A$ and one for $B$, cannot both have the values indicated.) Because $k-1 < k^* \le k$ and since $1-\frac{k-1}{W} = 1+\frac{2}{W}-\frac{k+1}{W}$, our upper bound is itself bounded above by \begin{eqnarray} \frac{k+1}{W} C_A\left(\frac{k+1}{W}\right) \;\;+ \;\; \left(1 + \frac{2}{W} - \frac{k+1}{W}\right) C_B \left(1 + \frac{2}{W} - \frac{k+1}{W}\right) \;. \label{eq:upper2} \end{eqnarray} The optimal $k$ will result in an average cost lower than the minimum over all $k$ of the upper bound on average cost, given in Equation~\ref{eq:upper2}. So the average cost for the optimal $k$ is bounded above by the minimum over $k$ of this upper bound. Label this argmin of Equation~\ref{eq:upper2} $k'$. Since values of $k$ other than $k_{LB}$ can result in behavior equivalent to LB's, it does not suffice to simply test whether $k' = k_{LB}$. Instead let us evaluate some lower bounds in a similar fashion to how we evaluated upper bounds. Using the average frequencies discussed above, the average cost is bounded below by: \begin{eqnarray} \frac{k^*}{W} C_A\left(\frac{k-1}{W}\right) \;+\; \left(1-\frac{1}{W} - \frac{k^*}{W}\right) C_B\left(1 - \frac{k+1}{W}\right) \;, \label{eq:low1} \end{eqnarray} where the first term provides the minimum possible average cost for using link $A$, while the second term provides the minimum possible average cost for using link $B$.
Again, because $k-1 < k^* \le k$, the term in Equation~\ref{eq:low1} is further bounded below by \begin{eqnarray} \frac{k-1}{W} C_A\left(\frac{k-1}{W}\right) \;\;+ \;\; \left(1-\frac{2}{W} - \frac{k-1}{W}\right) C_B\left(1 -\frac{2}{W} - \frac{k-1}{W}\right) \;. \label{eq:low2} \end{eqnarray} In particular this bound holds for the average cost of the LB algorithm: \begin{eqnarray} \frac{k_{LB}-1}{W} C_A\left(\frac{k_{LB}-1}{W}\right) \;\;+ \;\; \left(1-\frac{2}{W} - \frac{k_{LB}-1}{W}\right) C_B\left(1 - \frac{2}{W} - \frac{k_{LB}-1}{W}\right) \; , \label{eq:lb} \end{eqnarray} \noindent where as before $k_{LB}$ satisfies $C_A(k_{LB} / W) = C_B(1 - k_{LB}/W)$. By appropriate choice of $C_A(.)$ and $C_B(.)$, we can ensure that the lower bound on the cost with the LB algorithm (Equation~\ref{eq:lb} evaluated with $k = k_{LB}$) is higher than the upper bound on the average cost incurred by the optimal algorithm (the minimum over $k$ of Equation~\ref{eq:upper2}).\footnote{For example, for $C_A(x) = x^2$ and $C_B(x) = x$, balancing the loads on $A$ and $B$ --- setting $C_A(S(t)/W) = C_B(1-S(t)/W)$ --- results in $(S(t)/W)^2 = 1 - S(t)/W$, leading to $k_{LB} / W = \frac{\sqrt{5} - 1}{2} = .618$. For $W = 1000$, the associated lower bound on average cost (Equation~\ref{eq:lb}) is $.617(.617)^2 + (.998 - .617)^2 = .380$. On the other hand, with $C_A$ and $C_B$ given as above, Eq~\ref{eq:upper2} is $(\frac{k+1}{W})^3 \; + \; (1 + \frac{2}{W} - \frac{k+1}{W})^2$. Differentiating with respect to $k$ and setting the result to zero leads to $\frac{k'}{W} = -\frac{1}{3} - \frac{1}{W} + \frac{\sqrt{28 + 48/W}}{6}$. For a window size of $W=1000$, this yields $k'/W = .548$, a different result than $k_{LB}/W$. Plugging into Equation~\ref{eq:upper2}, the upper bound on the performance with $k'$ is $(.549)^3 + (1.002 - .549)^2 = .371$, which is less than $.380$.} That is, the best possible average cost achievable by load-balancing will be worse than the worst average cost that could arise through the optimal routing strategy. This establishes that LB does not engage in optimal routing. \section{COIN-based Routing} \label{sec:coin} One common solution to these types of side-effect problems is to have particular components of the network (e.g., a ``network manager'' \cite{kola95}) dictate certain choices to other nodes. This solution can incur major brittleness and scaling problems however. Another kind of approach, which avoids the problems of a centralized manager, is to provide the nodes with extra incentives that can induce them to take actions that are undesirable to them from a strict SPA sense. Such incentives can take the form of ``taxes'' or ``tolls'' added to the costs associated with traversing particular links to discourage the use of those links. Such schemes in which tolls are superimposed on the nodes' goals are a special case of the more general approach of replacing the goal of each node with a new goal. These new goals are specifically tailored so that if they are collectively met the system maximizes throughput. {\it A priori}, a node's goal need have no particular relation with the SPA-type cost incurred by that node's packets. Intuitively, in this approach, we provide each node with a goal that is ``aligned'' with the global objective, with no separate concern for that goal's relation to the SPA-type cost incurred by the traffic routed by that node. In this section, we first summarize the theory of such systems, which are called COllective INtelligences (COIN's)~\cite{wowh99a,wotu99b}.
We then use that theory to justify an algorithm that only uses limited knowledge of the state of the network (in particular knowledge that is readily available to routers in common real data networks) to make routing decisions. At each router, this algorithm uses a Memory Based (MB) machine learning algorithm to estimate the value that a private utility (provided by COIN theory) would take on under the different candidate routing decisions. It then makes routing decisions aimed at maximizing that utility. (We call this algorithm an MB COIN.) \subsection{The COIN Formalism} \label{sec:math} In this paper we consider systems that consist of a set of nodes, connected in a network, evolving across a set of discrete, consecutive time steps, $t \in \{0, 1, ...\}$. Without loss of generality, we let all relevant characteristics of a node $\eta$ at time $t$ --- including its internal parameters at that time as well as its externally visible actions --- be encapsulated by a Euclidean vector $\underline{\zeta}_{\eta,t}$ with components $\underline{\zeta}_{\eta,t;i}$. We call this the ``state'' of node $\eta$ at time $t$, and let $\underline{\zeta}_{,t}$ be the state of all nodes at time $t$, while $\underline{\zeta}$ is the state of all nodes across all time. {\bf World utility}, $G(\underline{\zeta})$, is a function of the state of all nodes across all time. (Note that that state is a Euclidean vector.) When $\eta$ is an agent that uses a machine learning (ML) algorithm to ``try to increase'' its {\bf private utility}, we write that private utility as $g_{\eta}(\underline{\zeta})$, or more generally, to allow that utility to vary in time, $g_{\eta,\tau}(\underline{\zeta})$. We assume that $\underline{\zeta}$ encompasses all physically relevant variables, so that the dynamics of the system is deterministic (though of course imprecisely known to anyone trying to control the system). Note that this means that {\it all} characteristics of an agent $\eta$ at $ t = 0$ that affect the ensuing dynamics of the system must be included in $\underline{\zeta}_{\eta,0}$. For ML-based agents, this includes in particular the algorithmic specification of its private utility, typically in the physical form of some computer code. (As elaborated in \cite{wotu99b} the mathematics can be generalized beyond ML-based agents.) Here we focus on the case where our goal, as COIN designers, is to maximize world utility through the proper selection of private utility functions. Intuitively, the idea is to choose private utilities that are aligned with the world utility, and that also have the property that it is relatively easy for us to configure each node so that the associated private utility achieves a large value. In this paper, we restrict attention to utilities of the form $\sum_{t \ge \tau} R_{t}(\underline{\zeta}_{,t})$ for {\bf reward functions} $R_t$ (simply $\sum_{t} R_{t}(\underline{\zeta}_{,t})$ for non-time-varying utilities). From now on, we will only consider world utilities whose associated set of \{$R_t$\} are all time-translations of one another. In particular, as shown below, overall network throughput is expressible this way. We need a formal definition of the concept of having private utilities be ``aligned'' with $G$. Constructing such a formalization is a subtle exercise. For example, consider systems where the world utility is the sum of the private utilities of the individual nodes. This might seem a reasonable candidate for an example of ``aligned'' utilities.
However such systems are examples of the more general class of systems that are ``weakly trivial''. It is well-known that in weakly trivial systems each individual agent greedily trying to maximize its own utility can lead to the tragedy of the commons (TOC)~\cite{hard68,crow69} and actually {\it minimize} $G$. In particular, this can be the case when private utilities are independent of time and $G = \sum_{\eta} g_{\eta}$. Evidently, at a minimum, having $G = \sum_{\eta} g_{\eta}$ is not sufficient to ensure that we have ``aligned'' utilities; some alternative formalization of the concept is needed.\footnote{Note that in the simple network discussed in Section~\ref{sec:ispa}, the utilities are weakly trivial, since $G(\vec{x}, \vec{y}) = g_X(\vec{x}) + g_Y(\vec{y})$. This provides another perspective on the suboptimality of ISPA in that network.} A more careful alternative formalization of the notion of aligned utilities is the concept of ``factored'' systems. A system is {\bf factored} at time $\tau$ when the following holds for each agent $\eta$ individually: A change at time $\tau$ to the state of $\eta$ alone, when propagated across time, will result in an increased value of $g_{\eta,\tau}(\underline{\zeta})$ if and only if it results in an increase for $G(\underline{\zeta})$~\cite{wotu99b}. For a factored system, the side-effects of a change to $\eta$'s $t = \tau$ state that increases its private utility cannot decrease world utility. There are no restrictions though on the effects of that change on the private utilities of other nodes and/or times. In particular, we don't preclude a node's algorithm at two different times from ``working at cross-purposes'' to each other, so long as at both moments the node is working to improve $G$. In game-theoretic terms, optimal global behavior corresponds to the agents' reaching a private utility Nash equilibrium for such systems~\cite{futi91}. In this sense, there can be no TOC for a factored system. As a trivial example, a system is factored for $g_{\eta,\tau} = G \; \forall \eta$. Define the {\bf effect set} of the node-time pair $(\eta,\tau)$ at $\underline{\zeta}$, $C^{eff}_{(\eta,\tau)}(\underline{\zeta})$, as the set of all components $\underline{\zeta}_{\eta',t}$ which under the forward dynamics of the system have non-zero partial derivative with respect to the state of node $\eta$ at $t=\tau$. Intuitively, $(\eta,\tau)$'s effect set is the set of all components $\underline{\zeta}_{\eta',t \ge \tau}$ which would be affected by a change in the state of node $\eta$ at time $\tau$. (They may or may not be affected by changes in the $t=\tau$ states of the other nodes.) Next, for any set $\sigma$ of components $(\eta', t)$, define $\mbox{CL}_\sigma(\underline{\zeta})$ as the ``virtual'' vector formed by clamping the $\sigma$-components of $\underline{\zeta}$ to an arbitrary fixed value. (In this paper, we take that fixed value to be $\vec{0}$ for all components listed in $\sigma$.) The value of the effect set {\bf wonderful life utility} (WLU for short) for $\sigma$ is defined as: \begin{equation} WLU_{\sigma}(\underline{\zeta}) \equiv G(\underline{\zeta}) - G(\mbox{CL}_{\sigma}(\underline{\zeta})) . \end{equation} \noindent In particular, we are interested in the WLU for the effect set of node-time pair $(\eta,\tau)$. This WLU is the difference between the actual world utility and the virtual world utility where all node-time pairs that are affected by $(\eta,\tau)$ have been clamped to a zero state while the rest of $\underline{\zeta}$ is left unchanged. Since we are clamping to $\vec{0}$, we can view $(\eta,\tau)$'s effect set WLU as analogous to the change in world utility that would have arisen if $(\eta,\tau)$ ``had never existed''. (Hence the name of this utility --- cf. the Frank Capra movie.) Note, however, that $\mbox{CL}$ is a purely ``fictional'', counter-factual operator, in the sense that it produces a new $\underline{\zeta}$ without taking into account the system's dynamics. The sequence of states the node-time pairs in $\sigma$ are clamped to in constructing the WLU need not be consistent with the dynamical laws of the system. This dynamics-independence is a crucial strength of the WLU. It means that to evaluate the WLU we do {\it not} try to infer how the system would have evolved if node $\eta$'s state were set to $\vec{0}$ at time $\tau$ and the system evolved from there. So long as we know $\underline{\zeta}$ extending over all time, and so long as we know $G$, we know the value of WLU.
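For concreteness, here is a minimal sketch of evaluating an effect set WLU from a recorded trajectory; the data layout and all names are ours. As just emphasized, clamping merely zeroes components of the recorded trajectory and re-evaluates $G$; no re-simulation of the dynamics is involved.
\begin{verbatim}
import copy

def wlu(trajectory, world_utility, effect_set):
    """Effect set Wonderful Life Utility of a recorded trajectory.

    trajectory    : dict mapping (node, t) -> state vector (list of floats)
    world_utility : function taking such a dict and returning G
    effect_set    : set of (node, t) pairs to clamp to the zero vector
    """
    clamped = copy.deepcopy(trajectory)
    for key in effect_set:
        # Counter-factual clamping: zero the component, do NOT re-simulate.
        clamped[key] = [0.0] * len(clamped[key])
    return world_utility(trajectory) - world_utility(clamped)
\end{verbatim}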
Assuming our system is factored with respect to private utilities \{$g_{\eta,\tau}$\}, we want each node to be in a state at time $\tau$ that induces as high a value of the associated private utility as possible (given the initial states of the other nodes). Assume $\eta$ is ML-based and able to achieve fairly large values of most private utilities we are likely to assign it for time $\tau$, i.e., assume that given that private utility $g_{\eta,\tau}$, the rest of the components of $\underline{\zeta}_{\eta,\tau}$ are set by $\eta$'s algorithm in such a way as to achieve a relatively high value of $g_{\eta,\tau}$. So our problem becomes determining for what \{$g_{\eta,\tau}$\} the nodes will best be able to achieve high $g_{\eta}$ (subject to each other's actions) while also causing dynamics that is factored for $G$ and the \{$g_{\eta,\tau}$\}. As mentioned above, regardless of the system dynamics, having $g_{\eta,\tau} = G \; \forall \eta$ means the system is factored at time $\tau$. It is also true that regardless of the dynamics, $g_{\eta,\tau} = WLU_{C^{eff}_{(\eta,\tau)}} \; \forall \eta$ gives a factored system at time $\tau$ (proof in~\cite{wotu99b}). Which of these two choices of the $\{g_{\eta,\tau}\}$ should we use? To answer this, note that since each agent is operating in a large system, it may experience difficulty discerning the effects of its actions on $G$ when $G$ sensitively depends on all the myriad components of the system. Therefore each $\eta$ may have difficulty learning from past experience what to do to achieve high $g_{\eta,\tau}$ when $g_{\eta,\tau} = G$.\footnote{In particular, in the routing problem, having private rewards given by the world reward functions means that to provide each router with its reward at each time step we need to provide it the full throughput of the entire network at that step. This is usually infeasible in practice.
Even if it weren't though, using these private utilities would mean that the routers face a very difficult task in trying to discern the effect of their actions on their rewards, and therefore would likely be unable to learn their best routing strategies.} This problem can be mitigated by using the effect set WLU as the private utility, since the subtraction of the clamped term removes much of the ``noise'' of the activity of other agents, leaving only the underlying ``signal'' of how the agent in question affects the utility. (This reasoning is formalized as the concept of ``learnability'' in~\cite{wotu99b}.) Accordingly, one would expect that setting private utilities to WLU's ought to result in better performance than having $g_{\eta,\tau} = G \; \forall \eta,\tau$. In practice, we will often only be able to estimate the ``primary'', most prominent portion of the effect set. However, assuming that the associated WLU is close enough to being factored, we would expect the advantage in learnability with such a WLU to still result in better performance than would using $g_{\eta,\tau} = G \; \forall \eta,\tau$. (See~\cite{wowh99a,wotu99b}.) Indeed, for the sake of improving learnability, often we will elect to exclude certain node-time pairs from our estimate of the effect set of $(\eta,\tau)$, even if we are sure that they are affected by $\underline{\zeta}_{\eta,\tau}$. This will be the case if we expect that the changes in $G$ due to varying $\underline{\zeta}_{\eta,\tau}$ that are ``mediated'' through those node-time pairs are relatively insignificant, and therefore effectively constitute noise for the learning process, so that their effect on learnability is more important than their effect on factoredness. \subsection{Model Description} \label{sec:model} To apply the COIN formalism to a network routing model, we must formally identify the components of that model as deterministically evolving vectors $\underline{\zeta}_{,t}$. In the model used in this paper, at any time step all traffic at a router is a set of pairs of integer-valued traffic amounts and associated ultimate destination tags. At each such time step $t$, each router $r$ sums the integer-valued components of its current traffic at that time step to get its {\bf instantaneous load}. We write that load as $z_r(t) \equiv \sum_d x_{r,d}(t)$, where the index $d$ runs over ultimate destinations, and $x_{r,d}(t)$ is the total traffic at time $t$ going from $r$ towards $d$. After its instantaneous load at time $t$ is evaluated, the router sends all its traffic to the next downstream routers, in a manner governed by its routing algorithm. We indicate such ``next routers'' by writing $x_{r,d}(t) = \sum_{r'} x_{r,d,r'}(t)$, where $r'$ is the first stop on the path to be followed from router $r$ to ultimate destination $d$. After all such routed traffic goes to those next downstream routers, the cycle repeats itself, until all traffic reaches its destinations. In our simulations, for simplicity, traffic was only introduced into the system (at the {\bf source routers}) at the beginning of successive {\bf waves} of $L$ consecutive time steps. ($L$ was always chosen to be the minimal number necessary for all traffic to reach its destination before the next wave of traffic is initiated.) We use $\kappa(t)$ to indicate either the integer-valued wave number associated with time $t$ or the set of all times in that wave, as the context indicates.
In a real network, the cost of traversing a router does not change dramatically from one packet to the next. To simulate this effect, we use time-averaged values of the load at a router rather than the instantaneous load to determine the cost a packet incurs in traversing that router. More formally, we define the router's {\bf windowed load}, $Z_r(t)$, as the running average of that router's load value over a window of the previous $W$ timesteps: $Z_r(t) \equiv \frac{1}{W} \sum_{t'=t - W + 1}^{t} z_r(t') = \sum_{d'} X_{r,d'}(t)$, where the value of $X_{r,d}(t)$ is set by the dynamical law $X_{r,d}(t) = \frac{1}{W} \sum_{t' = t - W + 1}^t x_{r,d}(t')$. ($W$ is always set to an integer multiple of $L$.) For large enough $W$, using such a window means that in a typical scenario the costs across nodes will only change substantially over time scales significantly larger than that of the individual routing decisions. The windowed load is the argument to a {\bf load-to-cost} function, $V_r(\cdot)$, which provides the {\bf cost} accrued at time $t$ by each packet traversing the router at this timestep. That is, at time $t$, the cost for each packet to traverse router $r$ is given by $V_r(Z_r(t))$.\footnote{Note that in our model, the costs are accrued at the routers, not the links. Also note that for simplicity we do not physically instantiate the cost as a temporal delay in crossing a router.} (We also introduce ``dummy nodes'', denoted by $V_0(\cdot) = 0$, which help in translating the mathematics into the simulations; omitting them has no effect on the simulation results.) Different routers have different $V_r(\cdot)$, to reflect the fact that real networks have differences in router software and hardware (response time, queue length, processing speed etc.). For simplicity, $W$ is the same for all routers however. With these definitions, world utility is given by \begin{eqnarray} G(\underline{\zeta}) &=& \sum_{t,r} \; z_r(t) \; V_r(Z_r(t)) \nonumber \\ &=& \sum_{t,r,d} x_{r,d}(t) \; V_r(Z_r(t)) \nonumber \\ &=& \sum_{t,r,d} x_{r,d}(t) \; V_r\left(\sum_{d'} X_{r,d'}(t)\right) \;. \end{eqnarray} Our equation for $G$ explicitly demonstrates that, as claimed above, in our representation we can express $G(\underline{\zeta})$ as a sum of rewards, $\sum_t R_t(\underline{\zeta}_{,t})$, where $R_t(\underline{\zeta}_{,t})$ can be written as a function of a pair of $(r,d)$-indexed vectors: $R_t(x_{r,d}(t), X_{r,d}(t)) = \sum_{r,d} x_{r,d}(t) V_r(\sum_{d'} X_{r,d'}(t))$. Also as claimed, the $R_t$ are temporal translations of one another.
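As an illustration, these definitions translate directly into code. The following minimal sketch uses our own names and a nested-dictionary layout for the traffic variables; it is not the simulator itself.
\begin{verbatim}
# x[r][d][t]: traffic at router r heading to ultimate destination d
# at time t.  V[r]: load-to-cost function of router r.

def instantaneous_load(x, r, t):
    # z_r(t) = sum_d x_{r,d}(t)
    return sum(x[r][d][t] for d in x[r])

def windowed_load(x, r, t, W):
    # Z_r(t): running average of z_r over the previous W time steps
    return sum(instantaneous_load(x, r, tp)
               for tp in range(t - W + 1, t + 1)) / W

def world_utility(x, V, T, W):
    # G = sum_{t,r} z_r(t) * V_r(Z_r(t)); t starts once a full
    # window of history is available.
    return sum(instantaneous_load(x, r, t) * V[r](windowed_load(x, r, t, W))
               for t in range(W - 1, T)
               for r in x)
\end{verbatim}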
Given this model, some of the components of $\underline{\zeta}_{,t}$ must be identified with the values $x_{r,d,r'}(t) \; \forall \; r, d, r'$ and $t$, since those are the actions we will take. Since all arguments of $G$ must be components of $\underline{\zeta}$, we also include the $X_{r,d}(t) \; \forall r,d,t$ as components of $\underline{\zeta}_{,t}$. (We could use the \{$Z_r(t)$\} as an alternative, but this would provide a ``coarser'' WLU; see below.) Formally, for routing based on ML agents, other variables must also be included in $\underline{\zeta}$, to capture the (deterministically evolving) internal parameters used by those agents to make their routing decisions. We won't have any need to explicitly delineate such variables here however, and will mostly phrase the discussion as though there were no such internal parameters. Now the values \{$x_{r,d,r'}(t-1)\} \; \forall r,d,r'$ specify the values \{$x_{r,d}(t)\} \; \forall r, d$ directly. However the decisions of each router's algorithm at all times $t$ are a fixed function of the \{$x_{r,d}(t-1)$\} and the \{$Z_{r}(t-1) = \sum_{d'} X_{r,d'}(t-1)$\}, a function given by the routing algorithm which is implicitly encapsulated in the dynamical laws governing the system. So in point of fact we can map the set of \{$x_{r,d,r'}(t-1)\} \; \forall r,d,r'$ to the full set \{$x_{r,d,r'}(t)\} \; \forall r,d,r'$, not just to \{$x_{r,d}(t)$\}. Accordingly, the $x_{r,d,r'}$ undergo deterministic evolution. Since their values across time set all the values of the $X_{r,d}(t)$ across time, we see that the entire set of the components of $\underline{\zeta}_{,t}$ undergoes deterministic evolution in this representation, as required. For evaluating the wonderful life utility we will need to group the components of $\underline{\zeta}_{,t}$ into disjoint nodes $\eta$. Here we will have two types of node, both types being indexed by router-destination pairs. For each such node index $(r, d)$, the first node type is the variable $X_{r,d}(t)$, and the second node type is the Euclidean vector with components indexed by $r'$, $(x_{r,d})_{r'}(t)$. In setting ``actions'' we are concerned with setting the states of the nodes of the second type. Accordingly, our learners will all be associated with nodes of this second type. Unless explicitly indicated otherwise, from now on we will implicitly have that second type of node in mind whenever we refer to a ``node'' or use the symbol $\eta$. At time step $t$, ISPA has access to all the windowed loads at time step $t - 1$ (i.e., it has access to $Z_{r}(t-1) \; \forall r$), and assumes that those values will remain the same at all times $\ge t$. (Note that for large window sizes and times close to $t$, this assumption is arbitrarily accurate.) Using this assumption, in ISPA, each router sends packets along the path that it calculates will minimize the costs accumulated by its packets. \subsection{COIN Routing} \label{sec:coinroute} Based on the COIN formalism presented in Section~\ref{sec:math} and the model described above, we now present the COIN-based routing algorithms. To evaluate the WLU for a node $(r,d)$ at any time $\tau$, we must estimate the (primary members of the) associated effect set. This means determining what components of $\underline{\zeta}$ will, under the dynamics of the system, be changed by altering any of the components of the vector $x_{r,d}(\tau)$. As a first approximation, we will ignore effects that changing $x_{r,d}(\tau)$ may have that are ``mediated'' by the learning algorithms running in the system. That is, we ignore changes that arise due to the effects of changing $x_{r,d}(\tau)$ on rewards, changes which induce changes in future training sets, which then in turn get mapped to changes in the \{$x_{r,d,r'}(t)$\} (and therefore the \{$X_{r,d}(t)$\}) via the learning algorithms running on the nodes. As another approximation, we will ignore effects mediated by the routing algorithms' observations of the state of the network. That is, we ignore changes in the \{$x_{r'',d',r'''}(t)$\} that varying $x_{r,d}(\tau)$ may cause due to $(r'',d')$'s routing algorithm perceiving a different state of the network and modifying its routing decisions accordingly. We only consider the behavior of those routing algorithms that are (potentially) directly affected by $x_{r,d}(\tau)$ in that they (potentially) have to route packets that, at time $\tau$, passed through $r$ on the way to $d$.
So in particular we ignore effects of $x_{r,d}(\tau)$ on the \{$x_{r'',d' \neq d,r'''}(t)$\}. Since all packets routed in a wave arrive at their destinations by the end of the wave, these approximations mean that the only $x_{r'',d'',r'''}(t)$ that are in our estimate for $x_{r,d}(\tau)$'s effect set have $t$ in the same wave as $\tau$. (These are the only ones that are, potentially, directly affected by the \{$x_{r,d,r'}(t)$\}, by ``chaining together'' the sequence of $x_{r'',d'',r'''}(t)$ that get the packets in $x_{r,d}(t)$ to their ultimate destination.) Due to the wave nature of our simulations though, the only $x_{r'',d'',r'''}(t)$ within $\tau$'s wave that are affected by $x_{r,d}(\tau)$ all have $d'' = d$. For reasons of coding simplicity, we do not concern ourselves with whether $t < \tau$ within a given wave and then exclude some $x_{r'',d'',r'''}(t)$ accordingly. In other words, all $t$ within $\tau$'s wave are treated equally. So one set of members of $x_{r,d}(\tau)$'s effect set is \{$x_{r'',d,r'''}(t) \; \forall r'',d,r''',t \in \kappa(\tau)$\}. Note that some of these members will be relatively unaffected by $x_{r,d}(\tau)$ (e.g., those with $r''$ far away from $r$ in the network). Again for simplicity, we do not try to determine these and exclude them. As with keeping the $x_{r'',d,r'''}(t<\tau)$, this inclusion of extra nodes in our estimate of the effect set should hurt learnability, but in general should not hurt factoredness. Therefore it should delay how quickly the learners determine their optimal policies, but it won't affect the quality (for $G$) of the policies finally arrived at. Note also that trying to determine whether some particular $x_{r'',d,r'''}(t \in \kappa(\tau))$ should be included in $x_{r,d}(\tau)$'s effect set would mean, in part, determining whether packets routed from $(r,d)$ would have reached $r''$ if $(r,d)$ had made some routing decision different from the one it actually made. This would be a non-trivial exercise, in general. In contrast to the case with the $x_{r'',d',r'''}(t)$, there are $X_{r'',d'}(t)$ with $t$ in the future of $\tau$'s wave that both are affected by $x_{r,d}(t)$ and also are not excluded by any of our approximations so far. In particular, the $X_{r'',d}(t)$ with either $r'' = r$ or $r''$ one hop away from $r$ will be directly affected by $x_{r,d}(t)$, for $t \in \cup_{i=0}^{W-1} \kappa(\tau + iL)$ (cf. the definition of the $X$ variables). For simplicity, we restrict consideration of such $X_{r'',d}$ variables to those with the same router as $r$, i.e., $r'' = r$. This final estimate for the effect set is clearly rather poor --- presumably results better than those presented below would accrue to use of a more accurate effect set. However it's worth bearing in mind that there is a ``self-stabilizing'' nature to the choice of effect sets, when used in conjunction with effect set WLU's. This nature is mediated by the learning algorithms. If we take two nodes and give them the same utility function, then the reward one node gets will be determined in part by what the other one does. So as it modifies its behavior to try to increase its reward, that first node will be modifying its behavior in a way dependent on what the other node does. In other words, if two nodes are given the same WLU because they are estimated to be in each other's effect set, then {\it ipso facto} they will be in each other's effect set.
Using our estimate for the effect set, the WLU for $(\eta,\tau)$ is given by the difference between the total cost accrued in $\tau$'s wave by all nodes in the network and the cost accrued by nodes when all nodes sharing $\eta$'s destination are ``erased.'' More precisely, any node $\eta$ that has a destination $d$ will have the following effect set WLU, $g_{\eta,\tau}$: \begin{eqnarray} g_{\eta,\tau}(\underline{\zeta}) \! \! \! \! \! &=& G(\underline{\zeta}) - G(\mbox{CL}_{C^{eff}_{(\eta,\tau)}}(\underline{\zeta})) \label{eq:geta} \\ & = & \! \! \sum_{t,r',d'} \; x_{r',d'}(t) \; V_{r'}\left(\sum_{d''} X_{r',d''}(t)\right) \; - \sum_{t, r',d'} \; \left[ x_{r',d'}(t) (1 - I(t \in \kappa(\tau))I(d' = d)) \right] \nonumber \\ && \; \times \;\;\;\;\;\; V_{r'} \left( \sum_{d''} \;\; [\;X_{r',d''}(t)\;\;(1 - I(t \in \cup_{i=0}^{W-1} \kappa(\tau + iL)) I(d'' = d))\;] \right) \nonumber \\ & = & \! \! \sum_{t \in \kappa(\tau)} \sum_{r'} \; \left( \sum_{d'} \; x_{r',d'}(t) \; \; V_{r'}(\sum_{d''} X_{r',d''}(t)) \; - \; \sum_{d' \neq d} x_{r',d'}(t) \; V_{r'}(\sum_{d'' \neq d} X_{r',d''}(t)) \right) \nonumber \\ && + \! \! \sum_{t \in \cup_{i=1}^{W-1} \kappa(\tau + iL)} \sum_{r'} \left( \sum_{d'} x_{r',d'}(t) \;[V_{r'}(\sum_{d''} X_{r',d''}(t)) - V_{r'}(\sum_{d'' \neq d} X_{r',d''}(t))] \right) \label{eq:netwlu} \end{eqnarray} \noindent where $I(.)$ is the indicator function that equals 1 if its argument is true, and 0 otherwise. To allow the learner to receive feedback concerning its actions in a wave immediately following that wave rather than wait for $\sim WL$ time steps, we will approximate the second sum in that last equality, the one over times following $\tau$'s wave, as zero. There is another way we can view the resultant expression, rather than as an approximation to the effect set WLU. That is to view it as the exact WLU of an approximation to the effect set, an approximation which ignores effects on future windowed loads of clamping a current traffic level. Regardless of which view we adopt, presumably better performance could be achieved if we did not implement this approximation. Given this approximation, our WLU becomes a wave-indexed, time-translation-invariant WL ``reward function'' (WLR): \begin{eqnarray} g_{\eta,\tau}(\underline{\zeta}_{,t \in \kappa(\tau)}) = \! \! \sum_{t \in \kappa(\tau), r'} \! \left( \sum_{d'} \; x_{r',d'}(t) \; V_{r'}(\sum_{d''} X_{r',d''}(t)) \right. \nonumber \\ \! \! \! \! \! \! \! \! \! - \left. \sum_{d' \neq d} \; x_{r',d'}(t) \; \; V_{r'}(\sum_{d'' \neq d} X_{r',d''}(t)) \right) . \label{eq:wlr} \end{eqnarray} \noindent Notice that traffic going from a router $r' \neq r$ to a destination $d' \neq d$ affects the value of the WLR for node $(r,d)$. This reflects the fact that the WLR takes into account side-effects of $(r,d)$'s actions on other nodes. Note also that each $r'$-indexed term contributing to the WLR can be computed by the associated router $r'$ separately, from information available to that router. Subsequently those terms can be propagated through the network to $\eta$, in much the same way as routing table updates are propagated.
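For concreteness, the per-wave WLR can be evaluated directly from the recorded traffic and windowed loads, as in the following sketch (the function and variable names are ours; in practice each router $r'$ would compute its own term locally and the terms would then be propagated, as just described).
\begin{verbatim}
def wave_wlr(x, X, V, wave_times, d):
    # WLR for a node with destination d over one wave kappa(tau).
    # x[r][dd][t]: traffic at router r toward destination dd at time t;
    # X[r][dd][t]: the corresponding windowed quantities;
    # V[r]: load-to-cost function of router r.
    total = 0.0
    for t in wave_times:
        for r in x:
            full = (sum(x[r][dd][t] for dd in x[r])
                    * V[r](sum(X[r][dd][t] for dd in X[r])))
            erased = (sum(x[r][dd][t] for dd in x[r] if dd != d)
                      * V[r](sum(X[r][dd][t] for dd in X[r] if dd != d)))
            total += full - erased
    return total
\end{verbatim}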
Given this choice of private utility, we must next specify how the COIN-based routing algorithm collects the initial data that (in conjunction with this utility) is to be used to guide the initial routing decisions that every node with more than one routing option must make. In our experiments that data was collected during a preliminary running of an ISPA. In this preliminary stage, the routing decisions are made using the ISPA, but the resulting actions are ``scored'' using the WLR given by Equation~\ref{eq:wlr}.\footnote{We use the ISPA to generate the routing decisions in the initial data since it is likely in practice that some kind of SPA will be the routing algorithm running prior to ``turning on'' the COIN algorithm. Alternatively one can generate the initial data's routing decisions by having the routers make random decisions, or by having them implement a sequence of decisions that ``sweeps'' across a grid through the possible set of actions.} The data collected in this stage provides us with initial input-output training sets to be used by the machine learning algorithm on each node: for each router-destination node, inputs are identified with windowed loads on outgoing links, and the associated WLR values for the destination in question are the outputs. After sufficient initial data is collected using the ISPA, the system switches to using the COIN algorithm to make subsequent routing decisions. In this stage, each node routes packets along the link that it estimates (based on the training set) would provide the best WLR. To perform the estimation, the MB COIN makes use of a single-nearest-neighbor algorithm as its learner. This algorithm simply guesses that the output that would ensue from any candidate input is the same as the output of the element of the training set that is the nearest neighbor (in input space) of that candidate input.\footnote{This is a very simple learning algorithm, and we use it here only to demonstrate the potential practical feasibility of a COIN-based routing algorithm. The performance can presumably be improved if more sophisticated learning algorithms (e.g., Q-learning \cite{suba98,wada92}) are used.} In other words, the learner finds the training set input-output pair whose input value (loads on outgoing links) is closest to that which would result from each potential routing decision. Then the learner assigns the WLR associated with that training data pair as the estimate for what WLR would result from said routing decision. These WLR values are then used to choose among those potential routing decisions. The input-output data generated under this algorithm is added to the training set as it is generated. In this routing algorithm, the routers only estimate how their routing decisions (as reflected in their loads at individual time steps) will affect their WLR values (based on many nodes' loads). It is also possible to calculate {\it exactly} how the routing decisions affect the routers' WLR's if, unlike the MB COIN, we had full knowledge of the loads of all nodes in the system. In a way similar to ISPA, for each router we can evaluate the exact WLR value that would ensue from each of its candidate actions, under the assumption that windowed loads on all other routers are the same one wave into the future as they are now. We call this algorithm for directly maximizing the WLR the full knowledge COIN (FK COIN). Note that under the assumption behind the FK COIN, the action $\eta$ chooses in wave $\kappa(\tau)$ that maximizes the WLR will also maximize the world reward. In other words, the WL reward is perfectly factored with respect to (wave-indexed) world reward, even though the associated utilities are not related that way (due to inaccuracy in our estimate of the effect set). Due to this factoredness, the FK COIN is equivalent to load balancing on world rewards.
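A minimal sketch of the resulting decision rule is given below; all names are ours. We also fold in the ``steering'' mixture with the FK COIN that is introduced next (after acting, the observed input-output pair would be appended to the training set).
\begin{verbatim}
import math, random

def mb_coin_decision(candidates, training_set, fk_action, steering):
    # candidates  : dict mapping action -> predicted input (tuple of
    #               windowed loads on the outgoing links)
    # training_set: list of (input_tuple, observed_WLR) pairs
    # fk_action   : the action a full-knowledge COIN would take
    # steering    : probability of deferring to the FK COIN
    if random.random() < steering:
        return fk_action

    def estimated_wlr(inp):
        # single nearest neighbor in input space, Euclidean distance
        _, wlr = min(training_set,
                     key=lambda pair: math.dist(pair[0], inp))
        return wlr

    return max(candidates, key=lambda a: estimated_wlr(candidates[a]))
\end{verbatim}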
Since LB in general results in inferior performance compared to routing that is optimal over time, and since the FK COIN is equivalent to LB, one might expect that its performance is suboptimal. Intuitively, this suboptimality reflects the fact that one should not choose an action only with regard to its effect on the current reward, but also with concern for the reward of future waves. In the language of the COIN framework, this suboptimality can be viewed as a restatement of the fact that for our inexactly estimated effect set, the system will not be perfectly factored. The learning algorithm of the MB COIN as described is extraordinarily crude. In addition, the associated scheme for choosing an action is purely exploitative, with no exploration whatsoever. Rather than choose some particular more sophisticated scheme and tune it to fit our simulations, we emulated the use of more sophisticated algorithms {\it in general}. We did this by modifying the MB COIN algorithm to occasionally have the FK COIN determine a router's action rather than the purely greedy learner outlined above. The {\bf steering parameter} discussed in Section~\ref{sec:steer} determines how often the routing decision is based on the MB COIN as opposed to the FK COIN. \section{SIMULATION RESULTS} \label{sec:sim} Based on the model and routing algorithms discussed above, we have performed simulations to compare the performance of the ISPA and the MB COIN across a variety of networks, varying in size from five to eighteen nodes. In all cases traffic was inserted into the network in a regular, non-stochastic manner at the sources. The results we report are averaged over 20 runs. We do not report error bars as they are all lower than $0.05$. In Sections~\ref{sec:bootes}~-~\ref{sec:ray} we analyze traffic patterns over four networks where the ISPA suffers from Braess' paradox. In contrast, the MB COIN almost never falls prey to the paradox for those networks (indeed, for no network we have investigated is the MB COIN significantly susceptible to Braess' paradox). Then in Section~\ref{sec:steer} we discuss the effect on the MB COIN's performance of the ``steering'' parameter which determines the intelligence of the MB COIN.\footnote{In Sections~\ref{sec:bootes}~-~\ref{sec:ray}, the steering parameter is set at 0.5.} \subsection{Bootes Network} \label{sec:bootes} The first network type we investigate is shown in Figure~\ref{fig:bootes}. It is in many senses a trivial network. (Indeed, in Net A, the sources do not even have any choices to make.) The loads introduced at the sources do not change in time and are listed in Tables~\ref{tab:bootes2} and \ref{tab:bootes4}, along with the performances of our algorithms.
\begin{figure} [htb] \begin{picture}(100,140)(-70,-30) \put(50,0){\circle*{10}} \put (58,0) {\makebox(-1,1)[l]{$S_1$}} \put(50,0) {\line (-1,1){50}} \put(0,50){\circle*{10}} \put (-8,55) {\makebox(-1,1)[r]{$V_1$}} \put(100,50){\circle*{10}} \put (108,55) {\makebox(-1,1)[l]{$V_2$}} \put(0,50) {\line (1,1){50}} \put(100,50) {\line (-1,1){50}} \put(50,100){\circle*{10}} \put (58,100) {\makebox(-1,1)[l]{$D$}} \put(125,0){\circle*{10}} \put (132,0) {\makebox(-1,1)[l]{$S_2$}} \put(125,0) {\line (-1,2){25}} \put(25,25){\circle*{10}} \put (33,25) {\makebox(-1,1)[l]{$V_0$}} \put(112,25){\circle*{10}} \put (120,25) {\makebox(-1,1)[l]{$V_0$}} \put(225,0){\circle*{10}} \put (233,0) {\makebox(-1,1)[l]{$S_1$}} \put(225,0) {\line (-1,1){50}} \put(225,0) {\line (1,1){50}} \put(250,25){\circle*{10}} \put (257,25) {\makebox(-1,1)[l]{$V_3$}} \put(175,50){\circle*{10}} \put (167,50) {\makebox(-1,1)[r]{$V_1$}} \put(275,50){\circle*{10}} \put (283,50) {\makebox(-1,1)[l]{$V_2$}} \put(175,50) {\line (1,1){50}} \put(275,50) {\line (-1,1){50}} \put(225,100){\circle*{10}} \put (233,100) {\makebox(-1,1)[l]{$D$}} \put(300,0){\circle*{10}} \put (280,0) {\makebox(-1,1)[l]{$S_2$}} \put(300,0) {\line (-1,2){25}} \put(200,25){\circle*{10}} \put (208,25) {\makebox(-1,1)[l]{$V_0$}} \put(287,25){\circle*{10}} \put (295,25) {\makebox(-1,1)[l]{$V_0$}} \put (50,-30) {\makebox(-1,1){Net A}} \put (225,-30) {\makebox(-1,1){Net B}} \end{picture} \caption{Bootes Network} \label{fig:bootes} \end{figure} \begin{table}[htb] \centering \caption{Average Per Packet Cost for BOOTES2 networks for $V_1 = 10 + log(1 + x) \; ; \; V_2 = 4 x^2 \; ; \; V_3 = log(1 + x) $ .} \vspace*{.1in} \begin{tabular}{c|c|c|c} \hline Loads at $(S_1,S_2)$ & Net & ISPA & MB COIN \\ \hline \hline 1,1 & A & 6.35 & 6.35 \\ & B & 8.35 & 5.93 \\ \hline 2,1 & A & 8.07 & 8.07 \\ & B & 10.40 & 7.88 \\ \hline 2,2 & A & 9.55 & 9.55 \\ & B & 10.88 & 9.71 \\ \hline 4,2 & A & 10.41 & 10.41 \\ & B & 11.55 & 10.41 \\ \hline \end{tabular} \label{tab:bootes2} \end{table} \begin{table}[htb] \centering \caption{Average Per Packet Cost for BOOTES4 network for $V_1 = 50 + log(1 + x) \; ; \; V_2 = 10 x \; ; \; V_3 = log(1 + x) $ .} \vspace*{.1in} \begin{tabular}{c|c|c|c} \hline Loads at $(S_1,S_2)$ & Net & ISPA & MB COIN \\ \hline \hline 1,1 & A & 30.35 & 30.35 \\ & B & 20.35 & 20.35 \\ \hline 2,2 & A & 35.55 & 35.55 \\ & B & 40.55 & 34.99 \\ \hline 4,2 & A & 41.07 & 41.07 \\ & B & 50.47 & 44.13 \\ \hline 6,3 & A & 44.63 & 44.63 \\ & B & 51.40 & 44.63 \\ \hline \end{tabular} \label{tab:bootes4} \end{table} The MB COIN results are identical to the ISPA results in the absence of the additional link (Network A). However, Braess' paradox arises with ISPA, in that the addition of the new link in network B degrades the performance of the ISPA in six of the eight traffic regimes and load-to-cost functions investigated. The MB COIN on the other hand is only hurt by the addition of the new link once, and manages to gainfully exploit it seven times. (When behavior is analyzed infinitesimally, the MB COIN either uses the additional link efficiently or chooses to ignore it in those seven cases.) Moreover, the MB COIN's performance with the additional link is always better than the ISPA's. For example, adding the new link causes a degradation of the performance by as much as 30 \% (loads = $\{2,1\}$) for the ISPA, whereas for the same load vector MB COIN performance improves by 7 \%. 
\subsection{Hex Network} In this section we revisit the network first discussed in Section~\ref{sec:ispa} (redrawn in Figure~\ref{fig:hex2} to include the dummy nodes). In Table~\ref{tab:hex3} we give full results for the load-to-delay functions discussed in that section. We then use load-to-cost functions which are qualitatively similar to those discussed in Section~\ref{sec:ispa}, but which incorporate non-linearities that better represent real router characteristics. Those load-to-cost functions and the associated results are reported in Table~\ref{tab:hex4}. \begin{figure}[bth] \begin{picture}(100,140)(-80,-20) \put(50,0){\circle*{10}} \put (58,0) {\makebox(-1,1)[l]{$S$}} \put(50,0) {\line (-5,3){50}} \put(50,0) {\line (5,3){50}} \put(0,30){\circle*{10}} \put (-8,30) {\makebox(-1,1)[r]{$V_1$}} \put(0,30) {\line (0,1){50}} \put(100,30){\circle*{10}} \put (108,30) {\makebox(-1,1)[l]{$V_2$}} \put(100,30) {\line (0,1){50}} \put(0,80){\circle*{10}} \put (-8,80) {\makebox(-1,1)[r]{$V_2$}} \put(0,80) {\line (5,3){50}} \put(100,80){\circle*{10}} \put (108,80) {\makebox(-1,1)[l]{$V_1$}} \put(100,80) {\line (-5,3){50}} \put(50,110){\circle*{10}} \put (58,110) {\makebox(-1,1)[l]{$D$}} \put(0,55){\circle*{10}} \put (-8,55) {\makebox(-1,1)[r]{$V_0$}} \put(100,55){\circle*{10}} \put (106,55) {\makebox(-1,1)[l]{$V_0$}} \put(225,0){\circle*{10}} \put (233,0) {\makebox(-1,1)[l]{$S$}} \put(225,0) {\line (5,3){50}} \put(225,0) {\line (-5,3){50}} \put(175,30){\circle*{10}} \put (170,30) {\makebox(-1,1)[br]{$V_1$}} \put(175,30) {\line (0,1){50}} \put(275,30){\circle*{10}} \put (280,30) {\makebox(-1,1)[bl]{$V_2$}} \put(275,30) {\line (0,1){50}} \put(175,80){\circle*{10}} \put (170,80) {\makebox(-1,1)[br]{$V_2$}} \put(175,80) {\line (5,3){50}} \put(275,80){\circle*{10}} \put (280,80) {\makebox(-1,1)[bl]{$V_1$}} \put(275,80) {\line (-5,3){50}} \put(225,110){\circle*{10}} \put (233,110) {\makebox(-1,1)[l]{$D$}} \put(225,55){\circle*{10}} \put (225,65) {\makebox(-1,1)[b]{$V_3$}} \put(175,30){\line(2,1){100}} \put(175,55){\circle*{10}} \put (170,55) {\makebox(-1,1)[br]{$V_0$}} \put(275,55){\circle*{10}} \put (280,55) {\makebox(-1,1)[bl]{$V_0$}} \put (50,-30) {\makebox(-1,1){Net A}} \put (225,-30) {\makebox(-1,1){Net B}} \end{picture} \caption{Hex network} \label{fig:hex2} \end{figure} This network demonstrates that while the addition of a new link may be beneficial in low traffic cases, it leads to bottlenecks in higher traffic regimes. For the ISPA, although the per packet cost for loads of 1 and 2 drops drastically when the new link is added, the per packet cost increases for higher loads. The MB COIN on the other hand uses the new link efficiently. Notice that the MB COIN's performance is slightly worse than that of the ISPA in the absence of the additional link. This is caused by the MB COIN having to use a learner to estimate the WLU values for potential actions, whereas the ISPA simply has direct access to all the information it needs (costs at each link).
\begin{table}[htb] \centering \caption{Average Per Packet Cost for HEX network for $V_1 = 50 + x \; ; \; V_2 = 10 x \; ; \; V_3 = 10 + x$.} \vspace*{.1in} \begin{tabular}{c|c|c|c} \hline Load & Net & ISPA & MB COIN \\ \hline \hline 1 & A & 55.50 & 55.56 \\ & B & 31.00 & 31.00 \\ \hline 2 & A & 61.00 & 61.10 \\ & B & 52.00 & 51.69 \\ \hline 3 & A & 66.50 & 66.65 \\ & B & 73.00 & 64.45 \\ \hline 4 & A & 72.00 & 72.25 \\ & B & 87.37 & 73.41 \\ \hline \end{tabular} \label{tab:hex3} \end{table} \begin{table}[htb] \centering \caption{Average Per Packet Cost for HEX network for $V_1 = 50 + log(1+x) \; ; \; V_2 = 10 x \; ; \; V_3 = log(1+x)$.} \vspace*{.1in} \begin{tabular}{c|c|c|c} \hline Load & Net & ISPA & MB COIN \\ \hline \hline 1 & A & 55.41 & 55.44 \\ & B & 20.69 & 20.69 \\ \hline 2 & A & 60.69 & 60.80 \\ & B & 41.10 & 41.10 \\ \hline 3 & A & 65.92 & 66.10 \\ & B & 61.39 & 59.19 \\ \hline 4 & A & 71.10 & 71.41 \\ & B & 81.61 & 69.88 \\ \hline \end{tabular} \label{tab:hex4} \end{table} \subsection{Butterfly Network} The next network we investigate is shown in Figure~\ref{fig:butterfly}. It is an extension of the simple network discussed in Section~\ref{sec:bootes}. We have now doubled the size of the network and have three sources that have to route their packets to two destinations (packets originating at $S_1$ go to $D_1$, and packets originating at $S_2$ or $S_3$ go to $D_2$). Initially the two halves of the network have minimal contact, but with the addition of the extra link two sources from the two halves of the network share a common router on their potential shortest path. \begin{figure} [htb] \begin{picture}(200,140)(-40,-30) \put(30,0){\circle*{10}} \put (22,0) {\makebox(-1,1)[r]{$S_1$}} \put(30,0) {\line (-3,5){30}} \put(0,50){\circle*{10}} \put (-8,50) {\makebox(-1,1)[r]{$V_1$}} \put(60,50){\circle*{10}} \put (68,50) {\makebox(-1,1)[l]{$V_2$}} \put(0,50) {\line (3,5){30}} \put(60,50) {\line (-3,5){30}} \put(60,50) {\line (3,5){30}} \put(30,100){\circle*{10}} \put (23,100) {\makebox(-1,1)[r]{$D_1$}} \put(90,0){\circle*{10}} \put (82,0) {\makebox(-1,1)[r]{$S_2$}} \put(90,0) {\line (-3,5){30}} \put(90,0) {\line (3,5){30}} \put(90,100){\circle*{10}} \put (98,100) {\makebox(-1,1)[l]{$D_2$}} \put(120,50){\circle*{10}} \put (127,50) {\makebox(-1,1)[l]{$V_3$}} \put(120,50) {\line (-3,5){30}} \put(105,25){\circle*{10}} \put (112,25) {\makebox(-1,1)[l]{$V_1$}} \put(130,0){\circle*{10}} \put (137,0) {\makebox(-1,1)[l]{$S_3$}} \put(130,0) {\line (-1,1){25}} \put(15,25){\circle*{10}} \put (7,25) {\makebox(-1,1)[r]{$V_0$}} \put(75,25){\circle*{10}} \put (68,25) {\makebox(-1,1)[r]{$V_0$}} \put(230,0){\circle*{10}} \put (222,0) {\makebox(-1,1)[r]{$S_1$}} \put(230,0) {\line (-3,5){30}} \put(245,25){\circle*{10}} \put (239,25) {\makebox(-1,1)[r]{$V_3$}} \put(230,0) {\line (3,5){30}} \put(200,50){\circle*{10}} \put (192,50) {\makebox(-1,1)[r]{$V_1$}} \put(260,50){\circle*{10}} \put (268,50) {\makebox(-1,1)[l]{$V_2$}} \put(200,50) {\line (3,5){30}} \put(260,50) {\line (-3,5){30}} \put(260,50) {\line (3,5){30}} \put(230,100){\circle*{10}} \put (223,100) {\makebox(-1,1)[r]{$D_1$}} \put(290,0){\circle*{10}} \put (282,0) {\makebox(-1,1)[r]{$S_2$}} \put(290,0) {\line (-3,5){30}} \put(290,0) {\line (3,5){30}} \put(290,100){\circle*{10}} \put (298,100) {\makebox(-1,1)[l]{$D_2$}} \put(320,50){\circle*{10}} \put (326,50) {\makebox(-1,1)[l]{$V_3$}} \put(320,50) {\line (-3,5){30}} \put(305,25){\circle*{10}} \put (312,25) {\makebox(-1,1)[l]{$V_1$}} \put(330,0){\circle*{10}} \put (337,0)
{\makebox(-1,1)[l]{$S_3$}} \put(330,0) {\line (-1,1){25}} \put(215,25){\circle*{10}} \put (208,25) {\makebox(-1,1)[r]{$V_0$}} \put(275,25){\circle*{10}} \put (270,25) {\makebox(-1,1)[r]{$V_0$}} \put (75,-30) {\makebox(-1,1){Net A}} \put (275,-30) {\makebox(-1,1){Net B}} \end{picture} \caption{Butterfly Network} \label{fig:butterfly} \end{figure} Table~\ref{tab:butterfly4} presents two sets of results: first we present results for uniform traffic through all three sources, and then results for asymmetric traffic. For the first case, Braess' paradox is apparent in the ISPA: adding the new link is beneficial for the network at low load levels, where the average per packet cost is reduced by nearly $20\%$, but deleterious at higher levels. The MB COIN, on the other hand, provides the benefits of the added link for the low traffic levels, without suffering from deleterious effects at higher load levels. \begin{table}[htb] \centering \caption{Average Per Packet Cost for BUTTERFLY network for $V_1 = 50 + log(1 + x) \; ; \; V_2 = 10 x \; ; \; V_3 = log(1 + x)$.} \vspace*{.1in} \begin{tabular}{c|c|c|c} \hline Loads $(S_1,S_2,S_3)$ & Net & ISPA & MB COIN \\ \hline \hline 1,1,1 & A & 112.1 & 112.7 \\ & B & 92.1 & 92.3 \\ \hline 2,2,2 & A & 123.3 & 124.0 \\ & B & 133.3 & 122.5 \\ \hline 4,4,4 & A & 144.8 & 142.6 \\ & B & 156.5 & 142.3 \\ \hline \hline 3,2,1 & A & 81.8 & 82.5 \\ & B & 99.5 & 81.0 \\ \hline 6,4,2 & A & 96.0 & 94.1 \\ & B & 105.3 & 94.0 \\ \hline 9,6,3 & A & 105.5 & 98.2 \\ & B & 106.7 & 98.8 \\ \hline \end{tabular} \label{tab:butterfly4} \end{table} For the asymmetric traffic patterns, the added link causes a drop in performance for the ISPA, especially for low overall traffic levels. This is not true for the MB COIN. Notice also that in the high, asymmetric traffic regime, the ISPA performs significantly worse than the MB COIN even without the added link, showing that a bottleneck occurs on the right side of the network alone (similar to Braess' paradox observed in Section~\ref{sec:bootes}). \subsection{Ray Network} \label{sec:ray} In all the networks and traffic regimes discussed so far the sources are the only routers with more than one routing option. The final network we investigate is a larger network where the number of routers with multiple routing options is significantly higher than in the previous networks. Figure~\ref{fig:ray} shows the initial network (Net A) and the ``augmented'' network (Net B), where new links have been added. The original network has relatively few choices for the routers, as packets are directed toward their destinations along ``conduits.'' The new links are added in the augmented network to provide new choices (crossing patterns) that could be beneficial if certain of the original conduits experience large costs.
\begin{figure} [htb] \begin{picture}(200,200)(10,-20) \put(100,0){\circle*{10}} \put (92,0) {\makebox(-1,1)[r]{$S_1$}} \put(100,0) {\line (-5,6){30}} \put(150,0){\circle*{10}} \put (158,0) {\makebox(-1,1)[l]{$S_2$}} \put(150,0) {\line (5,6){30}} \put(75,30){\circle*{10}} \put (67,30) {\makebox(-1,1)[r]{$V_3$}} \put(75,30) {\line (-2,3){25}} \put(75,30) {\line (2,3){25}} \put(175,30){\circle*{10}} \put (183,30) {\makebox(-1,1)[l]{$V_3$}} \put(175,30) {\line (-2,3){25}} \put(175,30) {\line (2,3){25}} \put(50,70){\circle*{10}} \put (42,70) {\makebox(-1,1)[r]{$V_1$}} \put(50,70) {\line (0,1){40}} \put(100,70){\circle*{10}} \put (92,70) {\makebox(-1,1)[r]{$V_2$}} \put(100,70) {\line (0,1){40}} \put(150,70){\circle*{10}} \put (158,70) {\makebox(-1,1)[l]{$V_2$}} \put(150,70) {\line (0,1){40}} \put(200,70){\circle*{10}} \put (208,70) {\makebox(-1,1)[l]{$V_1$}} \put(200,70) {\line (0,1){40}} \put(50,90){\circle*{10}} \put (42,90) {\makebox(-1,1)[r]{$V_0$}} \put(100,90){\circle*{10}} \put (92,90) {\makebox(-1,1)[r]{$V_0$}} \put(150,90){\circle*{10}} \put (158,90) {\makebox(-1,1)[l]{$V_0$}} \put(200,90){\circle*{10}} \put (208,90) {\makebox(-1,1)[l]{$V_0$}} \put(50,110){\circle*{10}} \put (42,110) {\makebox(-1,1)[r]{$V_2$}} \put(50,110) {\line (5,3){50}} \put(100,110){\circle*{10}} \put (92,110) {\makebox(-1,1)[r]{$V_1$}} \put(100,110) {\line (0,1){30}} \put(100,110) {\line (5,3){50}} \put(150,110){\circle*{10}} \put (158,110) {\makebox(-1,1)[l]{$V_1$}} \put(150,110) {\line (0,1){30}} \put(150,110) {\line (-5,3){50}} \put(200,110){\circle*{10}} \put (208,110) {\makebox(-1,1)[l]{$V_2$}} \put(200,110) {\line (-5,3){50}} \put(100,140){\circle*{10}} \put (108,140) {\makebox(-1,1)[l]{$D_1$}} \put(150,140){\circle*{10}} \put (158,140) {\makebox(-1,1)[l]{$D_2$}} \put(300,0){\circle*{10}} \put (292,0) {\makebox(-1,1)[r]{$S_1$}} \put(300,0) {\line (-5,6){30}} \put(300,0) {\line (0,1){30}} \put(350,0){\circle*{10}} \put (358,0) {\makebox(-1,1)[l]{$S_2$}} \put(350,0) {\line (5,6){30}} \put(350,0) {\line (0,1){30}} \put(275,30){\circle*{10}} \put (267,30) {\makebox(-1,1)[r]{$V_3$}} \put(275,30) {\line (-2,3){25}} \put(275,30) {\line (2,3){25}} \put(300,30){\circle*{10}} \put (308,30) {\makebox(-1,1)[l]{$V_3$}} \put(300,30) {\line (5,4){50}} \put(375,30){\circle*{10}} \put (383,30) {\makebox(-1,1)[l]{$V_3$}} \put(375,30) {\line (-2,3){25}} \put(375,30) {\line (2,3){25}} \put(350,30){\circle*{10}} \put (342,30) {\makebox(-1,1)[r]{$V_3$}} \put(350,30) {\line (-5,4){50}} \put(250,70){\circle*{10}} \put (242,70) {\makebox(-1,1)[r]{$V_1$}} \put(250,70) {\line (0,1){40}} \put(300,70){\circle*{10}} \put (292,70) {\makebox(-1,1)[r]{$V_2$}} \put(300,70) {\line (0,1){40}} \put(300,70) {\line (-5,4){50}} \put(350,70){\circle*{10}} \put (358,70) {\makebox(-1,1)[l]{$V_2$}} \put(350,70) {\line (0,1){40}} \put(350,70) {\line (5,4){50}} \put(400,70){\circle*{10}} \put (408,70) {\makebox(-1,1)[l]{$V_1$}} \put(400,70) {\line (0,1){40}} \put(275,90){\circle*{10}} \put (270,90) {\makebox(-1,1)[r]{$V_3$}} \put(375,90){\circle*{10}} \put (381,89) {\makebox(-1,1)[l]{$V_3$}} \put(250,90){\circle*{10}} \put (242,90) {\makebox(-1,1)[r]{$V_0$}} \put(300,90){\circle*{10}} \put (307,90) {\makebox(-1,1)[l]{$V_0$}} \put(350,90){\circle*{10}} \put (343,90) {\makebox(-1,1)[r]{$V_0$}} \put(400,90){\circle*{10}} \put (408,90) {\makebox(-1,1)[l]{$V_0$}} \put(250,110){\circle*{10}} \put (242,110) {\makebox(-1,1)[r]{$V_2$}} \put(250,110) {\line (5,3){50}} \put(300,110){\circle*{10}} \put (292,110) {\makebox(-1,1)[r]{$V_1$}} \put(300,110) {\line 
(0,1){30}} \put(300,110) {\line (5,3){50}} \put(350,110){\circle*{10}} \put (358,110) {\makebox(-1,1)[l]{$V_1$}} \put(350,110) {\line (0,1){30}} \put(350,110) {\line (-5,3){50}} \put(400,110){\circle*{10}} \put (408,110) {\makebox(-1,1)[l]{$V_2$}} \put(400,110) {\line (-5,3){50}} \put(300,140){\circle*{10}} \put (308,140) {\makebox(-1,1)[l]{$D_1$}} \put(350,140){\circle*{10}} \put (358,140) {\makebox(-1,1)[l]{$D_2$}} \put (125,-30) {\makebox(-1,1){Net A}} \put (325,-30) {\makebox(-1,1){Net B}} \end{picture} \caption{Ray network} \label{fig:ray} \end{figure} Table~\ref{tab:ray1} shows the simulation results for these networks ($S_1$ and $S_2$ send packets to $D_1$ and $D_2$, respectively). At low load levels both the ISPA and the MB COIN use the new links effectively, although the MB COIN performs slightly worse. This is mainly caused by the difficulty encountered by the simple learner (single nearest neighbor algorithm) in quickly learning the traffic patterns in this large network. Unlike the ISPA however, the MB COIN avoids Braess' paradox in all cases except the very high traffic regime. Moreover, even there, the effect is significantly milder than that encountered by the ISPA. \begin{table}[htb] \centering \caption{Average Per Packet Cost for RAY network for $V_1 = 50 + log(1+x) \; ; \; V_2 = 10 x \; ; \; V_3 = 10 + log(1+x)$.} \vspace*{.1in} \begin{tabular}{c|c|c|c} \hline Loads at $(S_1, S_2)$ & Net & ISPA & MB COIN \\ \hline \hline 2,2 & A & 143.6 & 143.7 \\ & B & 124.4 & 126.9 \\ \hline 3,3 & A & 154.6 & 154.9 \\ & B & 165.5 & 151.0 \\ \hline 4,4 & A & 165.4 & 166.0 \\ & B & 197.7 & 165.6 \\ \hline 6,6 & A & 186.7 & 187.4 \\ & B & 205.1 & 191.6 \\ \hline \end{tabular} \label{tab:ray1} \end{table} \subsection{Steering the MB COIN} \label{sec:steer} The final aspect of COIN-based routing we investigate is the impact of the choice of the value of the steering parameter. This parameter both controls the amount of exploration the algorithm performs and determines the ``intelligence'' of the MB COIN at estimating the surface directly calculated by the FK COIN. In Figure~\ref{fig:mbswitch} we provide results when the steering parameter is set to $1.0$, so that the MB COIN reduces to the FK COIN. This provides an upper bound on the performance that the MB COIN could achieve if it used no exploration. \begin{figure}[bth] \centering \subfigure[Hex4] {\label{fig:mbhex4} \includegraphics[width=2.8in,height=2.2in]{fig_hex4.ps}} \subfigure[Ray4]{\label{fig:mbray4} \includegraphics[width=2.8in,height=2.2in]{fig_big4.ps}} \subfigure[Butterfly4]{\label{fig:mbbut4} \includegraphics[width=2.8in,height=2.2in]{fig_butt4.ps}} \subfigure[Bootes4]{\label{fig:mbboo4} \includegraphics[width=2.8in,height=2.2in]{fig_boot4.ps}} \caption{Impact of steering.} \label{fig:mbswitch} \end{figure} For the HEX network (Figure~\ref{fig:mbhex4}), the performance at the worst setting for the MB COIN, which corresponds to no steering, is comparable to the ISPA. In contrast, with moderate steering (0.5) the results are similar to those of the FK COIN; because the learner has more information to work with (arising from the extra parts of the input space represented in the training set due to the occasional use of the FK COIN), it bridges the gap between a suboptimal algorithm susceptible to Braess' paradox and one which efficiently avoids that paradox. For the RAY network (Figure~\ref{fig:mbray4}), the value of the steering parameter is more critical.
With no steering at all, the MB COIN performs poorly in this network --- even worse than the ISPA. This is not surprising: because there are many routing choices that affect the performance, the simple memory-based learner needs proper ``seeding'' to be able to perform well. In any case, with the addition of steering the MB COIN quickly outperforms the ISPA. Finally, for both the Butterfly and Bootes networks (Figures~\ref{fig:mbbut4}~-~\ref{fig:mbboo4}) the MB COIN needs very little steering to perform well. Although for the Butterfly network the performance of the MB COIN improves slightly with more information, it is significantly better than the ISPA across the board. \section{CONCLUSION} Effective routing in a network is a fundamental problem in many fields, including data communications and transportation. Shortest path algorithms provide an elegant solution to this problem, but under certain circumstances suffer from less than desirable effects. One such effect is Braess' paradox, where increased capacity results in lower overall throughput for shortest path algorithms, due to the potentially harmful side-effects of the decisions made by such algorithms. Even full-blown load balancing can suffer from such side-effects, since in general they extend across time as well as space, whereas load balancing ignores temporal side-effects. Collective Intelligence is a novel way of controlling distributed systems so as to avoid such deleterious side-effects. In a COIN, the central issue is determining the personal objectives to be assigned to the components of the system. One wants to choose those goals so that the greedy pursuit of those goals by the components of the system leads to a globally desirable solution. We have summarized COIN theory and derived a routing algorithm based on that theory. In our simulations, the ISPA induced average costs as much as 32\% higher than the COIN-based algorithm. This was despite the ISPA's having access to more information than the MB COIN. Furthermore the COIN-based algorithm avoided the Braess' paradoxes that seriously diminished the performance of the ISPA. In the work presented here, the COIN-based algorithm had to overcome severe limitations. Firstly, the estimates of the effect sets used were exceedingly poor. Secondly, the learners were particularly simple-minded, and therefore were not able to effectively maximize their performance. That a COIN-based router with such serious limitations consistently outperformed an ideal shortest path algorithm demonstrates the strength of the proposed method. We are currently investigating novel utilities that are more ``learnable'' for the routers, as well as expanding the simulations to larger networks using a commercial event-driven simulator. Future work will focus on not making the approximation that current traffic levels do not affect future windowed loads (Equation~\ref{eq:netwlu}). It will also involve investigating better estimates of effect sets, in particular not including all nodes with the same destination in one's effect set, and more generally using a more ``fine-grained'' representation of the nodes, for example including each packet's originating source, to allow a more fine-grained effect set (and resultant WLU). \section{Acknowledgements} We would like to thank Joe Sill for helpful discussions. \bibliographystyle{plain}
\section{Dataset} \label{sec:dataset} We propose two datasets resulting from the mapping of S2ORC with the KP20K and OAGKx corpora, respectively. \citet{lo-etal-2020-s2orc} released S2ORC as a huge corpus of 8.1M scientific documents. While it has full text and metadata (see Table~\ref{tab:metadata_table}), the corpus does not contain keyphrases. We took this as an opportunity to create a new corpus for identifying keyphrases from full-length scientific articles. Therefore, we took the KP20K and OAGKx scientific corpora, for which keyphrases were already available, and mapped them to their corresponding documents in S2ORC. This is the first time in the keyphrase community that such a large number of full-length documents with comprehensive metadata information have been made publicly available for academic use. We release two datasets, LDKP3K and LDKP10K, corresponding to KP20K and OAGKx, respectively. The first corpus consists of $\approx$ \textbf{100K} keyphrase-tagged long documents obtained by mapping KP20K to S2ORC. The KP20K corpus mainly contains the title, abstract and keyphrases of computer science research articles from online digital libraries like the ACM Digital Library, ScienceDirect, and Wiley. Using S2ORC documents, we increase the average length of the documents in KP20K from 7.42 sentences to 280.67 sentences, thereby also increasing the percentage of present keyphrases in the input text by 18.7\%. The second corpus, corresponding to OAGKx, consists of \textbf{1.3M} full scientific articles from various domains with their corresponding keyphrases collected from academic graphs \cite{sinha2015overview,tang2008arnetminer}. The resulting corpus contains 194.7 sentences on average (up from 8.87 sentences) with 63.65\% present keyphrases (up from 52.7\%). Since both datasets consist of a large number of documents, we present three versions of each dataset, with the training data split into \textit{small}, \textit{medium} and \textit{large} sizes, as given in Table~\ref{tab:new_datasets}. This was done in order to give even researchers and practitioners with limited computing resources an opportunity to evaluate the performance of their methods on a smaller dataset that can be trained on free platforms like Google Colab\footnote{\url{https://colab.research.google.com/}}. \begin{figure}[htp] \includegraphics[width=8cm]{LDKP3k_ttv_fos.png} \caption{Field of Studies distribution for train, test and validation split of LDKP3K dataset.} \label{figure:ldkp3k_fos} \end{figure} \begin{figure}[htp] \includegraphics[width=8cm]{LDKP10k_ttv_fos.png} \caption{Field of Studies distribution for train, test and validation split of LDKP10K dataset.} \label{figure:ldkp10k_fos} \end{figure} \subsection{Dataset Preparation} In the absence of any unique identifier shared across the datasets, we used paper titles to map documents in S2ORC to KP20K/OAGKx. This had its own set of challenges. For example, some papers in KP20K and OAGKx had unigram titles like ``Editorial'' or ``Preface''. Multiple papers can be found with the same title. We ignored all the papers with unigram and bigram titles. We resolved the title conflicts through manual verification. We also found out that some of the keyphrases in the OAGKx and KP20K datasets were parsed incorrectly.
Keyphrases that contain delimiters such as a \textit{comma} (which is also used as a separator in the keyphrase list) were broken down into two or more keyphrases, \textit{e.g.}, the keyphrase `\textit{2,4-dichlorophenoxyacetic acid}' was broken down into [`2', `4-dichlorophenoxyacetic acid']. In some cases, the publication year, page numbers, or DOI, \textit{e.g.}, \textit{1999:14:555-558}, were inaccurately added to the list of keyphrases. To solve this, we filtered out all the keyphrases that did not have any alphabetical characters in them. Next, in order to facilitate the usage of particular sections in KPE algorithms, we standardized the section names across all the papers. The section names varied across different papers in the S2ORC dataset. For example, some papers have a section named \textit{``Introduction''} while others have it as \textit{``1. Introduction''}, \textit{``I. Introduction''}, \textit{``I Introduction''}, etc. To deal with this problem, we replaced the unique section names with a common generic section name, like \textit{``introduction''}, across all the papers. We did this for common sections including introduction, related work, conclusion, methodology, results, and analysis. The proposed datasets LDKP3K and LDKP10K are further divided into train, test and validation splits as shown in Table~\ref{tab:new_datasets}. For LDKP3K, these splits are based on the splits that were present in the original KP20K dataset. For LDKP10K, we resorted to random sampling to create these splits, since OAGKx, the keyphrase dataset corresponding to LDKP10K, was not originally divided into train, test and validation sets. Figures \ref{figure:ldkp3k_fos} and \ref{figure:ldkp10k_fos} show that the splits for both LDKP3K and LDKP10K are of adequate quality, as there is a good distribution of papers in terms of field of study across all the splits. \begin{table}[] \begin{tabular}{|l|l|l|l|} \hline \multicolumn{2}{|l|}{\textbf{Dataset}} & \textbf{LDKP3K} & \textbf{LDKP10K} \\ \hline \multirow{3}{*}{\textbf{Train}} & \textbf{Small} & 20,000 & 20,000 \\ \cline{2-4} & \textbf{Medium} & 50,000 & 50,000 \\ \cline{2-4} & \textbf{Large} & 90,019 & 1,296,613 \\ \hline \multicolumn{2}{|l|}{\textbf{Test}} & 3,413 & 10,000 \\ \hline \multicolumn{2}{|l|}{\textbf{Validation}} & 3,339 & 10,000 \\ \hline \end{tabular} \caption{LDKP datasets with their train, validation and test dataset distributions.} \label{tab:new_datasets} \end{table} \begin{table}[] \adjustbox{width=\columnwidth}{ \begin{tabular}{lll} \hline \textbf{\thead{Paper \\ details}} & \textbf{\thead{Paper \\ Identifier}} & \textbf{\thead{Citations and \\ References}} \\ \hline Paper ID & ArXiv ID & Outbound Citations \\ Title & ACL ID & Inbound Citations \\ Authors & PMC ID & Bibliography \\ Year & PubMed ID & References \\ Venue & MAG ID & \\ Journal & DOI & \\ Field of Study & S2 URL & \\ \hline \end{tabular} } \caption{Information available in the metadata of each scientific paper in the LDKP corpus.} \label{tab:metadata_table} \end{table} \section{Dataset Usage} \label{sec::dataset-usage} Please refer to the Huggingface hub pages for LDKP3K and LDKP10K for detailed information about downloading and using the datasets; a minimal loading sketch follows the links below. \begin{enumerate} \item LDKP3K - \url{https://huggingface.co/datasets/midas/ldkp3k} \item LDKP10K - \url{https://huggingface.co/datasets/midas/ldkp10k} \end{enumerate}
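As an illustration, the datasets can be loaded with the Huggingface \texttt{datasets} library roughly as follows. The configuration names (\texttt{small}, \texttt{medium}, \texttt{large}) mirror the training splits in Table~\ref{tab:new_datasets}, but this is a sketch: consult the hub pages above for the authoritative configuration and field names.
\begin{verbatim}
from datasets import load_dataset

# Load the "small" configuration of LDKP3K (20K training documents).
ldkp3k = load_dataset("midas/ldkp3k", "small")

print(ldkp3k)                # DatasetDict with train/validation/test
sample = ldkp3k["train"][0]
print(sample.keys())         # inspect the available document fields
\end{verbatim}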
\section{Conclusion} In this work, we identified the shortage of corpora comprising long documents for training and evaluating keyphrase extraction and generation models. We created two very large corpora, LDKP3K and LDKP10K, comprising $\approx$ 100K and $\approx$ 1.3M documents respectively, and make them publicly available. We hope this will encourage researchers to innovate and propose new models capable of identifying high-quality keyphrases from long multi-page documents. \section{Experiments} \label{sec:experiments} In this section, we describe our methodology for evaluating several popular keyphrase extraction algorithms on the proposed \textsc{LDKP3k} and \textsc{LDKP10k} datasets. We further report the benchmark results and discuss the comparative advantages of the different algorithms to provide future research directions. \subsection{Unsupervised Methods} There are multiple unsupervised methods for extracting keyphrases from a document. We used the following popular graph-based algorithms: TextRank \cite{mihalcea-tarau-2004-textrank}, PositionRank \cite{florescu-caragea-2017-positionrank}, SingleRank \cite{singlerank}, TopicRank \cite{bougouin-etal-2013-topicrank}, and SGRank \cite{danesh-etal-2015-sgrank}. These algorithms first identify candidate keyphrases using lexical rules and then rank the candidates using a graph-based approach (a minimal usage sketch follows the result tables below). We also report results for MultipartiteRank and TopicalPageRank (Table~\ref{tab:results-graph}), as well as for the unsupervised statistical methods TFIDF, KPMiner and Yake (Table~\ref{tab:results-stat}). \label{table:candidate_kp} \input{candidate_kp_table} \begin{table*}[htbp] \scalebox{.87}{ \begin{tabular}{c|cc|cc|cc|cc|cc} \hline \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Krapivin}} & \multicolumn{2}{c|}{\textbf{NUS}} & \multicolumn{2}{c|}{\textbf{SemEval-2010}} & \multicolumn{2}{c|}{\textbf{LDKP3k}} & \multicolumn{2}{|c}{\textbf{LDKP10k}} \\ \cline{2-11} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10}\\ \hline \textbf{PositionRank} & 0.042 & 0.052 & 0.060 & 0.086 & 0.074 & 0.098 & 0.059 & 0.062 & 0.052 & 0.061 \\ \hline \textbf{TextRank} & 0.036 & 0.047 & 0.071 & 0.090 & 0.085 & 0.117 & 0.082 & 0.094 & 0.068 & 0.074 \\ \hline \textbf{TopicRank} & 0.071 & 0.080 & 0.130 & 0.152 & 0.111 & 0.132 & 0.108 & 0.110 & 0.098 & 0.102 \\ \hline \textbf{SingleRank} & 0.001 & 0.003 & 0.005 & 0.008 & 0.009 & 0.010 & 0.016 & 0.025 & 0.011 & 0.014 \\ \hline \textbf{MultipartiteRank} & 0.103 & 0.107 & 0.150 & 0.193 & 0.116 & 0.145 & 0.129 & 0.110 & 0.104 & 0.106 \\ \hline \textbf{TopicalPageRank} & 0.009 & 0.012 & 0.046 & 0.059 & 0.014 & 0.024 & 0.019 & 0.027 & 0.020 & 0.031 \\ \hline \textbf{SGRank} & 0.140 & 0.131 & 0.195 & 0.203 & 0.177 & 0.201 & 0.138 & 0.128 & 0.136 & 0.132 \\ \hline \end{tabular} } \caption{Results on long document datasets using unsupervised graph-based models.} \label{tab:results-graph} \end{table*} \begin{table*}[htbp] \scalebox{.87}{ \begin{tabular}{c|cc|cc|cc|cc|cc} \hline \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Krapivin}} & \multicolumn{2}{c|}{\textbf{NUS}} & \multicolumn{2}{c|}{\textbf{SemEval-2010}} & \multicolumn{2}{c|}{\textbf{LDKP3k}} & \multicolumn{2}{|c}{\textbf{LDKP10k}} \\ \cline{2-11} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10}\\ \hline \textbf{TFIDF} & 0.033 & 0.052 & 0.063 & 0.111 & 0.062 & 0.070 & 0.093 & 0.099 & 0.072 & 0.080 \\ \hline \textbf{KPMiner} & 0.125 & 0.151 & 0.169 & 0.212 & 0.155 & 0.181 & 0.164 & 0.152 & 0.151 & 0.142 \\ \hline \textbf{Yake} & 0.105 & 0.107 & 0.177 & 0.235 & 0.088 & 0.129 & 0.140 & 0.132 & 0.114 & 0.114 \\ \hline \end{tabular} } \caption{Results on long document datasets using unsupervised statistical models.} \label{tab:results-stat} \end{table*}
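These extractors share a two-stage pipeline of candidate selection followed by candidate ranking. As an illustration, the following sketch runs TopicRank through the open-source \texttt{pke} toolkit; the paper does not specify its exact implementation, so the toolkit and parameters shown here are assumptions for exposition.

\begin{verbatim}
# A minimal sketch of unsupervised graph-based extraction with the
# `pke` toolkit (pip install pke). Illustrative only; our experiments
# may differ in preprocessing and parameters.
import pke

text = open("paper_full_text.txt").read()   # a full-length document

extractor = pke.unsupervised.TopicRank()
extractor.load_document(input=text, language="en")
extractor.candidate_selection()             # lexical-rule candidates
extractor.candidate_weighting()             # graph-based ranking
for phrase, score in extractor.get_n_best(n=10):
    print(f"{score:.4f}\t{phrase}")
\end{verbatim}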
\subsection{Supervised Methods} For supervised keyphrase extraction, we marked all the words in the document belonging to keyphrases as `B' or `I' depending on whether they are the first word of a keyphrase or not. Every other word, \textit{i.e.}, every word which was not part of any keyphrase, was tagged as `O'. The keyphrase extraction task is then posed as a supervised sequence tagging problem to predict the `B', `I', and `O' labels (a sketch of this conversion is given after the tables below). Note that most supervised models like BERT \cite{bert} can accommodate text only up to 512 tokens long. To extract keyphrases from long documents, however, we need models which can accommodate longer texts. Recently introduced models such as Longformer \cite{longformer}, Bigbird \cite{big_bird}, and Reformer \cite{Reformer} are specifically designed to handle longer pieces of text. In our experiments, we used Longformer, mainly due to its memory-efficient implementation, which makes it an excellent choice for processing long documents. In spite of being extremely powerful, Longformer can accommodate only up to 4096 tokens due to its design constraints. To work around this, we used the important sections, such as Title, Abstract, Introduction, Conclusion, Discussion, Results, and Related Work, as the input, since they contained most of the keyphrases. Following \cite{longformer}, Longformer was initialized with the \textit{Longformer Base} model and trained with a cross-entropy loss using a batch size of 2 for four full passes over the entire corpus on a single RTX 3090 GPU. \begin{table*}[htbp] \scalebox{.87}{ \begin{tabular}{c|cc|cc|cc|cc|cc} \hline \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Krapivin}} & \multicolumn{2}{c|}{\textbf{NUS}} & \multicolumn{2}{c|}{\textbf{SemEval-2010}} & \multicolumn{2}{c|}{\textbf{LDKP3k}} & \multicolumn{2}{|c}{\textbf{LDKP10k}} \\ \cline{2-11} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10} & \textbf{F1@5} & \textbf{F1@10}\\ \hline \textbf{Kea} & 0.041 & 0.063 & 0.069 & 0.134 & 0.077 & 0.090 & 0.109 & 0.118 & 0.087 & 0.096 \\ \hline \textbf{WINGNUS} & 0.059 & 0.151 & 0.057 & 0.085 & 0.059 & 0.152 & 0.099 & 0.109 & 0.093 & 0.102 \\ \hline \textbf{Bert} & 0.265 & 0.267 & 0.307 & 0.339 & 0.162 & 0.171 & 0.358 & 0.361 & 0.240 & 0.243 \\ \hline \textbf{Longformer(4096 tokens)} & 0.229 & 0.232 & 0.253 & 0.284 & 0.203 & 0.219 & 0.317 & 0.319 & 0.309 & 0.340 \\ \hline \textbf{Longformer(512 tokens)} & - & - & - & - & - & - & 0.257 & 0.262 & 0.246 & 0.252 \\ \hline \end{tabular} } \caption{Results on long document datasets using supervised models.} \label{tab:results-sup} \end{table*} \begin{table}[htbp] \adjustbox{width=\columnwidth}{ \begin{tabular}{|l|l|l|l|l|} \hline \textbf{Model} & \textbf{Dataset} & \textbf{Ground truth used} & \multicolumn{1}{l|}{\textbf{F1@5}} & \multicolumn{1}{l|}{\textbf{F1@10}} \\ \hline Longformer & LDKP3k & Longformer & 0.358 & 0.363 \\ \hline Bert & LDKP3k & Bert & 0.358 & 0.361 \\ \hline Bert & LDKP3k & Longformer & 0.328 & 0.331 \\ \hline \hline Longformer & LDKP10k & Longformer & 0.326 & 0.343 \\ \hline Bert & LDKP10k & Bert & 0.240 & 0.243 \\ \hline Bert & LDKP10k & Longformer & 0.197 & 0.200 \\ \hline \end{tabular}% } \caption{Comparison of BERT \& Longformer performance.} \label{tab:comparison} \end{table}
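To make the tagging step described above concrete, the following sketch converts a whitespace-tokenized document and its gold keyphrases into B/I/O labels. The helper is illustrative; the actual preprocessing pipeline may differ in tokenization and matching details.

\begin{verbatim}
# Illustrative conversion of a tokenized document and its gold
# keyphrases into B/I/O labels for sequence tagging (a sketch, not
# the exact preprocessing used in our experiments).
def bio_tags(tokens, keyphrases):
    tokens_l = [t.lower() for t in tokens]
    tags = ["O"] * len(tokens)
    for kp in keyphrases:
        kp_tokens = kp.lower().split()
        n = len(kp_tokens)
        for i in range(len(tokens_l) - n + 1):
            if tokens_l[i:i + n] == kp_tokens:
                tags[i] = "B"                        # first word
                tags[i + 1:i + n] = ["I"] * (n - 1)  # rest of phrase
    return tags

tokens = "we study keyphrase extraction from long documents".split()
print(bio_tags(tokens, ["keyphrase extraction", "long documents"]))
# ['O', 'O', 'B', 'I', 'O', 'B', 'I']
\end{verbatim}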
\subsection{Evaluation Metrics} We have used macro-averaged $F1@5$ and $F1@10$ as our evaluation metrics. Before evaluating, we lower-cased, stemmed, and removed punctuation from the ground truth and predicted keyphrases. \\ Let $Y$ denote the set of ground truth keyphrases and $\bar{Y} = (\bar{y}_1,\bar{y}_2, \dots, \bar{y}_m)$ denote the predicted keyphrases ordered by their quality of prediction. Then we can define the metrics as follows: \[ Precision@k = \frac{|Y \cap \bar{Y}_k|}{\min\{|\bar{Y}_k|,k\}} \] \[ Recall@k = \frac{|Y \cap \bar{Y}_k|}{|Y|} \] \[ F1@k = \frac{2\cdot Precision@k\cdot Recall@k}{Precision@k + Recall@k} \] where $\bar{Y}_k$ denotes the top $k$ elements of $\bar{Y}$. \subsection{Results} We used the same baseline results as specified in \cite{rui_meng} for comparison between models trained on the abstract and title only vs.\ our models trained on full documents. To make a fair comparison, we also used the same dataset splits of NUS, Krapivin, and SemEval-2010 to train our supervised and unsupervised models. The only difference is that instead of using the abstract and title only, we used the full document. As expected, unsupervised algorithms like TextRank, PositionRank, and SingleRank performed worse on long documents than on short documents. One possible reason could be that these unsupervised models pick up more noise than context from long documents. On further analysis, we observed that SGRank outperforms every other unsupervised algorithm if we consider F1@10 scores. While training the TextRank model, we observed that as the window size increases, the F1 scores improve, but the number of predicted keyphrases decreases. For this reason, the line graph of TextRank does not dip like those of the other algorithms. The F1 scores from F1@1 to F1@30 for the \textsc{LDkp3k} test dataset are given in Figure \ref{fig:unsup_res_kp20k}. A similar pattern was observed for the \textsc{LDkp10k} test dataset. We used the supervised algorithms Maui \cite{maui} and KEA \cite{KEA} as baselines for comparison with our Longformer model. As shown in Table \ref{tab:results-sup}, Longformer outperformed almost all the baselines on the NUS, Krapivin, and SemEval-2010 datasets. We used the medium train splits of \textsc{LDkp3k} and \textsc{LDkp10k} to train the Longformer token classification model. Longformer on \textsc{LDkp3k} (derived from KP20k) performs better than all of the baselines (i.e., Maui and KEA) and the CopyRNN model of \cite{rui_meng}. Since the \textsc{LDkp3k} test dataset is a subset of the KP20k test dataset, it will not be fair to compare the two directly. We did not compare long document keyphrase extraction on the \textsc{LDkp10k} dataset, as there is no previously reported score and no generic split of the OAGKX dataset available. \section{Interpretability} \label{sec:interpretability} Next, to understand the challenges associated with the long document keyphrase extraction task and how the two supervised algorithms (BERT and Longformer) learn the task, we conduct an interpretability study over them. We use Integrated Gradients (IGs) \cite{sundararajan2017axiomatic} for attributing the BIO tags to words, phrases and sections. \\ \textbf{Setup:} The interpretability experiments and statistics were conducted over 1500 Integrated Gradients attribution maps for BERT and Longformer, created from the accurate keyphrase predictions for 100 samples. Figures~\ref{figure:attn_patterns} and~\ref{figure:comb_attn_patterns} show these attribution plots, where \textcolor{red}{red} shading denotes negative attribution and \textcolor{green}{green} shading denotes positive attribution.
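Such attribution maps can be produced, for example, with the Captum implementation of Integrated Gradients. The following is a minimal sketch with illustrative model and variable names; in practice, the fine-tuned checkpoint would be loaded instead of the base weights.

\begin{verbatim}
# A sketch (not our exact setup) of producing one IG attribution map
# for a token-classification (BIO) model with Captum.
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForTokenClassification, AutoTokenizer

name = "allenai/longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name,
                                                        num_labels=3)
model.eval()  # labels: 0 = B, 1 = I, 2 = O (illustrative ordering)

def forward_func(input_ids, attention_mask, position):
    # Return the logits of the single token position being explained.
    logits = model(input_ids, attention_mask=attention_mask).logits
    return logits[:, position, :]

enc = tokenizer("we study keyphrase extraction from long documents",
                return_tensors="pt")
position, target = 3, 0  # explain a `B' decision at token index 3

lig = LayerIntegratedGradients(forward_func, model.longformer.embeddings)
attr = lig.attribute(
    inputs=enc["input_ids"],
    baselines=torch.full_like(enc["input_ids"], tokenizer.pad_token_id),
    additional_forward_args=(enc["attention_mask"], position),
    target=target)
token_scores = attr.sum(dim=-1).squeeze(0)  # one attribution per token
\end{verbatim}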
\\ \textbf{Effect of Length:} Document length has two effects: first, additional keyphrases are found deeper in a longer document, and second, the additional context helps in better extraction of keyphrases. We evaluate the effect of length on both counts using our interpretability experiments. \begin{figure}[htp] \includegraphics[width=8cm]{acl-ijcnlp2021-templates/ig_plots/kp_cdf_set(1).pdf} \caption{CDF of the average keyphrase count (first occurrence counted) in buckets of 20 words.} \label{figure:kpe_countvslength_cdf} \end{figure} \begin{figure}[htp] \includegraphics[width=8cm]{acl-ijcnlp2021-templates/ig_plots/cho_new(1).pdf}\vspace{1mm} \hline \vspace{1mm} \includegraphics[width=8cm]{acl-ijcnlp2021-templates/ig_plots/quet_new(1).pdf}\vspace{1mm} \hline \vspace{1mm} \includegraphics[width=8cm]{acl-ijcnlp2021-templates/ig_plots/integral_new(1).pdf} \caption{Integrated Gradients attribution maps for the keyword pieces Cho, quet and Integral, respectively.} \label{figure:attn_patterns} \end{figure} Fig.~\ref{figure:attn_patterns} shows the IG plots for the `begin' and `inside' classes of Longformer for the keyphrase `Choquet integral' (split by the tokenizer into the pieces `Cho', `quet' and `integral'). From the figures, we can note that the other occurrences of the keyphrase are shaded, indicating that keyphrases themselves are important for detecting other keyphrases. We calculate that, on average, keyphrases pull 6.84 times the attribution weight of non-keyphrase words. One key challenge with traditional datasets is that the region near the abstract (all that most KPE datasets contain) misses many keyphrases present in the heavy tail of the keyphrase distribution within a document. This is shown in Figure~\ref{figure:kpe_countvslength_cdf}, where only about 60\% of KPs are present in the first 200 tokens (roughly the extent of the abstract). \\ Additionally, Fig.~\ref{figure:comb_attn_patterns} shows the average attribution values over all the keyphrase predictions. We can see that Longformer has a number of peaks near the document end, which the context length of BERT is not even able to reach. We posit that this helps Longformer achieve better results than BERT. Training Longformer with a context size equal to BERT's confirms this finding (Table~\ref{table:performance}). \\ Next, we analyse the context size of BERT vs.\ Longformer using aggregate statistics; a sketch of these computations is given below. For Longformer, the attribution values fall 63\% more slowly than for BERT over 10-word intervals from the keyphrase location, indicating that Longformer's gradients are more spread out than BERT's. \\ We also calculate the ratio of attribution weights in the first 512 tokens of Longformer's context to the remaining 3584 tokens (4096$-$512). On average, this ratio is 14.38. Therefore, while the words near keyphrases are more important, there is a heavy-tailed distribution which spreads till the end of the context length. Fig.~\ref{figure:attvslength} demonstrates these statistics by plotting how the attribution weights fall with sequence length for the single keyword prediction ``Integral''. \\ Table~\ref{table:attvslen} gives the average sequence length for observing words in the top 90, 95, and 99 percentile attribution weights. We can note that Longformer has, on average, 35\% higher sequence lengths for top-percentile weighted words. It is important to note that all the sequence lengths in Table~\ref{table:attvslen} are lower than the attention windows of BERT and Longformer.
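The aggregate statistics above can be computed from the per-token attribution maps along the following lines; the data layout is an assumption for exposition.

\begin{verbatim}
# A sketch of the aggregate statistics over IG attribution maps.
# Assumes each map is a 1-D numpy array of per-token attribution
# magnitudes; illustrative only, not our exact pipeline.
import numpy as np

def head_tail_ratio(scores, head=512):
    """Ratio of attribution mass in the first `head` tokens vs the
    rest of the context (cf. the 14.38 average reported above)."""
    return scores[:head].sum() / max(scores[head:].sum(), 1e-12)

def percentile_length(scores, pct):
    """Average position of tokens whose attribution is in the top
    pct percentile. (The table above measures the analogous
    distance from the keyphrase rather than from the start.)"""
    threshold = np.percentile(scores, pct)
    return np.nonzero(scores >= threshold)[0].mean()

rng = np.random.default_rng(0)
scores = rng.exponential(size=4096)  # stand-in for one attribution map
print(head_tail_ratio(scores))
print(percentile_length(scores, 95))
\end{verbatim}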
\\ In conclusion, Longformer relies on a greater sequence length than BERT for identifying keyphrases, which likely boosts its performance. \\ \textbf{Effect of Linguistic Factors:} Next, we investigate some linguistic reasons for Longformer's performance. Due to the scholarly nature of the corpus, we find that keyphrases have a high representation of academic words\footnote{We use the work by \citet{coxhead2000new} for the list of academic words.} \cite{coxhead2000new}. We also see that academic words (words that appear frequently in academic text) have 66.70\% higher attribution than non-academic words. \\ Articles (`a', `an', `the') are considered stop-words (unimportant words) in most NLP tasks and are often removed while processing text samples for those tasks. However, in keyphrase extraction, we note that articles carry a significant weight (Fig.~\ref{figure:articlesvslength}) and that articles around the intended keyphrase have, on average, a higher weight than articles at other places. We find that this is because articles are used in conjunction with noun phrases, and noun phrases have been shown to be useful for the task of keyphrase extraction \cite{nagy2011noun}. \\ In summary, we analysed the effects of longer document length, context length and the linguistic preferences of Longformer, and how they affect its performance compared to BERT. \\ \begin{figure}[htp] \includegraphics[width=8.3cm]{acl-ijcnlp2021-templates/ig_plots/example2_combined_bert_attn.pdf} \vspace{0.01em} \hline \vspace{1em} \includegraphics[width=8.3cm]{acl-ijcnlp2021-templates/ig_plots/example2_combined_long.pdf} \caption{Combined Integrated Gradients attribution patterns for the keywords of an article: left -- BERT, right -- Longformer.} \label{figure:comb_attn_patterns} \end{figure} \begin{figure}[htp] \includegraphics[width=8.3cm]{acl-ijcnlp2021-templates/ig_plots/all_models_new(2).pdf} \includegraphics[width=8.3cm]{acl-ijcnlp2021-templates/ig_plots/all_models_new_integral_74(1).pdf} \caption{Comparing gradient weight maps for the 4096-token Longformer, the 512-token Longformer and BERT: left -- averaged, right -- for the keyword ``Integral'' at index 74.} \label{figure:attvslength} \end{figure} \begin{figure}[htp] \includegraphics[width=8cm]{acl-ijcnlp2021-templates/ig_plots/articles_importances(1).pdf} \caption{Attribution scores of `a', `an' and `the' relative to their distance from the detected keyphrase.} \label{figure:articlesvslength} \end{figure} \begin{table} {\begin{tabular}{|l||l||l|} \hline \textbf{Model} & \textbf{\makecell{Percentile\\Attribution}} & \textbf{\makecell{Length from\\KP (NTokens)}} \\ \hline BERT & 90 & 364.02\\ \hline BERT & 95 & 322.27\\ \hline BERT & 99 & 219.54 \\ \hline LONGFORMER & 90 & 505.59\\ \hline LONGFORMER & 95 & 429.23\\ \hline LONGFORMER & 99 & 285.89 \\ \hline \end{tabular}} \caption{\label{table:attvslen} \small Comparing attribution weight vs.\ length for BERT and Longformer.} \end{table} \begin{table} \scalebox{0.8}{ {\begin{tabular}{|l||l||l|} \hline \textbf{Model} & \textbf{\makecell{F1 Score}} & \textbf{\makecell{Accuracy Score}} \\ \hline Longformer (512 tokens) & 25.33 & 96.8 \\ \hline Longformer (4096 tokens) & 33.9 & 98.8 \\ \hline \end{tabular}}} \caption{\label{table:performance} \small Comparing Longformer validation performance for lengths 512 and 4096.} \end{table} \section{Introduction and Background} \label{sec:intro} Identifying keyphrases (KPs) is a form of extreme summarization: given an input document, the task is to find a set of \textbf{representative} phrases that can
effectively summarize it. Over the last decade, we have seen an exponential increase in the velocity at which unstructured text is produced on the web, with the vast majority of it untagged or poorly tagged. KPs provide an effective way to search, summarize, tag, and manage these documents. Identifying KPs has proved useful as a preprocessing, pre-training \cite{kulkarni2021learning}, or supplementary task in applications such as search \cite{sanyal2019enhancing, gutwin1999improving, song2006keyphrase}, recommendation systems \cite{augenstein-etal-2017-semeval}, advertising \cite{yih2006finding}, summarization \cite{qazvinian2010citation}, and opinion mining \cite{berend2011opinion}, to name a few. This has motivated researchers to explore machine learning algorithms for automatically mapping documents to a set of keyphrases, commonly referred to as the \textit{keyphrase extraction} (KPE) task \cite{kim-etal-2010-semeval, augenstein-etal-2017-semeval}. \begin{table*}[!h] \centering \scalebox{1.0}{ \begin{tabular} {lllllll} \hline \thead{Dataset} & \thead{Size} & \thead{Long\\Doc} & \thead{Avg\\\# Sentences} & \thead{Avg\\\# Words} & \thead{Present\\KPs} & \thead{Absent\\KPs}\\ \hline SemEval2017 \cite{augenstein-etal-2017-semeval} & {\color[HTML]{000000}0.5k} & {\color[HTML]{000000}$\times$} & 7.36 & 176.13 & 42.01\% & 57.69\%\\ KDD \cite{KDD-Caragea2014CitationEnhancedKE} & {\color[HTML]{000000}0.75k} & {\color[HTML]{000000}$\times$} & 8.05 & 188.43& 45.99\% & 54.01\%\\ Inspec \cite{inspec} & {\color[HTML]{000000}2k} & {\color[HTML]{000000}$\times$} & 5.45 & 130.57 & 55.69\% & 44.31\%\\ KP20k \cite{rui_meng} & 568k & {\color[HTML]{000000}$\times$} & 7.42 & 188.47 & 57.4\% & 42.6\%\\ OAGKx \cite{oagkx} & 22M & {\color[HTML]{000000}$\times$} & 8.87 & 228.50 & 52.7\% & 47.3\% \\ \hline NUS \cite{10.1007/978-3-540-77094-7_41} & {\color[HTML]{000000}0.21k} & \checkmark & 375.93 & 7644.43 & 67.75\% & 32.25\%\\ SemEval2010 \cite{kim-etal-2010-semeval} & {\color[HTML]{000000}0.24k} & \checkmark & 319.32 & 7434.52 & 42.01\% & 57.99\%\\ Krapivin \cite{krapivin-2010} & {\color[HTML]{000000}2.3k} & \checkmark & 370.48 & 8420.76 & 44.74\% & 52.26\%\\ \hline \textbf{LDKP3K} (S2ORC $\leftarrow$ KP20K) & \textbf{100k} & \checkmark & 280.67 & 6027.10 & 76.11\% & 23.89\%\\ \textbf{LDKP10K} (S2ORC $\leftarrow$ OAGKx) & \textbf{1.3M} & \checkmark & 194.76 & 4384.58 & 63.65\% & 36.35\%\\ \hline \end{tabular}} \caption{Characteristics of the proposed datasets compared to the existing datasets.} \label{tab:new_datasets_benefits} \end{table*} Various algorithms have been proposed over time to solve the problem of identifying keyphrases from text documents; they can primarily be categorized into supervised and unsupervised approaches \cite{papagiannopoulou2020review}. The majority of these approaches take the abstract (\textit{a summary}) of a text document as the input and produce keyphrases as output. However, in real-world industrial applications in domains such as advertising \cite{hussain2017automatic}, search and indexing, finance \cite{gupta2020comprehensive}, and law \cite{bhargava2017catchphrase}, document summaries are not readily available. Moreover, most of the documents encountered in these applications are longer than 8 sentences (the average length of abstracts in KP datasets; see Table~\ref{tab:new_datasets_benefits}).
We also find that a significant percentage of keyphrases ($>$18\%) are \textit{directly} found beyond the limited context of a document's title and abstract/summary. These constraints limit the potential of currently developed KPE and KPG algorithms to merely theoretical pursuits. Many previous studies have pointed out the constraints imposed on KPE algorithms by the short inputs and artificial nature of the available datasets \cite{nguyen2010wingnus,hasan2014automatic,cano2019keyphrase,gallina2020large,kontoulis2021keyphrase}. In particular, \citet{cano2019keyphrase}, while explaining the limitations of their proposed algorithms, note that the title and the abstract may not carry sufficient topical information about the article, even when joined together. While most datasets in the KPE domain consist of titles and abstracts \cite{oagkx}, there have been some attempts at providing long-document KP datasets as well (Table~\ref{tab:new_datasets_benefits}). \citet{krapivin-2010} released 2,000 full-length scientific papers from the computer science domain. \citet{kim-etal-2010-semeval}, in a SemEval-2010 challenge, released a dataset containing 244 full scientific articles along with their author- and reader-assigned keyphrases. \citet{10.1007/978-3-540-77094-7_41} released 211 full-length scientific documents with multiple annotated keyphrases. All of these datasets were released more than a decade ago and were more suitable for the machine-learning models available back then. With today's deep learning paradigms, such as un/semi-supervised learning, requiring Wikipedia-sized corpora ($>$6M articles), it becomes imperative to update the KPE and KPG tasks with similarly sized corpora. In this work, we develop two large datasets (LDKP - Long Document Keyphrase) comprising 100K and 1.3M documents for identifying keyphrases from full-length scientific articles, along with their metadata information such as venue, year of publication, author information, inbound and outbound citations, and citation contexts, among others. We achieve this by mapping the existing KP20K \cite{rui_meng} and OAGKx \cite{oagkx} corpora for KPE and KPG to the documents available in the S2ORC dataset \cite{lo-etal-2020-s2orc}. We make the datasets publicly available on the Huggingface hub (Section \ref{sec::dataset-usage}) in order to facilitate research on identifying keyphrases from long documents. We hope that researchers working in this area will acknowledge the shortcomings of the popularly used datasets and methods in KPE and KPG and devise exciting new approaches for overcoming the challenges related to identifying keyphrases from long documents and contexts beyond summaries. This would make the algorithms more useful in practical real-world settings.
\section{Introduction} \IEEEPARstart{Q}{uantum} secret sharing schemes are protocols that enable the secure distribution of a secret among mutually collaborating parties so that only certain collections of parties can recover the secret. \noindent Quantum secret sharing schemes were first proposed by Hillery {\em et al.} for classical secrets \cite{hillery99}. Subsequently, Cleve {\em et al.} proposed quantum secret sharing schemes for quantum secrets \cite{cleve99}. Since these pioneering works, there has been extensive progress in this field, and it continues to be actively researched \cite{karlsson99,smith99,gottesman00,bandyopadhyay00,nascimento01,imai03,ps10,ben12,markham08,senthoor19,tyc02, qin20}. Quantum secret sharing has also been experimentally demonstrated by many groups \cite{tittel01,hao11,bogdanski08,bell14,schmid05,gaertner07,lance04, wei13, pinnell20}. The progress has been rapid, with demonstrations over distances as large as 50 km \cite{wei13}. Furthermore, non-binary protocols over 11-dimensional qudits have also been demonstrated \cite{pinnell20}. Quantum secret sharing can be done under various settings: with classical data as the secret or an arbitrary quantum state as the secret, with parties having classical and quantum data (hybrid) or only quantum data, and with or without pre-existing quantum entanglement shared among the parties, to name a few. Here, we consider the setting where the secret is an arbitrary quantum state, with all the parties having only quantum data and no pre-existing quantum entanglement. In this paper, we are interested in optimizing the resources needed for quantum secret sharing. Specifically, we study communication efficient threshold quantum secret sharing (CE-QTS) schemes and propose the improved model of universal CE-QTS schemes. The most popular quantum secret sharing scheme is the quantum threshold secret sharing (QTS) scheme. In this scheme, out of the total $n$ parties, a minimum of $k$ parties are required to recover the secret. Also, here we look at only perfect QTS schemes, where any set of fewer than $k$ parties should not have any information on the secret. Such a scheme is often denoted as a $((k,n))$ scheme. The state given to each party is called the share of the party. After the secret has been shared, the parties who plan to recover the secret combine their shares together and reconstruct the secret. Alternatively, the parties involved in the recovery could communicate all or part of their shares to a third party designated as the combiner. The amount of quantum communication to the combiner for recovering the secret is called the communication complexity. For sharing a secret of size $m$ qudits under this setting, a standard $((k,n))$ scheme (for example, \cite{cleve99}) requires $mn$ qudits to be shared during share distribution ($m$ qudits for each party) and at least $mk$ qudits for recovery. \subsection{Previous work} The analogous problem of reducing communication complexity has been studied for classical secret sharing schemes \cite{wang08,bitar16,bitar18,huang16,huang17,penas18} but not as much in the quantum setting. Refs.~\cite{nascimento01} and \cite{ben12} aim to reduce the quantum communication during secret distribution to the parties but do not look at reducing the quantum communication cost during secret recovery.
Only recently, \cite{senthoor19} showed that the quantum communication cost during secret recovery can be reduced by accessing a subset of $d$ parties whose cardinality is more than the threshold $k$ required to recover the secret. This scheme is called a $((k,n,d))$ communication efficient quantum secret sharing (CE-QTS) scheme. These gains can be significant; for a $((k,n=2k-1,d))$ threshold scheme, it was shown that the gains in the communication complexity of recovery per secret qudit can be as large as $O(k)$. For sharing a secret of $m$ qudits, this scheme requires $mn$ qudits to be shared for secret distribution, $mk$ qudits for secret recovery when accessing $k$ parties, and $dm/(d-k+1)$ qudits when accessing $d$ parties. However, the improvement in communication cost only works for a fixed value of $d$ in the range $k<d\leq n$. The value of $d$ is decided prior to the encoding of the secret and cannot be changed. \subsection{Contributions} In this paper, we develop the theory of communication efficient quantum secret sharing schemes. Specifically, we address the problem of designing quantum threshold schemes that are universal in the sense that any subset of parties of an arbitrary size greater than $k$ would provide further gains in communication cost during recovery. This is the first such class of universal communication efficient quantum threshold secret sharing schemes where the number of parties contacted for secret recovery can be varied from $k$ to $n$. First, we give a framework for constructing CE-QTS schemes from a combination of ramp QSS schemes and threshold schemes. We also propose a construction of CE-QTS schemes for both fixed $d$ and universal $d$ within this framework using the ramp secret sharing schemes proposed in \cite{ogawa05}. This framework can also be used to derive other constructions of CE-QTS schemes by using different ramp QSS schemes. Second, we propose a class of universal CE-QTS schemes based on the Staircase codes. These schemes are inspired by the classical communication efficient secret sharing schemes of \cite{bitar16,bitar18}. The constructions for these classical schemes are also related to codes for distributed storage aimed at reducing communication cost \cite{rashmi11}. The universal CE-QTS schemes proposed in this paper achieve, for any number $d\geq k$ of contacted parties, the same communication complexity as the corresponding fixed-$d$ schemes. So there is no penalty in communication complexity for the increased flexibility to change $d$. The universal CE-QTS constructions provide the same storage cost and communication cost (normalized to the secret size) as the CE-QTS constructions, but they need larger secret sizes to provide communication efficiency for the various values of $d$. For a short summary of our constructions, refer to Table \ref{tab:contributions}. Third, we derive lower bounds on the communication complexity of CE-QTS schemes (both fixed $d$ and universal). We also propose an information-theoretic model of CE-QTS schemes and prove that our constructions are optimal with respect to both share size and communication cost. The information-theoretic model is used to give an alternative proof for the bound on communication cost. Some preliminary results of this paper are discussed in the upcoming conference publication \cite{senthoor20}.
\begin{table*}[t] \begin{center} \begin{tabular}{|l|c|c|c|l|} \hline & Number of parties & Secret size, $m$ & Communication & Dimension of \\ & accessed by combiner, $d$ & & cost, CC$_n(d)/m$ & qudits, $q$ (prime) \\\hline QTS\cite{gottesman00} & $d=k$, fixed & 1 & $k$ & $\geq 2k-1$ \\\hline CE-QTS (Staircase codes) \cite{senthoor19} & $k\leq d\leq n$, fixed & $d-k+1$ & $\frac{d}{d-k+1}$ & $>2k-1$ \\\hline CE-QTS (Concatenation) & $k\leq d\leq n$, fixed & $d-k+1$ & $\frac{d}{d-k+1}$ & $>d+k-1$ \\\hline Universal CE-QTS (Staircase codes) & $k\leq d\leq n$, variable & lcm$\{1,2,\hdots,k\}$ & $\frac{d}{d-k+1}$ & $>2k-1$ \\\hline Universal CE-QTS (Concatenation) & $k\leq d\leq n$, variable & lcm$\{1,2,\hdots,n-k+1\}$ & $\frac{d}{d-k+1}$ & $>n+k-1$ \\\hline \end{tabular} \end{center} \captionsetup{justification=justified} \caption{Parameters of various $((k,n))$ QTS constructions. Here $2\leq k\leq n\leq 2k-1$. For all these constructions, the individual share size is $m$ and CC$_n(k)/m=k$. \label{tab:contributions}} \end{table*} \subsection{Organization} We begin with a brief review of quantum secret sharing schemes in Section~\ref{sec:bg}. Then we give a concrete illustration of the universal communication efficient quantum secret sharing schemes in Section~\ref{s:example}. In Section \ref{s:framework}, we propose the Concatenation framework for constructing CE-QTS schemes from ramp and threshold QSS schemes. We also extend this framework to construct universal CE-QTS schemes. In Section~\ref{s:iv}, we give a construction of universal CE-QTS schemes based on Staircase codes. We derive lower bounds on the communication complexity of CE-QTS schemes in Section \ref{s:v}. In Section \ref{s:vi}, we propose an information-theoretic model for studying CE-QTS schemes. Finally, we conclude with a brief sketch of further directions of research. \section{Background} \label{sec:bg} \subsection{Notation} Let $q$ be a prime and $\mathbb{F}_q$ denote the finite field with $q$ elements. We take the standard basis of $\mathbb{C}^q$ to be $\{\ket{x}\mid x\in \F_q \}$. We denote $\ket{x_1x_2\cdots x_\ell}$ by $\ket{\underline{x}}$ where $\underline{x}$ is the vector with the entries $(x_1,x_2,\hdots,x_\ell)$. The standard basis for $\mathbb{C}^{q^n}$ is taken to be $\{\ket{\underline{x}}\mid \underline{x}\in \F_q^n \}$. For any invertible matrix $K\in\F_q^{\ell\times\ell}$, we define the unitary operation $U_K$ by \begin{eqnarray*} U_K\ket{\underline{x}} = \ket{K\underline{x}} =\ket{\underline{y}} \end{eqnarray*} where $\underline{y}= (y_1,\ldots, y_\ell)$ and $y_i = \sum_{j}K_{ij}x_j$. We define the two-qudit unitary operator $L_\alpha$ as \begin{eqnarray*} L_\alpha\ket{i}_c\ket{j}_t = \ket{i}_c\ket{j+\alpha i}_t, \end{eqnarray*} where $i, j\in\F_q$ and $\alpha\in\F_q$ is a constant. The subscripts $c$ and $t$ indicate that they are the control and target qudits, respectively. This operator generalizes the CNOT gate. We use the notation $[n]:=\{1,2,\ldots, n \}$ and $[i,j]:=\{i, i+1,\ldots, j \}$. Let $V$ be an $m\times n$ matrix and $A \subseteq [m]$, $B\subseteq [n]$. We denote by $V_A$ the submatrix of $V$ formed by taking the rows indexed by the entries in $A$. Similarly, we can form a submatrix of $V$ by taking the columns of $V$; this is indicated as $V^B$. We can also form a submatrix $V_A^B$ of $V$ which takes the rows indexed by $A$ and the columns indexed by $B$.
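To illustrate the operator $U_K$ defined above numerically, the following sketch (in Python, purely for exposition and not part of the schemes themselves) constructs $U_K$ as a $q^\ell\times q^\ell$ permutation matrix over the computational basis of $\mathbb{C}^{q^\ell}$ and verifies that it is unitary.

\begin{verbatim}
# A numerical illustration of U_K |x> = |Kx>, built as a q^l x q^l
# permutation matrix over the basis states indexed by F_q^l.
import itertools
import numpy as np

q, l = 3, 2
K = np.array([[1, 1],
              [0, 1]])                     # invertible over F_3

basis = list(itertools.product(range(q), repeat=l))  # all x in F_q^l
index = {x: i for i, x in enumerate(basis)}

U = np.zeros((q**l, q**l))
for x in basis:
    y = tuple(K.dot(x) % q)                # y = Kx over F_q
    U[index[y], index[x]] = 1              # maps |x> to |y>

assert np.allclose(U @ U.T, np.eye(q**l))  # a permutation, hence unitary
\end{verbatim}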
For a matrix $V\in \F_q^{m\times n}$, the notation $\ket{V}$ indicates the state $\ket{v_{11}v_{21}\hdots v_{m1}}$$\ket{v_{12}v_{22}\hdots v_{m2}}$$\hdots$$\ket{v_{1n}v_{2n}\hdots v_{mn}}$ where $v_{ij}$ is the element of $V$ in the $i$th row and $j$th column. Let $A\in \F_q^{m\times n}$ and let $K$ be an invertible $m\times m$ matrix; then we can transform the state $\ket{A}$ to $\ket{KA}$ by the unitary operation $U_K^{\otimes n}$. We refer to this operation as applying $K$ on $\ket{A}$ to obtain $\ket{KA}$. \subsection{Quantum secret sharing (QSS)} A quantum secret sharing scheme is a protocol to encode a secret in an arbitrary quantum state and share it among $n$ parties such that certain subsets of parties, called authorized sets, can recover the secret (recoverability) while certain subsets of parties, called unauthorized sets, do not have any information about the secret (secrecy). The access structure $\Gamma$ of a QSS scheme is defined as \begin{eqnarray*} \Gamma=\{X\subseteq[n]:X\text{ is an authorized set}\}. \end{eqnarray*} A QSS scheme is called a perfect quantum secret sharing scheme if any subset of the $n$ parties is either an authorized set or an unauthorized set, and non-perfect otherwise. For non-perfect schemes, some subsets of the $n$ parties are allowed to have partial information about the secret. These sets are called intermediate sets. A concrete realization of a quantum secret sharing scheme is specified by giving an encoding for the basis states of the secret. An encoding has to satisfy the properties of recoverability and secrecy to realize a QSS scheme. \begin{definition} A quantum secret sharing scheme for an access structure $\Gamma$ is the encoding and distribution of a secret in an arbitrary quantum state among $n$ parties such that \begin{itemize} \item (Recoverability) any authorized set $A\in\Gamma$ can recover the secret, \textit{i.e.}, there exists some recovery operation which can decode the secret from the shares in $A$, \item (Secrecy) any unauthorized set $B\notin\Gamma$ has no information about the secret. \end{itemize} \end{definition} In a pure state QSS scheme, the encoding is such that the combined state of all the shares is a pure state whenever the secret is in a pure state. Otherwise, the scheme is called a mixed state scheme. \begin{lemma}[Mixed state schemes from pure state schemes] \text{\cite[Theorem 3]{gottesman00}} \label{lm:mixed-to-pure} Any mixed state QSS scheme can be described as a pure state QSS scheme with one share discarded. \end{lemma} The no-cloning theorem implies that the complement of an authorized set is an unauthorized set. In pure state schemes, the converse also holds, as given in the following result. \begin{lemma}[Authorized sets in pure state schemes] \cite[Corollary 2]{gottesman00} \label{lm:pu-auth} In a pure state quantum secret sharing scheme, the complement of any unauthorized set is an authorized set. \end{lemma} We use the following notation for the parameters of QSS schemes: $q$ is the fixed dimension of all the qudits in the scheme, $m$ gives the size of the secret in qudits and $w_i$ gives the size of the $i$th share in qudits. \subsection{Quantum threshold secret sharing (QTS)} An important class of perfect quantum secret sharing schemes is the class of quantum threshold secret sharing schemes. In threshold schemes, a set of parties is either authorized or unauthorized based solely on the number of parties in the set.
\begin{definition}[Quantum threshold scheme] A $((k,n))$ quantum threshold secret sharing scheme for $1<k\leq n\leq 2k-1$ is a QSS scheme with $n$ parties where any $k$ or more parties can recover the secret, but $k-1$ or fewer parties have no information on the secret. \end{definition} If $n\geq 2k$, then there exist two non-overlapping authorized sets which can give two copies of the secret, thus violating the no-cloning theorem. Cleve {\em et al.} \cite{cleve99} have given a construction for $((k,n))$ QTS schemes as follows. Consider the case of $n=2k-1$. Take $m=1$ and a prime $q\geq 2k-1$. The encoding for a basis state of the secret $s\in \mathbb{F}_q$ is given by the following superposition. \begin{gather} \ket{s}\ \mapsto\sum_{\underline{r}\in\mathbb{F}_q^{k-1}}\ket{v_1(\underline{r},s)}\ket{v_2(\underline{r},s)}\hdots\ket{v_n(\underline{r},s)} \label{eq:qts-enc} \end{gather} Here $\underline{r}=(r_1,r_2,\hdots,r_{k-1})\in\mathbb{F}_q^{k-1}$ and $v_i(\underline{r},s)\in\mathbb{F}_q$ is the evaluation of the polynomial \begin{eqnarray*} v_i(\underline{r},s)=r_1+r_2x_i+\hdots+r_{k-1}x_i^{k-2}+sx_i^{k-1}, \end{eqnarray*} where $x_1,x_2,\hdots,x_n$ are distinct constants from $\mathbb{F}_q$. Each of the $n$ parties is given one qudit from the encoded state. For example, the encoding for a $((k=2,n=3))$ QTS scheme will be as follows, where each qudit has dimension three. \begin{eqnarray*} \ket{s}\ \mapsto\sum_{r\in\mathbb{F}_3}&\ket{r}\ket{r+s}\ket{r+2s} \end{eqnarray*} To obtain a $((k,n))$ QTS scheme for $n<2k-1$, simply discard $2k-1-n$ shares after encoding the secret in the above scheme. \begin{lemma} \text{\cite{cleve99}} The encoding in \eqref{eq:qts-enc} provides a $q$-ary $((k,n))$ quantum threshold secret sharing scheme for $n\leq 2k-1$ with the following parameters. \begin{gather*} q\geq 2k-1\text{ (prime)}\\ m=1\\ w_1=w_2=\cdots=w_n=1 \end{gather*} \label{lm:cleve-qts} \end{lemma} This scheme can be used to encode a secret of $m>1$ qudits by individually encoding each qudit of the secret. \subsection{Storage and communication complexity} The storage cost of a secret sharing scheme is directly related to the sizes of the shares. In this context, the following result has been shown about the size of a share. \begin{lemma}[Share size, \cite{gottesman00}] \label{lm:qts-opt} The size of each share in a threshold QSS scheme should be at least as large as the size of the secret. \end{lemma} Clearly, the QTS scheme in Lemma \ref{lm:cleve-qts} has optimal storage cost. Apart from the storage cost, which depends on how the secret is encoded and distributed among the parties, it is also important to consider how much quantum communication is needed during secret recovery. There are two prominent approaches to reconstructing the secret. In the first approach, the parties from an authorized set could collaborate among themselves by means of nonlocal operations to recover the secret. In the second approach, they can communicate all or part of their shares to a third party called the combiner. In this paper, we focus on the latter method of secret reconstruction. \begin{definition}[Communication cost for an authorized set] The communication cost for an authorized set in a QSS scheme is the number of qudits sent to the combiner by the parties in that set for recovering the secret. \end{definition} For the same encoding of the secret, it is possible to have different recovery operations for a given authorized set, thus giving multiple values for the communication cost.
However, the communication cost above is defined with respect to the particular recovery operation specified by the QSS scheme for an authorized set. \begin{definition}[Communication cost for $d$ in QTS] The communication cost for a threshold $d\geq k$ in a $((k,n))$ quantum threshold secret sharing scheme is the maximum communication cost over all the authorized sets of size $d$. This will be denoted as $\text{CC}_n(d)$. \end{definition} Thus, for the QTS scheme defined in Lemma \ref{lm:cleve-qts}, the communication cost for secret recovery is CC$_n(k)=k$. \subsection{Fixed \titlemath{d} communication efficient QTS (CE-QTS)} Assume that the combiner in a QTS scheme has access to more than $k$ parties in the scheme. Then, the $((k,n))$ QTS scheme will still have the same communication cost of $k$ qudits. However, by allowing each party in a $((k,n))$ QTS scheme to send only a part of its share to the combiner, it is possible to reduce this communication cost further. \begin{definition}[CE-QTS] A $((k,n))$ threshold secret sharing scheme is said to be communication efficient if, for some $d$ such that $k<d\leq n$, \begin{equation} \text{CC}_n(d)<\text{CC}_n(k). \label{eq:ce-ineq} \end{equation} Such schemes are denoted as $((k, n, d))$ CE-QTS schemes. \label{def:ce-qts} \end{definition} Here, $d$ is a fixed integer satisfying $k<d\leq n$. {The strict inequality \eqref{eq:ce-ineq} in this definition is necessary because any $((k,n))$ scheme can allow recovery from $d$ parties by communicating some $k$ shares from these $d$ parties, thus achieving CC$_n(d)=\text{CC}_n(k)$.} A construction for $((k,n,d))$ CE-QTS schemes based on Staircase codes is given in \cite{senthoor19}. For $n=2k-1$, this CE-QTS scheme is constructed as follows. The encoding for a basis state of the secret $\underline{s}=(s_1,s_2,\hdots,s_m)\in\mathbb{F}_q^m$ is given by the following superposition \begin{eqnarray} \ket{s_1 s_2\hdots s_m}\ \mapsto\sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}} \bigotimes_{i=1}^{2k-1}\ket{c_{i1} c_{i2}\ldots c_{im}} \label{eq:fixed-d-ce-qts-enc} \end{eqnarray} where $\underline{r}=(r_1,r_2,\hdots,r_{m(k-1)})\in\mathbb{F}_q^{m(k-1)}$ and $c_{ij}$ is the $(i,j)$th entry of the matrix \begin{equation} C=VY.\nonumber \end{equation} Here, $V$ is a Vandermonde matrix defined as \begin{eqnarray} V=\left[\begin{array}{cccc} 1 & x_1 & \ldots & x_1^{d-1}\\ 1 & x_2 & \ldots & x_2^{d-1}\\ \vdots & \vdots& \ddots & \vdots\\ 1 & x_n & \ldots & x_n^{d-1} \end{array} \right],\nonumber \end{eqnarray} where $x_1, x_2,..., x_n$ are distinct non-zero constants from $\F_q$. The matrix $Y$ is given by {\small \begin{eqnarray} Y=\left[ \begin{array}{c:c} \begin{matrix} s_1 \\ s_2 \\ \vdots \end{matrix} & \text{\huge 0}_{(m-1)\times(m-1)} \\ \cdashline{2-2}[4pt/4pt] s_m & \begin{matrix} \hspace{-0.25in}r_{k-m+1} & r_{k-m+2} & \ \hdots & \ \ \ \ \ r_{k-1} \end{matrix} \\ \cdashline{1-2}[4pt/4pt] \begin{matrix} r_1\\ r_2\\ \vdots\\ r_{k-1} \end{matrix} & \begin{matrix} \ r_k & r_{2(k-1)+1} & \hdots & r_{(m-1)(k-1)+1} \\ \ r_{k+1} & r_{2(k-1)+2} & \hdots & r_{(m-1)(k-1)+2} \\ \ \vdots & \vdots & \ddots & \vdots \\ \ r_{2(k-1)} & r_{3(k-1)} & \hdots & r_{m(k-1)}\end{matrix} \end{array} \right]. \nonumber \end{eqnarray} } After encoding, the first set of $m$ qudits is given to the first party, the second set of $m$ qudits to the second party, and so on up to the $n$th party. When the combiner accesses $k$ parties, each of these $k$ parties sends all its $m=d-k+1$ qudits.
When the combiner accesses $d$ parties, each of these $d$ parties sends only its first qudit. \begin{lemma} \text{\cite{senthoor19}} The encoding in \eqref{eq:fixed-d-ce-qts-enc} provides a $q$-ary $((k,n,d))$ communication efficient quantum threshold secret sharing scheme with the following parameters. \begin{gather*} q>2k-1\text{ (prime)} \\m=d-k+1 \\w_1=w_2=\hdots=w_n=d-k+1 \\\text{CC}_n(k)=k(d-k+1) \\\text{CC}_n(d)=d. \end{gather*} \label{lm:senthoor-ce-qts} \end{lemma} To obtain a $((k,n,d))$ CE-QTS scheme for $n<2k-1$, simply discard $2k-1-n$ shares after encoding the secret in the above scheme. By Lemma \ref{lm:qts-opt}, this scheme has an optimal storage cost. It is also proved in \cite{senthoor19} that this scheme gives an optimal communication cost when the combiner accesses $d$ parties, for the specific case of $n=2k-1$. In this paper, we prove that the optimality of this scheme holds for $n<2k-1$ as well. For example, for $k=3, d=5$, this construction gives a $((3,5,5))$ CE-QTS scheme with the parameters \begin{subequations} \label{eq:eg-senthoor-ce-qts} \begin{gather} q=7\\ m=3\\ w_1=w_2=\hdots=w_5=3\\ \text{CC}_n(3)=9,\ \text{CC}_n(5)=5. \end{gather} \end{subequations} The matrices $V$ and $Y$ in this scheme are given by \begin{equation*} V= \begin{bmatrix} 1&1&1&1&1\\1&2&4&1&2\\1&3&2&6&4\\1&4&2&1&4\\1&5&4&6&2 \end{bmatrix} \text{and\ } Y= \left[ \begin{tabular}{ccc} $s_1$&0&0\\$s_2$&0&0\\$s_3$&$r_1$&$r_2$\\$r_1$&$r_3$&$r_5$\\$r_2$&$r_4$&$r_6$ \end{tabular} \right]. \end{equation*} The encoding for the scheme is given by the following mapping \begin{eqnarray} \label{eq:enc_qudits_3_5} \ket{\underline{s}}\mapsto\sum_{\underline{r}\in\F_7^6}\ket{c_{11}c_{12}c_{13}}&&\!\!\ket{c_{21}c_{22}c_{23}}\ket{c_{31}c_{32}c_{33}} \\[-0.5cm]&&\ \ \ \ \ket{c_{41}c_{42}c_{43}}\ket{c_{51}c_{52}c_{53}}\nonumber \end{eqnarray} where $\underline{s}=(s_1,s_2,s_3)$ indicates a basis state of the quantum secret, $\underline{r}=(r_1,r_2,\hdots,r_6)$ and $c_{ij} $ is the $(i,j)$th entry of the matrix \begin{equation} C=VY.\nonumber \end{equation} The encoded state in \eqref{eq:enc_qudits_3_5} can also be written as \begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,0,r_1,r_3,r_4)} \ket{v_1(0,0,r_2,r_5,r_6)} \\\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,0,r_1,r_3,r_4)} \ket{v_2(0,0,r_2,r_5,r_6)} \\\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,0,r_1,r_3,r_4)} \ket{v_3(0,0,r_2,r_5,r_6)} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,0,r_1,r_3,r_4)} \ket{v_4(0,0,r_2,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,0,r_1,r_3,r_4)} \ket{v_5(0,0,r_2,r_5,r_6)}. \end{array} \end{align*} Here $v_i(\cdot)$ indicates the polynomial evaluation given by \begin{eqnarray*} v_i(f_1,f_2,f_3,f_4,f_5)&=&f_1+f_2 x_i+f_3 x_i^2+f_4 x_i^3+f_5 x_i^4, \end{eqnarray*} where the expression $v_i(\underline{s},r_1,r_2)$ denotes $v_i(s_1,s_2,s_3,r_1,r_2)$. Here we have taken $x_i=i$ for $1\leq i\leq 5$. When the combiner accesses $k=3$ parties, each party sends its complete share. When $d=5$, the combiner downloads the first qudit of each share from all five parties. The secret recovery for this scheme is explained in detail in Appendix \ref{ap:ceqts-full-eg}. \subsection{Ramp quantum secret sharing (RQSS)} The QTS scheme defined earlier is a perfect QSS scheme, \textit{i.e.}, any set of parties is either authorized or unauthorized. But it is also possible to design a non-perfect threshold scheme such that a set of parties may be neither authorized nor unauthorized.
A generalization of the threshold schemes leads to ramp quantum secret sharing schemes. \begin{definition}[Ramp secret sharing schemes] A $((t,n;z))$ ramp quantum secret sharing scheme for $1\leq z<t\leq n\leq t+z$ is a QSS scheme with $n$ parties where any $t$ or more parties can recover the secret, but $z$ or fewer parties have no information on the secret. \end{definition} {Note that the notation for RQSS schemes should not be confused with that of CE-QTS schemes.} When $z=t-1$, the ramp scheme is identical to a $((t,n))$ perfect threshold scheme. For $z<t-1$, there are sets which may not be able to reconstruct the secret but can have partial information about the secret. Ogawa {\em et al.} \cite{ogawa05} provided a construction for $((t,n;z))$ ramp QSS schemes for $n\leq t+z$ as follows. Consider the case of $n=t+z$. Take $m=t-z$ and a prime $q>t+z$. The encoding for a basis state of the secret $\underline{s}=(s_1,s_2,\hdots,s_m)\in\mathbb{F}_q^m$ is given by the superposition \begin{eqnarray} \ket{s_1 s_2\hdots s_m}\mapsto\sum_{\underline{r}}\ket{u_1(\underline{s},\underline{r}),u_2(\underline{s},\underline{r}),\hdots,u_n(\underline{s},\underline{r})}.\ \ \ \label{eq:rqss-enc} \end{eqnarray} Here $\underline{r}=(r_1,r_2,\hdots,r_z)\in\mathbb{F}_q^z$ and $u_i(\underline{s},\underline{r})$ is the polynomial evaluation \begin{eqnarray} u_i(\underline{s},\underline{r})=s_1&+&s_2 x_i+\hdots+s_m x_i^{m-1}\nonumber \\&&+r_1 x_i^m+r_2 x_i^{m+1}+\hdots+r_z x_i^{t-1},\ \ \ \nonumber \end{eqnarray} where $x_1,x_2,\hdots,x_n$ are distinct non-zero constants from $\mathbb{F}_q$. \begin{remark} A $((t,n;z))$ ramp QSS scheme can be obtained from a $((t,n+\ell;z))$ scheme by simply dropping some $\ell$ shares. \label{re:ramp-by-dropping} \end{remark} Thus, this construction gives $((t,n;z))$ ramp schemes for any $n\leq t+z$. For example, an encoding for a $((t=3,n=4;z=1))$ ramp QSS scheme will be as follows, where each qudit has dimension 5. \begin{eqnarray} \ket{s_1 s_2}\ \mapsto\sum_{r_1\in\mathbb{F}_5}&\ket{s_1+s_2+r_1}\ket{s_1+2s_2+4r_1}\ \ \ \ &\nonumber \\[-0.4cm]&\ \ \ \ \ket{s_1+3s_2+4r_1}\ket{s_1+4s_2+r_1}&\nonumber \end{eqnarray} Each party is given one of the qudits from the encoded state. \begin{lemma} \text{\cite{ogawa05}} The encoding in \eqref{eq:rqss-enc} provides a $q$-ary $((t,n;z))$ ramp quantum secret sharing scheme for $z<t, n\leq t+z$ with the following parameters. \label{lm:ogawa-ramp} \begin{gather*} q>t+z\text{ (prime)}\\ m=t-z\\ w_1=w_2=\hdots=w_n=1. \end{gather*} \end{lemma} This scheme can be used to encode a secret of $m=\ell(t-z)$ qudits by individually encoding every set of $t-z$ qudits in the secret. For $t=k, z=k-1$, this scheme is very similar to the $((k,n))$ QTS scheme in Lemma \ref{lm:cleve-qts}. \begin{lemma} \label{lm:rqss-opt} \text{\cite[Corollary 2]{ogawa05}} The share size averaged over all parties in a $((t,n;z))$ ramp QSS scheme should be at least as large as $\frac{1}{t-z}$ times the size of the secret. \end{lemma} Note that the bound on the storage cost in ramp QSS is in terms of the average share size rather than the individual share size. Clearly, the RQSS scheme from Lemma \ref{lm:ogawa-ramp} achieves this bound. \subsection{Quantum information theory} We briefly recall some of the terms of quantum information theory and introduce the notation used in the paper. For further reading, we refer the reader to \cite{nielsen00}.
The von Neumann entropy of a quantum system $A$ with density matrix $\rho_A$ is given by \begin{equation} \mathsf{S}(A)=-\tr(\rho_A\ \text{log}\ \rho_A)=-\sum_{i=1}^{M_A}\lambda_i\ \text{log}\ \lambda_i,\nonumber \end{equation} where $\{\lambda_i\}$ are the eigenvalues of $\rho_A$ acting on a Hilbert space $\mathcal{H}_A$ of dimension $M_A$. The maximum value of $\mathsf{S}(A)$ is given by \begin{equation} \mathsf{S}(A)\leq\log M_A. \label{eq:max-entropy} \end{equation} Consider a bipartite quantum system $AB$ whose density matrix $\rho_{AB}$ acts on the Hilbert space $\mathcal{H}_A\otimes\mathcal{H}_B$. The joint quantum entropy of $AB$ is defined as \begin{equation} \mathsf{S}(AB)=-\tr(\rho_{AB}\ \text{log}\ \rho_{AB}).\nonumber \end{equation} It satisfies two important properties. \begin{eqnarray} \mathsf{S}(AB)\leq \mathsf{S}(A)+\mathsf{S}(B) \label{eq:sub-addi} \\\mathsf{S}(AB)\geq|\mathsf{S}(A)-\mathsf{S}(B)| \label{eq:araki-lieb} \end{eqnarray} The property \eqref{eq:sub-addi} is called subadditivity and \eqref{eq:araki-lieb} is called the Araki-Lieb inequality. The mutual information between two quantum systems $A$ and $B$ is defined as \begin{equation} I(A:B)=\mathsf{S}(A)+\mathsf{S}(B)-\mathsf{S}(AB).\nonumber \end{equation} Consider an operation $\mathcal{W}$ acting on the system $B$ and let the resulting state be represented by the system $B'$, \textit{i.e.}, $\rho_{B'}=\mathcal{W}(\rho_B)$. Then the data processing inequality states that \begin{equation} I(A:B')\leq I(A:B), \end{equation} where $A$ is another quantum system. \begin{lemma}[Quantum data processing inequality \cite{schumacher96}] Consider an arbitrary quantum state $Q$ with a reference system $\mathcal{R}$ such that $Q\mathcal{R}$ is in a pure state. If $\mathcal{W}$ is a quantum operation which takes the state $Q$ to $Q'$, then \begin{eqnarray} \mathsf{S}(Q)\geq\mathsf{S}(Q')-\mathsf{S}(\mathcal{R}Q') \nonumber \end{eqnarray} with equality achieved if and only if the original state $Q$ can be completely recovered from $Q'$. \end{lemma} \section{Universal CE-QTS: A First Look}\label{s:example} In this section, we take the first steps towards a formal treatment of universal communication efficient quantum threshold schemes. After defining them, we illustrate the gains in communication complexity for a suitably designed quantum threshold scheme. Later sections in this paper provide constructions for such universal communication efficient quantum secret sharing schemes. \begin{definition}[Universal CE-QTS] A $((k,n))$ threshold secret sharing scheme is said to be universal communication efficient if, for any $d_i$ and $d_j$ such that $k\leq d_i<d_j\leq n$, $\text{CC}_n(d_j)<\text{CC}_n(d_i)$. Such schemes are denoted as $((k,n,*))$ universal CE-QTS schemes. \end{definition} In other words, in universal CE-QTS schemes, $\text{CC}_n(n)<\text{CC}_n(n-1)<\hdots<\text{CC}_n(k+1)<\text{CC}_n(k)$. {Similar to Definition \ref{def:ce-qts}, this definition also requires a strict reduction in the communication cost CC$_n(d)$ for increasing values of $d$.} \subsection{An example for universal CE-QTS} Consider the example of a $((k=3,n=5,*))$ universal CE-QTS scheme with the following parameters. \begin{subequations} \begin{gather} q=7 \\m=3 \\w_1=w_2=\hdots=w_5=3 \\\text{CC}_5(3)=9,\ \text{CC}_5(4)=8,\ \text{CC}_5(5)=5.
\end{gather} \end{subequations} The encoding for the scheme is given by the following mapping \begin{eqnarray} \label{eq:enc_qudits_3_5_s} \ket{\underline{s}}\mapsto\sum_{\underline{r}\in\F_7^6}\ket{c_{11}c_{12}c_{13}}&&\!\!\ket{c_{21}c_{22}c_{23}}\ket{c_{31}c_{32}c_{33}} \\[-0.5cm]&&\ \ \ \ \ket{c_{41}c_{42}c_{43}}\ket{c_{51}c_{52}c_{53}}\nonumber \end{eqnarray} where $\underline{s}=(s_1,s_2,s_3)\in\mathbb{F}_7^3$ indicates a basis state of the quantum secret, $\underline{r}=(r_1,r_2,\hdots,r_6)\in\mathbb{F}_7^6$ and $c_{ij} $ is the $(i,j)$th entry of the matrix \begin{equation*} C=VY. \end{equation*} Here the matrices $V$ and $Y$ are defined as follows. \begin{equation*} V= \begin{bmatrix} 1&1&1&1&1\\1&2&4&1&2\\1&3&2&6&4\\1&4&2&1&4\\1&5&4&6&2 \end{bmatrix} \text{\ \ and\ \ \ } Y= \left[ \begin{tabular}{ccc} $s_1$&0&0\\$s_2$&$r_1$&0\\$s_3$&$r_2$&$r_3$\\$r_1$&$r_3$&$r_5$\\$r_2$&$r_4$&$r_6$ \end{tabular} \right]. \end{equation*} The encoded state in \eqref{eq:enc_qudits_3_5_s} can also be written as \begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,r_1,r_2,r_3,r_4)} \ket{v_1(0,0,r_3,r_5,r_6)} \\\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,r_1,r_2,r_3,r_4)} \ket{v_2(0,0,r_3,r_5,r_6)} \\\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,r_1,r_2,r_3,r_4)} \ket{v_3(0,0,r_3,r_5,r_6)} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)}. \end{array} \end{align*} Here $v_i(\cdot)$ indicates the polynomial evaluation given by \begin{eqnarray*} v_i(f_1,f_2,f_3,f_4,f_5)&=&f_1+f_2 x_i+f_3 x_i^2+f_4 x_i^3+f_5 x_i^4, \end{eqnarray*} and the expression $v_i(\underline{s},r_1,r_2)$ denotes $v_i(s_1,s_2,s_3,r_1,r_2)$. Here, we have taken $x_i=i$ for $1\leq i\leq 5$. When the combiner accesses $d=5$ parties, each of them sends the first qudit of its share. When $d=4$, the combiner downloads the first two qudits of each share of the four parties contacted. When $d=3$, the combiner downloads all three qudits of the share of each of the three parties contacted. (For clarity, the qudits accessible to the combiner have been highlighted in blue in the description below.) Consider the case when $d=5$, \textit{i.e.}, the first qudits from all five parties are accessed. \begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}}\ket{v_1(0,r_1,r_2,r_3,r_4)} \ket{v_1(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}}\ket{v_2(0,r_1,r_2,r_3,r_4)} \ket{v_2(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}}\ket{v_3(0,r_1,r_2,r_3,r_4)} \ket{v_3(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_4(\underline{s},r_1,r_2)}}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_5(\underline{s},r_1,r_2)}}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{align*} Applying the operation $U_{V^{-1}}$ on these five qudits, we obtain \begin{align*} \bl{\ket{\underline{s}}}\sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \ket{v_1(0,r_1,r_2,r_3,r_4)} \ket{v_1(0,0,r_3,r_5,r_6)} \\\ket{v_2(0,r_1,r_2,r_3,r_4)} \ket{v_2(0,0,r_3,r_5,r_6)} \\\ket{v_3(0,r_1,r_2,r_3,r_4)} \ket{v_3(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_1}}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_2}}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{align*} Here, the three qudits containing the basis state of the secret are not entangled with any of the other qudits.
Thus, any arbitrary superposition of the basis states can be recovered with the above step. Consider the case when $d=4$. Assume that the first four parties are accessed. The first two qudits from the four parties are sent to the combiner. \begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,r_1,r_2,r_3,r_4)}} \ket{v_1(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,r_1,r_2,r_3,r_4)}} \ket{v_2(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,r_1,r_2,r_3,r_4)}} \ket{v_3(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)}} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{align*} Applying the operation $U_{K_1}$ on the set of four second qudits, where $K_1$ is the inverse of $V_{[4]}^{[2,5]}$, we obtain \begin{eqnarray} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}\ket{r_1}} \ket{v_1(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}\ket{r_2}} \ket{v_2(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}\ket{r_3}} \ket{v_3(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_4(\underline{s},r_1,r_2)}\ket{r_4}} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)}. \end{array} \nonumber \end{eqnarray} Then, on applying the operators $L_6$, $L_5$, $L_3$ and $L_3$ on the pairs of qudits $\ket{r_2}\ket{v_1(\underline{s},r_1,r_2)}$, $\ket{r_2}\ket{v_2(\underline{s},r_1,r_2)}$, $\ket{r_2}\ket{v_3(\underline{s},r_1,r_2)}$ and $\ket{r_2}\ket{v_4(\underline{s},r_1,r_2)}$ respectively, we obtain \begin{eqnarray*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,0)}\ket{r_1}} \ket{v_1(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_2(\underline{s},r_1,0)}\ket{r_2}} \ket{v_2(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_3(\underline{s},r_1,0)}\ket{r_3}} \ket{v_3(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_4(\underline{s},r_1,0)}\ket{r_4}} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)}. \end{array} \end{eqnarray*} Applying the operation $U_{K_2}$ on the set of four first qudits, where $K_2$ is the inverse of $V_{[4]}^{[4]}$, we obtain the following state. \begin{eqnarray*} \bl{\ket{\underline{s}}} \sum_{\underline{r}\in\mathbb{F}_7^6} \hspace{-0.1cm} \begin{array}{l} \bl{\ket{r_1}} \ket{v_1(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_2}} \ket{v_2(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_3}} \ket{v_3(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_1}\ket{r_4}} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array}\nonumber \end{eqnarray*} \vspace{-0.3cm} \begin{equation} \label{eq:entangled_secret} \vspace{-0.1cm} \end{equation} We disentangle the basis state $\ket{\underline{s}}$ from the rest of the qudits by applying the operator $U_{K_3}$ on $\ket{r_1}\ket{r_2}\ket{r_3}\ket{r_4}$ to get $\ket{r_1}\ket{r_2}\ket{r_3}\ket{v_5(0,r_1,r_2,r_3,r_4)}$ and then applying $U_{K_4}$ on $\ket{s_1}\ket{s_2}\ket{s_3}\ket{r_1}\ket{r_2}$ to get $\ket{s_1}\ket{s_2}\ket{s_3}\ket{r_1}\ket{v_5(\underline{s},r_1,r_2)}$.
Here $K_3$ and $K_4$ are given by \begin{equation} K_3=\left[ \begin{tabular}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\\hline \multicolumn{4}{c}{$V_{\{5\}}^{[2,5]}$} \end{tabular} \right] \text{\ and\ \ } K_4=\left[ \begin{tabular}{ccccc} 1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\\hline \multicolumn{5}{c}{$V_{\{5\}}$} \end{tabular} \right]\nonumber \end{equation} Now, we obtain \begin{eqnarray*} \hspace{-0.1cm} \bl{\ket{\underline{s}}} \sum_{\underline{r}\in\mathbb{F}_7^6} \hspace{-0.1cm} \begin{array}{l} \bl{\ket{r_1}} \ket{v_1(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_5(\underline{s},r_1,r_2)}} \ket{v_2(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_3}} \ket{v_3(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_1}\ket{v_5(0,r_1,r_2,r_3,r_4)}} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{eqnarray*} \vspace{-0.7cm} \begin{eqnarray} \\&&\hspace{-0.6cm}=\bl{\ket{\underline{s}}} \sum_{\substack{(r_1,r_2,r_3,r_4',\\r_5,r_6)\in\mathbb{F}_7^6}} \begin{array}{l} \bl{\ket{r_1}} \ket{v_1(0,0,r_3,r_5,r_6)} \\\bl{\ket{v_5(\underline{s},r_1,r_2)}} \ket{v_2(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_3}} \ket{v_3(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_1}\ket{r_4'}} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{r_4'} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \label{eq:disentangled_secret_before} \\&&\hspace{-0.6cm}=\bl{\ket{\underline{s}}} \sum_{\substack{(r_1,r_2',r_3,r_4',\\r_5,r_6)\in\mathbb{F}_7^6}} \begin{array}{l} \bl{\ket{r_1}} \ket{v_1(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_2'}} \ket{v_2(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_3}} \ket{v_3(0,0,r_3,r_5,r_6)} \\\bl{\ket{r_1}\ket{r_4'}} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{r_2'}\ket{r_4'} \ket{v_5(0,0,r_3,r_5,r_6)}. \end{array} \label{eq:disentangled_secret} \end{eqnarray} The variable change in \eqref{eq:disentangled_secret_before} is possible because the qudits $\sum_{r_4\in\mathbb{F}_7}\bl{\ket{v_5(0,r_1,r_2,r_3,r_4)}}\ket{v_5(0,r_1,r_2,r_3,r_4)}$ give the uniform superposition $\sum_{r_4'\in\mathbb{F}_7}\bl{\ket{r_4'}}\ket{r_4'}$ independent of $r_1,r_2,r_3,r_5,r_6$. The variable change from $r_2$ to $r_2'$ can also be obtained similarly. Now, the secret is disentangled from the rest of the qudits. Thus, any arbitrary superposition of the basis states can be recovered with the above steps for $d=4$. In the case when $d=3$, each of the three contacted parties sends all three qudits in its share. \begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,r_1,r_2,r_3,r_4)} \ket{v_1(0,0,r_3,r_5,r_6)}} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,r_1,r_2,r_3,r_4)} \ket{v_2(0,0,r_3,r_5,r_6)}} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,r_1,r_2,r_3,r_4)} \ket{v_3(0,0,r_3,r_5,r_6)}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{align*} The secret recovery for $d=3$ also uses operations similar to those in the case of $d=4$. For the sake of completeness, the secret recovery for $d=3$ in this scheme has been explained in Appendix \ref{ap:univ-ceqts-d-3}. In all the three cases, the first step was to recover the basis state $\ket{\underline{s}}=\ket{s_1s_2s_3}$. The recovery is complete at this point if the secret is any one of the basis states (identical to a classical secret). But the quantum secret can be in an arbitrary superposition of basis states. To recover this quantum secret, the three qudits containing information on the secret need to be disentangled from the rest of the qudits.
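The basis-state stage for $d=4$ can likewise be checked classically. The sketch below (our illustration, assuming \texttt{numpy} and \texttt{sympy}; the values of the secret and randomness are arbitrary) mimics the action of $U_{K_1}$, the $L_c$ operators and $U_{K_2}$ on the first two qudits of parties $1$ to $4$. Of course, such a classical check says nothing about entanglement, which is exactly why the second stage described above is required.
\begin{verbatim}
import numpy as np
from sympy import Matrix

q = 7
V = np.vander(np.arange(1, 6), 5, increasing=True) % q

def inv_mod(M):
    """Inverse of an integer matrix over F_7."""
    return np.array(Matrix(M.tolist()).inv_mod(q)).astype(int)

s = np.array([4, 6, 1])     # arbitrary basis state (s1, s2, s3)
r = np.array([5, 2, 0, 3])  # arbitrary values of (r1, r2, r3, r4)
first = V[:4, :5].dot(np.concatenate([s, r[:2]])) % q  # v_j(s, r1, r2)
second = V[:4, 1:5].dot(r) % q                         # v_j(0, r1, r2, r3, r4)

r_rec = inv_mod(V[:4, 1:5]).dot(second) % q  # U_{K_1}: recover (r1, r2, r3, r4)
first = (first - r_rec[1] * V[:4, 4]) % q    # L_c adders cancel the r2 terms
print(inv_mod(V[:4, :4]).dot(first) % q)     # U_{K_2}: -> [4 6 1 5] = (s1,s2,s3,r1)
\end{verbatim}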
For example, the first three qudits in \eqref{eq:entangled_secret}, though they carry the information on the basis states, are still entangled with the other qudits, whereas the corresponding qudits in \eqref{eq:disentangled_secret} are disentangled from the other qudits. \subsection{Comparison with fixed \titlemath{d} CE-QTS} In contrast with the above scheme, for the standard $((3,5))$ QSS scheme due to Cleve {\em et al.}, 3 qudits need to be communicated for the recovery of 1 qudit of secret whenever the combiner accesses three or more parties. The $((3,5,5))$ CE-QTS scheme from \cite{senthoor19} described in \eqref{eq:eg-senthoor-ce-qts} gives a better communication cost of 5/3 qudits per 1 qudit of secret when the combiner accesses 5 parties. But this scheme does not provide the flexibility of contacting four parties communication-efficiently. The scheme provided above can solve that problem. It provides communication efficiency at both $d=5$ and $d=4$. At $d=4$, the above scheme gives a communication cost of 8 qudits to recover a secret of 3 qudits, \textit{i.e.} 8/3 qudits per qudit of secret. However, this is not the optimal communication cost for $d=4$, because the $((3,5,4))$ fixed $d$ CE-QTS scheme from \cite{senthoor19} achieves 2 qudits per qudit of secret at $d=4$. The constructions proposed in the coming sections can give a $((3,5,*))$ universal CE-QTS scheme with the same communication efficiency as the fixed $d$ CE-QTS schemes of \cite{senthoor19} at both $d=4$ and $d=5$. \section{Concatenation Framework for Constructing Communication Efficient QTS Schemes}\label{s:framework} In this section, we develop a framework for constructing communication efficient quantum secret sharing schemes. We propose a general framework which can be used to derive many classes of CE-QTS schemes. Ramp secret sharing schemes and threshold schemes are the central ingredients of the proposed constructions. First, we give a systematic method to construct CE-QTS schemes where the combiner can contact $d$ parties and reconstruct the secret; here $d$ is fixed prior to secret distribution. Then, we provide a systematic method to construct CE-QTS schemes where the combiner can contact any $d$ parties to reconstruct the secret; here $d$ can be chosen arbitrarily by the combiner after secret distribution. \subsection{Fixed \titlemath{d} CE-QTS from ramp QSS} \label{ss:iii_a} Suppose we have a $((k,n,d))$ CE-QTS scheme. Consider any authorized set of $d\geq k$ parties. Since this is an authorized set, we can reconstruct the secret. In a communication efficient scheme, these $d$ parties do not communicate their entire shares to the combiner. Each party communicates only a portion of its share. To gain intuition, let us assume that the portion communicated by a party when a set of $d$ parties is contacted by the combiner is independent of the choice of the remaining $d-1$ parties. Since this is a $((k,n,d))$ scheme, any $k-1$ or fewer {\em portions}, \textit{i.e.} partial shares, cannot reveal any information about the secret. However, $k$ or more portions may reveal partial information about the secret, while $d$ out of all the $n$ portions can completely recover the secret. Therefore, the set of portions communicated by all the $n$ parties to the combiner can be modelled as a $((d,n;k-1))$ ramp QSS scheme. Now let us see if we can build a $((k,n))$ QTS scheme out of this $((d,n;k-1))$ ramp QSS scheme.
If $k$ of these $n$ parties attempt to reconstruct the secret with just their shares from the ramp scheme, then their $k$ shares may not be enough for the reconstruction of the secret. The combiner will need shares from $d-k$ more parties of the ramp scheme for the additional information required to recover the secret for sure. So we extend the ramp scheme to a $((d,n+d-k;k-1))$ scheme by allowing for $d-k$ more new shares in the previous ramp scheme. These additional $d-k$ shares of the ramp scheme are distributed to the $n$ parties after encoding by a $((k,n))$ threshold scheme so that even if only $k$ parties are contacted by the combiner these $d-k$ extra shares necessary for secret recovery can be recovered. The full scheme is illustrated in Fig.~\ref{fig:fixed_d_rqss} and formally proved in Theorem~\ref{th:ramp-fixed-ceqts}. \begin{figure}[ht] \begin{center} \hspace{-0.5cm} \begin{tikzpicture}[scale=0.7, every node/.style={scale=0.78}] \draw (-0.6,-0.05) -- (1.5,-0.05); \draw (1.5,-0.05) -- (1.5,5.5); \draw (1.5,5.5) -- (-0.6,5.5); \draw (-0.6,5.5) -- (-0.6,-0.05); \node at (0.45,4) {\small $((t',$$n';$$z'))$}; \node at (0.45,3.5) {\small ramp QSS}; \node at (0.45,2.5) {\small $n'$$=$$n$$+$$d$$-$$k$}; \node at (0.45,2) {\small $t'$$=$$d$}; \node at (0.45,1.5) {\small $z'$$=$$k$$-$$1$}; \draw[->] (-1,2.75) -- (-0.6,2.75); \node at (-1.35,2.75) {$\ket{\phi}$}; \draw[->] (1.5,5.25) -- (6.15,5.25); \draw[->] (1.5,4.75) -- (6.15,4.75); \draw[->] (1.5,4.25) -- (6.15,4.25); \node at (1.8,3.75) {$\vdots$}; \draw[->] (1.5,3) -- (6.15,3); \draw [decorate,decoration={brace,amplitude=6}](2,5.55) -- (2,2.7); \node at (2,5.75) {\small $n$}; \draw[->] (1.5,1.8) -- (3.25,1.8); \draw[->] (1.5,1.3) -- (3.25,1.3); \node at (1.8,0.925) {$\vdots$}; \draw[->] (1.5,0.3) -- (3.25,0.3); \draw [decorate,decoration={brace,amplitude=6}](2,2.1) -- (2,0); \node at (2.1,2.3) {\small $d-k$}; \node at (1,6.4) {\small Layer 1 encoding}; \draw [dashed] (2.75,-0.7) -- (2.75,6.7); \node at (4.15,6.4) {\small Layer 2 encoding}; \begin{scope}[xshift=-0.5cm] \draw (3.75,-0.2) -- (5.1,-0.2); \draw (5.1,-0.2) -- (5.1,2.3); \draw (5.1,2.3) -- (3.75,2.3); \draw (3.75,2.3) -- (3.75,-0.2); \node at (4.425,1.3) {\small $((k$,$n))$}; \node at (4.425,0.9) {\small QTS}; \draw (5.1,2.05) -- (9.75,2.05); \node at (5.35,1.675) {$\vdots$}; \draw (5.1,1.05) -- (10.5,1.05); \draw (5.1,0.55) -- (11,0.55); \draw (5.1,0.05) -- (11.5,0.05); \draw [decorate,decoration={brace,amplitude=6}](5.5,2.35) -- (5.5,-0.25); \node at (5.6,-0.5) {\small $n$}; \end{scope} \begin{scope}[xshift=3.5cm] \draw (5.75,2.05) -- (5.75,2.9); \draw (6.5,1.05) -- (6.5,4.15); \draw (7,0.55) -- (7,4.65); \draw (7.5,0.05) -- (7.5,5.15); \draw[->] (5.75,2.9) -- (5.5,2.9); \draw[->] (6.5,4.15) -- (5.5,4.15); \draw[->] (7,4.65) -- (5.5,4.65); \draw[->] (7.5,5.15) -- (5.5,5.15); \end{scope} \draw [dashed] (5.6,-0.7) -- (5.6,6.7); \begin{scope}[xshift=1cm,yshift=0.25cm] \draw (5.15,5.25) -- (5.15,2.5); \draw (5.75,5.25) -- (5.75,2.5); \draw (8,5.25) -- (8,2.5); \draw (8,5.25) -- (5.15,5.25); \node at (5.45,5) {$A_1$}; \node at (6.875,5) {$B_1$}; \node at (8.3,5.15) {$S_1$}; \draw (5.15,4.75) -- (8,4.75); \node at (5.45,4.5) {$A_2$}; \node at (6.875,4.5) {$B_2$}; \node at (8.3,4.65) {$S_2$}; \draw (5.15,4.25) -- (8,4.25); \node at (5.45,4) {$A_3$}; \node at (6.875,4) {$B_3$}; \node at (8.3,4.15) {$S_3$}; \draw (5.15,3.75) -- (8,3.75); \node at (5.45,3.5) {$\vdots$}; \node at (6.875,3.5) {$\vdots$}; \node at (8.3,3.5) {$\vdots$}; \draw (5.15,3) -- (8,3); \node at (5.45,2.75) 
{$A_n$}; \node at (6.875,2.75) {$B_n$}; \node at (8.3,2.9) {$S_n$}; \draw (5.15,2.5) -- (8,2.5); \end{scope} \end{tikzpicture} \captionsetup{justification=justified} \caption{Concatenation framework for constructing a $((k,n,d))$ CE-QTS scheme with a $((d,n+d-k;k-1))$ ramp QSS scheme.} \label{fig:fixed_d_rqss} \end{center} \end{figure} \begin{algorithm}[ht] \caption{Encoding for a $((k,n,d))$ CE-QTS scheme using a $((d,n+d-k;k-1))$ ramp QSS and a $((k,n))$ QTS.} \begin{algorithmic}[1] \REQUIRE {Secret $\ket{\phi}$} \ENSURE {Shares of the $n$ parties, $S_j$ for $1\leq j\leq n$} \STATE Encode the secret $\ket{\phi}$ using the $((d,n+d-k;k-1))$ ramp QSS scheme. Denote the $j$th share generated by the ramp scheme as $A_j$ for $1\leq j\leq n+d-k$ such that the last $d-k$ shares have the largest share sizes. \STATE Encode the quantum state in $(A_{n+1},A_{n+2},\hdots,A_{n+d-k})$ using a $((k,n))$ quantum threshold scheme. Denote the $j$th share of this QTS scheme as $B_j$ for $1\leq j\leq n$. \STATE Distribute $S_j=(A_j,B_j)$ to the $j$th party for $1\leq j\leq n$. \end{algorithmic} \label{alg:fixed_d_rqss_enc} \end{algorithm} \begin{algorithm}[ht] \caption{Secret recovery for the $((k,n,d))$ CE-QTS scheme from the encoding in Algorithm \ref{alg:fixed_d_rqss_enc}} \begin{algorithmic}[1] \REQUIRE {Shares of $k$ parties or layer 1 from any $d$ parties} \ENSURE {Secret $\ket{\phi}$} \IF{the combiner has access to only $k$ shares} \PARSTATE{Download full shares from the $k$ parties.} \PARSTATE{Use layer 2 from the $k$ parties to recover the input to the $((k,n))$ QTS scheme \textit{i.e.} $(A_{n+1},A_{n+2},\hdots,A_{n+d-k})$.} \PARSTATE{Use $(A_{n+1},A_{n+2},\hdots,A_{n+d-k})$ and layer 1 from the $k$ parties to get $d$ shares of the ramp QSS scheme and recover the secret $\ket{\phi}$.} \ELSIF{the combiner has access to $d$ shares} \PARSTATE{Download layer 1 from the $d$ parties.} \PARSTATE{Use layer 1 from the $d$ parties to get $d$ shares of the ramp QSS scheme and recover the secret $\ket{\phi}$.} \ENDIF \end{algorithmic} \label{alg:fixed_d_rqss_rec} \end{algorithm} \begin{theorem}[Concatenation framework for fixed $d$ CE-QTS] \label{th:ramp-fixed-ceqts} A $((k,n,d))$ CE-QTS scheme exists if a $((d,n+d-k;k-1))$ ramp QSS scheme and a $((k,n))$ QTS scheme exist. The encoding for this scheme is given in Algorithm~\ref{alg:fixed_d_rqss_enc} and the recovery in Algorithm~\ref{alg:fixed_d_rqss_rec}. \end{theorem} \begin{proof} The proof is by giving an explicit construction of a $((k,n,d))$ CE-QTS scheme from the given ramp QSS and $((k,n))$ threshold schemes. The encoding for the $((k,n,d))$ CE-QTS scheme is as given in Algorithm~\ref{alg:fixed_d_rqss_enc}. Each share $S_j$ consists of two portions $(A_j, B_j)$. We say that $A_j$ forms the first layer of the share $S_j$ and $B_j$ the second layer. Here, for any $L\subseteq[n]$, $S_L$ denotes $\{S_j\}_{j\in L}$ and $|S_j|$ gives the number of qudits in the share $S_j$. Similar notations are used for $\{A_j\}$ and $\{B_j\}$ as well. \begin{compactenum}[(i)] \item \textit{Recoverability}: The secret recovery for the $((k,n,d))$ CE-QTS scheme is as given in Algorithm~\ref{alg:fixed_d_rqss_rec}. When the combiner accesses any set of $d$ parties, it just needs layer 1 of these parties to recover the secret from the underlying ramp scheme. But when accessing only $k$ parties, the combiner needs $d-k$ more shares of the ramp scheme to recover the secret. These $d-k$ extra shares are recovered from the $((k,n))$ scheme using the qudits from the second layer.
\item \textit{Secrecy}: Consider any set $L\subseteq [n]$ such that $|L|=k-1$. By Lemma \ref{lm:mixed-to-pure}, let $E_1$ be the purifying state for the ramp QSS scheme such that the shares $A_1,A_2,\hdots,A_{n+d-k},E_1$ give a pure state scheme encoding $\ket{\phi}$. Similarly, let $E_2$ be the purifying state for the perfect QSS scheme such that the shares $B_1,B_2,\hdots,B_n,E_2$ give a pure state scheme encoding $(A_{n+1},A_{n+2},\hdots,A_{n+d-k})$. Overall, $S_{[n]}\cup\{E_1,E_2\}$ gives a pure state scheme encoding $\ket{\phi}$. If it can be proved that $S_{[n]\backslash L}\cup\{E_1,E_2\}$ can recover the secret, then by the no-cloning theorem, $S_L$ has no information on the secret, which proves the secrecy property of the CE-QTS scheme of Algorithm~\ref{alg:fixed_d_rqss_enc}. Assume that Alice has the shares $S_{[n]\backslash L}\cup\{E_1,E_2\}$. Clearly, $B_L$ is an unauthorized set in the QTS scheme. By Lemma \ref{lm:pu-auth}, $B_{[n]\backslash L}\cup\{E_2\}$ is an authorized set for recovering $(A_{n+1},A_{n+2},\hdots,A_{n+d-k})$. Thus, Alice recovers $(A_{n+1},A_{n+2},\hdots,A_{n+d-k})$ from the QTS scheme. Now, Alice has the shares $A_{[n+d-k]\backslash L}\cup\{E_1\}$. $A_L$ is an unauthorized set in the ramp QSS scheme. By Lemma \ref{lm:pu-auth}, $A_{[n+d-k]\backslash L}\cup\{E_1\}$ is an authorized set in the ramp QSS scheme. Hence, Alice recovers the secret $\ket{\phi}$ from the ramp QSS scheme. \item \textit{Communication efficiency}: Consider the set of $d$ parties given by $D\subseteq[n]$ which has the maximum communication cost among all sets of $d$ parties. By definition, the communication cost of this set of $d$ parties equals CC$_n(d)$. Pick a $K\subset D$ such that $|K|=k$. \begin{eqnarray} \text{CC}_n(k)&=&\sum_{j\in K}|S_j|=\sum_{j\in K}(|A_j|+|B_j|)\nonumber \\&\geq&\sum_{j\in K}|A_j|+\sum_{j\in K}\sum_{\ell=n+1}^{n+d-k}|A_\ell| \label{eq:layer2-size} \\&=&\sum_{j\in K}|A_j|+k\sum_{\ell=n+1}^{n+d-k}|A_\ell| \nonumber \\&>&\sum_{j\in K}|A_j|+\sum_{\ell=n+1}^{n+d-k}|A_\ell| \label{eq:k-exceeds-1} \\&\geq&\sum_{j\in K}|A_j|+\sum_{j\in D\backslash K}|A_j| \label{eq:largestd_k} \\&=&\sum_{j\in D}|A_j|=\text{CC}_n(d)\nonumber \end{eqnarray} The bound in \eqref{eq:layer2-size} is due to Lemma~\ref{lm:qts-opt} which implies that each share $B_i$ of the QTS scheme is at least as large as the input state given by $(A_{n+1},A_{n+2},\hdots,A_{n+d-k})$. The strict inequality in \eqref{eq:k-exceeds-1} is because $k>1$. The inequality \eqref{eq:largestd_k} is due to the fact that the shares $A_{n+1},A_{n+2},\hdots,A_{n+d-k}$ have the largest sizes among all the $n+d-k$ shares of the ramp scheme. \end{compactenum} This concludes the proof of the theorem. \end{proof} Theorem~\ref{th:ramp-fixed-ceqts} can be used with various ramp QSS and threshold schemes. Note that Theorem~\ref{th:ramp-fixed-ceqts} does not require the alphabet size $q$ to be a prime. The communication complexity of the resulting schemes clearly depends on the underlying ramp QSS scheme and QTS scheme. Here, we propose a construction for a CE-QTS scheme using the ramp QSS scheme proposed by Ogawa {\em et al.}\cite{ogawa05} and the QTS scheme from Cleve {\em et al.}\cite{cleve99}. \begin{corollary}[Concatenated construction for fixed $d$ CE-QTS] \label{co:ramp-fixed-ceqts-constn} A $q$-ary $((k,n,d))$ communication efficient QTS scheme can be constructed using the encoding in Algorithm \ref{alg:fixed_d_rqss_enc} with the following parameters.
\begin{gather*} q>d+k-1\text{ (prime)}\\ m=d-k+1\\ w_1=w_2=\hdots=w_n=d-k+1\\ \text{CC}_n(k)=k(d-k+1)\\ \text{CC}_n(d)=d. \end{gather*} \end{corollary} \begin{proof} Consider the concatenation framework from Theorem~\ref{th:ramp-fixed-ceqts}. Use the ramp scheme from \cite{ogawa05} given in Lemma \ref{lm:ogawa-ramp} and the QTS scheme from \cite{cleve99} given in Lemma \ref{lm:cleve-qts} for the underlying schemes. By Lemma \ref{lm:ogawa-ramp}, the dimension of the qudits has to be a prime $q$ such that $q>d+k-1$. This also satisfies the constraint on the dimension for the QTS scheme from Lemma \ref{lm:cleve-qts}. The size of the secret in the ramp scheme is $m=d-k+1$ qudits. Each share of the ramp QSS is of size one qudit. Thus the first layer of each share in the CE-QTS has one qudit. The input state for the $((k,n))$ QTS will have $d-k$ qudits. By Lemma \ref{lm:cleve-qts}, the size of each share of the QTS scheme is also $d-k$ qudits. Hence, the second layer of each share in the CE-QTS has $d-k$ qudits. In total, each share in the CE-QTS scheme has $w_j=d-k+1$ qudits for $1\leq j\leq n$. When the combiner attempts to recover from just $k$ parties, each of them transmits its entire share of $d-k+1$ qudits. Thus $\text{CC}_n(k)=k(d-k+1)$. When the combiner contacts any $d$ parties, each of them sends a qudit from the first layer, giving $\text{CC}_n(d)=d$. \end{proof} In the CE-QTS scheme as described in Corollary \ref{co:ramp-fixed-ceqts-constn}, note that the dimension of each of the $d-k+1$ qudits in the secret has to be more than $d+k-1$. Compare this with the CE-QTS scheme from \cite{senthoor19} which only requires a smaller dimension of $q>2k-1$. (Refer Table \ref{tab:contributions}.) However, using other ramp schemes in this framework could lead to CE-QTS schemes with qudits of dimension less than or equal to $d+k-1$. \subsection{Universal CE-QTS from ramp QSS}\label{ss:iii_b} Consider an $((n,n;k-1))$ ramp QSS scheme (marked in black in Fig. \ref{fig:var_rqss}). Now, if a combiner has access to only $n-1$ out of the $n$ parties, the combiner will not be able to recover the secret unless he receives one more share from this scheme. If these $n-1$ parties can send the combiner some more qudits containing information about an extra share, then the combiner can recover the secret with this extra share. This flexibility can be achieved by instead taking an $((n,n+1;k-1))$ ramp QSS scheme where the first $n$ shares are given to $n$ parties and the $(n+1)$th share is encoded and distributed among the $n$ parties through an $((n-1,n;k-1))$ scheme (which is indicated in blue in Fig. \ref{fig:var_rqss}). Then, whenever the combiner has access to only $n-1$ parties, he will first decode the $((n-1,n;k-1))$ scheme to recover the extra share and then use the $n-1$ shares from the $((n,n+1;k-1))$ ramp scheme along with this extra share to recover the secret.
\begin{figure}[ht] \begin{center} \hspace{-0.5cm} \begin{tikzpicture}[scale=0.7, every node/.style={scale=0.78}] \draw (-0.6,-0.05) -- (1.5,-0.05); \draw (1.5,-0.05) -- (1.5,5.5); \draw (1.5,5.5) -- (-0.6,5.5); \draw (-0.6,5.5) -- (-0.6,-0.05); \node at (0.45,4) {\small $((t,$$n';$$z))$}; \node at (0.45,3.5) {\small ramp QSS}; \node at (0.45,2.5) {\small $n'$$=$$n$\bl{$+$$1$}}; \node at (0.45,2) {\small $t$$=$$n$}; \node at (0.45,1.5) {\small $z$$=$$k$$-$$1$}; \draw[->] (-1,2.75) -- (-0.6,2.75); \node at (-1.35,2.75) {$\ket{\phi}$}; \draw[->] (1.5,5.25) -- (6.15,5.25); \draw[->] (1.5,4.55) -- (6.15,4.55); \draw[->] (1.5,3.85) -- (6.15,3.85); \node at (1.75,3.125) {$\vdots$}; \draw[->] (1.5,2.4) -- (6.15,2.4); \draw [decorate,decoration={brace,amplitude=6}](1.85,5.55) -- (1.85,2.1); \node at (1.85,5.75) {\small $n$}; \bl{\draw[->] (1.5,0.3) -- (2.85,0.3);} \node at (0.7,6.4) {\small Layer 1 encoding}; \draw [dashed] (2.45,-1.4) -- (2.45,6.7); \node at (4.15,6.4) {\small Layer 2 encoding}; \bl{ \begin{scope}[xshift=-0.5cm,yshift=-0.7cm] \draw (3.35,-0.5) -- (5.25,-0.5); \draw (5.25,-0.5) -- (5.25,2.5); \draw (5.25,2.5) -- (3.35,2.5); \draw (3.35,2.5) -- (3.35,-0.5); \node at (4.3,2) {\small $((t$,$n'$;$z))$}; \node at (4.3,1.5) {\small ramp QSS}; \node at (4.3,1) {\small $n'$$=$$n$}; \node at (4.3,0.5) {\small $t$$=$$n$$-$$1$}; \node at (4.3,0) {\small $z$$=$$k$$-$$1$}; \draw (5.25,2.05) -- (9.75,2.05); \node at (5.5,1.675) {$\vdots$}; \draw (5.25,1.05) -- (10.5,1.05); \draw (5.25,0.55) -- (11,0.55); \draw (5.25,0.05) -- (11.5,0.05); \draw [decorate,decoration={brace,amplitude=6}](5.65,2.35) -- (5.65,-0.25); \node at (5.75,-0.5) {\small $n$}; \end{scope} \begin{scope}[xshift=3.5cm] \draw (5.75,1.35) -- (5.75,2.4); \draw (6.5,0.35) -- (6.5,3.85); \draw (7,-0.15) -- (7,4.55); \draw (7.5,-0.65) -- (7.5,5.25); \draw[->] (5.75,2.4) -- (5.5,2.4); \draw[->] (6.5,3.85) -- (5.5,3.85); \draw[->] (7,4.55) -- (5.5,4.55); \draw[->] (7.5,5.25) -- (5.5,5.25); \end{scope} \draw [dashed] (5.7,-1.4) -- (5.7,6.7); \begin{scope}[xshift=1cm,yshift=0.35cm] \draw (5.15,5.25) -- (5.15,1.7); \draw (6,5.25) -- (6,1.7); \draw (5.15,5.25) -- (6,5.25); \node at (5.6,4.9) {$S_1^{(1)}$}; \node at (8.3,5.15) {$S_1$}; \draw (5.15,4.55) -- (6,4.55); \node at (5.6,4.2) {$S_2^{(1)}$}; \node at (8.3,4.45) {$S_2$}; \draw (5.15,3.85) -- (6,3.85); \node at (5.6,3.5) {$S_3^{(1)}$}; \node at (8.3,3.75) {$S_3$}; \draw (5.15,3.15) -- (6,3.15); \node at (5.6,2.9) {$\vdots$}; \node at (8.3,3.15) {$\vdots$}; \draw (5.15,2.4) -- (6,2.4); \node at (5.6,2.05) {$S_n^{(1)}$}; \node at (8.3,2.3) {$S_n$}; \draw (5.15,1.7) -- (6,1.7); \bl{ \draw (8,5.25) -- (8,1.7); \draw (6,5.25) -- (8,5.25); \node at (7,4.9) {$S_1^{(2)}$}; \draw (6,4.55) -- (8,4.55); \node at (7,4.2) {$S_2^{(2)}$}; \draw (6,3.85) -- (8,3.85); \node at (7,3.5) {$S_3^{(2)}$}; \draw (6,3.15) -- (8,3.15); \node at (7,2.9) {$\vdots$}; \draw (6,2.4) -- (8,2.4); \node at (7.1,2.05) {$S_n^{(2)}$}; \draw (5.15,1.7) -- (8,1.7); } \end{scope} \end{tikzpicture} \captionsetup{justification=justified} \caption{Concatenation of two ramp quantum secret sharing schemes to construct a $((t,n;k-1))$ ramp QSS scheme with flexible $t\in\{n-1,n\}$.} \label{fig:var_rqss} \end{center} \end{figure} Thus, by concatenating an $((n-1,n;k-1))$ ramp scheme which encodes the extra share from an $((n,n+1;k-1))$ ramp scheme, a $((t,n;k-1))$ ramp QSS with a flexible threshold $t\in\{n-1,n\}$ can be designed. 
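To make the sizes concrete, take the running example $k=3$, $n=5$ with a secret of $m=6$ qudits, and suppose each ramp scheme produces shares of $\frac{1}{t-z}$ times its input size (as in the ramp schemes of Ogawa {\em et al.}\cite{ogawa05} used later). A minimal Python sketch (our illustration) of the resulting bookkeeping for the flexible threshold $t\in\{4,5\}$:
\begin{verbatim}
from fractions import Fraction

k, n, m = 3, 5, 6                 # running example; m chosen to be divisible
base   = Fraction(m, n - k + 1)   # share size of the ((5, 6; 2)) ramp scheme
extra  = base                     # the 6th share, to be re-encoded
layer2 = extra / (n - 1 - k + 1)  # share size of the ((4, 5; 2)) ramp scheme

print(n * base)                   # 10 qudits downloaded when 5 parties answer
print((n - 1) * (base + layer2))  # 12 qudits downloaded when 4 parties answer
\end{verbatim}
The two costs, $10$ and $12$ qudits for a $6$-qudit secret, match the general expression $\frac{dm}{d-k+1}$ at $d=5$ and $d=4$ respectively, which is derived below in Corollary~\ref{co:ramp-univ-ceqts-constn}.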
Similarly, a $((k,n,*))$ universal CE-QTS scheme is a QTS scheme in which the secret recovery can happen efficiently for all thresholds $d\in\{k,k+1,\hdots,n\}$. The main idea in our following framework for constructing universal CE-QTS schemes is that this generalization of $d$ can be achieved by concatenating $n-k+1$ ramp schemes with increasing threshold $t$ successively. \begin{figure}[ht] \begin{center} \hspace{-0.2cm} \begin{tikzpicture}[scale=0.52, every node/.style={scale=0.7}] \newcommand\BoxLayer[5] { \tikzset{shift={(#2,#3)}} \draw (0,-0.2) -- (2,-0.2); \draw (2,-0.2) -- (2,3.2); \draw (2,3.2) -- (0,3.2); \draw (0,3.2) -- (0,-0.2); \node at (1,2.6) {RQSS$_{#1}$}; \node at (1,1.6) {\small $n_#1${\scriptsize=}#5}; \node at (1,1) {\small $t_#1${\scriptsize=}#4}; \node at (1,0.4) {\small $z_#1${\scriptsize=}$k$-1}; \draw (2,3) -- (2.55,3); \draw (2.25-0.15,3-0.15) -- (2.25+0.15,3+0.15); \node at (2.25,3.25) {\small $n$}; \tikzset{shift={(-#2,-#3)}} } \draw[->] (-0.25,1.1) -- (0,1.1); \node at (-0.6,1.1) {$\ket{\phi}$}; \BoxLayer{1}{0}{0}{$n$}{$n$+$h$-1} \draw (2,2.5) -- (2.75,2.5); \draw[->] (2,2) -- (2.4,2); \node at (3.45,1.95) {\small To RQSS$_3$}; \draw[->] (2,1.5) -- (2.4,1.5); \node at (3.45,1.45) {\small To RQSS$_4$}; \node at (2.2,1.25) {\small $\vdots$}; \draw (2,0.75) -- (6.55,0.75); \node at (2.2,0.45) {\small $\vdots$}; \draw (2,0) -- (11.9,0); \BoxLayer{2}{3}{2.5}{$n$-1}{$n$+$h$-2} \draw (2.75,2.5) -- (2.75,3.75); \draw[->] (2.75,3.75) -- (3,3.75); \draw[->] (5,5) -- (5.4,5); \node at (6.45,4.95) {\small To RQSS$_3$}; \draw[->] (5,4.5) -- (5.4,4.5); \node at (6.45,4.45) {\small To RQSS$_4$}; \node at (5.2,4) {\small $\vdots$}; \draw (5,3.25) -- (6.55,3.25); \node at (5.2,2.95) {\small $\vdots$}; \draw (5,2.5) -- (11.85,2.5); \node at (6.5,6.5) {$\hdots$}; \BoxLayer{i}{8}{6}{$n$-$i$+1}{$n$+$h$-$i$} \node at (6.9,0.75) {$\hdots$}; \draw (7.25,0.75) -- (7.75,0.75); \draw (7.75,0.75) -- (7.75,6.2); \draw[->] (7.75,6.2) -- (8,6.2); \node at (6.9,3.25) {$\hdots$}; \draw (7.25,3.25) -- (7.5,3.25); \draw (7.5,3.25) -- (7.5,6.7); \draw[->] (7.5,6.7) -- (8,6.7); \node at (7.75,7.2) {$\vdots$}; \draw[->] (7.6,7.5) -- (8,7.5); \node at (6.15,7.5) {\small From RQSS$_{i-2}$}; \draw[->] (7.6,8) -- (8,8); \node at (6.15,8) {\small From RQSS$_{i-1}$}; \draw[->] (10,8) -- (10.4,8); \node at (11.6,8.5) {\small To RQSS$_{i+1}$}; \draw[->] (10,8.5) -- (10.4,8.5); \node at (11.6,8) {\small To RQSS$_{i+2}$}; \node at (10.25,7.25) {$\vdots$}; \draw (10,6) -- (11.8,6); \node at (11.5,10) {$\hdots$}; \BoxLayer{h}{13.5}{9.5}{$k$}{$n$} \tikzset{shift={(4.5,3.5)}} \node at (7.75,-3.5) {$\hdots$}; \draw (8.1,-3.5) -- (8.75,-3.5); \draw (8.75,-3.5) -- (8.75,6); \draw[->] (8.75,6) -- (9,6); \node at (7.7,-1) {$\hdots$}; \draw (8.05,-1) -- (8.5,-1); \draw (8.5,-1) -- (8.5,6.5); \draw[->] (8.5,6.5) -- (9,6.5); \node at (8.8,7) {\small $\vdots$}; \node at (7.65,2.5) {$\hdots$}; \draw (8,2.5) -- (8.25,2.5); \draw (8.25,2.5) -- (8.25,7.25); \draw[->] (8.25,7.25) -- (9,7.25); \node at (8.8,7.7) {\small $\vdots$}; \draw[->] (8.6,8) -- (9,8); \node at (7.1,8) {\small From RQSS$_{h-2}$}; \draw[->] (8.6,8.5) -- (9,8.5); \node at (7.1,8.5) {\small From RQSS$_{h-1}$}; \tikzset{shift={(-4.5,-3.5)}} \draw[->] (2.55,3) -- (2.55,13.5); \draw[->] (5.55,5.5) -- (5.55,13.5); \draw[->] (10.55,9) -- (10.55,13.5); \draw[->] (16.05,12.5) -- (16.05,13.5); \draw (1.5,13.5) -- (1.5,18.5); \node at (2.5,18.8) {Layer 1}; \node at (2.5,16.6) {$\vdots$}; \node at (2.5,14.85) {$\vdots$}; \draw (3.6,13.5) -- (3.6,18.5); \node at 
(5,18.8) {Layer 2}; \node at (5,16.6) {$\vdots$}; \node at (5,14.85) {$\vdots$}; \draw (6.5,13.5) -- (6.5,18.5); \node at (7.5,18.8) {$\hdots$}; \node at (7.5,16.6) {$\vdots$}; \node at (7.5,14.85) {$\vdots$}; \draw (8.55,13.5) -- (8.55,18.5); \node at (10,18.8) {Layer $i$}; \node at (10,16.6) {$\vdots$}; \node at (10,15.65) {$S_j^{(i)}$}; \node at (10,14.85) {$\vdots$}; \draw (11.5,13.5) -- (11.5,18.5); \node at (12.25,18.8) {$\hdots$}; \node at (12.25,16.6) {$\vdots$}; \node at (12.25,14.85) {$\vdots$}; \draw (13,13.5) -- (13,18.5); \node at (14.65,18.8) {Layer $h${\small =}$n$-$k$+$1$}; \node at (14.75,16.6) {$\vdots$}; \node at (14.75,14.85) {$\vdots$}; \draw (16.3,13.5) -- (16.3,18.5); \draw (1.5,18.5) -- (16.3,18.5); \node at (1.1,18.15) {$S_1$}; \draw (1.5,17.75) -- (16.3,17.75); \node at (1.1,17.4) {$S_2$}; \draw (1.5,17) -- (16.3,17); \node at (1.1,16.6) {$\vdots$}; \draw (1.5,16) -- (16.3,16); \node at (1.1,15.65) {$S_j$}; \draw (1.5,15.25) -- (16.3,15.25); \node at (1.1,14.85) {$\vdots$}; \draw (1.5,14.25) -- (16.3,14.25); \node at (1.1,13.9) {$S_n$}; \draw (1.5,13.5) -- (16.3,13.5); \end{tikzpicture} \vspace{0cm} \captionsetup{justification=justified} \caption{Concatenation framework for constructing a $((k,n,*))$ universal CE-QTS scheme by concatenating multiple ramp QSS schemes. Here $d_i=n+1-i$ for $1\leq i\leq n-k+1$ and the $((t_i=d_i,n_i=n+d_i-k;z_i=k-1))$ ramp QSS scheme is denoted by RQSS$_i$.} \label{fig:univ_d_rqss} \end{center} \end{figure} \begin{algorithm}[ht] \caption{Encoding for a $((k,n,*))$ universal CE-QTS scheme} \begin{algorithmic}[1] \REQUIRE {Secret $\ket{\phi}$} \ENSURE {Shares of the $n$ parties, $S_j$ for $1\leq j\leq n$} \STATE Encode the secret $\ket{\phi}$ using the RQSS$_1$ scheme. \FOR{$i=1$ to $n-k+1$} \PARSTATE{Distribute the smallest $n$ shares from the RQSS$_i$ scheme $(S_1^{(i)},S_2^{(i)},\hdots,S_n^{(i)})$ to the $n$ parties. This is called the $i$th layer of the encoding.} \IF{$d_i>k$} \PARSTATE{For all $1\leq\ell\leq d_i-k$, the share $S_{n+\ell}^{(i)}$ goes as part of the input to the RQSS$_{i+\ell}$ scheme.} \PARSTATE{The combined state of all the qudits passed from the previous $i$ ramp schemes to the RQSS$_{i+1}$ scheme is encoded using the RQSS$_{i+1}$ scheme.} \ENDIF \ENDFOR \end{algorithmic} \label{alg:univ_d_rqss_enc} \end{algorithm} \begin{algorithm}[ht] \caption{Secret recovery for the $((k,n,*))$ universal CE-QTS scheme in Algorithm \ref{alg:univ_d_rqss_enc}.} \begin{algorithmic}[1] \REQUIRE {The first $i$ layers of qudits from any $d_i$ parties for any $1\leq i\leq n-k+1$} \ENSURE {Secret $\ket{\phi}$} \STATE Use the $i$th layer of the $d_i$ parties to recover the input state of the RQSS$_i$ scheme. \FOR{$\ell=i-1$ to $1$ step -1} \PARSTATE{Consider the RQSS$_\ell$ scheme. The $\ell$th layer of the $d_i$ parties will give $d_i$ shares of this ramp scheme.} \PARSTATE{Collect $d_\ell-d_i=i-\ell$ more shares of this ramp scheme, one each from the input states recovered from the layers $\ell+1$ to $i$.} \PARSTATE{Use all these $d_\ell=d_i+i-\ell$ shares to recover the input state of the RQSS$_\ell$ scheme.} \ENDFOR \STATE The input state of the RQSS$_1$ scheme gives the secret $\ket{\phi}$. \end{algorithmic} \label{alg:univ_d_rqss_rec} \end{algorithm} \begin{theorem}[Concatenation framework for universal CE-QTS] \label{lm:ramp-univ-ceqts} If $((d_i,n+d_i-k;k-1))$ ramp QSS schemes exist for $1\leq i\leq n-k+1$ where $d_i=n+1-i$, then a $q$-ary $((k,n,*))$ universal communication efficient QTS exists.
The encoding for this scheme is given in Algorithm~\ref{alg:univ_d_rqss_enc} and the recovery in Algorithm~\ref{alg:univ_d_rqss_rec}. \end{theorem} \begin{proof} The proof is by giving a construction for the CE-QTS scheme from the given ramp QSS schemes. The encoding of the $((k,n,*))$ universal CE-QTS is as given in Algorithm \ref{alg:univ_d_rqss_enc}. The $((d_i,n+d_i-k;k-1))$ ramp QSS scheme is referred to as RQSS$_{i}$ here. Here, for any $L\subseteq[n]$, $S_L$ denotes $\{S_j\}_{j\in L}$ and $|S_j|$ gives the number of qudits in the share $S_j$. Similar notations are used for $\{S_j^{(i)}\}$ as well. \begin{compactenum}[(i)] \item \textit{Recoverability}: The secret recovery for the $((k,n,*))$ universal CE-QTS scheme is as given in Algorithm~\ref{alg:univ_d_rqss_rec}. Whenever the combiner accesses $d_i$ parties, each of those parties sends the first $i$ layers to the combiner. Once this is done, the combiner has $d_i$ shares in the RQSS$_i$ scheme. Hence, RQSS$_i$ can be decoded and its input qudits recovered. However, for decoding the RQSS$_\ell$ schemes for $1\leq\ell\leq i-1$, the combiner still needs $d_\ell-d_i=i-\ell$ more shares. For each RQSS$_\ell$, these deficit shares can be provided by the input qudits recovered from the schemes RQSS$_{\ell+1}$, RQSS$_{\ell+2},\hdots,$ RQSS$_i$, one share from each of these $i-\ell$ schemes. This iterative decoding of RQSS$_\ell$ will finally give the secret $\ket{\phi}$ after decoding RQSS$_1$. \item \textit{Secrecy}: Consider a set $J\subset[n]$ such that $|J|=k-1$. By Lemma \ref{lm:mixed-to-pure}, let $E_i$ be the purifying state for the RQSS$_i$ scheme for all $1\leq i\leq n-k+1$. Assume Alice has the set of shares $\{S_{[n]\backslash J},E_1,E_2,\hdots,E_{n-k+1}\}$. For RQSS$_{n-k+1}$, Alice now has the purifying state and every share except some $k-1$ shares. This set of $k-1$ shares in RQSS$_{n-k+1}$ has no information on its qudits. Therefore, by Lemma \ref{lm:pu-auth}, Alice has an authorized set for RQSS$_{n-k+1}$, from which she recovers its input qudits. These qudits will now give one extra share to each of the schemes RQSS$_{n-k}$ till RQSS$_1$. With this extra share, RQSS$_{n-k}$ will have an authorized set, from which Alice recovers its input qudits and retrieves one extra share for each of the schemes RQSS$_{n-k-1}$ till RQSS$_{1}$. By this iterative recovery process, Alice can finally recover the secret $\ket{\phi}$ from RQSS$_1$. Thus, the secret can be recovered from the set of shares $\{S_{[n]\backslash J},E_1,E_2,\hdots,E_{n-k+1}\}$. Hence, by the no-cloning theorem, $S_J$ has no information on the secret, \textit{i.e.} any set of $k-1$ or fewer parties in this scheme has no information on the secret. \item \textit{Communication efficiency}: Here, we will prove that for any $d_i$ such that $k<d_i\leq n$, the communication cost for $d_i$ in our scheme is less than that for $d_i-1$. By definition, CC$_n(d_i)$ is the maximum among the communication costs of all authorized sets of size $d_i$. Let $D\subseteq[n]$ be the authorized set which has this maximum communication cost CC$_n(d_i)$. Let $p\in D$ be one of these $d_i$ parties. Clearly, CC$_n(d_i-1)$ should be greater than or equal to the communication cost of the authorized set given by $D\backslash\{p\}$.
\begin{eqnarray} \text{CC}_n(d_i-1)&\geq&\sum_{j\in D\backslash\{p\}}\sum_{\ell=1}^{i+1}|S^{(\ell)}_j|\nonumber \\&=&\sum_{j\in D\backslash\{p\}}\sum_{\ell=1}^{i}|S^{(\ell)}_j|+\sum_{j\in D\backslash\{p\}}|S^{(i+1)}_j|\nonumber \\\label{eq:temp6} \end{eqnarray} The $d_i-1$ shares in $\{S_j^{(i+1)}\}_{j\in D\backslash\{p\}}$ are from the $((d_i-1,n+d_i-1-k;k-1))$ RQSS$_{i+1}$ ramp scheme. Recall from Remark \ref{re:ramp-by-dropping} that after dropping the remaining $n-k$ shares from the RQSS$_{i+1}$ scheme, this set of shares alone will give a $((d_i-1,d_i-1;k-1))$ ramp scheme which encodes the same state as the RQSS$_{i+1}$ scheme. By Lemma \ref{lm:rqss-opt}, the average share size of this ramp scheme is at least $\frac{1}{d_i-k}$ times the total input size. \begin{eqnarray} \frac{1}{d_i-1}\sum_{j\in D\backslash\{p\}}|S^{(i+1)}_j|\geq\frac{1}{d_i-k}\sum_{j=1}^{i}|S^{(j)}_{n+i+1-j}|\ \ \nonumber \end{eqnarray} Applying this bound in \eqref{eq:temp6}, we obtain \begin{flalign} &\text{CC}_n(d_i-1)&\nonumber \\&\ \ \ \geq\sum_{j\in D\backslash\{p\}}\sum_{\ell=1}^{i}|S^{(\ell)}_j|+\frac{d_i-1}{d_i-k}\sum_{\ell=1}^{i}|S^{(\ell)}_{n+i-\ell+1}|&\nonumber \\&\ \ \ >\sum_{j\in D\backslash\{p\}}\sum_{\ell=1}^{i}|S^{(\ell)}_j|+\sum_{\ell=1}^{i}|S^{(\ell)}_{n+i-\ell+1}|& \label{eq:temp7} \\&\ \ \ \geq\sum_{j\in D\backslash\{p\}}\sum_{\ell=1}^{i}|S^{(\ell)}_j|+\sum_{\ell=1}^{i}|S^{(\ell)}_p|& \label{eq:largestdi_k} \\&\ \ \ =\sum_{j\in D}\sum_{\ell=1}^{i}|S^{(\ell)}_j| =\text{CC}_n(d_i)\nonumber \end{flalign} The strict inequality in \eqref{eq:temp7} is because $k>1$. The inequality \eqref{eq:largestdi_k} is due to the fact that, for each $\ell$, the shares $S_{n+1}^{(\ell)},S_{n+2}^{(\ell)},\hdots,S_{n+d_\ell-k}^{(\ell)}$ have the largest sizes among the $n+d_\ell-k$ shares of the RQSS$_\ell$ scheme. This concludes the proof. \end{compactenum} \end{proof} With the above framework, the following construction for a universal CE-QTS can be provided by using the ramp QSS scheme by Ogawa {\em et al.}\cite{ogawa05}. \begin{corollary}[Concatenated construction for universal CE-QTS] \label{co:ramp-univ-ceqts-constn} A $q$-ary $((k,n,*))$ universal communication efficient QTS scheme can be constructed using the encoding in Algorithm \ref{alg:univ_d_rqss_enc} with the following parameters. \begin{gather*} q>n+k-1\text{ (prime)}\\ m=\textup{lcm}\{1,2,\hdots,n-k+1\} \\ w_1=w_2=\hdots=w_n=m\\ \textup{CC}_n(d)=\frac{dm}{d-k+1} \text{ for }d\in\{k,k+1,\hdots,n\} \end{gather*} \end{corollary} \begin{proof} Consider the universal CE-QTS scheme from Algorithm \ref{alg:univ_d_rqss_enc} and use the schemes from \cite{ogawa05} given in Lemma \ref{lm:ogawa-ramp} for the underlying ramp schemes. Clearly, the dimension $q$ of each qudit should be above $t_i+z_i=d_i+k-1=n+k-i$ for all $1\leq i\leq n-k+1$. Therefore, $q>n+k-1$. Let $e_i$ be the number of qudits in the input state of the ramp QSS scheme RQSS$_i$ corresponding to the $i$th layer. The secret is the input to the scheme RQSS$_1$. Clearly, $e_1=m$. For $i>1$, the input state of the ramp QSS scheme RQSS$_i$ has one share each from the ramp QSS schemes RQSS$_1$ to RQSS$_{i-1}$. \begin{equation} e_i=\sum_{\ell=1}^{i-1}|S_{n+i-\ell}^{(\ell)}| \label{eq:recursion-1} \end{equation} Recall that, in the $((t_i,n_i;z_i))$ ramp schemes given in Lemma~\ref{lm:ogawa-ramp}, the size of each share is $\frac{1}{t_i-z_i}$ times the secret size, \textit{i.e.} for any $1\leq j\leq n+d_i-k$, \begin{equation} |S_j^{(i)}|=\frac{e_i}{d_i-k+1}.
\label{eq:recursion-2} \end{equation} Solving the recursion from \eqref{eq:recursion-1} and \eqref{eq:recursion-2} with the initial condition $e_1=m$, we obtain, for $2\leq i\leq n-k+1$, \begin{eqnarray} e_i&=&\frac{m}{d_{i-1}-k+1}. \nonumber \end{eqnarray} Note that for each $1\leq i\leq n-k+1$, implementing the scheme RQSS$_i$ requires $e_i$ to be divisible by $t_i-z_i=d_i-k+1$. This can be achieved by taking $m=\text{lcm}\{1,2,\hdots,n-k+1\}$. From \eqref{eq:recursion-2}, the size of the $j$th share from RQSS$_i$ is \begin{gather*} |S_j^{(1)}|=\frac{m}{(d_1-k+1)} \\|S_j^{(i)}|=\frac{m}{(d_i-k+1)(d_{i-1}-k+1)} \end{gather*} for $2\leq i\leq n-k+1$. The total communication cost during secret recovery from a set of any $d_i$ parties given by $D$ can be calculated as \begin{equation} \text{CC}_n(d_i)=\sum_{j\in D}\sum_{\ell=1}^{i}|S_j^{(\ell)}| =\frac{d_i m}{d_i-k+1} \nonumber \end{equation} Also, for $1\leq j\leq n$, the size of the $j$th share is given by \begin{eqnarray} w_j=\sum_{i=1}^{n-k+1}|S_j^{(i)}| =\sum_{i=1}^{n-k+1}\frac{e_i}{d_i-k+1} =m. \nonumber \end{eqnarray} \end{proof} The above corollary gives a construction based on the concatenation framework for a universal CE-QTS scheme. In the next section, we give another construction for universal CE-QTS schemes. \section{Universal CE-QTS schemes based on Staircase codes}\label{s:iv} In this section, we propose an alternate construction of universal CE-QTS based on classical communication efficient secret sharing schemes constructed using Staircase codes\cite{bitar18}. While constructing QSS schemes based on classical secret sharing schemes, there are some important differences from the classical setting. For QSS schemes, the secret recovery should recover not just the basis states but also any arbitrary superposition of the basis states. Hence the qudits containing the secret have to be disentangled from the remaining qudits, thus making the secret recovery in QSS schemes more involved. \subsection{Encoding}\label{ss:iv_a} \noindent Communication efficient quantum secret sharing schemes for particular values of $k$ and $n=2k-1$ can be designed to work for all possible values of $d$ in the range $k\leq d\leq n$. We introduce the following terms before discussing the scheme. For $1\leq i\leq k$, \begin{subequations} \label{eq:ceqts-params} \begin{gather} d_i=n+1-i=2k-i\\ m=\textup{lcm}\{k,k-1,\hdots,1\}\\ a_i=m/(d_i-k+1)\\ b_i=a_i-a_{i-1} \text{ for }i>1,\ b_1=a_1 \end{gather} \end{subequations} Here $m$ is the total number of secret qudits shared. The total number of qudits with each party is also given by $m$. This is consistent with the fact that in a perfect threshold secret sharing scheme the size of the share must be at least as large as the secret \cite{gottesman00,imai03}. Now $a_i$ gives the number of qudits communicated from each accessible share when $d_i$ parties are accessed to recover the secret. This means that $a_id_i$ qudits are communicated to the combiner when $d_i$ parties are contacted. Pick a prime number \begin{equation*} q>2k-1. \end{equation*} Consider the basis state of the secret $\underline{s}=(s_1,s_2,\ldots,s_m)\in\mathbb{F}_q^m$ and $\underline{r}=(r_1, r_2,\ldots,r_{m(k-1)})\in \mathbb{F}_q^{m(k-1)}$.
Entries in $\underline{s}$ are rearranged into the matrix $S$ of size $k\times (m/k)$.\vspace{-0.25cm} \begin{eqnarray} S=\left[\begin{array}{cccc} s_1&s_{k+1}& \cdots & s_{m-k+1}\\ s_2&s_{k+2}& \cdots & s_{m-k+2}\\ \vdots&\vdots& \ddots & \vdots\\ s_k&s_{2k}& \cdots & s_{m}\\ \end{array} \right]\label{eq:secret-ceqts} \end{eqnarray} Entries in $\underline{r}$ are rearranged into $k$ matrices, \textit{i.e.} $R_1$ of size $(k-1) \times b_1$, $R_2$ of size $(k-1)\times b_2$ and so on till $R_k$ of size $(k-1)\times b_k$. \begin{eqnarray} R_1= \left[\begin{array}{cccc} r_1& r_k& \cdots & r_{(a_1-1)(k-1)+1} \\ r_2& r_{k+1}& \cdots & r_{(a_1-1)(k-1)+2}\\ \vdots& \vdots & \ddots & \vdots\\ r_{k-1}& r_{2(k-1)}& \cdots & r_{a_1(k-1)} \end{array} \right]\nonumber \end{eqnarray} For $2\leq i\leq k$, $R_i$ is given by \begin{eqnarray} \!\left[\!\!\begin{array}{cccc} r_{a_{i-1}(k-1)+1}& r_{(a_{i-1}+1)(k-1)+1}& \cdots & r_{(a_i-1)(k-1)+1} \\ r_{a_{i-1}(k-1)+2}& r_{(a_{i-1}+1)(k-1)+2}& \cdots & r_{(a_i-1)(k-1)+2}\\ \vdots& \vdots & \ddots & \vdots\\ r_{(a_{i-1}+1)(k-1)}& r_{(a_{i-1}+2)(k-1)}& \cdots & r_{a_i(k-1)} \end{array}\!\! \right].\nonumber \end{eqnarray} The matrix $C$, called the code matrix, is defined as follows. \begin{eqnarray*} C=VY \end{eqnarray*} where $Y$ is given by \begin{eqnarray*} Y= \left[ \begin{tabular}{c:c:c:c:c} \multirow{4}{*}{$\ S\ $} & {\large \ 0\ } & \multirow{2}{*}{\large 0} & \multirow{4}{*}{$\ \ddots\ $} & \multirow{3}{*}{\large 0}\\ \cdashline{2-2} &\multirow{3}{*}{$D_1$} & &\\ \cdashline{3-3} & & \multirow{2}{*}{$D_2$} &\\ \cdashline{5-5} & & & & $\ \ D_{k-1}\ \ $\\ \cdashline{1-5} \multirow{2}{*}{$R_1$} & \multirow{2}{*}{$R_2$} & \multirow{2}{*}{$R_3$} & \multirow{2}{*}{$\hdots$} & \multirow{2}{*}{$R_k$}\\ & & & &\\ \end{tabular} \right] \end{eqnarray*} and $V$ is an $n\times n$ Vandermonde matrix given by \begin{eqnarray} V=\left[\begin{array}{cccc} 1 & x_1 & \ldots & x_1^{n-1}\\ 1 & x_2 & \ldots & x_2^{n-1}\\ \vdots & \vdots& \ddots & \vdots\\ 1 & x_n & \ldots & x_n^{n-1} \end{array} \right]. \end{eqnarray} where $x_1, x_2,..., x_n$ are distinct non-zero constants from $\mathbb{F}_q$. Here, $D_i$ of size $(k-i)\times b_{i+1}$ is constructed by rearranging the entries in the $i$th row of the matrix $[R_1\ R_2\hdots\ R_i]$. Clearly, $D_i$ contains $a_i=(k-i)b_{i+1}$ entries. The encoding for a universal QTS is given as follows: \begin{eqnarray} \ket{s_1 s_2\hdots s_m}\ \mapsto\sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}} \ \bigotimes_{i=1}^{n}\ \ket{c_{i,1} c_{i,2}\hdots c_{i,m}} \label{eq:enc_qudits_univ_d} \end{eqnarray} where $c_{ij}$ is the entry in $C=VY$ from the $i$th row and $j$th column. After encoding, the $u$th set of $m$ qudits is given to the $u$th party. For example, take $k=3$. The $((k=3,n=5,*))$ scheme will have the following parameters. \begin{gather*} q=7\\ m=\text{lcm}\{1,2,3\}=6\\ w_1=w_2=w_3=w_4=w_5=6\\ d_1=5,d_2=4,d_3=3\\ a_1=2,a_2=3,a_3=6\\ b_1=2,b_2=1,b_3=3 \end{gather*} Then $C$, the code matrix for $k=3$, is given by \begin{eqnarray*} \left[ \begin{tabular}{ccccc} 1&1&1&1&1\\ 1&2&4&1&2\\ 1&3&2&6&4\\ 1&4&2&1&4\\ 1&5&4&6&2 \end{tabular} \right] \left[ \begin{tabular}{cc:c:ccc} $s_1$ & $s_4$ & 0 & 0 & 0 & 0\\ $s_2$ & $s_5$ & $r_1$ & 0 & 0 & 0\\ $s_3$ & $s_6$ & $r_3$ & $r_2$ & $r_4$ & $r_6$\\\hdashline $r_1$ & $r_3$ & $r_5$ & $r_7$ & $r_9$ & $r_{11}$\\ $r_2$ & $r_4$ & $r_6$ & $r_8$ & $r_{10}$ & $r_{12}$ \end{tabular} \right] \end{eqnarray*} The encoding for this $((3,5,*))$ scheme is then given by \eqref{eq:enc_qudits_univ_d}.
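The encoding can be sanity checked classically. The following minimal Python sketch (our illustration, assuming \texttt{numpy} and \texttt{sympy}; the values of $\underline{s}$ and $\underline{r}$ are arbitrary) builds the code matrix $C=VY$ for this $((3,5,*))$ example and verifies that the first $a_1=2$ qudits of every share already determine $S$ and $R_1$ when $d=5$ parties are contacted:
\begin{verbatim}
import numpy as np
from sympy import Matrix

q = 7
V = np.vander(np.arange(1, 6), 5, increasing=True) % q
s = np.array([1, 2, 3, 4, 5, 6])   # arbitrary basis state s1..s6
r = np.arange(1, 13)               # arbitrary values r1..r12

Y = np.array([
    [s[0], s[3],    0,    0,    0,     0],
    [s[1], s[4], r[0],    0,    0,     0],
    [s[2], s[5], r[2], r[1], r[3],  r[5]],
    [r[0], r[2], r[4], r[6], r[8], r[10]],
    [r[1], r[3], r[5], r[7], r[9], r[11]],
]) % q
C = V.dot(Y) % q                   # row u holds the m = 6 qudits of party u

# d = 5: the first a_1 = 2 qudits of each share determine [S; R_1].
V_inv = np.array(Matrix(V.tolist()).inv_mod(q)).astype(int)
print(V_inv.dot(C[:, :2]) % q)     # top 3x2 block equals the secret S
\end{verbatim}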
Note that each entry $c_{ij}$ of the matrix $C$ is a function of $\underline{s}$ and $\underline{r}$. However, the matrices $D_i$ are functions of $\underline{r}$ alone. For a detailed description of this scheme, refer to the appendix in \cite{senthoor20a}. Our encoding matrix is somewhat similar to the matrix used in \cite{bitar18}. However, there are some minor structural differences. Since we are encoding quantum states in superposition, there is no need for generating random bits. Furthermore, due to the no-cloning theorem, the total number of parties cannot exceed $2k-1$. \subsection{Reconstruction of the secret}\label{ss:iv_b} The combiner can reconstruct the secret depending upon the choice of $d$. Once $d=d_i$ is chosen, the combiner contacts a set of any $d_i$ parties to reconstruct the secret. Each of the contacted parties sends $a_i=\frac{m}{d_i-k+1}$ qudits to the combiner. In total, the combiner has $\frac{d_im}{d_i-k+1}=a_id_i$ qudits. With respect to the $((3,5,*))$ example in the previous subsection, suppose that the third party is contacted for reconstruction. If the party belongs to a recovery set of size $d_1=5$, then $a_1=2$ qudits are communicated to the combiner. Similarly, if $d_2=4$, then $a_2=3$ and if $d_3=3$, then $a_3=6$ qudits are sent. The secret reconstruction happens in two stages. First, the basis states of the secret are reconstructed through suitable unitary operations. The classical secret sharing schemes stop the reconstruction at this point. But, the qudits containing the basis states of the secret can be entangled with the remaining qudits. So, in the second stage, the secret is extracted into a set of qudits that are disentangled from the remaining qudits. \begin{lemma}[Secret recovery]\label{lm:recovery} For a $((k,2k-1,*))$ scheme with the encoding given in \eqref{eq:enc_qudits_univ_d}, we can recover the secret from any $d=2k-i$ shares where $1\leq i\leq k$ by downloading only the first $a_i=\frac{m}{d-k+1}$ qudits from each share where $m$ is as given in \eqref{eq:ceqts-params}. \end{lemma} \begin{proof} Each of the $d$ participants sends its first $a_i$ qudits to the combiner for reconstructing the secret. Let $D = \{j_1, j_2, \hdots, j_d\} \subseteq \{1,2,\hdots,2k-1\}$ be the set of $d$ shares chosen and $E=\{j_{d+1},j_{d+2},\hdots,j_{2k-1}\}$ be the complement of $D$. Then, \eqref{eq:enc_qudits_univ_d} can be rearranged as \begin{eqnarray} \sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}} &&\textcolor{blue}{\ket{c_{j_1,1}c_{j_2,1}...c_{j_d,1}} \ket{c_{j_1,2}c_{j_2,2}...c_{j_d,2}}}\nonumber \\[-0.2in]&&\textcolor{blue}{\ \ \ \ \ \ \hdots\ket{c_{j_1,a}c_{j_2,a}...c_{j_d,a}}}\nonumber \\&&\ \ \ket{c_{j_{d+1},1}c_{j_{d+2},1}...c_{j_n,1}} \ket{c_{j_{d+1},2}c_{j_{d+2},2}...c_{j_n,2}}\nonumber \\&&\ \ \ \ \ \ \ \ \hdots\ket{c_{j_{d+1},a}c_{j_{d+2},a}...c_{j_n,a}}\nonumber \\&&\ \ \ \ \ket{c_{1,a+1}c_{2,a+1}...c_{n,a+1}} \ket{c_{1,a+2}c_{2,a+2}...c_{n,a+2}}\nonumber \\&&\ \ \ \ \ \ \ \ \ \ \ \ \hdots\ket{c_{1,m}c_{2,m}...c_{n,m}} \label{eq:acc_qudits} \end{eqnarray} where $a$ denotes $a_i$ and we have highlighted (in blue) the qudits communicated to the combiner. For the sake of exposition we will first cover the case of $i=1$, \textit{i.e.} $d_i=2k-1$, where all the parties are contacted for their first $a_1$ qudits by the combiner. \\\\\textit{Case (i): $i=1$:}\\For $i=1$, $d=2k-1=n$.
Now \eqref{eq:acc_qudits} can be rewritten as \begin{eqnarray*} \sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}} \textcolor{blue}{\ket{V(S,R_1)}}&&\ket{V(0,D_1,R_2)}\ket{V(0,D_2,R_3)}\nonumber \\[-0.5cm]&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hdots\ket{V(0,D_{k-1},R_k)} \end{eqnarray*} where we have slightly abused notation. By $A(B_1,B_2,B_3)$ we actually refer to the matrix product $A\left[\begin{array}{ccc} B_1^t & B_2^t & B_{3}^t\end{array} \right]^t$. Since $V$ is an $n\times n$ Vandermonde matrix, we can apply ${V}^{-1}$ to the state $\ket{V(S, R_1)}$, to obtain \begin{eqnarray*} \textcolor{blue}{\ket{S}}\sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}} \textcolor{blue}{\ket{R_1}}&&\ket{V(0,D_1,R_2)}\ket{V(0,D_2,R_3)}\nonumber \\[-0.5cm]&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hdots\ket{V(0,D_{k-1},R_k)} \end{eqnarray*} We can clearly see that the secret is disentangled from the rest of the qudits. Therefore, arbitrary superpositions can also be recovered. \noindent \\\\\textit{Case (ii): $2\leq i\leq k$: } Under this case, the state of the system is as follows. (This is the same as \eqref{eq:acc_qudits}, only the qudits in possession of the combiner have been highlighted.) \begin{eqnarray*} \sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}} &&\!\!\!\textcolor{blue}{\ket{V_D(S,R_1)}\ \ket{V_D(0,D_1,R_2)}\hdots\ket{V_D(0,D_{i-1},R_i)}}\nonumber \\[-0.5cm]&&\ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\hdots\ket{V_E(0,D_{i-1},R_i)}\nonumber \\&&\ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \end{eqnarray*} We can simplify this state using the fact $V_D(0, D_j, R_{j+1}) = V_D^{[j+1,n]} (D_j, R_{j+1})$. \begin{eqnarray*} =\sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}} &&\textcolor{blue}{\ket{V_D(S,R_1)}\ket{{V_D}^{[2,2k-1]}(D_1,R_2)}}\nonumber \\[-0.5cm]&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textcolor{blue}{\hdots\ket{{V_D}^{[i,2k-1]}(D_{i-1},R_i)}}\nonumber \\&&\ \ \ \ \ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\nonumber \\&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hdots\ket{V_E(0,D_{i-1},R_i)}\nonumber \\&&\ \ \ \ \ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \end{eqnarray*} Since ${V_D}^{[i,2k-1]}$ is a $d\times d$ Vandermonde matrix, the combiner can apply the inverse of ${V_D}^{[i,2k-1]}$ to $\ket{V_D^{[i,n]}(D_{i-1}, R_{i})}$ to transform the state as follows. \begin{eqnarray} \sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}}\hspace{-0.5cm} &&\textcolor{blue}{\ket{V_D(S,R_1)}\ \ket{{V_D}^{[2,2k-1]}(D_1,R_2)}}\nonumber \\[-0.5cm]&&\ \ \ \ \ \ \ \ \textcolor{blue}{\hdots\ket{{V_D}^{[i-1,2k-1]}(D_{i-2},R_{i-1})}\ \ket{D_{i-1}}\ket{R_i}}\nonumber \\&&\ \ \ \ \ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\hdots\ket{V_E(0,D_{i-1},R_i)}\nonumber \\&&\ \ \ \ \ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \end{eqnarray} Note that the matrix $D_{i-1}$ contains elements from the $(i-1)$th row of $[R_1\ R_2\ \hdots\ R_{i-1}]$.
Rearranging the qudits, we get \begin{eqnarray*} \sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}}\hspace{-0.5cm} &&\textcolor{blue}{\ket{V_D(S,R_1)}\ket{{V_D}^{[2,2k-1]}(D_1,R_2)}}\nonumber \\[-0.5cm]&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textcolor{blue}{\hdots\ket{{V_D}^{[i-2,2k-1]}(D_{i-3},R_{i-2})}}\nonumber \\&&\ \ \textcolor{blue}{\ket{W_{i-1}(D_{i-2},R_{i-1})}\ \ket{D_{i-1}\backslash\{R_{i-1}\}}\ket{R_i}}\nonumber \\&&\ \ \ \ \ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\hdots\ket{V_E(0,D_{i-1},R_i)}\nonumber \\&&\ \ \ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \end{eqnarray*} where $D_\ell\backslash\{R_j,R_{j+1},\hdots,R_\ell\}$ indicates a vector with entries from $D_\ell$ which are not in the matrices $R_j,R_{j+1},\hdots,R_\ell$. Here $W_\ell = [{{V_D}^{[\ell,2k-1]}}^t\ \underline{w}_{\ell,k+1}\ \underline{w}_{\ell,k+2}\hdots\underline{w}_{\ell,k+i-\ell}]^t$ for $1\leq\ell\leq i-1$ where $\underline{w}_{\ell,j}$ is a column vector of length $(2k-\ell)$ with one in the $j$th position and zeros elsewhere. $W_\ell$ is a $(2k-\ell)\times(2k-\ell)$ full-rank matrix. Clearly, \begin{equation*} W_\ell \left[\begin{array}{c} D_{\ell-1}\\R_\ell \end{array}\right] =\left[\begin{array}{c} {V_D}^{[\ell,2k-1]}(D_{\ell-1},R_\ell)\\R_{\ell,[\ell,i-1]} \end{array}\right] \end{equation*} Now applying $W_{i-1}^{-1}$ to the state $\ket{W_{i-1}(D_{i-2},R_{i-1})}$, we obtain \begin{eqnarray*} \sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}}\hspace{-0.5cm} &&\textcolor{blue}{\ket{V_D(S,R_1)}\ket{{V_D}^{[2,2k-1]}(D_1,R_2)}}\nonumber \\[-0.5cm]&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textcolor{blue}{\hdots\ket{{V_D}^{[i-2,2k-1]}(D_{i-3},R_{i-2})}}\nonumber \\&&\ \ \ \ \textcolor{blue}{\ket{D_{i-2}}\ket{R_{i-1}}\ \ket{D_{i-1}\backslash\{R_{i-1}\}}\ket{R_i}}\nonumber \\&&\ \ \ \ \ \ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\hdots\ket{V_E(0,D_{i-1},R_i)}\nonumber \\&&\ \ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \end{eqnarray*} Rearranging the qudits, we obtain, \begin{eqnarray*} \sum_{\underline{r}\in\mathbb{F}_q^{m(k-1)}}\hspace{-0.5cm} &&\textcolor{blue}{\ket{V_D(S,R_1)}\ket{{V_D}^{[2,2k-1]}(D_1,R_2)}}\nonumber \\[-0.5cm]&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textcolor{blue}{\hdots\ket{W_{i-2}(D_{i-3},R_{i-2})}}\nonumber \\&&\ \ \textcolor{blue}{\ket{D_{i-2}\backslash R_{i-2}}\ket{R_{i-1}}\ \ket{D_{i-1}\backslash \{R_{i-1},R_{i-2}\}}\ket{R_i}}\nonumber \\&&\ \ \ \ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\hdots\ket{V_E(0,D_{i-1},R_i)}\nonumber \\&&\ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \end{eqnarray*} Repeating this process for $(D_{i-3},R_{i-2})$ through $(S,R_1)$, by applying the inverses of $W_{i-2}, W_{i-3},\hdots W_{1}$ in successive steps to the suitable sets of qudits and rearranging, we obtain, \begin{eqnarray*} \textcolor{blue}{\ket{S}}\!\!\sum_{\substack{\underline{r}\in\\\mathbb{F}_q^{m(k-1)}}}\hspace{-0.5cm} &&\ \ \bl{\ket{R_1}\ket{R_2}\hdots \ket{R_i}}\nonumber \\[-0.7cm]&&\ \ \ \ket{V_E(S,R_1)}\ket{V_E(0,D_1,R_2)}\hdots\ket{V_E(0,D_{i-1},R_i)}\nonumber \\&&\ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \end{eqnarray*} The $m$ qudits corresponding to $\ket{S}$ are still entangled with the other qudits in the system.
Since $D_{i-1}$ is formed by entries from the $(i-1)$th row in $[R_1\ R_2\ \hdots\ R_{i-1}]$, we can rearrange the qudits to obtain \begin{eqnarray*} \textcolor{blue}{\ket{S}}\!\!\sum_{\substack{\underline{r}\in\\\mathbb{F}_q^{m(k-1)}}}\hspace{-0.5cm} &&\ \bl{\ket{R_{1,J_{i-1}}}\ket{R_{2,J_{i-1}}}\hdots \ket{R_{i-1,J_{i-1}}}\ket{D_{i-1}}\ket{R_i}}\nonumber \\[-0.7cm]&&\ \ \ \ \!\!\!\ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\hdots\ket{V_E(0,D_{i-1},R_i)}\nonumber \\&&\ \ \ \ \ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \end{eqnarray*} where $J_\ell=[k-1]\backslash\{\ell\}$ for $1\leq\ell\leq i-1$.\vspace{0.1cm} Consider the $(2k-\ell)\times(2k-\ell)$ full-rank matrix \begin{eqnarray*} P_\ell=\left[ \renewcommand{\arraystretch}{1.5} \begin{tabular}{ccc} $I_{k-\ell+1}$& \multicolumn{2}{c}{\large 0} \\\hdashline \multicolumn{3}{c}{$V_E^{[\ell,2k-1]}$} \\\hdashline \multicolumn{2}{c}{\large \ 0\ }& $I_{k-i}$ \end{tabular} \right] \end{eqnarray*} where $1\leq\ell\leq i-1$. Apply $P_{i-1}$ on $\ket{D_{i-1}}\ket{R_i}$ to obtain \begin{eqnarray*} \textcolor{blue}{\ket{S}}\!\!\sum_{\substack{\underline{r}\in\\\mathbb{F}_q^{m(k-1)}}}\hspace{-0.5cm} &&\ \bl{\ket{R_{1,J_{i-1}}}\ket{R_{2,J_{i-1}}}\hdots \ket{R_{i-1,J_{i-1}}}}\nonumber \\[-0.7cm]&&\ \ \ \ \ \ \ \ \ \ \bl{\ket{D_{i-1}}\ket{V_E(0,D_{i-1},R_i)}\ket{R_{i,[i,k-1]}}}\nonumber \\&&\ \ \ \ \!\!\!\ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\hdots\ket{V_E(0,D_{i-1},R_i)}\nonumber \\&&\ \ \ \ \ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \end{eqnarray*} Now, this can be rearranged to get \begin{eqnarray*} \textcolor{blue}{\ket{S}}\sum_{\substack{(R_1,R_2,\hdots R_{i-1},\\R_{i,[i,k-1]},\\R_{i+1}\hdots R_k)\nonumber \\\in\mathbb{F}_q^{m(k-1)-(i-1)b_i}}}\hspace{-0.5cm} &&\textcolor{blue}{\ket{R_1,R_2,\hdots ,R_{i-1}}\ \ket{R_{i,[i,k-1]}}}\nonumber \\[-1.4cm]&&\ \ \ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\nonumber \\&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hdots\ket{V_E(0,D_{i-2},R_{i-1})}\nonumber \\&&\ \ \ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \\\sum_{\substack{R_{i,[1,i-1]}\\\in\mathbb{F}_q^{(i-1)\times b_i}}}&&\ket{V_E(0,D_{i-1},R_i)}\textcolor{blue}{\ket{V_E(0,D_{i-1},R_i)}} \end{eqnarray*} \begin{eqnarray*} =\textcolor{blue}{\ket{S}}\!\!\!\sum_{\substack{(R_1,R_2,\hdots R_{i-1},\\R_{i,[i,k-1]},\\R_{i+1}\hdots R_k)\nonumber \\\in\mathbb{F}_q^{m(k-1)-(i-1)b_i}}}\hspace{-0.5cm} &&\textcolor{blue}{\ket{R_1,R_2,\hdots ,R_{i-1}}\ \ket{R_{i,[i,k-1]}}}\nonumber \\[-1.4cm]&&\ \ \ket{V_E(S,R_1)}\ \ket{V_E(0,D_1,R_2)}\nonumber \\&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hdots\ket{V_E(0,D_{i-2},R_{i-1})}\nonumber \\&&\ \ \ \ \ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \\&&\ \ \ \ \ \ \sum_{T_i\in\mathbb{F}_q^{(i-1)\times b_i}}\ket{T_i}\textcolor{blue}{\ket{T_i}} \end{eqnarray*} because the state \begin{equation*} \sum_{\substack{R_{i,[1,i-1]}\\\in\mathbb{F}_q^{(i-1)\times b_i}}}\ket{V_E(0,D_{i-1},R_i)}\textcolor{blue}{\ket{V_E(0,D_{i-1},R_i)}} \end{equation*} is a uniform superposition of states $\ket{T_i}\ket{T_i}$ over $T_i\in\mathbb{F}_q^{(i-1)\times b_i}$ independent of the value of $D_{i-1}$ and $R_{i,[i,k-1]}$. 
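The bijectivity underlying this last claim is easy to check classically. The following toy sketch in Python (with illustrative sizes that are not tied to the scheme) verifies that an affine map whose linear part is invertible over $\mathbb{F}_p$ sweeps all of $\mathbb{F}_p^t$ exactly once whatever the shift is; the shift plays the role of the $D_{i-1}$-dependent contribution, which is why the summed-out pair $\ket{T_i}\ket{T_i}$ does not depend on $D_{i-1}$.
\begin{verbatim}
from itertools import product

p, t = 5, 2
A = [[1, 1], [1, 2]]                 # linear part, det = 1, invertible mod 5

def affine(r, c):
    # the shift c models the D_{i-1}-dependent contribution
    return tuple((sum(A[i][j] * r[j] for j in range(t)) + c[i]) % p
                 for i in range(t))

for c in product(range(p), repeat=t):
    image = {affine(r, c) for r in product(range(p), repeat=t)}
    assert len(image) == p ** t      # a bijection, whatever the shift c
\end{verbatim}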
Repeating these operations with all $\ket{R_j}$ for $1\leq j\leq i-1$, we obtain \begin{eqnarray*} \textcolor{blue}{\ket{S}}\sum_{\substack{(R_{i+1}\hdots R_k)\\\in\mathbb{F}_q^{(m-a_i)(k-1)}\\(R_{1,[i,k-1]},\hdots R_{i,[i,k-1]})\\\in\mathbb{F}_q^{(k-i)a_i}}}\hspace{-0.5cm} &&\textcolor{blue}{\ket{R_{1,[i,k-1]},R_{2,[i,k-1]},\hdots R_{i,[i,k-1]}}}\nonumber \\[-1.4cm]&&\ \ \ket{V(0,D_i,R_{i+1})}\hdots\ket{V(0,D_{k-1},R_k)}\nonumber \\\nonumber\\\nonumber \\&&\hspace{-2.4cm}\sum_{\substack{T_1\in\\\mathbb{F}_q^{(i-1)\times b_1}}}\ket{T_1}\textcolor{blue}{\ket{T_1}}\sum_{\substack{T_2\in\\\mathbb{F}_q^{(i-1)\times b_2}}}\ket{T_2}\textcolor{blue}{\ket{T_2}}\hdots\sum_{\substack{T_i\in\\\mathbb{F}_q^{(i-1)\times b_i}}}\ket{T_i}\textcolor{blue}{\ket{T_i}} \end{eqnarray*} At this point, the secret is completely disentangled from the rest of the qudits and the recovery is complete. \end{proof} \subsection{Secrecy}\label{ss:iv_c} In the scheme given by \eqref{eq:enc_qudits_univ_d}, the combiner can recover the secret by accessing $k$ parties (from case (ii) with $i=k$ in the proof of Lemma \ref{lm:recovery}). So, by the no-cloning theorem, the remaining $k-1$ parties in the scheme can have no information about the secret. Thus, this scheme satisfies the secrecy property. With these results in place, we have our central contribution. \begin{theorem}[Staircase construction for universal CE-QTS] The encoding given in \eqref{eq:enc_qudits_univ_d} gives a $((k,n=2k-1,*))$ universal CE-QTS scheme with the following parameters. \label{th:staircase-constn-univ-ceqts} \begin{gather*} q>2k-1\text{ (prime)} \\m=\text{lcm}\{1,2,\hdots,k\} \\w_1=w_2=\hdots=w_n=m \\\text{CC}_n(d)=\frac{dm}{d-k+1} \text{ for }d\in\{k,k+1,\hdots,2k-1\} \end{gather*} \end{theorem} We can compare this universal CE-QTS scheme with the scheme from Corollary \ref{co:ramp-univ-ceqts-constn}. (Refer Table \ref{tab:contributions}.) Both give the same values for the parameters $w_j/m$ and $CC_n(d)/m$. Though the scheme based on Staircase codes gives a better bound on the dimension of the qudits, the concatenated construction could give a smaller secret size for $n<2k-1$. \subsection{Discussion on communication complexity gains} \label{ss:iv_d} In the standard $((k,n))$ QTS scheme from \cite{cleve99}, the secret can be recovered when the combiner communicates with $k$ parties. Here, if the secret is of size $m$ qudits, then the number of qudits communicated to the combiner is $km$. The communication cost per secret qudit is thus $k$ qudits. In the $((k,n,d))$ communication efficient QTS schemes from \cite{senthoor19} and the concatenated construction in Corollary \ref{co:ramp-fixed-ceqts-constn}, the secret can be recovered when the combiner contacts $k$ parties and receives $km$ qudits, where $m=d-k+1$. This again leads to a cost of $k$ qudits per secret qudit. However, when the combiner contacts $d$ parties, where $d$ is a fixed value such that $k\leq d\leq n$, the secret can be recovered with a communication cost of $\frac{dm}{d-k+1}$ qudits. The cost per secret qudit is $\frac{d}{d-k+1}$, which is strictly less than $k$ for $d>k$. In the $((k,n,*))$ universal CE-QTS schemes, the secret can be recovered by the combiner by accessing any $d$ parties, where the number of parties accessed, $k\leq d\leq n$, can itself be chosen by the combiner. For the chosen value of $d$, the secret can be recovered by downloading $\frac{d m}{d-k+1}$ qudits. The communication cost for each qudit of the secret is $\frac{d}{d-k+1}$, which is the same as that of \cite{senthoor19}.
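As a quick illustration of these per-qudit costs, the short Python sketch below tabulates $d/(d-k+1)$ for an illustrative choice $k=5$ (so $n=2k-1=9$; any $k$ would do):
\begin{verbatim}
# Per-secret-qudit download cost: k qudits when contacting k parties
# versus d/(d-k+1) qudits when contacting d parties; illustrative k = 5.
k = 5
for d in range(k, 2 * k):
    print(f"d = {d}:  cost = {d / (d - k + 1):.2f}  (threshold cost = {k})")
# prints 5.00, 3.00, 2.33, 2.00, 1.80 as d goes from 5 to 9
\end{verbatim}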
The communication cost decreases with the increasing number of parties accessed. (Refer Fig. \ref{fig:di-vs-cc}.) However, we are able to achieve this for all possible $d$ using the same scheme, without fixing $d$ a priori. Refer Table \ref{tab:contributions} for a comparison of the different QTS schemes discussed so far. The optimality of our construction with respect to the communication complexity will be discussed in the next section. \begin{figure}[ht] \begin{center} \hspace{-0.5cm} \begin{tikzpicture}[scale=0.9, every node/.style={scale=1}] \draw (5.5,0) -- (6.2,0); \node at (6.2,0) {\rotilde}; \node at (6.3,0) {\rotilde}; \draw[->] (6.35,0) -- (14,0); \node at (14.4,0) {$d_i$}; \draw[->] (5.5,0) -- (5.5,8); \node at (5.5,8.3) {$\text{CC}_n(d_i)$}; \node at (5.5,-0.2) {0}; \draw (7,0) -- (7,-0.1); \node at (7,-0.4) {$k$}; \draw (5.5,7) -- (5.4,7); \node at (5,7) {$k$}; \node at (7,7) {$\circ$}; \draw (7,0) -- (7,6.95); \draw (8,0) -- (8,-0.1); \node at (8,-0.4) {$k$$+$$1$}; \draw (5.5,4) -- (5.4,4); \node at (5,4) {$\frac{k+1}{2}$}; \node at (8,4) {$\circ$}; \draw (8,0) -- (8,3.95); \draw (9,0) -- (9,-0.1); \node at (9,-0.4) {$k$$+$$2$}; \draw (5.5,3) -- (5.4,3); \node at (5,3) {$\frac{k+2}{3}$}; \node at (9,3) {$\circ$}; \draw (9,0) -- (9,2.95); \node at (5,2.5) {$\vdots$}; \node at (10,2.5) {$\circ$}; \draw (10,0) -- (10,2.45); \node at (11,-0.4) {$\hdots$}; \node at (11,1.0) {$\hdots$}; \node at (12,2) {$\circ$}; \draw (12,0) -- (12,1.95); \draw (13,0) -- (13,-0.1); \node at (13,-0.4) {$2k$$-$$1$}; \draw (5.5,1.857) -- (5.4,1.857); \node at (5,1.857) {$\frac{2k-1}{k}$}; \node at (13,1.857) {$\circ$}; \draw (13,0) -- (13,1.807); \end{tikzpicture} \captionsetup{justification=justified} \caption{Communication cost for $d\in\{k,k+1,\hdots,n\}$ in $((k,n=2k-1,*))$ universal CE-QTS schemes from the Concatenated construction and the Staircase construction} \label{fig:di-vs-cc} \end{center} \end{figure} \section{Optimality of CE-QTS schemes} \label{s:v} In this section, we derive lower bounds on the quantum communication complexity of quantum threshold schemes. Our bounds are applicable to both universal and non-universal communication efficient schemes. Specifically, we show that secret recovery from a set of $d$ shares in communication efficient QTS schemes (for both fixed $d$ and universal) requires at least $\frac{d}{d-k+1}$ qudits to be transmitted to the combiner for each qudit in the secret. Then we show that our constructions satisfy these bounds on communication complexity. We also discuss the optimality of our constructions with respect to the storage cost. \subsection{Lower bound on communication complexity} A bound on the communication complexity of $((k,n,d))$ CE-QTS schemes was already shown in \cite{senthoor19} for the special case $n=2k-1$. Here we generalize these bounds to both $((k,n,d))$ and $((k, n, *))$ QTS schemes and also lift the restriction that $n=2k-1$. We first bound the combined size of the partial shares from any $d-k+1$ parties. The generalization of the result from $n=2k-1$ to $n\leq 2k-1$ is mainly due to a difference in our approach to proving this bound. Then we use this to prove the bound on the communication cost, \textit{i.e.}, the combined size of the partial shares from all $d$ parties, in a way similar to \cite{senthoor19}. Our bounds imply that the proposed CE-QTS and universal CE-QTS constructions for all $n\leq 2k-1$ are optimal with respect to the communication cost. First we need the following lemmas.
\begin{lemma}\label{lm:sec-rep} \cite[Theorem~5]{gottesman00} A party having access to an authorized set of shares in a quantum secret sharing scheme can replace the encoded secret with any arbitrary state (of the same dimension as the secret) without disturbing the remaining shares. After this replacement, secret recovery from any of the authorized sets will give only the new state. \end{lemma} \begin{lemma}\label{lm:lim-qc} \cite[Lemma~5]{senthoor19} Even in the presence of pre-existing entanglement between two parties, transmitting an arbitrary quantum state from a Hilbert space of dimension $M$ requires a channel of dimension $M$. \end{lemma} With these two lemmas, we can bound the combined size of the partial shares from any $d-k+1$ parties in the secret recovery from $d$ parties. \begin{lemma}\label{lm:bound-helps} In any $((k,n))$ QSS scheme which recovers a secret of dimension $M$ by accessing a set of $d$ parties, the total communication to the combiner from any $d-k+1$ parties among the $d$ parties is of dimension at least $M$. \end{lemma} \begin{proof} Let $S_1, S_2,\hdots, S_n$ be the shares of the $n$ parties in the $((k,n))$ QSS scheme. By Lemma~\ref{lm:mixed-to-pure}, consider an extra share $E$ for the given $((k,n))$ scheme such that the new QSS scheme with $n+1$ parties thus obtained is a pure state QSS scheme. (This pure state QSS scheme need not be a threshold QSS scheme.) Now, we prove the lemma by means of a communication protocol between Alice and Bob based on this pure state QSS scheme. The objective of the protocol is for Alice to send an arbitrary state $\ket{\psi}$ of dimension $M$ to Bob. First, encode the state $\ket{0}$ using the pure state QSS scheme. Consider the set of $d$ parties $D\subseteq [n]$ where each participant in $D$ can send a part of its share to the combiner to recover the secret. Consider any subset $L\subseteq D$ with $d-k+1$ parties. Bob is given the $k-1$ shares from the parties in $D\backslash L$, which form an unauthorized set. Alice is given the $d-k+1$ shares from $L$, the $n-d$ shares from $[n]\backslash D$ and the extra share $E$. By Lemma~\ref{lm:pu-auth}, the set of shares with Alice forms an authorized set, as this set is the complement of the unauthorized set with Bob. Now, Alice replaces the secret $\ket{0}$ in the scheme with $\ket{\psi}$ (by Lemma~\ref{lm:sec-rep}). Clearly, Bob has no prior information about $\ket{\psi}$, even though he may share some entanglement with Alice due to the qudits he has received so far. Now, if Alice needs to transmit $\ket{\psi}$ to Bob, she needs to transmit some of the qudits with her to Bob so that Bob can use the secret recovery procedure of the underlying QSS scheme to recover $\ket{\psi}$. To achieve this, Alice can transmit to Bob the necessary parts of the $d-k+1$ shares from $L$ (which, along with the necessary parts of the $k-1$ shares from $D\backslash L$ already with him, give complete information about the secret). Applying Lemma~\ref{lm:lim-qc} here, it follows that the communication from the shares in $L$ during the secret recovery from the shares in $D$ must be of dimension at least $M$. \end{proof} Next we use Lemma~\ref{lm:bound-helps} to obtain a lower bound on the communication complexity of the $d$ partial shares. We use the same technique as in \cite{senthoor19} to achieve this.
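As a quick plausibility check of Lemma~\ref{lm:bound-helps}, consider the $((3,5,5))$ scheme of Appendix~\ref{ap:ceqts-full-eg}: each of the $d=5$ contacted parties sends a single qudit, and any $d-k+1=3$ of these partial shares together span a space of dimension $7^3$, which is exactly the dimension $M=q^m$ of the secret, so the lemma is met with equality there. A one-line arithmetic sketch in Python:
\begin{verbatim}
# Tightness of the lemma in the ((3,5,5)) scheme of Appendix A:
# q = 7, m = 3, and each of the d = 5 parties sends one qudit.
q, m, k, d = 7, 3, 3, 5
M = q ** m
assert q ** (d - k + 1) >= M      # 343 >= 343: met with equality
\end{verbatim}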
\begin{theorem}[Lower bound on communication cost]\label{th:co-lbounds} In any $((k,n))$ quantum secret sharing scheme, recovery of a secret of dimension $M$ by accessing $d$ parties requires communication of a state from a Hilbert space of dimension at least $M^{d/(d-k+1)}$ to the combiner. \end{theorem} \begin{proof} Consider a set of $d$ parties given by $D\subseteq[n]$ accessed by the combiner for secret recovery. For each $j\in D$, let the part of the share transmitted by the $j$th party to the combiner be denoted as $H_{j,D}$. Clearly, $H_{j,D}$ is a subsystem of $S_j$. Without loss of generality, we take the set of parties to be $D=\{1,2,\hdots,d\}$ such that \begin{eqnarray} \dim(H_{1,D})\geq\dim(H_{2,D})\geq\hdots\geq\dim(H_{d,D}). \label{eq:share-sizes} \end{eqnarray} Applying Lemma \ref{lm:bound-helps} to the partial shares $H_{k,D}, H_{k+1,D},\hdots H_{d,D}$ sent to the combiner, the overall communication from these $d-k+1$ shares is bounded as \begin{eqnarray} \prod_{j=k}^d \dim(H_{j,D})\geq M. \label{eq:bound_some_helps} \end{eqnarray} Then by \eqref{eq:share-sizes}, we have \begin{eqnarray*} \dim(H_{k,D})^{d-k+1}\geq M\nonumber \\\dim(H_{k,D})\geq M^{1/(d-k+1)}. \end{eqnarray*} This implies \begin{eqnarray} \dim(H_{j,D}) &\geq& M^{1/(d-k+1)} \label{eq:bound_one_help} \end{eqnarray} for $1\leq j\leq k$. From \eqref{eq:bound_some_helps}~and~\eqref{eq:bound_one_help}, the communication to the combiner from the $d$ shares in $D$ can be lower bounded as \begin{eqnarray*} \prod_{j=1}^d\dim(H_{j,D})&=&\prod_{j=1}^{k-1}\dim(H_{j,D})\prod_{j=k}^d\dim(H_{j,D})\nonumber \\&\geq&\bigg(\prod_{j=1}^{k-1}M^{1/(d-k+1)}\bigg)M\nonumber \\&=&M^{d/(d-k+1)}. \end{eqnarray*} This shows that the set of $d$ parties $D$ must communicate a state that is in a Hilbert space of dimension at least $M^{d/(d-k+1)}$. \end{proof} In the next subsection, we use this bound to evaluate the performance of our constructions for CE-QTS schemes. \subsection{Optimality of the proposed schemes} The bound on the dimension of the communicated state in Theorem \ref{th:co-lbounds} can be used to obtain a bound on the communication cost in terms of qudits. \begin{corollary} In a $((k,n,d))$ CE-QTS scheme sharing a secret of $m$ qudits, the communication cost is bounded as \begin{equation*} \text{CC}_n(d)\geq \frac{dm}{d-k+1}. \end{equation*} \label{lm:ce-qts-opt} \end{corollary} \begin{proof} Let $q$ be the dimension of each qudit in the scheme. Clearly, the dimension of the secret is $M=q^m$. By Theorem \ref{th:co-lbounds}, the communication from the $d$ contacted parties corresponds to a Hilbert space of dimension at least $q^\frac{dm}{d-k+1}$. Thus, the $d$ parties need to send at least $\frac{dm}{d-k+1}$ qudits for recovering the secret. \end{proof} Recall from Lemma \ref{lm:qts-opt} that, for any QTS scheme, the share size is lower bounded by the size of the secret, \textit{i.e.}, for all $1\leq j\leq n$ \begin{eqnarray*} w_j\geq m. \end{eqnarray*} \begin{remark} The $((k,n,d))$ CE-QTS scheme from \cite{senthoor19} based on Staircase codes has optimal storage cost and optimal communication cost. \end{remark} \begin{remark} The $((k,n,d))$ CE-QTS scheme from Corollary \ref{co:ramp-fixed-ceqts-constn} based on ramp schemes from \cite{ogawa05} has optimal storage cost and optimal communication cost. \end{remark} Note that these bounds apply for both fixed-$d$ and universal CE-QTS schemes.
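As a sanity check that the Staircase construction of Theorem~\ref{th:staircase-constn-univ-ceqts} attains the bound of Corollary~\ref{lm:ce-qts-opt} simultaneously for every $d$, the following Python sketch (for an illustrative $k=4$; the argument is identical for any $k$) verifies that $m=\text{lcm}\{1,\hdots,k\}$ makes $\frac{dm}{d-k+1}$ an integer equal to the lower bound:
\begin{verbatim}
from math import lcm

k = 4
n = 2 * k - 1
m = lcm(*range(1, k + 1))            # m = lcm{1,...,k} = 12 for k = 4
for d in range(k, n + 1):
    assert m % (d - k + 1) == 0      # d-k+1 divides m, so CC is integral
    cc = d * m // (d - k + 1)        # cost achieved by the construction
    assert cc * (d - k + 1) == d * m # equals the lower bound d*m/(d-k+1)
\end{verbatim}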
\begin{corollary} In a $((k,n,*))$ universal CE-QTS scheme sharing a secret of $m$ qudits, for any $d$ such that $k\leq d\leq n$, the communication cost is bounded as \begin{equation*} \text{CC}_n(d)\geq \frac{dm}{d-k+1}. \end{equation*} \end{corollary} \begin{remark} The $((k,n,*))$ universal CE-QTS scheme from Theorem \ref{th:staircase-constn-univ-ceqts} based on Staircase codes has optimal storage cost and optimal communication cost. \end{remark} \begin{remark} The $((k,n,*))$ universal CE-QTS scheme from Corollary \ref{co:ramp-univ-ceqts-constn} based on ramp schemes from \cite{ogawa05} has optimal storage cost and optimal communication cost. \end{remark} In the following section, we prove the bound on the communication cost of CE-QTS schemes using a quantum information theoretic approach. \section{Information theoretic model of CE-QTS} \label{s:vi} The storage cost and the communication complexity of secret sharing schemes can also be studied using information theory. For classical threshold schemes, such results have been obtained in \cite{karnin83}, \cite{wang08}, \cite{huang16}. In this section, we use quantum information theory to develop a framework that yields similar results for communication efficient quantum threshold schemes, building upon the work of Imai et al.~\cite{imai03}. We propose a quantum information theoretic framework for CE-QTS schemes and use it to study their communication complexity. We refer the reader to Section~\ref{sec:bg} for some of the definitions and terms. \subsection{Information theoretic model for quantum secret sharing} Let $\mathcal{S}$ be the quantum secret from the Hilbert space $\mathcal{H}_\mathcal{S}$ of dimension $M$. Then the density matrix corresponding to $\mathcal{S}$ can be written as \begin{equation} \rho_\mathcal{S}=\sum_{i=0}^{M-1}p_i\ketbra{\phi_i}{\phi_i} \nonumber \end{equation} where $\{p_i\}$ gives the probability distribution of a measurement in some basis of orthonormal states $\{\ket{\phi_0},\ket{\phi_1},\hdots,\ket{\phi_{M-1}}\}$. Let $\mathcal{R}$ be the reference system such that the combined system $\mathcal{SR}$ is in a pure state, \textit{i.e.}, $\mathsf{S}(\mathcal{SR})=0$. Thus, by the Araki-Lieb inequality, $\mathsf{S}(\mathcal{R})=\mathsf{S}(\mathcal{S})$. Let $S_1,S_2,\hdots,S_n$ be the quantum systems corresponding to the $n$ shares defined over the Hilbert spaces $\mathcal{H}_1,\mathcal{H}_2,\hdots,\mathcal{H}_n$ respectively. Then the encoding of the secret is given by the encoding map $\mathcal{E}:\mathcal{H}_\mathcal{S}\rightarrow\mathcal{H}_1\otimes\mathcal{H}_2\otimes\cdots\otimes\mathcal{H}_n$. A subset of $\ell$ parties can be indicated by the set $L\subseteq[n]$ of their indices. The combined system of these parties is then denoted as \begin{equation} S_L=S_{i_1}S_{i_2}\hdots S_{i_\ell} \nonumber \end{equation} where $L=\{i_1,i_2,\hdots,i_\ell\}$ with $i_1<i_2<\hdots<i_\ell$. The density matrix of the $i$th party for $i\in[n]$ can be written as \begin{equation} \rho_i=\text{tr}_{S_{[n]\backslash \{i\}}}\mathcal{E}(\rho_\mathcal{S}). \nonumber \end{equation} With these notations, we can state the requirements of a quantum secret sharing scheme as quantum information theoretic constraints.
\begin{definition} \label{de:info-model-qts} A quantum secret sharing scheme for an access structure $\Gamma$ is a quantum operation which encodes the quantum secret $\mathcal{S}$ into shares $S_1, S_2, \hdots, S_n$ such that \begin{itemize} \item \textit{(Recoverability)} For every authorized set $A\in \Gamma$, \begin{equation} I(\mathcal{R}:S_A)=I(\mathcal{R}:\mathcal{S}), \label{eq:rec-constraint} \end{equation} \item \textit{(Secrecy)} For every unauthorized set $B\notin\Gamma$, \begin{equation} I(\mathcal{R}:S_B)=0. \label{eq:sec-constraint} \end{equation} \end{itemize} \end{definition} The same definition extends to QTS schemes, where the authorized sets are the sets $A\subseteq[n]$ with $|A|\geq k$ and the unauthorized sets are the sets $B\subset[n]$ with $|B|\leq k-1$. The following result from \cite{imai03} gives a bound on the entropy of each share. \begin{lemma}\label{th:entropy-unauth} In any quantum secret sharing scheme realizing an access structure $\Gamma$, for any subsets of parties $A$ and $B$ such that $A,B\notin\Gamma$ but $A\cup B\in\Gamma$, it holds that $\mathsf{S}(S_A)\geq\mathsf{S}(\mathcal{S})$ where $\mathcal{S}$ is the secret being shared. \end{lemma} \begin{corollary} In any $((k,n))$ QTS scheme, the entropy of any share $S_j$ is bounded as \begin{equation*} \mathsf{S}(S_j)\geq\mathsf{S}(\mathcal{S}) \end{equation*} where $\mathcal{S}$ is the secret being shared. \end{corollary} \begin{proof} Take $A=\{j\}$ and some $B\subseteq[n]\backslash\{j\}$ such that $|B|=k-1$ in Lemma~\ref{th:entropy-unauth}. \end{proof} The conditions in the above definition follow from the quantum data processing inequality; they are the same conditions as defined in \cite{imai03} for quantum threshold secret sharing schemes. However, for communication efficient quantum threshold schemes, additional conditions have to be defined for when the combiner recovers the secret from partial shares from $d>k$ parties. \subsection{Extension of the information theoretic model to CE-QTS} Let $D\subseteq[n]$, where $|D|=d$, give the indices of some $d$ parties accessed by the combiner for communication efficient recovery. For each $j\in D$, consider a superoperator $\pi_{j,D}$ acting on $S_j$ such that the resultant state $H_{j,D}$ is then transmitted to the combiner. The density matrix of $H_{j,D}$ can be written as \begin{equation*} \sigma_{j,D}=\pi_{j,D}(\rho_j). \end{equation*} Consider $E\subseteq D$ corresponding to some $e$ of these $d$ parties. The combined system of the partial shares sent to the combiner by these $e$ parties is denoted as \begin{equation*} H_{E,D}=H_{j_1,D}H_{j_2,D}\hdots H_{j_e,D} \end{equation*} where $E=\{j_1,j_2,\hdots,j_e\}$ with $j_1<j_2<\hdots<j_e$. Clearly, the number of qudits in $H_{j,D}$ is $\log_q\text{dim}(H_{j,D})$. Now, CC$_n(d)$ can be written as \begin{eqnarray} \text{CC}_n(d)&=&\max_{\substack{D\subseteq[n]\\\text{s.t. }|D|=d}}\ \sum_{j\in D}\ \log_q\text{dim}(H_{j,D})\nonumber \\\text{CC}_n(d)&\geq&\frac{1}{\log q}\ \max_{\substack{D\subseteq[n]\\\text{s.t. }|D|=d}}\ \sum_{j\in D}\mathsf{S}(H_{j,D}). \label{eq:cc-d-bound} \end{eqnarray} The inequality in \eqref{eq:cc-d-bound} follows from the bound on entropy given by \eqref{eq:max-entropy}. Similarly, the communication cost for secret recovery in a standard $((k,n))$ threshold scheme can be bounded as \begin{equation} \text{CC}_n(k)\geq\frac{1}{\log q}\max_{\substack{A\subseteq[n]\\\text{s.t. }|A|=k}}\ \sum_{i\in A}\mathsf{S}(S_i). \end{equation} Now, the following sets of constraints define the models for communication efficient quantum threshold schemes. \begin{definition} A $((k,n,d))$ CE-QTS scheme is a quantum operation which encodes the quantum secret $\mathcal{S}$ into shares $S_1, S_2, \hdots, S_n$ such that \begin{itemize} \item \textit{(Recoverability from $k$ shares)} For every $A\subseteq[n]$ such that $|A|\geq k$, \begin{equation} I(\mathcal{R}:S_A)=I(\mathcal{R}:\mathcal{S}). \label{eq:ce-rec-constraint} \end{equation} \item \textit{(Recoverability from $d$ partial shares)} For every $D\subseteq[n]$ such that $|D|=d$, \begin{equation} I(\mathcal{R}:H_{D,D})=I(\mathcal{R}:\mathcal{S}) \label{eq:ce-d-rec-constraint} \end{equation} \item \textit{(Secrecy)} For every $B\subset [n]$ such that $|B|<k$, \begin{equation} I(\mathcal{R}:S_B)=0. \label{eq:ce-sec-constraint} \end{equation} \item \textit{(Communication efficiency)} \begin{equation} \text{CC}_n(d)<\text{CC}_n(k). \label{eq:ce-constraint} \end{equation} \end{itemize} \end{definition} \begin{definition} A $((k,n,*))$ universal CE-QTS scheme is a quantum operation which encodes the quantum secret $\mathcal{S}$ into shares $S_1, S_2, \hdots, S_n$ such that \begin{itemize} \item \textit{(Recoverability)} For every $D\subseteq[n]$ such that $k\leq|D|\leq n$, \begin{equation} I(\mathcal{R}:H_{D,D})=I(\mathcal{R}:\mathcal{S}) \label{eq:uce-rec-constraint} \end{equation} \item \textit{(Secrecy)} For every $B\subset[n]$ such that $|B|<k$, \begin{equation} I(\mathcal{R}:S_B)=0. \label{eq:uce-sec-constraint} \end{equation} \item \textit{(Universal communication efficiency)} \begin{equation} \text{CC}_n(n)<\text{CC}_n(n-1)<\hdots<\text{CC}_n(k+1)<\text{CC}_n(k). \label{eq:uce-constraint} \end{equation} \end{itemize} \end{definition} In the above definition of universal CE-QTS, a separate condition for the threshold of $k$ shares is not needed, since $d$ can take any value from $k$ to $n$. With these definitions, we can bound the communication cost of CE-QTS schemes (both fixed-$d$ and universal) using quantum information theoretic inequalities. In the following theorem, a bound on the entropy of the partial shares sent to the combiner is derived; this result is then used to obtain a bound on the communication cost. \begin{theorem} In any $((k,n))$ quantum secret sharing scheme, recovery of a secret of dimension $M$ by accessing $d$ parties requires communication of a state from a Hilbert space of dimension at least $M^{d/(d-k+1)}$ to the combiner. \end{theorem} \begin{proof} Let $D$ represent the indices of the $d$ parties from which partial shares are sent to the combiner. Clearly, $D$ is an authorized set. For simplicity, we will drop the second subscript $D$ in $H_{E,D}$ for any $E\subseteq D$ and write it simply as $H_E$. Choose some $F\subseteq D$ such that $|F|=d-k+1$.
By considering $H_D$ as the bipartite quantum system $H_{D\backslash F}H_F$, \eqref{eq:ce-d-rec-constraint} gives \begin{eqnarray} I(\mathcal{R}:H_{D\backslash F}H_F)&=&I(\mathcal{R}:\mathcal{S})\nonumber \\\mathsf{S}(H_{D\backslash F}H_F)-\mathsf{S}(\mathcal{R}H_{D\backslash F}H_F)&=&\mathsf{S}(\mathcal{S})-\mathsf{S}(\mathcal{R}\mathcal{S})\ \ \ \ \nonumber \\\mathsf{S}(H_{D\backslash F}H_F)-\mathsf{S}(\mathcal{R}H_{D\backslash F}H_F)&=&\mathsf{S}(\mathcal{S}) \label{eq:al-to-be-applied} \end{eqnarray} Applying the Araki-Lieb inequality to $\mathsf{S}(\mathcal{R}H_{D\backslash F}H_F)$ gives \begin{equation*} \mathsf{S}(\mathcal{R}H_{D\backslash F}H_F)\geq\mathsf{S}(\mathcal{R}H_{D\backslash F})-\mathsf{S}(H_F). \end{equation*} Applying this in \eqref{eq:al-to-be-applied}, we obtain \begin{equation} \mathsf{S}(H_{D\backslash F}H_F)-\mathsf{S}(\mathcal{R}H_{D\backslash F})+\mathsf{S}(H_F)\geq\mathsf{S}(\mathcal{S}). \label{eq:temp1} \end{equation} Since any set of $k-1$ or fewer shares has no information about the secret, any set of partial shares from $k-1$ or fewer parties has no information about the secret either. Since $|D\backslash F|=k-1$, it follows that $I(\mathcal{R}:H_{D\backslash F})=0$. This implies \begin{eqnarray*} \mathsf{S}(\mathcal{R}H_{D\backslash F})&=&\mathsf{S}(\mathcal{R})+\mathsf{S}(H_{D\backslash F}) \end{eqnarray*} Substituting this in \eqref{eq:temp1} and using $\mathsf{S}(\mathcal{R})=\mathsf{S}(\mathcal{S})$, we obtain \begin{eqnarray} &&\mathsf{S}(H_{D\backslash F}H_F)+\mathsf{S}(H_F)-\mathsf{S}(H_{D\backslash F})\geq2\ \mathsf{S}(\mathcal{S}). \end{eqnarray} By the subadditivity property, $\mathsf{S}(H_{D\backslash F}H_F)\leq\mathsf{S}(H_{D\backslash F})+\mathsf{S}(H_F)$. Therefore, \begin{eqnarray} 2\mathsf{S}(H_F)\geq2\mathsf{S}(\mathcal{S})\nonumber \\\mathsf{S}(H_F)\geq\mathsf{S}(\mathcal{S}). \label{eq:temp2} \end{eqnarray} Again by subadditivity, \begin{equation*} \mathsf{S}(H_F)\leq\sum_{j\in F}\mathsf{S}(H_{j,D}) \end{equation*} Hence, from \eqref{eq:temp2}, we get \begin{equation} \sum_{j\in F}\mathsf{S}(H_{j,D})\geq\mathsf{S}(\mathcal{S}) \label{eq:temp3} \end{equation} This inequality holds for any of the $\binom{d}{d-k+1}$ possible choices of $F\subset D$. Now, summing the inequality \eqref{eq:temp3} over all these $F$, we get \begin{eqnarray} \sum_{\substack{F\subset D\\\text{s.t. }|F|=d-k+1}}\sum_{j\in F}\mathsf{S}(H_{j,D})\ &\geq&\sum_{\substack{F\subset D\\\text{s.t. }|F|=d-k+1}}\mathsf{S}(\mathcal{S})\nonumber \\\binom{d-1}{d-k}\ \sum_{j\in D}\ \mathsf{S}(H_{j,D})\ &\geq&\binom{d}{d-k+1}\ \mathsf{S}(\mathcal{S})\nonumber \\\sum_{j\in D}\ \mathsf{S}(H_{j,D})\ &\geq&\ \frac{d}{d-k+1}\ \mathsf{S}(\mathcal{S}) \end{eqnarray} This inequality bounds the sum of the entropies of the partial shares sent to the combiner by the $d$ parties in terms of the entropy of the secret. It can be extended to a bound on the dimensions of these systems as follows. The maximum value of the entropy of a system is related to its dimension by \eqref{eq:max-entropy}. Thus, we obtain \begin{eqnarray} \sum_{j\in D}\log\text{dim}(H_{j,D})&\geq&\ \frac{d}{d-k+1}\ \mathsf{S}(\mathcal{S})\nonumber \\\log\prod_{j\in D}\text{dim}(H_{j,D})&\geq&\ \frac{d}{d-k+1}\ \mathsf{S}(\mathcal{S}). \label{eq:temp5} \end{eqnarray} The state of $H_{j,D}$ lies in the same Hilbert space for every state of the secret. Thus $\text{dim}(H_{j,D})$ remains the same for any secret state, and the bound \eqref{eq:temp5} is valid for all possible states of the secret.
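As an aside, the double-counting step above, namely that summing over all $\binom{d}{d-k+1}$ choices of $F$ counts each $j\in D$ exactly $\binom{d-1}{d-k}$ times, can be checked numerically; the following sketch uses illustrative sizes and random stand-ins for the entropies:
\begin{verbatim}
from itertools import combinations
from math import comb
import random

d, k = 7, 4
a = [random.random() for _ in range(d)]      # stand-ins for S(H_{j,D})
lhs = sum(a[j] for F in combinations(range(d), d - k + 1) for j in F)
rhs = comb(d - 1, d - k) * sum(a)
assert abs(lhs - rhs) < 1e-9                 # each j counted C(d-1,d-k) times
\end{verbatim}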
To conclude, consider the secret state with the density matrix \begin{equation*} \rho_\mathcal{S}=\sum_{\ell=0}^{M-1}\frac{1}{M}\ketbra{\phi_\ell}{\phi_\ell}. \end{equation*} For this state, $\mathsf{S}(\mathcal{S})=\log M$. Hence, \eqref{eq:temp5} gives \begin{eqnarray} \sum_{j\in D}\log\text{dim}(H_{j,D})&\geq&\ \frac{d}{d-k+1}\ \log M\nonumber \\\prod_{j\in D}\text{dim}(H_{j,D})&\geq& M^{d/(d-k+1)}. \end{eqnarray} This concludes the proof. \end{proof} The above result, derived using the quantum information theoretic framework, is the same as Theorem \ref{th:co-lbounds}. This framework can potentially be generalized to bound communication costs and share sizes for quantum secret sharing schemes with non-threshold access structures as well. \section{Conclusion}\label{sec:conc} In this paper, we proposed new constructions for CE-QTS schemes. We introduced universal CE-QTS schemes and provided optimal constructions for CE-QTS and universal CE-QTS schemes using concatenation of ramp QSS schemes. We also proposed another optimal construction for universal CE-QTS schemes based on Staircase codes. We proved bounds on the communication cost during secret recovery in CE-QTS schemes. Finally, we developed a quantum information theoretic model to study CE-QTS schemes. A natural direction for further study would be to extend these ideas to non-threshold access structures. In recent years, there has been tremendous progress in the experimental realization of quantum secret sharing schemes. Hence, it would also be interesting to see if the dimension of the secret can be reduced while constructing CE-QTS schemes, particularly for a small number of parties. \appendices \section{\titlemath{((k=3,n=5,d=5))} CE-QTS scheme based on Staircase codes} \label{ap:ceqts-full-eg} Consider the $((3,5,5))$ CE-QTS scheme from the construction based on Staircase codes given in \cite{senthoor19}. This scheme has the following parameters. \begin{subequations} \begin{gather} q=7\\ m=3\\ w_1=w_2=\hdots=w_5=3\\ \text{CC}_n(3)=9,\ \text{CC}_n(5)=5. \end{gather} \end{subequations} The encoding for the scheme is given by the mapping \begin{eqnarray} \label{eq:enc_qudits_3_5_ap} \ket{\underline{s}}\mapsto\sum_{\underline{r}\in\F_7^6}\ket{c_{11}c_{12}c_{13}}&&\!\!\ket{c_{21}c_{22}c_{23}}\ket{c_{31}c_{32}c_{33}} \\[-0.5cm]&&\ \ \ \ \ket{c_{41}c_{42}c_{43}}\ket{c_{51}c_{52}c_{53}}\nonumber \end{eqnarray} where $\underline{s}=(s_1,s_2,s_3)$ indicates a basis state of the quantum secret, $\underline{r}=(r_1,r_2,\hdots,r_6)$ and $c_{ij} $ is the $(i,j)$th entry of the matrix \begin{equation} C=VY.\nonumber \end{equation} Here the matrices $V$ and $Y$ are given by \begin{equation*} V= \begin{bmatrix} 1&1&1&1&1\\1&2&4&1&2\\1&3&2&6&4\\1&4&2&1&4\\1&5&4&6&2 \end{bmatrix} \text{and\ } Y= \left[ \begin{tabular}{ccc} $s_1$&0&0\\$s_2$&0&0\\$s_3$&$r_1$&$r_2$\\$r_1$&$r_3$&$r_5$\\$r_2$&$r_4$&$r_6$ \end{tabular} \right]. \end{equation*} The encoded state in \eqref{eq:enc_qudits_3_5_ap} can also be written as \begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,0,r_1,r_3,r_4)} \ket{v_1(0,0,r_2,r_5,r_6)} \\\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,0,r_1,r_3,r_4)} \ket{v_2(0,0,r_2,r_5,r_6)} \\\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,0,r_1,r_3,r_4)} \ket{v_3(0,0,r_2,r_5,r_6)} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,0,r_1,r_3,r_4)} \ket{v_4(0,0,r_2,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,0,r_1,r_3,r_4)} \ket{v_5(0,0,r_2,r_5,r_6)}.
\end{array} \end{align*} Here $v_i(\cdot)$ denotes the polynomial evaluation given by \begin{eqnarray*} v_i(f_1,f_2,f_3,f_4,f_5)&=&f_1+f_2.x_i+f_3.x_i^2+f_4.x_i^3+f_5.x_i^4 \end{eqnarray*} and the expression $v_i(\underline{s},r_1,r_2)$ denotes $v_i(s_1,s_2,s_3,r_1,r_2)$. Here we have taken $x_i=i$ for $1\leq i\leq 5$. When the combiner contacts $k=3$ parties, each party sends its complete share. When $d=5$, the combiner downloads the first qudit of each share from all five parties. \subsection{Secret recovery for \titlemath{d=5}} When the combiner accesses all five parties, each party sends its first qudit. Thus CC$_n(5)=5$. The qudits with the combiner are given as \begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}}\ket{v_1(0,0,r_1,r_3,r_4)} \ket{v_1(0,0,r_2,r_5,r_6)} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}}\ket{v_2(0,0,r_1,r_3,r_4)} \ket{v_2(0,0,r_2,r_5,r_6)} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}}\ket{v_3(0,0,r_1,r_3,r_4)} \ket{v_3(0,0,r_2,r_5,r_6)} \\\bl{\ket{v_4(\underline{s},r_1,r_2)}}\ket{v_4(0,0,r_1,r_3,r_4)} \ket{v_4(0,0,r_2,r_5,r_6)} \\\bl{\ket{v_5(\underline{s},r_1,r_2)}}\ket{v_5(0,0,r_1,r_3,r_4)} \ket{v_5(0,0,r_2,r_5,r_6)} \end{array} \end{align*} Applying the operation $U_{V^{-1}}$ on these five qudits, we obtain \begin{align*} \bl{\ket{\underline{s}}}\sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \ket{v_1(0,0,r_1,r_3,r_4)} \ket{v_1(0,0,r_2,r_5,r_6)} \\\ket{v_2(0,0,r_1,r_3,r_4)} \ket{v_2(0,0,r_2,r_5,r_6)} \\\ket{v_3(0,0,r_1,r_3,r_4)} \ket{v_3(0,0,r_2,r_5,r_6)} \\\bl{\ket{r_1}}\ket{v_4(0,0,r_1,r_3,r_4)} \ket{v_4(0,0,r_2,r_5,r_6)} \\\bl{\ket{r_2}}\ket{v_5(0,0,r_1,r_3,r_4)} \ket{v_5(0,0,r_2,r_5,r_6)} \end{array} \end{align*} Here, the three qudits from the first three parties contain the basis state of the secret. Moreover, these qudits are not entangled with any of the other qudits. Thus, any arbitrary superposition of the basis states can be recovered by the above step. \subsection{Secret recovery for \titlemath{k=3}} When the combiner accesses any three parties, all three qudits from each of the three parties are transmitted to the combiner. Thus CC$_n(3)=9$. Assume that the combiner accesses the first three parties.
Then the qudits with the combiner are given as \begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,0,r_1,r_3,r_4)} \ket{v_1(0,0,r_2,r_5,r_6)}} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,0,r_1,r_3,r_4)} \ket{v_2(0,0,r_2,r_5,r_6)}} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,0,r_1,r_3,r_4)} \ket{v_3(0,0,r_2,r_5,r_6)}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,0,r_1,r_3,r_4)} \ket{v_4(0,0,r_2,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,0,r_1,r_3,r_4)} \ket{v_5(0,0,r_2,r_5,r_6)} \end{array} \end{align*} \begin{enumerate} \item Apply the operation $U_{K_5}$ on the set of three second qudits and then apply $U_{K_5}$ on the set of three third qudits, where $K_5$ is the inverse of $V_{[3]}^{[3,5]}$, to obtain \begin{eqnarray} \hspace{-0.5cm} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}\ket{r_1} \ket{r_2}} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}\ket{r_3} \ket{r_5}} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}\ket{r_4} \ket{r_6}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,0,r_1,r_3,r_4)} \ket{v_4(0,0,r_2,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,0,r_1,r_3,r_4)} \ket{v_5(0,0,r_2,r_5,r_6)} \end{array}\nonumber \end{eqnarray} \item Then, apply the following operators. \begin{enumerate} \item $L_6\ket{r_2}\ket{v_1(\underline{s},r_1,r_2)}$ to get $\ket{r_2}\ket{v_1(\underline{s},r_1,0)}$ \item $L_6\ket{r_2}\ket{v_2(\underline{s},r_1,r_2)}$ to get $\ket{r_2}\ket{v_2(\underline{s},r_1,0)}$ \item $L_6\ket{r_2}\ket{v_3(\underline{s},r_1,r_2)}$ to get $\ket{r_2}\ket{v_3(\underline{s},r_1,0)}$ \item $L_6\ket{r_1}\ket{v_1(\underline{s},r_1,0)}$ to get $\ket{r_1}\ket{v_1(\underline{s},0,0)}$ \item $L_6\ket{r_1}\ket{v_2(\underline{s},r_1,0)}$ to get $\ket{r_1}\ket{v_2(\underline{s},0,0)}$ \item $L_6\ket{r_1}\ket{v_3(\underline{s},r_1,0)}$ to get $\ket{r_1}\ket{v_3(\underline{s},0,0)}$ \end{enumerate} Now, we obtain \begin{eqnarray} \hspace{-0.5cm} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},0,0)}\ket{r_1} \ket{r_2}} \\\bl{\ket{v_2(\underline{s},0,0)}\ket{r_3} \ket{r_5}} \\\bl{\ket{v_3(\underline{s},0,0)}\ket{r_4} \ket{r_6}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,0,r_1,r_3,r_4)} \ket{v_4(0,0,r_2,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,0,r_1,r_3,r_4)} \ket{v_5(0,0,r_2,r_5,r_6)} \end{array}\nonumber \end{eqnarray} \item Apply the operation $U_{K_6}$ on the set of three first qudits, where $K_6$ is the inverse of $V_{[3]}^{[3]}$, to obtain \begin{eqnarray} \hspace{-0.6cm} \bl{\ket{\underline{s}}} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{r_1}\ket{r_2}} \\\bl{\ket{r_3}\ket{r_5}} \\\bl{\ket{r_4}\ket{r_6}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,0,r_1,r_3,r_4)} \ket{v_4(0,0,r_2,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,0,r_1,r_3,r_4)} \ket{v_5(0,0,r_2,r_5,r_6)} \end{array}\nonumber \end{eqnarray} Here, the three first qudits from the first three parties contain the basis state of the secret. For an equivalent classical secret sharing scheme, the secret recovery would have been complete at this stage. However, these three qudits are still entangled with the first qudits of the fourth and fifth parties. Thus, an arbitrary superposition of the basis states cannot yet be recovered for a quantum secret. \item Apply the following operators to disentangle the basis state from the rest of the qudits.
\begin{enumerate} \item $U_{K_7}$ on $\ket{r_2}\ket{r_5}\ket{r_6}$ to get \\$\ket{r_2}\ket{v_4(0,0,r_2,r_5,r_6)}\ket{v_5(0,0,r_2,r_5,r_6)}$ where \begin{equation} K_7=\left[ \begin{tabular}{c} 1 0 0\\\hline $V_{[4,5]}^{[3,5]}$ \end{tabular} \right]\nonumber \end{equation} \item $U_{K_8}$ on $\ket{r_1}\ket{r_3}\ket{r_4}$ to get \\$\ket{r_1}\ket{v_4(0,0,r_1,r_3,r_4)}\ket{v_5(0,0,r_1,r_3,r_4)}$ where \begin{equation} K_8=\left[ \begin{tabular}{c} 1 0 0\\\hline $V_{[4,5]}^{[3,5]}$ \end{tabular} \right]\nonumber \end{equation} \item $U_{K_9}$ on $\ket{s_1}\ket{s_2}\ket{s_3}\ket{r_1}\ket{r_2}$ to get \\$\ket{s_1}\ket{s_2}\ket{s_3}\ket{v_4(\underline{s},r_1,r_2)}\ket{v_5(\underline{s},r_1,r_2)}$ where \begin{equation} K_9=\left[ \begin{tabular}{c} 1 0 0 0 0\\ 0 1 0 0 0\\ 0 0 1 0 0\\\hline $V_{[4,5]}$ \end{tabular} \right]\nonumber \end{equation} \end{enumerate} Now, we obtain \begin{eqnarray} &&\hspace{-1cm} \bl{\ket{\underline{s}}} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_4(\underline{s},r_1,r_2)}\ket{v_5(\underline{s},r_1,r_2)}} \\\bl{\ket{v_4(0,0,r_1,r_3,r_4)}\ket{v_4(0,0,r_2,r_5,r_6)}} \\\bl{\ket{v_5(0,0,r_1,r_3,r_4)}\ket{v_5(0,0,r_2,r_5,r_6)}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,0,r_1,r_3,r_4)}\ket{v_4(0,0,r_2,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,0,r_1,r_3,r_4)}\ket{v_5(0,0,r_2,r_5,r_6)} \end{array}\nonumber \\&&\hspace{-1cm} =\bl{\ket{\underline{s}}} \sum_{\substack{(r_1,r_2,r_3,r_4,\\r_5',r_6')\in\mathbb{F}_7^6}} \begin{array}{l} \bl{\ket{v_4(\underline{s},r_1,r_2)}\ket{v_5(\underline{s},r_1,r_2)}} \\\bl{\ket{v_4(0,0,r_1,r_3,r_4)}\ket{r_5'}} \\\bl{\ket{v_5(0,0,r_1,r_3,r_4)}\ket{r_6'}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,0,r_1,r_3,r_4)}\ket{r_5'} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,0,r_1,r_3,r_4)}\ket{r_6'} \end{array} \label{eq:first-var-replaced} \end{eqnarray} \begin{eqnarray} &&\hspace{-1cm} =\bl{\ket{\underline{s}}} \sum_{\substack{(r_1,r_2,r_3',r_4',\\r_5',r_6')\in\mathbb{F}_7^6}} \begin{array}{l} \bl{\ket{v_4(\underline{s},r_1,r_2)}\ket{v_5(\underline{s},r_1,r_2)}} \\\bl{\ket{r_3'}\ket{r_5'}} \\\bl{\ket{r_4'}\ket{r_6'}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{r_3'}\ket{r_5'} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{r_4'}\ket{r_6'} \end{array}\nonumber \\&&\hspace{-1cm} =\bl{\ket{\underline{s}}} \sum_{\substack{(r_1',r_2',r_3',r_4',\\r_5',r_6')\in\mathbb{F}_7^6}} \begin{array}{l} \bl{\ket{r_1'}\ket{r_2'}} \\\bl{\ket{r_3'}\ket{r_5'}} \\\bl{\ket{r_4'}\ket{r_6'}} \\\ket{r_1'}\ket{r_3'}\ket{r_5'} \\\ket{r_2'}\ket{r_4'}\ket{r_6'} \end{array} \nonumber \end{eqnarray} The variable change in \eqref{eq:first-var-replaced} is possible because, independent of $r_1,r_2,r_3,r_4$, the subsystem \begin{eqnarray} \sum_{(r_5,r_6)\in\mathbb{F}_7^2} \begin{array}{l} \ket{v_4(0,0,r_2,r_5,r_6)}\ket{v_4(0,0,r_2,r_5,r_6)} \\\ \ \ket{v_5(0,0,r_2,r_5,r_6)}\ket{v_5(0,0,r_2,r_5,r_6)} \end{array}\nonumber \end{eqnarray} gives the uniform superposition \begin{eqnarray} \sum_{(r_5',r_6')\in\mathbb{F}_7^2} \ket{r_5'}\ket{r_5'}\ket{r_6'}\ket{r_6'}.\nonumber \end{eqnarray} The succeeding expressions are derived similarly. Now, the secret is disentangled from the rest of the qudits. Thus, any arbitrary superposition of the basis states can be recovered with the above steps when the combiner accesses $k=3$ parties.
\end{enumerate} \section{Secret recovery for \titlemath{d=3} in the ((3,5,*)) universal CE-QTS scheme from Section \ref{s:example}} \label{ap:univ-ceqts-d-3} Consider the example of the $((k=3,n=5,*))$ universal CE-QTS scheme in Section \ref{s:example} with the following parameters. \begin{subequations} \begin{gather} q=7 \\m=3 \\w_1=w_2=\hdots=w_5=3 \\\text{CC}_5(3)=9,\ \text{CC}_5(4)=8,\ \text{CC}_5(5)=5. \end{gather} \end{subequations} The encoding for the scheme is given by the following mapping \begin{eqnarray} \label{eq:enc_qudits_3_5_s_ap} \ket{\underline{s}}\mapsto\sum_{\underline{r}\in\F_7^6}\ket{c_{11}c_{12}c_{13}}&&\!\!\ket{c_{21}c_{22}c_{23}}\ket{c_{31}c_{32}c_{33}} \\[-0.5cm]&&\ \ \ \ \ket{c_{41}c_{42}c_{43}}\ket{c_{51}c_{52}c_{53}}\nonumber \end{eqnarray} where $\underline{s}=(s_1,s_2,s_3)$ indicates a basis state of the quantum secret, $\underline{r}=(r_1,r_2,\hdots,r_6)$ and $c_{ij} $ is the $(i,j)$th entry of the matrix \begin{equation} C=VY.\nonumber \end{equation} Here the matrices $V$ and $Y$ are defined as follows. \begin{equation*} V= \begin{bmatrix} 1&1&1&1&1\\1&2&4&1&2\\1&3&2&6&4\\1&4&2&1&4\\1&5&4&6&2 \end{bmatrix} \text{and\ } Y= \left[ \begin{tabular}{ccc} $s_1$&0&0\\$s_2$&$r_1$&0\\$s_3$&$r_2$&$r_3$\\$r_1$&$r_3$&$r_5$\\$r_2$&$r_4$&$r_6$ \end{tabular} \right]. \end{equation*} The encoded state in \eqref{eq:enc_qudits_3_5_s_ap} can also be written as \begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,r_1,r_2,r_3,r_4)} \ket{v_1(0,0,r_3,r_5,r_6)} \\\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,r_1,r_2,r_3,r_4)} \ket{v_2(0,0,r_3,r_5,r_6)} \\\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,r_1,r_2,r_3,r_4)} \ket{v_3(0,0,r_3,r_5,r_6)} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)}. \end{array} \end{align*} Here $v_i(\cdot)$ denotes the polynomial evaluation given by \begin{eqnarray*} v_i(f_1,f_2,f_3,f_4,f_5)&=&f_1+f_2.x_i+f_3.x_i^2+f_4.x_i^3+f_5.x_i^4 \end{eqnarray*} and the expression $v_i(\underline{s},r_1,r_2)$ denotes $v_i(s_1,s_2,s_3,r_1,r_2)$. Here we have taken $x_i=i$ for $1\leq i\leq 5$. When the combiner contacts $d=5$ parties, each party sends the first qudit of its share. When $d=4$, the combiner downloads the first two qudits of each share of the four parties contacted. When $d=3$, the combiner downloads all three qudits of the share of each of the three parties contacted. (For clarity, the qudits accessible to the combiner are highlighted in blue in the description below.) In the case $d=3$, each of the three contacted parties sends all three qudits of its share.
\begin{align*} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,r_1,r_2,r_3,r_4)} \ket{v_1(0,0,r_3,r_5,r_6)}} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,r_1,r_2,r_3,r_4)} \ket{v_2(0,0,r_3,r_5,r_6)}} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,r_1,r_2,r_3,r_4)} \ket{v_3(0,0,r_3,r_5,r_6)}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{align*} \begin{enumerate} \item Applying the operation $U_{K_5}$ on the set of three third qudits, where $K_5$ is the inverse of $V_{[3]}^{[3,5]}$, we obtain \begin{eqnarray*} \hspace{-0.5cm}\sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,r_1,r_2,r_3,r_4)} \ket{r_3}} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,r_1,r_2,r_3,r_4)} \ket{r_5}} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,r_1,r_2,r_3,r_4)} \ket{r_6}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{eqnarray*} \item Then, applying the operators $L_6\ket{r_3}\ket{v_1(0,r_1,r_2,r_3,r_4)}$, $L_6\ket{r_3}\ket{v_2(0,r_1,r_2,r_3,r_4)}$ and $L_1\ket{r_3}\ket{v_3(0,r_1,r_2,r_3,r_4)}$, we obtain \begin{eqnarray*} \hspace{-0.5cm} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}\ket{v_1(0,r_1,r_2,0,r_4)} \ket{r_3}} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}\ket{v_2(0,r_1,r_2,0,r_4)} \ket{r_5}} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}\ket{v_3(0,r_1,r_2,0,r_4)} \ket{r_6}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{eqnarray*} \item Applying the operation $U_{K_6}$ on the set of three second qudits, where $K_6$ is the inverse of $V_{[3]}^{\{2,3,5\}}$, we obtain \begin{eqnarray*} \hspace{-0.5cm}\sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_1(\underline{s},r_1,r_2)}\ket{r_1} \ket{r_3}} \\\bl{\ket{v_2(\underline{s},r_1,r_2)}\ket{r_2} \ket{r_5}} \\\bl{\ket{v_3(\underline{s},r_1,r_2)}\ket{r_4} \ket{r_6}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{eqnarray*} \item Applying the operation $U_{K_7}$ on the qudits $\ket{v_1(\underline{s},r_1,r_2)}$ $\ket{v_2(\underline{s},r_1,r_2)}\ket{v_3(\underline{s},r_1,r_2)}\ket{r_1}\ket{r_2}$ where \begin{equation*} K_7=\left[ \begin{tabular}{c} $V_{[3]}$\\\hline 0 0 0 1 0\\ 0 0 0 0 1 \end{tabular} \right]^{-1} \end{equation*} we obtain \begin{eqnarray*} \hspace{-0.6cm}\bl{\ket{\underline{s}}} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{r_1} \ket{r_3}} \\\bl{\ket{r_2} \ket{r_5}} \\\bl{\ket{r_4} \ket{r_6}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array} \end{eqnarray*} \item After recovering the basis state of the secret, we disentangle it from the rest of the qudits by applying suitable operators as follows.
\begin{enumerate} \item Apply $U_{K_8}$ on $\ket{r_3}\ket{r_5}\ket{r_6}$ to get \\$\ket{r_3}$ $\ket{v_4(0,0,r_3,r_5,r_6)}\ket{v_5(0,0,r_3,r_5,r_6)}$ where \begin{equation*} K_8=\left[ \begin{tabular}{c} 1 0 0\\\hline $V_{[4,5]}^{[3,5]}$ \end{tabular} \right]. \end{equation*} \item Apply $U_{K_9}$ on $\ket{r_1}\ket{r_2}\ket{r_3}\ket{r_4}$ to get \\$\ket{r_1}\ket{r_2}\ket{v_4(0,r_1,r_2,r_3,r_4)}\ket{v_5(0,r_1,r_2,r_3,r_4)}$ where \begin{equation*} K_9=\left[ \begin{tabular}{c} 1 0 0 0\\ 0 1 0 0\\\hline $V_{[4,5]}^{[2,5]}$ \end{tabular} \right]. \end{equation*} \item Apply $U_{K_{10}}$ on $\ket{s_1}\ket{s_2}\ket{s_3}\ket{r_1}\ket{r_2}$ to get \\$\ket{s_1}\ket{s_2}\ket{s_3}\ket{v_4(\underline{s},r_1,r_2)}\ket{v_5(\underline{s},r_1,r_2)}$ where \begin{equation*} K_{10}=\left[ \begin{tabular}{c} 1 0 0 0 0\\ 0 1 0 0 0\\ 0 0 1 0 0\\\hline $V_{[4,5]}$ \end{tabular} \right]. \end{equation*} \end{enumerate} Now, we obtain \begin{eqnarray*} &&\hspace{-1cm} \bl{\ket{\underline{s}}} \sum_{\underline{r}\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)}} \\\bl{\ket{v_5(\underline{s},r_1,r_2)}\ket{v_4(0,0,r_3,r_5,r_6)}} \\\bl{\ket{v_5(0,r_1,r_2,r_3,r_4)}\ket{v_5(0,0,r_3,r_5,r_6)}} \\\ket{v_4(\underline{s},r_1,r_2)}\ket{v_4(0,r_1,r_2,r_3,r_4)} \ket{v_4(0,0,r_3,r_5,r_6)} \\\ket{v_5(\underline{s},r_1,r_2)}\ket{v_5(0,r_1,r_2,r_3,r_4)} \ket{v_5(0,0,r_3,r_5,r_6)} \end{array}\nonumber \\&&\hspace{-1cm} =\bl{\ket{\underline{s}}} \sum_{\underline{r}''\in\mathbb{F}_7^6} \begin{array}{l} \bl{\ket{r_1''}\ket{r_3''}} \\\bl{\ket{r_2''}\ket{r_5''}} \\\bl{\ket{r_4''}\ket{r_6''}} \\\ket{r_1''}\ket{r_3''}\ket{r_5''} \\\ket{r_2''}\ket{r_4''}\ket{r_6''} \end{array} \end{eqnarray*} where $\underline{r}''=(r_1'',r_2'',r_3'',r_4'',r_5'',r_6'')$. Now, the secret is disentangled from the rest of the qudits. \end{enumerate} Thus, any arbitrary superposition of the basis states can be recovered with the above steps for $d=3$. \bibliographystyle{IEEEtran}
\chapter{Preliminaries} This chapter consists essentially of reminders on the general theory of groups. The letter $G$ denotes a group. \section{Group actions}\label{action} \begin{defi} We say that the group $G$ \emph{acts on the left} on a set $X$ if we are given a map $$ \left\{\!\! \begin{array}{rcl} G \times X & \longrightarrow & X\\ (g,x) & \longmapsto & g.x \end{array} \right. $$ satisfying the conditions: \begin{enumerate} \item[(1)] $g.(g'.x)=(gg').x$ for every $x\in X$ and every pair $(g,g')\in G\times G$. \item[(2)] $1.x=x$ for every $x\in X$, where $1$ is the identity element of $G$. \end{enumerate} \end{defi} {\it Remark. } Giving a left action of $G$ on $X$ amounts to giving a homomorphism $\tau$ from $G$ to the group $\mathcal{S}_X$ of permutations of $X$, defined for every $g\in G$ and every $x\in X$ by $\tau(g)(x)=g.x.$ \bigskip There is an analogous definition for right actions. \bigskip The group $G$ then partitions $X$ into \emph{orbits}: two elements $x$ and $y$ of $X$ are in the same orbit if and only if there exists $g\in G$ such that $x=g.y$. The set of orbits is the quotient of $X$ by $G$; it is denoted $G\backslash X$ in the case of a left action (and $X/G$ in the case of a right action). \begin{defi} We say that $G$ acts transitively on $X$ if $G\backslash X$ consists of a single element. \end{defi} In particular, the group $G$ acts transitively on each orbit. \begin{defi} Let $x\in X$; the \emph{stabilizer of $x$} (or \emph{fixator} of $x$), denoted $H_x$, is the subgroup of $G$ consisting of the elements $g\in G$ that fix $x$ (i.e.\ such that $g.x=x$). \end{defi}\label{stabilisateur} {\it Remark. } If $G$ acts transitively on $X$ and if $x\in X$, we have a bijection from $G/H_x$ onto $X$ given by $g H_x \longmapsto g.x$, where $G/H_x$ is the set of left cosets of $G$ modulo $H_x$. If $x'\in X$, there exists $g\in G$ such that $x'=g.x$. Then $H_{x'}=g{H_x}g^{-1}$. Thus changing the base point amounts to replacing the stabilizer of $x$ by one of its conjugates. Conversely, if $H$ is a subgroup of $G$, then $G$ acts transitively on $G/H$ and $H$ stabilizes the coset of $1$. Hence giving a set $X$ on which $G$ acts transitively amounts to giving a subgroup of $G$, determined up to conjugation. \bigskip{\it Example. } Let $X$ be an affine line defined over a field $K$ and let $G$ be the group of similarities $$G=\left\{x\mapsto ax+b,\, a\in K^*,\,b\in K\right\}.$$ The group $G$ acts transitively on $X$. If $x\in X$, the stabilizer of $x$ is the group of homotheties centered at $x$. \bigskip{\it Application. } Let $G$ be a \emph{finite} group, whose order is denoted $|G|$. Let $X$ be a set on which $G$ acts. We have $X=\coprod_{i\in I}{Gx_i}$, where the $Gx_i$ are the (pairwise disjoint) orbits under the action of $G$, the $x_i$ forming a system of representatives of the elements of $G\backslash X$. We have seen that $Gx_i$ is in bijection with $G/{H_{x_i}}$, hence $|Gx_i|=|G|.{|H_{x_i}|}^{-1}$. We deduce $|X|=\sum_{i\in I}{|G|.{|H_{x_i}|}^{-1}}$, and then $|X|.{|G|}^{-1}=\sum_{i\in I}{{|H_{x_i}|}^{-1}}$.
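\bigskip{\it Numerical illustration. } The formula just obtained is easy to test mechanically. The following Python sketch (an illustrative choice: $G=\mathcal{S}_3$ acting on $X=\{0,1,2\}$, not an example taken from the text) checks that $|X|.{|G|}^{-1}=\sum_{i\in I}{{|H_{x_i}|}^{-1}}$.
\begin{verbatim}
from itertools import permutations
from fractions import Fraction

G = list(permutations(range(3)))     # S_3; the element g sends x to g[x]
X = range(3)
orbits = {frozenset(g[x] for g in G) for x in X}
lhs = Fraction(len(X), len(G))       # |X| / |G| = 3/6
rhs = sum(Fraction(1, sum(1 for g in G if g[x] == x))
          for x in (min(o) for o in orbits))
assert lhs == rhs                    # both sides equal 1/2
\end{verbatim}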
\bigskip\noindent{\it Special case.} The group $G$ acts on itself by inner automorphisms; we have a map: $$ \left\{\!\! \begin{array}{rcl} G & \longrightarrow & \mathcal{S}_G\\ x & \longmapsto & {\rm int}_x \end{array} \right.$$ where ${\rm int}_x(y)=xyx^{-1}={}^xy$. The orbits are the conjugacy classes\label{classeconjugaison}. The stabilizer of an element $x$ of $G$ is the set of elements of $G$ that commute with $x$ (it is called the \emph{centralizer}\label{centralisateur} of $x$ and denoted $C_G(x)$). We have $1=\sum_{i\in I}{{|C_{G}(x_i)|}^{-1}}$ where ${(x_i)}_{i\in I}$ is a system of representatives of the conjugacy classes. For $x_i=1$, we have $C_{G}(x_i)=G$ and hence $\sup_{i\in I}{|C_{G}(x_i)|}=|G|$. \bigskip{\it Exercise. }\\ $(i)$ If $h$ is an integer $\geqslant 1$, show that there are only finitely many decompositions $1=\sum_{i=1}^{h}{\frac{1}{n_i}}$ with $n_i\in \mathbf{Z}$, $n_i\geqslant 1$. [For example, if $h=3$, the only possible $n_i$ are $(3,3,3)$, $(2,4,4)$ and $(2,3,6)$.] $(ii)$ Deduce that, if a finite group $G$ has a number of conjugacy classes equal to $h$, the order of $G$ is bounded by a constant $N(h)$ depending only on $h$. (One can take $N(h)$ of the form ${c_1}^{{c_2}^h}$, where $c_1$, $c_2$ are constants $>0$. I do not know whether one can do much better.) \section{Normal subgroups; characteristic subgroups; simple groups}\label{grpe simple} \begin{defi} We say that a subgroup $H$ of $G$ is \emph{normal} (or \emph{invariant}) if for every $x\in G$ and every $h\in H$, we have $xhx^{-1}\in H$. \end{defi}\label{normal} This amounts to saying that the subgroup $H$ is stable under every inner automorphism. Such a situation is described by an exact sequence: $$\xymatrix{\{1\} \ar[r] & H \ar[r] & G \ar[r] & G/H \ar[r] & \{1\}}.$$ {\it Remark. } If $H$ is a subgroup of $G$, there is a largest subgroup of $G$ in which $H$ is normal, namely the set of $g\in G$ such that $gHg^{-1}=H$. It is called the \emph{normalizer of $H$ in $G$}\label{normalisateur}, and denoted $N_G(H)$. We say that a subset of $G$ \emph{normalizes} $H$ if it is contained in $N_G(H)$. \begin{defi} We say that a subgroup $H$ of $G$ is \emph{characteristic} if it is stable under every automorphism of $G$. \end{defi} \label{caracteristique} Such a subgroup is normal. \bigskip{\it Example. } The \emph{center}\label{centre} of $G$ (the set of elements that commute with every element of $G$) is a characteristic subgroup. The same is true of the \emph{derived group} of $G$, and likewise of the subgroups $D^nG$, $C^iG$ and $\Phi(G)$ defined in Chap. \ref{chap3}. \begin{defi} We say that a group $G$ is \emph{simple} if it has exactly two normal subgroups: $\{1\}$ and $G$. \end{defi} {\it Examples.\\ } $(1)$ The only \emph{abelian} simple groups are the cyclic groups of prime order, that is, the groups $\mathbf{Z}/{p\mathbf{Z}}$ with $p$ prime. $(2)$ The alternating group $\mathcal{A}_n$ is simple if $n\geqslant 5$. $(3)$ The group $\mathbf{PSL}_n(\mathbf{F}_q)$ is simple for $n\geqslant 2$ except in the case $n=2$ and $q=2$ or $3$.
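\bigskip{\it Numerical illustration. } For part $(i)$ of the exercise above, the finiteness is effective: a short enumeration sketch (written here for $h=3$; the choice is illustrative) recovers exactly the three decompositions quoted there. The pruning bound, namely that $h$ terms each at most $1/n$ cannot reach a target larger than $h/n$, is also the key step in the finiteness proof.
\begin{verbatim}
from fractions import Fraction

def decomps(h, target, lo):
    # all 1/n_lo' + ... (h terms, n_i nondecreasing, n_i >= lo) = target
    if h == 0:
        return [[]] if target == 0 else []
    if target <= 0:
        return []
    out = []
    n = max(lo, 1)
    while Fraction(h, n) >= target:          # feasibility: h/n >= target
        if Fraction(1, n) <= target:
            for tail in decomps(h - 1, target - Fraction(1, n), n):
                out.append([n] + tail)
        n += 1
    return out

print(decomps(3, Fraction(1), 1))    # [[2, 3, 6], [2, 4, 4], [3, 3, 3]]
\end{verbatim}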
\section{Filtrations and the Jordan-H\"{o}lder theorem} \label{filtrations} \begin{defi} A \emph{filtration} of the group $G$ is a finite sequence ${(G_i)}_{0\leqslant i \leqslant n}$ of subgroups such that $$G_0=\{1\}\subset G_1\subset \cdots \subset G_i \subset \cdots \subset G_n=G$$ with $G_i$ normal in $G_{i+1}$, for $0\leqslant i\leqslant n-1$. The \emph{graded object} of $G$ (associated with the filtration ${(G_i)}_{0\leqslant i \leqslant n}$), denoted $\mathrm{gr}(G)$, is the sequence of the $\mathrm{gr}_i(G)=G_i/G_{i-1}$, for $1\leqslant i\leqslant n$. \end{defi} \begin{defi} A filtration ${(G_i)}_{0\leqslant i \leqslant n}$ of $G$ is called a \emph{Jordan-H\"{o}lder filtration} if $G_i/G_{i-1}$ is simple for all $1\leqslant i\leqslant n$. \end{defi} \begin{prop} If $G$ is finite, then $G$ has a Jordan-H\"{o}lder filtration. \end{prop} If $G=\{1\}$, we have the trivial Jordan-H\"{o}lder filtration ($n=0$). If $G$ is simple, we take $n=1$. If $G$ is not simple, we argue by induction on the order of $G$. Let $N\subset G$ be a normal subgroup of $G$ of maximal order. Then $G/N$ is simple, for otherwise there would exist a normal subgroup $M$ of $G$ strictly containing $N$ and distinct from $G$. Since $|N|<|G|$, the induction hypothesis applies, and if ${(N_i)_{0\leqslant i\leqslant n}}$ is a Jordan-H\"{o}lder filtration of $N$, then $(N_0, \cdots, N_{n-1}, N, G)$ is one for $G$.~\hfill{$\square$} \bigskip{\it Remark. } If $G$ is infinite, it may fail to have a Jordan-H\"{o}lder filtration: this is the case, for instance, of $\mathbf{Z}$. \begin{thm}[Jordan-H\"{o}lder] Let $G$ be a finite group and let ${(G_i)}_{0\leqslant i \leqslant n}$ be a Jordan-H\"{o}lder filtration of $G$. Up to a permutation of the indices, the graded object of $G$ does not depend on the chosen filtration. \end{thm} It suffices to show that if $S$ is a fixed simple group and $n\left(G,(G_i),S\right)$ is the number of indices $j$ such that $G_j/G_{j-1}$ is isomorphic to $S$, then $n\left(G,(G_i),S\right)$ does not depend on the filtration $(G_i)$. We begin with a remark: if $H$ is a subgroup of $G$, a filtration $(G_i)$ of $G$ induces a filtration $(H_i)$ of $H$ defined by $H_i=G_i\cap H$. Likewise, if $N$ is normal, we get a filtration of $G/N$ defined by $(G/N)_i=G_i/(G_i\cap N)$. The exact sequence $$\xymatrix{ \{1\} \ar[r] & N \ar[r] & G \ar[r] & G/N \ar[r] & \{1\} }$$ is preserved by the filtration: $$\xymatrix{ \{1\} \ar[r] & N_i/{N_{i-1}} \ar[r] & G_i/G_{i-1} \ar[r] & (G/N)_i/(G/N)_{i-1} \ar[r] & \{1\}}$$ whence finally the exact sequence $$\xymatrix{ \{1\} \ar[r] & \mathrm{gr}_i(N) \ar[r] & \mathrm{gr}_i(G) \ar[r] & \mathrm{gr}_i(G/N) \ar[r] & \{1\}. }$$ If the initial filtration is a Jordan-H\"{o}lder one, $\mathrm{gr}_i(G)$ is simple for all $i$, so $\mathrm{gr}_i(N)$ is isomorphic either to $\{1\}$ or to $\mathrm{gr}_i(G)$. After reindexing, we thus obtain a Jordan-H\"{o}lder filtration of $N$, and likewise one of $G/N$. This remark allows us to prove the theorem; indeed, there are two possibilities: either $\mathrm{gr}_i(N)=\{1\}$ and $\mathrm{gr}_i(G/N)=\mathrm{gr}_i(G)$, or $\mathrm{gr}_i(N)=\mathrm{gr}_i(G)$ and $\mathrm{gr}_i(G/N)=\{1\}$. We deduce a partition of $I=\{1, \dots, n\}$ into two parts: $I_1=\left\{i,\, \mathrm{gr}_i(N)=\{1\}\right\}$ and $I_2=\left\{i,\,\mathrm{gr}_i(N)=\mathrm{gr}_i(G)\right\}$.\\ We then argue by induction on the order of $G$. If $G=\{1\}$, there is nothing to prove.
Otherwise, we may assume that $G$ is not simple. Let $N$ then be a normal subgroup with $|N|<|G|$ and $|G/N|<|G|$. The induction hypothesis applies to $N$ and $G/N$: $n\left(N,{(N_i)}_{i\in I_2},S\right)$ and $n\left(G/N,\left((G/N)_i\right)_{i\in I_1},S\right)$ are independent of the filtration. Now $$n\left(G,{(G_i)}_{i\in I},S\right)=n\left(N,{(N_i)}_{i\in I_2},S\right)+n\left(G/N,\left((G/N)_i\right)_{i\in I_1},S\right),$$ hence $n\left(G,{(G_i)}_{i\in I},S\right)$ is independent of the chosen filtration.~\hfill{$\square$} \bigskip{\it Application. } In this way one recovers the uniqueness of the decomposition of an integer into a product of prime factors. Indeed, if $n=p_1^{h_1}\cdots p_k^{h_k}$, we have for $\mathbf{Z}/n\mathbf{Z}$ the following Jordan-H\"{o}lder filtration: $$\mathbf{Z}/n\mathbf{Z}\supset p_1 \mathbf{Z}/n\mathbf{Z} \supset p_1^2 \mathbf{Z}/n\mathbf{Z}\supset\cdots\supset p_1^{h_1} \mathbf{Z}/n\mathbf{Z}\supset\cdots$$ Hence $\mathbf{Z}/p_i\mathbf{Z}$ appears $h_i$ times in the graded object, whence the uniqueness. \bigskip{\it Examples.\\ } $(1)$ Filtration of $\mathcal{S}_3$: $\mathcal{A}_3$ is normal in $\mathcal{S}_3$ and $\mathcal{A}_3$ is cyclic of order $3$. Whence the filtration $$\{1\}\subset \mathcal{A}_3 \subset \mathcal{S}_3.$$ $(2)$ Filtration of $\mathcal{S}_4$: $\mathcal{A}_4$ is normal in $\mathcal{S}_4$ with $(\mathcal{S}_4:\mathcal{A}_4)=2$. In $\mathcal{A}_4$, there is a normal subgroup $D$ of type $(2,2)$: $D=\{1, \sigma_1, \sigma_2, \sigma_3\}$ with \begin{eqnarray*} \sigma_1 & = & (a,b)(c,d),\\ \sigma_2 & = & (a,c)(b,d),\\ \sigma_3 & = & (a,d)(b,c). \end{eqnarray*} We thus have the filtration $$\{1\} \subset \{1, \sigma_i\} \subset D \subset \mathcal{A}_4 \subset \mathcal{S}_4.$$ The orders of the successive quotients are\, $2,2,3,2$. Since the choice of $i$ is arbitrary, the filtration is not unique. $(3)$ Filtration of $\mathcal{S}_n$ for $n\geqslant 5$: the group $\mathcal{A}_n$ being simple, we have the filtration $$\{1\}\subset \mathcal{A}_n \subset \mathcal{S}_n.$$ \chapter{Sylow theorems} Let $p$ be a prime number and let $G$ be a finite group. \section{Definitions}\label{sylow} \begin{defi} We say that $G$ is a \emph{$p$-group} if the order of $G$ is a power of $p$. If $G$ has order $p^n m$ with $m$ prime to $p$, we say that a subgroup $H$ of $G$ is a \emph{$p$-Sylow subgroup} of $G$ if $H$ has order $p^n$. \end{defi} {\it Remarks.\\ } $(1)$ Let $S$ be a subgroup of $G$; then $S$ is a $p$-Sylow subgroup of $G$ if and only if $S$ is a $p$-group and $(G:S)$ is prime to $p$. $(2)$ Every conjugate of a $p$-Sylow subgroup of $G$ is a $p$-Sylow subgroup of $G$. \bigskip{\it Example. } Let $K$ be a finite field of characteristic $p$ with $q=p^f$ elements. Let $G=\mathbf{GL}_n(K)$ be the group of invertible $n\times n$ matrices with coefficients in $K$. This group is isomorphic to $\mathbf{GL}(V)$, where $V$ is a vector space of dimension $n$ over $K$. Observe that the order of $G$ is the number of bases of a vector space of dimension $n$ over $K$, namely: $$|G|= (q^n-1)(q^n-q)\cdots (q^n-q^{n-1})= q^{n(n-1)/2}\prod_{i=1}^{n}{(q^i-1)} = p^{fn(n-1)/2} m,$$ where $m=\prod_{i=1}^{n}{(q^i-1)}$ is prime to $q$, hence to $p$.\\ Consider on the other hand the group $P$ consisting of the upper triangular matrices with diagonal coefficients equal to $1$. It is a subgroup of $G$ of order $|P|=q^{n(n-1)/2}=p^{fn(n-1)/2}$.
Hence $P$ is a $p$-Sylow subgroup of $G$. \section{Existence of $p$-Sylow subgroups}\label{2.2} The aim of this section is to prove the first Sylow theorem: \begin{thm}\label{sylow 1} Every finite group has at least one $p$-Sylow subgroup. \end{thm} \subsection{First proof} It rests on the following proposition: \begin{prop} Let $H$ be a subgroup of $G$ and let $S$ be a $p$-Sylow subgroup of $G$. Then there exists $g\in G$ such that $H\cap gSg^{-1}$ is a $p$-Sylow subgroup of $H$. \end{prop} Let $X$ be the set of left cosets of $G$ modulo $S$. The group $G$ (resp. $H$) acts on $X$ by translations. The stabilizers of the points of $X$ under $G$ (resp. under $H$) are the conjugates of $S$ (resp. the subgroups $H\cap gSg^{-1}$). Now $|X|\not\equiv 0{\pmod p}$, since $S$ is a $p$-Sylow subgroup of $G$. One of the orbits $\mathcal O$ of $X$ under the action of $H$ has cardinality prime to $p$ (otherwise $|X|$ would be divisible by $p$); let $x\in \mathcal O$ and let $H_x$ be the stabilizer of $x$ in $H$. The group $H_x$ is a $p$-group of the form $H\cap gSg^{-1}$ (for some $g$), and $(H:H_x)=|{\mathcal O}|$ is prime to $p$. Hence $H_x$ is a $p$-Sylow subgroup of $H$ of the form $H\cap gSg^{-1}$.~\hfill{$\square$} \begin{coro} If $G$ has $p$-Sylow subgroups and $H$ is a subgroup of $G$, then $H$ also has $p$-Sylow subgroups. \end{coro} {\it Application. }[A first proof of Th. \ref{sylow 1}] Let $G$ be a finite group of order $n$. We can embed $G$ in the symmetric group $\mathcal{S}_n$. On the other hand, $\mathcal{S}_n$ embeds in $\mathbf{GL}_n(K)$ (where $K$ is a finite field of characteristic $p$): if $\sigma\in \mathcal{S}_n$ and $(e_i)_{1\leqslant i\leqslant n}$ is a basis of $K^n$, we associate with $\sigma$ the linear transformation $f$ defined by $f(e_i)=e_{\sigma(i)}$. Hence $G$ embeds in $\mathbf{GL}_n(K)$. By the example of \S\ \ref{sylow}, $\mathbf{GL}_n(K)$ has a $p$-Sylow subgroup. The corollary above allows us to conclude. \subsection{Second proof (Miller-Wielandt)}\label{Miller-Wielandt} Suppose $|G|=p^n m$ with $m$ prime to $p$. Let $X$ denote the set of subsets of $G$ with $p^n$ elements, and $s$ the number of $p$-Sylow subgroups of $G$. \begin{lemme} $|X|\equiv sm{\pmod p}$. \end{lemme} The group $G$ acts on $X$ by left translations. Let $X=\coprod_{i} X_i$ be the decomposition of $X$ into orbits under the action of $G$. If $A_i\in X_i$, we have $X=\coprod_{i} GA_i$. Let $G_i$ denote the stabilizer of $A_i$. Recall that $|GA_i|=|G|/|G_i|$.\\ Remark: $|G_i|\leqslant p^n$. Indeed, let $x\in A_i$. If $g\in G_i$, then $gx$ belongs to $A_i$, hence can take $p^n$ values. We therefore have at most $p^n$ choices for $g$. We thus distinguish two cases:\\ $\bullet$ If $|G_i|<p^n$, then $|GA_i|$ is divisible by $p$.\\ $\bullet$ If $|G_i|=p^n$, then $G_i$ is a $p$-Sylow subgroup of $G$.\\ Conversely, let $P$ be a $p$-Sylow subgroup of $G$; we have $Pg\in X$ for all $g\in G$, and the stabilizer of $Pg$ is $P$. Moreover, if $P$ stabilizes an element $A$ of $X$, then $PA\subset A$, so $Pa\subset A$ for every $a\in A$, hence $A=Pa$ (the two sets having the same cardinality). Thus the elements of $X$ whose stabilizer has order $p^n$ are exactly the cosets $Pa$, with $P$ a $p$-Sylow subgroup of $G$ and $a\in G$; there are $sm$ of them, and they split into orbits of cardinality $|G/P|=m$, one for each $p$-Sylow subgroup. Finally $$|X|={\sum_{i\,/\,|G_i|<p^n}\!\!\!\!{|GA_i|}}\ \ +{\sum_{i\,/\,|G_i|=p^n}\!\!\!\!{|GA_i|}}$$ so that $$|X|\equiv 0+sm{\pmod p}$$ whence the result.~\hfill{$\square$} \bigskip This lemma gives us Th.
\ref{sylow 1}. Indeed, by the lemma, the class of $s$ modulo $p$ depends only on the order of $G$. Now $G'=\mathbf{Z}/|G|\mathbf{Z}$ has a unique $p$-Sylow subgroup (isomorphic to $\mathbf{Z}/p^n\mathbf{Z}$). Hence $s\equiv 1{\pmod p}$; in particular, $s$ is nonzero.~\hfill{$\square$} \bigskip{\it Remark. } We have in fact shown that the number of $p$-Sylow subgroups of a group $G$ is congruent to $1$ modulo $p$. We shall recover this property later. \begin{coro}[Cauchy] If $p$ divides the order of $G$, then $G$ contains an element of order~$p$. \end{coro} Indeed, let $S$ be a $p$-Sylow subgroup of $G$ (one exists by Th. \ref{sylow 1}); $S$ is not reduced to $\{1\}$, since $p$ divides the order of $G$. Let $x\in S$ with $x\neq 1$. The order of $x$ is a power of $p$, say $p^m$ ($m\geqslant 1$). Then $x^{p^{m-1}}$ has order $p$.~\hfill{$\square$} \section{Properties of $p$-Sylow subgroups}\label{2.3} \begin{thm}[Second Sylow theorem]\label{sylow 2} \ \begin{enumerate} \item[(1)] Every $p$-subgroup of $G$ is contained in a $p$-Sylow subgroup of $G$. \item[(2)] The $p$-Sylow subgroups of $G$ are conjugate. \item[(3)] The number of $p$-Sylow subgroups is congruent to $1$ modulo $p$. \end{enumerate} \end{thm} \begin{lemme}\label{lemme1} Let $X$ be a finite set on which a $p$-group $P$ acts, and let $X^P$ be the set of elements of $X$ fixed by $P$. Then $|X|\equiv |X^P|\pmod p$. \end{lemme} The one-element orbits of $X$ under the action of $P$ are exactly those consisting of a point of $X^P$. The set $X-X^P$ is therefore a union of nontrivial orbits, each of cardinality divisible by $p$.~\hfill{$\square$} \bigskip We can now prove parts $(1)$ and $(2)$ of Th. \ref{sylow 2}: let $S$ be a $p$-Sylow subgroup of $G$ and let $P$ be a $p$-subgroup of $G$. We apply Lemma \ref{lemme1} to the set $X$ of left cosets of $G$ modulo $S$: $|X|\not\equiv 0 \pmod p$, hence $|X^P|\not\equiv 0 \pmod p$. In particular, there exists $x\in X$ fixed by $P$. The stabilizer of $x$ thus contains $P$ and is a conjugate of $S$. Hence $P$ is contained in a conjugate of $S$ (that is, in a $p$-Sylow subgroup of $G$).\\ For part $(2)$, we apply $(1)$ to $P=S'$, where $S'$ is a $p$-Sylow subgroup of $G$. There exists $g\in G$ such that $S'\subset gSg^{-1}$, hence $S'=gSg^{-1}$.~\hfill{$\square$} \bigskip For part $(3)$, we give a new proof based on the following lemma: \begin{lemme}\label{lemme2} Let $S$ and $S'$ be two $p$-Sylow subgroups of $G$. If $S'$ normalizes $S$, then $S=S'$. \end{lemme} Let $H$ be the subgroup of $G$ generated by $S$ and $S'$. The group $H$ normalizes $S$, which is a $p$-Sylow subgroup of $H$. Hence $S$ is the only $p$-Sylow subgroup of $H$ (the $p$-Sylow subgroups of $H$ being conjugate); now $S'$ is a $p$-Sylow subgroup of $H$, so $S=S'$.~\hfill{$\square$} \bigskip Let us prove part $(3)$. If $X$ is the set of $p$-Sylow subgroups of $G$, then $S$ acts on $X$ by conjugation, and by Lemma \ref{lemme2}, $S$ is the only element of $X$ fixed by $S$. We apply Lemma \ref{lemme1} again (with $P=S$): $|X|\equiv 1 \pmod p$.~\hfill{$\square$} \begin{coro} If $S$ is a $p$-Sylow subgroup of $G$, then $\big(G:N_G(S)\big)\equiv 1 \pmod p$.
\end{coro} The map $f$ from $G/N_G(S)$ to the set of $p$-Sylow subgroups of $G$ defined by $f({\bar g})=gSg^{-1}$ (where $g$ is any representative of $\bar g$) is bijective.~\hfill{$\square$}\bigskip We have seen that for every subgroup $H$ of $G$, there exists a $p$-Sylow subgroup of $G$ whose intersection with $H$ is a $p$-Sylow subgroup of $H$. This is not true for every $p$-Sylow subgroup of $G$. But if $H$ is normal, we have: \begin{prop}\label{prop2.10} Let $H$ be a normal subgroup of $G$ and let $S$ be a $p$-Sylow subgroup of $G$. Then: \begin{enumerate} \item[(1)] $S\cap H$ is a $p$-Sylow subgroup of $H$. \item[(2)] The image of $S$ in $G/H$ is a $p$-Sylow subgroup of $G/H$ (and all of them are obtained in this way). \item[(3)] (\,{\bf Frattini}) If $Q$ is a $p$-Sylow subgroup of $H$, then $H.N_G(Q)=G$. \end{enumerate} \end{prop}\label{thmFrattini} $(1)$ Clear. $(2)$ The image of $S$ in $G/H$ is isomorphic to $S/(H\cap S)$. If $p^a$ (resp. $p^b$) is the maximal power of $p$ dividing the order of $H$ (resp. of $G/H$), then $p^{a+b}$ is the maximal power of $p$ dividing the order of $G$. Consequently, $S$ has $p^{a+b}$ elements. Moreover, $H\cap S$ has at most $p^a$ elements, so $S/(H\cap S)$ has at least $p^b$, hence exactly $p^b$. It follows that $S/(H\cap S)$ is a $p$-Sylow subgroup of $G/H$. On the other hand, all the $p$-Sylow subgroups are obtained by conjugation, whence $(2)$. $(3)$ Let $g\in G$. We have $gQg^{-1}\subset gHg^{-1}=H$ ($H$ being normal). Now $gQg^{-1}$ is a $p$-Sylow subgroup of $H$, so there exists $h\in H$ such that $gQg^{-1}=hQh^{-1}$, hence $h^{-1}g\in N_G(Q)$ and therefore $g\in H.N_G(Q)$. Thus $G\subset H.N_G(Q)$, so $H.N_G(Q)=G$.~\hfill{$\square$} \begin{coro}\label{2.11} Let $S$ be a $p$-Sylow subgroup of $G$ and let $H$ be a subgroup of $G$ containing $N_G(S)$. Then $N_G(H)=H$. \end{coro} The group $H$ is normal in $N_G(H)$ and contains $S$, which is therefore a $p$-Sylow subgroup of $H$. We apply part $(3)$ of the proposition above to the group $N_G(H)$ and its normal subgroup $H$: $N_G(H)=H.N_{N_G(H)}(S)\subset H.N_G(S)$. Since $N_G(S)\subset H$, this gives $N_G(H)\subset H$, whence the result.~\hfill{$\square$}\bigskip In particular, if $S$ is a $p$-Sylow subgroup of $G$, we have $N_G\big(N_G(S)\big)=N_G(S)$. \section{Fusion}\label{2.4} Let $S$ be a $p$-Sylow subgroup of $G$ and let $N$ denote the normalizer of $S$ in $G$. We ask whether two elements of $S$ that are conjugate in $G$ are conjugate in $N$. We have the: \label{Burnside1} \begin{prop}[Burnside] Let $X$ and $Y$ be two subsets of the center of $S$ that are conjugate in $G$, and let $g\in G$ be such that $gXg^{-1}=Y$. Then there exists $n\in N$ such that $nxn^{-1}=gxg^{-1}$ for all $x\in X$. In particular, $nXn^{-1}=Y$. \end{prop} We want to find $n\in N$ such that $nxn^{-1}=gxg^{-1}$ for all $x\in X$, i.e.\ $g^{-1}nxn^{-1}g=x$ for all $x\in X$. So we look for $n\in N$ such that $g^{-1}n\in A=C_G(X)$ (the centralizer of $X$). Now $X$ is contained in the center of $S$, so $A$ contains $S$. Likewise, $Y=gXg^{-1}$, so $g^{-1}Sg$ is contained in $A$. The groups $S$ and $g^{-1}Sg$ are $p$-Sylow subgroups of $A$ (it suffices to compare orders), hence are conjugate in $A$: there exists $a\in A$ such that $ag^{-1}Sga^{-1}=S$. Then $n=ga^{-1}$ belongs to $N$ and $g^{-1}n$ belongs to $A$.~\hfill{$\square$} \begin{coro} Let $x$ and $y$ be two elements of the center of $S$. If they are conjugate in $G$, they are conjugate in $N$. \end{coro} {\it Remark.
} The hypothesis ``$x$ and $y$ belong to the center of $S$'' cannot be dropped: if we take $G=\mathbf{GL}_3(\mathbf{Z}/p\mathbf{Z})$ and $$S=\left\{\left( \begin{array}{ccc} 1 & \times & \times \\ 0 & 1 & \times \\ 0 & 0 & 1 \\ \end{array} \right)\right\}$$ then $$N=\left\{\left( \begin{array}{ccc} \times & \times & \times \\ 0 & \times & \times \\ 0 & 0 & \times \\ \end{array} \right)\right\}$$ and the elements $$x=\left( \begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right)\;\;\;\mbox{and}\;\;\; y=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ \end{array} \right)$$ are conjugate in $G$ but not in $N$. \ Let us say that two elements $x,y$ of $S$ are \emph{locally conjugate} if there exists a subgroup $U$ of $S$ containing them such that $x$ and $y$ are conjugate in $N_G(U)$. \begin{thm}[Alperin]\label{Alperin} The equivalence relation on $S$ generated by the relation ``$x$ and $y$ are locally conjugate'' is the relation ``$x$ and $y$ are conjugate in $G$''. \end{thm} In other words: \pagebreak[2] \begin{thm} If $x,y\in S$ are conjugate in $G$, there exists a sequence $a_0, \dots, a_n$ of elements of $S$ such that: \begin{enumerate} \item[(1)] $a_0=x$ and $a_n=y$. \item[(2)] $a_i$ is locally conjugate to $a_{i+1}$ for $0\leqslant i\leqslant n-1$. \end{enumerate} \end{thm} This follows from the following more precise theorem: \begin{thm} Let $A$ be a subset of $S$ and let $g\in G$ be such that $A^g\subset S$. Then there exist an integer $n\geqslant 1$, subgroups $U_1, \dots, U_n$ of $S$ and elements $g_1, \dots, g_n$ of $G$ with: \begin{enumerate} \item[(1)] $g=g_1\cdots g_n$. \item[(2)] $g_i\in N_G(U_i)$ for $1\leqslant i\leqslant n$. \item[(3)] $A^{g_1\cdots g_{i-1}}\subset U_i$ for $1\leqslant i\leqslant n$. \end{enumerate} \end{thm} (In this statement, $A^g$ denotes $g^{-1}Ag$.) {\it Remark. } For $i=1$, condition $(3)$ means that $A\subset U_1$. Note that $A^{g_1\cdots g_i}\subset U_i$ for $1\leqslant i\leqslant n$, as one sees by combining $(2)$ and $(3)$. In particular $A^g\subset U_n$. \bigskip The preceding theorem is a corollary of this one (take $A$ reduced to a single element). \bigskip {\it Proof. } Let $T$ be the subgroup of $S$ generated by $A$. We argue by induction on \emph{the index} $(S:T)$ of $T$ in $S$. If this index is $1$, we have $T=S$, hence $S^g=S$ and $g\in N_{G}(S)$. We then take $n=1$, $g_1=g$ and $U_1=S$. Suppose therefore that $(S:T)>1$, i.e.\ $T\neq S$. The group $T_1=N_S(T)$ is then distinct from $T$. It is a $p$-subgroup of $N_G(T)$. Choose a $p$-Sylow subgroup $\Sigma$ of $N_G(T)$ containing $T_1$. By Th. \ref{sylow 2}, there exists $u\in G$ such that $\Sigma^u\subset S$. On the other hand, set $V=T^g$; we have $V\subset S$ by hypothesis. The group $\Sigma^g$ is a $p$-Sylow subgroup of $N_G(V)=\big(N_G(T)\big)^g$.\\ Since $N_S(V)$ is a $p$-subgroup of $N_G(V)$, there exists $w\in N_G(V)$ such that $\big(N_S(V)\big)^w\subset\Sigma^g$. Set $v=u^{-1}gw^{-1}$. Then $g=uvw$. We now decompose $u$ and $v$:\\ (i) We have $T^u\subset\Sigma^u\subset S$.
Since the index of $T_1$ in $S$ is strictly smaller than that of $T$, the induction hypothesis shows that there exist subgroups $U_1,\dots,U_m$ of $S$ and elements $u_1\in N_G(U_1),\dots,u_m\in N_G(U_m)$ with $u=u_1\cdots u_m$ and $T_1^{u_1\cdots u_{i-1}}\subset U_i$ for $1\leqslant i\leqslant m$.\\ (ii) Set $T_2=N_S(V)$ and $T_3=T_2^{v^{-1}}=T_2^{wg^{-1}u}$. Since $T_2^w$ is contained in $\Sigma^g$, we have $T_3\subset\Sigma^{gg^{-1}u}=\Sigma^u$. The group $T_3$ is contained in $S$, and so is $T_3^v=T_2$. Since the index of $T_3$ is strictly smaller than that of $T$, we deduce as above the existence of subgroups $V_1,\dots,V_r$ of $S$ and of elements $v_j\in N_G(V_j)$, with $v=v_1\cdots v_r$ and $T_3^{v_1\cdots v_{j-1}}\subset V_j$ for $1\leqslant j\leqslant r$. It remains to check that the subgroups $U_1,\dots,U_m,V_1,\dots,V_r,V$ of $S$ and the decomposition $g=u_1\cdots u_m v_1\cdots v_r w$ of $g$ satisfy the conditions of the theorem.\\ We have $$u_i\in N_G(U_i),\; v_j\in N_G(V_j),\; w\in N_G(V)$$ by construction, as well as $T^{u_1\cdots u_{i-1}}\subset U_i$ ($1\leqslant i\leqslant m$), since $T$ is contained in $T_1$. It remains to see that $$T^{u_1\cdots u_m v_1\cdots v_{j-1}}\subset V_j$$ for $1\leqslant j\leqslant r$.\\ Now $T^{u_1\cdots u_m}=T^u$ is contained in $T_3=T_2^{wg^{-1}u}$; indeed, $V=T^g$ is normalized by $w^{-1}$, so $T^{gw^{-1}}=V\subset N_S(V)=T_2$, whence $T\subset T_2^{wg^{-1}}$ and $T^u\subset T_2^{wg^{-1}u}$.\\ From this we deduce that $$T^{u_1\cdots u_m v_1\cdots v_{j-1}}=T^{u v_1\cdots v_{j-1}}\subset T_3^{v_1\cdots v_{j-1}}\subset V_j,$$ which completes the proof.~\hfill{$\square$} \chapter{Solvable groups and nilpotent groups}\label{chap3} \section{Solvable groups}\label{grpe resol} Let $G$ be a group and let $x,y$ be two elements of $G$. The element $x^{-1}y^{-1}xy$ is called the \emph{commutator}\label{commutateur} of $x$ and $y$. It is denoted $(x,y)$. We have $$xy=yx(x,y).$$ If $A$ and $B$ are two subgroups of $G$, we denote by $(A,B)$ the group generated by the commutators $(x,y)$ with $x\in A$ and $y\in B$. The group $(G,G)$ is called the \emph{commutator subgroup} of $G$, or the \emph{derived group} of $G$, and is denoted $D(G)$. It is a characteristic subgroup of $G$. From its definition there follows at once the: \begin{prop} Let $H$ be a subgroup of $G$. The following properties are equivalent: \begin{enumerate} \item[(1)] $H$ contains $D(G)$. \item[(2)] $H$ is normal and $G/H$ is abelian. \end{enumerate} \end{prop} Thus $G/D(G)$ is the largest abelian quotient of $G$. It is sometimes denoted $G^{ab}$. One can iterate the process and define the sequence of derived subgroups of $G$: \begin{eqnarray*} D^0G & = & G,\\ D^nG & = & (D^{n-1}G,D^{n-1}G)\quad \mbox{for } n\geqslant 1. \end{eqnarray*} We have $G\supset D^1G\supset D^2G\supset \cdots$. \label{classeresolubilite} \begin{defi} A group $G$ is said to be \emph{solvable} if there exists an integer $n\geqslant 0$ such that $D^nG=\{1\}$. The \emph{solvability class} of $G$, denoted $cl(G)$, is then the smallest nonnegative integer $n$ for which $D^nG=\{1\}$. \end{defi} Thus, $cl(G)=0$ is equivalent to $G=\{1\}$, and $cl(G)\leqslant 1$ is equivalent to saying that $G$ is abelian. \begin{prop} Let $G$ be a group and let $n$ be an integer $\geqslant 1$.
The following properties are equivalent: \begin{enumerate} \item[(1)] $G$ is solvable of class $\leqslant n$, \item[(2)] there exists a sequence $G=G_0\supset G_1\supset \cdots \supset G_n=\{1\}$ of normal subgroups of $G$ such that $G_i/G_{i+1}$ is abelian for $0\leqslant i \leqslant n-1$, \item[(2')] there exists a sequence $G=G_0\supset G_1\supset \cdots \supset G_n=\{1\}$ of subgroups of $G$ such that $G_i$ is normal in $G_{i-1}$ and $G_{i-1}/G_i$ is abelian, for $1\leqslant i \leqslant n$, \item[(3)] there exists an abelian subgroup $A$, normal in $G$, such that $G/A$ is solvable of class $\leqslant n-1$. \end{enumerate} \end{prop} $(1)\Rightarrow(2)$ Set $G_i=D^iG$ for all $i\geqslant0$. Since $D(G)$ is stable under every automorphism (even non-inner ones!) of $G$, $D^iG$ is normal in $G$ for all $i$. The sequence ${(G_i)}_{i\geqslant 0}$ thus defined therefore satisfies $(2)$. $(2)\Rightarrow(2')$ is trivial. $(2')\Rightarrow(1)$ By induction on $k$ one sees that $D^kG\subset G_k$ for all $k$, whence $D^nG=\{1\}$. $(1)\Rightarrow(3)$ Take $A=D^{n-1}G$. $(3)\Rightarrow(1)$ By the implication $(1)\Rightarrow(2)$ applied to $G/A$ and to $n-1$, there exists a sequence $$A_0=G\supset A_1\supset \cdots \supset A_{n-1}=A$$ of normal subgroups of $G$ such that the sequence of quotients $$G/A\supset A_1/A\supset \cdots \supset A_{n-1}/A=\{1\}$$ satisfies condition $(2)$. Then the sequence $$G\supset A_1\supset \cdots \supset A_{n-1}\supset \{1\}$$ satisfies condition $(2)$, and the implication $(2)\Rightarrow(1)$ applied to $G$ and to $n$ allows us to conclude.~\hfill{$\square$} \bigskip {\it Remark. } Every subgroup (and every quotient group) of a solvable group of class~$\leqslant~n$ is solvable of class $\leqslant n$. \begin{prop} Let $G$ be a finite group and let $G=G_0\supset G_1 \supset \cdots \supset G_n=\{1\}$ be a Jordan-H\"{o}lder filtration of $G$. For $G$ to be solvable, it is necessary and sufficient that $G_i/G_{i+1}$ be cyclic of prime order for $0\leqslant i\leqslant n-1$. \end{prop} First observe that if a group is simple and solvable, then its derived group, being normal, is reduced to $\{1\}$; the group is therefore abelian and, being simple, is cyclic of prime order. The proposition follows.~\hfill{$\square$} \bigskip{\it Examples.\\ } $(1)$ The groups $\mathcal{S}_n$ are solvable if and only if $n\leqslant 4$. $(2)$ A non-abelian simple group is not solvable. $(3)$ Let $V$ be a vector space of dimension $n$ over a commutative field $K$ and let $$V=V_0\supset V_1\supset \cdots \supset V_n=0$$ be a complete flag (i.e.\ a decreasing sequence of subspaces of $V$ such that $\mathrm{codim}(V_i)=i$). Set $$G=\left\{s\in \mathbf{GL}(V)\ |\ sV_i=V_i,\ 0\leqslant i\leqslant n\right\}$$ (if we choose in $V$ a basis adapted to the flag, $G$ can be identified with the group of upper triangular matrices).\\ We then define a sequence of subgroups $(B_i)_{0\leqslant i \leqslant n}$ of $G$ by $$B_i=\{s\in G\ |\ (s-1)V_j\subset V_{i+j},\ 0\leqslant j\leqslant n-i\}.$$ In particular, $B_0=G$. We shall show that $(B_j,B_k)\subset B_{j+k}$ for $0\leqslant j \leqslant n$ and $0\leqslant k \leqslant n$ with $0\leqslant j+k \leqslant n$. Indeed, let $s\in B_j$, $t\in B_k$ and $x\in V_i$.
There exists $v_{i+k}\in V_{i+k}$ such that $$tx=x+v_{i+k},$$ then $$stx=sx+sv_{i+k}=x+w_{i+j}+v_{i+k}+t_{i+j+k}$$ (with $w_{i+j}\in V_{i+j}$ and $t_{i+j+k}\in V_{i+j+k}$). Similarly $$tsx=t(x+w_{i+j})=x+v_{i+k}+w_{i+j}+t'_{i+j+k}$$ (with $t'_{i+j+k}\in V_{i+j+k}$). Hence $$stx\equiv tsx \pmod{V_{i+j+k}}$$ or again $$s^{-1}t^{-1}stx \equiv x \pmod{V_{i+j+k}}$$ whence the result. In particular:\\ $\bullet$ $(B_0,B_i)\subset B_i$ for $0\leqslant i\leqslant n$, so the $B_i$ are normal in $B_0=G$.\\ $\bullet$ $(B_i,B_i)=D(B_i)\subset B_{2i}\subset B_{i+1}$ for $1\leqslant i \leqslant n$, so the quotients $B_i/B_{i+1}$ are abelian for $1\leqslant i \leqslant n-1$.\\ $\bullet$ Finally, $B_0/B_1=G/B_1$ is identified with the group of diagonal matrices (abelian since $K$ is commutative). Hence the sequence $B_0=G\supset B_1\supset\cdots\supset B_n=\{1\}$ satisfies condition $(2)$ and $G$ is solvable. $(4)$ We shall see later (Th. \ref{thmburn5.4}) that every group of order $p^aq^b$ (where $p$ and $q$ are prime) is solvable. $(5)$ Let us also mention the (very difficult) Feit-Thompson theorem\footnote{Reference: W. Feit and J.G. Thompson, {\it Solvability of groups of odd order}, Pacific J. Math. $13$ ($\mit 1963$), $775-1029$.}: every group of odd order is solvable (equivalently: the order of a non-abelian simple group is even). $(6)$ Solvable groups arise in field theory. Let $K$ be a field of characteristic $0$ and let $\overline{K}$ be an algebraic closure of $K$. Let $K_{rad}$ denote the smallest subfield of $\overline{K}$ containing $K$ such that for every $x\in K_{rad}$ and every integer $n\geqslant 1$, we have $x^{1/n}\in K_{rad}$. One proves that a finite Galois extension of $K$ is contained in $K_{rad}$ if and only if its Galois group is solvable (i.e.\ an equation is \emph{solvable by radicals} if and only if its Galois group is solvable; this is the origin of the terminology ``solvable''). \section{Descending central series} \label{suitecentrale} Let $G$ be a group. The \emph{descending central series} of $G$ is the sequence $(C^nG)_{n\geqslant 1}$ of subgroups of $G$ defined inductively by: \begin{eqnarray*} C^1G & = & G,\\ C^{n+1}G & = & (G,C^nG)\quad \mbox{for } n\geqslant 1. \end{eqnarray*} For all $n\geqslant 1$, $C^nG$ is a characteristic subgroup of $G$. \begin{prop} We have $(C^iG,C^jG)\subset C^{i+j}G$ for all $i\geqslant 1$ and all $j\geqslant 1$. \end{prop} We argue by induction on $i$, the proposition being clear for $i=1$ and all $j\geqslant 1$. Let $j\geqslant 1$; we have $(C^{i+1}G,C^jG)=\big((G,C^iG),C^jG\big)$. Now $\big((C^iG,C^jG),G\big)\subset (C^{i+j}G,G)$ (by the induction hypothesis), hence $\big((C^iG,C^jG),G\big)\subset C^{i+j+1}G$. Similarly, $\big((C^jG,G),C^iG\big)$ is contained in $C^{i+j+1}G$. The following lemma allows us to conclude.~\hfill{$\square$} \begin{lemme} If $X$, $Y$ and $Z$ are normal subgroups of $G$ and if $H$ is a subgroup of $G$ containing $\big((Y,Z),X\big)$ and $\big((Z,X),Y\big)$, then $H$ contains $\big((X,Y),Z\big)$. \end{lemme} One uses Hall's identity: $$\big(x^y,(y,z)\big)\big(y^z,(z,x)\big)\big(z^x,(x,y)\big)=1,$$ (where $x^y=y^{-1}xy$), which is obtained by expanding the forty-two terms of the left-hand side (cf. Bourbaki, A.I, \S\ 6).~\hfill{$\square$} \bigskip{\it Remark.
} Hall's identity is the analogue for groups of the Jacobi identity: $$\big[x,[y,z]\big]+\big[y,[z,x]\big]+\big[z,[x,y]\big]=0$$ for Lie algebras. It can be used to associate with every group $G$ equipped with a filtration $(G_i)$ satisfying $(G_i,G_j)\subset G_{i+j}$ a Lie algebra $gr(G)$, namely $$gr(G)=\bigoplus_i{G_i/G_{i+1}}.$$ If $\xi\in G_i/G_{i+1}$ and $\eta\in G_j/G_{j+1}$, the bracket $$[\xi,\eta]\in G_{i+j}/G_{i+j+1}$$ is by definition the image of the commutator $(x,y)$, where $x$ (resp. $y$) is a representative in $G_i$ (resp. $G_j$) of $\xi$ (resp. $\eta$). This applies in particular to the case $G_i=C^iG$ (cf. also Bourbaki, Lie II, \S\ 4, $\mbox{n}^{\rm o}$\! 4). \section{Nilpotent groups}\label{grpe nilp} \label{classenilpotence} \begin{defi} A group $G$ is said to be \emph{nilpotent} if there exists a nonnegative integer $n$ such that $C^{n+1}G=\{1\}$. The \emph{nilpotency class} of $G$ is then the smallest such integer $n$. \end{defi} In particular: \begin{itemize} \item The group $G$ is abelian if and only if it is nilpotent of class $\leqslant 1$. \item A finite product of nilpotent groups is nilpotent, and the nilpotency class of the product is the supremum of the classes of the factors. \item A subgroup (resp. a quotient group) of a nilpotent group is nilpotent. \end{itemize} \begin{prop} Every nilpotent group is solvable. \end{prop} Indeed, for all $n\geqslant 0$, we have $D^nG\subset C^{2^n}G$.~\hfill{$\square$} \bigskip The converse is false: the group $\mathcal{S}_3$ is solvable of class $2$. Let us look at the descending central series of $\mathcal{S}_3$: we have $C^1\mathcal{S}_3=\mathcal{S}_3$, $C^2\mathcal{S}_3={\cal C}_3$ (the cyclic group of order 3), then $C^3\mathcal{S}_3={\cal C}_3$, etc. The series is stationary and never reaches $\{1\}$. Hence $\mathcal{S}_3$ is not nilpotent. \bigskip One can build nilpotent groups as follows: \begin{prop} A group $G$ is nilpotent of class $\leqslant n+1$ if and only if it is a central extension of a group $\Gamma$ that is nilpotent of class $\leqslant n$ (i.e.\ there exists an exact sequence $\{1\}\rightarrow A \rightarrow G \rightarrow \Gamma \rightarrow \{1\}$ with $A$ contained in the center of $G$). \end{prop} If $G$ is nilpotent of class $n+1$, then $C^{n+2}G=\{1\}$, so $C^{n+1}G$ lies in the center of $G$. Set $\Gamma=G/{C^{n+1}G}$; we have $C^{n+1}\Gamma=\{1\}$, so $\Gamma$ is nilpotent of class $\leqslant n$.\\ Conversely, if such an exact sequence exists and if $C^{n+1}\Gamma=\{1\}$, then $C^{n+1}G\subset A$, so $C^{n+1}G$ is contained in the center of $G$, and $C^{n+2}G=\{1\}$.~\hfill{$\square$} \begin{coro}\label{3.8} Let $G$ be a nilpotent group and let $H$ be a subgroup of $G$ distinct from $G$. Then $N_G(H)$ is distinct from $H$. \end{coro} We argue by induction on the nilpotency class $n$ of $G$. If $n=1$, we have $N_G(H)=G$ (since $G$ is abelian), hence $N_G(H)\neq H$.\\ If $n\geqslant 2$, choose a central subgroup $A$ of $G$ such that $G/A$ is nilpotent of class $\leqslant n-1$. Then $N_G(H)$ contains $A$. If $H$ does not contain $A$, then $N_G(H)\neq H$. If $H$ contains $A$, then $H/A$ is a proper subgroup of $G/A$, and the induction hypothesis shows that $H/A\neq N_{G/A}(H/A)$.
Since $N_G(H)/A=N_{G/A}(H/A)$, we deduce that $N_G(H)$ is distinct from $H$.~\hfill{$\square$} \bigskip Another characterization of nilpotent groups is given by the \begin{prop} A group $G$ is nilpotent if and only if there exists a filtration $(G_i)_{1\leqslant i\leqslant n+1}$ such that $G_1=G\supset G_2\supset \cdots \supset G_{n+1}=\{1\}$ with $(G,G_i)\subset G_{i+1}$ for all $1 \leqslant i \leqslant n$. \end{prop} {\it Remark. } A stronger condition would be $(G_i,G_j)\subset G_{i+j}$. \bigskip {\it Proof. } If such a filtration exists, then $C^kG\subset G_k$ for all $k\geqslant 1$, hence $G$ is nilpotent. Conversely, if $G$ is nilpotent, take $G_k=C^kG$.~\hfill{$\square$} \bigskip {\it Example. } Let $V$ be a vector space of finite dimension $n$ over a field $K$ and let $$V=V_0\supset V_1\supset \cdots \supset V_n=0$$ be a complete flag of $V$. Returning to the example of \S\ \ref{grpe resol}, set $$B_j=\{g\in \mathbf{GL}(V)\ |\ (g-1)V_i\subset V_{i+j},\ i\geqslant 0\}.$$ Then $B_1$ is nilpotent; indeed, $(B_i)_{i\geqslant 1}$ is a filtration such that $B_1\supset B_2\supset \cdots \supset B_n=\{1\}$ and $(B_i,B_j)\subset B_{i+j}$, as we saw above. \bigskip As an application, we have the: \begin{thm}[Kolchin] Let $V$ be a finite-dimensional vector space over a commutative field $K$ and let $G$ be a subgroup of $\mathbf{GL}(V)$. Assume that every element $g$ of $G$ admits $1$ as its only eigenvalue (i.e.\ $g-1$ is nilpotent). Then there exists a complete flag of $V$ such that $G$ is contained in the corresponding group $B_1$ (cf. above). In particular, $G$ is nilpotent. \end{thm}\label{Kolchin} We argue by induction on the dimension $n$ of $V$, the case $n=0$ being trivial. Suppose then $n\geqslant 1$ and let us show that there exists a nonzero $x\in V$ such that $gx=x$ for all $g\in G$. The problem being linear, we may extend scalars and assume $K$ algebraically closed. Let $A_G$ be the linear subspace of $\mathrm{End}(V)$ spanned by $G$. It is a subalgebra of $\mathrm{End}(V)$. We distinguish two cases:\\ $\bullet$ $V$ is a reducible $A_G$-module, i.e.\ there exists $V'\subset V$, stable under $G$, distinct from $0$ and from $V$. The induction hypothesis applies to $V'$ and yields a nonzero $x\in V'$ such that $gx=x$ for all $g\in G$.\\ $\bullet$ If $V$ is irreducible, we have $A_G=\mathrm{End}(V)$ by a theorem of Burnside (cf. Bourbaki, A. VIII, \S\ 4, $\mbox{n}^{\rm o}$\!~3). Now if $a,a'\in A_G$, we have $n\mathrm{Tr}(aa')=\mathrm{Tr}(a)\mathrm{Tr}(a')$. Indeed, this is clear if $a,a'\in G$, for then $\mathrm{Tr}(aa')=\mathrm{Tr}(a)=\mathrm{Tr}(a')=n$, and it therefore holds in $A_G$ by linearity. If $n>1$, the elements $a$ and $a'$ with respective matrices $$\left(% \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ \end{array}% \right)\;\;\; \mbox{and}\;\;\; \left(% \begin{array}{cc} 0 & 0 \\ 0 & 1 \\ \end{array}% \right)$$ are in $\mathrm{End}(V)$, hence in $A_G$. But $\mathrm{Tr}(a)=\mathrm{Tr}(a')=1$ and $\mathrm{Tr}(aa')=0$, a contradiction; hence $\mathrm{dim}\,(V)=1$ and any nonzero $x$ of $V$ works.\\ Once the existence of $x$ is proved, let $V_{n-1}$ denote the line spanned by $x$.
The induction hypothesis, applied to $V/V_{n-1}$, provides a complete flag of $V/V_{n-1}$ stable under $G$, whence immediately a complete flag of $V$ stable under $G$, and it is clear that $G$ is contained in the corresponding subgroup $B_1$.~\hfill{$\square$} \section{Finite nilpotent groups}\label{grpe nilp fini} Let $p$ be a prime number. \begin{prop} Every $p$-group is nilpotent. \end{prop} We shall give two proofs.\\ $\bullet$ Let $P$ be a $p$-group; then $P$ can be embedded in $\mathbf{GL}_n(\mathbf{Z}/p\mathbf{Z})$ for a sufficiently large integer $n$. Hence $P$ is contained in a $p$-Sylow subgroup of $\mathbf{GL}_n(\mathbf{Z}/p\mathbf{Z})$, which is conjugate, as we have already seen, to the set $B_1$ of upper triangular matrices with diagonal entries equal to $1$. By the example of \S\ \ref{grpe nilp}, $B_1$ is nilpotent, hence so is $P$. $\bullet$ We may assume $P\neq\{1\}$. Let $P$ act on itself by inner automorphisms; the set of fixed points is the center $C(P)$ of $P$. Since $P$ is a $p$-group, Lemma \ref{lemme1} gives $$|P|\equiv |C(P)| \pmod{p},$$ hence $C(P)\neq\{1\}$. Thus $P/C(P)$ has order strictly smaller than that of $P$. An induction completes the proof.~\hfill{$\square$} \begin{coro}\label{corogrpenilp} Let $G$ be a group of order $p^n$ (with $p$ prime and $n\geqslant 1$). Then: \begin{enumerate} \item[(1)] Every subgroup of $G$ of order $p^{n-1}$ is normal. \item[(2)] If $H$ is a subgroup of $G$, there exists a sequence of subgroups $(H_i)_{1\leqslant i\leqslant m}$ such that $H=H_1\subset H_2\subset \cdots \subset H_m=G$ with $(H_i:H_{i-1})=p$ for $2\leqslant i\leqslant m$. \item[(3)] Every subgroup of $G$ distinct from $G$ is contained in a subgroup of order $p^{n-1}$. \end{enumerate} \end{coro} $(1)$ Let $H$ be a subgroup of $G$ of order $p^{n-1}$. Then $H$ is distinct from $N_G(H)$, since $G$ is nilpotent. Hence the order of $N_G(H)$ is $p^n$ and $H$ is normal in $G$. $(2)$ We argue by induction on the order of $G$. So let $H$ be a subgroup of $G$, which we may assume distinct from $G$. Let $H'$ be a subgroup of $G$, distinct from $G$, containing $H$, of maximal order. Then $H'$ is distinct from $N_G(H')$, so $N_G(H')=G$ and $H'$ is normal in $G$. In particular, $G/H'$ is a $p$-group of order $p^k$ for some integer $k\geqslant 1$. If $k>1$, there would exist a subgroup of $G/H'$ distinct from $\{1\}$ and from $G/H'$, hence a subgroup of $G$ distinct from $G$ and strictly containing $H'$, which is absurd. Hence $k=1$ and $(G:H')=p$. The induction hypothesis applied to $H'$ gives the result. $(3)$ follows immediately from $(2)$.~\hfill{$\square$} \begin{coro}\label{coronilp} Every finite product of $p$-groups is nilpotent. \end{coro} We shall prove a converse. \begin{thm}\label{caractnilp} Let $G$ be a finite group. The following assertions are equivalent: \begin{enumerate} \item[(1)] $G$ is nilpotent. \item[(2)] $G$ is a product of $p$-groups. \item[(3)] For every prime $p$, $G$ has a unique $p$-Sylow subgroup. \item[(4)] If $p$ and $p'$ are two distinct primes and $S_p$ (resp. $S_{p'}$) is a $p$-Sylow (resp. $p'$-Sylow) subgroup of $G$, then $S_p$ and $S_{p'}$ centralize each other (i.e.\ every element of $S_p$ commutes with every element of $S_{p'}$).
\item[(5)] Two elements of $G$ of coprime orders commute. \end{enumerate} \end{thm} $(2)\Rightarrow(1)$ This is Cor. \ref{coronilp} above. $(1)\Rightarrow(3)$ Let $S$ be a $p$-Sylow subgroup of $G$ and let $N$ be its normalizer. Then $N$ is its own normalizer (cf. Cor. \ref{2.11}). Since we assume $G$ nilpotent, this forces $N=G$ (cf. Cor. \ref{3.8}), so $S$ is normal. The $p$-Sylow subgroups of $G$ being conjugate, $G$ has a unique $p$-Sylow subgroup. $(3)\Rightarrow(4)$ For every prime $p$, let $S_p$ be the unique $p$-Sylow subgroup of $G$: it is normal in $G$. If $p$ and $p'$ are two distinct primes, $S_p\cap S_{p'}$ is reduced to $\{1\}$, being both a $p$-group and a $p'$-group. Now if $x\in S_p$ and $y\in S_{p'}$, we have $x^{-1}y^{-1}xy\in S_p\cap S_{p'}$, hence $x^{-1}y^{-1}xy=1$. Thus $S_p$ and $S_{p'}$ centralize each other. $(4)\Rightarrow(2)$ For every prime $p$, choose a $p$-Sylow subgroup of $G$, denoted $S_p$. The group generated by the $S_p$ ($p$ running over the set of primes) is all of $G$ (for its order is divisible by that of $G$). We then define a map $$\varphi: \left\{\!\!\begin{array}{rcl} \prod_p{S_p} & \longrightarrow & G\\ {(s_p)}_p & \longmapsto & \prod_p{s_p}. \end{array}\right.$$ By hypothesis, $\varphi$ is a homomorphism, since the $S_p$ centralize one another. Moreover, $\varphi$ is surjective, since $G$ is generated by the $S_p$. Finally, $G$ and $\prod_p{S_p}$ have the same cardinality, so $\varphi$ is an isomorphism, which proves $(2)$. $(2)\Rightarrow(5)$ Suppose $G$ is a product of $p$-groups $G_p$. Let $x$ and $y$ be two elements of $G$ of coprime orders. Then $x={(x_p)}_p$ and $y={(y_p)}_p$, and for every $p$ we have $x_p=1$ or $y_p=1$. Indeed, $x_p=1$ if and only if $p$ does not divide the order of $x$. Hence $x^{-1}y^{-1}xy={(x_p^{-1}y_p^{-1}x_py_p)}_p=(1)$, so $xy=yx$. $(5)\Rightarrow(4)$ This is clear. In summary, we have proved the following implications: $$\xymatrix{(1) \ar@{=>}[d] & (2) \ar@{=>}[l] \ar@{=>}[r]& (5) \ar@{=>}[dl]\\ (3) \ar@{=>}[r] & (4) \ar@{=>}[u] & }$$ The theorem follows.~\hfill{$\square$} \section{The case of abelian groups} Every abelian group is nilpotent; by \S\ \ref{grpe nilp fini}, every finite abelian group is a product of abelian $p$-groups. The decomposition can be pushed further, thanks to the \begin{thm} Every abelian $p$-group is a product of cyclic groups. \end{thm} {\it Remark. } If $G$ is a finite abelian $p$-group, there exists an integer $n$ such that $p^nx=0$ for all $x\in G$, and one may regard $G$ as a module over $\mathbf{Z}/p^n\mathbf{Z}$ (for $n$ large enough). It therefore suffices to prove the following theorem: \begin{thm} Every module (not necessarily finite) over $\mathbf{Z}/p^n\mathbf{Z}$ is a direct sum of cyclic modules (isomorphic to $\mathbf{Z}/p^i\mathbf{Z}$ for $i\leqslant n$). \end{thm} Let $G$ be a module over $\mathbf{Z}/p^n\mathbf{Z}$ and let $V$ be the quotient module $G/pG$: it is a vector space over $\mathbf{Z}/p\mathbf{Z}$. Let $G_i$ denote the set $\{x\in G\ |\ p^ix=0\}$.
The $G_i$ define a filtration of $G$: $$G_0=0\subset G_1\subset \cdots \subset G_n=G.$$ Let $V_i$ be the image of $G_i$ in $V$; then $$V_0=0\subset V_1\subset \cdots \subset V_n=V.$$ We may choose a basis $S$ of $V$ adapted to this decomposition (i.e.\ such that $S_i=S\cap V_i$ is a basis of $V_i$).\\ For $s\in S$, let $i(s)$ denote the smallest $i$ for which $s\in S_i$, and choose a representative $\bar{s}$ of $s$ in $G_{i(s)}$. We have $p^{i(s)}\bar{s}=0$. Let $G'=\bigoplus_{s\in S}{\mathbf{Z}/p^{i(s)}\mathbf{Z}}$; we define a homomorphism $\varphi$ from $G'$ to $G$ as follows: for ${(n_s)}_{s\in S}\in G'$, set $\varphi\big((n_s)\big)=\sum_{s\in S}{n_s\bar{s}}$. We shall show that $\varphi$ is an isomorphism.\\ $\bullet$ $\varphi$ is surjective: it suffices to prove that the subgroup $H$ of $G$ generated by the $\bar{s}$ is all of $G$. Now the projection map from $H$ to $V$ is surjective (by definition, $\bar{s}$ is sent to $s$, which runs over a basis of $V$). Hence $H+pG=G$, i.e.\ $G=pG+H$. Iterating, $$G=p(pG+H)=p^2G+H=\cdots=p^nG+H=H.$$ $\bullet$ $\varphi$ is injective: let $(n_s)\in G'$ be such that $\sum_{s\in S}{n_s\bar{s}}=0$ in $G$. Let us show that $n_s$ is divisible by $p^i$ for all $i$ (hence $n_s=0$). Indeed, we show by induction on $i$ that $n_s\in p^i\left(\mathbf{Z}/p^{i(s)}\mathbf{Z}\right)$. For $i=0$, there is nothing to prove. If this holds up to order $k$, we have $n_s\in p^k\left(\mathbf{Z}/p^{i(s)}\mathbf{Z}\right)$. In particular $n_s=0$ if $i(s)\leqslant k$. We therefore look at those $s$ for which $i(s)\geqslant k+1$. By hypothesis, $$\sum_{i(s)\geqslant k+1}\!\!\!{n_s\bar{s}}=0.$$ Write $n_s=p^km_s$ with $m_s\in \mathbf{Z}/p^{i(s)}\mathbf{Z}$; then $$p^k\!\!\!\sum_{i(s)\geqslant k+1}\!\!\!{m_s\bar{s}}=0,$$ that is, $$\sum_{i(s)\geqslant k+1}\!\!\!{m_s\bar{s}}\ \in\,G_k,$$ whence, by projection, $$\sum_{i(s)\geqslant k+1}\!\!\!{m_s s}\ \in\,V_k.$$ Given the choice of $S$, we deduce that $m_s\equiv 0 \pmod{p}$. Hence $p^{k+1}$ divides $n_s$, and we conclude that $n_s$ belongs to $p^{k+1}\!\left(\mathbf{Z}/p^{i(s)}\mathbf{Z}\right)$.~\hfill{$\square$} \bigskip{\it Exercise. } If $G$ is a module over $\mathbf{Z}/p^n\mathbf{Z}$, every submodule of $G$ isomorphic to $\mathbf{Z}/p^n\mathbf{Z}$ is a direct factor. \bigskip \noindent{\it Application of $2$-groups: ruler-and-compass constructions.} Let $K$ be a field of arbitrary characteristic and let $L$ be a Galois extension of $K$ whose Galois group $G$ is a $2$-group. By Cor. \ref{corogrpenilp}, there exists $G'$ normal in $G$ of index $2$, hence an intermediate field $L'$, fixed by $G'$, which is a quadratic extension of $K$. Iterating, $L/K$ appears as a tower of quadratic extensions.\\ Conversely, if $L/K$ is such a tower, the Galois extension it generates has as Galois group a $2$-group. We thus obtain a characterization of the Galois extensions whose group is a $2$-group. If the characteristic of $K$ is different from $2$, a quadratic extension is of the form $K\big[\sqrt{a}\, \big]\simeq K[X]/(X^2-a)$ with $a\in K^*\smallsetminus {K^*}^2$.
(If $\mathrm{char}\, K=2$, one replaces $X^2-a$ by $X^2+X+a$.)\\ This connects with the problem of numbers constructible by ruler and compass: these are the numbers (algebraic over $\mathbf{Q}$) contained in a Galois extension of $\mathbf{Q}$ whose Galois group is a $2$-group. \bigskip{\it Example. } (Impossibility of duplicating the cube) The number $\sqrt[3]{2}$ is not constructible by ruler and compass, since $X^3-2$ is irreducible and its degree is not a power of $2$. \section{The Frattini subgroup} \label{sgFrattini} Let $G$ be a finite group. The \emph{Frattini subgroup} of $G$, denoted $\Phi(G)$, is the intersection of the maximal subgroups\footnote{A subgroup $H$ of $G$ is called \emph{maximal} if it is distinct from $G$ and maximal for this property; one then says that the action of $G$ on $G/H$ is \emph{primitive}.} of $G$. It is a characteristic subgroup of~$G$. An interesting problem is to determine under what conditions a subset $S$ of $G$ generates the group $G$. We have the following proposition: \begin{prop} Let $S$ be a subset of $G$ and let $H$ be the subgroup it generates. We have $H=G$ if and only if $H.\Phi(G)=G$, i.e.\ if and only if $S$ generates $G/\Phi(G)$. \end{prop} Indeed, if $H.\Phi(G)=G$ and $H\neq G$, there exists a maximal subgroup $H'$ containing $H$ and distinct from $G$; $\Phi(G)$ is also contained in $H'$ by definition: $G=H.\Phi(G)$ is then contained in $H'$, which is absurd.~\hfill{$\square$} \begin{thm} The group $\Phi(G)$ is nilpotent. \end{thm} We use characterization $(3)$ of Th. \ref{caractnilp}. Let $S$ be a $p$-Sylow subgroup of $\Phi(G)$ (for an arbitrary prime $p$). We saw in the study of Sylow subgroups (Prop. \ref{prop2.10}) that $G=\Phi(G).N_G(S)$. By the proposition above, $N_G(S)=G$, so $S$ is normal in $G$, hence in $\Phi(G)$, and is therefore the unique $p$-Sylow subgroup of $\Phi(G)$, which is then nilpotent.~\hfill{$\square$} \bigskip When $G$ is a $p$-group, we have a simple characterization of $\Phi(G)$: \begin{thm} If $G$ is a $p$-group, then $\Phi(G)$ is the subgroup generated by the commutators and the $p$-th powers of elements of $G$, i.e.\ $\Phi(G)=(G,G).G^p$. \end{thm} If $G$ has order $p^n$, its maximal subgroups $H$ have order $p^{n-1}$, hence are normal, hence are kernels of surjective homomorphisms from $G$ to $G/H\simeq \mathbf{Z}/p\mathbf{Z}$ (and conversely, such a homomorphism defines a maximal subgroup of $G$). Hence $\Phi(G)$ is the intersection of the kernels of the surjective homomorphisms from $G$ to $\mathbf{Z}/p\mathbf{Z}$. Now such a homomorphism is trivial on $(G,G)$ and on $G^p$. Hence $(G,G).G^p\subset\Phi(G)$.\\ Conversely, $V=G/(G,G).G^p$ is a vector space over $\mathbf{Z}/p\mathbf{Z}$, in which $0$ is an intersection of hyperplanes. Now a hyperplane is the kernel of a homomorphism from $V$ to $\mathbf{Z}/p\mathbf{Z}$. The group $(G,G).G^p$ therefore contains $\Phi(G)$. Finally $\Phi(G)=(G,G).G^p$.~\hfill{$\square$} \bigskip{\it Application. } $G/\Phi(G)$ is thus the largest quotient of $G$ that is elementary abelian (i.e.\ a product of cyclic groups of order $p$).\label{elementaire} \begin{coro} A subset $S$ of $G$ generates $G$ if and only if its image in $G/(G,G).G^p$ generates this group.
\end{coro} Indeed, we then have $\langle S\rangle .\Phi(G)=G$, hence $\langle S \rangle=G$.~\hfill{$\square$} \bigskip Thus the minimal cardinality of a generating subset $S$ of $G$ is $\mathrm{dim}\,_{\mathbf{F}_p}\big(G/\Phi(G)\big)$ (in the case where $G$ is a $p$-group). \bigskip \noindent{\it Characterizations via two-generator subgroups.} In the same vein, one may ask whether properties of certain subgroups of $G$ allow one to prove analogous properties for the whole group. \begin{prop} Let $G$ be a group (resp. a finite group). Suppose that every subgroup of $G$ generated by two elements is commutative (resp. nilpotent). Then $G$ is commutative (resp. nilpotent). \end{prop} Let $x,y\in G$. The group $\langle x,y\rangle$ is commutative, hence $xy=yx$. For finite nilpotent groups, one uses characterization $(5)$ of Th. \ref{caractnilp}.~\hfill{$\square$} \bigskip One may wonder whether an analogous theorem holds for solvable groups. We first define the notion of a \emph{minimal simple group}. Let $G$ be a finite non-abelian simple group. We say that $G$ is \emph{minimal}\label{minimal} if every subgroup of $G$ distinct from $G$ is solvable. \begin{lemme}\label{3.22} If $G$ is not solvable, there exist a subgroup $H$ of $G$ and a normal subgroup $K$ of $H$ such that $H/K$ is minimal simple. \end{lemme} {\it Example. } $G=\mathcal{A}_6$ is simple (not minimal); one may take $H=\mathcal{A}_5$ and $K=\{1\}$. \bigskip{\it Proof. } Let $H$ be a minimal non-solvable subgroup of $G$ and let $K$ be a normal subgroup of $H$, distinct from $H$ and maximal. The group $K$ is solvable (being strictly contained in $H$). The quotient $H/K$ is simple (since $K$ is maximal) and non-abelian (otherwise $H$ would be solvable); it is moreover minimal (every subgroup of $H/K$ is obtained as the quotient by $K$ of a subgroup $H'$ containing $K$ and contained in $H$, hence is solvable). \bigskip We then have a partial answer to our question: \begin{prop} The following two assertions are equivalent: \begin{enumerate} \item[(1)] Every minimal simple group can be generated by two elements. \item[(2)] Every group all of whose two-generator subgroups are solvable is solvable. \end{enumerate} \end{prop} Assume $(2)$. Let $G$ be minimal simple; if $\langle x,y\rangle\neq G$ for every pair $(x,y)\in G\times G$, then $\langle x,y\rangle$ is solvable as a proper subgroup of $G$, and by $(2)$, $G$ is also solvable, which is impossible; hence some pair generates $G$. Assume $(1)$. If $G$ is not solvable, there exist $H$ and $K$ as in Lemma \ref{3.22}. The group $H/K$ is minimal simple, hence generated by two elements $\bar{x}$ and $\bar{y}$. Let $H'$ be the subgroup of $H$ generated by $x$ and $y$ (representatives of $\bar{x}$ and $\bar{y}$ respectively). It is solvable by hypothesis; but $H/K$ is the image of $H'$ under the projection, hence is also solvable, which is absurd.~\hfill{$\square$} \bigskip We are thus reduced to the following problem: find the list of all minimal simple groups and determine whether they are generated by two elements. This problem was solved by Thompson\label{thmThompson}, who showed\footnote{References: J.G.
Thompson, {\it Non solvable finite groups all of whose local subgroups are solvable, I, II, $\dots$, VI}, Bull. A.M.S. $74$ ($\mit 1968$), $383-437$; Pac. J. Math. $33$ ($\mit 1970$), $451-536$; $\dots$; Pac. J. Math. $51$ ($\mit 1974$), $573-630$.\\ \indent\ \ There is a proof independent of Thompson's classification theorem: P. Flavell, {\it Finite groups in which two elements generate a solvable group}, Invent. math. $121$ $({\mit 1995})$, $279-285$.} that every minimal simple group is isomorphic to one of the following (each of which can be generated by two elements): \begin{itemize} \item $\mathbf{PSL}_2(\mathbf{F}_p)$, $p\geqslant 5$, $p\not\equiv \pm 1 \pmod{5}$, \item $\mathbf{PSL}_2(\mathbf{F}_{2^p})$, $p$ prime $\geqslant 3$, \item $\mathbf{PSL}_2(\mathbf{F}_{3^p})$, $p$ prime $\geqslant 3$, \item $\mathbf{PSL}_3(\mathbf{F}_3)$, \item the Suzuki groups $Sz(2^p)$, $p$ prime $\geqslant 3$. \end{itemize} \bigskip Let us mention a problem: is it true that a non-abelian simple group that is \emph{minimal} in the naive sense (i.e.\ containing no proper non-abelian simple subgroup) is minimal in the sense defined above?\footnote{Answer: yes, by the classification of simple groups.} \chapter{Cohomology and extensions}\label{chap4} \section{Definitions} Let $G$ be a group (written multiplicatively) and $A$ a $G$-module (in other words, an abelian group, written additively, on which $G$ acts by automorphisms). We write $sa$ for the transform of the element $a\in A$ by the element $s\in G$. We have $$\begin{array}{rcl} (st)a & = & s(ta),\\ 1a & = & a,\\ s(a_1+a_2) & =& sa_1+sa_2, \end{array}$$ for $s,t\in G$ and $a,a_1,a_2\in A$. Examples of such a situation are given by:\\ $(1)$ The trivial action of a group $G$ on an abelian group $A$: $sa=a$ for $s\in G$ and $a\in A$. $(2)$ If $L$ is a Galois extension of the field $K$ with Galois group $G$, then $G$ acts by automorphisms on $L$ equipped with addition, and on $L^*$ equipped with multiplication. \begin{defi} Let $n$ be a nonnegative integer. An \emph{$n$-cochain}, or \emph{cochain of degree $n$}, on $G$ with values in $A$ is a function of $n$ variables from $G$ to $A$: $$f:\left\{\!\! \begin{array}{rcl} G\times G\times \cdots \times G & \longrightarrow & A\\ (s_1,s_2,\dots,s_n) & \longmapsto & f(s_1,s_2,\dots,s_n). \end{array} \right.$$ \end{defi} The set of cochains, equipped with the addition induced by that of $A$, forms an abelian group denoted $C^n(G,A)$. \bigskip{\it Examples.\\ } $n=0$: by convention, a function of $0$ variables with values in $A$ is an element of $A$. Hence $C^0(G,A)=A$. We write $f_a$ for the element of $C^0(G,A)$ corresponding to the element $a\in A$. $n=1$: $C^1(G,A)=\{f:G\rightarrow A\}$. $n=2$: $C^2(G,A)=\{f:G\times G\rightarrow A\}$. \begin{defi} For $f\in C^n(G,A)$, the \emph{coboundary} of $f$, denoted $df$, is the element of $C^{n+1}(G,A)$ defined by the formula: $$\begin{array}{rcl} df(s_1,\dots,s_n,s_{n+1}) & = & s_1f(s_2,\dots,s_{n+1})+\displaystyle\sum_{i=1}^{n}{(-1)^if(s_1,\dots,s_{i-1},s_i s_{i+1},s_{i+2},\dots,s_{n+1})}\\ {} & {} & \hfill{{} +(-1)^{n+1}f(s_1,\dots,s_n).}\end{array}$$ \end{defi} Let us see what $d$ is for small values of $n$.\\ $\bullet$ $d:\ C^0(G,A)\longrightarrow C^1(G,A)$. Let $a\in A$; we compute $df_a$. We have $df_a(s)=sa-a$.
Note that $df_a=0$ if and only if $a$ is fixed by $G$.\\ $\bullet$ $d:\ C^1(G,A)\longrightarrow C^2(G,A)$. Let $f$ be a $1$-cochain; we have $$df(s,t)=sf(t)-f(st)+f(s).$$ $\bullet$ $d:\ C^2(G,A)\longrightarrow C^3(G,A)$. Let $f$ be a $2$-cochain; we have $$df(u,v,w)=uf(v,w)-f(uv,w)+f(u,vw)-f(u,v).$$ \begin{thm}[Fundamental formula]\label{fondam} We have $d\circ d=0$. In other words, the composite $$\xymatrix{\relax C^n(G,A) \ar[r]^-{d} & C^{n+1}(G,A) \ar[r]^d & C^{n+2}(G,A)}$$ is zero. \end{thm} We shall carry out the verification only in the cases $n=0$ and $n=1$, leaving the general case as an exercise. Let $a\in A$. We have $df_a(s)=sa-a$, hence \begin{eqnarray*} d\circ d\,(f_a)(s,t) & = & sdf_a(t)-df_a(st)+df_a(s)\\ {} & = & s(ta-a)-(sta-a)+(sa-a)\\ {} & = & 0. \end{eqnarray*} Now consider $f\in C^1(G,A)$: $$\begin{array}{rcl} d\circ d\,(f)(u,v,w) & = & u\,df(v,w)-df(uv,w)+df(u,vw)-df(u,v)\\ {} & = & u\big(vf(w)-f(vw)+f(v)\big)-\big(uvf(w)-f(uvw)+f(uv)\big)\\ {} & {} & \hspace{1.4cm} {} +\big(uf(vw)-f(uvw)+f(u)\big)-\big(uf(v)-f(uv)+f(u)\big)\\ {} & = & 0. \end{array}\eqno\square$$ \begin{defi} An $n$-cochain $f$ is called an \emph{$n$-cocycle} if $df=0$. It is called an \emph{$n$-coboundary} if there exists an $(n-1)$-cochain $g$ such that $f=dg$. \end{defi} \label{cobord} By Th. \ref{fondam}, every $n$-coboundary is an $n$-cocycle. We write $Z^n(G,A)$ for the group of $n$-cocycles and $B^n(G,A)$ for the group of $n$-coboundaries.\\ We write $H^n(G,A)$ for the quotient group $Z^n(G,A)/B^n(G,A)$ and call it the \emph{$n$-th cohomology group of $G$ with values in $A$}. \bigskip{\it Examples.\\ } $(1)$ For $n=0$, we agree that $B^0=\{0\}$. Writing $A^G$ for the group of elements of $A$ fixed by $G$, we have seen that $$df_a=0\Longleftrightarrow a\in A^G,$$ and hence $H^0(G,A)=A^G$. $(2)$ For $n=1$, an element of $Z^1(G,A)$ is a map $f$ from $G$ to $A$ such that $df(s,t)=0$ for all $s,t\in G$, which gives $$f(st)=sf(t)+f(s).$$ One says that $f$ is a \emph{crossed homomorphism}. If the action of $G$ on $A$ is trivial, then $sf(t)=f(t)$ and $f$ is a homomorphism from $G$ to $A$; since $B^1(G,A)=\{0\}$, we then have $$H^1(G,A)=\mathrm{Hom}(G,A),$$ where $\mathrm{Hom}(G,A)$ is the group of group homomorphisms from $G$ to $A$. $(3)$ For $n=2$, a $2$-cochain $f$ is a $2$-cocycle if $$uf(v,w)-f(uv,w)+f(u,vw)-f(u,v)=0$$ for all $u,v,w\in G$. Such a cochain is also called a \emph{factor system}.
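{\it Example. } To see these groups in a small case with a non-trivial action, one can take $G=\{1,s\}$ cyclic of order $2$ acting on $A=\mathbf{Z}$ by $sa=-a$. A $1$-cocycle satisfies $f(1)=f(1)+f(1)$, hence $f(1)=0$, and the remaining condition $f(s^2)=sf(s)+f(s)=-f(s)+f(s)=0$ holds automatically, so $Z^1(G,A)\simeq\mathbf{Z}$ via $f\mapsto f(s)$. The coboundaries are the maps $s\mapsto sa-a=-2a$, so $B^1(G,A)\simeq 2\mathbf{Z}$ and $$H^1(G,A)\simeq \mathbf{Z}/2\mathbf{Z}.$$ (With the trivial action one would instead recover $H^1(G,A)=\mathrm{Hom}(G,A)$, as in Example $(2)$.)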
\section{Extensions}\label{extensions} \begin{defi} Let $A$ and $G$ be two groups. We say that $E$ is an \emph{extension} of $G$ by $A$ if there is an exact sequence $$\xymatrix{\{1\} \ar[r] & A \ar[r] & E \ar[r] & G \ar[r] & \{1\}}$$ with $A$ normal in $E$. \end{defi} {\it Remark. } In this \S, $A$ is assumed commutative. \bigskip Every extension $E$ of $G$ by $A$ defines an action of $G$ on $A$ in the following way: first note that $E$ acts on $A$ by inner automorphisms (since $A$ is normal in $E$); we have a homomorphism $$\left\{\!\!\begin{array}{rcl} E & \longrightarrow & \mathrm{Aut}(A)\\ e & \longmapsto & \mathrm{Int}(e)_{|A},\\ \end{array}\right.$$ which factors through the quotient $G$: indeed, if $s\in G$, choose $e\in E$ lifting $s$; then $\mathrm{Int}(e)$ does not depend on the choice of the lift of $s$; changing $e$ into $e'$ above $s$ amounts to multiplying it by an element $a$ of $A$, and $a$ acts trivially on $A$ by inner automorphisms since $A$ is abelian. Thus $G$ acts on $A$: $$\xymatrix{ E \ar[rr] \ar[rd] && \mathrm{Aut}(A) \\ & G \ar[ur]}$$ We shall therefore regard $A$ as a $G$-module; the group laws being written multiplicatively, the action of $G$ on $A$ will be written ${}^s\!a$ for $a\in A$ and $s\in G$. To every extension of $G$ by $A$ we shall associate a cohomology class in $H^2(G,A)$ which determines this extension up to isomorphism. And we shall see that every element of $H^2(G,A)$ can be obtained in this way, cf. Th. \ref{th4.3}. Let $E$ be an extension of $G$ by $A$; we have a surjection $\pi$ from $E$ onto $G$. \begin{defi} A \emph{section} $h$ of $\pi$ is a map from $G$ to $E$ such that $\pi\circ h=\mathrm{Id}_G$. $$\xymatrix{E \ar[d]_\pi \\ G \ar@/_0.5cm/[u]_h}$$ \end{defi}\label{section} Above each $s\in G$, we choose a point in the fibre $\pi^{-1}(s)$. Every element $e\in E$ can then be written uniquely as $ah(x)$, with $a\in A$ and $x\in G$ (in fact $x=\pi(e)$). Let us try to put the element $ah(x)bh(y)$ into the form $ch(z)$. We have $$ah(x)bh(y)=ah(x)bh(x)^{-1}h(x)h(y).$$ The action of $x\in G$ on $A$ is given by the inner automorphism of any element of $E$ above $x$, for instance $h(x)$. Hence $h(x)bh(x)^{-1}={}^xb$ (which lies in $A$, since $A$ is normal). Set $$h(x)h(y)=f_h(x,y)h(xy).$$ We have $f_h(x,y)\in A$, since $h(x)h(y)$ and $h(xy)$ have the same image in $G$ under $\pi$. We have finally obtained: $$ah(x)bh(y)=a\,{}^xbf_h(x,y)h(xy)$$ with $a\,{}^xbf_h(x,y)\in A$. \bigskip Let us now see how $f_h$ varies with $h$. Let $h$ and $h'$ be two sections of $\pi$ ($h,h':G\rightarrow E$). Then $h(s)$ and $h'(s)$ differ by an element of $A$. Set $h'(s)=l(s)h(s)$; the map $l$ is a $1$-cochain on $G$ with values in $A$. Let us compute $f_{h'}$ in terms of $l$ and $f_h$. We have $$h'(s)h'(t)=f_{h'}(s,t)h'(st)=f_{h'}(s,t)l(st)h(st),$$ but \begin{eqnarray*} h'(s)h'(t) & = & l(s)h(s)l(t)h(t)\\ {} & = & l(s)h(s)l(t)h(s)^{-1}h(s)h(t)\\ {} & = & l(s)\,{}^sl(t)f_h(s,t)h(st), \end{eqnarray*} from which we get \begin{eqnarray*} f_{h'}(s,t) & = & l(s)\,{}^sl(t)f_h(s,t)l(st)^{-1}\\ {} & = & f_h(s,t)\,{}^sl(t)l(s)l(st)^{-1}, \end{eqnarray*} since $A$ is commutative. Now, in multiplicative notation, we have $$dl(s,t)={}^sl(t)l(s)l(st)^{-1}.$$ Hence $$f_{h'}=f_h\,dl.$$ Thus, when $h$ varies, $f_h$ only changes by multiplication by a coboundary. We may therefore associate to $E$ the cohomology class of $f_h$ in $H^2(G,A)$; call this class $e$. When does one find $e=0$? This means (in multiplicative notation) that there exists a section $h$ such that $f_h(s,t)=1$ for all $s,t\in G$, i.e. that $h$ is a homomorphism.
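{\it Example. } Here is the smallest case where $e\neq 0$, written in additive notation: $$\xymatrix{\{0\} \ar[r] & A \ar[r] & \mathbf{Z}/4\mathbf{Z} \ar[r] & \mathbf{Z}/2\mathbf{Z} \ar[r] & \{0\}}$$ with $A=\{0,2\}\simeq\mathbf{Z}/2\mathbf{Z}$; the action is trivial since $E=\mathbf{Z}/4\mathbf{Z}$ is abelian. For the section $h$ with $h(\bar{0})=0$ and $h(\bar{1})=1$, one finds $$f_h(\bar{1},\bar{1})=h(\bar{1})+h(\bar{1})-h(\bar{0})=2\neq 0.$$ In fact $e\neq 0$: a section which is a homomorphism would yield a subgroup of order $2$ of $\mathbf{Z}/4\mathbf{Z}$ meeting $A$ trivially, whereas the unique element of order $2$, namely $2$, lies in $A$.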
\begin{defi} An extension $E$ of $G$ by $A$ is called \emph{trivial} if there exists a homomorphism $h:G\rightarrow E$ such that $\pi\circ h=\mathrm{Id}_G$ (or, equivalently, if $e=0$). \end{defi} Let us examine such an extension: every element of $E$ can be written uniquely as $ah(s)$, and $ah(s)bh(t)=a\,{}^sbh(st)$. Thus $E$ is known as soon as one knows $A$, $G$ and the action of $G$ on $A$. The group $E$ is isomorphic to the group of pairs $(a,s)$ with $a\in A$ and $s\in G$, equipped with the law $$(a,s)(b,t)=(a\,{}^sb,st).$$ Such an $E$ is called a \emph{semidirect product}\label{semidirect1} of $G$ by $A$. We have just seen: the zero class of $H^2(G,A)$ corresponds to the trivial extension of $G$ by $A$, which is the semidirect product of $G$ by $A$ defined by the action of $G$ on $A$. \begin{thm} The map $f_h$ is a $2$-cocycle on $G$ with values in $A$. \end{thm} We must check that $f_h$ belongs to the kernel of $d$, the coboundary homomorphism. The notation here is multiplicative; we must therefore see that $$df_h(u,v,w)=1$$ for all $u,v,w\in G$; now $df_h$ is written $$df_h(u,v,w)={}^u\!f_h(v,w)f_h(u,vw)f_h(uv,w)^{-1}f_h(u,v)^{-1}.$$ We shall write $h(u)h(v)h(w)$ in the form $ah(uvw)$ with $a\in A$ in two different ways, using the associativity of the group law in $E$.\\ We have $$\big(h(u)h(v)\big)h(w)=f_h(u,v)f_h(uv,w)h(uvw)$$ and $$h(u)\big(h(v)h(w)\big)={}^u\!f_h(v,w)f_h(u,vw)h(uvw)$$ whence $${}^u\!f_h(v,w)f_h(u,vw)=f_h(u,v)f_h(uv,w)$$ which is precisely $$df_h(u,v,w)=1.\eqno\square$$ Finally, we shall see: \begin{thm}\label{th4.3} Every cohomology class in $H^2(G,A)$ corresponds to an extension of $G$ by $A$. \end{thm} We reconstruct the preceding situation: let $f\in Z^2(G,A)$. Define $E$ as a set by $E=A\times G$. We define the law on $E$ by $$(a,s)(b,t)=\big(a\,{}^sbf(s,t),st\big).$$ First of all, $E$ is a group:\\ $\bullet$ The law is associative: the computation made above to show that $f_h$ is a $2$-cocycle, starting from the associativity of the law of $E$, can be run backwards.\\ $\bullet$ If $\varepsilon=f(1,1)^{-1}$, then the element $(\varepsilon,1)$ is a neutral element. Indeed, $$(a,s)(\varepsilon,1)=\big(a{}^s\!\varepsilon f(s,1),s\big);$$ now $f$ is a $2$-cocycle, so $df=1$, and $$df(s,1,1)={}^s\!f(1,1)\,f(s,1)^{-1}\,f(s,1)\,f(s,1)^{-1}={}^s\!f(1,1)\,f(s,1)^{-1},$$ hence $$1=df(s,1,1)={}^s\!\varepsilon^{-1}f(s,1)^{-1},$$ i.e. $f(s,1)={}^s\!\varepsilon^{-1}$, so that $(a,s)(\varepsilon,1)=(a\,{}^s\!\varepsilon\,{}^s\!\varepsilon^{-1},s)=(a,s)$; the computation on the left is similar, and $(\varepsilon,1)$ is indeed a neutral element.\\ $\bullet$ The computation of inverses is done in the same way.\\ We have an obvious surjective homomorphism from $E$ to $G$: $$\left\{\!\! \begin{array}{rcl} E & \longrightarrow & G\\ (a,s) & \longmapsto & s \end{array}\right.$$ and the map $$\left\{\!\! \begin{array}{rcl} A & \longrightarrow & E\\ a & \longmapsto & (a\varepsilon ,1) \end{array}\right.$$ is a homomorphism (since $A$ is abelian), obviously injective. Finally we do have: $$\xymatrix{\{1\} \ar[r] & A \ar[r] & E \ar[r] & G \ar[r] & \{1\}.}\eqno\square$$ \bigskip{\it Interpretation of $H^1(G,A)$ in terms of extensions.} Let $E$ be a trivial extension of $G$ by $A$. Choose a section $h:G\rightarrow E$ which is a homomorphism (this identifies $E$ with the semidirect product $G.A$). Let $h'$ be another section; one can write $h'$ uniquely as $h'=l.h$, where $l$ is a $1$-cochain $G\rightarrow A$. We have $f_{h'}=f_h.dl=dl$ since $f_h=1$.
For $h'$ to be a homomorphism, it is necessary and sufficient that $f_{h'}=1$, i.e. that $dl=1$, in other words that $l$ be a $1$-cocycle. On the other hand, if one conjugates $h$ by an element $a$ of $A$, one obtains a section which is a homomorphism. Let $h'$ be this section. What does this correspond to in terms of $l$? We have $$h'(x)=ah(x)a^{-1}=l(x)h(x)$$ with $l(x)=a{}^x\!a^{-1}$. Hence $l=df_a$ (where $f_a$ is the element of $C^0(G,A)$ corresponding to $a$). So $l$ must be a coboundary. Whence: \begin{thm} The conjugacy classes (under the elements of $A$, or of $G$) of the sections of $E$ which are homomorphisms correspond bijectively to the elements of the cohomology group $H^1(G,A)$. \end{thm} [Note that this correspondence \emph{depends} on the choice of $h$. A more intrinsic way of putting it is to say that the set of classes of section-homomorphisms is a principal homogeneous space (a \og torsor\fg) under the action of $H^1(G,A)$.] \begin{coro} For the sections of $\pi$ which are homomorphisms to be conjugate, it is necessary and sufficient that $H^1(G,A)=\{0\}$. \end{coro} \section{Finite groups: a vanishing criterion}\label{4.3} Let $G$ be a group with $m$ elements and let $A$ be a $G$-module. \begin{thm} Let $n\geqslant 1$ and $x\in H^n(G,A)$. Then $mx=0$. \end{thm} Let $f\in Z^n(G,A)$ be an $n$-cocycle representing $x$. We must construct $F\in C^{n-1}(G,A)$ such that $dF=mf$.\\ Take $F_1(s_1,\dots,s_{n-1})=\sum_{s\in G}{f(s_1,\dots,s_{n-1},s)}$. Since $f\in Z^n(G,A)$, we have $df=0$. Now $$\begin{array}{rcl} df(s_1,\dots,s_{n+1}) & = & \displaystyle s_1f(s_2,\dots,s_{n+1})-f(s_1s_2,s_3,\dots,s_{n+1})+\cdots\\ {} & {} & {}\\ {} & {} & \hfill{{} +(-1)^nf(s_1,\dots,s_ns_{n+1})+(-1)^{n+1}f(s_1,\dots,s_n)} \\ {} & = & 0. \end{array}$$ Hence $$\begin{array}{rcl} \displaystyle \sum_{s_{n+1}\in G}{df(s_1,\dots,s_{n+1})} & = & s_1F_1(s_2,\dots,s_n)-F_1(s_1s_2,\dots,s_n)+\cdots\\ {} & {} & \hfill{{} +(-1)^nF_1(s_1,\dots,s_{n-1})+(-1)^{n+1}mf(s_1,\dots,s_n).} \end{array}$$ Here we used the fact that if $s_{n+1}$ runs over $G$, so does $s_ns_{n+1}$ ($s_n$ being fixed). We have thus obtained $$(-1)^nmf(s_1,\dots,s_n)=dF_1(s_1,\dots,s_n).$$ We therefore set $F=(-1)^n F_1$, which satisfies $dF=mf$, whence the result.~\hfill{$\square$} \begin{coro} If the map $a\mapsto ma$ is an automorphism of $A$ ($m$ being the order of $G$), then $H^n(G,A)=\{0\}$ for all $n\geqslant 1$. \end{coro} Indeed, $x\mapsto mx$ is then an automorphism of $C^n(G,A)$ which commutes with $d$. Hence, passing to the quotient, it is an automorphism of $H^n(G,A)$. But in this case it is the zero map, whence $H^n(G,A)=\{0\}$.~\hfill{$\square$} \begin{coro} If $G$ and $A$ are finite of coprime orders, then $H^n(G,A)=\{0\}$ for all $n\geqslant 1$. \end{coro} Indeed, $a\mapsto ma$ is then an automorphism of $A$.~\hfill{$\square$} \begin{coro} If $G$ and $A$ are finite of coprime orders, then: \begin{enumerate} \item[(1)] Every extension $E$ of $G$ by $A$ is trivial. \item[(2)] Two homomorphism sections of $G\rightarrow E$ are conjugate by an element of $A$. \end{enumerate} \end{coro} We have $H^n(G,A)=\{0\}$ for $n\geqslant 1$. The case $n=2$ gives $(1)$ and the case $n=1$ gives $(2)$, by the analysis carried out in \S\ \ref{extensions}.~\hfill{$\square$}
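{\it Example. } With $G=\mathbf{Z}/2\mathbf{Z}$ and $A=\mathbf{Z}/3\mathbf{Z}$, the map $a\mapsto 2a$ is an automorphism of $A$, so $H^n(G,A)=\{0\}$ for $n\geqslant 1$ and every extension of $\mathbf{Z}/2\mathbf{Z}$ by $\mathbf{Z}/3\mathbf{Z}$ is trivial: the trivial action gives $\mathbf{Z}/6\mathbf{Z}$, the action $a\mapsto -a$ gives $\mathcal{S}_3$, and in both groups a subgroup of order $2$ provides a homomorphism section. The coprimality hypothesis cannot be dropped: $\mathbf{Z}/4\mathbf{Z}$, viewed as above as an extension of $\mathbf{Z}/2\mathbf{Z}$ by $\mathbf{Z}/2\mathbf{Z}$, is not trivial.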
\section{Extensions of groups of coprime orders}\label{4.4} We shall extend certain results on the extensions of a group $G$ by a commutative group $A$ to the case where $A$ is solvable, or even arbitrary. \begin{thm}[Zassenhaus]\label{Zassen} Let $A$ and $G$ be two finite groups of coprime orders and consider an extension $\{1\}\rightarrow A\rightarrow E\rightarrow G\rightarrow \{1\}$. Then: \begin{enumerate} \item[(1)] There exists a subgroup of $E$ (a \emph{complement of $A$}) which projects isomorphically onto $G$ ($E$ is a semidirect product). \item[(2)] If $A$ or $G$ is solvable, two such subgroups are conjugate by an element of $A$ (or of $E$, which amounts to the same thing). \end{enumerate} \end{thm} We argue by induction on $|E|$; we may assume $A$ and $G$ distinct from $\{1\}$. {\it First case: $A$ is solvable.} We first prove the \begin{lemme}\label{4.4.2} Let $X$ be a solvable group not reduced to $\{1\}$. There exist a prime number $p$ and a $p$-subgroup $Y$ of $X$ distinct from $\{1\}$ such that $Y$ is elementary abelian and characteristic. \end{lemme} Recall that an abelian $p$-group is called \emph{elementary} if its elements distinct from $1$ have order $p$, and that a subgroup of a group $X$ is characteristic if it is stable under every automorphism of $X$. \bigskip {\it Proof of the lemma. } Let $D^i(X)$ be the successive derived subgroups of $X$. Since $X$ is solvable, there exists $i$ such that $D^i(X)$ is distinct from $\{1\}$ and $D^{i+1}(X)$ is reduced to $\{1\}$. Then $D^i(X)$ is an abelian subgroup of $X$ different from $\{1\}$. Moreover, it is characteristic. Now take $p$ dividing the order of $D^i(X)$ and let $Y$ be the group of elements of $D^i(X)$ of order dividing $p$. Then $Y$ is abelian, different from $\{1\}$, characteristic (an automorphism of $X$ sends an element of order $p$ to another element of the same order), and is an elementary $p$-group.~\hfill{$\square$} \bigskip {\it Back to the proof of the theorem. } Apply the lemma with $X=A$ and $Y=A'$, and note that $A'$ is normal in $E$: an inner automorphism of $E$ restricted to $A$ is an automorphism of $A$ (since $A$ is normal in $E$) and therefore leaves $A'$, which is characteristic, invariant.\\ If $A=A'$, then $A$ is abelian and the theorem is known. Otherwise, since $A'$ is normal in $E$, we may pass to the quotient by $A'$ and obtain the exact sequence $$\xymatrix{\{1\} \ar[r] & A/A' \ar[r] & E/A' \ar[r] & G \ar[r] & \{1\}}.$$ The situation is described by the following diagram: $$\xymatrix{& E \ar[d]\\ & E/A' \ar[d]\\ G \ar@{.>}[uur] \ar@{.>}[ur] \ar[r] & E/A}$$ Since $E/A'$ has cardinality strictly smaller than that of $E$, the induction hypothesis implies that $G$ lifts to a subgroup $G'$ of $E/A'$. Let $E'$ be the inverse image of $G'$ under the projection $E\rightarrow E/A'$. Then we have the exact sequence $$\xymatrix{\{1\} \ar[r] & A' \ar[r] & E' \ar[r] & G' \ar[r] & \{1\}}.$$ Now $A'$ is abelian. By \S\ \ref{4.3}, we can therefore lift $G'$ to a subgroup of $E'$. We thus obtain a lift of $G$ in $E$.
Let us show that two such lifts $G'$ and $G''$ are conjugate by an element of $A$. We have $$E=A.G'\;\mbox{ and }\; E=A.G''.$$ The induction hypothesis, applied to $E/A'$, shows that there exists $a\in A$ such that $aG'a^{-1}$ and $G''$ have the same image in $E/A'$. Replacing $G'$ by $aG'a^{-1}$ if necessary, we may therefore assume that $A'.G'=A'.G''$. The conjugacy of $G'$ and $G''$ by an element of $A$ then follows from the abelian case (cf. \S\ \ref{4.3}), applied to $A'.G'=A'.G''$. \bigskip {\it Second case: assertion $(1)$ in the general case.} Let $p$ be a prime dividing the order of $A$ and let $S$ be a $p$-Sylow subgroup of $A$ (cf. \S\ \ref{2.2}). Let $E'$ be the normalizer of $S$ in $E$. By \S\ \ref{2.3}, we have $E=A.E'$. Let $A'=E'\cap A$; then $A'$ is normal in $E'$ and we have the exact sequence $$\xymatrix{\{1\} \ar[r] & A' \ar[r] & E' \ar[r] & G \ar[r] & \{1\}}.$$ We distinguish two cases:\\ $\bullet$ If $|E'|<|E|$, the induction hypothesis allows us to lift $G$ into $E'$, hence into $E$.\\ $\bullet$ If $|E'|=|E|$, then $S$ is normal in $E$, hence also in $A$. We pass to the quotient: $$\xymatrix{\{1\} \ar[r] & A/S \ar[r] & E/S \ar[r] & G \ar[r] & \{1\}}$$ with $E/S$ of cardinality strictly smaller than that of $E$. By the induction hypothesis, $G$ lifts to a subgroup $G_1$ of $E/S$. Let $E_1$ be the inverse image of $G_1$ under the projection $E\rightarrow E/S$. We have the exact sequence $$\xymatrix{\{1\} \ar[r] & S \ar[r] & E_1 \ar[r] & G \ar[r] &\{1\}.}$$ Now $S$ is a $p$-group, hence solvable, and we are reduced to the first case. \bigskip {\it Third case: assertion $(2)$ when $G$ is solvable.} Let $G'$ and $G''$ be two lifts of $G$ in $E$. We have $$E=A.G'\;\mbox{ and }\; E=A.G''.$$ Let $p$ be a prime number and $I$ a normal abelian $p$-subgroup of $G$ different from $\{1\}$ (cf. Lemma \ref{4.4.2}), and let $\widetilde{I}$ be its inverse image in $E$ under the projection $E\rightarrow G$. Set $I'=\widetilde{I}\cap G'$ and $I''=\widetilde{I}\cap G''$. We have $$A.I'=A.I''\; (=\widetilde{I}).$$ The groups $I'$ and $I''$ are $p$-Sylow subgroups of $\widetilde{I}$; there exists therefore $x\in \widetilde{I}$ such that $I''=xI'x^{-1}$; writing $x$ in the form $ay$ with $a\in A$ and $y\in I'$, we get $I''=aI'a^{-1}$. Replacing $I'$ by $aI'a^{-1}$ if necessary, we may therefore assume $I''=I'$.\\ Let $N$ be the normalizer of $I'=I''$ in $E$. We have $G'\subset N$ and $G''\subset N$. If $N$ is distinct from $E$, the induction hypothesis applied to $N$ shows that $G'$ and $G''$ are conjugate. If $N=E$, in other words if $I'$ is normal in $E$, the induction hypothesis applied to $E/I'$ shows that there exists $a\in A$ such that $I'.aG'a^{-1}=I'.G''$. Since $I'$ is normal and contained in both $aG'a^{-1}$ and $G''$, this implies $$aG'a^{-1}=G'',$$ whence the result.~\hfill{$\square$} \bigskip{\it Remark. } The hypothesis \og $A$ or $G$ is solvable\fg\ made in $(2)$ is automatically satisfied, by the Feit-Thompson theorem (cf. \S\ \ref{grpe resol}) asserting that every group of odd order is solvable.
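{\it Example. } For $|A|=3$ and $|G|=2$, the theorem can be checked by hand: in $E=\mathcal{S}_3$, an extension of $\mathbf{Z}/2\mathbf{Z}$ by $A=\mathcal{A}_3$, the complements of $A$ are the three subgroups of order $2$ generated by the transpositions, and they are conjugate by elements of $A$, since conjugation by the $3$-cycle $(123)\in A$ permutes them cyclically: $$(123)(12)(123)^{-1}=(23), \qquad (123)(23)(123)^{-1}=(13).$$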
\section{Lifting homomorphisms}\label{4.5} Let $\{1\}\rightarrow A\rightarrow E\stackrel{\pi}{\rightarrow} \Phi\rightarrow\{1\}$ be an exact sequence, $G$ a group and $\varphi$ a homomorphism from $G$ to $\Phi$. Can one lift $\varphi$ to a homomorphism $\psi$ from $G$ to $E$? $$\xymatrix{\{1\} \ar[r] & A \ar[r] & E \ar[r]^{\pi} & \Phi \ar[r] & \{1\}\\ {} & {} & {} & G \ar[u]_\varphi \ar@{.>}[ul]^\psi}$$ The question is equivalent to that of lifting $G$ in an extension $E_\varphi$ of $G$ by $A$ associated to $\varphi$, defined as follows: $$E_\varphi=\{(g,e)\in G\times E\ |\ \varphi(g)=\pi(e)\}$$ equipped with the group law inherited from the Cartesian product. Then $A$ embeds into $E_\varphi$ via $a\mapsto (1,a)$ and $E_\varphi$ projects onto $G$ via $(g,e)\mapsto g$. $$\xymatrix{E_\varphi \ar[r] \ar[d] & E \ar[d]^\pi\\ G \ar[r]^\varphi & \Phi}$$ We have the exact sequence $$\xymatrix{\{1\} \ar[r] & A \ar[r] & E_\varphi \ar[r] & G \ar[r] & \{1\}}.$$ (One sometimes says that $E_\varphi$ is the \emph{pull-back} of the extension $E$ by the homomorphism $\varphi$.) Let us check the equivalence of the two problems. Let $\psi$ be a lift of $\varphi$. Then the set $G_\psi=\{(g,\psi(g)),\ g\in G\}$ is a subgroup of $E_\varphi$ which is a lift of $G$.\\ Conversely, let $G'$ be a lift of $G$. Then $G'$ consists of pairs $(g,e)$ with $g\in G$ and $e\in E$, each $g\in G$ appearing in one and only one pair. The map $\psi$ defined by $\psi(g)=e$ is then a homomorphism lifting $\varphi$.\\ Moreover, two lifts $\psi'$ and $\psi''$ are conjugate by $a\in A$ if and only if $G_{\psi'}$ and $G_{\psi''}$ are conjugate by $(1,a)\in E_\varphi$. Section \ref{4.4} then gives the \begin{thm} Let $\{1\}\rightarrow A\rightarrow E \rightarrow \Phi\rightarrow\{1\}$ be an exact sequence and let $\varphi$ be a homomorphism from a group $G$ to the group $\Phi$. Assume $G$ and $A$ are finite of coprime orders. Then: \begin{enumerate} \item[(1)] There exists a homomorphism $\psi$ from $G$ to $E$ lifting $\varphi$. \item[(2)] If $G$ or $A$ is solvable, two such homomorphisms are conjugate by an element of $A$. \end{enumerate} \end{thm} {\it Application. } Suppose given a homomorphism $\varphi: G\rightarrow \mathbf{GL}_n(\mathbf{Z}/p\mathbf{Z})$ where $p$ does not divide the order of $G$. We shall see that \emph{one can lift $\varphi$ to $\varphi_\alpha: G\rightarrow \mathbf{GL}_n(\mathbf{Z}/p^\alpha\mathbf{Z})$ for every $\alpha \geqslant 1$}. Let us begin by lifting $\varphi$ to $\varphi_2$. We have the exact sequence $$\xymatrix{\{1\} \ar[r] & A \ar[r] & \mathbf{GL}_n(\mathbf{Z}/p^2\mathbf{Z}) \ar[r] & \mathbf{GL}_n(\mathbf{Z}/p\mathbf{Z}) \ar[r] & \{1\}}$$ where $A$ consists of the matrices of the form $1+pX$ with $X$ an $n\times n$ matrix modulo $p$, and where the map from $\mathbf{GL}_n(\mathbf{Z}/p^2\mathbf{Z})$ to $\mathbf{GL}_n(\mathbf{Z}/p\mathbf{Z})$ is reduction modulo $p$. The group $A$ is then isomorphic to $\mathbf{M}_n(\mathbf{Z}/p\mathbf{Z})$, which is an abelian $p$-group. We may therefore apply the preceding theorem and lift $\varphi$ to $\varphi_2$ in an essentially unique way.\\ The same argument allows one to lift $\varphi_\alpha$ to $\varphi_{\alpha+1}$.
We have the exact sequence $$\xymatrix{\{1\} \ar[r] & A \ar[r] & \mathbf{GL}_n(\mathbf{Z}/p^{\alpha +1}\mathbf{Z}) \ar[r] & \mathbf{GL}_n(\mathbf{Z}/p^{\alpha}\mathbf{Z}) \ar[r] & \{1\}\\ && G \ar[u]^{\varphi_{\alpha +1}} \ar[ur]_{\varphi_{\alpha}}}.$$ One can pass to the projective limit: since $\varprojlim {(\mathbf{Z}/p^\alpha\mathbf{Z})}=\mathbf{Z}_p$, we obtain a representation $$\varphi_\infty: \xymatrix{G \ar[r] & \mathbf{GL}_n(\mathbf{Z}_p)\ \ar@{^{(}->}[r] & \mathbf{GL}_n(\mathbf{Q}_p)}.$$ Now $\mathbf{Q}_p$ has characteristic $0$: thus, starting from a representation in characteristic $p$, one obtains one in characteristic $0$. \chapter{Solvable groups and Hall subgroups} We shall try to generalize the Sylow theorems. There, the problem was the following: given a group $G$ of order $\prod_{p}{p^{\alpha(p)}}$ (where $p$ is prime), does there exist, for every prime $p$, a subgroup of $G$ of order $p^{\alpha(p)}$?\\ One may ask, more generally, whether for every $n$ dividing the order of $G$ one can find a subgroup of $G$ of order $n$. This is true if $G$ is nilpotent, but false without a hypothesis on $G$; even \og $G$ solvable\fg\ is not enough: the group $\mathcal{A}_4$, of order $12$, is solvable and has no subgroup of order $6$. We shall therefore make more restrictive hypotheses on $n$. \section{$\Pi$-subgroups}\label{Pisg} Let $\Pi$ be a set of prime numbers and let $\Pi'$ be its complement. If $n\in \mathbf{N}$, we write $n=n_{\Pi}n_{\Pi'}$, where $n_{\Pi}$ (resp. $n_{\Pi'}$) is divisible only by elements of $\Pi$ (resp. $\Pi'$). A group $G$ is called a \emph{$\Pi$-group} if all the prime factors of the order of $G$ belong to $\Pi$.\\ The problem consists in finding the $\Pi$-subgroups of a given group, and its maximal $\Pi$-subgroups as defined below. \begin{defi} Let $G$ be a group and let $\Pi$ be a set of prime numbers. A \emph{$\Pi$-Sylow subgroup}, or \emph{Hall $\Pi$-subgroup}, of $G$ is a subgroup $H$ such that $|H|=|G|_\Pi$. \end{defi}\label{Hall} {\it Remark. } If $\Pi=\{p\}$, a $\Pi$-Sylow subgroup of $G$ is a $p$-Sylow subgroup of $G$. \begin{thm}[P. Hall]\label{5.1.1} Let $G$ be a solvable group and $\Pi$ a set of prime numbers. Then: \begin{enumerate} \item[(1)] $G$ possesses $\Pi$-Sylow subgroups. \item[(2)] Let $S_{\Pi}$ be a $\Pi$-Sylow subgroup of $G$ and $H$ a $\Pi$-subgroup of $G$. Then $H$ is contained in a conjugate of $S_{\Pi}$. \end{enumerate} \end{thm} The proof of this theorem will be given in \S\ \ref{5.4}. \begin{coro} Two $\Pi$-Sylow subgroups of a solvable group are conjugate. \end{coro} The hypothesis \og solvable\fg\ is essential: \begin{thm}\label{5.1.2} If, for every set $\Pi$ of prime numbers, $G$ possesses a $\Pi$-Sylow subgroup, then $G$ is solvable. \end{thm} The proof will be given in \S\ \ref{5.6}. \begin{thm}[Burnside]\label{thmburn5.4} Let $p$ and $q$ be two prime numbers. Every group of order $p^aq^b$ ($a,b\in \mathbf{N}$) is solvable. \end{thm} \label{Burnside2} Indeed, the question of the existence of $\Pi$-Sylow subgroups really arises only for $\Pi=\{p\}$, $\{q\}$ or $\{p,q\}$. The Sylow theorems (cf. \S\ \ref{2.2}) answer it in the first two cases, and $G$ itself works in the third. Th. \ref{5.1.2} then ensures that $G$ is solvable.~\hfill{$\square$}
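{\it Example. } To see that the solvability hypothesis in Th. \ref{5.1.1} cannot be dropped, one may take $G=\mathcal{A}_5$, of order $60=2^2\cdot 3\cdot 5$. A $\{3,5\}$-Sylow subgroup would have order $15$, hence would be cyclic (by the Sylow theorems, a group of order $15$ has normal $3$- and $5$-Sylow subgroups, hence is $\mathbf{Z}/15\mathbf{Z}$) and would contain an element of order $15$; but the elements of $\mathcal{A}_5$ have orders $1$, $2$, $3$ and $5$. Thus $\mathcal{A}_5$ has no $\{3,5\}$-Sylow subgroup, in agreement with Th. \ref{5.1.2}, since $\mathcal{A}_5$ is not solvable.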
In fact, Burnside's theorem will be proved in the appendix (cf. Th. \ref{thmburn}) by character theory, and it will be used in the proof of Th. \ref{5.1.2}. \section{Preliminaries: permutable subgroups}\label{5.2} We shall prove a few lemmas on products of subgroups.\\ Let $A$ and $B$ be two subgroups of a group $G$. Write $A.B$ for the set of products $ab$, where $a\in A$ and $b\in B$. \begin{lemme}\label{5.2.1} The following are equivalent: \begin{enumerate} \item[(1)] $A.B=B.A$. \item[(2)] $A.B$ is a subgroup of $G$. \end{enumerate} \end{lemme} $(1)\Rightarrow (2)$: if $A.B=B.A$, then $A.B.A.B\subset A.A.B.B\subset A.B$ and $(A.B)^{-1}\subset B.A=A.B$, so $A.B$ is a subgroup of $G$. $(2)\Rightarrow (1)$: if $A.B$ is a subgroup of $G$, then $A.B=(A.B)^{-1}=B.A$.~\hfill{$\square$} \bigskip We say that two subgroups $A$ and $B$ are \emph{permutable} if $A.B=B.A$. \begin{lemme}\label{5.2.2} Let $A_1,\dots,A_n$ be pairwise permutable subgroups of $G$. Then $A_1\dots A_n$ is a subgroup of $G$. \end{lemme} The proof is by induction on $n$. Lemma \ref{5.2.1} gives the case $n=2$. By the induction hypothesis, $A_1\dots A_{n-1}$ is a group. It is permutable with $A_n$, since $A_1\dots A_{n-1}.A_n=A_1\dots A_n.A_{n-1}$, whence after $(n-1)$ such exchanges $A_1\dots A_{n-1}.A_n=A_n.A_1\dots A_{n-1}$. By Lemma \ref{5.2.1}, $A_1\dots A_n$ is a subgroup of $G$.~\hfill{$\square$} \pagebreak[2] \begin{lemme}\label{5.2.3} The following are equivalent: \begin{enumerate} \item[(1)] $A.B=G$. \item[(1')] $B.A=G$. \item[(2)] $G$ acts transitively on $G/A\times G/B$. \end{enumerate} Moreover, if $G$ is finite, these properties are equivalent to each of the following: \begin{enumerate} \item[(3)] $(G:A\cap B)=(G:A).(G:B)$. \item[(3')] $(G:A\cap B)\geqslant(G:A).(G:B)$. \end{enumerate} \end{lemme} Indeed:\\ $(1) \Leftrightarrow(1')$: if $A.B=G$, then $A.B$ is a subgroup and by Lemma \ref{5.2.1} we have $A.B=B.A$, hence $B.A=G$. $(1)\Rightarrow (2)$: We must show that for all $g_1,g_2\in G$ there exists $g\in G$ with $g\in g_1A$ and $g\in g_2B$. By hypothesis there exist $a\in A$ and $b\in B$ with $g_1^{-1}g_2=ab$, whence $g_1a=g_2b^{-1}$, and the element $g=g_1a=g_2b^{-1}$ works. $(2)\Rightarrow (1)$: The group $G$ acts transitively on $G/A\times G/B$. For each $g_1\in G$, take an element $g\in G$ such that $g\in 1.A$ and $g\in g_1.B$. This implies $g_1\in A.B$, i.e. $A.B=G$. Now let $G$ be finite. Let us show $(2)\Leftrightarrow (3)$. Let $\dot{1}$ be the image of the identity element of $G$ in $G/A$ (resp. $G/B$). The stabilizer of $(\dot{1},\dot{1})$ for the action of $G$ on $G/A\times G/B$ is $A\cap B$. The number $n$ of elements of the orbit of $(\dot{1},\dot{1})$ is therefore the index $(G:A\cap B)$ of $A\cap B$ in $G$. Now $$\begin{array}{rcl} G\ \mbox{acts transitively on}\ G/A\times G/B & \Longleftrightarrow & n=|G/A|\times |G/B| \\ {} & \Longleftrightarrow & n=(G:A)(G:B) \\ {} & \Longleftrightarrow & (G:A\cap B)=(G:A)(G:B), \\ \end{array}$$ which is precisely the equivalence of $(2)$ and $(3)$.
$(3')\Leftrightarrow (3)$: indeed, $(G:A\cap B)$ is the cardinality of the orbit of $(\dot{1},\dot{1})$, which is bounded above by that of $G/A\times G/B$, namely $(G:A)(G:B)$; so the inequality in $(3')$ forces equality.~\hfill{$\square$} \begin{lemme}\label{5.2.4} The properties of Lemma \ref{5.2.3} hold if the indices of $A$ and $B$ in $G$ are coprime. \end{lemme} Indeed, $(G:A\cap B)$ is divisible by $(G:A)$ and by $(G:B)$, hence by their product, which proves $(3')$ of the preceding lemma.~\hfill{$\square$} \section{Permutable systems of Sylow subgroups}\label{5.3} Let $G$ be a group. For every prime number $p$, choose a $p$-Sylow subgroup $H_p$ of $G$. We say that the system $\{H_p\}$ is \emph{permutable} if the $H_p$ are pairwise permutable in the sense of \S\ \ref{5.2}. In that case, if $\Pi$ is a set of prime numbers, the group $H_{\Pi}=\prod_{p\in\Pi}{H_p}$ is a $\Pi$-subgroup of $G$. \begin{thm}\label{5.3.1} If $G$ is solvable, $G$ possesses a permutable system of Sylow subgroups. \end{thm} The proof is by induction on the order of $G$. Assume $G\neq\{1\}$; by Lemma \ref{4.4.2}, there exist a prime number $p_0$ and a normal $p_0$-subgroup $A$ of $G$ distinct from $\{1\}$. By the induction hypothesis, the group $G/A$ possesses a permutable system $\{H'_p\}$ of $p$-Sylow subgroups. Let $H'=\prod_{p\neq p_0}{H'_p}$; it is a subgroup of $G/A$ of order $\prod_{p\neq p_0}{|H'_p|}$. Let $G'$ be its inverse image in $G$; we have an exact sequence: $$\xymatrix{\{1\} \ar[r] & A \ar[r] & G' \ar[r] & H' \ar[r] & \{1\}}.$$ Since $A$ and $H'$ have coprime orders, there exists a subgroup $H$ of $G$ lifting $H'$ (cf. \S\ \ref{4.4}). For $p\neq p_0$, let $H_p$ be the subgroup of $H$ lifting $H'_p$; the $H_p$ are pairwise permutable $p$-Sylow subgroups of $G$. For $p=p_0$, define $H_{p_0}$ as the inverse image of $H'_{p_0}$ in $G$. It is a $p_0$-Sylow subgroup of $G$ which permutes with the $H_p$ ($p\neq p_0$). The system $\{H_p\}$ therefore answers the question.~\hfill{$\square$}
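{\it Example. } In $G=\mathcal{S}_3$, take $H_2=\langle(12)\rangle$ and $H_3=\mathcal{A}_3$. Since $\mathcal{A}_3$ is normal in $G$, we have $H_2.H_3=H_3.H_2=G$, so $\{H_2,H_3\}$ is a permutable system of Sylow subgroups; the associated groups $H_\Pi$ are $\{1\}$, $H_2$, $H_3$ and $G$, which are indeed the $\Pi$-Sylow subgroups of $\mathcal{S}_3$.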
\section{Proof of Th. \ref{5.1.1}}\label{5.4} Assertion $(1)$, on the existence of $\Pi$-Sylow subgroups, follows from Th. \ref{5.3.1} and Lemma \ref{5.2.2}. Let us prove assertion $(2)$ by induction on the order of $G$. As in \S\ \ref{5.3}, take a normal $p_0$-subgroup $A$ of $G$ distinct from $\{1\}$. Let $H'$ and $S'_{\Pi}$ be the respective images of $H$ and $S_{\Pi}$ in $G'=G/A$. By the induction hypothesis, $H'$ is contained in a conjugate of $S'_{\Pi}$. Replacing $H$ by one of its conjugates if necessary, we may therefore assume $H'\subset S'_{\Pi}$. Two cases must now be examined:\\ $\bullet$ $p_0\in\Pi$. Then $A\subset S_{\Pi}$: indeed $S_{\Pi}$ contains a $p_0$-Sylow subgroup $S_0$ of $G$, and since $A$ is normal, $A\subset S_0$ (cf. the Sylow theorems). The inclusion $H'\subset S'_{\Pi}$ then gives $H\subset S_{\Pi}$.\\ $\bullet$ $p_0\notin\Pi$. Then the orders of $A$ and $S_{\Pi}$ are coprime and we have $A\cap H=\{1\}$ and $A\cap S_{\Pi}=\{1\}$. The projections $H\rightarrow H'$ and $S_{\Pi}\rightarrow S'_{\Pi}$ are isomorphisms. Let $\widetilde{H}$ be the subgroup of $S_{\Pi}$ projecting onto $H'$; in $A.H$ the groups $H$ and $\widetilde{H}$ are both lifts of $H'$; they are therefore conjugate (see \S\ \ref{4.4}, Th. \ref{Zassen}).~\hfill{$\square$} \section{A solvability criterion}\label{5.5} \begin{thm}[Wielandt]\label{5.5.1} Let $G$ be a finite group and let $H_1$, $H_2$, $H_3$ be three subgroups of $G$. If the $H_i$ are solvable and their indices are pairwise coprime, then $G$ is solvable. \end{thm} The proof is by induction on the order of $G$. First note that $G=H_1.H_2$. Indeed, the indices $(G:H_1)$ and $(G:H_2)$ are coprime, so, as each of them divides $(G:H_1\cap H_2)$, we have $(G:H_1\cap H_2)\geqslant (G: H_1)(G:H_2)$, and by Lemma \ref{5.2.3}, $G=H_1.H_2$.\\ We may assume $H_1\not=\{1\}$. By Lemma \ref{4.4.2}, there exist a prime number $p$ and a normal $p$-subgroup $A$ of $H_1$ different from $\{1\}$. Renumbering $H_2$ and $H_3$ if necessary (the indices being pairwise coprime, $p$ divides at most one of them), we may assume that $p$ does not divide $(G:H_2)$. Then $H_2$ contains a $p$-Sylow subgroup of $G$, hence a conjugate of $A$. Since $G=H_1.H_2$, every conjugate of $A$ is of the form $h_2^{-1}h_1^{-1}Ah_1h_2$ with $h_i\in H_i$ ($i=1,2$), and since $A$ is normal in $H_1$ and one of its conjugates is contained in $H_2$, all of its conjugates lie in $H_2$.\\ Let $\widetilde{A}$ be the subgroup of $G$ generated by the conjugates of $A$. Then $\widetilde{A}$ is normal in $G$ and contained in $H_2$, so $\widetilde{A}$ is solvable. Let $H'_i$ be the image of $H_i$ in $G'=G/\widetilde{A}$. The indices $(G':H'_i)$ are pairwise coprime (since $(G':H'_i)$ divides $(G:H_i)$) and the $H'_i$ are solvable. The induction hypothesis then shows that $G'$ is solvable; hence $G$ is solvable.~\hfill{$\square$} \section{Proof of Th. \ref{5.1.2}}\label{5.6} Let $p$ be a prime number and let $G$ be a group. A \emph{$p$-complement} of $G$ is any subgroup $H$ of $G$ which is a $p'$-Sylow subgroup, where $p'$ is the set of prime numbers different from $p$. We shall prove Th. \ref{5.1.2} in the following, apparently stronger, form: \begin{thm} If, for every prime number $p$, the group $G$ has a $p$-complement, then $G$ is solvable. \end{thm} We argue by induction on $|G|$ and distinguish two cases.\\ $\bullet$ The number of prime factors of $|G|$ is $\leqslant 2$; in other words, $|G|$ is of the form $p^aq^b$, where $p$ and $q$ are prime. By a theorem of Burnside (proved via character theory, cf. \S\ \ref{Burn}), $G$ is solvable.\\ $\bullet$ The number of prime factors of the order of $G$ is at least $3$. Let $p_i$ ($i=1, 2, 3$) be three such factors and $H_i$ ($i=1, 2, 3$) a $p_i$-complement of $G$. Then the indices $(G:H_i)$ for $i=1, 2, 3$ are pairwise coprime.\\ Moreover, $H_i$ has a $p$-complement for every prime number $p$. Indeed, if $p=p_i$, then $H_i$ is its own $p_i$-complement. Otherwise, let $H_p$ be a $p$-complement of $G$; since $(G:H_i)$ and $(G:H_p)$ are coprime, Lemma \ref{5.2.4} shows that $$(G:H_i\cap H_p)=(G:H_i)(G:H_p),$$ whence $(H_i:H_i\cap H_p)$ is the largest power of $p$ dividing the order of $H_i$. Since $H_i\cap H_p$ is a $p'$-group, $H_i\cap H_p$ is a $p$-complement of $H_i$. By the induction hypothesis, $H_i$ is solvable. But then $G$ satisfies the hypotheses of Th. \ref{5.5.1}, hence $G$ is solvable.~\hfill{$\square$}
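{\it Example. } In $G=\mathcal{S}_4$, of order $24=2^3\cdot 3$, a $2$-complement is a subgroup of order $3$ (a $3$-Sylow subgroup) and a $3$-complement is a subgroup of order $8$ (a $2$-Sylow subgroup); both exist by the Sylow theorems, and the theorem confirms that $\mathcal{S}_4$ is solvable. More generally, when $|G|=p^aq^b$ the $p$-complements are exactly the $q$-Sylow subgroups, which is why the first case of the proof is precisely Burnside's theorem.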
\chapter{Frobenius groups} \section{Union of the conjugates of a subgroup} \label{6.1} \begin{thm}[Jordan]\label{remplissage} Let $G$ be a finite group and let $H$ be a subgroup of $G$ distinct from $G$; then $\bigcup_{g\in G}{(gHg^{-1})}\neq G$. More precisely, we have: $$\Big|\bigcup_{g\in G}{gHg^{-1}}\Big|\leqslant \left|G\right|-\left(\frac{|G|}{|H|}-1\right).$$ \end{thm} Clearly $1$ belongs to $H\cap (gHg^{-1})$ for every $g\in G$. We argue on $G\mathbf{-}\{1\}$. We have $$\bigcup_{g\in G}{\left(gHg^{-1}\mathbf{-}\{1\}\right)}=\bigcup_{g\in G/H}{\left(gHg^{-1}\mathbf{-}\{1\}\right)}$$ (the conjugate $gHg^{-1}$ depends only on the coset of $g$ modulo $H$), whence $$\Big|\bigcup_{g\in G}{\left(gHg^{-1}\mathbf{-}\{1\}\right)}\Big|\leqslant \frac{|G|}{|H|}\big(|H|-1\big)$$ and then $$\Big|\bigcup_{g\in G}{gHg^{-1}}\Big|\leqslant |G|-\frac{|G|}{|H|}+1.\eqno\square$$ \bigskip We shall see that this property remains true without assuming $G$ finite, provided $G/H$ is. We use a lemma: \begin{lemme}\label{lem6.2} Let $G$ be a group and let $H$ be a subgroup of $G$ of finite index $n$. There exists a subgroup $N$, normal in $G$ and contained in $H$, such that the index $(G:N)$ divides $n!$. \end{lemme} Indeed, the group $G$ acts on $X=G/H$, which has $n$ elements. We thus obtain a homomorphism $\varphi$ from $G$ to the group $\mathcal{S}_X$ of permutations of $X$, which has cardinality $n!$. The group $N=\ker\varphi$ answers the question.~\hfill{$\square$} \bigskip Applying Th. \ref{remplissage} to $G/N$ and $H/N$, we see that the union of the conjugates of $H$ does not fill up $G$ modulo $N$, hence, {\it a fortiori}, that $\bigcup_{g\in G}{gHg^{-1}}\neq G$.~\hfill{$\square$} \bigskip{\it Remark. } The case $G=SO_3(\mathbf{R})$ and $H=\mathbf{S}_1$ shows that the hypothesis $(G:H)<\infty$ cannot be removed (every rotation of $\mathbf{R}^3$ fixes an axis, hence lies in a conjugate of $H$).\bigskip Let us mention two reformulations of Th. \ref{remplissage}: {\bf Theorem 6.1'}\, {\it If a subgroup $H$ of $G$ meets every conjugacy class of $G$, then $H=G$.} (This is a criterion often used in number theory, $G$ being a Galois group.) {\bf Theorem 6.1''}\, {\it If $G$ acts transitively on a set $X$ with $|X|\geqslant 2$, there exists an element of $G$ acting without fixed points.} Indeed, if $H$ is the stabilizer of a point of $X$, choose an element belonging to no conjugate of $H$.\hfill{$\square$}
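{\it Example. } For $G=\mathcal{S}_3$ and $H=\langle(12)\rangle$, the conjugates of $H$ are the three subgroups generated by the transpositions, and their union $\{1,(12),(13),(23)\}$ has exactly $$|G|-\left(\frac{|G|}{|H|}-1\right)=6-2=4$$ elements, so the bound of Th. \ref{remplissage} is attained; the two remaining elements, the $3$-cycles, act on $X=G/H$ without fixed points, as Theorem 6.1'' predicts. The couples $(G,H)$ attaining this bound are exactly the ones studied in \S\ \ref{frob} below.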
Here are two applications of the theorem above: {\it Every finite division ring is commutative (Wedderburn's theorem).}\label{Wedderburn} Indeed, let $D$ be a finite division ring and let $F$ be its centre. One knows that $(D:F)$ is a square $n^2$ and that every $x\in D$ is contained in a commutative subfield $L$ containing $F$ with $(L:F)=n$. Since two such subfields are isomorphic, the Skolem-Noether theorem shows that they are conjugate. If $L$ is one of them and we set $G=D^*$ and $H=L^*$, we get $G=\bigcup gHg^{-1}$, whence $G=H$, $n=1$, and $D$ is commutative. {\it Roots of an equation modulo $p$.} Let $f=X^n+a_1X^{n-1}+\cdots+a_n$ be a polynomial with coefficients in $\mathbf{Z}$, irreducible over $\mathbf{Q}$. If $p$ is a prime number, write $f_p$ for the reduction of $f$ modulo $p$; it is an element of $\mathbf{F}_p[X]$. Write ${\mathcal P}_f$ for the set of primes $p$ such that $f_p$ has at least one root in $\mathbf{F}_p$. We shall see that the density of ${\mathcal P}_f$ is strictly less than $1$ as soon as $n\geqslant 2$. [One says that $\mathcal P$ has density $\rho$ if $$\frac{|\{p\leqslant x,\ p\in {\mathcal P}\}|}{|\{p\leqslant x\}|} \longrightarrow \rho\quad \mbox{as $x\rightarrow +\infty$.]}$$ Let $X=\{x_1,\dots,x_n\}$ be the set of roots of $f$ in an extension of $\mathbf{Q}$ and let $G$ be the Galois group of $f$. This group acts transitively on $X$; we have $X\simeq G/H$ where $H$ is the stabilizer of $x_1$. One proves (Chebotarev-Frobenius theorem)\label{Chebotarev-Frobenius} that the density of ${\mathcal P}_f$ exists and is equal to $$\frac{1}{|G|}\Big|\bigcup_{g\in G}{gHg^{-1}}\Big|.$$ By the theorem above, this density is $<1$. \bigskip {\it Corollary.} If $n\geqslant 2$, there exist infinitely many $p$ such that $f_p$ has no root in $\mathbf{F}_p$. \bigskip For more details, see J.-P. Serre, {\it On a theorem of Jordan}, Bull. A.M.S. $40$ $(\mit 2003)$, $429-440$, reproduced in Doc. Math. $1$, second edition, SMF, $2008$. \section{Frobenius groups: definition}\label{frob} From now on we are interested in the couples $(G,H)$ such that $$\Big|\bigcup_{g\in G}{gHg^{-1}}\Big|=\left|G\right|-\left(\frac{|G|}{|H|}-1\right).$$ This means that $(gHg^{-1}\mathbf{-}\{1\})$ and $(hHh^{-1}\mathbf{-}\{1\})$ are disjoint when $g$ and $h$ are not congruent modulo $H$, or again that $H$ and $gHg^{-1}$ have intersection reduced to $\{1\}$ when $g\notin H$. One says that $H$ \og does not meet its conjugates\fg.\\ We are interested in the case where $H$ is a proper subgroup of $G$. Let $X=G/H$. An equivalent property is that every element of $G$ (distinct from $1$) has at most one fixed point in $X$ when $G$ acts on $X$, or again: every element of $G$ fixing two points is the identity. \bigskip{\it Example. } Let $G$ be the group of transformations $h$ of the form $h(x)=ax+b$, where $a$ and $b$ lie in a finite field $\mathbf{F}$, with $a\neq 0$. Let $H$ be the subgroup $\{x\mapsto ax\}$. If $N$ is the subgroup of translations of $G$, then $N$ is normal in $G$, and $G$ is the semidirect product of $H$ by $N$. Then $(G,H)$ is an example of the preceding situation. \bigskip Let us generalize: \begin{defi} A group $G$ is called a \emph{Frobenius group} if it possesses a subgroup $H$ distinct from $\{1\}$ and from $G$ such that $\big|\bigcup_{g\in G}{gHg^{-1}}\big|=\left|G\right|-\left({|G|}/{|H|}-1\right)$. In this case $(G,H)$ is called a \emph{Frobenius couple}. \end{defi}\label{couplefrob} {\it Examples.\\ } $(1)$ Let $N$ and $H$ be two finite groups, with $H$ acting on $N$: to every $h\in H$ we associate $\sigma_h:\ N\rightarrow N$ defined by $\sigma_h(n)=hnh^{-1}$. We have $\sigma_{h_1h_2}=\sigma_{h_1}\circ\sigma_{h_2}$ for all $h_1,h_2\in H$. Let $G$ be the corresponding semidirect product. Let us find under what condition $(G,H)$ is Frobenius. It is necessary and sufficient that $H\cap nHn^{-1}=\{1\}$ for all $n\in N\mathbf{-} \{1\}$. Indeed, let $h\in H\cap nHn^{-1}$; then $h$ can be written $nh'n^{-1}$ with $h'\in H$. Reducing modulo $N$ gives $h=h'$ (since $G/N\simeq H$). Hence $h=nhn^{-1}$, that is, $n=h^{-1}nh=\sigma_{h^{-1}}(n)$. So $n$ is fixed by $\sigma_{h^{-1}}$.
If $h\neq 1$, then necessarily $n=1$.\\ A necessary and sufficient condition for $(G,H)$ to be Frobenius is therefore that there exist no pair $(h,n)$ with $h\neq 1$ and $n\neq 1$ such that $\sigma_h(n)=n$, in other words that $H$ act freely on $N\mathbf{-}\{1\}$. We have $\bigcup_{g\in G}{gHg^{-1}}=\{1\}\cup (G\mathbf{-} N)$, whence $G\mathbf{-}\bigcup_{g\in G}{gHg^{-1}}=~N\mathbf{-}\{1\}$ (indeed $\bigcup_{g\in G}{gHg^{-1}}\subset\{1\}\cup (G\mathbf{-} N)$, and counting elements suffices to conclude). $(2)$ Let $p$ be a prime number and let $\mathbf{F}$ be a finite field containing a $p$-th root of unity $\xi$. Let $N$ be the set of $p\times p$ upper triangular matrices with all diagonal entries equal to $1$. It is a group. Let $H$ be the cyclic group generated by $$\left(% \begin{array}{ccccc} 1 & 0 & \ldots & \ldots & 0 \\ 0 & \xi & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \xi^{p-2} & 0 \\ 0 & \ldots & \ldots & 0 & \xi^{p-1} \\ \end{array}% \right).$$ Then $H$ normalizes $N$. The group $G=N.H$ is the group of upper triangular matrices with powers $\xi^k$ on the diagonal. One checks that the action of $H$ is fixed-point free.\bigskip These examples are in fact typical; indeed we have the \begin{thm}[Frobenius] Let $(G,H)$ be a Frobenius couple. Then the set $N$ consisting of $1$ and the elements of $G$ not conjugate to any element of $H$ is a normal subgroup, and $G=N.H$. \end{thm}\label{thmfrobenius} The key point of the proof is that $N$ is indeed a subgroup: this rests on character theory, and we shall prove it later (cf. Appendix, Th. \ref{frobannexe}). The group $N$ is then normal (being invariant under conjugation). On the other hand: $$\Big|\bigcup_{g\in G}{gHg^{-1}}\Big|=\left|G\right|-\big((G:H)-1\big),$$ hence $|N|=(G:H)$. Finally $N\cap H=\{1\}$, whence $G=N.H$.~\hfill{$\square$}\bigskip One can show (we shall not do so) that a group $G$ can be a Frobenius group in \og only one way\fg: if $(G,H_1)$ and $(G,H_2)$ are Frobenius couples, then $H_1$ is conjugate to $H_2$. In particular the normal subgroup $N$ is unique. We now seek to classify Frobenius groups by studying the structure of $N$ and that of $H$. \section{Structure of $N$} We assume that $N$ and $H$ are distinct from $\{1\}$ and occur in the Frobenius group $G$. Choose $x\in H$ of prime order $p$. The element $x$ defines an automorphism of $N$ of order $p$ with no fixed point $\neq 1$. Hence: $N$ occurs in a Frobenius group if and only if it possesses an automorphism $\sigma$ of prime order with no fixed point other than $1$.
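{\it Example. } Take $N=\mathbf{Z}/7\mathbf{Z}$ and $\sigma(x)=2x$. Since $2^3=8\equiv 1\pmod 7$, $\sigma$ has order $3$, and $x=2x$ forces $x=0$, so $\sigma$ has no fixed point other than $0$. The corresponding semidirect product of $\mathbf{Z}/3\mathbf{Z}$ by $\mathbf{Z}/7\mathbf{Z}$ is the Frobenius group of order $21$; it is the subgroup of the group $\{x\mapsto ax+b\}$ over $\mathbf{F}_7$ obtained by restricting $a$ to the subgroup $\{1,2,4\}$ of order $3$ of $\mathbf{F}_7^*$.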
\begin{prop}\label{prop6.4} Let $\sigma$ be an automorphism of order $p$ (not necessarily prime) of a finite group $N$, with no fixed point other than $1$. Then: \begin{enumerate} \item[(1)] The map $x\mapsto x^{-1}\sigma(x)$ (from $N$ to $N$) is bijective. \item[(2)] For every $x\in N$, we have $x\sigma(x)\sigma^2(x)\cdots\sigma^{p-1}(x)=1$. \item[(3)] If $x$ and $\sigma(x)$ are conjugate in $N$, then $x=1$. \end{enumerate} \end{prop} $(1)$ Since $N$ is finite, it suffices to show that the map is injective. Suppose $x^{-1}\sigma(x)=y^{-1}\sigma(y)$ with $x,y\in N$. Then $yx^{-1}=\sigma(yx^{-1})$, so the element $yx^{-1}$ is fixed by $\sigma$, hence equal to $1$. $(3)$ Let $x\in N$ and suppose there exists $a\in N$ such that $\sigma(x)=axa^{-1}$. By $(1)$, there exists $b\in N$ such that $a^{-1}=b^{-1}\sigma(b)$. Then $\sigma(x)=\sigma(b)^{-1}bxb^{-1}\sigma(b)$, hence $\sigma(bxb^{-1})=bxb^{-1}$, which implies $bxb^{-1}=1$ and then $x=1$. $(2)$ Set $a=x\sigma(x)\sigma^2(x)\cdots\sigma^{p-1}(x)$. We have $$\sigma(a) = \sigma(x)\sigma^2(x)\cdots\sigma^{p-1}(x)x = x^{-1}ax,$$ so $a$ and $\sigma(a)$ are conjugate in $N$, and $a=1$ by $(3)$.~\hfill{$\square$} \begin{coro} If $l$ is a prime number, there exists an $l$-Sylow subgroup of $N$ stable under $\sigma$. \end{coro} Let $S$ be an $l$-Sylow subgroup of $N$. The group $\sigma(S)$ is also an $l$-Sylow subgroup of $N$, so there exists $a\in N$ such that $aSa^{-1}=\sigma(S)$. Write $a^{-1}=b^{-1}\sigma(b)$; then $\sigma(b^{-1})bSb^{-1}\sigma(b)=\sigma(S)$, that is, $bSb^{-1}=\sigma(b)\sigma(S)\sigma(b^{-1})=\sigma(bSb^{-1})$. Hence $bSb^{-1}$ is an $l$-Sylow subgroup of $N$ stable under~$\sigma$.~\hfill{$\square$} \begin{coro} If $a\in N$, the automorphism $\sigma_a: x\mapsto a\sigma(x)a^{-1}$ is conjugate to $\sigma$ in $\mathrm{Aut}(N)$; in particular, it has order $p$ and no fixed point. \end{coro} By \ref{prop6.4} $(1)$, there exists $b\in N$ such that $a=b^{-1}\sigma(b)$; then $\sigma_a(x)=b^{-1}\sigma(bxb^{-1})b$, hence $b\sigma_a(x)b^{-1}=\sigma(bxb^{-1})$, i.e. the following diagram commutes: $$\xymatrix{N \ar[r]^{\sigma_a} \ar[d]_{{\rm conjugation\ by }\; b^{-1}}& N \ar[d]^{{\rm \ conjugation\ by }\; b^{-1}}\\ N \ar[r]_{\sigma} & N}$$ This immediately gives the result.~\hfill{$\square$} \bigskip{\it Examples.\\ } $(1)$ If $p=2$, we have $x\sigma(x)=1$ for every $x\in N$, hence $\sigma(x)=x^{-1}$. Since $\sigma$ is an automorphism, $N$ is abelian. $(2)$ For the case $p=3$ (Burnside), set $\sigma(x)=x'$ and $\sigma^2(x)=x''$. Prop. \ref{prop6.4}~$(2)$ applied to $\sigma$ and $\sigma^2$ gives $xx'x''=1$ and $xx''x'=1$, hence $x'$ and $x''$ commute; for the other pairs this is clear, so $x$, $x'$ and $x''$ commute pairwise. Likewise $x$ and $ax'a^{-1}$ commute for every $a$, as do $x$ and $ax''a^{-1}$. Hence $x'$ and $x''$ commute with every conjugate of $x$. Now $x=(x'x'')^{-1}$, so $x$ commutes with every one of its conjugates, and $N$ has the following property: two conjugate elements commute; hence also $x$ and $(x,y)$. Finally $\big(x,(x,y)\big)=1$ for all $x,y\in N$. The derived group of $N$ is contained in the centre of $N$; this implies that $N$ is nilpotent of class at most $2$. $(3)$ The case $p=5$ was treated by Higman: the group $N$ is then nilpotent of class at most $6$ (and this is the best possible bound).\bigskip Thompson generalized these results in the \begin{thm}[Thompson] $N$ is nilpotent. \end{thm}\label{thmThompson2} For a proof, cf. \cite[Kap. V, Hauptsatz $8$.$14$]{Hupp}. As regards the class of $N$, Higman conjectured that if $p$ is the order of $\sigma$, the class of $N$ is $\leqslant\frac{p^2-1}{4}$.
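{\it Example. } The couple $(\mathcal{S}_3,\langle(12)\rangle)$ illustrates the case $p=2$: here $N=\mathcal{A}_3$, and conjugation by the transposition $(12)$ sends each $3$-cycle to its inverse, so $\sigma(x)=x^{-1}$ and $N$ is indeed abelian.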
\section{Structure of $H$}\label{6.4} We shall say that $H$ has property $\mathcal F$ if there exists a group $G$ containing $H$ and distinct from $H$ such that the couple $(G,H)$ is a Frobenius couple. By the theorems of Frobenius and Thompson, this amounts to saying that there exists a nilpotent group $N\neq\{1\}$ on which $H$ acts \emph{without fixed points} (i.e. freely on $N\mathbf{-}\{1\}$).\bigskip {\it Example. } Let $\mathbf{F}$ be a finite field of characteristic $l$ and let $H$ be a subgroup of $\mathbf{SL}_2(\mathbf{F})$ of order prime to $l$. Taking for $N$ the $\mathbf{F}$-vector space $\mathbf{F}^2$, one checks easily that $H$ acts freely on $N\mathbf{-}\{0\}$. Hence $H$ has property $\mathcal F$. (This applies in particular to the binary icosahedral group of order $120$, a group which is not solvable.) \begin{thm} Let $H$ be a finite group. The following properties are equivalent: \begin{enumerate} \item[(1)] $H$ has property $\mathcal F$ (i.e. $H$ occurs in a Frobenius couple). \item[(2)] There exist a field $K$ and a linear representation $\rho: H\rightarrow \mathbf{GL}_n(K)$, with $n\geqslant 1$, such that $\rho$ is \og fixed-point free\fg\ (i.e. $H$ acts freely on $K^n\mathbf{-}\{0\}$). \item[(3)] For every field $K$ whose characteristic does not divide $|H|$, there exists a fixed-point-free linear representation $\rho: H\rightarrow \mathbf{GL}_n(K)$. \item[(4)] $H$ can be made to act linearly and freely on a sphere $\mathbf{S}_{n-1}$. \end{enumerate} \end{thm} [Note that $(2)$ and $(3)$ imply that $\rho$ is faithful.] Before giving the proof, let us make a few remarks on the following property of a field $K$, denoted $(2_K)$: there exists a fixed-point-free representation $H\rightarrow \mathbf{GL}_n(K)$ (with $n\geqslant 1$). (a) \emph{This property depends only on the characteristic $p$ of $K$.}\\ Indeed, if it is satisfied by $K$ and if $x$ is a non-zero vector of $K^n$, the transforms of $x$ under $H$ generate a vector space of finite dimension $N$ over the prime field $K_0$ (i.e. $\mathbf{F}_p$ or $\mathbf{Q}$), whence a fixed-point-free representation $H \rightarrow \mathbf{GL}_N(K_0)$. By extension of scalars, one deduces one for every field containing $K_0$. (b) \emph{If $(2_K)$ holds, the characteristic $p$ of $K$ is either $0$ or a prime number not dividing $|H|$.}\\ Indeed, if $H$ acts freely on $\mathbf{F}_p^n\mathbf{-}\{0\}$, the order of $H$ divides $p^n-1$ and is therefore not divisible by $p$. (c) \emph{Property $(2_K)$ in characteristic $0$ implies the analogous property in every characteristic $p$ not dividing $|H|$.}\\ Indeed, by (a) there exists a $\mathbf{Q}$-vector space $V$ of finite dimension $\geqslant 1$ on which $H$ acts without fixed points. Let $x\in V$ be non-zero and let $L$ be the $\mathbf{Z}$-lattice in $V$ generated by the transforms of $x$ under $H$. The group $H$ acts without fixed points on $L$. It also acts on the $\mathbf{F}_p$-vector space $V_p=L/pL$. Let us show that this action is fixed-point free as soon as $p$ does not divide the order of $H$. If $s\in H$ has order $m$, the automorphism $s_V$ of $V$ defined by $s$ satisfies $s_V^m=1$ and does not admit $1$ as an eigenvalue. Hence $$1+s_V+s_V^2+\cdots+s_V^{m-1}=0.$$ {\it A fortiori}, the same equation holds in $V_p$, and it implies (since $m$ is prime to $p$) that $sx\neq x$ for every non-zero $x\in V_p$. Whence property $(2_K)$ for $\mathbf{F}_p$.
(d) \emph{Property $(2_K)$ in characteristic $p\neq 0$ implies the analogous property in characteristic $0$.}\\ Indeed, let $\rho_p:H\rightarrow \mathbf{GL}_n(\mathbf{Z}/p\mathbf{Z})$ be a fixed-point-free linear representation of $H$. By (b), $p$ does not divide $|H|$. By what we saw in Chapter \ref{chap4}, one can lift $\rho_p$ to a homomorphism $\rho_{p^\infty}: H\rightarrow \mathbf{GL}_n(\mathbf{Z}_p)$, where $\mathbf{Z}_p=\displaystyle{\lim_{\longleftarrow}{\mathbf{Z}/p^\nu\mathbf{Z}}}$ is the ring of $p$-adic integers. Since $\mathbf{Z}_p\subset\mathbf{Q}_p$, we thus obtain a linear representation $H\rightarrow \mathbf{GL}_n(\mathbf{Q}_p)$ in characteristic zero. This representation is fixed-point free. Indeed, if $x=(x_1,\dots,x_n)$ is a non-zero fixed vector, we may assume (multiplying $x$ by a scalar if necessary) that the $x_i$ belong to $\mathbf{Z}_p$ and that one of them is not divisible by $p$. Reducing the $x_i$ modulo $p$ then gives a non-zero vector of $\mathbf{F}_p^n$ fixed by $H$, contrary to the hypothesis.\bigskip This done, the proof of the theorem is immediate. Indeed, it follows from (a), (b), (c) and (d) that $(2_K)$ is independent of $K$. Whence the equivalence $(2)\Leftrightarrow (3)$ of the theorem. Let us now prove: $(1)\Rightarrow (2)$: If $H$ acts without fixed points on the nilpotent group $N\neq\{1\}$, the centre $C$ of $N$ is not reduced to $\{1\}$. If $p$ is a prime factor of $|C|$, the group $\mathcal{C}_p$ of elements $x\in C$ such that $x^p=1$ is a non-zero $\mathbf{F}_p$-vector space on which $H$ acts without fixed points. $(3)\Rightarrow (1)$: Choose for $K$ a finite field. One obtains in this way a fixed-point-free action of $H$ on an elementary abelian group. $(4)\Rightarrow (2)$: Take $K=\mathbf{R}$. $(3)\Rightarrow (4)$: Take $K=\mathbf{R}$. One obtains a fixed-point-free linear representation $\rho: H\rightarrow \mathbf{GL}_n(\mathbf{R})$. Since $H$ is finite, there exists on $\mathbf{R}^n$ a positive definite quadratic form invariant under $H$ (take the sum of the transforms under $H$ of the standard quadratic form $\sum{x_i^2}$): after conjugating $\rho$ if necessary, we may therefore assume that $\rho(H)$ is contained in the orthogonal group ${\mathbf O}_n(\mathbf{R})$, hence leaves stable the sphere $\mathbf{S}_{n-1}$ with equation $$\sum_{i=1}^{n}{x_i^2}=1.$$ This completes the proof of the theorem.~\hfill{$\square$} \bigskip{\it Remarks.\\ } $(1)$ One can classify the groups $H$ having property $\mathcal F$, cf. \cite{Wolf}. $(2)$ In $(4)$, one cannot drop the condition that $H$ act linearly on $\mathbf{S}_{n-1}$. Example: $\mathbf{SL}_2(\mathbf{F}_p)$ for $p\geqslant 7$.\bigskip {\it Exercise. } Let $H$ be a group having property $\mathcal F$. Show that:\\ $(i)$ Every abelian subgroup of $H$ is cyclic. $(ii)$ If $p$ and $q$ are prime, every subgroup of $H$ of order $pq$ is cyclic. Conversely, if $H$ is solvable and satisfies $(i)$ and $(ii)$, then $H$ has property $\mathcal F$ (Vincent's theorem). On the other hand, one can show that the group $\mathbf{SL}_2(\mathbf{F}_{17})$ has properties $(i)$ and $(ii)$ but not property $\mathcal F$.
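{\it Example. } The quaternion group of order $8$, $\{\pm 1,\pm i,\pm j,\pm k\}$, has property $\mathcal F$: acting by left multiplication on the quaternions $\simeq \mathbf{R}^4$, it acts linearly and freely on the sphere $\mathbf{S}_3$ (if $qx=x$ with $x\neq 0$, then $q=1$, the quaternions being a division ring), so criterion $(4)$ applies. In agreement with the exercise, all its abelian subgroups, namely $\{\pm 1\}$, $\langle i\rangle$, $\langle j\rangle$ and $\langle k\rangle$, are cyclic.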
\chapter{Transfer} \section{Definition}\label{7.1} Let $G$ be a group and $H$ a subgroup of $G$ of finite index. Let $X=G/H$ be the set of left cosets of $H$. For each $x\in X$, choose a representative $\bar{x}$ of $x$ in $G$. The group $G$ acts on $X$. If $s\in G$ and $x\in X$, the element $s\bar{x}$ of $G$ has image $sx$ in $X$. If $\overline{sx}$ denotes the representative of $sx$, there thus exists $h_{s,x}\in H$ such that $s\bar{x}=\overline{sx}\,h_{s,x}$. One sets: $$\mathrm{Ver}(s)=\prod_{x\in X}{h_{s,x}} \pmod{(H,H)},$$ where the product is computed in the group $H^{ab}=H/(H,H)$. \begin{thm}[Schur] The map $\mathrm{Ver}:G\rightarrow H^{ab}$ defined above is a homomorphism and does not depend on the choice of the system of representatives $\{\bar{x}\}_{x\in X}$. \end{thm} We first show that the map $\mathrm{Ver}$ is well defined. Let $\{\bar{x}'\}_{x\in X}$ be another system of representatives, and let $\mathrm{Ver} '(s)$ be the product computed with respect to the $\bar{x}'$. The element $\bar{x}'\in G$ has image $x$ in $X$; there thus exists $h_x\in H$ such that $\bar{x}'=\bar{x}\,h_x$.\\ We have: $$\begin{array}{rcl} s\,\bar{x}' & = & s\,\bar{x}\,h_x\\ & = & \overline{sx}\,h_{s,x}\,h_x \\ & = & \overline{sx}\,h_{sx}\,h^{-1}_{sx}\,h_{s,x}\,h_x \\ & = & (\overline{sx})'\,h^{-1}_{sx}\,h_{s,x}\,h_x, \\ \end{array}$$ whence $$\begin{array}{rcl} \mathrm{Ver} '(s) & = & \prod{h^{-1}_{sx}\,h_{s,x}\,h_x}\pmod{(H,H)} \\ & = & \left(\prod{h_{sx}}\right)^{-1}\prod{h_{s,x}}\prod{h_x}\pmod{(H,H)},\\ \end{array}$$ since $H^{ab}$ is abelian\footnote{To avoid cumbersome notation, it is understood that the products run over the $x\in X$.}. Now $\prod{h_{sx}}=\prod{h_x}$, because $sx$ runs over $X$ as $x$ runs over $X$, whence: $$\mathrm{Ver} '(s)=\textstyle\prod{h_{s,x}}=\mathrm{Ver}(s)\pmod{(H,H)},$$ and the map $\mathrm{Ver}$ is well defined.\\ Let us now show that it is a homomorphism. For $s,t\in G$ we have: $$\begin{array}{rcl} st\,\bar{x} & = & s\,\overline{tx}\,h_{t,x} \\ & = & \overline{stx}\,h_{s,tx}\,h_{t,x}, \\ \end{array}$$ whence: $$\begin{array}{rcl} \mathrm{Ver}(st) & = &\prod{h_{s,tx}\,h_{t,x}}\pmod{(H,H)} \\ & = & \prod{h_{s,tx}}\prod{h_{t,x}}\pmod{(H,H)}, \end{array}$$ since $H^{ab}$ is abelian. Now $\prod{h_{s,tx}}=\prod{h_{s,x}}$, because $tx$ runs over $X$ as $x$ runs over $X$.\\ Hence $\mathrm{Ver} (st)=\prod{h_{s,x}}\prod{h_{t,x}}=\mathrm{Ver} (s)\mathrm{Ver} (t)\pmod{(H,H)}$.~\hfill{$\square$} \bigskip Since $H^{ab}$ is abelian, the homomorphism $\mathrm{Ver}$ factors through a homomorphism from $G^{ab}$ to $H^{ab}$ (which we will still denote $\mathrm{Ver}$), called the \emph{transfer}. \bigskip {\it Remark. } The transfer is \emph{functorial} with respect to isomorphisms: if $\sigma$ is an isomorphism from the pair $(G,H)$ to the pair $(G',H')$, the diagram $$\xymatrix{ G^{ab}\ar[r]^\sigma \ar[d]_{\mathrm{Ver}} & G'^{ab} \ar[d]^{\mathrm{Ver}} \\ H^{ab} \ar[r]^\sigma & H'^{ab} }$$ is commutative (it suffices to observe that if $\{\bar{x}\}$ is a system of representatives of $G/H$, then $\{\sigma(\bar{x})\}$ is one for $G'/H'$).\\ In particular, taking $G=G'$, $H=H'$ and $\sigma(x)=gxg^{-1}$ with $g\in N_G(H)$, this shows that the image of the homomorphism $\mathrm{Ver}~:~G^{ab}~\rightarrow~H^{ab}$ is contained in the set of elements of $H^{ab}$ fixed by $N_G(H)$.
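\bigskip {\it Remark (computational illustration). } The definition above is easily checked by machine on a small example. The following Python sketch is an added illustration (the choice of $G=\mathcal{S}_3$ with $H$ of order $2$ is arbitrary); since $H$ is abelian here, $H^{ab}=H$ and no reduction modulo $(H,H)$ is needed. It computes $\mathrm{Ver}(s)$ from a system of coset representatives and verifies that $\mathrm{Ver}(st)=\mathrm{Ver}(s)\mathrm{Ver}(t)$ for all $s,t$.

\begin{verbatim}
from itertools import permutations

def compose(a, b):                     # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

# G = S_3 as permutations of {0,1,2}; H = <(0 1)>, abelian of index 3.
G = [tuple(p) for p in permutations(range(3))]
e = (0, 1, 2)
H = [e, (1, 0, 2)]

reps = []                              # one representative per left coset of H
for g in G:
    if not any(compose(inverse(r), g) in H for r in reps):
        reps.append(g)

def rep_of(g):                         # the chosen representative of the coset gH
    return next(r for r in reps if compose(inverse(r), g) in H)

def Ver(s):                            # transfer: product of the h_{s,x}, taken in H
    t = e
    for xbar in reps:
        sxbar = compose(s, xbar)
        h = compose(inverse(rep_of(sxbar)), sxbar)   # s*xbar = (sx)bar * h
        t = compose(t, h)
    return t

assert all(Ver(compose(s, t)) == compose(Ver(s), Ver(t))
           for s in G for t in G)      # Schur's theorem on this example
\end{verbatim}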
\section{Computation of the transfer}\label{7.2} Let $H$ be a subgroup of finite index of $G$ and let $X=G/H$. An element $s\in G$ acts on $X$; let $C$ be the cyclic subgroup of $G$ generated by $s$. Then $C$ partitions $X$ into orbits $O_\alpha$. Let $f_\alpha=|O_\alpha|$ and $x_\alpha\in O_\alpha$. We have $s^{f_\alpha}x_\alpha=x_\alpha$. If $g_\alpha$ is a representative of $x_\alpha$, we thus have: $$s^{f_\alpha}g_\alpha =g_\alpha h_\alpha,\ \mbox{with } h_\alpha \in H.$$ \begin{prop}\label{7.2.1} We have $\mathrm{Ver}(s)=\prod_{\alpha}{h_\alpha }=\prod_{\alpha}{g_\alpha ^{-1} s^{f_\alpha} g_\alpha }\pmod{(H,H)}.$ \end{prop} Take as a system of representatives of $X$ the elements $s^ig_\alpha $, with $0\leqslant i<f_\alpha $. If the representative of $x\in X$ is of the form $s^{f_\alpha -1}g_\alpha $, the corresponding element $h_{s,x}$ of $H$ is $h_\alpha $; the other $h_{s,x}$ are equal to $1$. The proposition follows. ~\hfill{$\square$} \begin{coro}\label{7.2.2} Let $\varphi$ be a homomorphism from $H^{ab}$ to a group $A$. Suppose that $\varphi(h)=\varphi(h')$ whenever the elements $h,h'\in H$ are conjugate in $G$. Then $$\varphi\big(\mathrm{Ver} (h)\big)=\varphi(h)^n,$$ for every $h\in H$, where $n=(G:H)$. \end{coro} Indeed, we have $\varphi\big(\mathrm{Ver}(h)\big)=\prod_{\alpha}{\varphi(g_\alpha ^{-1}h^{f_\alpha }g_\alpha )}$. Now the elements $g_\alpha ^{-1} h^{f_\alpha }g_\alpha $ and $h^{f_\alpha }$ are conjugate in $G$, so: $$\textstyle{\varphi\big(\mathrm{Ver}(h)\big)=\prod_{\alpha}{\varphi(h^{f_\alpha })}= \prod_{\alpha}{\varphi(h)^{f_\alpha }}}.$$ The result then follows from the relation $\sum_{\alpha}{f_\alpha }=\sum_{\alpha}{|O_\alpha |}=|X|=n.$ ~\hfill{$\square$} \bigskip Since $H\subset G$, there is a natural morphism $H^{ab}\rightarrow G^{ab}$. \begin{coro} The composite homomorphism $G^{ab}\stackrel{\mathrm{Ver}}{\longrightarrow}H^{ab}\longrightarrow G^{ab}$ is $s\mapsto s^n$. \end{coro} This is immediate from the proposition, since $$g_\alpha ^{-1}s^{f_\alpha }g_\alpha = s^{f_\alpha }\pmod{(G,G)}\hspace{0.5cm} \mbox{and}\hspace{0.5cm} \textstyle\sum_{\alpha}{f_\alpha }=|X|=n.\eqno\square$$ \begin{coro} If $G$ is abelian, then $\mathrm{Ver}:G\rightarrow H$ is given by $s\mapsto s^n.$ \end{coro} \section{Examples of use of the transfer}\label{7.3} \subsection{First example (Gauss)}\label{Gauss} Fix a prime number $p\neq 2$. Let $G=\mathbf{F}_p^*$ and let $H=\{\pm 1\}$. Then the index of $H$ in $G$ is $(p-1)/2$ and the transfer is given by $\mathrm{Ver} (x)=x^{(p-1)/2}$ for $x\in \mathbf{F}_p^*$. But this is also the \emph{Legendre symbol $\big(\frac{x}{p}\big)$}. This gives us a way of computing $\big(\frac{x}{p}\big)$.\\ Choose the system of representatives of $X=G/H$ given by ${S=\{1,2,\dots,(p-1)/2\}.}$ Let $x\in G$ and let $s\in S$. If $xs\in S$, the corresponding $h_{s,x}$ equals $1$; otherwise it is $-1$. We therefore set: $$\varepsilon(x,s) = \left\{\!\!\begin{array}{ll} 1 & \mbox{if}\ xs\in S, \\ -1 & \mbox{if}\ xs\notin S. \\ \end{array}\right.$$ We have $\mathrm{Ver} (x)=\prod_{s\in S}{\varepsilon}(x,s)$\quad (\emph{Gauss's lemma}).
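\bigskip {\it Remark. } Gauss's lemma as just stated translates directly into a short routine; the following Python sketch (an added illustration) computes $\big(\frac{x}{p}\big)$ from the signs $\varepsilon(x,s)$ and checks the result against Euler's criterion $x^{(p-1)/2} \bmod p$.

\begin{verbatim}
def legendre_gauss(x, p):
    """(x/p) via Gauss's lemma, for p an odd prime and p not dividing x."""
    eps = 1
    for s in range(1, (p - 1) // 2 + 1):        # s runs over S
        if (x * s) % p > (p - 1) // 2:          # xs falls outside S: epsilon = -1
            eps = -eps
    return eps

for p in (3, 5, 7, 11, 13, 17, 19, 23):
    for x in range(1, p):
        # Euler's criterion returns 1 or p-1; legendre_gauss returns 1 or -1.
        assert legendre_gauss(x, p) % p == pow(x, (p - 1) // 2, p)
\end{verbatim}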
\bigskip Let us compute, for example, $\big(\frac{2}{p}\big)$ for $p\neq 2$. We write $p=1+2m$, and: $$\begin{array}{rcll} \big(\frac{2}{p}\big) & = & (-1)^{m/2} & \mbox{if $m$ is even,} \\ & & & \\ \big(\frac{2}{p}\big) & = & (-1)^{(m+1)/2} & \mbox{if $m$ is odd.} \\ \end{array}$$ Whence: \begin{eqnarray*} \textstyle p\equiv 1 \pmod{8} \Longrightarrow \big(\frac{2}{p}\big)=+1,\\ \textstyle p\equiv 3 \pmod{8} \Longrightarrow \big(\frac{2}{p}\big)=-1,\\ \textstyle p\equiv 5 \pmod{8} \Longrightarrow \big(\frac{2}{p}\big)=-1,\\ \textstyle p\equiv 7 \pmod{8} \Longrightarrow \big(\frac{2}{p}\big)=+1,\\ \end{eqnarray*} which is summarized by: $\big(\frac{2}{p}\big)=1 \Longleftrightarrow p\equiv \pm 1 \pmod{8}$. \subsection{Second example} \begin{prop} If $G$ is a torsion-free group containing a subgroup $H$ of finite index isomorphic to $\mathbf{Z}$, then $G$ itself is isomorphic to $\mathbf{Z}$. \end{prop} Replacing $H$ by the intersection of its conjugates, we may assume that $H$ is normal in $G$. The group $G$ acts on $H$, whence a homomorphism $\varepsilon:G\rightarrow\mathrm{Aut} (H)=\{\pm 1\}.$ Let $G'$ be the kernel of $\varepsilon$; then $H\subset G'$, since $H$, being abelian, acts trivially on itself by inner automorphisms ($H$ is even contained in the center of $G'$, since the latter acts trivially on $H$). Hence the transfer $\mathrm{Ver} :G'^{ab}\rightarrow H^{ab}=\mathbf{Z}$ equals $x\mapsto x^n$, where $n=(G':H)$. Let $\Phi$ be the kernel of $\mathrm{Ver} :G'\rightarrow H^{ab}$; then $\Phi\cap H=\{1\}$ since $H$ is isomorphic to $\mathbf{Z}$, so $\Phi$ is finite, and in fact $\Phi=\{1\}$ since $G$ is torsion-free. Hence $G'$ is isomorphic to $\mathbf{Z}$. If $G$ equals $G'$, we are done.\\ Otherwise $(G:G')=2$, $G'\simeq \mathbf{Z}$, and $G/G'$ acts on $G'$ by $y\mapsto y^{\pm 1}$. So let $x\in G\mathbf{-}G'$; since $x\notin\ker\varepsilon$, we have $xyx^{-1}=y^{-1}$ for every $y\in G'$. Moreover $x^2\in G'$, since the index of $G'$ in $G$ is $2$. Taking $y=x^2$, we get $xx^2x^{-1}=x^{-2}$, whence $x^2=x^{-2}$; hence $x=1$, since $G$ is torsion-free, contradicting $x\notin G'$. Therefore $G$ is isomorphic to $\mathbf{Z}$.~\hfill{$\square$} \bigskip {\it Remark. } There is an analogous (but far less elementary) result for non-abelian free groups\label{Stallings-Swan} (Stallings--Swan\footnote{References: J.R. Stallings, {\it On torsion-free groups with infinitely many ends}, Ann. Math. $88$ $(\mit 1968)$, $312-334$. R. Swan, {\it Groups of cohomological dimension one}, J. Algebra $12$ $(\mit 1969)$, $585-610$.}). \section{Transfer into a Sylow subgroup}\label{7.4} \smallskip From now on, we assume that $G$ is finite. \medskip \begin{thm} Let $H$ be a $p$-Sylow subgroup of a group $G$ and let $\varphi:H\rightarrow A$ be a homomorphism with values in an abelian $p$-group $A$. Then: \begin{enumerate} \item[(1)] For $\varphi$ to extend to a homomorphism from $G$ to $A$, it is necessary and sufficient that $\varphi(h)=\varphi(h')$ for all $h,h'\in H$ that are conjugate in $G$. \item[(2)] If this condition holds, the extension is unique and given by $s\mapsto \varphi\big(\mathrm{Ver}(s)\big)^{1/n}$, where $n=(G:H)$; this makes sense because $n$ is prime to $p$.
\end{enumerate} \end{thm} $(1)$ The condition is necessary: if $\widetilde{\varphi}$ is an extension of $\varphi$ to $G$, then for $h\in H$ and $g\in G$ with $g^{-1}hg\in H$ we have $$\varphi(g^{-1}hg)=\widetilde{\varphi}(g)^{-1}\varphi(h)\widetilde {\varphi}(g)=\varphi(h)$$ since $A$ is abelian.\\ The condition is sufficient: since $n$ is prime to $p$ and $A$ is a $p$-group, $\varphi\big(\mathrm{Ver}(s)\big)^{1/n}$ makes sense (for every $a\in A$ there is a unique $b\in A$ such that $b^n=a$), and the map $s\mapsto \varphi\big(\mathrm{Ver}(s)\big)^{1/n}$ works, by cor. \ref{7.2.2}. $(2)$ The extension is unique because $\varphi$ is necessarily trivial on the $p'$-Sylow subgroups of $G$ for $p'\neq p.$ ~\hfill{$\square$} \begin{thm} Let $H$ be an abelian $p$-Sylow subgroup of $G$ and let $N$ be its normalizer in $G$. Then the image of the homomorphism $\mathrm{Ver}:G^{ab}\rightarrow H^{ab}=H$ is the set of elements of $H$ fixed by $N$ (i.e. the elements of $H$ lying in the center of $N$). \end{thm} By the remark made at the end of \S\ \ref{7.1}, we already have the inclusion of the image of $\mathrm{Ver}$ in $H^N=\{h\in H|\ xhx^{-1}=h, \forall\, x\in N\}$.\\ Let us prove equality. We have $N\supset H$, and $(N:H)$ is prime to $p$ since $H$ is a $p$-Sylow subgroup. Define the homomorphism $\varphi:H\rightarrow H^N$ by $\varphi(h)=\big(\prod_{x\in N/H}{xhx^{-1}}\big)^{1/(N:H)}.$\\ We indeed have $\prod_{x\in N/H}{xhx^{-1}}\in H^N$, since for $x'\in N$: $$x'\big(\textstyle\prod_{x\in N/H}{xhx^{-1}}\big)x'^{-1}=\prod_{x\in N/H}{x'xhx^{-1}x'^{-1}}=\prod_{x\in N/H}{xhx^{-1}}.$$ Moreover, since $H$ is abelian, if $h,h'\in H$ are conjugate in $G$, they are conjugate in $N$ (cf. \S\ \ref{2.4}), and then $\varphi(h)=\varphi(h')$. By cor. \ref{7.2.2}, we have $$\varphi\big(\mathrm{Ver}(h)\big)=\varphi(h)^n,$$ for $h\in H$, where $n$ is the index of $H$ in $G$. Now, for $h\in H^N$ we have $\varphi(h)=h$, and since $\mathrm{Ver}(h)\in H^N$, we get, for $h\in H^N$: $$\mathrm{Ver}(h)=\varphi\big(\mathrm{Ver}(h)\big)=\varphi(h)^n,$$ i.e. $\mathrm{Ver}(h)=h^n$ if $h\in H^N$.\\ Since $H^N$ is a $p$-group and $n$ is prime to $p$, every element of $H^N$ is the $n$-th power of an element of $H^N$. Hence $\mathrm{Im\, }(\mathrm{Ver})=H^N.$ ~\hfill{$\square$} \begin{thm}\label{7.4.3} Let $H$ be a $p$-Sylow subgroup of $G$. Suppose that $H$ is abelian, different from $\{1\}$, and that $G$ has no cyclic quotient of order $p$. Let $N$ be the normalizer of $H$ in $G$. Then: \begin{enumerate} \item[(1)] The set $H^N$ of elements of $H$ fixed by $N$ is reduced to $\{1\}$. \item[(2)] If $r$ is the rank of $H$ (the minimal number of generators), there exists a prime number $l$, distinct from $p$, dividing both $(N:H)$ and $\prod_{i=1}^r(p^i-1).$ \end{enumerate} \end{thm} $(1)$ If $H^N\neq\{1\}$, we get a nontrivial homomorphism $\mathrm{Ver}:G\rightarrow H^N$, where $H^N$ is a $p$-group, from which one extracts a cyclic quotient of $G$ of order $p$. For $(2)$, let $H_p$ be the subgroup of $H$ consisting of the elements $x$ such that $x^p=1$; it is an $\mathbf{F}\!_p$-vector space of dimension $r$ \big(since $H=\prod_{i=1}^{r}(\mathbf{Z}/p^{n_i}\mathbf{Z})$\big).
The action of $N$ on $H_p$ is nontrivial by $(1)$; it defines a subgroup $\Phi$ of $\mathrm{Aut}(H_p)\simeq\mathbf{GL}_r(\mathbf{Z}/p\mathbf{Z}).$ If $l$ is a prime factor of the order of $\Phi$, then $l$ divides the order of $N/H$, since $\Phi$ is a quotient of $N/H$ (indeed, $\Phi$ is defined by the action of $N$ on $H$, which is in fact an action of $N/H$, because $H$, being abelian, acts trivially on itself). We have $l\neq p$, since $p$ does not divide the order of $N/H$; and since $\Phi$ is a subgroup of $\mathbf{GL}_r(\mathbf{Z}/p\mathbf{Z})$, $l$ divides the order of $\mathbf{GL}_r(\mathbf{Z}/p\mathbf{Z})$, which is $p^{r(r-1)/2}\prod_{i=1}^{r}(p^i-1)$. Whence $(2)$. ~\hfill{$\square$} \begin{coro}\label{coro7.9} If $p=2$, the subgroup $H$ is not cyclic. \end{coro} Indeed, we then have $r\geqslant 2$ by th. \ref{7.4.3}, but a direct proof can be given: if $H$ is a cyclic $2$-Sylow subgroup, let $h$ be a generator of $H$. The group acts on itself by translations. The element $h$ then cuts $G$ into $|G/H|$ orbits. We map $G$ to $\{\pm 1\}$ by sending $x\in G$ to the signature of the permutation that $x$ induces on $G$ by translation. Now $h$ consists of an odd number (namely $|G/H|$) of cycles of the form $(x,hx,\dots,h^{2^n-1}x)$, where $|H|=2^n$. Each of these cycles has signature $(-1)$, so the signature of $h$ is $(-1)$. We have thus exhibited a nontrivial homomorphism from $G$ to $\{\pm 1\}$, a contradiction. ~\hfill{$\square$} \bigskip {\it Remark. } Th. \ref{7.4.3} shows that $N_G(H)\neq H$ when $H$ is an abelian $p$-Sylow subgroup of a group with no cyclic quotient of order $p$ (otherwise $H^N=H\neq\{1\}$). \section{Application: simple groups of odd order less than $2000$} In fact, we will show that there is no group $G$ of odd order, with $1<|G|<2000$, such that $G=(G,G)$.\\ By Burnside's theorem (cf. Appendix, th. \ref{thmburn}), there are at least three prime factors in $|G|$; if $p^{\alpha}$ is the smallest of these factors, we have $p^{3\alpha}<2000$, which leaves $5$ possible cases: $p^{\alpha}=3,5, 7, 9$ or $11$. \emph{Case $p^{\alpha}=3$}: the group $G$ has a $3$-Sylow subgroup of order $3$, hence cyclic, hence abelian. Let $N$ be its normalizer. By th. \ref{7.4.3}, there exists a prime $l$ distinct from $3$ dividing both $|N|$ and $(p-1)=2$. But $|N|$ is odd; this is impossible. \emph{Case $p^{\alpha}=5$}: it is excluded by a similar argument. \emph{Case $p^{\alpha}=9$}: similarly, the $3$-Sylow subgroup has order $3^{2}$, hence is abelian. The same argument (in the two possible cases, $r=1$ or $r=2$) excludes this case. \emph{Case $p^{\alpha}=7$}: by the same theorem, there must be a prime $l$ dividing $|N|$ (odd) and $p-1=6$. Hence $3$ divides $|G|$. Since the previous cases exclude $p^{\alpha}=3$ or $9$, we get: $3^{3}$ divides $|G|$. By Burnside's theorem, there is a prime $q$, distinct from $3$ and $7$, dividing the order of $G$. Hence $|G|\geqslant 3^{3}.7.q^{\beta}$ with $q^{\beta}\geqslant 11$ (for if $q=5$, a case already treated shows that $\beta\geqslant2$). This is impossible, since $3^3.7.11>2000$. \emph{Case $p^{\alpha}=11$}: th. \ref{7.4.3}, applied to the $11$-Sylow subgroup, shows that there exists $l$ dividing $|N|$ and $p-1=10$.
By a previous case, we then have $|G|\geqslant 11.5^{2}.q^{\beta}$ with $q^{\beta}\geqslant 13$, which is impossible.\hfill{$\square$} \section{Application: non-abelian simple groups of order less than $200$} In this \S, we assume $|G|\leqslant 200$. \begin{prop} \ \begin{itemize} \item[(1)] Suppose $G=(G,G)$ and $G\neq\{1\}$. Then the order of $G$ is $60,120,168$ or $180$. \item[(2)] If $G$ is simple and non-abelian, then the order of $G$ is $60$ or $168$, and $G$ is isomorphic to $\mathcal{A}_{5}$ or $\mathbf{PSL}_{2}(\mathbf{F}_{7})$. \end{itemize} \end{prop} $(1)$ By the previous \S, the order of $G$ is even, and it is even divisible by $4$, by cor. \ref{coro7.9}, which asserts the non-existence of cyclic $2$-Sylow subgroups.\\ {\it Case where the $2$-Sylow subgroup $H$ has order $4$}. It is then $\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}$. Let $N=N_{G}(H)$; $N$ acts nontrivially on $H$ (since $H^{N}=\{1\}$, cf. th. \ref{7.4.3}), whence a nontrivial homomorphism $N\rightarrow \mathrm{Aut}(\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z})$ (a group of order $6$). If $N$ mapped onto a subgroup of order $2$ of $\mathrm{Aut}\, H$, we would have $H^{N}\simeq \mathbf{Z}/2\mathbf{Z}$ (to see this, it suffices to examine the automorphisms of $\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}$). Hence $3$ divides the order of $N$, and therefore also the order of $G$. This gives five possibilities:\\ $\bullet$ $|G|=4.3.13$: let $H$ be a $13$-Sylow subgroup and $N$ its normalizer. Then $(G:N)$ is the number of $13$-Sylow subgroups of $G$, so $(G:N)\equiv 1\pmod{13}$; since $(G:N)$ divides $4.3$, this implies $(G:N)=1$, hence $H$ is normal: impossible.\\ $\bullet$ $|G|=4.3.11$: let $N$ be the normalizer of an $11$-Sylow subgroup $H$; then $(G:N)$ divides $4.3$ and $(G:N)\equiv 1 \pmod{11}$, so either $(G:N)=1$, which is impossible, or $(G:N)=12$, and then $N=H$, whence $H^N=H$ (since $H$ is abelian), which is impossible (cf. \S\ \ref{7.4}).\\ $\bullet$ $|G|=4.3.7$: let $N$ be the normalizer of a $7$-Sylow subgroup $H$. Then $(G:N)$ divides $12$ and $(G:N)\equiv 1\pmod{7}$; hence $(G:N)=1$: impossible.\\ $\bullet$ The remaining cases are $|G|=4.3.5=60$ and $|G|=4.3^{2}.5=180$ (the other cases are eliminated, since otherwise $|G|>200$).\\ {\it Case where the $2$-Sylow subgroup $H$ has order $8$}. Two cases are possible: $|G|=8.3.5$ or $|G|=8.3.7$; the other cases give too large an order for $G$. Likewise, the case $|H|>8$ is eliminated for reasons of order. This gives assertion $(1)$. $(2)$ Let us examine the cases $|G|=4.3^{2}.5$ and $|G|=8.3.5$: let $H$ be a $5$-Sylow subgroup and $N$ its normalizer; then $(G:N)\equiv 1 \pmod{5}$ and $(G:N)$ divides $4.3^{2}$ in one case, $8.3$ in the other. In both cases, the only possibility is $(G:N)=6$. Let $X$ be the set of $5$-Sylow subgroups of~$G$. The group $G$ maps to the group of permutations of $X$, that is, $\mathcal{S}_{6}$. Since $G$ is simple, it maps into $\mathcal{A}_{6}$. Now $\mathcal{A}_{6}$ has order $360$, and $G$ has order $180$ or $120$. The group $\mathcal{A}_{6}$ cannot have a subgroup of index $m$ with $1<m<6$ (here it would be $3$ or $2$), for otherwise $\mathcal{A}_{6}$ would embed in $\mathcal{S}_{m}$, which is impossible since $|\mathcal{A}_{6}|>|\mathcal{S}_{m}|$. The only possible orders of non-abelian simple groups less than $200$ are therefore $4.3.5=60$ and $8.3.7=168$.\\ {\it Structure of the simple groups of order $60$ and $168$}.
{\it Order $60$}: let $H$ be a $2$-Sylow subgroup of $G$; then $H$ cannot be cyclic (cor. \ref{coro7.9}), and hence (th. \ref{7.4.3}) $3$ divides $|N|$ ($N$ the normalizer of $H$). Hence $12$ divides $|N|$, and $N\neq G$, so $|N|=12$. Thus $G/N$ has order $5$ and we get a nontrivial homomorphism from $G$ to $\mathcal{S}_{5}$. Since $G$ is simple, this gives an embedding of $G$ into $\mathcal{A}_{5}$, and for reasons of order we have $G=\mathcal{A}_{5}$. {\it Order $168$}: let $H$ be a $7$-Sylow subgroup of $G$ and let $N$ be its normalizer. Then $(G:N)$ divides $8.3$ and $(G:N)\equiv 1\pmod{7}$. Since $N\neq G$, we have $(G:N)=8$, hence $|N|=21$. Consider the exact sequence $\{1\}\rightarrow H\rightarrow N \rightarrow N/H \rightarrow \{1\}$. Since the order of $H$ is prime to that of $N/H$, the group $N$ is a semidirect product of $N/H$ and $H$ (cf. th. \ref{Zassen}). The group $N$ therefore has two generators: $\alpha$ (a generator of $H$) with $\alpha^{7}=1$, and $\beta$ (a generator of $N/H$) with $\beta^{3}=1$. The automorphism $x\mapsto \beta x\beta^{-1}$ of $H$ has order $3$; it is therefore either $x\mapsto x^{2}$ or $x\mapsto x^{-2}$. Replacing $\beta$ by $\beta^{-1}$ if necessary, we may assume it is $x\mapsto x^{2}$. We then have $\beta\alpha\beta^{-1}=\alpha^{2}$.\\ Let $X$ be the set of $7$-Sylow subgroups of $G$. Then $H$ acts on $X$ and fixes itself as an element of $X$; call this element $\infty$. We have $X=\{\infty\}\cup X_{0}$ with $|X_{0}|=7$. The group $H$ acts freely on $X_{0}$ (since $H$ is cyclic of order $7$). The element $\beta$ acts on $X$ and fixes $\infty$, since $\beta\in N$. Since $\beta^3=1$, there exists $x_{0}\in X_{0}$ such that $\beta x_{0}=x_{0}$. Then $$X=\{x_{0},\alpha x_{0},\dots,\alpha^{6} x_{0},\infty \}.$$ We identify $X$ with $\mathbf{P}_{1}(\mathbf{F}_{7})$ by indexing $\alpha^{i} x_{0}$ by $i$.\\ The element $\alpha$ acts on $\mathbf{P}_{1}(\mathbf{F}_{7})$ by: $\alpha(i)=i+1$ if $i<6$, $\alpha(6)=0$ and $\alpha(\infty)=\infty $. The element $\beta$ acts by: $\beta(\infty)=\infty$, $\beta(0)=0$; moreover $\beta\alpha=\alpha^{2}\beta$, whence $\beta(i+1)=\beta(i)+2$ and $\beta(i)=2i$ for all $i$. Thus $\alpha$ acts on $\mathbf{P}_{1}(\mathbf{F}_{7})$ as a translation and $\beta$ as a homothety. Let $C$ be the cyclic subgroup of $N$ generated by $\beta$ and let $M$ be its normalizer in $G$. Since $C$ is a cyclic $3$-Sylow subgroup of $G$, $2$ divides $|M|$ (cf. th. \ref{7.4.3}). The group $M$ acts nontrivially on $C$ (cf. \S\ \ref{7.4}), so there exists $\gamma$ such that $\gamma C\gamma^{-1}=C$ and $\gamma\beta\gamma^{-1}=\beta^{-1}$. Since $\gamma\notin C$ and $\gamma\neq \alpha^{n}$ (because $\alpha\notin M$), $\gamma$ may be chosen of order $2^{n}$.\\ The element $\gamma$ maps an orbit of $C$ to an orbit of $C$, so $\gamma(\{0,\infty\})=\{0,\infty\}$. Now $\gamma$ acts without fixed points on $X$, for otherwise it would be conjugate to an element of $N$, which is impossible since $|N|$ is odd. Hence $\gamma(0)=\infty$ and $\gamma(\infty)=0$. Since $\gamma^{2}$ fixes $\infty$, we have $\gamma^2\in N$; since $\gamma$ has order $2^n$ and $|N|$ is odd, this forces $\gamma^{2}=1$. Hence $\gamma$ exchanges the two orbits $\{1,2,4\}$ and $\{3,6,5\}$. Set $\gamma(1)=\lambda$. Then $\lambda$ equals $3,6$ or $5$. Since $\gamma\beta=\beta^{-1}\gamma$, we have $\gamma(2i)\equiv\gamma(i)/2\pmod{7}$.
Hence $\gamma(i)=\lambda/i$, and $\gamma$ is a homography; therefore $\gamma\in \mathbf{PGL}_{2}(\mathbf{F}_{7})$. Since $-\lambda$ is a square, we have $$\gamma(i)=\frac{-\mu}{\mu^{-1}i}$$ with $\mu^{2}=-\lambda$. The determinant of the matrix $$\left( \begin{array}{cc} 0 & -\mu \\ \mu^{-1} & 0 \\ \end{array} \right) $$ being equal to $1$, we have $\gamma\in \mathbf{PSL}_{2}(\mathbf{F}_{7})$.\\ Now $\alpha, \beta$ and $\gamma$ generate $G$. Indeed, let $G'$ be the subgroup of $G$ generated by these elements. Then $G'$ contains $N$ and $\gamma$ (of even order). If $G'\neq G$, then $G'$ has index $2$ or $4$, and $G$ maps into $\mathcal{A}_{4}$ or $\mathcal{A}_{2}$, which is impossible. Hence $G=G'$. We obtain an injective homomorphism $G\rightarrow \mathbf{PSL}_{2}(\mathbf{F}_{7})$. Since these two groups have the same order, it is an isomorphism.\hfill{$\square$}
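\bigskip {\it Remark (computational check). } The explicit action of $\alpha$, $\beta$, $\gamma$ on $\mathbf{P}_{1}(\mathbf{F}_{7})$ obtained above can be verified by machine. The following Python sketch (an added illustration; $\lambda=3$ is one of the three allowed values) generates the permutation group generated by $\alpha: i\mapsto i+1$, $\beta: i\mapsto 2i$, $\gamma: i\mapsto \lambda/i$ and confirms that its order is $168=|\mathbf{PSL}_{2}(\mathbf{F}_{7})|$.

\begin{verbatim}
P, INF, LAM = 7, 7, 3                  # P^1(F_7): points 0..6 and infinity (coded 7)

def alpha(i): return INF if i == INF else (i + 1) % P      # translation i -> i+1
def beta(i):  return INF if i == INF else (2 * i) % P      # homothety  i -> 2i
def gamma(i):                                              # homography i -> LAM/i
    if i == 0:   return INF
    if i == INF: return 0
    return (LAM * pow(i, -1, P)) % P   # pow(i, -1, P): inverse mod P (Python 3.8+)

def as_perm(f):    return tuple(f(i) for i in range(8))
def compose(a, b): return tuple(a[b[i]] for i in range(8))

gens = [as_perm(alpha), as_perm(beta), as_perm(gamma)]
group, frontier = {tuple(range(8))}, set(gens)
while frontier:                        # close under multiplication by the generators
    group |= frontier
    frontier = {compose(g, s) for g in frontier for s in gens} - group
print(len(group))                      # prints 168
\end{verbatim}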
\section{Introduction} Gamma-ray bursts (GRBs), the most violent explosive sources in the cosmos, have been identified as cosmological events since 1997 (van Paradijs et al. 1997; Metzger et al. 1997). Recently, GRB 090423 was detected at a redshift above 8 (Salvaterra et al. 2009a; Tanvir et al. 2009). The progenitors of long-duration GRBs (LGRBs) are proposed to be massive collapsing stars (Woosley 1993; Kumar, Narayan \& Johnson 2008). Some long bursts have been observed in association with supernova events (Hjorth et al. 2003; Stanek et al. 2003; Malesani et al. 2004; Mazzali et al. 2006; Xu et al. 2008), pointing to a common star-forming origin (Paczynski 1998). Indeed, long GRBs are found in star-forming galaxies dominated by young stellar populations (Christensen et al. 2004). In general, GRBs favor metal-poor environments (Fynbo et al. 2006; Kewley et al. 2007) and hosts with low stellar masses (Wiersema et al. 2007). Jakobsson et al. (2005) proposed that GRB host galaxies, at least the high-redshift ($z>2$) ones, trace the star formation of the universe in an unbiased way. The high global star formation rate (SFR) at redshifts larger than 6 (Hopkins \& Beacom 2006; Yan et al. 2009) indicates the possibility of high-redshift GRB production and of the detection of host galaxies. The work of Y\"{u}ksel et al. (2008) and Kistler et al. (2009) suggests a link between star formation and GRB production in the high-redshift universe, in which the GRB luminosity function is involved. Moreover, the evolution of the GRB luminosity function has been investigated by Salvaterra et al. (2009b). All of this evidence provides a strong motivation to study the intrinsic link between SFR and GRB production and the possible evolutionary properties of GRBs and their hosts. The grains and metals produced by the host galaxy affect the GRB afterglow emission. Thus, the GRB progenitors and their environments can be probed through the absorption features of GRB afterglows. Heavy attenuation in the X-ray band was found in the statistical results of Campana et al. (2010), indicating a dense environment surrounding those GRBs. Meanwhile, it is also interesting to understand whether this strong attenuation intrinsically evolves with redshift. On the other hand, the characteristics of the corresponding absorption in the optical band are still under debate. Although an approximate dust extinction law for GRB host galaxies has been given by Chen, Li \& Wei (2006) and Li et al. (2008), in order to explain the dust obscuration, and especially to interpret some X-ray-detected but optically faint bursts (the so-called dark bursts; Akerlof \& Swan 2007; Kann et al. 2007; Perley et al. 2009), the physical origin associated with star formation and galactic evolution should be studied in a unified scenario. In this paper, we specify a physical model of star-forming and metal-poor galaxies as the hosts of long GRBs, exploiting the physical recipes of Granato et al. (2004). In the general scenario of Granato et al. (2004), at each redshift bin, the SFR and galaxy mass in a given dark halo potential well are calculated, including the effects of the kinetic feedback from supernovae and the central black hole. Under this framework, the different evolutionary stages of galaxies and of central black holes with different physical conditions have been investigated (e.g., Cirasuolo et al.
2005 on the properties of E/S0 galaxies; Lapi et al. 2006 on the active galactic nucleus luminosity function; Granato et al. 2006 on submillimeter galaxies). In particular, Mao et al. (2007) calculated the UV luminosities and the associated dust attenuation in star-forming and metal-poor galaxies, and Lapi et al. (2008) estimated the long-GRB progenitor rates and redshift distribution. Since updated X-ray/optical observations of GRB afterglows and host galaxies have been performed sequentially by Castro Cer\'{o}n et al. (2008), Evans et al. (2009), Savaglio, Glazebrook \& Le Borgne (2009), Levesque et al. (2009a) and Fynbo et al. (2009), it is necessary, in this context, to further compare some properties calculated by our model with these updated observational data. We extend the former calculation of Mao et al. (2007), attempting to understand the physical origin of long-GRB production and of the GRB environment; in particular, we show that some properties of the afterglow emission and of the GRB hosts exhibit a possible intrinsic cosmological evolution. Throughout the paper, we adopt the cosmological parameters $h=0.7$, $\Omega_M=0.3$, and $\Omega_\Lambda =0.7$. \section{Model Predictions} \subsection{Model Review} In the following we briefly report some key aspects of our framework of star formation in protogalaxies (see also Appendix A of Mao et al. 2007 and Lapi et al. 2008 for details). In general, the star formation process and the growth of the central black hole take place inside a given virialized dark halo of mass $M_{halo}$. The cooling gas falls toward the center of the dark halo and forms stars and the galaxy; meanwhile, it is heated by the activity of the central black hole. Thus, the total infalling gas $\dot M_{inf}=-\dot M_{cond}-\dot M^{BH}_{inf}$ includes two parts: the gas condensing toward the center of the dark halo, $M_{cond}$, and the gas removed by the central black hole activity, $M^{BH}_{inf}$. The condensation timescale $t_{cond}$ is the maximum of the dynamical timescale and the cooling timescale at the halo virial radius. The cold gas then evolves as $\dot M_{cold}=\dot M_{cond}-(1-R)\dot M_{\ast}-\dot M_{cold}^{SN}-\dot M_{cold}^{BH}$, where $\dot M_{\ast}=M_{cold}/t_{\ast}$ is the SFR and $R$ is the fraction of gas returned to the cold component by evolved stars. Adopting the initial mass function (IMF) of Romano et al. (2002), we have $R\sim 0.3$; $\dot M_{cold}^{SN}$ and $\dot M_{cold}^{BH}$ are the feedback terms from supernovae and the central black hole, respectively. Therefore, with the scaling approximation, the SFR is $\dot M_{\ast}(t)=M_{inf}(0)(e^{-t/t_{cond}}-e^{-s\gamma t/t_{cond}})/[t_{cond}(\gamma-1/s)]$, where $t$ is the evolutionary time, $\gamma=1-R+\beta_{SN}$, $\beta_{SN}$ is the ratio between the supernova feedback and the SFR, and $s\sim t_{cond}/t_{\ast}\sim 5$. In a virialized dark matter halo, the total gas $M_{inf}(0)$ is about 18\% of the dark halo mass. The condensation timescale can be estimated as $t_{cond}=4\times10^8((1+z)/7)^{-1.5}(M_{halo}/10^{12}M_\odot)^{0.2}~yr$. The central black hole effectively quenches the star formation in the halo after a time of about $t_{BH}=2.5\times 10^8((1+z)/7)^{-1.5}F(M_{halo}/10^{12}M_\odot)~yr$, where $F(x)=1$ for $x\ge 1$ and $F(x)=x^{-1}$ for $x\le 1$.
In other words, the cooling gas inside the virialized dark halo forms stars and the galaxy; the star formation process begins and persists at a relatively high rate for a few times $10^8$ yr, until the central seed black hole has grown into a supermassive black hole and shines as a quasar. After that, the central black hole releases kinetic feedback, heats the cold gas, and quenches the star formation. In this paper, GRB hosts are taken to be young, star-forming galaxies. During the formation time of a few times $10^7$ yr, the star formation process is violently ongoing. The masses of these host galaxies are in general less than $10^{10}M_{\odot}$ (Savaglio et al. 2009). For these GRB host galaxies, three physical inputs, emphasized below, determine the whole recipe: (1) redshift $z$: at different redshifts, the star formation and galactic evolution processes are different; (2) dark halo mass $M_{halo}$: as Mao et al. (2007) and Lapi et al. (2008) pointed out, the host dark halos in which GRBs occur are relatively small, usually less than $10^{12} M_{\odot}$. In this paper, we select $5\times 10^{11} M_{\odot}$ as a reference value; this is consistent with the simulation results of Courty et al. (2007) and Campisi et al. (2009); (3) evolutionary time $t$: the GRB host galaxy is in the initial stage of galaxy evolution; this initial time $t$ is about a few times $10^7$ yr, less than a few times $10^8$ yr. This value is supported by the observations of Th\"{o}ne et al. (2008) and Han et al. (2010). Note that at this stage the central black hole seed has not yet grown into a supermassive black hole; thus, the quasar feedback can be ignored in our calculations. The evolutionary time $t$ can also be roughly identified with the starburst time of a starburst galaxy, and the central black hole activity has a negligible effect on the galactic evolution. Thus, once these three inputs are given, the SFR in the GRB host galaxies is determined. \subsection{Results} We begin the procedure with the calculation of the SFR and galactic mass of the hosts; the B-band absolute magnitudes of the hosts can then be derived from the empirical relation of Savaglio et al. (2009). We can also obtain the metallicity distribution by applying the mass-metallicity relation of Savaglio et al. (2005). For the conversion from the UV-band attenuation $A_{uv}$ to the dust absorption $A_v$, we follow the recipes of Mao et al. (2007), in which the results of Calzetti et al. (2000) were adopted, as they are suitable for high-redshift star-forming galaxies. We further convert $A_v$ to the X-ray column density $N_{H,x}$ using the average value obtained by Schady et al. (2010). With the SFR and metallicity properties, the $A_v$ distribution can be derived as well. \subsubsection{Star Formation and Metallicity of GRB Host Galaxies} As we assume that long GRBs occur inside young, star-forming galaxies, the star formation process plays a key role in the environment of GRB production. With the reference values of $5.0\times 10^{11}M_\odot$ for the dark halo mass and $5.0\times 10^7~yr$ for the galactic evolution time, we have reproduced the SFR at each redshift; the stellar mass of the host galaxy can be derived as the SFR multiplied by the galactic formation timescale. The galactic formation timescale is defined as $t_g=t\cdot ((1+z)/7)^{-1.5}(M_{halo}/10^{12}M_\odot)^{0.2}~yr$, which has the same exponents as the condensation timescale.
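To make the recipe concrete, the following Python sketch (added here for illustration) evaluates the SFR and timescale formulae quoted above. The value of $\beta_{SN}$ is not given numerically in the text, so the number used below is a placeholder assumption; the normalization $M_{inf}(0)=0.18\,M_{halo}$ follows from the 18\% gas fraction quoted above.

\begin{verbatim}
import numpy as np

def t_cond(z, M_halo):
    """Condensation timescale [yr]; M_halo in M_sun."""
    return 4e8 * ((1.0 + z) / 7.0)**-1.5 * (M_halo / 1e12)**0.2

def sfr(t, z, M_halo, R=0.3, beta_SN=0.5, s=5.0):
    """SFR [M_sun/yr] at evolutionary time t [yr]; beta_SN is a placeholder."""
    gamma = 1.0 - R + beta_SN
    tc = t_cond(z, M_halo)
    M_inf0 = 0.18 * M_halo             # total gas ~ 18% of the halo mass
    return (M_inf0 * (np.exp(-t / tc) - np.exp(-s * gamma * t / tc))
            / (tc * (gamma - 1.0 / s)))

# Reference case of this section: M_halo = 5e11 M_sun, t = 5e7 yr
for z in (1, 2, 4, 6):
    print(f"z = {z}: SFR ~ {sfr(5e7, z, 5e11):.0f} M_sun/yr")
\end{verbatim}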
The SFRs of GRB host galaxies at each redshift are shown in Fig. \ref{f1}, together with the observational SFR values, which spread over a wide range, from 0.01$M_\odot/yr$ to about $10M_\odot/yr$, within the redshift bin $0<z<1$. Our model generally predicts larger SFR values at redshifts above 1. The relation between the SFR and the stellar mass of the GRB host galaxies is given in Fig. \ref{f2}. This possible correlation was also mentioned by Savaglio et al. (2009). In our model, this correlation at a given redshift and a given dark halo mass arises from the growth of the host protogalaxies over the corresponding galactic formation timescale. However, no straightforward relation is shown by the observational data in Fig. \ref{f2}. We note that within different dark halos the SFRs and the stellar masses are different. Furthermore, some galaxies with relatively large masses at lower redshift may have experienced two or more starburst episodes during their lifetimes. In particular, it is easily seen in the plot that the infrared-selected host galaxies have larger stellar masses. This complicated situation indicates that at low redshift the GRB host galaxies may not follow a monolithic evolutionary process; starbursts triggered by merging or interaction can happen as well. A further discussion of host galaxies in the low-redshift universe is given in Section 3. In the work of Courty et al. (2007), the ratio between the SFR and the B-band luminosity of GRB host galaxies was investigated with the observational data of Christensen et al. (2004). Here, we adopt the observational results of Savaglio et al. (2009). Assuming a correlation between the SFR and the B-band absolute magnitude, we find that the data are well described by the scaling relation $\log SFR=-(0.36\pm 0.01)M_B-(6.72\pm 0.27)$. Using this scaling relation, we obtain the B-band absolute magnitude of GRB host galaxies in each redshift bin; the results are shown in Fig. \ref{f3}. It was suggested by Malesani et al. (2009) that GRB host galaxies could be brighter at higher redshift. In our model, this follows from the intrinsic redshift distribution of the SFR. To investigate the possible metallicity distribution, we assume that the mass-metallicity relation and its redshift evolution (Savaglio et al. 2005) are valid for GRB host galaxies as well; after calculating the stellar masses of the GRB hosts, we obtain the metallicity evolution shown in Fig. \ref{f4}. The metallicity values of the host galaxies slightly decrease toward higher redshift. This finding is consistent with that obtained by Li (2008). We caution that a metal-poor environment is not a necessary input condition of our model; thus, metallicity may not be essential for GRB production. Further metallicity estimates for GRB host galaxies are given in Section 3. \subsubsection{Afterglow Absorptions} The X-ray Telescope (XRT), one of the instruments on board the {\it Swift} satellite, has supplied important X-ray data in the 0.3-10 keV band for GRB research. The Swift-XRT analysis has been performed automatically, and the spectral results for Swift-observed GRBs have been presented by Evans et al. (2009). Usually the X-ray spectrum is fitted by an absorbed power law. Thus, the X-ray photon index and the corresponding neutral hydrogen column density $N_{H,x}$ of each GRB can be obtained.
We select the $N_{H,x}$ value of each redshift-measured GRB and plot these values in Fig. \ref{f5}. In the model described by Mao et al. (2007), the UV-band absorption $A_{UV}=0.35(\dot M_\ast/M_\odot~yr^{-1})^{0.45}(Z/Z_\odot)^{0.8}$ is a function of the SFR and the metallicity. Following the calculation of Mao et al. (2007), we convert the UV attenuation to $E(B-V)$ using $E(B-V)=A_{UV}/11$ from Calzetti et al. (2000). The results are in agreement with the observations (see Fig. 2 of Mao et al. 2007). With $R_v=3.1$, we obtain the dust attenuation $A_v$. With the data observed by Swift-XRT and the Swift-UV/Optical Telescope (UVOT), Schady et al. (2007, 2010) modeled the spectral energy distributions and derived the ratio between $N_{H,x}$ and $A_v$. As the ratios derived by Schady et al. (2010) may vary with redshift, assuming a linear relation between redshift and the ratio on a logarithmic scale, we perform a linear regression on the data and obtain the optimized relation $\log(N_{H,x}/A_v~[10^{21}\,cm^{-2}])=1.24\log(1+z)+0.79$ with an average standard deviation of 0.37. We then use this relation to convert $A_v$ to $N_{H,x}$ and compare the results to the X-ray absorption data\footnote{As the XRT spectra are processed with the standard software XSPEC, in which the metallicity is fixed at the solar value, the observational data and the calculations of $N_{H,x}$ are all normalized to solar metallicity.} in Fig. \ref{f5}, panel (a). However, we note that selection effects are included in the results of Evans et al. (2009) and Campana et al. (2010). As pointed out by Campana et al. (2010), at redshifts larger than 4 the intrinsic X-ray emission suffers lower absorption; thus, X-ray afterglows with low X-ray column densities are hard to identify with Swift-XRT. On the other hand, as we use the data of Schady et al. (2010), although it was claimed that in general the selection effects on the distribution of host column densities are not significant, we investigate the possible selection effects on our results as follows. First, we check the possible $N_{H,x}-z$ and $A_v-z$ relations from the data of Schady et al. (2010). The correlation between $N_{H,x}$ and redshift has a coefficient $r=0.67$ with a null-hypothesis probability of 0.0006; this relation could be due to the selection effect mentioned above. We do not find any relation between $A_v$ and redshift. As $N_{H,x}$ increases with redshift, $N_{H,x}/A_v$ also increases with redshift. Second, to avoid this selection effect, we use the average value $<N_{H,x}/A_v>=3.3\times 10^{22} cm^{-2}$ given by Schady et al. (2010) to convert $A_v$ to $N_{H,x}$ again. We plot the results in Fig. \ref{f5}, panel (b). By using the mean value of $N_{H,x}/A_v$, the selection effect can be effectively suppressed. After this suppression, our model results still show a slight trend of X-ray absorption evolution. This evolutionary trend may be intrinsic. In our model, the X-ray absorption originates from the SFR, which evolves with redshift as $SFR\sim (1+z)^{2.71}$. Under the assumption of solar metallicity, the intrinsic X-ray attenuation is then $N_{H,x}\sim (1+z)^{1.22}$. Therefore, we conclude that the redshift evolution of the SFR is the dominant reason for the X-ray attenuation evolution shown in Fig. \ref{f5}(b).
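The chain from SFR to X-ray column density described above can be written compactly; the following Python sketch (added for illustration; the example SFR and metallicity are arbitrary) strings together the relations quoted in this section.

\begin{verbatim}
import numpy as np

def A_uv(sfr, Z_rel):
    """UV attenuation from SFR [M_sun/yr] and metallicity Z/Z_sun."""
    return 0.35 * sfr**0.45 * Z_rel**0.8

def A_v(sfr, Z_rel, R_v=3.1):
    """V-band extinction via E(B-V) = A_UV/11 and A_v = R_v E(B-V)."""
    return R_v * A_uv(sfr, Z_rel) / 11.0

def NHx_over_Av(z, use_mean=False):
    """N_Hx/A_v [cm^-2]: the mean value, or the z-dependent fit above."""
    if use_mean:
        return 3.3e22
    return 1e21 * 10**(1.24 * np.log10(1.0 + z) + 0.79)

av = A_v(sfr=30.0, Z_rel=0.5)          # an arbitrary illustrative host at z = 3
print(f"A_v  = {av:.2f} mag")
print(f"N_Hx = {av * NHx_over_Av(3.0):.2e} cm^-2 (fitted ratio)")
print(f"N_Hx = {av * NHx_over_Av(3.0, use_mean=True):.2e} cm^-2 (mean ratio)")
\end{verbatim}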
If instead we use the linear relation between $N_{H,x}/A_v$ and redshift, which means that the possible selection effects are included, we obtain the final results shown in Fig. \ref{f5}(a). We see that the intrinsic evolution plus the selection effects fit the observational data of Evans et al. (2009) and Campana et al. (2010) well. From the analysis above we clearly see that the final GRB X-ray absorption results are calculations of the intrinsic SFR redshift evolution, modified by the variation of $N_{H,x}/A_v$ with redshift; the latter could be due to the selection effect. From Fig. \ref{f5}, we see that the observational data have a large scatter. On the other hand, our model provides different values for different dark halo masses and evolutionary times. Therefore, we also conclude that large absorption is due to a longer galactic evolution time within a massive dark halo, while small attenuation is due to a shorter galactic evolution time within a smaller dark halo. From the theoretical point of view, we confirm that the absorption arises in the local environment of the GRB, as suggested from data analysis by Campana et al. (2010), Nardini et al. (2009), and Zheng et al. (2009). The quantity of neutral gas in the host galaxies can be obtained from optical spectra. From measurements of Ly$\alpha$ absorption, Fynbo et al. (2009) established a sample in which 33 values of the neutral hydrogen column density $N_{H,opt}$ are derived. These values range from $10^{17}cm^{-2}$ to $10^{23}cm^{-2}$ (see Fig. 10 of Fynbo et al. 2009), while the true distribution of $N_{H,opt}$ may extend to higher column densities. A damped Ly$\alpha$ system with a neutral hydrogen column density exceeding $2\times 10^{20}cm^{-2}$ can form stars and build a protogalaxy (see the recent simulations by Pontzen et al. 2009) and could thus be a GRB host. By contrast, thin clouds with smaller neutral hydrogen column densities may be intervening along the line of sight between the observer and the GRB site; such thin clouds, with column densities less than $\sim 10^{20}cm^{-2}$, may not be related to the GRB host. Moreover, in this paper we assume that GRB hosts are rich in neutral gas. Therefore, we only select the $N_{H,opt}$ values larger than $10^{20}cm^{-2}$ and compare them with the corresponding X-ray absorption values $N_{H,x}$. We find the relation between X-ray absorption and optical neutral gas shown in Fig. \ref{f6}: $\log N_{H,x}=(0.49\pm0.04)\log N_{H,opt}+(11.3\pm 0.9)$. The linear correlation coefficient is 0.58, with a probability of 0.001. Through this weak relationship, one may look for a trace of the possible cosmic evolution of the neutral gas $N_{H,opt}$, similar to the evolution of the X-ray $N_{H,x}$ in Fig. \ref{f5}. Assuming that GRBs are unbiased tracers of star formation at high redshift, this possible $N_{H,opt}$ distribution may provide an interpretation of the observations of HI gas evolution by Prochaska \& Wolfe (2009). However, we caution that the relation between X-ray absorption and optical neutral gas may have large uncertainties, owing to the limited redshift range, from 2 to 3. A complete sample is required to investigate this relation in the future. In our model, the dust absorption $A_v$ is a function of the SFR and the metallicity. Combining the effects of both, we obtain the redshift distribution of dust absorption shown in Fig. \ref{f7}.
We see that the $A_v$ values from our model increase slightly with redshift. In the data of Schady et al. (2010), we do not find any prominent evidence of $A_v$ variation. However, Kann et al. (2007) claimed that the $A_v$ value decreases with increasing redshift. After comparing the two data sets given by Schady et al. (2010) and Kann et al. (2007), we find that they are not consistent with each other: some $A_v$ values differ strongly for the same burst. The details are listed in the caption of Fig. \ref{f7}. At high redshift, the values of Kann et al. (2007) are lower than those of Schady et al. (2010), while at low redshift the values of Kann et al. (2007) are larger than those of Schady et al. (2010). In general, low-mass stars take a long time to evolve to the asymptotic giant branch (AGB) phase and to produce dust; thus, the AGB population dominates dust production only in the local universe. It has been suggested that at high redshift the dust factories are supernova explosions. Recently, however, the AGB population has been found to contribute to dust production in the high-redshift universe as well (Valiante et al. 2009): dust production is dominated by supernovae at the beginning of the evolution, but for times larger than $3\times 10^7$ yr the dust contribution from AGB stars increases. From the calculation of Valiante et al. (2009), we see that at a time of $1.0\times 10^8$ yr the AGB dust production is still 10 times lower than that of supernovae. Thus, in our paper, including the AGB dust production increases the dust extinction $A_v$ by about 8\%. At a time of $3\times 10^8$ yr, the AGB dust production equals the supernova dust production; we then estimate from our model that the total dust extinction $A_v$ is 1.7 times the original value, in which only supernova production is included. Therefore, we clearly see that both AGB stars and supernovae are the origin of dust production at late evolution times, larger than $10^8$ yr. Finally, we cannot ignore the selection effect: at high redshift, GRBs and hosts with high absorption values are difficult to detect with optical telescopes. \section{Discussions} Under the framework of the galaxy formation scenario, Lapi et al. (2008) predicted the GRB progenitor rate and redshift distribution. In this paper, without using information on GRB rates and the cosmological star formation density, we attempt to reveal some properties of GRBs and their host galaxies that have intrinsic redshift distributions. The distributions of these properties with redshift are found to originate from the star formation in star-forming galaxies. Given a proper galactic evolutionary time and a reasonable dark halo mass, the final results are obtained by the model calculation and compared with various kinds of observational data. At high redshift, a GRB host galaxy has plenty of neutral gas and undergoes violent star formation. After the short stellar-evolution phase, metals and dust are released by massive stars; thus, the optical and X-ray GRB emission suffers strong local attenuation at high redshift. The star formation activity, evolving from relatively massive hosts at high redshift to dwarf galaxies at low redshift, is consistent with the so-called downsizing scenario (e.g., Heavens et al. 2004). However, at lower redshift the situation becomes more complicated. From the morphological statistics of Conselice et al. (2005) and Wainwright et al.
(2007), GRB hosts present a broad diversity of galaxy types. About one-third of the host galaxies in the sample of Savaglio et al. (2009) are mergers, while in our model the merging and interaction processes are not taken into account. In fact, Conselice et al. (2005) found that the GRB hosts at $z>1$ differ from those at $z<1$ in terms of light concentration and morphological size. From the study of the galaxy mass distribution, GRB hosts tracing star formation might be biased at low redshift (Kocevski et al. 2009). A further complication is that the hosts at $z<1$ are not representative of the general galaxy population (Levesque et al. 2009a). Thus, the properties of these low-redshift GRB hosts presented in this paper cannot be reproduced by any monolithic process. At least some low-redshift galaxies may undergo multiple star-forming episodes during their lifetimes, and GRB production can accompany any single starburst event. From the analysis in this paper, we see that the absorption of the GRB X-ray and optical emission is relatively strong. The strong intrinsic attenuation of GRB host galaxies may produce some dark bursts, defined by the index $\beta_{ox}<0.5$, where $\beta_{ox}$ is the flux density ratio between the optical and X-ray bands (Jakobsson et al. 2004). Rol et al. (2005) proposed several extinction origins from their preliminary results. From our calculations, we see that heavy attenuation may occur for the following three reasons: (1) the local environment of the host is metal-enriched, i.e., the metallicity is high, and/or the host galaxy resides in a massive dark halo larger than $10^{12}M_\odot$ and may thus have strong absorption. For example, at redshift 2.5, with $Z=1.0Z_\odot$, halo mass $M_{halo}=5.0\times 10^{12}M_{\odot}$, and a galactic evolution time of $1.0\times 10^8~yr$, we have a dust extinction $A_v=1.0$ and a corresponding X-ray absorption $N_{H,x}=4.7\times 10^{22}cm^{-2}$; (2) the dust and metals surrounding the GRB in the host galaxy are distributed inhomogeneously; there can be heavy absorption along the line of sight, while in other directions the absorption is slight. In our model, we assume that $A_v$ and $N_{H,x}$ are measured locally and do not change significantly if the dust and gas extend out to a few tens to hundreds of pc from the burst (Perna \& Lazzati 2002; D'Elia et al. 2009); however, if the observed optical extinction is due to grain absorption far beyond this local region of the GRB, the $A_v$-$N_{H,x}$ correlation obtained by Schady et al. (2007, 2010) may be invalid and our calculations would be strongly biased; (3) as mentioned in Section 2.2.2, the dust produced by the AGB population at high redshift should be taken into account. To further understand the metal production of the GRB environment, we roughly re-estimate the metallicity of GRB hosts within our framework. The mass of metals is $M_{metal}=SFR\cdot f\cdot f_{dep}\cdot M_{dust}/M_{star}$, where $f$ is the ratio of massive stars to all stars and $f_{dep}$ is the fraction of dust converted into metals. We take $f=0.47$, the value for stars with masses larger than $2M_\odot$ for our adopted IMF, and $f_{dep}=1.0$, meaning that all the dust can be converted into metals. The metallicity is defined by $Z=M_{metal}/M_{gas}$. From the SFR calculated by Granato et al. (2004) and Mao et al. (2007), as an example, at redshift 6 we obtain the metallicity $Z\sim 2.75\times 10^{-2}(M_{dust}/M_{star})$. If we take a supernova with a dust production of $10^{-3}$ solar masses (Pozzo et al.
2004), we obtain an upper limit on the metallicity of $Z\sim 10^{-3}Z_{\odot}$, which is lower than the measured value ($Z>0.02Z_\odot$) for GRB 050904 (Campana et al. 2007). If we instead take a dust mass of 0.08-0.3 solar masses per primordial massive supernova (Todini \& Ferrara 2001), we obtain a result consistent with the observation. The estimated Population I/II metallicities are lower than the observational values at high redshift, meaning that the imprints of primordial objects (Kawai et al. 2006), such as Pop III stars and mini-quasars, have to be included in the possible cosmic evolutionary properties of these GRB host galaxies. According to this estimate, the metal-enriched environment of the GRB host galaxy naturally explains the strong attenuation seen in the X-ray and optical band measurements. In our model, an initial galactic evolutionary time of about $10^7$ yr for the host galaxies is given, but the corresponding metallicity of about $Z\sim 0.3Z_\odot$ (Lapi et al. 2008) is not a necessary condition, since, as noted by Levesque et al. (2009b), low metallicity may not be required for a relativistic explosion. In our model, a massive dark halo above $10^{12}M_\odot$ can host a GRB galaxy in which the metallicity is relatively high, although most GRB host galaxies reside in dark halos with masses less than $10^{12}M_\odot$. On the other hand, a host galaxy with a top-heavy IMF, meaning that many more massive stars are involved, can produce more metals in a relatively short time during the galactic evolution phase. For instance, a Wolf-Rayet star with a mass of 80$M_\odot$ and an initial metallicity $Z=0.001$ can self-enrich the HII region (Kr\"{o}ger, Hensler \& Freyer 2006) and produce a GRB event (Eldridge et al. 2006). In this paper, we have calculated the SFR, galactic mass, and metallicity of GRB host galaxies. The absorption variations with redshift in the X-ray and optical bands have been presented as well. Some selection effects have been taken into account in our calculation; other observational biases should also be considered. All the redshift measurements come from optical observations, so some optically faint GRBs and host galaxies are missed. Moreover, at high redshift only the most luminous galaxies with high SFRs can be detected, indicating that some low-luminosity cases are not included. However, our calculations are based on the intrinsic star formation of GRB host galaxies. Thus, owing to all the selection effects and observational biases mentioned in the paper, the intrinsic properties of GRB afterglows and hosts obtained from the model calculations differ somewhat from those derived from observations. As the SFR evolution plays a dominant role in the calculations, star formation in the metal-poor environment at high redshift may, compared with the situation at low redshift, generally provide more powerful GRB explosions. Therefore, although an effective threshold is given by Kistler et al. (2009), we speculate that improving the sensitivity of the detectors on high-energy telescopes would not be very effective for catching more faint, high-redshift GRBs, since GRBs with low released energy are almost absent in the high-redshift universe. \acknowledgments We are grateful to Dr. R. Salvaterra and Dr. S. Campana for helpful discussions. We thank the referee for constructive suggestions. This work is supported by the following research grants (P.I.
Guido Chincarini): ASI grant Swift I/011/07/0, by the Ministry of University and Research of Italy (PRIN MIUR 2007TNYZXL), by MAE, and by the University of Milano Bicocca (Italy).
\section{Introduction}\label{sec:intro} Since the discovery of 51 Peg\,b by \citet{Mayor1995}, the first extrasolar planet orbiting a solar-like star, over 4800 exoplanets\footnote{See e.g., \href{https://exoplanetarchive.ipac.caltech.edu/}{https://exoplanetarchive.ipac.caltech.edu} (as of January 5, 2022).} have been detected, including almost 900 with the radial-velocity technique. Those distant worlds cover a broad diversity of orbital properties \citep{Udry2007,Winn2014}, expected to be fossil traces of the formation process of these systems and potentially linked as well to the properties and evolutionary stages of their host stars and to their environments. Models of planetary formation were at first developed based on the system we know best, the Solar System, and have evolved significantly in the past twenty years with the increasing flow of information derived from observations of exoplanet systems. The observed diversity of planet properties finds its origin in the physical processes at play, coupled with the local conditions during the formation of the system. Today, two main competing paradigms are proposed for planet formation: the core accretion model, a dust-to-planet bottom-up scenario \citep[e.g.,][]{Lissauer1993,Pollack1996,Alibert2005}, which can lead to the formation of gas giants in a few million years, and the disk gravitational instability \citep{Boss1997, Durisen2006}, which can form massive planets on a very short timescale of a few thousand years \citep[see][for reviews of those processes]{Helled2014,Raymond2014}. Both agree on the formation of substellar companions from the circumstellar accretion disk, but they differ in the initial environmental conditions in the disk and in the planet-disk and planet-planet interactions. \begin{table*}[t] \centering \begin{threeparttable} \caption{Planet-search programs monitoring evolved stars.} \begin{tabular}{|l|l|} \hline \hline Survey & References \\ \hline the Lick G- and K-giant survey & \citet{Frink2001, Hekker2006b} \\ the ESO planet search program & \citet{Setiawan2003a} \\ the Okayama Planet Search Program & \citet{Sato2005} \\ \hspace{15pt}with the collaborative survey "EAPS-Net" & \citet{Izumiura2005} \\ the Tautenburg Observatory Planet Search & \citet{Hatzes2005, Dollinger2007a} \\ Retired A-stars and their companions & \citet{Johnson2006} \\ the CORALIE \& HARPS search in open clusters & \citet{Lovis2007} \\ \hspace{15pt}with the follow-up program & \citet{DelgadoMena2018} \\ the Penn State Toruń Planet Search & \citet{Niedzielski2007} \\ \hspace{15pt}with the follow-up program Tracking Advanced Planetary Systems & \citet{Niedzielski2015b} \\ the BOAO K-giant survey & \citet{Han2010} \\ the Pan-Pacific Planet Search & \citet{Wittenmyer2011} \\ the Exoplanet aRound Evolved StarS project & \citet{Jones2011} \\ the Boyunsen Planet Search & \citet{Lee2011} \\ \hline \end{tabular} \label{tab:surveys} \end{threeparttable} \end{table*} Large planet-search surveys first focused on solar-type and very low-mass stars, leaving aside more massive stellar hosts, which are more complex to observe and study. As the impact of the stellar mass on planet formation is still debated, it is of great interest to study the population of planets around intermediate-mass stars, that is, stars in the $1.5-5\,M_{\odot}$ range. Such systems are especially useful to probe the two main competing formation models.
In the early phase of the process, the stellar mass seems to have little effect on the protoplanetary disk formation and evolution; however, after 3 million years, stars with masses $> 2\,M_{\odot}$ start showing significant differences compared to lower mass stars, such as stronger radiation fields and higher accretion rates \citep{Ribas2015}. These impact the evolution of protoplanetary disks significantly, and by $\sim$10\,Myr there are no more disks around those higher mass stars. The typical timescale of core accretion could thus become problematic for massive stars, which have shorter disk lifetimes \citep{Lagrange2000}. However, searching for planets orbiting main-sequence stars of intermediate masses (A to mid-F types) proves to be a challenge for Doppler searches, mainly because of the small number of absorption lines present in early-type dwarfs as a consequence of their high effective temperatures, and also because of the rotational broadening of the lines (typical rotational velocities of 50-200\,km\,s$^{-1}$ for A-type stars and 10-100\,km\,s$^{-1}$ for early-F stars; \citealp{Galland2005}). A method to extract the radial velocity from the spectrum in Fourier space was developed by \citet{Chelli2000} and then adapted and applied to early-type stars by \citet{Galland2005}. The typical radial-velocity uncertainties obtained were on the order of 100-300 and 10-50\,m\,s$^{-1}$ (normalized to a signal-to-noise ratio (S/N) = 200) for A- and F-type stars, respectively. With this technique, the team confirmed the existence of the known planet around the F7V star HD\,120136 (Tau Boo) announced by \citet{Butler1997}. The orbital parameters from \citet{Galland2005} were consistent with the values previously found. These results confirmed the accuracy of the computed radial velocities and the possibility to detect companions in the massive, giant planetary domain for A- and F-type stars with substantial $v\sin{i}$, using high-resolution, stable spectrographs such as HARPS. On the other hand, stars inflate during their evolution toward the red giant branch (RGB). The effective temperature thus decreases significantly, making many more absorption lines visible in the spectra. Moreover, the rotation of the star slows down, reducing the broadening effect on the lines and making them sharper. Those stars are thus suitable bright proxies for radial-velocity planet searches around intermediate-mass stars. The analysis and interpretation of the variability of radial-velocity time series of giant stars can, however, be challenging because of their intrinsic variability, which, moreover, can also be periodic \citep[e.g.,][]{Hekker2006a,Hekker2007b}. Disentangling stellar from potential planetary contributions represents a challenge in the search for long-period and low-mass companions around evolved stars. Bringing observational constraints on the formation and evolution of planetary systems relies principally on the determination of orbital parameters such as semi-major axes and eccentricities. It also relies on the knowledge of host star properties such as mass, radius, age, metallicity, and the abundances of individual elements. For giant stars, the mass and age are poorly constrained with the classical method of isochrone fitting because the evolutionary tracks in the HR diagram are too close to each other. The uncertainties on the observations (stellar magnitudes and colors) and the systematics of the models lead to typical relative uncertainties on the stellar masses of 80-100\%. 
\citet{Lovis2007}, followed by \citet{Sato2007a} and \citet{Pasquini2012}, overcame this difficulty. They studied giant star populations in open clusters, for which better ages can be determined. This led them to better mass estimates from stellar evolution models (\citealp{Lovis2007} using the Padova models at solar metallicity; \citealp{Girardi2000}). More recently, the unprecedented homogeneous photometric and astrometric data of GAIA DR2, covering the whole sky, allow for more precise age determinations \citep[e.g.,][derived parameters such as age, distance modulus, and extinction for a sample of 269 open clusters]{Bossini2019}. In the late 1980s, a few surveys monitored giant star activity to better understand the origin of the observed large radial-velocity variations of late giant stars, with periods from days \citep{Hatzes1994} to hundreds of days \citep{Walker1989,Hatzes1993,Hatzes1999} and amplitudes of hundreds of m\,s$^{-1}$ \citep{Udry1999}. Such variability can be explained by intrinsic stellar activity (such as oscillations or pulsations), by the presence of substellar companions, or by a combination of both. Several surveys searching for stable reference stars reported relatively small radial-velocity standard deviations for early giant stars: $\sigma_{RV} \leq$ 20\,m\,s$^{-1}$ for several of the 86 K giants followed by \citet{Frink2001} and of the 34 K giants observed by \citet{Hekker2006b}. These results show that giant stars are suitable for the detection of substellar companions with radial-velocity measurements. It is worth mentioning here the case of Gamma Cep \citep{Campbell1988}, which is also considered intrinsically variable despite its K1 giant spectral type. \citet{Frink2002}, as part of a radial-velocity survey of K giants \citep{Frink2001}, announced the detection of an 8.9\,M$_{jup}$ (minimum mass) companion orbiting the K2 III giant $\iota$ Draconis with a period of 536\,d and an eccentricity of 0.70, making it the first substellar companion discovered orbiting a giant star. Since then, several radial-velocity surveys have been launched to follow evolved stars with intermediate masses. The list of these programs is presented in \autoref{tab:surveys}. As of November 2020, they have led to the discovery of 186 substellar companions around evolved stars in 164 systems\footnote{The list was established from the NASA Exoplanet Archive, accessible at \href{https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls\&config=PSCompPars}{https://exoplanetarchive.ipac.caltech.edu}, by selecting the hosts with log\,g\,$\leq$\,3.5\,cm\,s$^{-2}$. The list thus contains giant and subgiant hosts.}. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{graph_M_V_VS_B-V_lims.pdf} \caption{Color-magnitude diagram from Hipparcos measurements \citep{ESA1997} for stars with parallax precisions better than 14\% (in gray), with a clear localization of our sample (in orange). The dashed lines represent the selection limits: absolute magnitude $M_V < 2.5$ (green) and the lower and upper $B-V$ limits at 0.78 and 1.06 (red). The planet host candidates presented in this paper (HD\,22532, HD\,64121, and HD\,69123) are highlighted in red.} \label{fig:hr_hipp} \end{figure} In this context, a survey of a volume-limited sample of evolved stars of intermediate masses, the CORALIE radial-velocity Search for Companions ArounD Evolved Stars (CASCADES), was initiated in 2006. 
The sample is presented in \autoref{sec:survey}, with the methods used to derive the stellar parameters described in \autoref{sec:methods}. \Autoref{sec:obs_analysis} describes the complete time series acquisition process and analysis, from the search for periodicities in the activity indicators to the Keplerian fitting of the radial velocities, leading to the orbital solutions of the three newly discovered planetary companions in \autoref{sec:results}. Additional detections and statistical analysis of the survey will be presented in a series of subsequent publications. Finally, in \autoref{sec:conclusion} we discuss some implications of the first discoveries and provide concluding remarks in the broader context of the population of giant stars hosting substellar companions. \section{The CASCADES survey}\label{sec:survey} \subsection{Goals and sample definition}\label{sub:sample_def} \begin{figure}[th!] \centering \includegraphics[width=1\columnwidth]{hist_Vmag.pdf}\\ \includegraphics[width=1\columnwidth]{hist_dist_GAIA_bailer18.pdf} \caption{Distributions of the CASCADES survey sample (filled orange histogram), compared with those of the published giant stars known to host planet companions (dashed histogram). The top panel displays the apparent magnitude (TYCHO-2 catalog; \citealp{Hog2000}) and the bottom one the distance (GAIA DR2; \citealp{Bailer-Jones2018}). The original 2006 CASCADES sample and its 2011 extension are differentiated by lighter and darker orange shades, respectively. The positions of HD\,22532, HD\,64121, and HD\,69123 are represented by red lines.} \label{fig:hist_v} \end{figure} In the context described above, in 2006 we (i.e., Christophe Lovis and Michel Mayor) launched a precise radial-velocity survey of evolved stars of intermediate masses, which we refer to as "giant stars" in this paper. The main motivation was to better understand the formation of planetary systems and their evolution around stars more massive than the Sun by extending the existing statistics on the properties of giant host stars and their companions. To conduct a well-defined statistical study, we chose the following criteria for the definition of the sample: \begin{figure}[th!] \centering \adjincludegraphics[width=1\columnwidth, trim={0 0 0 {-.042\height}},clip]{hist_rv_rms_raw.pdf}\\ \includegraphics[width=1\columnwidth]{hist_rv_rms_raw_zoom.pdf} \caption{Distribution of the radial-velocity dispersion observed for the stars in the sample. Top: Full range, in logarithmic scale. Bottom: Zoom on the smaller values, in linear scale. The positions of HD\,22532, HD\,64121, and HD\,69123 are represented by red lines.} \label{fig:hist_rms} \end{figure} We defined a volume-limited sample, $d\leq300$\,pc. The selection was done in 2005 from the Hipparcos catalogue \citep{ESA1997}. We only selected stars from the catalog with a precision on the parallax better than 10\,\%. To increase the statistics, the sample was extended in 2011 to targets with a parallax precision better than $14\,\%$. We selected stars with $M_V < 2.5$ and $B-V>0.78$, to avoid stars on the main sequence. We selected only early-type giants, with G and K spectral types and luminosity class III. To avoid later types, which are known to be intrinsically variable, we introduced a color cut-off at $B-V<1.06$. We avoided close visual binaries, for which we might have contamination by the secondary in the spectrograph fiber; the limit on the separation was set at $6''$. 
Finally, we selected stars observable by Euler, in the southern hemisphere, with a declination below $-25^{\circ}$, to be complementary to existing surveys in the north reaching down to $\delta=-25^{\circ}$. The criteria and the final sample of 641 stars are represented in \autoref{fig:hr_hipp}, which displays the Hipparcos color-magnitude diagram with the selected sample highlighted. In \autoref{fig:hist_v}, we show the distribution of stellar apparent magnitudes and distances for the sample. We paid particular attention to the potential bias induced by the criterion on the parallax precision when carrying out the statistical study of the sample. Because of its size, the timespan of the observations, and the quantity and quality of the measurements, the above-defined planet search program is expected to significantly improve our knowledge of planetary systems around giant stars. \subsection{Instrument description and observations} \label{sub:instru} Observations for this survey began at the end of 2006 and have been conducted since then with the CORALIE spectrograph on the 1.2-m Leonard Euler Swiss telescope located at La Silla Observatory (Chile). CORALIE is a 2-fiber-fed echelle spectrograph ($2''$ fibers on the sky). It covers the 3880-6810 Å wavelength interval with 68 orders. The spectral resolution of the instrument was originally 50\,000. The observations are performed in the so-called simultaneous thorium mode\footnote{A thorium-argon lamp illuminates fiber B at the same time as the star is observed on fiber A. Both spectra are recorded on the CCD.}. In 2007 and 2014, CORALIE went through two significant hardware upgrades to improve its overall performance, increasing the throughput of the instrument (a gain of 2 magnitudes) and raising the resolution to 60\,000. A Fabry-Pérot etalon was installed as well to replace the thorium-argon lamp used to track the variation of the spectrograph during the night. For simplicity, we refer to each dataset as {\footnotesize CORALIE-98} (or COR98) for the first version of the instrument, {\footnotesize CORALIE-07} (or COR07) for the 2007 upgrade, and {\footnotesize CORALIE-14} (or COR14) for the 2014 instrument upgrade. The instrumental precision evolved through these upgrades, from 5\,m\,s$^{-1}$ for COR98, to 8\,m\,s$^{-1}$ for COR07, and finally to 2.5-3.2\,m\,s$^{-1}$ for COR14. Complete information on instrumental aspects and the precisions reached are given in, for instance, \citet{Queloz2000,Segransan2010,Segransan2021}. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{graph_n_rv_VS_timespan_rv_rms_raw.pdf}\\ \caption{Number of measurements per star in the sample plotted against the respective time span of the observations. The points are color-coded by the radial-velocity rms. The star markers represent HD\,22532, HD\,64121, and HD\,69123.} \label{fig:pts_tspan} \end{figure} Considering the size of the sample and the first results on the intrinsic variability of giant stars obtained by earlier surveys, we tried to optimize the exposure time in our program to match the limit set by stellar variability. The radial-velocity time series of evolved stars show dispersions ranging from $\sim$ 5\,m\,s$^{-1}$ up to a few hundred m\,s$^{-1}$, with a mode at $\sim$ 15\,m\,s$^{-1}$ \citep{Lovis2004,Hekker2006b,Quirrenbach2011}. 
One of our objectives is also to test whether this intrinsic limit can be lowered by modeling stellar variability, and thus to be able to recognize low-amplitude planet signatures. To reach a sufficient precision ($\leq$ 2-3\,m\,s$^{-1}$), exposure times between 3 and 5 minutes are adequate for these very bright stars. The CASCADES survey is also interesting in the broader context of projects running on the telescope. The brightness of the stars in the sample makes them ideal back-up targets when weather conditions are unfavorable. Almost 15\,000 radial-velocity measurements have been obtained so far for the 641 stars of the sample. The study of the radial-velocity dispersion (see \autoref{fig:hist_rms}) confirms a distribution with a peak around 13\,m\,s$^{-1}$, with values as low as $\sim$ 4\,m\,s$^{-1}$. We also see a significant tail at higher values, with a secondary peak around 20\,km\,s$^{-1}$ corresponding to binary systems. Our observational effort is illustrated in \autoref{fig:pts_tspan}, with the three host stars presented in this paper (marked with a $\star$ symbol) being among the most observed in the sample in terms of duration and number of data points. The program is continuing, with the aim of obtaining a minimum of 20 points per star, so that a solid statistical analysis of the sample can be performed. \section{Determination of stellar properties}\label{sec:methods} \subsection{Spectroscopic parameters of the stars in the sample}\label{sub:spec_param} The analysis of high-resolution spectra can provide reliable stellar parameters such as the effective temperature $T_{eff}$, surface gravity $log\,g$, and iron metallicity ratio $[Fe/H]$ (which we refer to as metallicity in this paper for simplicity). \citet{Alves2015} presented a catalog of precise stellar atmospheric parameters and iron abundances for a sample of 257 G and K field evolved stars, all of them part of our sample, using UVES and CORALIE spectra. The approach, based on \citet{Santos2004}, uses the classic curve-of-growth method. The equivalent width of a set of Fe I and II lines is measured and their abundances are calculated. Then, the stellar parameters are derived when excitation and ionization balances are satisfied simultaneously under the assumption of local thermodynamic equilibrium. For a detailed description of the method and the results, we direct the reader to \citet{Santos2004,Sousa2014,Alves2015}. Before 2014, the CORALIE spectra obtained for precise RV measurements were polluted by the strongest lines of the thorium-argon spectrum from the calibration fiber, and thus they were not suitable for a precise spectroscopic analysis. Dedicated additional spectroscopic observations (without calibration) were then obtained for our sample stars. After the CORALIE upgrade in 2014, the calibration spectrum from the Fabry-Pérot etalon no longer polluted the stellar spectrum, and the spectra acquired for radial-velocity measurements can also be used for the spectroscopic analysis. We stacked the CORALIE-14 spectra of all stars in our sample to reach a high enough S/N (the median S/N of the master spectra is about 170), and we derived the spectroscopic parameters $T_{eff}$, $log\,g$, and $[Fe/H]$ using the ARES \citep{Sousa2007,Sousa2015} $+$ MOOG \citep{Sneden1973} methodology following \citet{Sousa2014}. For most of the stars, with T$_{eff}$\,<\,5200\,K, the iron line list presented in \citet{Tsantaki2013} was used, while for the hotter stars we used the standard line list presented in \citet{Sousa2008a}. 
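As a rough illustration of the spectrum-stacking step above, the following minimal sketch (in Python) shows the expected S/N build-up, assuming independent, photon-noise-limited exposures so that the individual S/N values add in quadrature; this is an idealization, and the number of exposures used in the example is hypothetical:
\begin{verbatim}
import numpy as np

def stacked_snr(snr_per_exposure):
    # S/N of a co-added spectrum: for independent, photon-noise-limited
    # exposures, the individual S/N values add in quadrature.
    snr = np.asarray(snr_per_exposure, dtype=float)
    return np.sqrt(np.sum(snr**2))

# Hypothetical example: six exposures at the typical per-exposure S/N of 70
# give sqrt(6) * 70 ~ 171, of the order of the median master S/N (~170).
print(stacked_snr([70] * 6))
\end{verbatim}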
We then compared these results with the ones presented in \citet{Alves2015} for the subsample of 254 common stars. \Autoref{fig:comp_param_stell} shows the good agreement found between the parameters obtained from the UVES and CORALIE spectra: in the case of metallicity, we observe an apparent positive offset of the order of the dispersion of the data around the 1:1 correlation, $\sim0.04$\,dex in favor of our estimation. More than 50\% of the subsample stars are inside the $1\sigma$ region and 90\% are inside $2\sigma$. These results confirm that reliable atmospheric parameters can be extracted from the CORALIE-14 spectra. \begin{figure*}[t!] \centering \includegraphics[width=.33\textwidth]{comp_Teff_tsantaki_VS_Teff_sousa19.pdf} \includegraphics[width=.33\textwidth]{comp_feh_tsantaki_VS_feh_sousa19.pdf} \includegraphics[width=.33\textwidth]{comp_logg_tsantaki_VS_logg_sousa19.pdf} \caption{Comparison plots of the spectroscopic parameters extracted from the CORALIE (this paper) and UVES \citep{Alves2015} spectra of the subsample of 254 stars in common. The black diagonal line represents the 1:1 correlation, and the red line represents the linear fit of the data. At the bottom of each figure, the residuals compared to the 1:1 correlation are shown, with their 1 and 2\,$\sigma$ dispersions represented by the shaded regions. Left: Effective temperatures are well correlated, with a dispersion of $\sim38$\,K. Middle: Metallicity ratio of iron [Fe/H] shows an apparent positive offset of the order of the dispersion of the data around the 1:1 correlation, $\sim0.04$\,dex in favor of our estimation. More than 50\% of the subsample is inside the $1\sigma$ region and 90\% inside the $2\sigma$ one. Right: Logarithm of the surface gravity shows a good correlation, with an offset of $\sim0.05\,cm\,s^{-2}$, smaller than the apparent dispersion of the data around the 1:1 correlation. More than 70\% of the subsample is inside the $1\sigma$ region.} \label{fig:comp_param_stell} \end{figure*} \begin{figure}[t!] \centering \adjincludegraphics[width=.49\columnwidth, trim={0 0 {.02\height} {-0.035\height}},clip]{comp_Teff_GAIA_VS_Teff_sousa19.pdf} \adjincludegraphics[width=.49\columnwidth, trim={{.02\height} 0 0 {-0.035\height}},clip]{comp_R_GAIA_VS_R_GAIAspec.pdf} \caption{Comparison plots of the photometric (from \citealp{Brown2018}) and spectroscopic (from this paper) determinations of the effective temperatures and the stellar radii. The black diagonal line represents the 1:1 correlation, and the red line represents the linear fit of the data. At the bottom of each figure, the residuals compared to the 1:1 correlation are shown, with their 1 and 2\,$\sigma$ dispersions represented by the shaded regions. Left: Effective temperatures seem to show a linear trend, but it is not significant compared to the dispersion of the data of $\sim110$\,K, inside which more than 85\% of the data are located. Right: Radii are in good agreement but show an apparent trend and an increasing dispersion for radii above $\sim15\,R_{\odot}$.} \label{fig:comp_spec_gaia} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{graph_evol_tracks_BaSTI.pdf} \caption{Positions in the Hertzsprung-Russell diagram of the subsample of 620 stars for which we derived spectroscopic parameters. The three host stars we focus on in the present paper are highlighted as red dots. We adopted the luminosities obtained with the method described in \autoref{sub:stellar_mass}. 
The evolutionary tracks are from the models of \citet{Pietrinferni2004} for different stellar masses ($1.0$, $1.2$, $1.5$, $1.7$, $2.0$, $2.5$, $3.0$, $4.0,$ and $5.0$ $M_{\odot}$, from bottom to top), computed at solar metallicity.} \label{fig:evoltrack} \end{figure} \subsection{Stellar luminosities, radii, and masses}\label{sub:stellar_mass} We derived the luminosity $L$ for the stars in our sample using the Gaia DR2 parallaxes corrected by \citet{Bailer-Jones2018}\footnote{\citet{Bailer-Jones2018} provided purely geometric distance estimates by using an inference procedure that accounts for the nonlinearity of the transformation (inversion of the parallax) and the asymmetry of the resulting probability distribution.}, $V$-band magnitudes from \citet{Hog2000}, and the bolometric correction relation $BC$ of \citet{Alonso1999}\footnote{Considering the short distance of the stars of the sample, the extinction was not taken into account.}. We then used this luminosity and the spectroscopic effective temperature $T_{eff}$ to compute the stellar radii using the Stefan-Boltzmann relation. The uncertainties on the radii were estimated using a Monte Carlo approach. We compared our temperatures and radii with the GAIA DR2 values and found them to be in good agreement, as illustrated in \autoref{fig:comp_spec_gaia}. The derived luminosities and spectroscopic effective temperatures are plotted in the Hertzsprung-Russell diagram in \autoref{fig:evoltrack}, together with the stellar evolutionary tracks at solar metallicity of \citet{Pietrinferni2004}\footnote{Available on the BaSTI database \href{http://basti.oa-teramo.inaf.it/index.html}{http://basti.oa-teramo.inaf.it.}}. Those were used to estimate the masses of our stars using the SPInS software \citep{Lebreton2020}\footnote{\href{https://dreese.pages.obspm.fr/spins/index.html}{https://dreese.pages.obspm.fr/spins/index.html}, which employs a global Markov Chain Monte Carlo (MCMC) approach taking into account the different timescales at various evolutionary stages and interpolation between the tracks.}. The approach compares the luminosity, effective temperature, logarithm of the surface gravity, and [Fe/H] of individual objects to theoretical evolutionary tracks and accounts for the observational errors in these four quantities. Giant stars are located in the area of the Hertzsprung-Russell diagram where individual evolutionary tracks are close to each other; thus, the derived precisions on the stellar masses might be overestimated. However, in our sample, the degeneracy between the horizontal branch (HB) and the RGB is not too pronounced. Comparisons with masses derived from detailed asteroseismic modeling (see \autoref{subsec:masses_astero}) show some small differences. These discrepancies mainly originate from the use of a fixed enrichment law in the grid of \citet{Pietrinferni2004}, such that the chemical composition of the stars of our sample is fully determined by the measured $\left[Fe/H\right]$ in our modeling using SPInS. This was not the case for the seismic modeling pipeline, where the additional constraints justified allowing for an additional free parameter. Thus, solutions with a chemical composition deviating from a fixed enrichment law, in which helium and metal abundances are tied together, were possible outcomes of the modeling procedure. 
This is particularly visible for the two stars with $[Fe/H]\sim-0.2$ studied in \autoref{subsec:masses_astero} (HD\,22532 and HD\,64121), for which the asteroseismic analysis reveals an initial helium abundance slightly higher than the solar one despite their subsolar metallicity. This situation of course cannot be reproduced by models with an initial helium abundance fixed by the metallicity, which then leads to an overestimation of the stellar mass determined by SPInS to compensate for the incorrect helium abundance and reproduce the observed location in the HR diagram. Nevertheless, we can consider that these fits still allow us to globally estimate the stellar masses of our sample of stars. The obtained distribution for all stars is shown in \autoref{fig:params_distrib}. The median of the distribution is found around $2.1$ $M_{\odot}$, corresponding to intermediate-mass stars. However, as mentioned above, some uncertainty on the evolutionary stage of a subset of our sample could have affected our determinations, especially if we consider deviations from a given chemical enrichment law. Indeed, some stars of our sample identified here as being around $\approx 2.0$ $M_{\odot}$ on the RGB could actually be low-mass stars in the red clump. This degeneracy could be lifted using asteroseismic observations of dipolar oscillation modes, which allow us to unambiguously determine the evolutionary stage of these stars \citep{Beck2011,Bedding2011}. \begin{figure*}[t!] \centering \adjincludegraphics[width=1\textwidth, trim={0 0 0 0},clip]{stellar_params_new.pdf} \caption{Distributions and relations between stellar parameters for our subsample of 620 stars. Top left: Metallicity distribution of the stars of our sample (colored in orange) compared with the same distribution for the $\sim1000$ stars (dashed histogram) in the CORALIE volume-limited sample \citep{Udry2000,Santos2001}. Top right: Distribution of the stellar masses obtained from track fitting. The corresponding kernel density estimation is overplotted in orange, using a Gaussian kernel. Bottom left: Mass vs. metallicity relation. Bottom right: Metallicity vs. logarithm of the surface gravity. The two black lines were drawn by eye and show the biases in the samples due to the $B-V$ cut-off. The red-dashed rectangle delimits the area of the potential unbiased subsample. The three planet hosts presented in this paper are represented by red dots.} \label{fig:params_distrib} \end{figure*} \Autoref{tab:sample_table} shows example lines of the complete set of stellar parameters for our sample, available online\footnote{Available at \href{https://cdsweb.u-strasbg.fr}{CDS} and \href{https://dace.unige.ch/catalogs/?catalog=CASCADES}{https://dace.unige.ch.}}. We also illustrate our results in \autoref{fig:params_distrib}. The metallicity distribution decreases with increasing [Fe/H] for [Fe/H]\,>\,0.0-0.1, similarly to the metallicity distribution \citep{Santos2001} of a large, volume-limited sample of dwarf stars in the solar neighborhood included in the CORALIE survey \citep{Udry2000}. We observe that our sample of giants lacks metal-rich and very metal-poor stars. This tendency has been observed in many studies \citep[e.g.,][]{Luck2007,Takeda2008,Ghezzi2010,Adibekyan2015a,Adibekyan2019}. It may be related to the fact that giants, most of them being more massive, are younger than their dwarf counterparts. 
They thus do not have time to migrate from the inner to the outer disk of the Galaxy during their short lifetimes \citep{Wang2013,Minchev2013}. \citet{Adibekyan2019} also addresses the role of the age-metallicity dispersion relation \citep{DaSilva2006,Maldonado2013}, as well as potential selection effects due to the $B-V$ color cut-off \citep{Mortier2013}, which excludes low-log\,g stars with high metallicity and high-log\,g stars with low metallicity. We illustrate this effect in \autoref{fig:params_distrib}, in the same way as \citet{Adibekyan2015a}, by drawing diagonal lines that show the biases in the sample due to the color cut-off. We also represent the area that would correspond to an unbiased subsample (red dashed rectangle). We will perform a detailed statistical study of the stellar parameters of our sample in future work. \subsection{Asteroseismic masses for the three planet hosts}\label{subsec:masses_astero} To go further and improve the mass estimation for the host stars, we performed a detailed seismic analysis of the TESS short-cadence photometric data \citep{Ricker2014}, following the methodology of \citet{Buldgen2019}. This asteroseismic approach has the considerable advantage of leading to a highly precise and accurate mass estimate independently of any stellar evolution models. We used the method to extract masses for the three stellar hosts presented in this paper (see \autoref{tab:stellar_params}) and thus obtain a better estimate of the minimum masses of their planetary companions. The seismic masses can also be used as a benchmark to assess the accuracy of the masses obtained from evolutionary models, which in this case appear to be overestimated by an offset of $\sim$0.3-0.4\,M$_\odot$ but are consistent within 3-4 $\sigma$. This aspect will be addressed in a forthcoming paper once more asteroseismic masses are available. In practice, the mass estimates we present here result from the combination of seismic inversions of the mean density with the values of the stellar radii derived from GAIA parameters. The seismic inversion of the mean density was carried out following the methodology of \citet{Buldgen2019} and validated on eclipsing binaries. This estimate still depends on the seismic data, as well as on the details of the determination of the radii from GAIA and spectroscopic data, such as the bolometric corrections and extinction laws. An in-depth description of the data extraction and seismic modeling, as well as an analysis of the orbital evolution and atmospheric evaporation of the planetary systems, can be found in a companion paper \citep{Buldgen2021}.\\ In \autoref{tab:stellar_params}, we summarize the spectroscopic parameters of the three stellar hosts announced in this paper and their masses derived from evolutionary tracks and asteroseismology. \begin{table*}[!ht] \caption{Example entries of the table of stellar parameters for the complete sample, available online at CDS.$^8$} \begin{center} \scalebox{0.562}{% \begin{tabular}{l*{19}{c}} \hline \hline HD & Sp. 
Type & $V$ & $B-V$ & $BC$ & $\pi$ & $d$ & $M_{V}$ & $Bp-Rp$ & $G$ & $T_{eff}$ & $log\,g$ & $[Fe/H]$ & $M_{*}$ & $L_{*}$ & $R_{*}$ \\ & & [mag] & [mag] & $BC$ & [mas] & [pc] & [mag] & [mag] & [mag] & [K] & [cm\,s$^{-2}$] & [dex] & [M$_{\odot}$] & [L$_{\odot}$] & [R$_{\odot}$] \\ & [1] & [2] & [2] & [3] & [4] & [5] & [2,4,5] & [4] & [4] & [6] & [6] & [6] & [7] & [2,3,4] & [2,3,4,6] \\ \hline 496 & K0III & 3.88 $\pm$ 0.01 & 1.00 $\pm$ 0.01 & -0.312 $\pm$ 0.016 & 24.20 $\pm$ 0.29 & 41.3 $\substack{+0.5 \\ -0.5}$ & 0.80 $\pm$ 0.03 & 1.218 $\pm$ 0.006 & 0.42 $\pm$ 0.03 & 4858 $\pm$ 41 & 2.56 $\pm$ 0.10 & -0.01 $\pm$ 0.03 & 1.93 $\pm$ 0.23 & 50.44 $\pm$ 1.46 & 10.03 $\pm$ 0.22 \\ 636 & K1/K2III & 5.29 $\pm$ 0.01 & 1.03 $\pm$ 0.01 & -0.303 $\pm$ 0.019 & 12.60 $\pm$ 0.08 & 79.2 $\substack{+0.5 \\ -0.5}$ & 0.80 $\pm$ 0.02 & 1.177 $\pm$ 0.005 & 0.47 $\pm$ 0.02 & 4879 $\pm$ 51 & 2.78 $\pm$ 0.15 & 0.19 $\pm$ 0.04 & 2.23 $\pm$ 0.09 & 50.47 $\pm$ 1.17 & 9.94 $\pm$ 0.24 \\ 770 & K0III & 6.54 $\pm$ 0.01 & 1.04 $\pm$ 0.02 & -0.317 $\pm$ 0.017 & 7.22 $\pm$ 0.04 & 137.9 $\substack{+0.7 \\ -0.7}$ & 0.84 $\pm$ 0.02 & 1.178 $\pm$ 0.003 & 0.54 $\pm$ 0.01 & 4845 $\pm$ 45 & 2.66 $\pm$ 0.10 & -0.08 $\pm$ 0.04 & 1.91 $\pm$ 0.17 & 49.14 $\pm$ 1.04 & 9.95 $\pm$ 0.21 \\ ... & & & & ... & & & & ... & & & ... & & & & ...\\ 224949 & K0III & 7.10 $\pm$ 0.01 & 0.99 $\pm$ 0.02 & -0.338 $\pm$ 0.013 & 5.73 $\pm$ 0.05 & 173.7 $\substack{+1.4 \\ -1.4}$ & 0.90 $\pm$ 0.02 & 1.183 $\pm$ 0.004 & 0.60 $\pm$ 0.02 & 4795 $\pm$ 32 & 2.49 $\pm$ 0.09 & -0.33 $\pm$ 0.03 & 1.30 $\pm$ 0.06 & 47.32 $\pm$ 1.07 & 9.97 $\pm$ 0.17 \\ \hline \end{tabular} } \begin{tablenotes} \scriptsize \item {[1]} - HIPPARCOS catalog \citep{ESA1997}, [2] - TYCHO-2 catalog \citep{Hog2000}, [3] - \citet{Alonso1999}, [4] - GAIA DR2 \citep{Brown2018}, [5] - \citet{Bailer-Jones2018}, [6] - this paper (see \autoref{sub:spec_param}), [7] - this paper, with evolutionary tracks from \citet{Pietrinferni2004}. \end{tablenotes} \end{center} \label{tab:sample_table} \end{table*} \begin{table*}[!ht] \centering \begin{threeparttable} \caption{Observed and inferred stellar parameters.} \begin{tabular}{lllccc} \hline \hline & & ref. & HD\,22532 & HD\,64121 & HD\,69123\\ TIC & & & 200851704 & 264770836 & 146264536\\ GAIA DR2 & & & {\tiny 4832768399133598080} & {\tiny 5488303966125344512} & {\tiny 5544699390684005248}\\ \hline Sp. 
Type & & [1] & G8III/IV & G8/K0III & K1III \\ $V$ & [mag] & [2] & 7.85 $\pm$ 0.01 & 7.44 $\pm$ 0.01 & 5.77 $\pm$ 0.01 \\ $B-V$ & [mag] & [2] & 0.89 $\pm$ 0.02 & 0.86 $\pm$ 0.02 & 1.02 $\pm$ 0.01 \\ $BC$ & & [3] & -0.250 $\pm$ 0.013 & -0.238 $\pm$ 0.012 & -0.318 $\pm$ 0.016 \\ $\pi$ & [mas] & [4] & 6.18~$\pm$~0.03 & 7.67~$\pm$~0.03 & 13.28~$\pm$~0.06 \\ $d$ & [pc] & [5] & 161.2 $\substack{+0.7 \\ -0.7}$ & 130.0 $\substack{+0.5 \\ -0.5}$ & 75.1 $\substack{+0.4 \\ -0.4}$ \\ $M_{V}$ & [mag] & [2,4,5] & 1.81 $\pm$ 0.01 & 1.87 $\pm$ 0.01 & 1.39 $\pm$ 0.01 \\ $Bp-Rp$ & [mag] & [4] & 1.087 $\pm$ 0.002 & 1.076 $\pm$ 0.004 & 1.183 $\pm$ 0.003 \\ $M_G$ & [mag] & [4] & 1.56 $\pm$ 0.01 & 1.60 $\pm$ 0.01 & 1.09 $\pm$ 0.01 \\ $T_{eff}$ & [K] & [4] & 5067 $\substack{+59 \\ -22}$ & 5066 $\substack{+58 \\ -60}$ & 4787 $\substack{+280 \\ -51}$ \\ & & [6] & 5038 $\pm$ 24 & 5078 $\pm$ 22 & 4842 $\pm$ 41 \\ $log\,g$ & [cm\,s$^{-2}$] & [6] & 3.09 $\pm$ 0.07 & 3.19 $\pm$ 0.06 & 2.86 $\pm$ 0.11 \\ $[Fe/H]$ & [dex] & [6] & -0.19 $\pm$ 0.02 & -0.21 $\pm$ 0.02 & 0.05 $\pm$ 0.03 \\ $M_{*}$ & [M$_{\odot}$] & [7] & 1.57 $\pm$ 0.07 & 1.64 $\pm$ 0.06 & 1.68 $\pm$ 0.09 \\ & & [8] & 1.20 $\pm$ 0.05 & 1.18 $\pm$ 0.05 & 1.43 $\pm$ 0.07 \\ $L_{*}$ & [L$_{\odot}$] & [2,3,4] & 18.80 $\pm$ 0.33 & 17.70 $\pm$ 0.30 & 29.51 $\pm$ 0.57 \\ $R_{*}$ & [R$_{\odot}$] & [2,3,4,6] & 5.69 $\pm$ 0.07 & 5.44 $\pm$ 0.07 & 7.72 $\pm$ 0.15 \\ \hline \end{tabular} \begin{tablenotes} \small \item {[1]} - HIPPARCOS catalog \citep{ESA1997}, [2] - TYCHO-2 catalog \citep{Hog2000}, [3] - \citet{Alonso1999}, [4] - GAIA DR2 \citep{Brown2018}, [5] - \citet{Bailer-Jones2018}, [6] - this paper (see \autoref{sub:spec_param}), [7] - this paper, with evolutionary tracks from \citet{Pietrinferni2004}, [8] - \citet{Buldgen2021}. \end{tablenotes} \label{tab:stellar_params} \end{threeparttable} \end{table*} \section{Data acquisition and analysis}\label{sec:obs_analysis} \subsection{Data acquisition and processing}\label{subsec:ts_obs} For each target, we collected several tens of radial-velocity measurements over a median time span of 13 years, with a typical S/N = 70 for an exposure time between 180 and 300\,s\footnote{Following the 2007 and 2014 upgrades, we have to fit a small radial-velocity offset between the three versions of the CORALIE instrument, the values of the offset depending on several aspects such as the considered star or the correlation mask used. We thus consider the three versions of CORALIE as three different instruments.}. \cref{tab:timeseries_hd22532,tab:timeseries_hd64121,tab:timeseries_hd69123} give the list of these measurements with their instrumental error bars. We first analyzed the radial-velocity time series using the radial-velocity module of the Data \& Analysis Center for Exoplanets (DACE) web platform,\footnote{\href{https://dace.unige.ch/radialVelocities/?}{https://dace.unige.ch/radialVelocities/?.}} which provides open access to a wide range of exoplanet observational and theoretical data, with the corresponding data visualization and analysis tools. The formalism of the radial-velocity data analysis implemented in DACE is described in Ségransan et al. (submitted, Appendix~A) and is mainly based on algorithms presented in \citet{Diaz2014} and \citet{Delisle2016, Delisle2018}. Our general approach for a periodic signal search is the following. For each time series, we follow an iterative process consisting of looking for successive significant dominant peaks in the periodograms of the corresponding radial-velocity residuals. 
At each step, the radial-velocity residuals are computed by readjusting the model composed of N independent Keplerians, potential linear or quadratic drift terms to fit long-term trends, the individual instrumental offsets, and additional noise terms. We fit a combination of white noise terms corresponding to the individual instrumental precisions\footnote{The instrumental precisions are well constrained for each version of CORALIE, calibrated on non-active stars: $\sigma_{COR98}=5.0\pm0.5$\,m\,s$^{-1}$, $\sigma_{COR07}=8.0\pm0.5$\,m\,s$^{-1}$, and $\sigma_{COR14}=3.0\pm0.5$\,m\,s$^{-1}$. Those values are used as priors on the instrumental noise terms in \autoref{sec:results}.} and a global noise term attributed to intrinsic stellar jitter. This approach gives us an estimate of how much noise can be attributed to stellar physics; however, one must be aware of the degeneracy between those two sources of noise, which is only partially lifted by using strong priors on the instrumental noise. The final error bars on the velocities correspond to the quadratic sum of the error computed by the data reduction software, the instrumental noise, and the stellar jitter. We proceeded with the periodicity search by computing the periodogram of the data in the $1-10\,000$\,day range\footnote{Using the algorithm implemented on DACE (see \citet{Delisle2020b}) and setting the upper bound of the periodogram at approximately twice the time span of the survey.} and using the false alarm probability (FAP) to assess the significance of the signal, following the formalism of \citet{Baluev2008}. Significant signals can have different origins, and they are discussed in \autoref{subsec:stell_lineprof}. \subsection{Stellar activity and line profile analysis}\label{subsec:stell_lineprof} Stellar activity in giant stars originates from different phenomena. Short-period modulations on the order of hours to days (first discussed by \citet{Walker1989,Hatzes1993,Hatzes1994}) are understood to be the result of solar-like radial pulsations (p, g, or mixed modes) \citep{Frandsen2002,DeRidder2006,Hekker2006a}. Concerning longer-period variations, mechanisms such as magnetic cycles \citep{Santos2010,Dumusque2011a}, rotational modulation of features on the stellar surface (star spots, granulation, etc.; \citealp{Lambert1987,Larson1993,DelgadoMena2018}), beating of modes, or a combination of all three are to be considered. Non-radial oscillations have also been discussed \citep{Hekker2006c} and confirmed by \citet{DeRidder2009,Hekker2010b} as a source of periodic modulation of the spectroscopic cross-correlation profile. Those modes can have lifetimes of up to several hundreds of days \citep{Dupret2009}. The careful monitoring of the spectral line profile via the cross-correlation function (CCF) and of the chromospheric activity indicators is essential to help distinguish between planetary signals and stellar-induced variations of the radial velocities. Our estimate of the star's radial velocity is based on the CCF technique \citep{Griffin1967,Baranne1979,Queloz2001}, which creates a sort of mean spectral line from the thousands of lines used in the correlation, with a significantly higher S/N than a single line. In order for stellar activity to significantly impact the CCF profile, and thus the radial-velocity value, it would have to affect the majority of the spectral lines. Such an effect could cause deformations in the line profile and possibly mimic a planetary signal. 
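To make the periodicity-search step of \autoref{subsec:ts_obs} concrete, the following is a minimal sketch in Python using astropy's Lomb-Scargle implementation and its analytic \citet{Baluev2008} false alarm probability. This is an illustration only, not the actual DACE algorithm of \citet{Delisle2020b}, and the time series below is synthetic:
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic radial-velocity series: t [days], rv and rv_err [m/s]
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 5000.0, 50))
rv = 40.0 * np.sin(2 * np.pi * t / 872.6) + rng.normal(0.0, 8.0, t.size)
rv_err = np.full_like(t, 8.0)

ls = LombScargle(t, rv, rv_err)
# Search periods between 1 and 10 000 days, as in the text
freq = np.geomspace(1.0 / 10_000.0, 1.0, 100_000)
power = ls.power(freq)

best = np.argmax(power)
fap = ls.false_alarm_probability(power[best], method='baluev',
                                 minimum_frequency=freq.min(),
                                 maximum_frequency=freq.max())
print(f"best period: {1.0 / freq[best]:.1f} d, FAP (Baluev): {fap:.2e}")
\end{verbatim}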
Computing the contrast, radial velocity, full width at half maximum (FWHM), and bisector inverse span (BIS), which are linked to the first four moments of the line profile, gives enough information to closely monitor the evolution of the profile along the time series \citep{Aerts2000}. Magnetic activity enhances the emission from the stellar corona and chromosphere, resulting in emissions in the X-ray and UV regions, as well as emissions in the cores of the \textit{H\&K Ca II} lines and H$\alpha$. The H$\alpha$ activity index is sensitive to solar prominences and chromospheric activity. The reversal emission in the line core of \textit{Ca II H\&K} (S-index) \citep{Wilson1978}, which measures the contributions from the stellar photosphere and chromosphere, and the $log\,R'_{HK}$ activity index, which measures the chromospheric contribution of the \textit{H\&K Ca} lines excluding the photospheric component, cannot be directly computed from the CORALIE spectra in a reliable way because of the low S/N in this part of the spectra. The time series and corresponding periodograms of those line profile quantities and of the H$\alpha$ chromospheric indicator \citep[following the method described in][]{Boisse2009,GomesdaSilva2011} are produced systematically to check for any signs of periodicity and a possible origin of the radial-velocity variations. Correlations between these indicators and the radial velocities were also checked. Causes such as stellar pulsations can be ruled out by comparing the behavior of line profiles from different spectral regions; for pulsating stars, the temporal and phasing behavior of the moments should remain the same for any spectral region (a signature should also be present in the BIS \citep{Hatzes1999}). For detailed examples, we invite the reader to consult the analyses of \citet{Briquet2001b} or \citet{Briquet2004}, which attempted to discriminate between stellar pulsation and rotational modulation (presence of stellar spots) as the source of observed periodic variability, using \textit{Si II} and \textit{He I} lines in slowly pulsating B stars. In the case of rotational modulation, the BIS and the S-index should vary in phase and with the same period as the radial velocities, a period that should also correspond to the stellar rotation period. However, phase shifts have been observed, for example, in the case of the G2 dwarf HD\,41248 \citep{Santos2014}. The star exhibits a 25\,day periodicity in the radial-velocity, FWHM, and $log\,R'_{HK}$ time series, probably explained by rotational modulation combined with a strong differential rotation of the star. We should, however, stress here that giant stars are still not fully understood, and we have to keep in mind that the absence of periodic signals in line-shape variations does not prove that the radial-velocity signal is induced by a planetary companion. It remains, however, our best interpretation of the observations. \section{Analysis of individual systems: Orbital solutions}\label{sec:results} Following the approach described in \autoref{subsec:ts_obs}, we analyzed the long time series of observations obtained for the three targets presented in this paper. The final parameters of each system were computed using the MCMC algorithm implemented in DACE (developed by \citet{Diaz2014,Diaz2016}) to probe the complete parameter space, with $1.6$ million iterations. 
We fit the following parameters for the Keplerian model: $\log{P}$ and $\log{K}$, to better explore ranges spanning several orders of magnitude with a uniform prior; $\sqrt{e}\cos{\omega}$ and $\sqrt{e}\sin{\omega}$, to obtain a uniform prior for the eccentricity; and finally $\lambda_0$, the mean longitude at the epoch of reference (i.e., $BJD=2\,455\,500$ [d]), with a uniform prior. We used a uniform prior for the COR07 offset of reference, and Gaussian priors for the relative offsets between COR07 and COR98/14: $\Delta\,RV_{COR98-COR07}$: $\mathcal{N}(0,4)$ m\,s$^{-1}$ and $\Delta\,RV_{COR14-COR07}$: $\mathcal{N}(14,4)$ m\,s$^{-1}$. We also used Gaussian priors for the instrumental noise: $\sigma_{COR98}$: $\mathcal{N}(5,1)$ m\,s$^{-1}$, $\sigma_{COR07}$: $\mathcal{N}(8,1.5)$ m\,s$^{-1}$, and $\sigma_{COR14}$: $\mathcal{N}(3,0.5)$ m\,s$^{-1}$. Finally, we used a uniform prior for the stellar jitter parameter. For all three targets, the fitted instrumental noise terms (on top of the photon noise) are in the range of the individual instrumental precisions. In the case of HD\,64121, the fitted stellar jitter clearly dominates over the instrumental precision, with a level of $\sim$17\,m\,s$^{-1}$. For each time series, we checked for periodicities and correlations in the activity-related products of the high-resolution spectra mentioned above. \subsection{HD 22532}\label{subsec:hd22532} For HD\,22532\footnote{TIC\,200851704; GAIA DR2 4832768399133598080.}, we detect a $\sim$873\,day periodic variation of the radial velocities which, fitted by a Keplerian model, corresponds to a planet on a quasi-circular orbit, 1.9\,au away from its star, with a semi-amplitude of 40\,m\,s$^{-1}$ corresponding to a minimum mass of 2.1\,M$_J$ (using $M_{\star}=1.2$\,M$_{\odot}$ from \autoref{tab:stellar_params}). We observe in the periodogram of the H$\alpha$ activity index of HD\,22532 (see the bottom periodogram in \autoref{fig:timeseries_hd22532}) a non-significant peak at $\sim$810 days, well below the 10\% FAP level and at the same level as the higher-frequency white noise. We checked for a linear correlation with the radial velocities, and the weighted Pearson coefficient was found to be as low as $R_{P}=-0.396\pm0.068$. We also computed the weighted Spearman's rank coefficient, $R_S=0.411\pm0.066$, which is also considered a low correlation. The important dispersion of the H$\alpha$ data points and the low significance of a period approximately 60 days shorter than the one detected in the radial velocities indicate that those variations are most likely neither at the origin of, nor linked to, the radial-velocity signal. \begin{figure}[!p] \centering \adjincludegraphics[width=1\columnwidth, trim={0 0 0 {-.004\height}},clip]{HD22532_mcmc_figs_new.pdf} \caption{Top panel: Radial velocities of HD\,22532 from CORALIE (COR98 in blue, COR07 in orange, and COR14 in green) with the best Keplerian model superimposed (solid line), and the corresponding residuals around the solution. Second panel: Phased radial-velocity solution. Third panel: Periodograms of the radial-velocity time series, of the residuals of the radial velocities after subtraction of the fitted periodic signal, and of the H$\alpha$ activity index time series. The red vertical line indicates the period of the orbital solution (872.6\,days). 
Horizontal lines are the FAP levels at 10\% (continuous), 1\% (dashed), and 0.1\% (dotted-dashed).} \label{fig:timeseries_hd22532} \end{figure} \begin{figure}[!pt] \centering \adjincludegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{HD64121_mcmc_figs_new.pdf} \caption{Same as \autoref{fig:timeseries_hd22532}, but for HD\,64121. The period of the best solution is 623.0\,days.} \label{fig:timeseries_hd64121} \end{figure} \begin{figure}[!pt] \centering \adjincludegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{HD69123_mcmc_figs_new.pdf} \caption{Same as \autoref{fig:timeseries_hd22532}, but for HD\,69123. The period of the best solution is 1193.3\,days.} \label{fig:timeseries_hd69123} \end{figure} \subsection{HD 64121}\label{subsec:hd64121} In the case of HD\,64121\footnote{TIC\,264770836; GAIA DR2 5488303966125344512.}, a $\sim$623\,day periodic variation is fitted by a Keplerian model. It corresponds to a planet on a low-eccentricity orbit, 1.5\,au away from its star, with a semi-amplitude of $\sim$55\,m\,s$^{-1}$ corresponding to a minimum mass of 2.6\,M$_J$ ($M_{\star}=1.18$\,M$_{\odot}$). The periodogram of the radial-velocity residuals, after subtraction of the fit, presents a non-significant peak at $\sim$1\,000\,days at the same level as the higher-frequency noise. HD\,64121 also exhibits a similar non-significant peak in the periodogram of the H$\alpha$ activity index (see the bottom periodogram in \autoref{fig:timeseries_hd64121}), at $\sim$550 days. The weighted Pearson correlation coefficient with the radial velocities was found to be non-significant, at $R_P=0.072\pm0.116$. We also computed the weighted Spearman's rank coefficient, $R_S=0.100\pm0.118$, which is also non-significant. We reach the same conclusion as for HD\,22532: those variations are most likely neither at the origin of, nor linked to, the radial-velocity signal. \subsection{HD 69123}\label{subsec:hd69123} Finally, HD\,69123\footnote{TIC\,146264536; GAIA DR2 5544699390684005248.} presents the longest periodic variation, with a $\sim$1193\,day signal corresponding to a planet on a slightly eccentric orbit with $e=0.2$. The semi-major axis of the planetary orbit is $\sim$2.5\,au, and the semi-amplitude of $\sim$47\,m\,s$^{-1}$ leads to a minimum mass of 3\,M$_J$ for the planetary companion ($M_{\star}=1.43$\,M$_{\odot}$). HD\,69123 presents a peak in the periodogram of the H$\alpha$ activity index, with a FAP level below 1\% (bottom panel in \autoref{fig:timeseries_hd69123}), at a period of $\sim$367 days. We suspect this almost one-year periodicity to be caused by telluric contamination of the H$\alpha$ line, potentially by water lines. \subsection{Intrinsic variability and final solutions} For the three stars, none of our other activity indicators (contrast, FWHM, and BIS) show any similar periodicity or significant correlation with the radial velocities (see \autoref{apdx:perio_act_indic}). We also checked the V-band photometric data available in the All-Sky Automated Survey (ASAS-3; \citealp{Pojmanski2002}) for our stars. This survey is very interesting as it is one of the few surveys with a time span of almost nine years. For reasons of consistency and reliability of the data post-processing (mainly the correction of saturation and the camera focus stability due to instrumental issues), we have to consider these data with caution when using them to check for variability due to intrinsic stellar processes or surface rotational modulation. We discuss this matter in more detail in \citet{Pezzotti2021}. 
No periodicities linked to the ones detected in the radial-velocity data have been found for any of the three stars presented in this paper. \begin{table*}[!htb] \centering \begin{threeparttable} \caption{Radial-velocity observation statistics, best-fit solutions of the model with instrumental offsets, nuisance parameters, Keplerian orbital parameters, and inferred planetary parameters.} \begin{tabular}{llccc} \hline \hline & & HD\,22532b & HD\,64121b & HD\,69123b \\ \hline \multicolumn{5}{c}{\textbf{Observations}}\\ \hline $N_{obs}$ & & 52 & 36 & 36 \\ $T_{span}$ & $[days]$ & 5016 & 4853 & 4507 \\ $rms_{tot}$ & $[m.s^{-1}]$ & 31.15 & 44.93 & 31.68 \\ $rms_{res}$ & $[m.s^{-1}]$ & 8.44 & 15.93 & 8.42 \\ $\chi^2_{red}$ & & 1.30 & 1.44 & 1.69 \\ \hline \multicolumn{5}{c}{\textbf{Offsets $^{(1)}$}}\\ \hline $\Delta\,RV_{COR98-COR07}$ & $[m.s^{-1}]$ & 2.0~$\pm$~2.3 & -0.1~$\pm$~3.7 & -4.1~$\pm$~3.5 \\ $RV_{COR07}$ & $[m.s^{-1}]$ & 29248.9~$\pm$~1.5 & -4117.9~$\pm$~4.0 & 27476.7~$\pm$~2.4 \\ $\Delta\,RV_{COR14-COR07}$ & $[m.s^{-1}]$ & 20.9~$\pm$~2.2 & 15.1~$\pm$~3.4 & 21.9~$\pm$~2.8 \\ \hline \multicolumn{5}{c}{\textbf{Instrumental Noises}}\\ \hline $\sigma_{COR98}$ & $[m.s^{-1}]$ & 4.7~$\pm$~1.0 & 4.9~$\pm$~1.0 & 5.2~$\pm$~1.0 \\ $\sigma_{COR07}$ & $[m.s^{-1}]$ & 6.8~$\pm$~1.2 & 7.8~$\pm$~1.5 & 7.8~$\pm$~1.4 \\ $\sigma_{COR14}$ & $[m.s^{-1}]$ & 3.1~$\pm$~0.5 & 3.0~$\pm$~0.5 & 3.0~$\pm$~0.5 \\ \hline \multicolumn{5}{c}{\textbf{Stellar Jitter}}\\ \hline $\sigma_{jit}$ & $[m.s^{-1}]$ & 2.1~$\pm$~1.6 & 16.8~$\pm$~2.6 & 7.2~$\pm$~1.8 \\ \hline \multicolumn{5}{c}{\textbf{Keplerians}}\\ \hline $P$ & $[days]$ & 872.6~$\pm$~2.8 & 623.0~$\pm$~3.4 & 1193.3~$\pm$~7.0 \\ $K$ & $[m.s^{-1}]$ & 40.0~$\pm$~1.6 & 55.2~$\pm$~4.1 & 46.8~$\pm$~2.4 \\ $e$ & & 0.03~$\pm$~0.03 & 0.11~$\pm$~0.07 & 0.19~$\pm$~0.06 \\ $\omega$ & $[deg]$ & 169.1~$\pm$~88.7 & 2.7~$\pm$~56.0 & -67.3~$\pm$~21.7 \\ $\lambda_0$ $^{(2)}$ & $[deg]$ & 110.7~$\pm$~2.3 & -77.5~$\pm$~7.3 & 227.6~$\pm$~4.5 \\ $T_p$ $^{(2)}$ & $[rjd]$ & 5575.0~$\pm$~221.0 & 5653.0~$\pm$~130.0 & 5715.7~$\pm$~64.6 \\ \hline $a$ & $[au]$ & 1.900~$\pm$~0.004 & 1.510~$\pm$~0.006 & 2.482~$\pm$~0.010 \\ $m_2\,sin\,i$ $^{(3)}$ & $[M_J]$ & 2.12~$\pm$~0.09 & 2.56~$\pm$~0.19 & 3.04~$\pm$~0.16 \\ \hline \end{tabular} \begin{tablenotes} \small \item {$^1$} The reference instrument is COR07; $RV_{COR07}$ is the absolute offset of the reference instrument. \item {$^2$} The mean longitude is given at $BJD=2\,455\,500$ [d], while $2\,450\,000$ has been subtracted from the date of passage through periastron ($T_p$). \item {$^3$} Using the model-independent mass from seismic inversions \citep[see][]{Buldgen2021}.\end{tablenotes} \label{tab:orbit_params} \end{threeparttable} \end{table*} We are thus fairly confident that the observed radial-velocity periodic variations are not due to chromospheric stellar activity or to the rotational modulation of surface features such as spots, which would require a significant percentage of the stellar surface to be covered. We also found no indication of long-period, non-radial oscillation modes (neither matching periodicities nor corresponding harmonics in the line profile moments). We thus consider that the observed radial-velocity signals are due to planetary companions orbiting the stars. The resulting models and residuals are shown in \cref{fig:timeseries_hd22532,fig:timeseries_hd64121,fig:timeseries_hd69123}, overplotted on the radial velocities. 
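The semi-major axes and minimum masses quoted above follow from the fitted Keplerian parameters and the stellar mass. As a minimal sketch of this standard conversion (assuming $m_2 \ll M_{\star}$, so that the companion mass can be dropped from Kepler's third law and from the binary mass function; an illustration, not the exact DACE computation):
\begin{verbatim}
import numpy as np

# Constants in SI units
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg
AU = 1.496e11          # m
DAY = 86400.0          # s

def derived_params(P_days, K_ms, e, M_star_sun):
    """Semi-major axis [au] and minimum mass [M_Jup] from a Keplerian
    RV fit, in the m2 << M_star approximation."""
    P = P_days * DAY
    M = M_star_sun * M_SUN
    a = (G * M * P**2 / (4 * np.pi**2)) ** (1 / 3) / AU
    msini = (K_ms * np.sqrt(1 - e**2)
             * (P / (2 * np.pi * G)) ** (1 / 3) * M ** (2 / 3)) / M_JUP
    return a, msini

# HD 22532 b, with the seismic mass of the stellar-parameters table:
# recovers ~1.90 au and ~2.1 M_Jup, as listed above.
print(derived_params(872.6, 40.0, 0.03, 1.20))
\end{verbatim}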
In \autoref{tab:orbit_params}, we present the statistics of the distributions (i.e., the median and standard deviation) of the usual set of Keplerian parameters $P$, $K$, $e$, $\omega$, and $T_P$, as well as the distributions of the semi-major axes and minimum masses, derived from the MCMC chains of the fitted parameters. In \autoref{apdx:corner}, we present the corner plots of the posterior distributions of the fitted parameters for each star. The weighted rms scatter of the radial velocities $rms_{tot}$ and of the residuals to the Keplerian fit $rms_{res}$ are also provided in the table. For all three targets, the rms of the residuals is comparable to that of single giant stars with a similar $B-V$ \citep[see][Fig.\,3]{Hekker2006b}. \section{Discussion and conclusion}\label{sec:conclusion} Since 2006, we have been conducting a high-precision radial-velocity survey of a volume-limited sample of 641 giant stars using the CORALIE spectrograph on the 1.2\,m Leonard Euler Swiss telescope at La Silla Observatory (Chile). Our goal is to better understand the formation and evolution of planets around stars more massive than the Sun, including along the evolution of the stars toward the giant branch, through a statistical study of the properties of the detected planet population. The sample is volume limited, targeting giant stars in the southern hemisphere (declination $<-25^{\circ}$) inside a 300\,pc radius around the Sun. The evolutionary stage of the stars was constrained by magnitude ($M_V<2.5$) and color ($0.78<B-V<1.06$) cut-offs to avoid main-sequence stars and intrinsically variable late-type giants. We derived reliable spectroscopic parameters from the CORALIE-14 spectra \citep[following][]{Santos2004,Sousa2014,Alves2015}. Our sample shows an iron metallicity distribution similar to that of stars in the solar neighborhood, peaking between 0.0 and 0.1\,dex, but missing the very metal-poor and metal-rich stars. This may be explained by the young age of the giants compared to their dwarf counterparts, which did not leave them enough time to migrate in the Galaxy \citep{Wang2013,Minchev2013}. A color cut-off bias could also be part of the explanation for this effect, excluding the low-log\,g stars with high metallicity and the high-log\,g stars with low metallicity, as discussed in \citet{Mortier2013, Adibekyan2019}. We also obtained stellar masses for the sample with a global parameter fit, using evolutionary tracks from \citet{Pietrinferni2004} and the SPInS software \citep{Lebreton2020}. The distribution ranges approximately from 0.75 to 4\,M$_{\odot}$, with a maximum around 2\,M$_{\odot}$. This paper is the first of a series in which we present the first results of the survey, namely the detection and characterization of three new planetary companions orbiting the giant stars HD\,22532, HD\,64121, and HD\,69123, taking advantage of asteroseismic masses obtained from the TESS data \citep{Ricker2014}, following the methodology of \citet{Buldgen2019}. For each star, we systematically checked for any correlation with chromospheric activity, rotational modulation of surface features, or long-term non-radial pulsations. We also consulted the corresponding ASAS-3 photometry time series \citep{Pojmanski2002}, spanning 6.8 to 7.4 years. No significant periodicities or correlations linked to the detected radial-velocity signals have been found. \begin{figure*}[t!] 
\centering \adjincludegraphics[width=1\textwidth, trim={0 0 0 0},clip]{pla_params.pdf} \caption{Stellar and planetary parameter relations for the 186 discovered planets orbiting giant stars. The three planets presented in this paper are represented by red dots.} \label{fig:pla_params} \end{figure*} The new planets are typical representatives of the known population of planets around giant stars, considering their masses, semi-major axes, and low eccentricities. This is illustrated in \cref{fig:pla_params}, displaying the new detections together with the planet candidates from the literature. Most planets discovered around giant stars have eccentricities below 0.2-0.3 \citep[e.g.,][]{Jones2014,Yilmaz2017} and orbit at distances farther than 1\,au from the central star. Monitoring of the CASCADES sample is continuing. As an interesting by-product, it is also bringing important information on stellar binaries and star-brown-dwarf systems. The formation scenario for the latter is still unclear. The system may form initially as a binary star with an extreme mass ratio, or through a formation process comparable to the one for planets in the proto-stellar disk, via disk instability or core accretion. The maximum mass of planets forming in a massive disk is not known. Forthcoming papers will present additional planetary candidates, as well as potential brown dwarfs and spectroscopic binaries found in the sample. We will also address, through a statistical study of our sample, the main open questions linked with the planet population orbiting intermediate-mass (evolved) stars: the distribution of orbital properties as constraints for planet-formation models, and the correlations of planet characteristics and occurrence rates with primary-star properties such as mass and metallicity. In this context, some planet-host stars from our sample are particularly well suited for a deep asteroseismic analysis, giving access to their internal structure. The available information includes well-constrained planetary signals; long, high-precision photometric time series from high-cadence observations with TESS; and accurate spectroscopic parameters. Asteroseismic analysis can provide valuable information concerning the past and future evolution of such systems. Among the most interesting related questions is that of the impact of stellar evolution on planet orbits and of the potential engulfment of planets by the star \citep{Pezzotti2021}, which may, for instance, explain the apparent lack of close-in, short-period planets ($P\leqslant100$\,days, $a\leqslant0.5$\,au). The second and third publications of the CASCADES series will focus on the asteroseismic analysis of the three stellar hosts presented in this paper \citep{Buldgen2021} and on the analysis of a new planet-host star \citep{Pezzotti2021}, for which the full evolution of the system can be modeled. Giant stars hosting planets are good candidates for planetary transit searches. Due to the increase in radius at the giant stage, companions of giant stars have a higher probability of transiting than planets around main-sequence stars. However, as these planets are on long-period orbits, the transit duration is on the order of tens to hundreds of hours. Moreover, because of the relative size of the planet and the star, the expected transit depths are very small; for our sample stars, they are in the $10-1000$\,ppm range.
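These estimates follow from simple geometry: for a circular orbit, the transit probability and expected depth are approximately \[ p_{tr} \simeq \frac{R_*}{a}, \qquad \delta \simeq \left(\frac{R_p}{R_*}\right)^2 . \] For HD\,22532, for example ($R_*=5.69$\,R$_{\odot}$, $a=1.90$\,au, and assuming $R_p=1$\,R$_J$), this gives $p_{tr}\simeq1.4\,\%$ and $\delta\simeq330$\,ppm, consistent with the values quoted below.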
Although very limiting for ground-based observations, these long durations and shallow depths are tractable from space with satellites such as TESS \citep{Ricker2014} and CHEOPS \citep{Benz2020}. The three systems described in this paper present transit probabilities around 1.5\,\% and transit depths between 170 and 350\,ppm (considering planets with a 1\,R$_J$ radius). Unfortunately, none of these candidates has thus far had a transit time prediction in the window of the TESS observations. \begin{acknowledgements} We thank all observers at La Silla Observatory over the past fourteen years for their contribution to the observations and the quality of their work. We acknowledge financial support from the Swiss National Science Foundation (SNSF) for the project 2020-178930. This work has, in part, also been carried out within the framework of the National Centre for Competence in Research PlanetS supported by SNSF. In particular, this publication makes use of The Data \& Analysis Center for Exoplanets (DACE, https://dace.unige.ch), a platform of the NCCR PlanetS, based at the University of Geneva (CH), dedicated to extrasolar planet data visualisation, exchange, and analysis. G.B. acknowledges funding from the SNF AMBIZIONE grant No. 185805 (Seismic inversions and modelling of transport processes in stars). P.E. has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 833925, project STAREX). C.P. acknowledges funding from the Swiss National Science Foundation (project Interacting Stars, number 200020-172505). V. A. acknowledges the support from FCT through Investigador FCT contract no. IF/00650/2015/CP1273/CT0001. N.C.S acknowledges support from FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UID/FIS/04434/2019; UIDB/04434/2020; UIDP/04434/2020; PTDC/FIS-AST/32113/2017 \& POCI-01-0145-FEDER-032113; PTDC/FIS-AST/28953/2017 \& POCI-01-0145-FEDER-028953. S.G.S acknowledges the support from FCT through Investigador FCT contract nr. CEECIND/00826/2018 and POPH/FSE (EC). N.L. acknowledges financial support from "Programme National de Physique Stellaire" (PNPS) of CNRS/INSU, France. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. \end{acknowledgements} \bibliographystyle{aa} \section{Introduction}\label{sec:intro} Since the discovery of 51 Peg\,b by \citet{Mayor1995}, the first extrasolar planet orbiting a solar-like star, over 4800 exoplanets\footnote{See e.g., \href{https://exoplanetarchive.ipac.caltech.edu/}{https://exoplanetarchive.ipac.caltech.edu} (as of January 5, 2022).} have been detected, including almost 900 using the radial-velocity technique. Those distant worlds cover a broad diversity of orbital properties \citep{Udry2007,Winn2014}, expected to be fossil traces of the formation process of these systems and potentially linked to the properties and evolutionary stages of their host stars and their environments. Models of planetary formation were at first developed based on the system we know best, the Solar System, and have evolved significantly in the past twenty years with the increasing flow of information derived from observations of exoplanet systems.
The observed diversity of planet properties finds its origin in the physical processes at play, coupled with the local conditions during the formation of the system. Today, two main competing paradigms are proposed for planet formation: the core-accretion model, a dust-to-planet bottom-up scenario \citep[e.g.,][]{Lissauer1993,Pollack1996,Alibert2005}, which can lead to the formation of gas giants in a few million years, and the disk gravitational instability model \citep{Boss1997, Durisen2006}, which can form massive planets on a very short timescale of a few thousand years \citep[see][for reviews on those processes]{Helled2014,Raymond2014}. Both agree on the formation of substellar companions from the circumstellar accretion disk, but they differ in the assumed initial environmental conditions in the disk and in the roles of planet-disk and planet-planet interactions. \begin{table*}[t] \centering \begin{threeparttable} \caption{Planet-search programs monitoring evolved stars.} \begin{tabular}{|l|l|} \hline \hline Survey & References \\ \hline the Lick G- and K-giant survey & \citet{Frink2001, Hekker2006b} \\ the ESO planet search program & \citet{Setiawan2003a} \\ the Okayama Planet Search Program & \citet{Sato2005} \\ \hspace{15pt}with the collaborative survey "EAPS-Net" & \citet{Izumiura2005} \\ the Tautenburg Observatory Planet Search & \citet{Hatzes2005, Dollinger2007a} \\ Retired A-stars and their companions & \citet{Johnson2006} \\ the CORALIE \& HARPS search in open clusters & \citet{Lovis2007} \\ \hspace{15pt}with the follow-up program & \citet{DelgadoMena2018} \\ the Penn State Toruń Planet Search & \citet{Niedzielski2007} \\ \hspace{15pt}with the follow-up program Tracking Advanced Planetary Systems & \citet{Niedzielski2015b} \\ the BOAO K-giant survey & \citet{Han2010} \\ the Pan-Pacific Planet Search & \citet{Wittenmyer2011} \\ the Exoplanet aRound Evolved StarS project & \citet{Jones2011} \\ the Boyunsen Planet Search & \citet{Lee2011} \\ \hline \end{tabular} \label{tab:surveys} \end{threeparttable} \end{table*} Large planet-search surveys first focused on solar-type and very low-mass stars, leaving aside more massive stellar hosts that were more complex to observe and study. As the impact of the stellar mass on planet formation is still debated, it is of great interest to study the population of planets around intermediate-mass stars, that is, in the $1.5-5\,M_{\odot}$ range. Such systems are especially useful to probe the two main competing formation models. In the early phase of the process, the stellar mass seems to have little effect on the protoplanetary disk formation and evolution; however, after 3 million years, stars with masses $> 2\,M_{\odot}$ start showing significant differences compared to lower mass stars, such as stronger radiation fields and higher accretion rates \citep{Ribas2015}. These impact the evolution of protoplanetary disks significantly, and by $\sim$10\,Myr no disks are left around these higher-mass stars. The typical timescale of core accretion could thus become problematic for massive stars, which have shorter disk lifetimes \citep{Lagrange2000}.
However, searching for planets orbiting main-sequence stars of intermediate masses (A to mid-F types) proves to be a challenge for Doppler searches, mainly because of the small number of absorption lines present in early-type dwarfs as a consequence of their high effective temperatures, and because of the rotational broadening of the lines (typical rotational velocities of 50-200\,km\,s$^{-1}$ for A-type stars and 10-100\,km\,s$^{-1}$ for early-F stars; \citealt{Galland2005}). A method to extract the radial velocity from the spectrum in Fourier space was developed by \citet{Chelli2000} and then adapted and applied to early-type stars by \citet{Galland2005}. The typical radial-velocity uncertainties obtained were on the order of 100-300 and 10-50\,m\,s$^{-1}$ (normalized to a signal-to-noise ratio (S/N) = 200) for A- and F-type stars, respectively. With this technique, the team confirmed the existence of the known planet around the F7V star HD\,120136 (Tau Boo) announced by \citet{Butler1997}. The orbital parameters from \citet{Galland2005} were consistent with the values previously found. These results confirmed the accuracy of the computed radial velocities and the possibility of detecting companions in the massive giant-planet domain around A- and F-type stars with substantial $v\sin i$, using high-resolution, stable spectrographs such as HARPS. On the other hand, stars expand during their evolution toward the red giant branch (RGB). The effective temperature thus decreases significantly, making many more absorption lines visible in the spectra. Moreover, the rotation of the star slows down, reducing the broadening effect on the lines and making them sharper. Those stars are thus suitable bright proxies for radial-velocity planet searches around intermediate-mass stars. The analysis and interpretation of the variability of radial-velocity time series of giant stars can, however, be challenging because of their intrinsic variability, which, moreover, can also be periodic \citep[e.g.,][]{Hekker2006a,Hekker2007b}. Disentangling stellar from potential planetary contributions represents a challenge in the search for long-period and low-mass companions around evolved stars. Bringing observational constraints on the formation and evolution of planetary systems relies principally on the determination of orbital parameters such as semi-major axes and eccentricities. It also relies on the knowledge of host star properties such as mass, radius, age, metallicity, and the abundances of individual elements. For giant stars, the mass and age are poorly constrained with the classical method of isochrone fitting because the evolutionary tracks in the HR diagram are too close to each other. The uncertainties on the observations (stellar magnitudes and colors) and the systematics of the models lead to typical relative uncertainties on the stellar masses of 80-100\%. \citet{Lovis2007}, followed by \citet{Sato2007a} and \citet{Pasquini2012}, overcame this difficulty. They studied giant star populations in open clusters, for which better ages can be determined. This led them to better mass estimates from stellar evolution models (\citet{Lovis2007} using the Padova models at solar metallicity \citep{Girardi2000}). More recently, the unprecedented homogeneous photometric and astrometric data of GAIA DR2, covering the whole sky, allow for more precise age determinations \citep[e.g.,][who derived parameters such as age, distance modulus, and extinction for a sample of 269 open clusters]{Bossini2019}.
In the late 1980s, a few surveys monitored the activity of giant stars to better understand the origin of the large radial-velocity variations observed for late-type giants, with periods from days \citep{Hatzes1994} to hundreds of days \citep{Walker1989,Hatzes1993,Hatzes1999} and amplitudes of hundreds of m\,s$^{-1}$ \citep{Udry1999}. Such variability can be explained by a combination of stellar intrinsic activity, such as oscillations or pulsations, and the presence of substellar companions. Several surveys searching for stable reference stars reported that many early giants show relatively small radial-velocity standard deviations: $\sigma_{RV} \leq$ 20\,m\,s$^{-1}$ for several among the 86 K giants followed by \citet{Frink2001} and the 34 K giants observed by \citet{Hekker2006b}. These results show that giant stars are suitable for the detection of substellar companions with radial-velocity measurements. It is worth mentioning here the case of Gamma Cep \citep{Campbell1988}, which is also considered intrinsically variable despite its K1 giant spectral type. \citet{Frink2002}, as part of a radial-velocity survey of K giants \citep{Frink2001}, announced the detection of an 8.9\,M$_{jup}$ (minimum mass) companion orbiting the K2\,III giant $\iota$ Draconis with a period of 536\,d and an eccentricity of 0.70, making it the first substellar companion discovered orbiting a giant star. Since then, several radial-velocity surveys have been launched to follow evolved stars with intermediate masses. The list of these programs is presented in \autoref{tab:surveys}. As of November 2020, they have led to the discovery of 186 substellar companions around evolved stars in 164 systems\footnote{The list was established from the NASA Exoplanet Archive, accessible at \href{https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls\&config=PSCompPars}{https://exoplanetarchive.ipac.caltech.edu}, by selecting the hosts with log\,g\,$\leq$\,3.5\,cm\,s$^{-2}$. The list thus contains giant and subgiant hosts.}. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{graph_M_V_VS_B-V_lims.pdf} \caption{Color-magnitude diagram from Hipparcos measurements \citep{ESA1997} for stars with parallax precisions better than 14\% (in gray), with a clear localization of our sample (in orange). The dashed lines represent the selection limits: the absolute magnitude cut-off at $M_V < 2.5$ (green) and the lower and upper $B-V$ cut-offs at 0.78 and 1.06 (red). The planet host candidates presented in this paper (HD\,22532, HD\,64121, and HD\,69123) are highlighted in red.} \label{fig:hr_hipp} \end{figure} In this context, a survey of a volume-limited sample of evolved stars of intermediate masses, the CORALIE radial-velocity Search for Companions ArounD Evolved Stars (CASCADES), was initiated in 2006. The sample is presented in \autoref{sec:survey}, with the methods used to derive the stellar parameters described in \autoref{sec:methods}. \Autoref{sec:obs_analysis} describes the complete time series acquisition process and analysis, from the search for periodicities in the activity indicators to the Keplerian fitting of the radial velocities, which led to the orbital solutions of three newly discovered planetary companions presented in \autoref{sec:results}. Additional detections and statistical analysis of the survey will be presented in a series of subsequent publications.
Finally, in \autoref{sec:conclusion} we discuss some implications of the first discoveries and provide concluding remarks in the broader context of the population of giant stars hosting substellar companions. \section{The CASCADES survey}\label{sec:survey} \subsection{Goals and sample definition}\label{sub:sample_def} \begin{figure}[th!] \centering \includegraphics[width=1\columnwidth]{hist_Vmag.pdf}\\ \includegraphics[width=1\columnwidth]{hist_dist_GAIA_bailer18.pdf} \caption{Distributions of the CASCADES survey sample (filled orange histogram), compared with the published giant stars known to host planet companions (dashed histogram). The top panel displays the apparent magnitude (TYCHO-2 catalog, \citealt{Hog2000}) and the bottom one the distance (GAIA DR2, \citealt{Bailer-Jones2018}). The CASCADES original 2006 sample and its 2011 extension are differentiated by lighter and darker orange shades, respectively. The positions of HD\,22532, HD\,64121, and HD\,69123 are represented by red lines.} \label{fig:hist_v} \end{figure} In the context described above, in 2006 we (i.e., Christophe Lovis and Michel Mayor) launched a precise radial-velocity survey of evolved stars of intermediate masses, which we refer to as "giant stars" in this paper. The main motivation was to better understand the formation of planetary systems and their evolution around stars more massive than the Sun by completing the existing statistical properties of giant host stars and their companions. To conduct a well-defined statistical study, we chose the following criteria for the definition of the sample: \begin{figure}[th!] \centering \adjincludegraphics[width=1\columnwidth, trim={0 0 0 {-.042\height}},clip]{hist_rv_rms_raw.pdf}\\ \includegraphics[width=1\columnwidth]{hist_rv_rms_raw_zoom.pdf} \caption{Distribution of the radial-velocity dispersion observed for the stars in the sample. Top: Full range in logarithmic scale. Bottom: Zoomed-in image of smaller values, in linear scale. The positions of HD\,22532, HD\,64121, and HD\,69123 are represented by red lines.} \label{fig:hist_rms} \end{figure} \begin{itemize} \item We defined a volume-limited sample, $d\leq300$\,pc. The selection was done in 2005 from the Hipparcos catalogue \citep{ESA1997}. We only selected stars from the catalog with a parallax precision better than 10\,\%. To increase the statistics, the sample was extended in 2011 to targets with a parallax precision better than $14\,\%$. \item We selected stars with $M_V < 2.5$ and $B-V>0.78$ to avoid stars on the main sequence. \item We selected only early-type giants, with G and K spectral types and luminosity class III. To avoid later types, which are known to be intrinsically variable, we introduced a color cut-off at $B-V<1.06$. \item We avoided close visual binaries, for which we might have contamination by the secondary in the spectrograph fiber; the limit on the separation was set at $6''$. \item Finally, we selected stars observable by Euler, in the southern hemisphere, with a declination below $-25^{\circ}$, to be complementary to existing surveys in the north reaching down to $\delta=-25^{\circ}$. \end{itemize} The criteria and the final sample of 641 stars are represented in \autoref{fig:hr_hipp}, displaying the Hipparcos color-magnitude diagram with the selected sample highlighted. In \autoref{fig:hist_v}, we show the distribution of stellar apparent magnitudes and distances for the sample. We paid particular attention to the potential bias induced by the criterion on the parallax precision when carrying out the statistical study of the sample.
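As an illustration only, these criteria translate into a simple set of catalog cuts; a minimal sketch in Python (all array names are hypothetical, not actual catalog columns):
\begin{verbatim}
import numpy as np

def cascades_selection(plx, plx_err, m_v, b_v, dec, sep):
    """Boolean mask sketching the CASCADES sample cuts.
    plx, plx_err: parallax and its error [mas]; m_v: absolute V magnitude;
    b_v: B-V color; dec: declination [deg]; sep: separation of the closest
    visual companion [arcsec]."""
    d_pc = 1000.0 / plx                   # distance from parallax
    return ((plx_err / plx < 0.14)        # parallax precision (10% in 2005, 14% in 2011)
            & (d_pc <= 300.0)             # volume limit
            & (m_v < 2.5) & (b_v > 0.78)  # exclude the main sequence
            & (b_v < 1.06)                # exclude variable late-type giants
            & (sep > 6.0)                 # avoid close visual binaries
            & (dec < -25.0))              # complementarity with northern surveys
\end{verbatim}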
Because of its size, the timespan of the observations, and the quantity and quality of the measurements, the above-defined planet-search program is expected to significantly improve our knowledge of planetary systems around giant stars. \subsection{Instrument description and observations} \label{sub:instru} Observations for this survey began at the end of 2006 and have been conducted since then with the CORALIE spectrograph on the 1.2-m Leonhard Euler Swiss telescope located at La Silla Observatory (Chile). CORALIE is a 2-fiber-fed echelle spectrograph ($2''$ fibers on the sky). It covers the 3880-6810 Å wavelength interval with 68 orders. The spectral resolution of the instrument was originally 50\,000. The observations are performed in the so-called simultaneous thorium mode\footnote{A thorium-argon lamp illuminates fiber B at the same time as the star is observed on fiber A. Both spectra are recorded on the CCD.}. In 2007 and 2014, CORALIE went through two significant hardware upgrades to improve the overall performance of the instrument, increasing its throughput (gain of 2 magnitudes) and its resolution (to 60\,000). A Fabry-Pérot etalon was also installed to replace the thorium-argon lamp used to track the variation of the spectrograph during the night. For simplicity, we refer to each dataset as {\footnotesize CORALIE-98} (or COR98) for the first version of the instrument, {\footnotesize CORALIE-07} (or COR07) for the 2007 upgrade, and {\footnotesize CORALIE-14} (or COR14) for the 2014 instrument upgrade. The instrumental precision evolved through these upgrades, from 5\,m\,s$^{-1}$ for COR98, to 8\,m\,s$^{-1}$ for COR07, and finally to 2.5-3.2\,m\,s$^{-1}$ for COR14. Complete information on instrumental aspects and the precisions reached are given in, for instance, \citet{Queloz2000,Segransan2010,Segransan2021}. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{graph_n_rv_VS_timespan_rv_rms_raw.pdf}\\ \caption{Number of measurements per star in the sample plotted against the respective time span of the observations. The points are color-coded by the radial-velocity rms. The star markers represent HD\,22532, HD\,64121, and HD\,69123.} \label{fig:pts_tspan} \end{figure} Considering the size of the sample and the first results on the intrinsic variability of giant stars obtained by earlier surveys, we tried to optimize the exposure time in our program to match the limit set by stellar variability. Evolved stars show distributions of the dispersion of radial-velocity time series ranging from $\sim$5\,m\,s$^{-1}$ up to a few hundred m\,s$^{-1}$, with a mode at $\sim$15\,m\,s$^{-1}$ \citep{Lovis2004,Hekker2006b,Quirrenbach2011}. One of our objectives is also to test whether this intrinsic limit can be lowered by modeling stellar variability, and thus to be able to recognize low-amplitude planet signatures. To reach a sufficient precision ($\leq$\,2-3\,m\,s$^{-1}$), exposure times between 3 and 5 minutes are adequate for these very bright stars. The CASCADES survey is also interesting in the broader context of projects running on the telescope. The brightness of the stars in the sample makes them ideal back-up targets when weather conditions are unfavorable. Almost 15\,000 radial-velocity measurements have been obtained so far for the 641 stars of the sample. The study of the radial-velocity dispersion (see \autoref{fig:hist_rms}) confirms a distribution with a peak around 13\,m\,s$^{-1}$, with values as low as $\sim$4\,m\,s$^{-1}$.
We also see a significant tail at higher values, with a secondary peak around 20\,km\,s$^{-1}$, corresponding to binary systems. Our observational effort is illustrated in \autoref{fig:pts_tspan}, with the three host stars presented in this paper (located with a $\star$ symbol) being among the most observed in the sample in terms of duration and number of data points. The program is continuing, so as to obtain a minimum of 20 measurements per star and thus allow for a solid statistical analysis of the sample. \section{Determination of stellar properties}\label{sec:methods} \subsection{Spectroscopic parameters of the stars in the sample}\label{sub:spec_param} The analysis of high-resolution spectra can provide reliable stellar parameters such as the effective temperature $T_{eff}$, the surface gravity $log\,g$, and the iron abundance $[Fe/H]$ (which we refer to as metallicity in this paper for simplicity). \citet{Alves2015} presented a catalog of precise stellar atmospheric parameters and iron abundances for a sample of 257 G and K field evolved stars, all of them part of our sample, using UVES and CORALIE spectra. The approach, based on \citet{Santos2004}, uses the classic curve-of-growth method. The equivalent width of a set of Fe I and II lines is measured and their abundances are calculated. Then, the stellar parameters are derived when excitation and ionization balances are satisfied simultaneously under the assumption of local thermodynamic equilibrium. For a detailed description of the method and the results, we direct the reader to \citet{Santos2004,Sousa2014,Alves2015}. Before 2014, the CORALIE spectra obtained for precise RV measurements were polluted by the strongest lines of the thorium-argon spectrum from the calibration fiber, and thus they were not suitable for a precise spectroscopic analysis. Dedicated additional spectroscopic observations (without calibration) were then obtained for our sample stars. After the CORALIE upgrade in 2014, the calibration spectrum from the Fabry-Pérot etalon was no longer polluting the stellar spectrum, and the spectra taken for radial-velocity measurements can also be used for the spectroscopic analysis. We stacked the CORALIE-14 spectra of all stars in our sample to reach a high enough S/N (the median S/N of the master spectra is about 170), and we derived the spectroscopic parameters $T_{eff}$, $log\,g$, and $[Fe/H]$ using the ARES \citep{Sousa2007,Sousa2015} $+$ MOOG \citep{Sneden1973} methodology following \citet{Sousa2014}. For most of the stars, with T$_{eff}$\,<\,5200\,K, the iron line list presented in \citet{Tsantaki2013} was used, while for the hotter stars we used the standard line list presented in \citet{Sousa2008a}. We then compared these results with the ones presented in \citet{Alves2015} for the subsample of 254 common stars. \Autoref{fig:comp_param_stell} shows the good agreement found between the parameters obtained from the UVES and CORALIE spectra: in the case of metallicity, we observe an apparent positive offset of the order of the dispersion of the data around the 1:1 correlation, $\sim0.04$\,[dex] in favor of our estimation. More than 50\% of the subsample stars are inside the $1\sigma$ region and 90\% are inside $2\sigma$. These results confirm that reliable atmospheric parameters can be extracted from the CORALIE-14 spectra. \begin{figure*}[t!]
\centering \includegraphics[width=.33\textwidth]{comp_Teff_tsantaki_VS_Teff_sousa19.pdf} \includegraphics[width=.33\textwidth]{comp_feh_tsantaki_VS_feh_sousa19.pdf} \includegraphics[width=.33\textwidth]{comp_logg_tsantaki_VS_logg_sousa19.pdf} \caption{Comparison plots of spectroscopic parameters extracted from CORALIE (this paper) and UVES \citep{Alves2015} spectra of the subsample of 254 stars in common. The black diagonal line represents the 1:1 correlation, and the red line represents the linear fit of the data. At the bottom of each figure, the residuals compared to the 1:1 correlation are shown, with their 1 and 2\,$\sigma$ dispersions represented by the shaded regions. Left: Effective temperatures are tightly correlated, with a dispersion of $\sim38$\,K. Middle: Iron metallicity [Fe/H] shows an apparent positive offset of the order of the dispersion of the data around the 1:1 correlation, $\sim0.04$\,[dex] in favor of our estimation. More than 50\% of the subsample is inside the $1\sigma$ region and 90\% inside the $2\sigma$ one. Right: Logarithm of the surface gravity shows a good correlation, with an offset of $\sim0.05\,cm\,s^{-2}$, lower than the apparent dispersion of the data around the 1:1 correlation. More than 70\% of the subsample is inside the $1\sigma$ region.} \label{fig:comp_param_stell} \end{figure*} \begin{figure}[t!] \centering \adjincludegraphics[width=.49\columnwidth, trim={0 0 {.02\height} {-0.035\height}},clip]{comp_Teff_GAIA_VS_Teff_sousa19.pdf} \adjincludegraphics[width=.49\columnwidth, trim={{.02\height} 0 0 {-0.035\height}},clip]{comp_R_GAIA_VS_R_GAIAspec.pdf} \caption{Comparison plots of photometric (from \citealt{Brown2018}) and spectroscopic (from this paper) determinations of the effective temperatures and the stellar radii. The black diagonal line represents the 1:1 correlation and the red line represents the linear fit of the data. At the bottom of each figure, the residuals compared to the 1:1 correlation are shown, with their 1 and 2\,$\sigma$ dispersions represented by the shaded regions. Left: Effective temperatures seem to show a linear trend, but this is not significant compared to the dispersion of the data of $\sim110$\,K, inside which more than 85\% of the data is located. Right: Radii are in good agreement but show an apparent trend and an increasing dispersion for radii above $\sim15\,R_{\odot}$.} \label{fig:comp_spec_gaia} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{graph_evol_tracks_BaSTI.pdf} \caption{Positions in the Hertzsprung-Russell diagram of the subsample of 620 stars for which we derived spectroscopic parameters. The three host stars we focus on in the present paper are highlighted as red dots. We adopted the luminosities obtained with the method described in \autoref{sub:stellar_mass}. The evolutionary tracks are from models of \citet{Pietrinferni2004} for different stellar masses ($1.0$, $1.2$, $1.5$, $1.7$, $2.0$, $2.5$, $3.0$, $4.0,$ and $5.0$ $M_{\odot}$ from bottom to top).
They are for models with solar metallicity.} \label{fig:evoltrack} \end{figure} \subsection{Stellar luminosities, radii, and masses}\label{sub:stellar_mass} We derived the luminosity $L$ of the stars in our sample using the Gaia DR2 parallaxes corrected by \citet{Bailer-Jones2018}\footnote{\citet{Bailer-Jones2018} provided purely geometric distance estimates by using an inference procedure that accounts for the nonlinearity of the transformation (inversion of the parallax) and the asymmetry of the resulting probability distribution.}, $V$-band magnitudes from \citet{Hog2000}, and the bolometric correction relation $BC$ of \citet{Alonso1999}\footnote{Considering the short distance of the stars of the sample, the extinction was not taken into account.}. We then used this luminosity and the spectroscopic effective temperature $T_{eff}$ to compute the stellar radii using the Stefan-Boltzmann relation. The uncertainties of the radii were estimated using a Monte Carlo approach. We compared our temperatures and radii with the GAIA DR2 values and found them to be in good agreement, as illustrated in \autoref{fig:comp_spec_gaia}. The derived luminosities and spectroscopic effective temperatures are plotted in the Hertzsprung-Russell diagram in \autoref{fig:evoltrack}, together with the stellar evolutionary tracks at solar metallicity of \citet{Pietrinferni2004}\footnote{Available on the BaSTI database \href{http://basti.oa-teramo.inaf.it/index.html}{http://basti.oa-teramo.inaf.it.}}. Those were used to estimate the masses of our stars using the SPInS software \citep{Lebreton2020}\footnote{\href{https://dreese.pages.obspm.fr/spins/index.html}{https://dreese.pages.obspm.fr/spins/index.html}, which employs a global Markov chain Monte Carlo (MCMC) approach taking into account the different timescales at various evolutionary stages and interpolation between the tracks.}. The approach compares the luminosity, effective temperature, logarithm of surface gravity, and [Fe/H] of individual objects to theoretical evolutionary tracks and accounts for the observational errors in these four quantities. Giant stars are located in the area of the Hertzsprung-Russell diagram where individual evolutionary tracks are close to each other; thus, the derived precisions on the stellar masses might be overestimated. However, in our sample, the degeneracy between the horizontal branch (HB) and the RGB is not too pronounced. Comparisons with masses derived from detailed asteroseismic modeling (see \autoref{subsec:masses_astero}) show some small differences. These discrepancies mainly originate from the use of a fixed enrichment law in the grid of \citet{Pietrinferni2004}, such that the chemical composition of the stars of our sample is fully determined by the measured $\left[Fe/H\right]$ in our modeling using SPInS. This was not the case for the seismic modeling pipeline, where the additional constraints justified allowing for an additional free parameter. Thus, solutions with a chemical composition deviating from a fixed enrichment law, where helium and metal abundances are tied together, were possible outcomes of the modeling procedure. This is particularly visible for the two stars with $[Fe/H]\sim-0.2$ studied in \autoref{subsec:masses_astero} (HD\,22532 and HD\,64121), for which the asteroseismic analysis reveals an initial helium abundance slightly higher than the solar one despite their subsolar metallicity.
This situation of course cannot be reproduced by models with an initial helium abundance fixed by the metallicity, which then leads to an overestimation of the stellar mass determined by SPInS to compensate for the incorrect helium abundance and reproduce the observed location in the HR diagram. Nevertheless, we can consider that these fits still allow us to globally estimate the stellar masses of our sample of stars. The obtained distribution for all stars is shown in \autoref{fig:params_distrib}. The median of the distribution is found around $2.1$ $M_{\odot}$, corresponding to intermediate-mass stars. However, as mentioned above, some uncertainty on the evolutionary stage of a subset of our sample could have affected our determinations, especially if we consider deviations from a given chemical enrichment law. Indeed, some stars of our sample identified here as being around $\approx 2.0$ $M_{\odot}$ on the RGB could actually be low-mass stars in the red clump. This degeneracy could be lifted using asteroseismic observations of dipolar oscillation modes, which allow us to unambiguously determine the evolutionary stage of these stars \citep{Beck2011,Bedding2011}. \begin{figure*}[t!] \centering \adjincludegraphics[width=1\textwidth, trim={0 0 0 0},clip]{stellar_params_new.pdf} \caption{Distributions and relations between stellar parameters for our subsample of 620 stars. Top left: Metallicity distribution of the stars of our sample (colored in orange) compared with the same distribution for the $\sim1000$ stars (dashed histogram) in the CORALIE volume-limited sample \citep{Udry2000,Santos2001}. Top right: Distribution of the stellar masses obtained from track fitting. The corresponding kernel density estimation is overplotted in orange, using a Gaussian kernel. Bottom left: Mass vs. metallicity relation. Bottom right: Metallicity vs. logarithm of surface gravity. The two black lines were drawn by eye and show the biases in the sample due to the $B-V$ cut-off. The red-dashed rectangle delimits the area of the potential unbiased subsample. The three planet hosts presented in this paper are represented by red dots.} \label{fig:params_distrib} \end{figure*} \Autoref{tab:sample_table} shows example lines of the complete set of stellar parameters for our sample, available online\footnote{Available at \href{https://cdsweb.u-strasbg.fr}{CDS} and \href{https://dace.unige.ch/catalogs/?catalog=CASCADES}{https://dace.unige.ch.}}. We also illustrate our results in \autoref{fig:params_distrib}. The metallicity distribution decreases with increasing [Fe/H] for [Fe/H]\,>\,0.0-0.1, similarly to the metallicity distribution \citep{Santos2001} of a large, volume-limited sample of dwarf stars in the solar neighborhood included in the CORALIE survey \citep{Udry2000}. We observe that our sample of giants is lacking the metal-rich and very metal-poor stars. This tendency has been observed in many studies \citep[e.g.,][]{Luck2007,Takeda2008,Ghezzi2010,Adibekyan2015a,Adibekyan2019}. It may be related to the fact that giants, most of them being more massive, are younger than their dwarf counterparts. They thus do not have time to migrate far from the inner to the outer disks of the Galaxy during their short lifetimes \citep{Wang2013,Minchev2013}.
\citet{Adibekyan2019} also addresses the role of the age-metallicity dispersion relation \citep{DaSilva2006,Maldonado2013}, as well as potential selection effects due to the $B-V$ color cut-off \citep{Mortier2013}, which excludes low-log-g stars with high metallicity and high-log-g stars with low metallicity. We illustrate this effect in \autoref{fig:params_distrib}, in the same way as \citet{Adibekyan2015a}, by drawing diagonal lines that show the biases in the sample due to the color cut-off. We also outline the area that would correspond to an unbiased subsample (red dashed rectangle). We will perform a detailed statistical study of the stellar parameters of our sample in future work. \subsection{Asteroseismic masses for the three planet hosts}\label{subsec:masses_astero} To go further and improve the mass estimation for the host stars, we performed a detailed seismic analysis of the TESS short-cadence photometric data \citep{Ricker2014}, following the methodology of \citet{Buldgen2019}. This asteroseismic approach has the considerable advantage of leading to a highly precise and accurate mass estimate independently of any stellar evolution models. We used the method to extract masses for the three stellar hosts presented in this paper (see \autoref{tab:stellar_params}) and thus obtain a better estimation of the minimum masses of their planetary companions. The seismic masses can also be used as a benchmark to assess the accuracy of the masses obtained from evolutionary models, which in this case appear to be overestimated by an offset of $\sim$0.3-0.4\,M$_\odot$ but are consistent within 3-4 $\sigma$. This aspect will be addressed in a forthcoming paper once more asteroseismic masses are available. In practice, the mass estimates we present here result from the combination of seismic inversions of the mean density with the stellar radii derived from GAIA parameters. The seismic inversion of the mean density was carried out following the methodology of \citet{Buldgen2019} and validated on eclipsing binaries. This estimate still depends on the seismic data, as well as on the details of the determination of the radii from GAIA and spectroscopic data, such as the bolometric corrections and extinction laws. An in-depth description of the data extraction and seismic modeling, as well as an analysis of the orbital evolution and atmospheric evaporation of the planetary systems, can be found in a companion paper \citep{Buldgen2021}.\\ In \autoref{tab:stellar_params}, we summarize the spectroscopic parameters of the three stellar hosts announced in this paper and their masses derived from evolutionary tracks and asteroseismology. \begin{table*}[!ht] \caption{Example entries of the table of stellar parameters for the complete sample, available online at CDS.$^8$} \begin{center} \scalebox{0.562}{% \begin{tabular}{l*{19}{c}} \hline \hline HD & Sp.
Type & $V$ & $B-V$ & $BC$ & $\pi$ & $d$ & $M_{V}$ & $Bp-Rp$ & $G$ & $T_{eff}$ & $log\,g$ & $[Fe/H]$ & $M_{*}$ & $L_{*}$ & $R_{*}$ \\ & & [mag] & [mag] & [mag] & [mas] & [pc] & [mag] & [mag] & [mag] & [K] & [cm\,s$^{-2}$] & [dex] & [M$_{\odot}$] & [L$_{\odot}$] & [R$_{\odot}$] \\ & [1] & [2] & [2] & [3] & [4] & [5] & [2,4,5] & [4] & [4] & [6] & [6] & [6] & [7] & [2,3,4] & [2,3,4,6] \\ \hline 496 & K0III & 3.88 $\pm$ 0.01 & 1.00 $\pm$ 0.01 & -0.312 $\pm$ 0.016 & 24.20 $\pm$ 0.29 & 41.3 $\substack{+0.5 \\ -0.5}$ & 0.80 $\pm$ 0.03 & 1.218 $\pm$ 0.006 & 0.42 $\pm$ 0.03 & 4858 $\pm$ 41 & 2.56 $\pm$ 0.10 & -0.01 $\pm$ 0.03 & 1.93 $\pm$ 0.23 & 50.44 $\pm$ 1.46 & 10.03 $\pm$ 0.22 \\ 636 & K1/K2III & 5.29 $\pm$ 0.01 & 1.03 $\pm$ 0.01 & -0.303 $\pm$ 0.019 & 12.60 $\pm$ 0.08 & 79.2 $\substack{+0.5 \\ -0.5}$ & 0.80 $\pm$ 0.02 & 1.177 $\pm$ 0.005 & 0.47 $\pm$ 0.02 & 4879 $\pm$ 51 & 2.78 $\pm$ 0.15 & 0.19 $\pm$ 0.04 & 2.23 $\pm$ 0.09 & 50.47 $\pm$ 1.17 & 9.94 $\pm$ 0.24 \\ 770 & K0III & 6.54 $\pm$ 0.01 & 1.04 $\pm$ 0.02 & -0.317 $\pm$ 0.017 & 7.22 $\pm$ 0.04 & 137.9 $\substack{+0.7 \\ -0.7}$ & 0.84 $\pm$ 0.02 & 1.178 $\pm$ 0.003 & 0.54 $\pm$ 0.01 & 4845 $\pm$ 45 & 2.66 $\pm$ 0.10 & -0.08 $\pm$ 0.04 & 1.91 $\pm$ 0.17 & 49.14 $\pm$ 1.04 & 9.95 $\pm$ 0.21 \\ ... & & & & ... & & & & ... & & & ... & & & & ...\\ 224949 & K0III & 7.10 $\pm$ 0.01 & 0.99 $\pm$ 0.02 & -0.338 $\pm$ 0.013 & 5.73 $\pm$ 0.05 & 173.7 $\substack{+1.4 \\ -1.4}$ & 0.90 $\pm$ 0.02 & 1.183 $\pm$ 0.004 & 0.60 $\pm$ 0.02 & 4795 $\pm$ 32 & 2.49 $\pm$ 0.09 & -0.33 $\pm$ 0.03 & 1.30 $\pm$ 0.06 & 47.32 $\pm$ 1.07 & 9.97 $\pm$ 0.17 \\ \hline \end{tabular} } \begin{tablenotes} \scriptsize \item {[1]} - HIPPARCOS catalog \citep{ESA1997}, [2] - TYCHO-2 catalog \citep{Hog2000}, [3] - \citet{Alonso1999}, [4] - GAIA DR2 \citep{Brown2018}, [5] - \citet{Bailer-Jones2018}, [6] - this paper (see \autoref{sub:spec_param}), [7] - this paper, with evolutionary tracks from \citet{Pietrinferni2004}. \end{tablenotes} \end{center} \label{tab:sample_table} \end{table*} \begin{table*}[!ht] \centering \begin{threeparttable} \caption{Observed and inferred stellar parameters.} \begin{tabular}{lllccc} \hline \hline & & ref. & HD\,22532 & HD\,64121 & HD\,69123\\ TIC & & & 200851704 & 264770836 & 146264536\\ GAIA DR2 & & & {\tiny 4832768399133598080} & {\tiny 5488303966125344512} & {\tiny 5544699390684005248}\\ \hline Sp.
Type & & [1] & G8III/IV & G8/K0III & K1III \\ $V$ & [mag] & [2] & 7.85 $\pm$ 0.01 & 7.44 $\pm$ 0.01 & 5.77 $\pm$ 0.01 \\ $B-V$ & [mag] & [2] & 0.89 $\pm$ 0.02 & 0.86 $\pm$ 0.02 & 1.02 $\pm$ 0.01 \\ $BC$ & [mag] & [3] & -0.250 $\pm$ 0.013 & -0.238 $\pm$ 0.012 & -0.318 $\pm$ 0.016 \\ $\pi$ & [mas] & [4] & 6.18~$\pm$~0.03 & 7.67~$\pm$~0.03 & 13.28~$\pm$~0.06 \\ $d$ & [pc] & [5] & 161.2 $\substack{+0.7 \\ -0.7}$ & 130.0 $\substack{+0.5 \\ -0.5}$ & 75.1 $\substack{+0.4 \\ -0.4}$ \\ $M_{V}$ & [mag] & [2,4,5] & 1.81 $\pm$ 0.01 & 1.87 $\pm$ 0.01 & 1.39 $\pm$ 0.01 \\ $Bp-Rp$ & [mag] & [4] & 1.087 $\pm$ 0.002 & 1.076 $\pm$ 0.004 & 1.183 $\pm$ 0.003 \\ $M_G$ & [mag] & [4] & 1.56 $\pm$ 0.01 & 1.60 $\pm$ 0.01 & 1.09 $\pm$ 0.01 \\ $T_{eff}$ & [K] & [4] & 5067 $\substack{+59 \\ -22}$ & 5066 $\substack{+58 \\ -60}$ & 4787 $\substack{+280 \\ -51}$ \\ & & [6] & 5038 $\pm$ 24 & 5078 $\pm$ 22 & 4842 $\pm$ 41 \\ $log\,g$ & [cm\,s$^{-2}$] & [6] & 3.09 $\pm$ 0.07 & 3.19 $\pm$ 0.06 & 2.86 $\pm$ 0.11 \\ $[Fe/H]$ & [dex] & [6] & -0.19 $\pm$ 0.02 & -0.21 $\pm$ 0.02 & 0.05 $\pm$ 0.03 \\ $M_{*}$ & [M$_{\odot}$] & [7] & 1.57 $\pm$ 0.07 & 1.64 $\pm$ 0.06 & 1.68 $\pm$ 0.09 \\ & & [8] & 1.20 $\pm$ 0.05 & 1.18 $\pm$ 0.05 & 1.43 $\pm$ 0.07 \\ $L_{*}$ & [L$_{\odot}$] & [2,3,4] & 18.80 $\pm$ 0.33 & 17.70 $\pm$ 0.30 & 29.51 $\pm$ 0.57 \\ $R_{*}$ & [R$_{\odot}$] & [2,3,4,6] & 5.69 $\pm$ 0.07 & 5.44 $\pm$ 0.07 & 7.72 $\pm$ 0.15 \\ \hline \end{tabular} \begin{tablenotes} \small \item {[1]} - HIPPARCOS catalog \citep{ESA1997}, [2] - TYCHO-2 catalog \citep{Hog2000}, [3] - \citet{Alonso1999}, [4] - GAIA DR2 \citep{Brown2018}, [5] - \citet{Bailer-Jones2018}, [6] - this paper (see \autoref{sub:spec_param}), [7] - this paper, with evolutionary tracks from \citet{Pietrinferni2004}, [8] - \citet{Buldgen2021}. \end{tablenotes} \label{tab:stellar_params} \end{threeparttable} \end{table*} \section{Data acquisition and analysis}\label{sec:obs_analysis} \subsection{Data acquisition and processing}\label{subsec:ts_obs} For each target, we collected several tens of radial-velocity measurements over a median time span of 13 years, with a typical S/N = 70 for an exposure time between 180 and 300\,s\footnote{Following the 2007 and 2014 upgrades, we have to fit a small radial-velocity offset between the three versions of the CORALIE instrument, the values of the offsets depending on several aspects, such as the considered star or the correlation mask used. We thus consider the three versions of CORALIE as three different instruments.}. \Cref{tab:timeseries_hd22532,tab:timeseries_hd64121,tab:timeseries_hd69123} give the list of these measurements with their instrumental error bars. We first analyzed the radial-velocity time series using the radial-velocity module of the Data \& Analysis Center for Exoplanets (DACE) web platform,\footnote{\href{https://dace.unige.ch/radialVelocities/?}{https://dace.unige.ch/radialVelocities/?.}} which provides open access to a wide range of observational and theoretical exoplanet data with the corresponding data visualization and analysis tools. The formalism of the radial-velocity data analysis implemented in DACE is described in Ségransan et al. (submitted, Appendix~A) and is mainly based on algorithms presented in \citet{Diaz2014} and \citet{Delisle2016, Delisle2018}. Our general approach for a periodic signal search is the following. For each time series, we follow an iterative process consisting of looking for successive significant dominant peaks in the periodograms of the corresponding radial-velocity residuals.
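Schematically, the peak-identification step of each iteration can be reproduced with standard tools; a minimal sketch using astropy's Lomb-Scargle periodogram and its Baluev false-alarm estimate (illustrative only, not the DACE implementation itself; the model readjustment between iterations, described next, is omitted):
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

def strongest_peak(t, rv, rv_err):
    """Return the dominant period [d] in an RV series and its Baluev FAP.
    t [d], rv and rv_err [m/s] are numpy arrays (names illustrative)."""
    ls = LombScargle(t, rv, rv_err)
    # Search periods from 1 to 10 000 days, as in the text
    freq, power = ls.autopower(minimum_frequency=1.0/10000.0,
                               maximum_frequency=1.0)
    i = np.argmax(power)
    fap = ls.false_alarm_probability(power[i], method='baluev')
    return 1.0/freq[i], fap
\end{verbatim}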
At each step, the radial-velocity residuals are computed by readjusting the model composed of N independent Keplerians, potential linear or quadratic drift terms to fit long-term trends, the individual instrumental offsets, and additional noise terms. We fit a combination of white noise terms corresponding to the individual instrumental precisions\footnote{The instrumental precisions are well constrained for each version of CORALIE, calibrated on non-active stars: $\sigma_{COR98}=5.0\pm0.5$\,m\,s$^{-1}$, $\sigma_{COR07}=8.0\pm0.5$\,m\,s$^{-1}$, and $\sigma_{COR14}=3.0\pm0.5$\,m\,s$^{-1}$. Those values are used as priors on the instrumental noise terms in \autoref{sec:results}.} and a global noise term attributed to intrinsic stellar jitter. This approach allows us to obtain an idea of how much noise can be attributed to stellar physics; however, one must be aware of the degeneracy between those two sources of noise, which is only partially lifted by using strong priors on the instrumental noise. The final error bars on the velocities correspond to the quadratic sum of the error computed by the data reduction software, the instrumental noise, and the stellar jitter. We proceeded with the periodicity search by computing the periodogram of the data in the $1-10\,000$\,day range\footnote{Using the algorithm implemented on DACE (see \citet{Delisle2020b}) and setting the upper bound of the periodogram at approximately twice the time span of the survey.} and using the false alarm probability (FAP) to assess the significance of the signals, following the formalism of \citet{Baluev2008}. Significant signals can have different origins, and they are discussed in \autoref{subsec:stell_lineprof}. \subsection{Stellar activity and line profile analysis}\label{subsec:stell_lineprof} Stellar activity in giant stars originates from different phenomena. Short-period modulations on the order of hours to days (first discussed by \citealt{Walker1989,Hatzes1993,Hatzes1994}) are understood to be the result of solar-like radial pulsations (p, g, or mixed modes) \citep{Frandsen2002,DeRidder2006,Hekker2006a}. Concerning longer-period variations, mechanisms such as magnetic cycles \citep{Santos2010,Dumusque2011a}, rotational modulation of features on the stellar surface \citep[star spots, granulation, etc.;][]{Lambert1987,Larson1993,DelgadoMena2018}, beating of modes, or a combination of all three are to be considered. Non-radial oscillations have also been discussed \citep{Hekker2006c} and confirmed by \citet{DeRidder2009,Hekker2010b} as a source of periodic modulation of the spectroscopic cross-correlation profile. Those modes can have lifetimes of up to several hundreds of days \citep{Dupret2009}.
Computing the contrast, radial velocity, full width at half maximum (FWHM), and bisector inverse span (BIS), which are linked to the first four moments of the line profile, gives enough information to precisely monitor the evolution of the profile over the time series \citep{Aerts2000}. Magnetic activity enhances the emission from the stellar corona and chromosphere, resulting in emissions in the X-ray and UV regions, as well as in the cores of the \textit{Ca II H\&K} lines and H$\alpha$. The H$\alpha$ activity index is sensitive to solar prominences and chromospheric activity. The reversal emission in the line core of \textit{Ca II H\&K} (S-index) \citep{Wilson1978}, which measures the contributions from the stellar photosphere and chromosphere, and the $log\,R'_{HK}$ activity index, which measures the chromospheric contribution of the \textit{Ca H\&K} lines excluding the photospheric component, cannot be directly computed from the CORALIE spectra in a reliable way because of the low S/N in this part of the spectra. The time series and corresponding periodograms of those line-profile quantities and of the H$\alpha$ chromospheric indicator \citep[following the method described in][]{Boisse2009,GomesdaSilva2011} are produced systematically to check for any signs of periodicity and a possible origin of the radial-velocity variations. Correlations between these indicators and the radial velocities were also checked. Causes such as stellar pulsations can be ruled out by comparing the behavior of line profiles from different spectral regions; for pulsating stars, the temporal and phasing behavior of the moments should remain the same for any spectral region (a signature should also be present in the BIS \citep{Hatzes1999}). For detailed examples, we invite the reader to consult the analyses of \citet{Briquet2001b} or \citet{Briquet2004}, which attempted to discriminate between stellar pulsation and rotational modulation (presence of stellar spots) as the source of observed periodic variability, using \textit{Si II} and \textit{He I} lines in slowly pulsating B stars. In the case of rotational modulation, the BIS and the S-index should vary in phase and with the same period as the radial velocities, a period that should also correspond to the stellar rotation period. However, phase shifts have been observed, for example, in the case of the G2 dwarf HD\,41248 \citep{Santos2014}. The star exhibits a 25\,day periodicity in the radial-velocity, FWHM, and $log\,R'_{HK}$ time series, probably explained by rotational modulation combined with a strong differential rotation of the star. We should stress here that giant stars are still not fully understood, and we have to keep in mind that the absence of periodic signals in line-shape variations does not guarantee that the radial-velocity signal is induced by a planetary companion. It remains, however, our best interpretation of the observations. \section{Analysis of individual systems: Orbital solutions}\label{sec:results} Following the approach described in \autoref{subsec:ts_obs}, we analyzed the long time series of observations obtained for the three targets presented in this paper. The final parameters of each system were computed using the MCMC algorithm implemented in DACE (developed by \citet{Diaz2014,Diaz2016}) to probe the complete parameter space, with $1.6$ million iterations.
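For reference, a minimal sketch of the Keplerian radial-velocity model underlying these fits, together with the mapping from the $(\sqrt{e}\,\cos{\omega}, \sqrt{e}\,\sin{\omega})$ parameterization described in the next paragraph back to $(e, \omega)$ (illustrative code, not the DACE implementation):
\begin{verbatim}
import numpy as np

def kepler_E(M, e, n_iter=50):
    """Solve Kepler's equation E - e sin(E) = M by Newton iteration."""
    E = M.copy()
    for _ in range(n_iter):
        E -= (E - e*np.sin(E) - M) / (1.0 - e*np.cos(E))
    return E

def rv_model(t, P, K, e, omega, T_p, gamma=0.0):
    """Stellar radial velocity: gamma + K [cos(nu+omega) + e cos(omega)]."""
    M = 2.0*np.pi*np.mod(t - T_p, P)/P                # mean anomaly
    E = kepler_E(M, e)                                # eccentric anomaly
    nu = 2.0*np.arctan2(np.sqrt(1.0+e)*np.sin(E/2),   # true anomaly
                        np.sqrt(1.0-e)*np.cos(E/2))
    return gamma + K*(np.cos(nu + omega) + e*np.cos(omega))

def e_omega(x, y):
    """Map x = sqrt(e) cos(omega), y = sqrt(e) sin(omega) to (e, omega)."""
    return x*x + y*y, np.arctan2(y, x)
\end{verbatim}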
We fit the following parameters for the Keplerian model: log\,P and log\,K, to better explore ranges of several orders of magnitude with a uniform prior; $\sqrt{e}\,\cos{\omega}$ and $\sqrt{e}\,\sin{\omega}$, to obtain a uniform prior for the eccentricity; and finally $\lambda_0$, the mean longitude at the epoch of reference (i.e., $BJD=2\,455\,500$ [d]), with a uniform prior. We used a uniform prior for the COR07 offset of reference, and Gaussian priors for the relative offsets between COR07 and COR98/14: $\Delta\,RV_{COR98-COR07}$: $\mathcal{N}(0,4)$ m\,s$^{-1}$ and $\Delta\,RV_{COR14-COR07}$: $\mathcal{N}(14,4)$ m\,s$^{-1}$. We also used Gaussian priors for the instrumental noise: $\sigma_{COR98}$: $\mathcal{N}(5,1)$ m\,s$^{-1}$, $\sigma_{COR07}$: $\mathcal{N}(8,1.5)$ m\,s$^{-1}$, and $\sigma_{COR14}$: $\mathcal{N}(3,0.5)$ m\,s$^{-1}$. Finally, we used a uniform prior for the stellar jitter parameter. For all three targets, the fit instrumental noises (on top of the photon noise) are in the range of the individual instrumental precisions. In the case of HD\,64121, the fit stellar jitter clearly dominates over the instrumental precision, with a level of $\sim$17\,m\,s$^{-1}$. For each time series, we checked for periodicities and correlations in the activity-related products of the high-resolution spectra mentioned above. \subsection{HD 22532}\label{subsec:hd22532} For HD\,22532\footnote{TIC\,200851704; GAIA DR2 4832768399133598080.}, we detect a $\sim$873\,day periodic variation of the radial velocities, which, fit by a Keplerian model, corresponds to a planet in a quasi-circular orbit, 1.9\,au away from its star, with a semi-amplitude of 40\,m\,s$^{-1}$ corresponding to a minimum mass of 2.1\,M$_J$ (using $M_{\star}=1.2$\,M$_{\odot}$ from \autoref{tab:stellar_params}). We observe in the periodogram of the H$\alpha$ activity index of HD\,22532 (see bottom periodogram in \autoref{fig:timeseries_hd22532}) a non-significant peak at $\sim$810 days, with a FAP level well above 10\%, at the same level as the higher-frequency white noise. We checked for a linear correlation with the radial velocities; the weighted Pearson coefficient was found to be low, $R_{P}=-0.396\pm0.068$. We also computed the weighted Spearman's rank, $R_S=0.411\pm0.066$, which is also considered a low correlation. The large dispersion of the H$\alpha$ data points and the low significance of a period approximately 60 days shorter than the one detected in the radial velocities indicate that those variations are most likely not at the origin of, nor linked to, the radial-velocity signal. \begin{figure}[!p] \centering \adjincludegraphics[width=1\columnwidth, trim={0 0 0 {-.004\height}},clip]{HD22532_mcmc_figs_new.pdf} \caption{Top panel: Radial velocities of HD\,22532 from CORALIE (COR98 in blue, COR07 in orange, and COR14 in green) with the best Keplerian model superimposed (solid line), and the corresponding residuals around the solution. Second panel: Phased radial-velocity solution. Third panel: Periodograms of the radial-velocity time series, of the residuals after subtraction of the fit periodic signal, and of the H$\alpha$ activity index time series. The red vertical line indicates the period of the orbital solution (872.6\,days).
Horizontal lines are the FAP levels at 10\% (continuous), 1\% (dashed), and 0.1\% (dotted-dashed).} \label{fig:timeseries_hd22532} \end{figure} \begin{figure}[!pt] \centering \adjincludegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{HD64121_mcmc_figs_new.pdf} \caption{Same as \autoref{fig:timeseries_hd22532}, but for HD\,64121. The period of the best solution is 623.0\,days.} \label{fig:timeseries_hd64121} \end{figure} \begin{figure}[!pt] \centering \adjincludegraphics[width=1\columnwidth, trim={0 0 0 0},clip]{HD69123_mcmc_figs_new.pdf} \caption{Same as \autoref{fig:timeseries_hd22532}, but for HD\,69123. The period of the best solution is 1193.3\,days.} \label{fig:timeseries_hd69123} \end{figure} \subsection{HD 64121}\label{subsec:hd64121} In the case of HD\,64121\footnote{TIC\,264770836; GAIA DR2 5488303966125344512.}, a $\sim$623\,day periodic variation is fit by a Keplerian model. It corresponds to a planet in a low-eccentricity orbit, 1.5\,au away from its star, with a semi-amplitude of $\sim$55\,m\,s$^{-1}$, corresponding to a minimum mass of 2.6\,M$_J$ ($M_{\star}=1.18$\,M$_{\odot}$). The periodogram of the radial-velocity residuals, after subtraction of the fit, presents a non-significant peak at $\sim$1\,000\,days at the same level as the higher-frequency noise. HD\,64121 also exhibits a similar non-significant peak in the periodogram of the H$\alpha$ activity index (see bottom periodogram in \autoref{fig:timeseries_hd64121}), at $\sim$550 days. The weighted Pearson correlation coefficient with the radial velocities was found to be non-significant, at $R_P=0.072\pm0.116$. We also computed the weighted Spearman's rank, $R_S=0.100\pm0.118$, which is also non-significant. We reach the same conclusion as for HD\,22532: those variations are most likely not at the origin of, nor linked to, the radial-velocity signal. \subsection{HD 69123}\label{subsec:hd69123} Finally, HD\,69123\footnote{TIC\,146264536; GAIA DR2 5544699390684005248.} presents the longest periodic variation, with a $\sim$1193\,day signal corresponding to a planet in a slightly eccentric orbit with $e=0.2$. The semi-major axis of the planetary orbit is $\sim$2.5\,au, and the semi-amplitude of $\sim$47\,m\,s$^{-1}$ leads to a minimum mass of 3\,M$_J$ for the planetary companion ($M_{\star}=1.43$\,M$_{\odot}$). HD\,69123 presents a peak in the periodogram of the H$\alpha$ activity index, with a FAP level below 1\% (bottom panel in \autoref{fig:timeseries_hd69123}), at a period of $\sim$367 days. We suspect this almost one-year periodicity to be caused by telluric contamination of the H$\alpha$ line, potentially by water lines. \subsection{Intrinsic variability and final solutions} For the three stars, none of our other activity indicators (contrast, FWHM, and BIS) show any similar periodicity or significant correlation with the radial velocities (see \autoref{apdx:perio_act_indic}). We also checked the V-band photometric data available in the All-Sky Automated Survey \citep[ASAS-3,][]{Pojmanski2002} for our stars. This survey is very interesting, as it is one of the few surveys with a time span of almost nine years. For reasons of consistency and reliability of the data post-processing (mainly the correction of saturation and camera focus stability due to instrumental issues), we have to treat these data with caution when using them to check for variability due to intrinsic stellar processes or surface rotational modulation. We discuss this matter in more detail in \citet{Pezzotti2021}.
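For completeness, the weighted correlation coefficients quoted above follow the standard weighted-moment definition; a minimal sketch, assuming inverse-variance weights $w_i = 1/\sigma_i^2$ (an assumption on our part; the quoted uncertainties on the coefficients would require an additional resampling step, omitted here):
\begin{verbatim}
import numpy as np

def weighted_pearson(x, y, w):
    """Weighted Pearson correlation between two indicators (e.g., RV and H-alpha)."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    mx, my = np.sum(w*x), np.sum(w*y)
    cov = np.sum(w*(x - mx)*(y - my))
    return cov / np.sqrt(np.sum(w*(x - mx)**2) * np.sum(w*(y - my)**2))
\end{verbatim}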
No periodicities linked to the ones detected in the radial-velocity data have been found for any of the three stars presented in this paper. \begin{table*}[!htb] \centering \begin{threeparttable} \caption{Radial-velocity observation statistics, best-fit solutions of the model with instrumental offsets, nuisance parameters, Keplerian orbital parameters, and inferred planetary parameters.} \begin{tabular}{llccc} \hline \hline & & HD\,22532b & HD\,64121b & HD\,69123b \\ \hline \multicolumn{5}{c}{\textbf{Observations}}\\ \hline $N_{obs}$ & & 52 & 36 & 36 \\ $T_{span}$ & $[days]$ & 5016 & 4853 & 4507 \\ $rms_{tot}$ & $[m\,s^{-1}]$ & 31.15 & 44.93 & 31.68 \\ $rms_{res}$ & $[m\,s^{-1}]$ & 8.44 & 15.93 & 8.42 \\ $\chi^2_{red}$ & & 1.30 & 1.44 & 1.69 \\ \hline \multicolumn{5}{c}{\textbf{Offsets $^{(1)}$}}\\ \hline $\Delta\,RV_{COR98-COR07}$ & $[m\,s^{-1}]$ & 2.0~$\pm$~2.3 & -0.1~$\pm$~3.7 & -4.1~$\pm$~3.5 \\ $RV_{COR07}$ & $[m\,s^{-1}]$ & 29248.9~$\pm$~1.5 & -4117.9~$\pm$~4.0 & 27476.7~$\pm$~2.4 \\ $\Delta\,RV_{COR14-COR07}$ & $[m\,s^{-1}]$ & 20.9~$\pm$~2.2 & 15.1~$\pm$~3.4 & 21.9~$\pm$~2.8 \\ \hline \multicolumn{5}{c}{\textbf{Instrumental noises}}\\ \hline $\sigma_{COR98}$ & $[m\,s^{-1}]$ & 4.7~$\pm$~1.0 & 4.9~$\pm$~1.0 & 5.2~$\pm$~1.0 \\ $\sigma_{COR07}$ & $[m\,s^{-1}]$ & 6.8~$\pm$~1.2 & 7.8~$\pm$~1.5 & 7.8~$\pm$~1.4 \\ $\sigma_{COR14}$ & $[m\,s^{-1}]$ & 3.1~$\pm$~0.5 & 3.0~$\pm$~0.5 & 3.0~$\pm$~0.5 \\ \hline \multicolumn{5}{c}{\textbf{Stellar jitter}}\\ \hline $\sigma_{jit}$ & $[m\,s^{-1}]$ & 2.1~$\pm$~1.6 & 16.8~$\pm$~2.6 & 7.2~$\pm$~1.8 \\ \hline \multicolumn{5}{c}{\textbf{Keplerians}}\\ \hline $P$ & $[days]$ & 872.6~$\pm$~2.8 & 623.0~$\pm$~3.4 & 1193.3~$\pm$~7.0 \\ $K$ & $[m\,s^{-1}]$ & 40.0~$\pm$~1.6 & 55.2~$\pm$~4.1 & 46.8~$\pm$~2.4 \\ $e$ & & 0.03~$\pm$~0.03 & 0.11~$\pm$~0.07 & 0.19~$\pm$~0.06 \\ $\omega$ & $[deg]$ & 169.1~$\pm$~88.7 & 2.7~$\pm$~56.0 & -67.3~$\pm$~21.7 \\ $\lambda_0$ $^{(2)}$ & $[deg]$ & 110.7~$\pm$~2.3 & -77.5~$\pm$~7.3 & 227.6~$\pm$~4.5 \\ $T_p$ $^{(2)}$ & $[rjd]$ & 5575.0~$\pm$~221.0 & 5653.0~$\pm$~130.0 & 5715.7~$\pm$~64.6 \\ \hline $a$ & $[au]$ & 1.900~$\pm$~0.004 & 1.510~$\pm$~0.006 & 2.482~$\pm$~0.010 \\ $m_2\,\sin{i}$ $^{(3)}$ & $[M_J]$ & 2.12~$\pm$~0.09 & 2.56~$\pm$~0.19 & 3.04~$\pm$~0.16 \\ \hline \end{tabular} \begin{tablenotes} \small \item {$^1$} The reference instrument is COR07; the corresponding row gives its absolute radial-velocity offset. \item {$^2$} The mean longitude is given at $BJD=2\,455\,500$ [d], while $2\,450\,000$ has been subtracted from the date of passage through periastron ($T_p$). \item {$^3$} Using the model-independent mass from seismic inversions \citep[see][]{Buldgen2021}.\end{tablenotes} \label{tab:orbit_params} \end{threeparttable} \end{table*} We are thus fairly confident that the observed radial-velocity periodic variations are not due to chromospheric stellar activity or to rotational modulation of surface features such as spots, which would require a significant percentage of the stellar surface to be covered. We also found no indication of long-period, non-radial oscillation modes (neither matching periodicities nor corresponding harmonics in the line profile moments). We thus consider that the observed radial-velocity signals are due to planetary companions orbiting the stars. The resulting models and residuals are shown in \cref{fig:timeseries_hd22532,fig:timeseries_hd64121,fig:timeseries_hd69123}, overplotted on the radial velocities.
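The semi-major axes and minimum masses listed in the last rows of \autoref{tab:orbit_params} follow from the fitted Keplerian parameters and the stellar mass, through the binary mass function, $m_2\sin{i} \simeq K\sqrt{1-e^2}\,\left(P/2\pi G\right)^{1/3}M_{\star}^{2/3}$ for $m_2\ll M_{\star}$, and Kepler's third law. As an illustration, the minimal sketch below (in Python with SI constants; a simplified check, not our analysis pipeline) reproduces the HD\,22532\,b entries to rounding accuracy:
\begin{verbatim}
import numpy as np

G = 6.674e-11       # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
M_jup = 1.898e27    # kg
AU = 1.496e11       # m
DAY = 86400.0       # s

def planet_params(P_days, K_ms, e, M_star_sun):
    # Minimum mass from the binary mass function (m_p << M_star)
    # and semi-major axis from Kepler's third law.
    P, Ms = P_days * DAY, M_star_sun * M_sun
    msini = (K_ms * np.sqrt(1 - e**2)
             * (P / (2 * np.pi * G))**(1/3) * Ms**(2/3))
    a = (G * Ms * P**2 / (4 * np.pi**2))**(1/3)
    return msini / M_jup, a / AU

print(planet_params(872.6, 40.0, 0.03, 1.2))  # ~2.1 M_J, ~1.9 au
\end{verbatim}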
In \autoref{tab:orbit_params}, we present the statistics of the distributions (i.e., the median and standard deviation) of the usual set of Keplerian parameters $P$, $K$, $e$, $\omega$, and T$_P$, as well as the distributions of the semi-major axes and minimum masses, derived from the MCMC chains of the fitted parameters. In \autoref{apdx:corner}, we present the corner plots of the posterior distributions of the fitted parameters for each star. The weighted rms scatter of the radial velocities, $rms_{tot}$, and of the residuals around the Keplerian fit, $rms_{res}$, are also provided in the table. For all three targets, the rms of the residuals is comparable to that of single giant stars with similar $B-V$ \citep[see][Fig.\,3]{Hekker2006b}. \section{Discussion and conclusion}\label{sec:conclusion} Since 2006, we have been conducting a high-precision radial-velocity survey of a volume-limited sample of 641 giant stars using the CORALIE spectrograph on the 1.2\,m Leonhard Euler Swiss telescope at La Silla Observatory (Chile). Our goal is to better understand the formation and evolution of planets around stars more massive than the Sun, as they evolve toward the giant branch, through a statistical study of the properties of the detected planet population. The sample is volume limited, targeting giant stars in the southern hemisphere (declination $<25^{\circ}$) inside a 300\,pc radius around the Sun. The evolutionary stage of the stars was constrained by magnitude ($M_V<2.5$) and color ($0.78<B-V<1.06$) cut-offs to avoid main-sequence stars and intrinsically variable late-type giants. We derived reliable spectroscopic parameters from CORALIE-14 spectra \citep[following][]{Santos2004,Sousa2014,Alves2015}. Our sample shows an iron metallicity distribution similar to that of stars in the solar neighborhood, peaking between 0.0 and 0.1\,dex, but missing the most metal-poor and metal-rich stars. This may be explained by the young age of the giants, compared to their dwarf counterparts, which did not leave them enough time to migrate in the Galaxy \citep{Wang2013,Minchev2013}. A color cut-off bias could also be part of the explanation for this effect, excluding the low-log-g stars with high metallicity and the high-log-g stars with low metallicity, as discussed in \citet{Mortier2013, Adibekyan2019}. We also obtained stellar masses for the sample by fitting global parameters with the SPInS software \citep{Lebreton2020}, using evolutionary tracks from \citet{Pietrinferni2004}. The distribution ranges approximately from 0.75 to 4\,M$_{\odot}$, with a maximum around 2\,M$_{\odot}$. This paper is the first of a series in which we present the first results of the survey, namely the detection and characterization of three new planetary companions orbiting the giant stars HD\,22532, HD\,64121, and HD\,69123, taking advantage of asteroseismic masses obtained with the TESS data \citep{Ricker2014}, following the methodology of \citet{Buldgen2019}. For each star, we systematically checked for any correlation with chromospheric activity, rotational modulation of surface features, or long-term non-radial pulsations. We also consulted the corresponding ASAS-3 photometry time series \citep{Pojmanski2002}, spanning 6.8 to 7.4 years. No significant periodicities or correlations linked to the detected radial-velocity signals have been found. \begin{figure*}[t!]
\centering \adjincludegraphics[width=1\textwidth, trim={0 0 0 0},clip]{pla_params.pdf} \caption{Stellar and planetary parameter relations for the 186 discovered planets orbiting giant stars. The three planets presented in this paper are represented by red dots.} \label{fig:pla_params} \end{figure*} The new planets are typical representatives of the known population of planets around giant stars, considering their masses, semi-major axes, and low eccentricities. This is illustrated in \cref{fig:pla_params}, displaying the new detections together with the planet candidates from the literature. Most planets discovered around giant stars have eccentricities below 0.2-0.3 \citep[e.g.,][]{Jones2014,Yilmaz2017}, and orbit at distances larger than 1\,au from the central star. Monitoring of the CASCADES sample is continuing. As an interesting by-product, it is also bringing important information on stellar binaries and star-brown-dwarf systems. The formation scenario for the latter is still unclear. The system may form initially as a binary star with an extreme mass ratio, or through a formation process comparable to the one for planets in the proto-stellar disk, via disk instability or core accretion. The maximum mass of planets forming in a massive disk is not known. Forthcoming papers will present additional planetary candidates, as well as potential brown dwarfs and spectroscopic binaries found in the sample. We will also address, through a statistical study of our sample, the main open questions linked to the planet population orbiting intermediate-mass (evolved) stars: the distribution of orbital properties as constraints for planet formation models, and the correlations of planet characteristics and occurrence rate with primary star properties such as mass and metallicity. In this context, some planet-host stars from our sample are particularly well suited for a deep asteroseismic analysis, giving access to their internal structure. The available information includes well-constrained planetary signals, long, high-precision photometric time series from high-cadence observations with TESS, and accurate spectroscopic parameters. Asteroseismic analysis can provide precious information concerning the past and future evolution of such systems. Among the most interesting related questions are the impact of stellar evolution on the planetary orbits and the potential engulfment of planets by the star \citep{Pezzotti2021}, invoked for instance to explain the apparent lack of close-in, short-period planets ($P\leqslant100$\,days, $a\leqslant0.5$\,au). The second and third publications of the CASCADES series will focus on the asteroseismic analysis of the three stellar hosts presented in this paper \citep{Buldgen2021} and on the analysis of a new planet-host star \citep{Pezzotti2021}, for which the full evolution of the system can be modeled. Giant stars hosting planets are good candidates for planetary transit searches. Due to the increase in radius at the giant stage, companions of giant stars have a higher probability of transit than planets around main-sequence stars. However, as these planets are on long-period orbits, the transit duration is on the order of tens to hundreds of hours. Moreover, because of the relative size of the planet and the star, the expected transit depth is very small: for our sample stars, it is in the $10-1000$\,ppm range.
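These orders of magnitude are easy to verify. The short sketch below (in Python with SI constants; the 6\,R$_{\odot}$ stellar radius is an assumed, illustrative value, typical of the giants in our sample but not one of the measured radii) evaluates the geometric transit probability, $\simeq R_{\star}/a$ for a circular orbit, and the expected transit depth, $\simeq (R_p/R_{\star})^2$:
\begin{verbatim}
R_sun = 6.957e8   # m
R_jup = 7.149e7   # m
AU = 1.496e11     # m

def transit_estimates(R_star_sun, a_au, R_p_jup=1.0):
    # Geometric transit probability ~ R_star/a and
    # transit depth ~ (R_p/R_star)^2.
    prob = R_star_sun * R_sun / (a_au * AU)
    depth = (R_p_jup * R_jup / (R_star_sun * R_sun)) ** 2
    return 100.0 * prob, 1e6 * depth      # [%], [ppm]

# Assumed 6 R_sun giant hosting a 1 R_J planet at 1.9 au:
print(transit_estimates(6.0, 1.9))        # ~1.5 %, ~290 ppm
\end{verbatim}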
Although very limiting for ground-based observations, these two aspects are tractable from space with satellites such as TESS \citep{Ricker2014} and CHEOPS \citep{Benz2020}. The three systems described in this paper present transit probabilities around 1.5\,\% and transit depths between 170 and 350\,ppm (considering planets with a 1\,R$_J$ radius). Unfortunately, none of these candidates has thus far had a transit time prediction in the window of the TESS observations. \begin{acknowledgements} We thank all observers at La Silla Observatory from the past fourteen years for their contribution to the observations and the quality of their work. We acknowledge financial support from the Swiss National Science Foundation (SNSF) for the project 2020-178930. This work has, in part, also been carried out within the framework of the National Centre for Competence in Research PlanetS supported by SNSF. In particular, this publication makes use of The Data \& Analysis Center for Exoplanets (DACE, https://dace.unige.ch), a platform of PlanetS, based at the University of Geneva (CH), dedicated to extrasolar planet data visualisation, exchange and analysis. G.B. acknowledges funding from the SNF AMBIZIONE grant No. 185805 (Seismic inversions and modelling of transport processes in stars). P.E. has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 833925, project STAREX). C.P. acknowledges funding from the Swiss National Science Foundation (project Interacting Stars, number 200020-172505). V. A. acknowledges the support from FCT through Investigador FCT contract no. IF/00650/2015/CP1273/CT0001. N.C.S acknowledges support from FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UID/FIS/04434/2019; UIDB/04434/2020; UIDP/04434/2020; PTDC/FIS-AST/32113/2017 \& POCI-01-0145-FEDER-032113; PTDC/FIS-AST/28953/2017 \& POCI-01-0145-FEDER-028953. S.G.S acknowledges the support from FCT through Investigador FCT contract nr. CEECIND/00826/2018 and POPH/FSE (EC). N.L. acknowledges financial support from "Programme National de Physique Stellaire" (PNPS) of CNRS/INSU, France. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec - Introduction} In recent years, much effort has been spent to identify the weak points of Low-Density Parity-Check (LDPC) code graphs, responsible for the error floors of iterative decoders. After the introduction of the seminal concept of \emph{pseudocodewords} \cite{Bib-FreKoetTIT01},\cite{Bib-KoetVonISTC03}, it is now established that these errors are caused by small subsets of nodes of the Tanner graph that act as attractors for iterative decoders, even if they are not the support of valid codewords. These structures have been named \emph{trapping sets} \cite{RichErrorFloors},\cite{Vasic06},\cite{ChiVasJSAC09} or \emph{absorbing sets} \cite{Dol_ICC07},\cite{Dol_JSAC09},\cite{Dol_TCOM09} or \emph{absorption sets} \cite{Schlegel}, defined in slightly different ways. In this paper we build mainly on \cite{Schlegel} and \cite{Dol_ICC07},\cite{Dol_TCOM09},\cite{Dolecek}. The first merit of \cite{Dol_ICC07},\cite{Dolecek} has been to define Absorbing Sets (ASs) from a purely topological point of view. Moreover, the authors have analyzed the effects of ASs on finite-precision iterative decoders, on the basis of hardware and Importance Sampling simulations \cite{Dol_TCOM09},\cite{Dol_JSAC09}. The behavior of ASs depends on the decoder quantization, and in \cite{Dol_TCOM09} they are classified as \emph{weak} or \emph{strong} depending on whether or not they can be resolved by properly tuning the decoder dynamics. In \cite{Dolecek_PostProc} the same research group proposes a postprocessing method to resolve ASs, once the iterative decoder is trapped. In \cite{Schlegel} the author defines \emph{absorption sets} (equivalent to ASs) and identifies a variety of ASs for the LDPC code used in the IEEE 802.3an standard. The linear model of \cite{SunTahFitz}, suitable for Min-Sum (MS) decoding, is refined to meet the behavior of belief propagation decoders. Under some hypotheses, the error probability level can be computed assuming an unsaturated LDPC decoder. Loosely speaking, in this model an AS is resolved if the messages coming from the rest of the graph tend to infinity at a rate higher than that of the wrong messages inside the AS. In practical implementations, messages cannot get arbitrarily large. Besides, hypotheses on the growth rate of the messages entering the AS are needed. In \cite{Schlegel} Density Evolution (DE) is used, but this is accurate only for LDPC codes with infinite (in practice, very large) block lengths. In \cite{Bib-ButSieALL11} the saturation is taken into account and the input growth rate is evaluated via Discretized DE or empirically via simulation. In \cite{Vasic06} and successive works, the authors rate the dangerousness of trapping sets with the \emph{critical number}, which is valid for hard-decision decoders but fails to discriminate among the soft inputs of an iterative decoder. In this paper, we look for a concise, quantitative way to rate the dangerousness of ASs under soft decoding. We focus on Min-Sum (MS) soft decoding, which is the basis of practical LDPC decoder implementations, leaving aside more theoretical algorithms such as Sum-Product (SPA) or Linear Programming (LP). We study the evolution of the messages propagating inside the AS, when the all-zero codeword is transmitted. Unlike \cite{Schlegel}, we assume a limited dynamic range of the Log-Likelihood Ratios (LLRs), as in a practical decoder implementation. The AS dangerousness can be characterized by a \emph{threshold} $\tau$.
We show that, under certain hypotheses, the decoder convergence towards the right codeword can fail only if there exist channel LLRs smaller than or equal to $\tau$. When all channel LLRs are larger than $\tau$, successful decoding is assured. We also show with examples that ASs with greater $\tau$ are more harmful than ASs with smaller $\tau$. Finally, we provide an efficient algorithm to find $\tau$. For many ASs, $\tau<0$. In these cases \emph{we can deactivate ASs simply by setting two saturation levels}, one for extrinsic messages (in our system model, this level is normalized to $1$), and another level, smaller than $|\tau|$, for channel LLRs. This way the code designer can concentrate all efforts on avoiding only the most dangerous ASs, letting the receiver automatically deactivate the other ones with extrinsic messages strong enough to unlock them. The article is organized as follows. Section \ref{sec - System model} settles the system model. Section \ref{sec - Equilibria} introduces the notions of equilibria and thresholds. Section \ref{sec - Generalized Equilibria} deals with generalized equilibria, a tool to study ASs with arbitrary structure. Section \ref{sec - Limit cycles} deals with limit cycles. Section \ref{sec - Chaotic} studies the message passing behavior above threshold, and provides a method to deactivate many ASs. Section \ref{sec - Examples} shows practical examples of ASs that behave as predicted by our model during MS decoding on real complete LDPC graphs. Section \ref{sec - Search algorithm} proposes an efficient algorithm to compute $\tau$. Section \ref{sec - Other Properties} highlights other interesting properties. Finally, Section \ref{sec - Conclusions} concludes the paper. \section{System model and definitions} \label{sec - System model} We recall that a subset $\mathcal{D}$ of variable nodes (VNs) in a Tanner graph is an $(a,b)$ absorbing set if \cite{Dol_ICC07} \begin{itemize} \item every VN in $\mathcal{D}$ has strictly more boundary Check Nodes (CNs) in $\mathcal{E}(\mathcal{D})$ than in $\mathcal{O}(\mathcal{D})$, where $\mathcal{E}(\mathcal{D})$ and $\mathcal{O}(\mathcal{D})$ denote the sets of boundary CNs connected to $\mathcal{D}$ an \emph{even} or \emph{odd} number of times, respectively; \item the cardinalities of $\mathcal{D}$ and $\mathcal{O}(\mathcal{D})$ are $a$ and $b$, respectively. \end{itemize} Besides, $\mathcal{D}$ is a \emph{fully absorbing set} if, in addition, all VNs outside $\mathcal{D}$ have strictly fewer neighbors in $\mathcal{O}(\mathcal{D})$ than outside it. In \cite{Dolecek} it is observed that a pattern of all-ones for the VNs in $\mathcal{D}$ is a stable competitor of the all-zeros pattern for the iterative bit-flipping decoder, notwithstanding a set $\mathcal{O}(\mathcal{D})$ of \emph{unsatisfied boundary CNs} (dark CNs in Fig. \ref{fig - ASs topology}). ASs behave in a similar manner also under iterative soft decoding, as shown and discussed in \cite{Dol_TCOM09}, \cite{Dol_JSAC09}. \begin{figure} \centering \includegraphics[width=\figwidthlarge,clip]{./AbsorbingSetsTopologies}\\ \caption{Three absorbing sets: in Fig. \ref{fig - ASs topology}(a), a maximal $(4,4)$ AS; in Fig. \ref{fig - ASs topology}(b), a $(5,3)$ AS; in Fig. \ref{fig - ASs topology}(c), a $(4,0)$ AS, which is also the support of a codeword.}\label{fig - ASs topology} \end{figure} If all CNs are connected to $\mathcal{D}$ no more than twice, the AS is \emph{elementary}. Elementary ASs are usually the most dangerous.
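The topological definition above is straightforward to test in code. The following minimal sketch (in Python with numpy; the dense parity-check matrix and the function name are our own illustrative choices) checks whether a candidate set of VNs is an $(a,b)$ absorbing set, and whether it is elementary:
\begin{verbatim}
import numpy as np

def absorbing_set_check(H, D):
    # H: binary parity-check matrix (CNs x VNs),
    # D: list of candidate VN indices.
    # Returns (a, b, elementary) if D is an AS, None otherwise.
    deg = H[:, D].sum(axis=1)           # times each CN connects to D
    even = (deg > 0) & (deg % 2 == 0)   # E(D): even boundary CNs
    odd = (deg % 2 == 1)                # O(D): odd (unsatisfied) CNs
    for v in D:
        col = H[:, v] > 0
        if np.sum(col & even) <= np.sum(col & odd):
            return None                 # v fails the majority condition
    return len(D), int(odd.sum()), bool(np.all(deg <= 2))
\end{verbatim}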
Given the code girth, elementary absorbing sets can have smaller values of $a$ and $b$ than non-elementary ones \cite{Dol_ITA10}. If ASs are the support of \emph{near-codewords} \cite{McKPos}, the smaller $a$ is, the higher the probability of error. Besides, the smaller the ratio $b/a$, the more dangerous the AS is \cite{Dol_TCOM09}. In this paper we focus on elementary ASs only, such as those in Fig. \ref{fig - ASs topology}. An AS is \emph{maximal} if $a=b$, as in Fig. \ref{fig - ASs topology}(a). Intuitively, maximal ASs are the mildest ones, since they have a large number of unsatisfied CNs. On the opposite end, an AS with $b=0$ (as in Fig. \ref{fig - ASs topology}(c)) is the support of a codeword. For our analysis we assume an MS decoder, which is insensitive to scale factors. Thus we can normalize the maximum extrinsic message amplitude. We recall that, apart from saturation, the evolution of the messages inside the AS is linear (\cite{Schlegel}, \cite{SunTahFitz}), since the CNs in $\mathcal{E}(\mathcal{D})$ simply forward the input messages. The relation among the $N = 2 |\mathcal{E}(\mathcal{D})|$ internal extrinsic messages $\mathbf{x}$ generated by VNs can be tracked during the iterations by an $N\times N$ \emph{routing} matrix $\mathbf{A}$. Basically, $A_{i,j}=1$ iff there exists an (oriented) path from message $x_j$ to message $x_i$, going across one VN. For instance, Fig. \ref{fig - messages} depicts the LLR exchange within the AS of Fig. \ref{fig - ASs topology}(b). The corresponding first row of $\mathbf{A}$ is\footnote{Matrix subscripts indicate subsets of rows and columns.} \begin{equation} \mathbf{A}_{\{1\},\{1,\ldots,N\}}=\left[\;0\;\; 0\;\; 0\;\; 0\;\; 0\;\; 0\;\; 0\;\; 0\;\; 0\;\; 1\;\; 1\;\; 0\;\right]\,. \end{equation} \begin{figure} \centering \includegraphics[width=\figwidthsmall,clip]{./ASmessages}\\ \caption{Message propagation within the AS of Fig. \ref{fig - ASs topology}(b). For simplicity, CNs in the middle of edges are not shown.}\label{fig - messages} \end{figure} To account for saturation we define the scalar function $\mathrm{sat}(x) \triangleq\mathrm{sign}(x)\cdot\min\left(|x|,1\right)$ and we say that $x$ is \emph{saturated} if $|x|\geq 1$, \emph{unsaturated} otherwise. For vectors, $\mathrm{sat}(\mathbf{x})$ is the element-wise saturation. For the time being, we consider a \emph{parallel} message passing decoder, where all VNs are simultaneously activated first, then all CNs are simultaneously activated, in turn\footnote{In the following, we will show that the results presented in this paper hold for any activation order of VNs and CNs, provided extrinsic messages are propagated just once per decoding iteration.}. The system evolution at the $k$-th iteration reads \begin{equation}\label{eq - system evolution generic dv} \mathbf{x}^{(k+1)} = \mathrm{sat}\left(\mathbf{A} \mathbf{x}^{(k)}+\mathbf{e}+\mathbf{R}\boldsymbol\lambda\right) \end{equation} where $\mathbf{x}^{(k)}$ are the extrinsic messages within the AS, $\mathbf{e}$ is the vector of extrinsic messages entering the AS through $\mathcal{O}(\mathcal{D})$, and $\mathbf{R}$ is a \emph{repetition} matrix with size $N\times a$, which constrains the $a$ channel LLRs $\boldsymbol\lambda$ to be the same for all messages $x_j$ emanating from the same VN. Referring to Fig.
\ref{fig - ASs topology}(b), \begin{spacing}{\matrixspacing} \begin{equation} \mathbf{R}^T=\begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ \end{bmatrix}\,. \end{equation} \end{spacing}\noindent Also note that the row weight of $\mathbf{R}$ is unitary, i.e. $\mathbf{R}\mathbf{1}=\mathbf{1}$. As to the extrinsic messages entering the AS from outside, we bypass the tricky problem of modeling the dynamical behavior of the decoder in the whole graph by assuming that each message entering the AS has saturated to the maximal correct LLR (i.e. $+1$, since we transmit the all-zero codeword). This is a reasonable hypothesis after a sufficient number of iterations, as observed in \cite{Dolecek_PostProc}, where the authors base their postprocessing technique on this assumption. In Section \ref{sec - Examples} we show that the decoding of a large graph is in good agreement with the predictions of this model. Under this hypothesis, we can write \begin{equation} \mathbf{e}=\mathbf{R}\mathbf{d}-\mathbf{1}- \mathbf{A}\mathbf{1} \end{equation} where $\mathbf{d}$ is the $a\times 1$ vector of the VN degrees. From now on, we will consider only left-regular LDPC codes, with $d_i=3,\;\forall i$. Most of the theorems presented in the following sections can be extended to a generic VN degree vector $\mathbf{d}$. Luckily, among regular LDPC codes this is also the case with the most favorable waterfall region. If we set $\mathbf{d}=3\cdot \mathbf{1}$, then $\mathbf{Rd}-\mathbf{1}=3\cdot\mathbf{R1}-\mathbf{1}=3\cdot\mathbf{1}-\mathbf{1}=2\cdot\mathbf{1}$, and \eqref{eq - system evolution generic dv} becomes \begin{equation}\label{eq - system evolution dv 3} \mathbf{x}^{(k+1)}= \mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}^{(k)}-\mathbf{1}\right)+2\cdot\mathbf{1}+\mathbf{R}\boldsymbol\lambda\right)\,. \end{equation} This equation is more expressive than \eqref{eq - system evolution generic dv}, as $\mathbf{x}^{(k)}-\mathbf{1}$ is the gap between the current state $\mathbf{x}^{(k)}$ and the values that the extrinsic messages should eventually achieve, once the AS is unlocked. Besides, we will show that $-\mathbf{1}\leq\boldsymbol\lambda\leq \mathbf{1}$. Therefore, $2\cdot\mathbf{1}+\mathbf{R}\boldsymbol\lambda\geq \mathbf{1}$ and $\mathbf{A} \left( \mathbf{x}^{(k)}-\mathbf{1}\right)\leq \mathbf{0}$. The two competing forces are now clearly visible. The former always helps convergence, while the latter can amplify negative terms (if the AS is not maximal, some rows of $\mathbf{A}$ have weight larger than $1$). The rest of the paper is devoted to unveiling the hidden properties of \eqref{eq - system evolution dv 3}, finding sufficient conditions for correct decoding, i.e. $\mathbf{x}^{(k)}\rightarrow\mathbf{1}$ when $k\rightarrow\infty$. We will assume a conservative condition to decouple the AS behavior from the rest of the code: we do not start with $\mathbf{x}^{(0)}=\mathbf{0}$. \emph{We take into account any configuration of extrinsic messages $\mathbf{x}^{(0)}$ that may result in a convergence failure}. We start from an iteration with the rest of the decoder messages saturated to $1$. The configuration $\mathbf{x}^{(0)}$ inside the AS, which is the result of the message evolution up to that iteration, is unknown.
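To make the model concrete, the sketch below (in Python with numpy; our own illustrative construction, not taken from any decoder implementation) iterates \eqref{eq - system evolution dv 3} for a maximal $(4,4)$ AS such as the one of Fig. \ref{fig - ASs topology}(a). With one consistent labeling of the $N=8$ messages, $\mathbf{A}$ reduces to two counter-rotating cyclic shifts, so that every row has weight $1$:
\begin{verbatim}
import numpy as np

def sat(x):
    return np.clip(x, -1.0, 1.0)

def step(x, A, R, lam):
    # One parallel iteration of Eq. (4).
    return sat(A @ (x - 1.0) + 2.0 + R @ lam)

C = np.roll(np.eye(4), 1, axis=0)        # cyclic shift
A = np.block([[C, np.zeros((4, 4))],
              [np.zeros((4, 4)), C.T]])  # two counter-rotating rings
R = np.zeros((8, 4))
for v in range(4):                       # VN v emits messages v, v+4
    R[v, v] = R[v + 4, v] = 1.0

x = -np.ones(8)                          # worst-case initial state
lam = -0.9 * np.ones(4)                  # uniform channel LLRs > -1
for _ in range(50):
    x = step(x, A, R, lam)
print(x)                                 # -> all ones: the AS unlocks
\end{verbatim}
Starting from the worst-case state $\mathbf{x}^{(0)}=-\mathbf{1}$, any uniform channel LLR above $-1$ unlocks this AS, in agreement with the threshold $\tau=-1$ of maximal ASs derived in the next Section; with $\boldsymbol\lambda=-\mathbf{1}$, the same iteration stays locked at $\mathbf{x}=-\mathbf{1}$.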
The drawback of this approach is that we give up predicting the probability of the message configurations inside the AS that lead to decoding errors. On the other hand, if no $\mathbf{x}^{(0)}$ can lock the decoder, this is true independently of the evolution of messages inside the AS. We will study equilibria, limit cycles and chaotic behaviors (i.e., aperiodic trajectories) of \eqref{eq - system evolution dv 3}, depending on the channel LLRs $\boldsymbol\lambda$ and \emph{any} initial state $\mathbf{x}^{(0)}$. \section{Equilibria, threshold definition and preliminary properties} \label{sec - Equilibria} In this Section, we study equilibria for the non-linear system \eqref{eq - system evolution dv 3}. \begin{Definition}\label{def - equlibrium} A pair $(\mathbf{x},\boldsymbol\lambda)$ is an \emph{equilibrium} iff \begin{equation} \mathbf{x}=\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}-\mathbf{1}\right)+2\cdot\mathbf{1}+\mathbf{R}\boldsymbol\lambda\right) \,.\end{equation} \end{Definition} Equilibria with $\mathbf{x}\neq\mathbf{1}$ are harmful. They behave as attractors for the evolution of the extrinsic messages $\mathbf{x}^{(k)}$, and can lead to incorrect decisions. With the aim of finding the most critical ASs, those that can lead to convergence failure even with large values of $\boldsymbol\lambda$, we would like to solve the following problem: \begin{Problem}\label{prb - original problem} \begin{align} \nonumber \tau'=\max_{\boldsymbol \lambda,\mathbf{x}}&\quad\min(\boldsymbol\lambda)\\ s.t. & \quad -\mathbf{1}\leq\mathbf{x}\leq \mathbf{1},\quad \label{eq - x_j neq 1} \exists j: x_j< 1 \\ \nonumber & \quad \mathbf{x}=\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}-\mathbf{1}\right)+2\cdot\mathbf{1}+\mathbf{R}\boldsymbol\lambda\right) \,.\end{align} \end{Problem} The constraint \eqref{eq - x_j neq 1} restricts the search to bad equilibria, having at least one extrinsic message smaller than $1$. We call $\tau'$ the \emph{threshold}, since the AS has no bad equilibria with $\boldsymbol\lambda$ above that value. In Section \ref{sec - Chaotic} we will show that the notion of threshold does not pertain only to bad equilibria, but also to any other bad trajectory of \eqref{eq - system evolution dv 3}, not achieving $\mathbf{x}^{(\infty)}=\mathbf{1}$. In the above optimization problem, for simplicity we did not assign upper and lower bounds to the channel LLRs $\boldsymbol\lambda$. In practice, we can restrict our search to the range $-\mathbf{1}\leq\boldsymbol\lambda\leq\mathbf{1}$. \begin{Theorem}\label{thm - equilibrium -1} The pair $(\mathbf{x}=-\mathbf{1},\boldsymbol\lambda=-\mathbf{1})$ is always an equilibrium. \end{Theorem} \begin{IEEEproof} Substituting $(\mathbf{x}=-\mathbf{1},\boldsymbol\lambda=-\mathbf{1})$ in the equilibrium equation, we obtain \begin{multline} \mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}-\mathbf{1}\right)+2\cdot\mathbf{1}+\mathbf{R}\boldsymbol\lambda\right) =\mathrm{sat}\left(-2\cdot\underbrace{\mathbf{A}\mathbf{1}}_{\geq \mathbf{1}}+\mathbf{1}\right)\\=-\mathbf{1}=\mathbf{x}\,. \end{multline} \end{IEEEproof} \begin{Theorem}\label{thm - equilibrium +1} The only equilibrium $(\mathbf{x},\boldsymbol\lambda)$ for a system having $\boldsymbol\lambda>\mathbf{1}$ and $-\mathbf{1}\leq\mathbf{x}\leq\mathbf{1}$ is in $\mathbf{x}=\mathbf{1}$. \end{Theorem} \begin{IEEEproof} Since $\boldsymbol\lambda>\mathbf{1}$, we can define a strictly positive quantity $\Delta\triangleq\min(\boldsymbol\lambda)-1>0$. Consider parallel message passing.
Focusing on the evolution of \eqref{eq - system evolution dv 3}, \begin{multline} \mathbf{x}^{(1)}=\mathrm{sat}\left(\mathbf{A}\underbrace{(\mathbf{x}^{(0)}-\mathbf{1})}_{\geq-2\cdot\mathbf{1}}+\underbrace{2\cdot \mathbf{1}+\mathbf{R}\boldsymbol\lambda}_{\geq(3+\Delta)\mathbf{1}}\right)\\\geq\mathrm{sat}\left(-2\underbrace{\mathbf{A}\mathbf{1}}_{\leq2\cdot\mathbf{1}}+(3+\Delta)\mathbf{1}\right) \geq\mathrm{sat}(-1+\Delta)\cdot\mathbf{1}\,. \end{multline} If $\mathrm{sat}(-1+\Delta)=1$, then $\mathbf{x}^{(1)}=\mathbf{1}$, and we stop. Otherwise, $\mathrm{sat}(-1+\Delta)=-1+\Delta$ and we go on. For the generic step, assuming $\mathbf{x}^{(k)}\geq \mathrm{sat}\left(-1+k\Delta\right)\cdot\mathbf{1}$ and proceeding by recursion, we have \begin{multline} \mathbf{x}^{(k+1)}=\mathrm{sat}\left(\mathbf{A}\underbrace{(\mathbf{x}^{(k)}-\mathbf{1})}_{\geq(-2+k\Delta)\cdot\mathbf{1}}+\underbrace{2\cdot \mathbf{1}+\mathbf{R}\boldsymbol\lambda}_{\geq(3+\Delta)\mathbf{1}}\right)\\\geq\mathrm{sat}\left(-\mathbf{1}+k\Delta\underbrace{\mathbf{A}\mathbf{1}}_{\geq \mathbf{1}}+\Delta\mathbf{1}\right) \geq\mathrm{sat}\left(-1+(k+1)\Delta\right)\cdot\mathbf{1}\,. \end{multline} The same inequalities hold also in case of sequential message passing, activating CNs in arbitrary order (once per iteration). As soon as $-1+(k+1)\Delta\geq 1$, the recursion ends. We conclude that the message passing algorithm will eventually achieve $\mathbf{x}=\mathbf{1}$. \end{IEEEproof} Since $\tau'$ is the result of a maximization, a direct consequence of the two theorems above is \begin{Corollary} \label{thm - alpha range} As for Problem \ref{prb - original problem}, $-1\leq\tau'\leq 1$. \end{Corollary} The two boundary values $\tau'=-1$ and $\tau'=1$ are the thresholds of maximal ASs and codewords, respectively: \begin{Theorem} Any support of a codeword has $\tau'=1$. Maximal absorbing sets have $\tau'=-1$. \end{Theorem} \begin{IEEEproof} We start from codewords. $(\mathbf{x}=-\mathbf{1},\boldsymbol\lambda=\mathbf{1})$ is a valid equilibrium for Problem \ref{prb - original problem}. Indeed: \begin{multline} \mathrm{sat}\left(\mathbf{A}(\mathbf{x}-\mathbf{1})+2\cdot\mathbf{1}+\mathbf{R}\boldsymbol\lambda\right)=\mathrm{sat}\left(-2\underbrace{\mathbf{A}\mathbf{1}}_{=2\cdot\mathbf{1}}+3\cdot\mathbf{1}\right)\\=\mathrm{sat}\left(-4\cdot\mathbf{1}+3\cdot\mathbf{1}\right)=-\mathbf{1}=\mathbf{x}\,. \end{multline} By Corollary \ref{thm - alpha range} we conclude that $\tau'=1$. Referring to maximal ASs, for any $-\mathbf{1}\leq\mathbf{x}^{(0)}\leq\mathbf{1}$ and $\boldsymbol\lambda>-\mathbf{1}$, we can define a strictly positive quantity $\Delta\triangleq\min(\boldsymbol\lambda)-(-1)>0$. Focusing on the evolution of \eqref{eq - system evolution dv 3}, \begin{multline} \mathbf{x}^{(1)}=\mathrm{sat}\left(\mathbf{A}\underbrace{(\mathbf{x}^{(0)}-\mathbf{1})}_{\geq-2\cdot\mathbf{1}}+\underbrace{2\cdot \mathbf{1}+\mathbf{R}\boldsymbol\lambda}_{\geq(1+\Delta)\mathbf{1}}\right)\\\geq\mathrm{sat}\left(-2\underbrace{\mathbf{A}\mathbf{1}}_{=\mathbf{1}}+(1+\Delta)\mathbf{1}\right) \geq\mathrm{sat}(-1+\Delta)\cdot\mathbf{1}\,. \end{multline} If $\mathrm{sat}(-1+\Delta)=1$, then $\mathbf{x}^{(1)}=\mathbf{1}$, and we stop. Otherwise, $\mathrm{sat}(-1+\Delta)=-1+\Delta$ and we go on.
For the generic step, assuming $\mathbf{x}^{(k)}\geq \mathrm{sat}\left(-1+k\Delta\right)\cdot\mathbf{1}$ and proceeding by recursion, we obtain \begin{multline} \mathbf{x}^{(k+1)}=\mathrm{sat}\left(\mathbf{A}\underbrace{(\mathbf{x}^{(k)}-\mathbf{1})}_{\geq(-2+k\Delta)\cdot\mathbf{1}}+\underbrace{2\cdot \mathbf{1}+\mathbf{R}\boldsymbol\lambda}_{\geq(1+\Delta)\mathbf{1}}\right)\\\geq \mathrm{sat}\left(-1+(k+1)\Delta\right)\cdot\mathbf{1}\,. \end{multline} As soon as $-1+(k+1)\Delta\geq 1$, the recursion ends. The message passing algorithm will eventually achieve $\mathbf{x}=\mathbf{1}$, which is not a valid equilibrium for Problem \ref{prb - original problem}. We conclude that at least one element in $\boldsymbol\lambda$ must be equal to $-1$, therefore $\tau'=-1$. \end{IEEEproof} \section{Generalized equilibria} \label{sec - Generalized Equilibria} Most of the effort of this Section goes into the reformulation of Problem \ref{prb - original problem}, to make it manageable. First, in place of equilibria, we consider a slightly more general case, removing the repetition matrix $\mathbf{R}$ and assuming $N$ unconstrained channel LLRs $\boldsymbol\lambda$. \begin{Definition}\label{def - generalized equilibrium} A pair $(\mathbf{x},\boldsymbol\lambda)$ is a \emph{generalized equilibrium} iff \begin{equation}\label{eq - generalized equilibrium} \mathbf{x}=\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}-\mathbf{1}\right)+2\cdot\mathbf{1}+\boldsymbol\lambda\right)\,. \end{equation} \end{Definition} Accordingly, we write the following optimization problem. \begin{Problem}\label{prb - generalized problem} \begin{align} \nonumber \tau^*= \max_{\boldsymbol \lambda,\mathbf{x}}&\quad\min(\boldsymbol\lambda)\\ \nonumber s.t. & \quad -\mathbf{1}\leq\mathbf{x}\leq \mathbf{1}, \quad \exists j: x_j< 1 \\ \nonumber & \quad \mathbf{x}=\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}-\mathbf{1}\right)+2\cdot\mathbf{1}+\boldsymbol\lambda\right). \end{align} \end{Problem} The following theorem holds. \begin{Theorem}\label{thm - tau'=tau*} As for Problems \ref{prb - original problem} and \ref{prb - generalized problem}, $\tau'=\tau^*$. \end{Theorem} \begin{IEEEproof} We show that $\tau'\leq\tau^*$ and $\tau'\geq\tau^*$. Every equilibrium is also a generalized equilibrium. Given a solution $(\mathbf{x}',\boldsymbol\lambda')$ of Problem \ref{prb - original problem} with $\min(\boldsymbol\lambda')=\tau'$, the pair $(\mathbf{x}^*,\boldsymbol\lambda^*)$ with $\mathbf{x}^*=\mathbf{x}'$ and $\boldsymbol\lambda^*=\mathbf{R}\boldsymbol\lambda'$ satisfies the constraints of Problem \ref{prb - generalized problem}. Since $\tau^*$ is the result of the maximization in Problem \ref{prb - generalized problem}, we conclude that $\tau^*\geq \tau'$. Conversely, generalized equilibria may not be equilibria. Indeed, $\boldsymbol\lambda^*$ may not be compatible with the repetition forced by matrix $\mathbf{R}$. Notwithstanding this, if a generalized equilibrium $( \mathbf{x}^*, \boldsymbol\lambda^*)$ exists, then an equilibrium $(\mathbf{x}', \boldsymbol\lambda')$ exists too, with $\mathbf{x}'\leq \mathbf{x}^*$ and $\boldsymbol\lambda'=\min(\boldsymbol\lambda^*)\cdot \mathbf{1}$. Consider channel LLRs $\boldsymbol\lambda'=\min(\boldsymbol\lambda^*)\cdot \mathbf{1}$. We explicitly provide an initialization $\mathbf{x}^{(0)}$ for \eqref{eq - system evolution dv 3} that makes the extrinsic messages achieve an equilibrium $(\mathbf{x}', \boldsymbol\lambda')$, with $\mathbf{x}'\leq \mathbf{x}^*$.
First, note that \begin{multline} \mathbf{x}^{(k+1)}= \mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}^{(k)}-\mathbf{1}\right)+2\cdot\mathbf{1}+\mathbf{R}\boldsymbol\lambda'\right)\\=\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}^{(k)}-\mathbf{1}\right)+2\cdot\mathbf{1}+\min(\boldsymbol\lambda^*)\underbrace{\mathbf{R}\mathbf{1}}_{=\mathbf{1}}\right)\,. \end{multline} If we set $\mathbf{x}^{(0)}=\mathbf{x}^*$, we obtain the inequality \begin{multline} \mathbf{x}^{(1)}=\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}^*-\mathbf{1}\right)+2\cdot\mathbf{1}+\min(\boldsymbol\lambda^*)\mathbf{1}\right)\\\leq\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}^*-\mathbf{1}\right)+2\cdot\mathbf{1}+\boldsymbol\lambda^*\right)=\mathbf{x}^*=\mathbf{x}^{(0)}\,. \end{multline} Proceeding by induction, \begin{multline} \mathbf{x}^{(k+1)}= \mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}^{(k)}-\mathbf{1}\right)+2\cdot\mathbf{1}+\min(\boldsymbol\lambda^*)\mathbf{1}\right)\\ \leq \mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}^{(k-1)}-\mathbf{1}\right)+2\cdot\mathbf{1}+\min(\boldsymbol\lambda^*)\mathbf{1}\right)=\mathbf{x}^{(k)} \end{multline} since $A_{i,j}\geq 0,\;\forall i,j$. The above equation states that the sequence $\{\mathbf{x}^{(k)}\}$ is monotonically non-increasing. Yet, it cannot assume arbitrarily small values, since extrinsic messages have a lower saturation at $-1$. We conclude that $\{\mathbf{x}^{(k)}\}$ must achieve a new equilibrium $\mathbf{x}'\leq \mathbf{x}^*$. The equilibrium $(\mathbf{x}',\boldsymbol\lambda')$ satisfies all the constraints of Problem \ref{prb - original problem}. Since $\tau'$ is the result of a maximization, $\tau'\geq \tau^*$. \end{IEEEproof} The above statements do not claim that the two problems are equivalent. Indeed, they can be maximized by \emph{different} pairs $(\mathbf{x},\boldsymbol\lambda)$. Anyway, as long as we are interested in the AS threshold, we can deal with Problem \ref{prb - generalized problem} instead of Problem \ref{prb - original problem}, and with generalized equilibria instead of equilibria. \section{Limit cycles}\label{sec - Limit cycles} In this Section, we focus on limit cycles, i.e., extrinsic messages that periodically take the same values. We show that they have thresholds smaller than or equal to those of equilibria. Therefore, we will neglect them. \begin{Definition}\label{def - limit cycle} The sequence $(\{\mathbf{x}''^{(0)},\ldots,\mathbf{x}''^{(L-1)}\},\boldsymbol\lambda'')$ is a \emph{limit cycle} with period $L$ iff $\forall k$, \begin{equation} \mathbf{x}''^{(k+1\:\mathrm{mod}\:L)}=\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}''^{(k \:\mathrm{mod}\: L)}-\mathbf{1}\right)+2\cdot\mathbf{1}+\mathbf{R}\boldsymbol\lambda''\right)\,. \end{equation} \end{Definition} Limit cycles can be interpreted as equilibria of the \emph{augmented} AS, described by an augmented matrix $\mathbf{A}''$ of size $(NL)\times(NL)$. While the VN and CN activation order does not matter in the case of equilibria (at the equilibrium, extrinsic messages do not change if we update them all together, or one by one in arbitrary order), this is not true in the case of limit cycles. Indeed, the associated set of equations depends on the decoding order.
In case of parallel message passing, one can write a system of equations with $NL$ rows, where the $l$-th horizontal stripe of $N$ equations represents the evolution of extrinsic messages from state $\mathbf{x}''^{(l-1 \mod L)}$ to $\mathbf{x}''^{(l)}$ \begin{align}\label{eq - augmented system} \begin{bmatrix} \mathbf{x}''^{(0)} \\ \mathbf{x}''^{(1)} \\ \vdots\\ \mathbf{x}''^{(L-1)} \\ \end{bmatrix} =\mathrm{sat}\left(\mathbf{A}'' \left( \begin{bmatrix} \mathbf{x}''^{(0)} \\ \mathbf{x}''^{(1)} \\ \vdots\\ \mathbf{x}''^{(L-1)} \\ \end{bmatrix} -\mathbf{1}\right)+ 2\cdot \mathbf{1} + \begin{bmatrix} \mathbf{R}\boldsymbol\lambda'' \\ \mathbf{R}\boldsymbol\lambda'' \\ \vdots\\ \mathbf{R}\boldsymbol\lambda'' \\ \end{bmatrix} \right)\,. \end{align} Instead, in sequential (or serial-C \cite{ShaLit}) decoding, CNs are activated one by one, in turn, immediately updating the a-posteriori LLRs of the VNs connected thereto. The augmented matrix changes, since only the first CNs use extrinsic messages produced at the previous iteration, while all the others exploit messages generated during the same iteration. We can represent this behavior by defining two matrices $\mathbf{\bar{A}}$ and $\mathbf{\underline{A}}$, binary partitions of $\mathbf{A}$, \begin{equation} \bar{A}_{i,j}, \underline{A}_{i,j} \in \{0,1\},\qquad \mathbf{\bar{A}}+ \mathbf{\underline{A}} = \mathbf{A} \end{equation} and writing an augmented matrix as \begin{spacing}{\matrixspacing} \begin{equation}\label{eq - Aaugmented} \mathbf{A}''= \begin{bmatrix} \mathbf{\underline{A}}& \mathbf{0} & \cdots & \mathbf{0} & \mathbf{\bar{A}}\\ \mathbf{\bar{A}} & \mathbf{\underline{A}}& \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{\bar{A}} & & \mathbf{0} & \mathbf{0}\\ \vdots & \vdots& \ddots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{\bar{A}} & \mathbf{\underline{A}} \\ \end{bmatrix}\,. \end{equation} \end{spacing} $\bar{\mathbf{A}}$ and $\underline{\mathbf{A}}$ have upper and lower triangular shapes, due to the sequential update order. Note that \eqref{eq - Aaugmented} is valid not only for sequential CN message passing decoding, but also for any arbitrary order\footnote{In this case, the lower and upper triangular shape is lost.}, as long as all extrinsic messages are updated in turn, once per decoding iteration. Parallel message passing is a special case of \eqref{eq - Aaugmented}, with $\mathbf{\bar{A}}=\mathbf{A}$ and $\mathbf{\underline{A}}=\mathbf{0}$. Therefore we provide the following theorem only for the most general case. \begin{Theorem}\label{thm - sequential cycles do not matter} If there exists a limit cycle $(\{\mathbf{x}''^{(0)},\mathbf{x}''^{(1)},\ldots,\mathbf{x}''^{(L-1)}\},\boldsymbol\lambda'')$, a generalized equilibrium $(\mathbf{x}^*,\boldsymbol\lambda^*)$ with $\boldsymbol\lambda^*\geq\mathbf{R}\boldsymbol\lambda''$ exists, too. \end{Theorem} \begin{IEEEproof} Consider any partition of the identity matrix $\mathbf{I}$ into $L$ binary matrices, with size $N\times N$: \begin{equation} W_{i,j}^{(l)} \in \{0,1\},\qquad \sum_{l=0}^{L-1}\mathbf{W}^{(l)}= \mathbf{I}\,.
\end{equation} Then \begin{align} \nonumber \sum_{l=0}^{L-1}\mathbf{W}^{(l)}\mathbf{x}''^{(l)} =& \; \sum_{l=0}^{L-1}\mathbf{W}^{(l)}\mathrm{sat}\Big(\mathbf{\bar{A}}(\mathbf{x}''^{(l-1 \mod L)}-\mathbf{1})\\\nonumber&+ \mathbf{\underline{A}}(\mathbf{x}''^{(l)}-\mathbf{1})+2\cdot\mathbf{1}+\mathbf{R}\boldsymbol\lambda''\Big) \\\nonumber =&\; \mathrm{sat}\Bigg(\sum_{l=0}^{L-1}\mathbf{W}^{(l)}\mathbf{\bar{A}}(\mathbf{x}''^{(l-1 \mod L)}-\mathbf{1})\\&+\sum_{l=0}^{L-1}\mathbf{W}^{(l)} \mathbf{\underline{A}}(\mathbf{x}''^{(l)}-\mathbf{1}) \\ \nonumber &+2\cdot\underbrace{\sum_{l=0}^{L-1}\mathbf{W}^{(l)}}_{=\mathbf{I}}\mathbf{1}+\underbrace{\sum_{l=0}^{L-1}\mathbf{W}^{(l)}}_{=\mathbf{I}}\mathbf{R}\boldsymbol\lambda''\Bigg) \end{align} where in the second equation $\mathbf{W}^{(l)}$ enters into the $\mathrm{sat}(\cdot)$ function since $1\cdot\mathrm{sat}(x)=\mathrm{sat}(1\cdot x)$ and $0\cdot\mathrm{sat}(x)=\mathrm{sat}(0\cdot x),\;\forall x$. Choose a vector $\mathbf{x}^*$ of extrinsic messages as \begin{equation} x_j^*=\min_l\left({x''_j}^{(l)}\right),\;\forall j\,. \end{equation} As a consequence, we have $\mathbf{x}^*\leq \mathbf{x}''^{(l)},\;\forall l$ and \begin{align} \nonumber \sum_{l=0}^{L-1}\mathbf{W}^{(l)}\mathbf{x}''^{(l)} =&\; \mathrm{sat}\left(\underbrace{\sum_{l=0}^{L-1}\mathbf{W}^{(l)}}_{=\mathbf{I}}\mathbf{\bar{A}}(\mathbf{x}^*-\mathbf{1})\right.\\ \nonumber&\left.+\underbrace{\sum_{l=0}^{L-1}\mathbf{W}^{(l)}}_{=\mathbf{I}} \mathbf{\underline{A}}(\mathbf{x}^*-\mathbf{1})+2\cdot\mathbf{1}+\underbrace{\mathbf{R}\boldsymbol\lambda''+\mathbf{\Delta}}_{\triangleq\boldsymbol\lambda^*}\right) \\ =&\; \mathrm{sat}\left(\mathbf{A}(\mathbf{x}^*-\mathbf{1})+2\cdot\mathbf{1}+\boldsymbol\lambda^*\right) \end{align} with $\mathbf{\Delta}=\sum_{l=0}^{L-1}\mathbf{W}^{(l)}\mathbf{\bar{A}}(\mathbf{x}''^{(l-1 \mod L)}-\mathbf{x}^*)+\sum_{l=0}^{L-1}\mathbf{W}^{(l)} \mathbf{\underline{A}}(\mathbf{x}''^{(l)}-\mathbf{x}^*)\geq 0$, since $A_{i,j}\geq0,\;\forall i,j$. Finally, choose the partition $\left\{\mathbf{W}^{(l)}\right\}$ that implements the $\min(\cdot)$ function, i.e., \begin{equation} \sum_{l=0}^{L-1}\mathbf{W}^{(l)}\mathbf{x}''^{(l)} = \mathbf{x}^* \end{equation} thus achieving a generalized equilibrium $\left(\mathbf{x}^*,\boldsymbol\lambda^*\right)$ with $\boldsymbol\lambda^*= \mathbf{R}\boldsymbol\lambda''+\mathbf{\Delta}\geq \mathbf{R}\boldsymbol\lambda''$. \end{IEEEproof} A direct consequence of Theorem \ref{thm - sequential cycles do not matter} is that limit cycles can be neglected when we compute the AS threshold. \section{Behavior analysis above threshold}\label{sec - Chaotic} We also have to take into account potential chaotic behaviors of the extrinsic messages in \eqref{eq - system evolution dv 3}. In principle, $\mathbf{x}^{(k)}$ could even evolve without achieving any equilibrium or limit cycle. Yet, above the threshold $\tau$ the extrinsic messages $\mathbf{x}^{(k)}$ achieve $\mathbf{1}$. \begin{Theorem}\label{thm - chaotic behaviors} Let $\tau$ be the solution of Problem \ref{prb - original problem} or \ref{prb - generalized problem}. If $\boldsymbol\lambda> \tau\cdot \mathbf{1}$, for any starting $\mathbf{x}^{(0)}$ with $-\mathbf{1}\leq \mathbf{x}^{(0)} \leq \mathbf{1}$, for a sufficiently large $K\geq 0$ \begin{equation} \mathbf{x}^{(k)}=\mathbf{1},\quad\forall k\geq K\,.
\end{equation} \end{Theorem} \begin{IEEEproof} For the time being, consider channel messages $\mathbf{\hat{\boldsymbol\lambda}}$ that can assume only quantized values between $-1$ and $1$, with uniform step $\delta=\frac{1}{Q}$, $Q\in \mathbb{N}$. Assume that also extrinsic messages $\mathbf{\hat{x}}$ are quantized numbers, with the same step $\delta$. Therefore, $\mathbf{\hat{x}}^{(0)}$ can only assume $(2Q+1)^N$ different values. Letting the system \begin{equation} \mathbf{\hat{x}}^{(k+1)}= \mathrm{sat}\left(\mathbf{A} \left( \mathbf{\hat{x}}^{(k)}-\mathbf{1}\right)+2\cdot\mathbf{1}+\mathbf{R}\mathbf{\hat{\boldsymbol\lambda}}\right) \end{equation} evolve, it is clear that extrinsic messages at every time $k>0$ must belong to the same set of $(2Q+1)^N$ values. When $\mathbf{\hat{\boldsymbol\lambda}}>\tau\cdot\mathbf{1}$, the analysis presented in previous Sections assures that the only remaining equilibrium is $\mathbf{x}=\mathbf{1}$. Indeed, other equilibria cannot exist since they would need $\min\left(\hat{\boldsymbol\lambda}\right)\leq\tau$. By Theorem \ref{thm - sequential cycles do not matter}, limit cycles do not exist either, in both the parallel and the sequential decoding case. Therefore, the only value that $\mathbf{\hat{x}}$ can assume more than once is $\mathbf{1}$ (otherwise we would incur equilibria or cycles). We can conclude that $\mathbf{x}=\mathbf{1}$ will be reached in at most $K=(2Q+1)^N$ iterations. After that, the extrinsic messages will remain constant and the absorbing set will be defused. If $\mathbf{x}^{(0)}$ and $\mathbf{\boldsymbol\lambda}$ are not quantized, we can always identify a sufficiently small quantization step $\delta$ and a quantized pair $\left(\mathbf{\hat{x}}^{(0)},\mathbf{\hat{\boldsymbol\lambda}}\right)$ s.t. \begin{equation} \mathbf{\hat{x}}^{(0)}\leq \mathbf{x}^{(0)},\qquad \tau\cdot \mathbf{1}<\mathbf{\hat{\boldsymbol\lambda}}\leq \boldsymbol\lambda \end{equation} since $\mathbb{Q}$ is dense in $\mathbb{R}$. Finally, writing the inequality \begin{multline} \mathbf{\hat{x}}^{(k+1)}= \mathrm{sat}\left(\mathbf{A} \left( \mathbf{\hat{x}}^{(k)}-\mathbf{1}\right)+2\cdot\mathbf{1}+\mathbf{R}\mathbf{\hat{\boldsymbol\lambda}}\right)\\\leq \mathrm{sat}\left(\mathbf{A} \left( \mathbf{{x}}^{(k)}-\mathbf{1}\right)+2\cdot\mathbf{1}+\mathbf{R}\mathbf{{\boldsymbol\lambda}}\right)=\mathbf{{x}}^{(k+1)} \end{multline} and recalling that $\mathbf{\hat{x}}^{(k)}$ achieves $\mathbf{1}$ in at most $K=(2Q+1)^N$ iterations, we conclude that also $\mathbf{{x}}^{(k)}$ must achieve $\mathbf{1}$ in at most $K$ iterations, by the Squeeze Theorem applied to $\mathbf{\hat{x}}^{(k)} \leq \mathbf{{x}}^{(k)}\leq \mathbf{1}$. \end{IEEEproof} Theorem \ref{thm - chaotic behaviors} states that there cannot exist bad equilibria, limit cycles or chaotic behaviors (in short, \emph{bad trajectories}) if the minimum channel LLR exceeds the solution $\tau$ of Problem \ref{prb - original problem} or \ref{prb - generalized problem}. This reinforces the name \emph{threshold} assigned to $\tau$ (we no longer distinguish between $\tau'$ and $\tau^*$), which is not limited to equilibria, but pertains to all bad trajectories. In Fig. \ref{fig - AS line}(a) we represent bad trajectories, ordering them w.r.t. their minimum channel LLR, in the range $-1\leq\lambda\leq 1$. \begin{figure} \centering \includegraphics[width=\figwidthlarge]{./AbsorbingSetLine}\\ \caption{Trajectories of an AS, ordered w.r.t.
their minimum channel LLR.}\label{fig - AS line} \end{figure} By Theorems \ref{thm - sequential cycles do not matter} and \ref{thm - chaotic behaviors}, the rightmost bad trajectory is an equilibrium. The results found so far can be exploited to deactivate many ASs during the decoding process, using two different saturation levels. Without loss of generality we set the saturation level of extrinsic messages equal to $\pm 1$, and the saturation level of channel LLRs equal to $\pm L_{ch}$, with $0<L_{ch}\leq 1$. The latter saturation level defines the range of admissible channel LLRs, depicted in Figs. \ref{fig - AS line}(a) and \ref{fig - AS line}(b) as a gray box. The decoding trajectories within ASs can be very different for positive or negative thresholds: \begin{itemize} \item if $\tau\geq 0$, the saturation of channel LLRs to $\pm L_{ch}$ does not destroy bad trajectories. This is graphically represented in Fig. \ref{fig - AS line}(a); \item if $\tau< 0$, we can set $L_{ch}<-\tau=|\tau|$. With this choice, \emph{channel LLRs can never lead to bad trajectories}, as depicted in Fig. \ref{fig - AS line}(b). Therefore, by Theorem \ref{thm - chaotic behaviors} the AS is defused. \end{itemize} \section{Simulation Results} \label{sec - Examples} The behavior of bad structures under iterative decoding in a large code graph is in good agreement with the theory developed so far. For instance, consider the $(5,3)$ AS with the topology shown in Fig. \ref{fig - messages}. \begin{figure} \centering \includegraphics[width=\figwidthsmall,clip]{./TwoRoomApt.eps}\\ \caption{A (7,3) elementary absorbing set.}\label{fig - AS (7,3) topology} \end{figure} For this AS, $\tau = -1/3$ (a method to compute thresholds will be presented in the next Section). In Fig. \ref{fig - AS real behaviour}(a) we plot its contribution to the error floor of an LDPC code having block size $30\,000$ and rate $4/5$. \begin{figure} \centering \includegraphics[width=\figwidthlarge,clip]{./AS_examples.eps}\\ \caption{Error floor contributions of the (5,3) and (7,3) ASs shown in Figs. \ref{fig - messages} and \ref{fig - AS (7,3) topology}, respectively. The error floors have been obtained applying Importance Sampling to a real LDPC code, under MS sequential decoding.}\label{fig - AS real behaviour} \end{figure} The simulations are run using Importance Sampling over a Gaussian channel, with SNR around 2.5 dB. We always let the quantized channel LLRs vary in the range $\left[-7,7\right]$, while the extrinsic LLRs are quantized with a varying number $q_e$ of bits. Therefore extrinsic messages belong to the interval $\left[-2^{q_e-1}+1,2^{q_e-1}-1\right]$, and $L_{ch}=\frac{7}{2^{q_e-1}-1}$. Decisions are taken after 20 iterations of MS sequential decoding. From Fig. \ref{fig - AS real behaviour}(a) it is apparent that the probability that the MS decoder is locked by the AS is lowered when $L_{ch}$ is reduced from $1$ to $7/15$. However, this is larger than $|\tau|=1/3$ and an error floor still appears. In agreement with the predictions of our theory, if we set $L_{ch}=7/31<|\tau|$ the AS is always unlocked and the error probability is zero. In Fig. \ref{fig - AS real behaviour}(b) we plot the same curves for another AS embedded in the same LDPC code. This AS is shown in Fig. \ref{fig - AS (7,3) topology} and its threshold is $\tau=-1/9$. Once again, reducing $L_{ch}$ decreases the error probability, but now $L_{ch}=7/31$ does not guarantee error-free performance.
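Before moving to the efficient algorithm, we note that the thresholds above can also be estimated by brute force. Since $\mathbf{A}\geq 0$, the map in \eqref{eq - system evolution dv 3} is monotone in $\mathbf{x}$, so iterating it from the minimal state $\mathbf{x}^{(0)}=-\mathbf{1}$ with uniform channel LLRs converges to the smallest equilibrium; moreover, by the results of Sections \ref{sec - Equilibria} and \ref{sec - Generalized Equilibria}, restricting the search to uniform LLRs does not change the threshold. The sketch below (in Python with numpy; a numerical illustration only, and slow near the threshold, where the inner iteration converges sluggishly) bisects on the uniform LLR value:
\begin{verbatim}
import numpy as np

def sat(x):
    return np.clip(x, -1.0, 1.0)

def smallest_equilibrium(A, R, lam, iters=20000):
    # Iterate Eq. (4) with uniform channel LLRs lam*1 from x = -1.
    # Since A >= 0 the map is monotone, so the iterates increase
    # toward the smallest equilibrium.
    x = -np.ones(A.shape[0])
    for _ in range(iters):
        x_new = sat(A @ (x - 1.0) + 2.0
                    + lam * (R @ np.ones(R.shape[1])))
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

def threshold_estimate(A, R, tol=1e-4):
    # Bisection on the uniform LLR value: bad equilibria survive
    # below tau and disappear above it (tau lies in [-1, 1]).
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.all(smallest_equilibrium(A, R, mid) > 1.0 - 1e-9):
            hi = mid   # no bad equilibrium at mid: tau < mid
        else:
            lo = mid   # a bad equilibrium survives: tau >= mid
    return 0.5 * (lo + hi)
\end{verbatim}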
\begin{figure} \centering \adjincludegraphics[width=\figwidthlarge,clip]{./examples_b4.eps} \caption{Error floor contributions of 48 ASs, with various channel saturation levels.}\label{fig - ASs contribution} \end{figure} Fig. \ref{fig - ASs contribution} refers to a different code, with the same blocklength and rate. Here we have 48 different AS topologies of size $a=6,8,10,12$ (see Fig. \ref{fig - ASs contribution}(a)), $b=4$, whose thresholds are shown in Fig. \ref{fig - ASs contribution}(b). In Fig. \ref{fig - ASs contribution}(c) we plot the BER contribution of each topology, with various channel saturation levels. The results agree with the predictions of our model. If $L_{ch}=1$, all ASs contribute to the error floor. If $L_{ch}=7/15$, all the (6,4) ASs, whose threshold is $\tau=-1/2$, are deactivated. With $L_{ch}=7/31$, all (8,4) ASs with thresholds below $-7/31$ are deactivated. Also some ASs with threshold just above $-7/31$ gave no errors. Besides, Fig. \ref{fig - ASs contribution}(c) shows a good correlation between the thresholds and the dangerousness of the ASs. \section{A search algorithm for thresholds}\label{sec - Search algorithm} \subsection{Towards an affordable linear problem} With the aim of deriving an efficient algorithm to compute the AS threshold, we further simplify Problem \ref{prb - generalized problem}, introducing \begin{Problem}\label{prb - inequality generalized problem} \begin{align}\nonumber \tilde{\tau}= \max_{\lambda,\mathbf{x}}&\quad\lambda\\ s.t. & \quad -\mathbf{1}\leq\mathbf{x}\leq \mathbf{1} \label{eq - x bounds}\\ &\quad \exists j: x_j< 1 \label{eq - x < 1} \\ & \quad \mathbf{x}\geq\mathrm{\widetilde{sat}}\left(\mathbf{A} \left( \mathbf{x}-\mathbf{1}\right)+(2+\lambda)\mathbf{1}\right)\label{eq - inequality sat tilde} \end{align}\end{Problem} \noindent where $\mathrm{\widetilde{sat}}(x)\triangleq\min(x,1)$. With respect to Problem \ref{prb - generalized problem}, only the upper saturation is still present in $ \mathrm{\widetilde{sat}}(\cdot)$: extrinsic messages can now assume any negative value. Besides, the constraint imposed by the equilibrium equality has been relaxed, and replaced by an inequality containing only a scalar value $\lambda$. Notwithstanding these modifications, the following theorems hold: \begin{Theorem}\label{thm - tau*=tau tilde} As for Problems \ref{prb - generalized problem} and \ref{prb - inequality generalized problem}, $\tau^*= \tilde{\tau}$. \end{Theorem} \begin{IEEEproof}We show that $\tau^*\leq\tilde{\tau}$ and $\tau^*\geq\tilde{\tau}$. Assume we are given a solution $(\mathbf{x}^*,\boldsymbol\lambda^*)$ of Problem \ref{prb - generalized problem}, i.e. with $\min(\boldsymbol\lambda^*)=\tau^*$. We can exhibit a pair $(\mathbf{\tilde{x}},\tilde{\lambda})$ that satisfies the constraints of Problem \ref{prb - inequality generalized problem}. Indeed \begin{multline} \mathbf{x}^*=\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}^*-\mathbf{1}\right)+2\cdot\mathbf{1}+\boldsymbol\lambda^*\right)\\\geq\mathrm{sat}\left(\mathbf{A} \left( \mathbf{x}^*-\mathbf{1}\right)+\left(2+\min(\boldsymbol\lambda^*)\right)\mathbf{1}\right)\\\geq\mathrm{\widetilde{sat}}\left(\mathbf{A} \left( \mathbf{x}^*-\mathbf{1}\right)+\left(2+\min(\boldsymbol\lambda^*)\right)\mathbf{1}\right)\,. \end{multline} Therefore, the pair $\left(\mathbf{\tilde{x}}=\mathbf{x}^*,\tilde{\lambda}=\min(\boldsymbol\lambda^*)\right)$ fulfills the constraints of Problem \ref{prb - inequality generalized problem}, because also $\exists j: \tilde{x}_j=x_j^*< 1$.
Since $\tilde{\tau}$ is the result of a maximization, $\tilde{\tau}\geq \tilde{\lambda}= \min(\boldsymbol\lambda^*)=\tau^*$. Focusing on the converse, assume we are given a solution $(\mathbf{\tilde{x}},\tilde{\lambda})$ of Problem \ref{prb - inequality generalized problem}, with $\tilde{\lambda}=\tilde{\tau}$. No matter whether extrinsic messages are saturated or not, we can always add a nonnegative vector $\mathbf{\Delta}\geq\mathbf{0}$ to $\tilde{\lambda}\cdot\mathbf{1}$: \begin{equation} \mathbf{\Delta}=\mathbf{\tilde{x}}-\mathrm{\widetilde{sat}}\left(\mathbf{A} \left( \mathbf{\tilde{x}}-\mathbf{1}\right)+(2+\tilde{\lambda})\mathbf{1}\right)\,. \end{equation} This way, we turn inequality \eqref{eq - inequality sat tilde} into the equality \begin{equation} \mathbf{\tilde{x}}=\mathrm{\widetilde{sat}}\left(\mathbf{A} \left( \mathbf{\tilde{x}}-\mathbf{1}\right)+2\cdot\mathbf{1}+\tilde{\lambda}\cdot\mathbf{1}+\mathbf{\Delta}\right) \,.\end{equation} The constraints of Problem \ref{prb - inequality generalized problem} set $\mathbf{x}\geq -\mathbf{1}$, thus we conclude that \begin{equation} \mathrm{\widetilde{sat}}\left(\mathbf{A} \left( \mathbf{\tilde{x}}-\mathbf{1}\right)+2\cdot\mathbf{1}+\tilde{\lambda}\cdot\mathbf{1}+\mathbf{\Delta}\right)\geq -\mathbf{1} \end{equation} and finally \begin{multline} \mathrm{\widetilde{sat}}\left(\mathbf{A} \left( \mathbf{\tilde{x}}-\mathbf{1}\right)+2\cdot\mathbf{1}+\tilde{\lambda}\cdot\mathbf{1}+\mathbf{\Delta}\right)\\= \mathrm{sat}\left(\mathbf{A} \left( \mathbf{\tilde{x}}-\mathbf{1}\right)+2\cdot\mathbf{1}+\tilde{\lambda}\cdot\mathbf{1}+\mathbf{\Delta}\right)\,. \end{multline} To conclude, if a solution $(\mathbf{\tilde{x}},\tilde{\lambda})$ of Problem \ref{prb - inequality generalized problem} with $\tilde{\lambda}=\tilde{\tau}$ exists, we can exhibit a generalized equilibrium $(\mathbf{x}^*=\mathbf{\tilde{x}},\boldsymbol\lambda^*=\tilde{\lambda}\cdot\mathbf{1}+\mathbf{\Delta})$ solution of Problem \ref{prb - generalized problem}, with $\mathbf{\Delta}\geq\mathbf{0}$. Since $\tau^*$ is the result of a maximization, we conclude that $\tau^*\geq\min(\boldsymbol\lambda^*)=\min(\tilde{\lambda}\cdot\mathbf{1}+\mathbf{\Delta})\geq\tilde{\lambda}=\tilde{\tau}$. \end{IEEEproof} Once again, Problems \ref{prb - generalized problem} and \ref{prb - inequality generalized problem} are not equivalent, as their solutions are different (the second one is not even a generalized equilibrium). Nevertheless, the two thresholds coincide. Problem \ref{prb - inequality generalized problem} is still non-linear and multimodal. Besides, its equations are still not differentiable. We further elaborate, rewriting Problem \ref{prb - inequality generalized problem} in another form that does not rely on the $\widetilde{\mathrm{sat}}(\cdot)$ function: we define a partition of $\{1,\ldots,N\}$ in the two subsets $\mathcal{S}^{unsat}$ and $\mathcal{S}^{sat}$ of unsaturated and saturated messages, respectively\footnote{The adoption of both $\mathcal{S}^{unsat}$ and $\mathcal{S}^{sat}$ is redundant, since $\mathcal{S}^{sat}=\{1,\ldots,N\} \backslash \mathcal{S}^{unsat}$, but sometimes we use both of them for compactness.}. We also introduce a permutation matrix $\mathbf{\Pi}$, that reorganizes extrinsic messages, putting the unsaturated ones on top: \begin{equation} \begin{bmatrix} \mathbf{x}_{\mathcal{S}^{unsat}} \\ \mathbf{1} \\ \end{bmatrix} = \mathbf{\Pi}\mathbf{x}\,.
\end{equation} Accordingly, we permute the routing matrix $\mathbf{A}$, and divide it in four submatrices, having inputs/outputs saturated or not: \begin{equation} \mathbf{\Pi}\mathbf{A}\mathbf{\Pi}^{-1}=\begin{bmatrix} \mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}} & \mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{sat}} \\ \mathbf{A}_{\mathcal{S}^{sat},\mathcal{S}^{unsat}} & \mathbf{A}_{\mathcal{S}^{sat},\mathcal{S}^{sat}} \\ \end{bmatrix}\,. \end{equation} We are now ready to introduce \begin{Problem}\label{prb - linearized generalized problem} \begin{align} \nonumber \dot{\tau}&= \max_{\substack{\mathcal{S}^{unsat}\subseteq \{1,\ldots,N\} \\\mathcal{S}^{unsat}\neq\emptyset}} \max_{\lambda,\mathbf{x}_{\mathcal{S}^{unsat}}}\quad\lambda\\ s.t. &\quad -\mathbf{1}\leq\mathbf{x}_{\mathcal{S}^{unsat}}\leq \mathbf{1} \label{eq - x unsat bound}\\ & \quad (\mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}}-\mathbf{I})\left( \mathbf{x}_{\mathcal{S}^{unsat}}-\mathbf{1}\right)+(1+\lambda)\mathbf{1}\leq \mathbf{0}\label{eq - linear prb unsat constraint} \\ & \quad \mathbf{A}_{\mathcal{S}^{sat},\mathcal{S}^{unsat}} \left( \mathbf{x}_{\mathcal{S}^{unsat}}-\mathbf{1}\right)+(1+\lambda)\mathbf{1}\geq \mathbf{0}\,.\label{eq - linear prb sat constraint} \end{align} \end{Problem} Note that the use of $\mathcal{S}^{unsat}$ in the above problem is slightly misleading: even though the outer (leftmost) maximization sets $j\in \mathcal{S}^{unsat}$, by \eqref{eq - x unsat bound} the inner maximization may attain its maximum at $x_j=1$, and not at $x_j<1$. We shall show that this relaxation does not impair the threshold computation. The following theorem holds. \begin{Theorem}\label{thm - alpha_tilde <= alpha_dot} As for Problems \ref{prb - inequality generalized problem} and \ref{prb - linearized generalized problem}, $\tilde{\tau}= \dot{\tau}$. \end{Theorem} \begin{IEEEproof} We only give a sketch of the proof, since it is simple but quite long. We show that Problem \ref{prb - inequality generalized problem} implies Problem \ref{prb - linearized generalized problem}, and vice-versa. First, \eqref{eq - x < 1} means that $\mathcal{S}^{unsat}\neq\emptyset$. Rewriting \eqref{eq - inequality sat tilde} in the modified order, we obtain $ \mathbf{x}_{\mathcal{S}^{unsat}}\geq \mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}} \left( \mathbf{x}_{\mathcal{S}^{unsat}}-\mathbf{1}\right)+(2+\lambda)\mathbf{1} $, since in the first block of inequalities the $\mathrm{\widetilde{sat}}(\cdot)$ operator has no effect. Therefore \eqref{eq - linear prb unsat constraint} must be true. As for the second block of inequalities, it holds only if the argument of the $\mathrm{\widetilde{sat}}(\cdot)$ exceeds $\mathbf{1}$, i.e., $ \mathbf{A}_{\mathcal{S}^{sat},\mathcal{S}^{unsat}} \left( \mathbf{x}_{\mathcal{S}^{unsat}}-\mathbf{1}\right)+(2+\lambda)\mathbf{1}\geq \mathbf{1} $, which immediately leads to \eqref{eq - linear prb sat constraint}. Analogous arguments hold for the converse. The only tricky point is the following. Let $(\mathbf{\dot{x}},\dot{\lambda})$ be a maximizer for Problem \ref{prb - linearized generalized problem}, with $\dot{\lambda}=\dot{\tau}$, and $\mathcal{\dot{S}}^{unsat}$ the set corresponding to this solution. As already highlighted, $\mathcal{\dot{S}}^{unsat}$ could contain indices referring to saturated variables.
If at least one element of $\mathbf{x}_{\mathcal{\dot{S}}^{unsat}}$ is not saturated, this does not impair the outer maximization, since the same solution $(\mathbf{\dot{x}},\dot{\lambda})$ is a maximizer with another pattern of saturations $\mathcal{S}^{unsat}=\left\{j\in \mathcal{\dot{S}}^{unsat}: \dot{x}_j<1\right\} \neq \emptyset$. In this case, \eqref{eq - x < 1} is satisfied. On the contrary, if \emph{all} messages in $\mathcal{\dot{S}}^{unsat}$ were saturated, we could achieve the maximum $\lambda$ without respecting \eqref{eq - x < 1}, obtaining $\dot{\tau}\geq\tilde{\tau}$. We now prove that this cannot happen. Indeed, if we set $\mathbf{x}_{\mathcal{S}^{unsat}}=\mathbf{1}$, \eqref{eq - linear prb unsat constraint} becomes $\lambda\leq-1$. Yet, similarly to Theorem \ref{thm - equilibrium -1}, there are always other legitimate solutions having $\lambda=-1$ that do not violate the constraints of Problem \ref{prb - linearized generalized problem}, e.g. $\left(\mathbf{x}=-\mathbf{1},\lambda=-1\right)$, for which \eqref{eq - linear prb sat constraint} is vacuous and \eqref{eq - linear prb unsat constraint} becomes $-2(\mathbf{A}-\mathbf{I})\mathbf{1}\leq \mathbf{0}$, which is true because $\mathbf{A}\mathbf{1}\geq\mathbf{1}$. We conclude that $\dot{\tau}\geq -1$ and that substituting $\mathbf{x}_{\mathcal{S}^{unsat}}< \mathbf{1}$ with $\mathbf{x}_{\mathcal{S}^{unsat}}\leq \mathbf{1}$ does not harm the threshold computation. \end{IEEEproof} Once again, we no longer distinguish between $\tau'$, $\tau^*$, $\tilde{\tau}$ and $\dot{\tau}$ since they match, and simply use $\tau$. In principle, we could solve the inner maximization of Problem \ref{prb - linearized generalized problem}, repeatedly running an optimization algorithm suited to linear equality and inequality constraints (e.g., the simplex algorithm), and retaining only the largest value of $\tau$ among all possible configurations of saturated messages. This is practically infeasible for two reasons. First, optimization algorithms are time-consuming and we should resort to them with caution. Besides, the number of configurations to test grows exponentially with $N$: solving Problem \ref{prb - linearized generalized problem} with a brute-force search becomes impractical even for moderate values of $N$. In the following, we develop methods to discard most $\mathcal{S}^{unsat}$ configurations. \subsection{Pruning tests} Test 1 exploits the following two theorems: \begin{Theorem}\label{thm - w0 sufficient condition} If $\mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}}$ contains at least one row with all-zero elements, there are no solutions satisfying the constraints of Problem \ref{prb - linearized generalized problem}. \end{Theorem} \begin{IEEEproof} The proof is a \emph{reductio ad absurdum}. Consider any row of $\mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}}$ with null weight, say the one corresponding to the $j$-th entry of $\mathbf{x}$. Then, by \eqref{eq - linear prb unsat constraint}, \begin{equation} x_j\geq 2+\lambda\geq 1 \end{equation} where the second inequality holds since $\lambda\geq-1$. The above result $x_j\geq 1$ contradicts the hypothesis $j\in \mathcal{S}^{unsat}$.
\end{IEEEproof} \begin{Theorem}\label{thm - w1 sufficient condition} If $\mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}}$ contains at least one row with exactly one element equal to $1$, say $A_{j,h}$, and if the column vector $\mathbf{A}_{\mathcal{S}^{sat},h}$ has weight larger than $0$, then there are no solutions satisfying the constraints of Problem \ref{prb - linearized generalized problem}. \end{Theorem} \begin{IEEEproof} The proof is a \emph{reductio ad absurdum}. Consider any non-null element of $\mathbf{A}_{\mathcal{S}^{sat},h}$, say $A_{i,h}$. Note that the maximum weight of any row and column of $\mathbf{A}$ is 2, since the VN degree is $d=3$. Thus, either $A_{i,h}$ is the only non-null element of the row $\mathbf{A}_{i,\mathcal{S}^{unsat}}$, or at most another element $A_{i,k}=1$ exists in $\mathbf{A}_{i,\mathcal{S}^{unsat}}$. If $\mathbf{A}_{i,\mathcal{S}^{unsat}}$ has weight 1, by \eqref{eq - linear prb sat constraint} \begin{equation} 1\leq x_h-1+2+\lambda =x_h+1+\lambda\,. \end{equation} If $\mathbf{A}_{i,\mathcal{S}^{unsat}}$ has weight 2, by \eqref{eq - linear prb sat constraint} \begin{equation} 1\leq x_h-1+x_k-1+2+\lambda\leq x_h+1+\lambda \end{equation} where the second inequality holds since $x_k\leq 1$. In either case, \begin{equation} x_j\geq x_h-1+2+\lambda\geq 1 \end{equation} where the first inequality comes from \eqref{eq - linear prb unsat constraint}. The above result $x_j\geq 1$ contradicts the hypothesis $j\in \mathcal{S}^{unsat}$. \end{IEEEproof} Theorems \ref{thm - w0 sufficient condition} and \ref{thm - w1 sufficient condition} provide sufficient conditions to discard configurations of $\mathcal{S}^{unsat}$. The advantage of Test 1 is simplicity. The weakness of Test 1 is that it does not take advantage of previous maximizations, performed with other configurations of $\mathcal{S}^{unsat}$. Test 2 exploits the threshold $\tau$ discovered up to that time. It starts by initializing lower and upper bounds ($\mathbf{l}$ and $\mathbf{u}$, respectively) for the minimum channel LLR $\lambda$ and for $\mathbf{x}_{\mathcal{S}^{unsat}}$: \begin{equation} \mathbf{l}\leq\begin{bmatrix} \lambda \\ \mathbf{x}_{\mathcal{S}^{unsat}} \\ \end{bmatrix}\leq\mathbf{u} \end{equation} where \begin{equation}\label{eq - test 2 initialization} \mathbf{l}=\begin{bmatrix} \tau \\ -\mathbf{1}_{M \times 1} \end{bmatrix},\quad\quad\quad \mathbf{u}=\begin{bmatrix} \lambda_{max} \\ \mathbf{1}_{M \times 1} \end{bmatrix} \end{equation} and $M=\mathrm{card}\left(\mathcal{S}^{unsat}\right)$. In the most general case, $\lambda_{max}=1$. Yet we are mainly interested in the negative semi-axis, since in Section \ref{sec - Chaotic} we have shown that we can deactivate an AS only if the threshold is negative. Therefore, in the threshold computation algorithm we can lower $\lambda_{max}$ (of course, we must keep $\lambda_{max}\geq 0$), trading some information loss (we return $\min(\tau,\lambda_{max})=\lambda_{max}$ when $\tau>\lambda_{max}$) for an increased capability to discard saturation patterns, resulting in an execution speedup.
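In essence, Test 2 performs interval propagation on a set of linear inequalities; the next paragraphs make the procedure precise. A rough Python sketch of one such propagation scheme, assuming the constraints have already been collected in the form $\mathbf{C}\mathbf{y}\leq\mathbf{f}$ as detailed below (this is our own illustration, not the authors' implementation), is:
\begin{verbatim}
import numpy as np

def tighten_bounds(C, f, l, u, max_passes=50):
    """Interval propagation for C @ y <= f on the box l <= y <= u.

    Each constraint row bounds one variable at a time, using the
    worst-case contribution of the remaining variables.  Returns
    (l, u, feasible); feasible=False means the bounds crossed, so no
    solution can beat the currently discovered threshold tau.
    """
    l = np.asarray(l, dtype=float).copy()
    u = np.asarray(u, dtype=float).copy()
    n = C.shape[1]
    for _ in range(max_passes):
        changed = False
        for i in range(C.shape[0]):
            for j in np.flatnonzero(C[i]):
                rest = np.delete(np.arange(n), j)
                # Smallest possible value of sum_{k != j} C[i,k]*y_k:
                lo_rest = np.where(C[i, rest] > 0,
                                   C[i, rest] * l[rest],
                                   C[i, rest] * u[rest]).sum()
                bound = (f[i] - lo_rest) / C[i, j]
                if C[i, j] > 0 and bound < u[j]:
                    u[j], changed = bound, True
                elif C[i, j] < 0 and bound > l[j]:
                    l[j], changed = bound, True
                if u[j] < l[j]:
                    return l, u, False
        if not changed:
            break
    return l, u, True
\end{verbatim}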
Test 2 analyzes the inequality constraints of Problem \ref{prb - linearized generalized problem} in turn, rewritten as \begin{equation} \mathbf{C}\begin{bmatrix} \lambda \\ \mathbf{x}_{\mathcal{S}^{unsat}} \\ \end{bmatrix} \leq \mathbf{f} \end{equation} with \begin{eqnarray} \mathbf{C}&=& \begin{bmatrix} \mathbf{1}_{M\times 1} & \mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}}-\mathbf{I} \\ -\mathbf{1}_{(N-M)\times 1} & -\mathbf{A}_{\mathcal{S}^{sat},\mathcal{S}^{unsat}} \\ \end{bmatrix}\\ \mathbf{f}&=&\begin{bmatrix} \mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}}\mathbf{1}_{M\times 1}-2\cdot\mathbf{1}_{M\times 1} \\ \mathbf{1}_{(N-M)\times 1}-\mathbf{A}_{\mathcal{S}^{sat},\mathcal{S}^{unsat}}\mathbf{1}_{M\times 1} \\ \end{bmatrix}\,. \end{eqnarray} For every variable involved, the test tries to tighten the gap between the corresponding lower and upper bounds, exploiting the bounds (upper or lower, depending on the coefficient signs) on the other variables. The process can terminate in two ways: \begin{enumerate} \item bounds $\mathbf{l}$ and $\mathbf{u}$ cannot be further improved, and $\mathbf{l}\leq \mathbf{u}$: $\tau$ and $\mathcal{S}^{unsat}$ are compatible with the existence of other equilibria, having thresholds larger than $\tau$; \item for at least one index $j$, we achieve $u_j<l_j$: equilibria having thresholds larger than the currently discovered $\tau$ cannot exist, for that $\mathcal{S}^{unsat}$. \end{enumerate} The initialization in \eqref{eq - test 2 initialization} influences the algorithm effectiveness: the larger the discovered threshold $\tau$, the more effectively Test 2 detects impossible configurations of $\mathcal{S}^{unsat}$, speeding up the solution of Problem \ref{prb - linearized generalized problem}. \subsection{Tree based, efficient search of the AS threshold} Test 2 is typically more effective than Test 1, as it can detect a large number of configurations of $\mathcal{S}^{unsat}$ not improving the threshold $\tau$. Yet, it can be applied only when $\mathcal{S}^{unsat}$ is fully formed. On the contrary, Test 1 can also be applied during the construction of $\mathcal{S}^{unsat}$: \begin{Theorem} Let $\mathcal{S}^{violation}\subseteq\mathcal{S}^{unsat}$ be the set of indices satisfying Theorems \ref{thm - w0 sufficient condition} or \ref{thm - w1 sufficient condition}. If only a subset of $\mathcal{S}^{violation}$ is erased from $\mathcal{S}^{unsat}$, the remaining elements in $\mathcal{S}^{violation}$ still satisfy the conditions of Theorems \ref{thm - w0 sufficient condition} or \ref{thm - w1 sufficient condition}. \end{Theorem} \begin{IEEEproof} Assume that only one element involved in some violation, say the $m$-th, is erased from $\mathcal{S}^{unsat}$. The proof for a generic subset of violations, erased all together, can be achieved by repeating the following argument, discarding one element after the other. When element $m$ passes from $\mathcal{S}^{unsat}$ to $\mathcal{S}^{sat}$, not only the row $\mathbf{A}_{m,\mathcal{S}^{unsat}}$ must be erased, but also the column $\mathbf{A}_{\mathcal{S}^{unsat},m}$ must be canceled. Looking at any other row of $\mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}}$ leading to a violation, say the one corresponding to element $j$, three events can happen (see Fig.
\ref{fig - row column deletion}): \begin{figure} \centering \includegraphics[width=\figwidthlarge]{./Aunsat_Asat}\\ \caption{Row-column erasure possibilities in $\mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}}$.}\label{fig - row column deletion} \end{figure} \begin{enumerate} \item $\mathbf{A}_{j,\mathcal{S}^{unsat}}$ had weight 0: after column deletion, it still has weight 0 and the hypothesis of Theorem \ref{thm - w0 sufficient condition} is still valid; \item $\mathbf{A}_{j,\mathcal{S}^{unsat}}$ had weight 1, and its element equal to 1 was exactly in the $m$-th column (the erased one): after deletion, the row assumes weight 0, therefore satisfying the hypothesis of Theorem \ref{thm - w0 sufficient condition}; \item $\mathbf{A}_{j,\mathcal{S}^{unsat}}$ had weight 1, and the element equal to 1, say $A_{j,h}$, did not lie in the $m$-th column: after deletion, the row of $\mathbf{A}_{\mathcal{S}^{unsat},\mathcal{S}^{unsat}}$ still has weight 1. Since the weight of the corresponding column $\mathbf{A}_{\mathcal{S}^{sat},h}$ is still 1 (it had weight 1 by hypothesis), Theorem \ref{thm - w1 sufficient condition} still holds for the $j$-th element. \end{enumerate} Either Theorem \ref{thm - w0 sufficient condition} or \ref{thm - w1 sufficient condition} is still valid, and the other violations do not disappear. \end{IEEEproof} The above Theorem gives us the freedom to erase the elements in $\mathcal{S}^{violation}$ all at once from $\mathcal{S}^{unsat}$, and simultaneously add them to $\mathcal{S}^{sat}$. Therefore, we can imagine a \emph{tree search} among all possible configurations of saturated messages. At the root node, $\mathcal{S}^{sat}=\emptyset$. At successive steps, some extrinsic messages are marked as already visited (``fixed'', from here on). In addition, fixed messages are labeled as saturated or not. Extrinsic messages not fixed (say ``free'') are always unsaturated. This implicitly defines $\mathcal{S}^{sat}$ and $\mathcal{S}^{unsat}$. For the current configuration, Test 1 is performed. Three things can happen: \begin{itemize} \item \textbf{Case 1}: Test 1 claims that Problem \ref{prb - linearized generalized problem} may have solutions for that $\mathcal{S}^{sat}$; \item \textbf{Case 2}: Test 1 claims that $\mathcal{S}^{sat}$ is incompatible with any solution of Problem \ref{prb - linearized generalized problem}, and all the elements generating a test violation are free; \item \textbf{Case 3}: Test 1 claims that $\mathcal{S}^{sat}$ is incompatible with any solution of Problem \ref{prb - linearized generalized problem}, but some elements generating a test violation have been previously fixed (and marked as unsaturated). \end{itemize} Depending on the answer of Test 1, we expand the tree in different manners: \begin{itemize} \item in Case 1, in turn we fix one of the free messages and branch the tree, labeling the last element as either saturated or not, calling the algorithm recursively; \item in Case 2, we fix and mark as saturated all elements of $\mathcal{S}^{unsat}$ that generate violations, and call the algorithm recursively; \item in Case 3, or if all variables have been already fixed, we take no action. \end{itemize} After Test 1, before branching the tree, we either perform the optimization or not: \begin{itemize} \item in Case 1 or 2, Test 2 is executed.
In case of a negative result, we return $\tau=-1$; otherwise, the simplex algorithm is then run to solve Problem \ref{prb - linearized generalized problem} for the current $\mathcal{S}^{unsat}$; \item in Case 3, we return the partial result $\tau=-1$. \end{itemize} This way, Test 1 speeds up the construction of $\mathcal{S}^{sat}$ and prunes many branches. Test 2 avoids the execution of the simplex algorithm for many useless configurations, not detected by Test 1. \section{Additional properties}\label{sec - Other Properties} \subsection{Punctured LDPC codes} Puncturing is a popular means to adapt the code rate or even achieve rate compatibility \cite{Bib-PisFekTIT07}. An interesting extension of our theory is that harmless ASs having $\tau<-L_{ch}$ are deactivated even in case of puncturing. \begin{Theorem}\label{thm - puncturing} Puncturing at most $a-1$ VNs of an AS does not increase the threshold. \end{Theorem} \begin{IEEEproof} Assume that an AS $(a,b)$ of threshold $\tau$ is punctured in fewer than $a$ VNs. Let $\boldsymbol\lambda^{p}$ be the vector of channel LLRs, with null messages for the punctured VNs. First, consider the case $\tau<0$. Assume that after puncturing, a bad trajectory $\{\mathbf{x}^{(k)}\}$ exists, with $\min(\boldsymbol\lambda^{p})>\tau$. This is absurd, since $\boldsymbol\lambda^{p}$ is a legitimate solution without puncturing, and the definition of threshold given e.g. in Problem \ref{prb - original problem} is contradicted. This holds as long as at least one variable is not punctured, otherwise nothing is left to optimize and $\boldsymbol\lambda^{p}=\mathbf{0}>\tau\cdot\mathbf{1}$. Consider now the case $\tau\geq 0$. Since at least one entry of $\boldsymbol\lambda^{p}$ is equal to $0$, we have $\min(\boldsymbol\lambda^{p})\leq 0$. Thus $\min(\boldsymbol\lambda^{p})\leq \tau$ and the threshold of the same AS without puncturing is not exceeded. \end{IEEEproof} Therefore, ASs having $\tau<-L_{ch}$ cannot become harmful. \subsection{Thresholds are rational numbers} A final, nontrivial property of thresholds is the following. \begin{Theorem}\label{thm - rational thresholds} Thresholds $\tau\in \mathbb{Q}$. \end{Theorem} \begin{IEEEproof} We focus on Problem \ref{prb - linearized generalized problem}. We will prove the theorem for any constrained saturation pattern $\mathcal{S}^{unsat}$; therefore, the result will hold for the maximum across all possible $\mathcal{S}^{unsat}$. The proof is slightly cumbersome, and involves standard concepts of linear programming theory. First, note that the constraints $-\mathbf{1} \leq \mathbf{x}\leq \mathbf{1}$ bound the feasible space of extrinsic messages. By Theorem \ref{thm - alpha range}, the constraint $-1 \leq \lambda \leq 1$ can be added without modifying the result.
Therefore, the above constraints and the others in Problem \ref{prb - linearized generalized problem} define a \emph{polytope} $\mathcal{P}$ in $M+1$ dimensions \begin{equation} \mathbf{y}\in \mathcal{P}\Longleftrightarrow \underbrace{\begin{bmatrix} \mathbf{I}_{(M+1)\times (M+1)} \\ -\mathbf{I}_{(M+1)\times (M+1)} \\ \mathbf{C} \\ \end{bmatrix}}_{\triangleq\mathbf{C}'} \mathbf{y}\leq \underbrace{\begin{bmatrix} \mathbf{1}_{(M+1)\times 1} \\ \mathbf{1}_{(M+1)\times 1} \\ \mathbf{f} \\ \end{bmatrix}}_{\triangleq\mathbf{f}'} \end{equation} where \begin{equation} \mathbf{y} \triangleq \begin{bmatrix} \lambda \\ \mathbf{x}_{\mathcal{S}^{unsat}} \\ \end{bmatrix} \end{equation} Geometrically, the above inequality constraints, reported in canonical form, represent half-spaces that ``shave'' the polytope. The polytope $\mathcal{P}$ is convex, since it results from the intersection of half-spaces, which are convex. To conclude, our optimization problem can be re-stated as \begin{equation} \max_{\mathbf{y}}\;\mathbf{w}^T\mathbf{y},\quad\quad s.t. \quad \mathbf{y}\in\mathcal{P} \end{equation} with $\mathbf{w}=\begin{bmatrix} 1 & 0 & \ldots & 0 \\ \end{bmatrix}^T $. Our feasible region cannot be empty, since we already know that a solution $(\lambda=-1, \mathbf{x}_{\mathcal{S}^{unsat}}=-\mathbf{1})$, i.e. $\mathbf{y}=-\mathbf{1}$, always exists. From linear programming theory \cite{LinearProgramming}, we know that the number of linearly independent constraints active at any vertex is $M+1$, and that at least one vertex is an optimizer in linear programming problems (the latter part is the statement of the Fundamental Theorem of Linear Programming). Focus on a vertex $\mathbf{v}$ that is also a maximizer, and on the $M+1$ linearly independent constraints satisfied with equality at that point. Let $\mathcal{A}$ be the set of these constraints. We can write $\mathbf{C}'_{\mathcal{A},\{1,\ldots,M+1\}}\mathbf{v}=\mathbf{f}'_{\mathcal{A}}$. Since $\mathbf{C}'_{\mathcal{A},\{1,\ldots,M+1\}}$ is full-rank, we can compute a full QR-like decomposition $\mathbf{C}'_{\mathcal{A},\{1,\ldots,M+1\}}=\mathbf{Q}\mathbf{L}$, with $\mathbf{Q}$ orthogonal and $\mathbf{L}$ lower triangular. Note that, since the entries of $\mathbf{C}'_{\mathcal{A},\{1,\ldots,M+1\}}$ are rational (actually, integer), we can always keep the elements of $\mathbf{Q}$ and $\mathbf{L}$ rational, e.g. by performing a Gram--Schmidt decomposition. Multiplying both sides of the above equation by $\mathbf{Q}^T$, we obtain \begin{equation} \mathbf{L} \mathbf{v}=\mathbf{Q}^T\mathbf{f}'_{\mathcal{A}}\,. \end{equation} Focusing on the first line of the above system, we obtain $v_1=\lambda \in \mathbb{Q}$, since also $\mathbf{f}'\in \mathbb{Q}$. \end{IEEEproof} \section{Conclusions} \label{sec - Conclusions} In this paper we defined a simplified model for the evolution of the messages of an LDPC Min-Sum decoder inside an absorbing set, when saturation is applied. Based on this model we identified a parameter for each AS topology, namely a \emph{threshold}, which is the result of a \emph{max-min} non-linear problem, for which we proposed an efficient search algorithm. We have shown that based on this threshold it is possible to classify the AS dangerousness. If all the channel LLRs inside the AS are above this threshold, after a sufficient number of iterations the MS decoder cannot be trapped by the AS. Future work will primarily focus on the extension of these concepts to \emph{scaled} and \emph{offset} MS decoders.
We are also trying to further simplify the threshold evaluation.
\section{Introduction} When continuous-time records are separated into natural consecutive time intervals, such as days, weeks, or years, for which a reasonably similar behavior is expected, the resulting functions may be described as a time series. For this kind of functional time series, each observed trajectory is a random function. Researchers have proposed a variety of prediction methods for stationary functional time series. Besse, Cardot, and Stephenson (2000) proposed a non-parametric kernel predictor. Antoniadis and Sapatinas (2003) studied first-order functional autoregression curve prediction based on a linear wavelet method. Kargin and Onatski (2008) introduced the predictive factor method. Aue, Dubart Norinho and H\"ormann (2015) proposed to use multivariate techniques. Functional data sometimes exhibit two types of variability: amplitude variability, which corresponds to the sizes of the features of curves, and phase variability, which pertains to variation in the locations of curve features. For example, Figure 1 presents the smoothed curves of annual ocean surface temperatures for seven consecutive years, from 1957 to 1963, in the Ni\~no 1+2 region, which is between the International Date Line and $120^\circ$W. Each curve has a peak and a valley, corresponding to the hot season and the cold season. The timing of the hot and cold seasons can vary from year to year. Consequently, it is important to consider phase variability in this case. However, existing research works only consider amplitude variability of functional time series, not phase variability. An immediate consequence is that the predicted curve may not show the common pattern of the population. When trajectories share a common pattern and meanwhile present phase variation, a typical technique researchers adopt is functional registration, which seeks to classify the total variability into two categories, amplitude variability and phase variability (see e.g.\ Srivastava and Klassen (2016)). To the best of our knowledge, methods for prediction in functional data have not incorporated curve registration. To overcome this serious limitation, we develop a novel method for stationary functional time series whose trajectories share a common pattern. Our goal is not only to give competitive prediction in terms of mean squared error, but also to preserve the underlying pattern in the predicted curve. \begin{figure}[!h] \centering \includegraphics[scale=0.45]{sample.png} \caption{The temperature curves from 1957 to 1963} \end{figure} The prediction method in this article involves the prediction of amplitude functions and warping functions. The major challenge is the prediction of warping functions, since they do not lie in a linear space, and thus ordinary linear models are not applicable. Warping functions must be monotonically increasing, and they are restricted to start and end at two fixed values. There are several ways to model warping functions. Generally speaking, all these methods seek to apply linear models to non-linear objects. It is noted that warping functions share similar properties with probability distribution functions, and there are some papers on modeling probability density functions. A typical idea of these research works is to use transformations ensuring that the transformed objects are still in a linear space.
Brumback (2004) proposed a self-modeling method for monotone functions involving the transformation proposed by Jupp (1978), which is a bijective map from the space of monotone increasing vectors to Euclidean space. Gervini (2015) used the Jupp transformation to study warped functional regression. In their works, the authors apply the Jupp transformation to transform increasing vectors into unconstrained vectors. Petersen and M\"uller (2016) proposed to use the log quantile density transformation and the log hazard transformation to map a density function into a linear Hilbert space. Gu\'egan and Iacopini (2018+) proposed a nonparametric prediction method for probability density functions using a centered-log transformation. Another way is to study the manifold structure of warping functions. There are some works on linear modeling for manifolds. Cheng and Wu (2012) used local linear regression models to study the regression problem in which the covariate lies on an unknown manifold. Dai and M\"uller (2018) studied spherical PCA. They proposed to apply fPCA to the tangent vectors at the Fr\'echet mean of the sphere, and then use the exponential map to transform tangent vectors back into manifold objects. The square root of slope functions (SRSFs) of warping functions have unit norm, and thus lie on an infinite-dimensional sphere, making it reasonable to apply spherical PCA to SRSFs. However, all these methods have some limitations. One common characteristic of the first class of methods is that the transformations all involve the ``logarithm'', sometimes necessitating a further re-scaling step. The major limitation of the ``log'' function is that it compresses the image around zero so strongly that values there are nearly impossible to predict. Besides, density functions lie in a nonlinear space, and it is always unnatural to use linear models directly. Regarding the second class of methods, since SRSFs of warping functions only form the positive orthant of a sphere, it is impossible, without constraints, to find a linear model with homogeneous coefficients for prediction. Some researchers may consider applying a functional linear mixed effect model (see Guo (2002)), in which each trajectory is considered to be a linear combination of shifted template functions. This method, however, cannot guarantee that the resulting functions pertain to the same pattern. All of these problems motivate us to find a new methodology to predict the stochastic process composed of warping functions. We develop a novel method that can jointly predict the amplitude and warping functions. The major advantage of our method is that it does not require any unnatural transformations and it retains the predicted warping functions strictly in their original non-linear space. We first implement functional registration to obtain amplitude and warping functions. To predict warping functions, we propose a state-space model, in which the states are driven by a Markov chain. Spherical $K$-means clustering is applied to reduce the dimension of warping functions. In the model, we use finitely many prototypes to represent the nonlinear manifold of warping functions, where we assume each warping function can be expressed as the sum of its corresponding prototype and a random error. For the prediction of amplitude functions, we propose an FAR model with switching coefficient operators, in which the states of the warping functions influence the coefficient operators.
The predicted warping functions and amplitude functions are combined to obtain the final prediction. In this article, several other issues will be addressed: \begin{enumerate} \item Since the real states in the state-space model are unknown in practice, the transition probability matrix of the hidden Markov chain has to be estimated through the estimated states instead of the real states. What can be said about the large-sample behavior of the estimator? \item For the prediction of amplitude functions, is the fFPE criterion proposed by Aue et al.\ (2015) still applicable? \item How can we quantitatively justify that the proposed method preserves the common pattern well? \end{enumerate} We give the solutions in the remainder of the paper. We study the asymptotic properties of the least squares estimator of the stochastic matrix in the state-space model, with the misclassification taken into account, and show that the fFPE criterion can still be applied for this method under some mild conditions. We identify the quantity for which the estimator is consistent and derive the asymptotic distribution of the estimator. Based on the definition of shape space proposed by Srivastava and Klassen (2016), we define the functional shape space as a quotient space with respect to time warping, and we propose to use the amplitude distance to measure similarity in pattern/shape between two functions $f_1$ and $f_2$: $$d^{(m)}_{\rm FR}(f_1,f_2)=\inf\limits_\gamma\|f_1-f_2\circ\gamma\|_{\rm FR},$$ where the superscript $m$ means ``minimized'', $\gamma$ is a warping function and $\|\cdot\|_{\rm FR}$ denotes the norm induced by the Fisher--Rao metric. The $l^2$ prediction error is also provided for comparison of prediction accuracy. The rest of the paper is organized as follows. In Section 2, we illustrate the modeling procedure for the stochastic processes of warping functions and amplitude functions. In Section 3, we illustrate the joint prediction algorithm and a method of order selection for the state-space model. In Section 4, the shape space framework and the reasoning for using the amplitude distance to measure shape similarity can be found. In Section 5, we derive the asymptotic properties of the least squares estimator of the stochastic matrix in the state-space model. In Section 6, we show the results of a simulation study comparing the prediction performance of the new method and an amplitude-only prediction method. In Section 7, we report the results of a real data analysis on annual ocean surface temperatures. More simulation results and the technical proofs can be found in the appendix. \section{Stochastic Process of Phase and Amplitude Variability} \subsection{Amplitude and phase variability framework} In what follows, let $(f_n(t)\colon n\in\mathbb{N})$ be an arbitrary stationary functional time series defined on a common probability space $(\Omega,\mathcal{A},P)$, where the time parameter $n$ is discrete and the parameter $t$ is continuous. We assume the following decomposition $$f_n=Y_n\circ\gamma_n.$$ The observations $f_n$ are elements of the Hilbert space $H=L^2[0,1]$ equipped with the inner product $\langle x,y\rangle=\int_0^1x(t)y(t)dt$, and the norm of each $Y_n$ satisfies $\|Y_n\|=\sqrt{\langle Y_n,Y_n \rangle}<\infty$.
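To fix ideas, the decomposition $f_n=Y_n\circ\gamma_n$ can be evaluated numerically by composing sampled curves via interpolation. A minimal Python sketch follows (our own illustration; the particular curves are arbitrary choices):
\begin{verbatim}
import numpy as np

def warp_compose(Y, gamma, t):
    """Evaluate (Y o gamma) on the grid t by linear interpolation."""
    return np.interp(gamma, t, Y)   # valid since gamma maps [0,1] to [0,1]

t = np.linspace(0.0, 1.0, 201)
Y = np.sin(2 * np.pi * t)       # an amplitude function
gamma = t ** 2                  # a warping: increasing, gamma(0)=0, gamma(1)=1
f = warp_compose(Y, gamma, t)   # observed curve f = Y o gamma
\end{verbatim}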
Define the mean curve and covariance function pointwise through $$\mu(t)=E[Y_n(t)],\qquad K(t,s)=\text{cov}(Y_n(t), Y_n(s)).$$ The warping functions $\gamma_n\colon [0,1]\to [0,1]$ have the following properties: $\gamma_n(0)=0$, $\gamma_n(1)=1$, $\gamma_n$ is invertible, and both $\gamma_n$ and $\gamma_n^{-1}$ are smooth. Let $\Gamma$ denote the set of all such functions. The square root of slope function (SRSF) of $\gamma_n$ is defined as $$s_n(t)=S(\gamma_n(t))=\sqrt{\dot{\gamma}_n(t)},$$ and an SRSF $s_n(t)$ can be transformed back into a warping function $\gamma_n(t)$ by applying $S^{-1}(\cdot)$ to it: $$\gamma_n(t)=S^{-1}(s_n(t))=\int_0^ts_n^2(u)du,\qquad 0<t<1$$ where $S(\cdot)$ is a bijective map, and $\dot{\gamma}_n(t)$ is the first-order derivative of $\gamma_n(t)$. It can be shown that $\|S(\gamma_n(t))\|=1$. {\bf Remark}: In practice, we only observe $f_n$, thus we need to apply a functional registration algorithm to obtain $Y_n$ and $\gamma_n$. In the following, we assume both $(Y_n\colon n\in\mathbb{N})$ and $(\gamma_n\colon n\in\mathbb{N})$ are already obtained. There are a few available functional registration methods (e.g.\ Ramsay and Silverman (2015), Srivastava and Klassen (2016) and Chakraborty and Panaretos (2017)), but we implemented the method of Srivastava and Klassen (2016) because it avoids the over-registration problem. \subsection{State-space model for warping functions} Since $(\gamma_n\colon n\in\mathbb{N})$ are not in a linear space, linear methods are not appropriate. Therefore we need to consider the manifold structure of warping functions. To do so, we study the SRSFs of $(\gamma_n\colon n\in\mathbb{N})$, whose manifold structure is the positive orthant of a sphere. We propose a state-space model with the following assumptions. \begin{itemize} \item The process is driven by a Markov chain, and each state $c_n$ of the Markov chain is associated with a fixed prototype warping function; \item the hidden Markov chain is an irreducible and ergodic process with a finite number of states; \item the $u_n$'s are random error functions with $E[u_n]=0$; given $c_n$, $u_n$ is independent of $c_m$ and $u_{m}$ for $m\ne n$, and the errors are such that the resulting functions $(\gamma_n\colon n\in\mathbb{N})$ are still warping functions. \end{itemize} Assume the Markov chain has $g$ states; then each state $c_n$ can be represented by a state-indicating vector $\omega_n$, a $g$-dimensional vector satisfying $\omega_{n,c_n}=1$ and $\omega_{n,i}=0$ for $i\ne c_n$. The state-space model is specified as follows: \begin{align*} E[\omega_{n}|\omega_1,\ldots,\omega_{n-1}]&=E[\omega_{n}|\omega_{n-1}]=\omega_{n-1}P,\\ \gamma_n&=\sum_{j=1}^g\omega_{n,j}b_j+u_n, \end{align*} where $(b_j\colon j=1,\ldots,g)$ are the prototype warping functions. These prototypes can be viewed as a series of basis functions of $\Gamma$. $P$ is the $g\times g$ stochastic transition probability matrix of the Markov chain. \subsubsection{Estimation of the state-space model} Since the hidden states and the transition probability matrix are unknown in practice, we need to first estimate the $b_j$'s and $\omega_n$'s, and then $P$. We apply spherical $K$-means clustering to the SRSFs of the warping functions, and use the cluster centroids as the estimators of the SRSFs of the $b_j$'s. The estimators of the $b_j$'s can be obtained by applying $S^{-1}(\cdot)$ to the cluster centroids, $$\hat{b}_j=S^{-1}(\hat{p}_j),\qquad j=1,\ldots,g,$$ where $\hat{p}_j$ is the centroid of the $j$th cluster of SRSFs.
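For concreteness, a minimal numerical sketch of the SRSF map $S$ and its inverse on a uniform grid follows (our own illustration; the function names are not from the paper):
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def srsf(gamma, t):
    """SRSF of a warping function sampled on the grid t."""
    return np.sqrt(np.gradient(gamma, t))

def inverse_srsf(s, t):
    """Recover the warping function: gamma(t) = int_0^t s(u)^2 du."""
    gamma = cumulative_trapezoid(s ** 2, t, initial=0.0)
    return gamma / gamma[-1]   # enforce gamma(1)=1 against quadrature error

t = np.linspace(0.0, 1.0, 201)
gamma = t ** 2                 # a valid warping function
s = srsf(gamma, t)             # unit norm: int s^2 = gamma(1) - gamma(0) = 1
np.testing.assert_allclose(inverse_srsf(s, t), gamma, atol=1e-3)
\end{verbatim}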
The classified categories of $(s_n\colon n\in \mathbb{N})$ are considered as the estimated states of $(\gamma_n\colon n\in \mathbb{N})$. More details are discussed below. The standard spherical $K$-means clustering aims to minimize $$D=\sum_{n=1}^N(1-\cos(s_n,p_{c_n}))=\sum_{n=1}^{N}\left(1-\frac{\langle s_n,p_{c_n}\rangle}{\|s_n\|\|p_{c_n}\|}\right)=\sum_{n=1}^N(1-\langle s_n,p_{c_n}\rangle)$$ over all assignments $c$ of objects $n$ to clusters $c_n\in\{1,\ldots,g\}$ and over all SRSF representations of prototype warping functions $p_1,\ldots,p_g$; the selection of $g$ will be discussed below. A typical projection and minimization procedure is repeated to obtain the estimators of the unknown $c_n$'s and $p_j$'s. $\hat{\omega}_n$ is a $g$-dimensional vector where only the $\hat{c}_n$th element is $1$ and all other elements are zero. We then estimate $P$ by the least squares method, where $\omega_n$ is replaced with $\hat{\omega}_n$, say, $$\hat{P}=\arg\min\limits_{P}\sum_{n=2}^{N}\|\hat{\omega}_{n}-\hat{\omega}_{n-1}P\|^2_2.$$ The number of hidden states is unknown in practice, and we propose a cross-validation method in Section 3.4 to select $g$. We assume the selected $g$ is correct, and will not distinguish between the selected $g$ and the real number of states. Note that, using the R package {\bf skmeans}, the spherical $K$-means clustering algorithm can be implemented with the R function \emph{skmeans} (see Hornik et al.\ (2012)). The estimation procedure is summarized as follows: \begin{algorithm} \caption{Estimation of the state-space model}\label{euclid} \begin{algorithmic}[] \State \textbf{Step 1} Obtain the SRSFs of the warping functions, $s_n=S(\gamma_n)$. \State \textbf{Step 2} Fix the number of states $g$, apply spherical $K$-means clustering to $(s_n\colon n\in\mathbb{N})$, then obtain the cluster centroids $(\hat{p}_j\colon j=1,\ldots,g)$ and the classified categories $(\hat{c}_n\colon n\in \mathbb{N})$. $(\hat{c}_n\colon n\in \mathbb{N})$ are the estimators of the unknown hidden states of the Markov chain. \State \textbf{Step 3} Apply $S^{-1}(\cdot)$ to $(\hat{p}_j\colon j=1,\ldots,g)$ to obtain the estimated prototype warping functions, say, $$\hat{b}_j=S^{-1}(\hat{p}_j),\qquad j=1,\ldots,g.$$ \end{algorithmic} \end{algorithm} \subsection{FAR process for amplitude functions} Recall that the amplitude functions $(Y_n\colon {n\in\mathbb{N}})$ are defined in $H$. The notation $Y\in L_H^p=L_H^p(\Omega,\mathcal{A},P)$ indicates that, for some $p>0$, $E[\|Y\|^p]<\infty$. By spectral decomposition, we have the following expression for the covariance operator $C$ of any $Y\in L_H^2$, $$C(x)=\sum_{m=1}^{\infty}\lambda_m\langle \nu_m,x\rangle \nu_m,$$ where $(\lambda_m\colon m\in\mathbb{N}_+)$ are the eigenvalues (in strictly descending order) and $(\nu_m \colon m\in\mathbb{N}_+)$ are the corresponding normalized eigenfunctions, so that $C(\nu_m) = \lambda_m\nu_m$ and $\|\nu_m\| = 1$. To satisfy the conditions of Mercer's theorem, we usually assume the covariance operator to be continuous. The set $(\nu_m \colon m \in\mathbb{N}_+)$ forms an orthonormal basis of $L^2[0, 1]$. Then by the Karhunen--Lo\`eve theorem, $Y_n$ allows for the representation $$Y_n = \mu+\sum_{m=1}^{\infty}\langle Y_n-\mu, \nu_m\rangle \nu_m,\qquad n\in\mathbb{N}.$$ The coefficients $(\langle Y_n-\mu, \nu_m\rangle\colon m\in\mathbb{N}_+)$ in this expansion are called the fPC scores of $Y_n$. Without loss of generality, we assume the mean of the functions $Y_n$ is zero.
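As an illustration of how the empirical fPC scores in this expansion can be computed from curves sampled on a grid (a standard sample-covariance eigendecomposition; this sketch is ours, not code from the paper):
\begin{verbatim}
import numpy as np

def empirical_fpca(Y, d):
    """Empirical fPCA for N centered curves sampled on a uniform grid of [0,1].

    Y: (N, T) array; d: number of components kept.  Returns the score
    matrix, the discretized eigenfunctions, and the eigenvalues, i.e.,
    the first d terms of the Karhunen-Loeve expansion.
    """
    N, T = Y.shape
    h = 1.0 / (T - 1)                      # grid step for L2 inner products
    K = (Y.T @ Y) / N                      # pointwise sample covariance K(t,s)
    evals, evecs = np.linalg.eigh(K * h)   # discretized covariance operator
    order = np.argsort(evals)[::-1][:d]    # leading eigenpairs
    lam = evals[order]
    nu = evecs[:, order] / np.sqrt(h)      # normalize so ||nu_m|| = 1 in L2
    scores = Y @ nu * h                    # <Y_n, nu_m> by quadrature
    return scores, nu, lam
\end{verbatim}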
The higher-order FAR($p$) model is defined by the stochastic recursion, $$Y_n=\Phi_1(Y_{n-1})+\cdots+\Phi_p(Y_{n-p})+\epsilon_n,\qquad n\in\mathbb{N}.$$ There are two basic assumptions: (1) $(\epsilon_n\colon n\in\mathbb{N})$ is an $i.i.d.$ sequence in $L_H^2$ with $E[\epsilon_n]=0$, and (2) the operators $\Phi_j$ are such that the above equation possesses a unique stationary and causal solution. Here, we adopt the procedure in Aue et al.\ (2015). They fit a vector autoregressive model (VAR($p$)) to the empirical fPC score vectors $({\bf Y}^e_n\colon n\in\mathbb{N})$, where the superscript ``$e$'' means empirical. The algorithm can be summarized as follows:\\ \begin{algorithm} \caption{Prediction of functional time series}\label{euclid} \begin{algorithmic}[] \State \textbf{Step 1} Fix $d$. For $n=1,\ldots,N$, use the data $Y_1,\ldots, Y_N$ to compute the vectors $${\bf Y}_n^e=(y_{n,1}^e,\ldots,y_{n,d}^e)',$$ containing the first $d$ empirical fPC scores $y_{n,l}^e=\langle Y_n,\hat{\nu}_l\rangle$. \State \textbf{Step 2} Fix $h$. Use ${\bf Y}_1^e,\ldots,{\bf Y}_N^e$ to determine the $h$-step ahead prediction $$\hat{\bf Y}_{N+h}^e=(\hat{y}^e_{N+h,1},\ldots,\hat{y}^e_{N+h,d})'$$ for ${\bf Y}_{N+h}^e$ with an appropriate multivariate algorithm. \State \textbf{Step 3} Use the functional object $$\hat{Y}_{N+h}=\hat{y}^e_{N+h,1}\hat{\nu}_1+\ldots+\hat{y}^e_{N+h,d}\hat{\nu}_d$$ as the $h$-step ahead predictor of $Y_{N+h}$. \end{algorithmic} \end{algorithm} The values of $p$ and $d$ are selected as the minimizers of the fFPE (final functional prediction error) criterion function $$\text{fFPE}(p,d)=\frac{N+pd}{N-pd}\text{tr}(\hat{\Sigma}^d_Z)+\sum_{m> d}\hat{\lambda}_m,$$ where $\hat{\Sigma}^d_Z$ is the estimator of $\Sigma_Z^d$, say, $\hat{\Sigma}^d_Z=\frac{1}{N}\sum_{n=1}^Nz_nz_n'$, and $(z_n\colon n\in\mathbb{N})$ are the prediction residuals. \section{Joint Prediction Methodology} After separating the phase and amplitude components, it is natural to consider how to predict the two components jointly, since these two sequences are not necessarily independent of each other. Here, we propose the shape preserving (SP) method, a novel prediction algorithm that jointly predicts the amplitude and phase functions of future curves. Since warping functions and amplitude functions are defined in two different spaces, we need to find a common space for these two kinds of functions for the joint prediction. \subsection{Prediction of warping function} We convert the stochastic process of warping functions into a Markov chain by applying spherical $K$-means clustering to their corresponding SRSFs, as discussed in Section 2.2. In order to incorporate the correlation between phase and amplitude variability, we also assume the same kind of state-space model for the sequence of amplitude functions, and apply $K$-means clustering to estimate the hidden states of the amplitude functions. Similarly, the classified categories are treated as the estimates of the hidden states. Figure 2 shows the framework. \begin{figure}[H] \centering \includegraphics[scale=0.35]{Figure2.png} \caption{Real states and estimated states. \\The squares indicate that $\hat{\omega}_n$ only depends on $\omega_n$.} \end{figure} Here, $\omega$ indicates the true state and $\hat{\omega}$ indicates the estimated state, and the superscripts $(a)$ and $(f)$ refer to amplitude and phase variability, respectively. These two sequences could be correlated due to the dependence between phase and amplitude variability.
We combine the two categorical sequences to obtain a new sequence, $\hat{\omega}_{n}=(\hat{\omega}_n^{(f)}\otimes\hat{\omega}_n^{(a)})$, in which $\otimes$ represents the Kronecker product. We propose to use the least squares method to estimate the transition matrix ${P}$ of this combined estimated Markov chain, where $P$ is a $gl\times gl$ matrix, $g$ is the number of states of phase variability and $l$ is the number of states of amplitude variability. When the sample size is small, we might need some ad-hoc adjustments to ensure the estimated matrix satisfies the constraints of a stochastic matrix. We can do the adjustment by solving the optimization problem $$\hat{P}=\arg\min\limits_{P\in P_\mathcal{M}}\|P-\hat{{P}}^{\rm LS}\|_F,$$ where $P_\mathcal{M}$ is the set of all probability transition matrices, $\|\cdot\|_F$ is the Frobenius norm, and $\hat{P}^{\rm LS}$ is the original least squares estimator of $P$. The predicted state is $$\hat{\hat{\omega}}_{n+1}=\hat{\omega}_n\hat{{P}}.$$ Suppose $\hat{\hat{\omega}}_{n+1}^{(f)}$ is the predicted state-indicating vector of the next warping function, which is obtained from $\hat{\hat{\omega}}_{n+1}$; then the predicted warping function is $$\hat{\gamma}_{n+1}=\sum_{j=1}^g\hat{\hat{\omega}}^{(f)}_{n+1,j}\hat{b}_j,$$ where the $\hat{b}_j$'s are the estimated prototype warping functions. \subsection{Prediction of amplitude function} Without loss of generality, we assume the amplitude functions have mean zero. We propose an FAR model with switching coefficient operators for the prediction of amplitude functions. The coefficient operator is determined by the state of the previous warping function. Suppose $c_n^{(f)}$ is the hidden state of $\gamma_n$; then the proposed model has the representation $$Y_{n+1}=\sum_{h=1}^p\Phi^{(c_n^{(f)})}_{h}(Y_{n+1-h})+\epsilon_{n},$$ where $(\epsilon_n\colon n\in\mathbb{N})$ are centered, independent and identically distributed innovations in $L_H^2$, and $\Phi^{(c_n^{(f)})}_{h}\colon H\to H$ are bounded linear operators such that the above equation has a unique stationary and causal solution. \subsubsection{Estimation and Prediction} The estimation procedure is inspired by Aue et al.\ (2015), with appropriate modifications that make it directly suitable for our purpose. We propose to separate the total sum of squares of the error terms with respect to the hidden states of the warping functions, and then minimize the $g$ sub-SSEs to obtain the $g$ sets of estimated coefficient operators. More details are discussed below. We obtain the estimators of $\{\Phi_h^{(k)}\}_{h=1}^p,\ k=1,\ldots,g$, by minimizing the objective function $$S(\Phi)=\sum_{n=1}^{N-1}\left\|Y_{n+1}-\sum_{h=1}^p\Phi^{(c_{n}^{(f)})}_{h}(Y_{n+1-h})\right\|^2_2.$$ By a simple decomposition, we have $$S(\Phi)=\sum_{k=1}^g\sum_{n=1}^{N_k}\left\|Y_{n+1}-\sum_{h=1}^p\Phi_{h}^{(k)}(Y_{n+1-h})\right\|^2_2,$$ where $N_k$ is the number of curves $Y_{n+1}$ whose preceding warping function $\gamma_n$ is in state $k$. Then we can minimize the following quantity to obtain the estimators of $\{\Phi^{(k)}_{h}\}_{h=1}^p$: $$S_k(\Phi)=\sum_{n=1}^{N_k}\left\|Y_{n+1}-\sum_{h=1}^p\Phi_{h}^{(k)}(Y_{n+1-h})\right\|^2_2.$$ We apply the multivariate technique to estimate $(\Phi^{(k)}_h\colon h=1,\ldots,p)$ for each $k$; that is, the functions $(Y_n\colon n\in\mathbb{N})$ are projected onto a finite-dimensional sub-eigenspace through fPCA, and the unknown operators are estimated in that finite-dimensional subspace.
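A rough sketch of this per-state least squares step in the fPC-score domain, for order $p=1$ (our own illustration under the assumption that the score vectors and estimated states are already available; it is not the implementation used in the paper):
\begin{verbatim}
import numpy as np

def fit_switching_var1(scores, states, g):
    """Least squares fit of Y_{n+1} = Phi^(k) Y_n + Z_{n+1}, with
    k the estimated state of the n-th warping function.

    scores: (N, d) array of fPC score vectors; states: length-N integer
    array with values in {0, ..., g-1}.  Returns one d x d coefficient
    matrix per state.
    """
    Phis = []
    for k in range(g):
        idx = np.flatnonzero(states[:-1] == k)   # n with hat{c}_n^(f) = k
        X, Y = scores[idx], scores[idx + 1]      # regress Y_{n+1} on Y_n
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)
        Phis.append(B.T)                         # Y_{n+1} ~ Phis[k] @ Y_n
    return Phis
\end{verbatim}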
Assume $\hat{\Phi}_h^{(k)}$ is the estimator of ${\Phi}_h^{(k)}$; then the predictor of $Y_{N+1}$ is $$\hat{Y}_{N+1}=\sum_{k=1}^g\sum_{h=1}^{p}\hat{\Phi}_h^{(k)}(Y_{N+1-h})\mathbbm{1}(\hat{c}^{(f)}_N=k).$$ {\bf Remark}: The final expression is binary. In practice, we can also try the weighted predictor, $$\hat{Y}_{N+1}=\sum_{k=1}^g\sum_{h=1}^{p}\hat{\Phi}_h^{(k)}(Y_{N+1-h})P(\hat{c}^{(f)}_N=k).$$ The weighted predictor has smaller variance but larger bias. The probabilities of the states $(P(\hat{c}^{(f)}_N=k),\ k=1,\ldots,g)$ need to be estimated under some principle, for example, $P(\hat{c}^{(f)}_N=k)\propto1/d(\hat{\gamma}_N,\hat{b}_k)$, where $d(\hat{\gamma}_N,\hat{b}_k)=1-\cos(S(\hat{\gamma}_N),S(\hat{b}_k))$. When the warping functions can be well classified, we can adopt the binary predictor; otherwise, we can try the weighted predictor. \subsubsection{Parameter selection} It can be shown that we can still use the fFPE criterion in Aue et al.\ (2015) to select the order and the dimension of the sub-eigenspace for prediction. Since the eigenfunctions are orthogonal and the fPC scores are uncorrelated, the mean square prediction error can be decomposed as \begin{align*} E\left[\left\|Y_{N+1}-\hat{Y}_{N+1}\right\|^2\right]&=E\left[\left\|\sum_{m=1}^\infty y_{N+1,m}\nu_m-\sum_{m=1}^d\hat{y}_{N+1,m}\nu_m\right\|^2\right]=E\left[\left\|{\bf Y}_{N+1}-\hat{{\bf Y}}_{N+1}\right\|^2\right]+\sum_{m>d}\lambda_m, \end{align*} where $\|\cdot\|$ denotes the $l^2$ norm. The decomposition reveals the trade-off between bias and variance. As for the first summand, assume $({\bf Y}_n\colon n\in\mathbb{N})$ follows a $d$-variate VAR($p$) process with switching coefficient matrices, that is, $${\bf Y}_{n+1}=\Phi_1^{(k)}{\bf Y}_n+\ldots+\Phi_p^{(k)}{\bf Y}_{n-p+1}+{\bf Z}_{n+1},$$ where ${\bf Z}_n$ is the error term. It can be shown (see, e.g.,\ L\"utkepohl (2006)) that $$\sqrt{N_k}(\hat{\beta}_k-\beta_k)\overset{\mathcal{L}}\to\mathcal{N}_{pd^2}(0,\Sigma_Z^d\otimes\Gamma_p^{-1}),$$ where $\beta_k=\text{vec}([\Phi_1^{(k)},\ldots,\Phi_p^{(k)}]')$ and $\hat{\beta}_k$ is the least squares estimator of $\beta_k$, and $\Gamma_p=\text{var}(\text{vec}([{\bf Y}_p,\ldots,{\bf Y}_1]))$. Let $\hat{\bf Y}^{(k)}_{N+1}$ be the predictor of ${\bf Y}_{N+1}$ when the estimated state of $\gamma_N$ is $k$.
Assume the classification is correct, and we have \begin{align*} E\left[\left\|{\bf Y}_{N+1}-\hat{{\bf Y}}_{N+1}\right\|^2\right]&=E\left[\left\|{\bf Y}_{N+1}-\sum_{k=1}^g\hat{\bf Y}^{(k)}_{N+1}\mathbbm{1}({c}_N^{(f)}=k)\right\|^2\right]\\ &=E\left[E\left[\left\|{\bf Y}_{N+1}-\hat{\bf Y}_{N+1}^{({c}_N^{(f)})}\right\|^2\Bigg|{c}_N^{(f)}\right]\right]=\sum_{k=1}^gE\left[\left\|{\bf Y}_{N+1}-\hat{{\bf Y}}^{(k)}_{N+1}\right\|^2\right]P({c}_N^{(f)}=k)\\ &=\sum_{k=1}^gE\left[\left\|{\bf Y}_{N+1}-\sum_{h=1}^p\hat{\Phi}^{(k)}_{h}{\bf Y}_{N+1-h}\right\|^2\right]P({c}_N^{(f)}=k)\\ &=E[\|{\bf Z}_{N+1}\|^2]+\sum_{k=1}^gE\left[\left\|\sum_{h=1}^p(\Phi^{(k)}_{h}-\hat{\Phi}^{(k)}_{h}){\bf Y}_{N+1-h}\right\|^2\right]P({c}_N^{(f)}=k)\\ &=\text{tr}(\Sigma^d_Z)+\sum_{k=1}^gE\left[\left\|I_p\otimes({\bf Y}'_N,\ldots,{\bf Y}'_{N-p+1})(\beta_k-\hat{\beta}_k)\right\|^2\right]P({c}_N^{(f)}=k)\\ &\sim \text{tr}(\Sigma^d_Z)+\sum_{k=1}^g\frac{pd}{N_k}\text{tr}(\Sigma^d_Z)P({c}_N^{(f)}=k)\\ &=\text{tr}(\Sigma^d_Z)+\frac{pd}{N}\text{tr}(\Sigma^d_Z)\sum_{k=1}^g\frac{N}{N_k}P({c}_N^{(f)}=k)\\ &\sim\frac{N+gpd}{N}\text{tr}(\Sigma^d_Z) \end{align*} where ${c}_N^{(f)}$ is the hidden state of $\gamma_N$, $N_k$ is the number of $\gamma_n$ in the $k$th state, and $a_n\sim b_n$ means $a_n/b_n\to 1$. Replacing $\text{tr}(\Sigma^d_Z)$ with $\text{tr}(\hat{\Sigma}^d_Z)$, where $\hat{\Sigma}_Z^d$ is the pooled unbiased estimator of $\Sigma_Z^d$, i.e., such that $E[\sum_{n}({\bf Y}_n-\hat{\bf{Y}}_n)({\bf Y}_n-\hat{\bf{Y}}_n)']=(N-gpd)\Sigma_Z^d$, we finally have $$E[\|Y_{N+1}-\hat{Y}_{N+1}\|^2]\approx\frac{N+gpd}{N}\ \text{tr}(\hat{\Sigma}^d_Z)+\sum_{m>d}{\lambda}_m,$$ thus the selection of $p$ and $d$ can be performed with the modified fFPE criterion given by $$\text{fFPE}(p,d)=\frac{N+gpd}{N}\ \text{tr}(\hat{\Sigma}^d_Z)+\sum_{m>d}\hat{\lambda}_m.$$ {\bf Remark}: This is a generalization of the fFPE criterion proposed by Aue et al.\ (2015). It is hard to find an unbiased estimator for $\Sigma^d_Z$ because of misclassification, but when the misclassification probability is small, the bias tends to be negligible. In most cases, we do not know the real hidden states, so we cannot distinguish between the real states and the estimated states. \subsection{Algorithm} The prediction algorithm proceeds in four steps. First of all, implement a functional registration algorithm to separate amplitude and phase variability. Assuming the numbers of hidden states of phase and amplitude variability, say $g$ and $l$, are already known a priori or estimated from the data, obtain the state-indicating vectors of the estimated hidden states of the warping functions, $(\hat{\omega}_n^{(f)}\colon n\in\mathbb{N})$, by applying spherical $K$-means clustering to the SRSFs of the warping functions. Then apply $K$-means clustering to the amplitude functions and obtain the state-indicating vectors $(\hat{\omega}_n^{(a)}\colon n\in\mathbb{N})$.
Next combine the two sequences to generate a new sequence $\hat{\omega}_n=\hat{\omega}_n^{(f)}\otimes\hat{\omega}_n^{(a)}$, and estimate the transition probability matrix by the least squares method as $$\hat{P}=\arg\min\limits_P\sum_{n=2}^N\|\hat{\omega}_{n}-\hat{\omega}_{n-1}P\|^2_2.$$ The one-step ahead prediction for the state-indicating vector of the warping function is $$\hat{\hat{\omega}}^{(f)}_{N+1}=\hat{\omega}_N\hat{P}J,$$ where $J$ is the $(gl)\times g$ matrix \begin{equation*} J=\left(\begin{array}{cccc} {\bf 1}_l&{\bf 0}_l&\cdots&{\bf 0}_l\\ {\bf 0}_l&{\bf 1}_l&\cdots&{\bf 0}_l\\ \vdots&\vdots&\ddots&\vdots\\ {\bf 0}_l&{\bf 0}_l&\cdots&{\bf 1}_l\\ \end{array}\right), \end{equation*} with ${\bf 1}_l=(1,\ldots,1)^T_{1\times l}$, ${\bf 0}_l=(0,\ldots,0)^T_{1\times l}$. The corresponding predicted warping function is $$\hat{\gamma}_{N+1}=\sum_{j=1}^g\hat{\hat{\omega}}^{(f)}_{N+1,j}\hat{b}_{j}.$$ Next, fix the dimension $d$ and the order $p$, and fit an FAR($p$) model with switching coefficient operators to predict the next amplitude function, say, $$\hat{Y}_{N+1}=\sum_{k=1}^g\hat{Y}^{(k)}_{N+1}\mathbbm{1}(\hat{c}_N^{(f)}=k),$$ where $\hat{Y}^{(k)}_{N+1}$ is the predictor of $Y_{N+1}$ when the estimated state of $\gamma_N$, say $\hat{c}^{(f)}_N$, is $k$. The last step is to apply the predicted warping function to the predicted amplitude function, and the final predictor is $$\hat{f}_{N+1}=\hat{Y}_{N+1}\circ\hat{\gamma}_{N+1}.$$ We summarize the algorithm as follows: \begin{algorithm} \caption{Two-stage prediction algorithm (one-step ahead)}\label{euclid} \begin{algorithmic}[] \State \textbf{Step 1} Apply a functional registration algorithm to obtain the amplitude and warping functions $Y_n$'s and $\gamma_n$'s. \State \textbf{Step 2} Apply the spherical $K$-means clustering algorithm and the $K$-means clustering algorithm to the SRSFs of the warping functions and to the amplitude functions, respectively. Construct a multivariate Markov chain from these two sequences (amplitude and phase) to predict the next warping function $\hat{\gamma}_{N+1}$. \State \textbf{Step 3} Predict the next amplitude function based on an FAR model with switching coefficient operators. We have $$\hat{Y}_{N+1}=\sum_{k=1}^g\hat{Y}^{(k)}_{N+1}\mathbbm{1}(\hat{c}_N^{(f)}=k).$$ \State \textbf{Step 4} Warp $\hat{Y}_{N+1}$ by $\hat{\gamma}_{N+1}$ to obtain the final prediction, $\hat{f}_{N+1}=\hat{Y}_{N+1}\circ\hat{\gamma}_{N+1}.$ \end{algorithmic} \end{algorithm} \subsection{Data-driven selection of the number of states} To the best of our knowledge, there is no widely accepted procedure for order selection in hidden Markov models. The selection of the number of states is a trade-off between bias and variance. A large number of states reduces bias, but increases variance, since more parameters have to be estimated. Considering that our purpose is prediction, we propose an approach based on prediction error. The prediction performance is evaluated by two metrics, namely the $l^2$ distance and the amplitude distance. Assume that we had a large test data-set $D_{\rm{test}}$ which is an independent copy of the dataset used for model fitting. We can, for example, use the first $80\%$ of the curves in $D_{\rm{test}}$ to fit a model with $g$ states, predict the remaining $20\%$ of the curves with the fitted model, and then calculate the average $l^2$ distance and amplitude distance between the predicted curves and the curves to be predicted. We can refer to these two average errors for order selection.
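A compact Python sketch of this held-out evaluation loop follows (our own illustration; \texttt{fit}, \texttt{predict}, \texttt{dist\_l2}, and \texttt{dist\_amp} are hypothetical placeholders for the model-fitting routine, the one-step predictor, and the two distances):
\begin{verbatim}
import numpy as np

def select_num_states(curves, candidates, fit, predict, dist_l2, dist_amp):
    """Evaluate each candidate number of states g by held-out prediction error."""
    n_train = int(0.8 * len(curves))
    errors = {}
    for g in candidates:
        model = fit(curves[:n_train], g)
        e2, ea = [], []
        for n in range(n_train, len(curves)):
            f_hat = predict(model, curves[:n])   # one-step-ahead forecast
            e2.append(dist_l2(f_hat, curves[n]))
            ea.append(dist_amp(f_hat, curves[n]))
        errors[g] = (np.mean(e2), np.mean(ea))
    return errors   # inspect both average errors before choosing g
\end{verbatim}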
\subsection{Data-driven selection of the number of states} To the best of our knowledge, there is no widely accepted procedure for the order selection of hidden Markov models. The selection of the number of states is a trade-off between bias and variance: a large number of states reduces the bias, but increases the variance, since more parameters have to be estimated. Considering that our purpose is prediction, we propose an approach based on the prediction error. The prediction performance is evaluated by two metrics, namely the $l^2$ distance and the amplitude distance. Assume that we have a large test dataset $D_{\rm{test}}$ which is an independent copy of the dataset used for model fitting. We can, for example, use the first $80\%$ of the curves in $D_{\rm{test}}$ to fit a model with $g$ states, predict the remaining $20\%$ of the curves with the fitted model, and then calculate the average $l^2$ distance and amplitude distance between the predicted curves and the curves to be predicted. We can refer to these two average errors for order selection. In practice, we may not have a large sample size, and we cannot reserve a large fraction of the data for the test procedure. In this case, we may apply the idea of Monte-Carlo cross-validation (see Shao (1993)). We choose a fraction of $\alpha\%$ consecutive curves for training, and the remaining curves are used for testing. This procedure is repeated multiple times, with the partitions chosen randomly on each run. We select a group of candidate state numbers, compute the two average errors for the models corresponding to the different candidates, and choose the state numbers with the smallest errors. \section{Shape Similarity} One of the main questions considered in this article is: what is a good measure of shape similarity? In order to compare the shapes of different trajectories, we need to formally define the functional shape space $\mathcal{E}$. We also need a distance to evaluate pattern similarity. We propose the principle that, if a function can be warped into another, then the two functions are considered to be of the same shape. Here, we shall follow the convention that shape is independent of scale and location (Srivastava and Klassen (2016)), so we first re-scale the functions so that they have unit norm and start at the same value; we then study the shape differences within the set thus obtained. The resulting space $\widetilde{\mathbb{L}}$ is termed the pre-shape space. In the functional shape space, we unify the shape representations, that is, we identify all points in the pre-shape space that represent the same shape. The functional shape space is the quotient space of $\widetilde{\mathbb{L}}$ with respect to warpings. \subsection{Functional shape space} We define an equivalence relation on the pre-shape space as follows: for two elements $f_1$, $f_2$ of the pre-shape space, $f_1\sim f_2$ if there exists a warping function $\gamma$ such that $f_1=f_2\circ\gamma$. Then, for any element $f$ of the pre-shape space, the set of all warped versions of $f$ is considered as an element of the functional shape space $\mathcal{E}$, that is, $$[f]=\{f\circ\gamma\colon\gamma\in\Gamma\}\in\mathcal{E},$$ where $\Gamma$ is the space of all warping functions. Based on our definition, the distance $d([f_1],[f_2])$ between two shape objects should be invariant to warpings. Before we give the distance for measuring shape similarity, we first briefly introduce the Fisher--Rao metric. \subsection{Fisher--Rao metric} The Fisher--Rao metric is fundamental to the registration algorithm of Srivastava and Klassen (2016). Let $H$ be the functional space under consideration. For any $f\in {H}_0=\{f\in {H}\colon\dot{f}>0\}$ and $\nu_1,\nu_2\in T_f({H})$, where $T_f({H})$ is the tangent space of $H$ at $f$, the Fisher--Rao metric is defined as the inner product $$\langle\langle\nu_1,\nu_2\rangle\rangle_f=\frac{1}{4}\int_0^1\dot{\nu}_1(t)\dot{\nu}_2(t)\frac{1}{\dot{f}(t)}dt.$$ One advantage of the Fisher--Rao metric over the Euclidean metric is that it avoids the over-registration problem (Srivastava and Klassen (2016)). One important property of the Fisher--Rao metric is its invariance under simultaneous warping: for any $\gamma\in\Gamma$, $\|f_1-f_2\|_{\text{FR}}=\|f_1\circ\gamma-f_2\circ\gamma\|_{\text{FR}}$. An immediate consequence of this property is that the registration between two functions is unique, which is important for defining a distance on the shape space. Under the SRSF representation, the Fisher--Rao Riemannian metric on $H_0$ becomes the standard $\mathbb{L}^2$ metric (see Srivastava and Klassen (2016), p.~105).
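As a small illustration of this property (a sketch assuming functions sampled on a common uniform grid; names are illustrative), the SRSF $s=\dot{f}/\sqrt{|\dot{f}|}$ and the resulting distance can be computed as follows:
\begin{verbatim}
import numpy as np

def srsf(f, t):
    """Square-root slope function s = sign(f') * sqrt(|f'|),
    computed from samples f on the grid t."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def fisher_rao_distance(f1, f2, t):
    # Under the SRSF representation the (extended) Fisher--Rao
    # distance is the plain L^2 distance between the SRSFs.
    s1, s2 = srsf(f1, t), srsf(f2, t)
    return np.sqrt(np.trapz((s1 - s2) ** 2, t))
\end{verbatim}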
Using this property, we can write the geodesic distance under the Fisher--Rao metric explicitly as $$d_{\rm FR}(f_1,f_2)=\inf\limits_{\{\alpha\colon[0,1]\to H_0,\ \alpha(0)=f_1,\alpha(1)=f_2\}}L[\alpha]=\|s_1-s_2\|,$$ where $L[\alpha]=\int_0^1\sqrt{\langle\langle\dot{\alpha}(t),\dot{\alpha}(t)\rangle\rangle_{\alpha(t)}}dt$, and $s_1,s_2$ are the SRSF representations of $f_1,f_2$. The Fisher--Rao metric is defined only on the subset $H_0\subset H$, but under the SRSF representation we can generalize it to $H$ endowed with the $\mathbb{L}^2$ metric. We call the $\mathbb{L}^2$ metric on the SRSF representation space the extended Fisher--Rao metric. \subsection{Amplitude distance} We shall use the amplitude distance, which has been shown to be a proper distance on the functional shape space, to measure the similarity of pattern/shape, $$d^{(m)}_{\rm FR}=\inf\limits_{\gamma}\|f_1-f_2\circ\gamma\|_{\rm FR},$$ or equivalently, $$d^{(m)}_{\rm FR}=\inf\limits_{\gamma}\|s_1-(s_2\circ\gamma)\sqrt{\dot{\gamma}}\|_2,$$ which makes $\mathcal{E}=\widetilde{\mathbb{L}}/\Gamma$ a metric space. If two functions are of the same shape, then their amplitude distance is zero. The geodesic distance under the Fisher--Rao metric is invariant to simultaneous warping. Therefore, phase variability does not influence the amplitude distance between two functions, say, $$\inf_{\gamma}\|f_1\circ\gamma_1-f_2\circ\gamma_2\circ\gamma\|_{\text{FR}}=\inf_{\gamma}\|f_1-f_2\circ\gamma\|_{\text{FR}},$$ and thus the amplitude distance between two shape objects is unique. This is the main reason why we use the amplitude distance to measure shape similarity. {\bf Remark}: In this paper, we use both the amplitude distance and the Euclidean distance to evaluate the predictions. Neither distance alone can evaluate the predictions fully, as we consider both amplitude and phase variability. \section{Theoretical Results} We use the least squares method to estimate the unknown transition probabilities, and we aim to establish the asymptotic properties of the estimator. It is known that the least squares estimator of the stochastic matrix of a Markov chain is consistent and asymptotically normal (see van der Plas (1983)). However, since the real hidden states of the warping and amplitude functions need to be estimated, the least squares estimator of the transition matrix ${P}$ is not necessarily consistent for ${P}$. In order to establish the asymptotic properties of the least squares estimator $\hat{P}$, we make the following assumptions. {\bf Assumptions} \begin{enumerate} \item[A1.] The Markov chain $(\omega_n\colon n\in\mathbb{N})$ is stationary and ergodic, and has finitely many states; \item[A2.] The estimated prototypes are obtained from an independent copy of the observations, and thus the estimated state $\hat{\omega}_n^{(a)}$ resp.\ ${\hat{\omega}}_n^{(f)}$ is independent of $\mathcal{F}_{a,0}^{\infty}$ resp.\ $\mathcal{F}^{\infty}_{f,0}$ given $\omega_n^{(a)}$ resp.\ $\omega_n^{(f)}$, where $\mathcal{F}_{a,0}^\infty=\sigma(\omega^{(a)}_0,\ldots,\omega^{(a)}_\infty)$ and $\mathcal{F}_{f,0}^\infty=\sigma(\omega^{(f)}_0,\ldots,\omega^{(f)}_\infty)$; \item[A3.] The number of states $g$ is known; \item[A4.] The misclassification probabilities are the same for all $f_n$; \item[A5.] The $g^2\times g^2$ matrix $A=\{a_{ij}\}$, where $a_{ij}=E\{\langle\frac{\partial\hat{\omega}_0\widetilde{P}}{\partial \theta_i},\frac{\partial\hat{\omega}_0\widetilde{P}}{\partial \theta_j}\rangle\}$, is positive definite.
\end{enumerate} {\bf Remarks}: Note that Assumption (A2) is compatible with the assumption on the state-space model's error $u_n$. Based on the model assumption, the estimated state $\hat{\omega}_n$ is only related to the real state $\omega_n$ and the random error $u_n$, so the second assumption is a natural consequence of the assumption on $u_n$. Assumption (A2) means that, given the corresponding real state, the estimated state is independent of all other states. This is a reasonable assumption: as the sample size grows large enough, the estimated prototype functions tend to be uncorrelated with any individual function, and we can assume that the estimated state is only related to the corresponding actual state. Bayes' theorem implies the following proposition. \begin{prop} Under Assumptions (A1)--(A4), the transition probabilities of the combined estimated process $(\hat{\omega}^{(f)}_n\otimes\hat{\omega}^{(a)}_n\colon n\in\mathbb{N})$ are given by \begin{align*} P(\hat{\omega}^{(f)}_{n+1},\hat{\omega}^{(a)}_{n+1}|\hat{\omega}^{(f)}_{n},\hat{\omega}^{(a)}_{n}) =\sum_{\omega_{n+1}^{(f)},\omega_{n+1}^{(a)},\omega_n^{(f)},\omega_n^{(a)}}& P(\omega^{(f)}_{n+1},{\omega}^{(a)}_{n+1}|\omega^{(a)}_{n},\omega^{(f)}_{n})P(\hat{\omega}^{(a)}_{n+1}|{\omega}^{(a)}_{n+1})P(\hat{\omega}^{(f)}_{n+1}|{\omega}^{(f)}_{n+1})\\ &\times \frac{P(\hat{\omega}^{(a)}_{n}|{\omega}^{(a)}_{n})P(\hat{\omega}^{(f)}_{n}|{\omega}^{(f)}_{n})P(\omega^{(a)}_{n},\omega^{(f)}_{n})}{\sum_{\omega_n^{(a)},\omega_n^{(f)}}P(\hat{\omega}^{(a)}_{n}|{\omega}^{(a)}_{n})P(\hat{\omega}^{(f)}_{n}|{\omega}^{(f)}_{n})P({\omega}^{(f)}_{n},{\omega}^{(a)}_{n})}. \end{align*} \end{prop} {\bf Remarks}: Proposition 1 gives the transition probabilities of the estimated Markov chain. We show that the least squares estimator for the estimated Markov chain is consistent for $$\widetilde{P}=\{P(\hat{\omega}^{(f)}_{n+1},\hat{\omega}^{(a)}_{n+1}|\hat{\omega}^{(f)}_{n},\hat{\omega}^{(a)}_{n})\}$$ and asymptotically normal. The least squares estimator is defined as the minimizer of $$\Phi(P)=\sum_{n=2}^{N}\left\|\hat{\omega}_{n}-\hat{\omega}_{n-1}{P}\right\|^2,$$ where $$\hat{\omega}_{n}=\hat{\omega}_n^{(f)}\otimes\hat{\omega}_n^{(a)}.$$ By Proposition 1, we have \begin{equation} E[\hat{\omega}_{n+1}|\hat{\omega}_{n}]=\hat{\omega}_{n}\widetilde{P}. \end{equation} We then have the following theorem for the least squares estimator, which is a generalization of the result of van der Plas (1983). In that paper, the author considers aggregated Markov chains, but it is not necessary to assume that the process is a Markov chain; it is enough to have condition (5-1). First we state the following lemma from van der Plas (1983). \begin{lemma} Let $(X_n\colon n\in\mathbb{N})$ be a stationary and ergodic process with values in a Euclidean space $E$. Let $\Theta$ be a compact subspace of some Euclidean space. Let $F$ be a real-valued measurable function on $E\times\Theta$ such that $F(x,\theta)$ is a continuous function of $\theta$ for all $x\in E$. Define $\phi(x)=\sup_{\theta\in\Theta}|F(x,\theta)|$ for all $x$, and assume that $E(\phi(X_0))<\infty$. Then $$\lim\limits_{N\to\infty}\frac{1}{N}\sum_{n=1}^NF(X_n,\theta)=E(F(X_0,\theta))$$ a.s., uniformly in $\theta\in\Theta$. \end{lemma} We can then derive our first result from the above lemma.
\begin{theorem} Let $(\hat{\omega}_n\colon n\in\mathbb{N})$ be the state-indicating vectors of the estimated Markov chain, and assume that Assumptions (A1)--(A4) hold. Then for each $N$ there exists a random matrix $\hat{P}_N$ such that $L_N(\hat{P}_N)=\inf\limits_{P}L_N(P)$ and $$\lim\limits_{N\to\infty}\hat{P}_N=\widetilde{P} \ \ \ a.s.$$ \end{theorem} {\bf Remark}: From Theorem 1, we know that the estimator $\hat{P}$ does not converge to the real transition matrix $P$, but to another stochastic matrix $\widetilde{P}$. Before discussing the asymptotic normality of $\hat{\theta}_N$, we introduce the following notation. Define $$L_N(P)=\frac{1}{N}\sum_{n=2}^N\|\hat{\omega}_{n}-\hat{\omega}_{n-1} P\|^2,\qquad L(P)=E[L_N(P)]=E[\|\hat{\omega}_2-\hat{\omega}_{1}P\|^2],$$ $$\frac{\partial L_N(P)}{\partial \theta_i}=-2N^{-1}\sum_{n=2}^NF_i(n,\theta),$$ where $$F_i(n,\theta)=\left\langle\hat{\omega}_n-\hat{\omega}_{n-1}P,\hat{\omega}_{n-1}\frac{\partial P}{\partial\theta_i}\right\rangle,\qquad\theta=\text{vec}{(\widetilde{P})},$$ and define $$F_n=\left\langle \hat{\omega}_{n}-\frac{1}{2}\hat{\omega}_{n-1}\widetilde{P}, \hat{\omega}_{n-1}\widetilde{P}\right\rangle.$$ We have the relationship $$F_i(n,\theta)=\frac{\partial F_n}{\partial\theta_i}.$$ Further, we need the following lemma concerning the mixing property of $\{F_n\}$, which is an extension of a result in Athreya and Pantula (1986). \begin{lemma} Suppose that $(\hat{\omega}_n\colon n\in\mathbb{N})$ are the state-indicating vectors of the estimated Markov chain with transition probability matrix $\widetilde{P}$ on a state space $(S,\mathcal{S})$. Assume there exists a probability distribution $\widetilde{\pi}$ on $(S,\mathcal{S})$ such that $\|P_y(\hat{\omega}_n\in\cdot)-\widetilde{\pi}(\cdot)\|\to 0\ \text{as}\ n\to\infty$. Then $\{F_{n}\}$ is strong mixing for any initial distribution, and the mixing coefficients satisfy $\sum_{m=1}^{\infty}\alpha(m)<\infty$. \end{lemma} We now show that $\sqrt{N}(\hat{\theta}_N-\theta)$, where $\hat{\theta}_N=\text{vec}{({\hat{P}_N})}$, converges to a normal distribution as $N\to\infty$, using the following theorem of Ibragimov (1962), which establishes asymptotic normality for univariate strong mixing processes. \begin{theorem} Let $(X_n\colon n\in\mathbb{N})$ be a centered, strictly stationary, strong mixing sequence. Suppose there exists $B<\infty$ such that $|X_n|<B$ a.s. and $\sum_{m=1}^{\infty}\alpha(m)<\infty$. Then $$\sigma^2=E(X_0^2)+2\sum_{j=1}^\infty E(X_0X_j)<\infty$$ and, if $\sigma>0$, as $N\to\infty$, $$N^{-1/2}S_N\overset{\mathcal{L}}\to \mathcal{N}(0,\sigma^2),$$ where $S_N=\sum_{n=1}^N X_n$. \end{theorem} Then, for the sequence of least squares estimators $\{\hat{P}_N\colon N=1,2,\ldots\}$, we have the following theorem.
\begin{theorem} Suppose that Assumptions (A1)--(A5) hold and that $(\hat{\omega}_n\colon n\in\mathbb{N})$ are the state-indicating vectors of the estimated Markov chain with stochastic matrix $\widetilde{P}$. Then $$N^{1/2}(\hat{\theta}_N-{\theta})\overset{\mathcal{L}}\to\mathcal{N}(0,\Omega),$$ where $\Omega=A^{-1}\Sigma A^{-1}$, $$\Sigma_{ij}=E(F_i(0,\theta)F_j(0,\theta))+2\sum_{k=1}^\infty E(F_i(0,\theta)F_j(k,\theta)),$$ and $$A=\{a_{ij}\},\qquad a_{ij}=E\left\{\left\langle\frac{\partial\hat{\omega}_0\widetilde{P}}{\partial \theta_i},\frac{\partial\hat{\omega}_0\widetilde{P}}{\partial \theta_j}\right\rangle\right\}.$$ \end{theorem} {\bf Remarks}: From the theorem, we know that the least squares estimator of the transition probability matrix is consistent (for $\widetilde{P}$) and asymptotically normal. Therefore, it is safe to use the SP method for prediction, as the estimator behaves stably for large sample sizes. \section{Simulations} Finite-sample simulations were implemented to illustrate the effectiveness of the SP method. The method was tested on an FAR($1$) process with phase variability. In each simulation run, $200$ (or $500$) functions were generated, and the first 90\% of the simulated trajectories were used to perform one-step-ahead prediction for the remaining 10\% of the trajectories. Each simulation run was repeated 10 times. The warping functions and amplitude functions were simulated separately. We implemented two simulation settings. In the first set-up, we first simulated a Markov chain and a series of prototype warping functions, followed by the actual warping functions. In the second set-up, the current warping function was generated as a weighted average of the previous warping function and an error warping function. The prediction performance was compared through two different metrics, namely the $l^2$ distance and the amplitude distance. In the situation where the variation in the phase accounts for most of the variation in the functional time series, these numerical experiments demonstrate the superiority of the proposed SP method. \subsection{First simulation setup} \subsubsection{Simulation of warping function} Based on the properties of $B$-splines (de Boor 1978), we generated the warping functions by the following procedure. We first generated four prototype warping functions. The $B$-spline scores of the four prototypes were generated as follows: \begin{enumerate} \item Four 6-dimensional vectors with positive elements, $(\xi_{i1},\ldots,\xi_{i6}),\ i=1,2,3,4$, were specified to determine the first-order derivative of the prototype warping functions; \item The vectors obtained in the first step were transformed as follows: $$\phi_{i,j+1}=\frac{\sum_{k=1}^j\xi_{ik}}{\sum_{k=1}^6\xi_{ik}},\qquad j=1,2,\ldots,6.$$ Then a zero was prepended to each vector $(\phi_{i2},\ldots,\phi_{i7})$, $i=1,2,3,4$, to finalize the score vectors of the prototype warping functions. \end{enumerate} The four score vectors of the prototypes, $\phi_i=(\phi_{i1},\ldots,\phi_{i7})$, thus satisfy $\phi_{i1}=0$, $\phi_{i7}=1$ and $\phi_{i1}<\phi_{i2}<\ldots<\phi_{i7}$. The warping function prototypes were represented by the $B$-spline expansions (the $B$-spline functions were generated with the {\bf R} package {\bf fda}) $$b_i=\sum_{j=1}^7\phi_{ij}B_j.$$ The error warping functions, denoted $\gamma_n^e$, were generated through the same procedure. The states of the warping functions were simulated from a Markov process whose transition probability matrix has the representation \begin{equation*} P=\left(\begin{array}{cccc} p&(1-p)/3&(1-p)/3&(1-p)/3\\ (1-p)/3&p&(1-p)/3&(1-p)/3\\ (1-p)/3&(1-p)/3&p&(1-p)/3\\ (1-p)/3&(1-p)/3&(1-p)/3&p\\ \end{array}\right). \end{equation*} Each state is associated with a prototype. The final warping functions were obtained by $$\gamma_n(t)=(1-\tau)b_{c^{(f)}_n}(t)+\tau \gamma^e_n(t),$$ where $0<\tau<1$ determines the proportion of the error component, and $c^{(f)}_n$ is the simulated state of the $n$th warping function; a sketch of this construction is given below.
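The following is a Python sketch of this simulation pipeline (prototypes, Markov states, and the final mixed warping functions). It is a sketch only: scipy cubic $B$-splines with clamped knots are an assumed concrete choice standing in for the {\bf R} {\bf fda} basis, and the parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
k = 3                                         # cubic B-splines
# 11 clamped knots with degree 3 give 7 basis functions B_1,...,B_7
knots = np.r_[np.zeros(4), np.linspace(0, 1, 5)[1:-1], np.ones(4)]

def prototype(rng):
    xi = rng.uniform(0.2, 1.0, size=6)        # positive increments
    phi = np.r_[0.0, np.cumsum(xi) / np.sum(xi)]  # 0=phi_1<...<phi_7=1
    # Increasing coefficients give an increasing spline with
    # gamma(0) = 0 and gamma(1) = 1, i.e. a valid warping function.
    return BSpline(knots, phi, k)

b = [prototype(rng) for _ in range(4)]        # the four prototypes
g_err = [prototype(rng) for _ in range(200)]  # error warpings

p = 0.9                                       # transition matrix P
P = np.full((4, 4), (1 - p) / 3) + (p - (1 - p) / 3) * np.eye(4)

states = [rng.integers(4)]                    # simulate the chain
for _ in range(199):
    states.append(rng.choice(4, p=P[states[-1]]))

t = np.linspace(0, 1, 101)
tau = 0.3                                     # error proportion
gammas = np.array([(1 - tau) * b[c](t) + tau * e(t)
                   for c, e in zip(states, g_err)])
\end{verbatim}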
\subsubsection{Simulation of amplitude function} Amplitude functions were generated with the same seven $B$-splines, where the scores of the third and the fifth basis splines were significantly larger than those of the other basis splines; thus all curves have the same two-peak pattern. The two pronounced scores jointly follow a VAR($1$) process with switching coefficient matrices, and the amplitude functions were obtained by the basis expansion $$a_n(t)=\sum_{i=1}^7\zeta_{ni}B_i(t).$$ The VAR($1$) process has $4$ coefficient matrices, which are selected by the state of the warping function, \begin{align*}\left( \begin{array}{c} \zeta_{n+1,3}-4\\ \zeta_{n+1,5}-6 \end{array}\right)=\Phi^{(c^{(f)}_n)}\left( \begin{array}{c} \zeta_{n,3}-4\\ \zeta_{n,5}-6 \end{array}\right)+e_{n+1}, \end{align*} where $e_n\sim\mathcal{N}(0,\Sigma)$, $\Sigma=\mathrm{diag}(0.02,0.02)$, and the largest eigenvalues $\lambda_1$ of the $\Phi^{(c^{(f)}_n)}$ are all 0.8. The other scores independently follow $\mathcal{N}(1,0.1)$. The functional time series trajectories were obtained by applying the warping functions to the amplitude functions, $$f_n(t)=a_n(\gamma_n(t)).$$ Figures 3 and 4 show the simulated warping functions and the simulated functional time series for different $\tau$'s. \begin{figure}[!h] \centering \includegraphics[scale=0.38]{Warpingfun.png} \caption{Prototypes and warping functions for different $\tau$} \end{figure} \begin{figure}[!h] \centering \includegraphics[scale=0.3]{Simulated.png} \caption{Simulated curves for different $\tau$'s} \end{figure} \subsubsection{Prediction comparison} The number of states of the amplitude functions was set to $2$, and the warping functions have $4$ hidden states, as determined by Monte-Carlo cross-validation. From the simulations, we can see that the prediction accuracy of the SP method is competitive, and that the shape of the curve predicted by the SP method is more similar to that of the corresponding true curve. Tables 1--3 show the average $l^2$ prediction error ($l^2$) and the amplitude distance (FR) for $p=0.3,0.6,0.9$, $N=200,500$ and $\tau=0.2,0.3,0.4$ with $\lambda_1=0.8$ (the variance of the errors is shown in parentheses). The shape-preserving prediction method always captures the shape better than the amplitude-only prediction method, and it can even give more accurate predictions when the warping functions can be predicted well. Throughout the simulation section, ``SP'' represents the shape-preserving prediction method, and ``AO'' represents the amplitude-only prediction method (see e.g.\ Aue et al.\ (2015)). More simulation results for the cases $\lambda_1=0.6,0.4$ can be found in the appendix. \begin{table}[h!]
\centering \begin{tabular}{|p{0.2in}|cc|cc|} \hline \hline \multicolumn{1}{|c|}{$\tau$=0.4} & \multicolumn{4}{c|}{$N$=200}\\ \hline $p$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.247(0.086) & 0.153(0.076) & 0.244(0.077) & 0.202(0.081) \\ 0.6& 0.200(0.096) & 0.151(0.082) & 0.207(0.086) & 0.188(0.087) \\ 0.9& 0.154(0.115) & 0.157(0.079) & 0.170(0.106) & 0.177(0.082) \\ \hline \multicolumn{1}{|c|}{$\tau$=0.4} & \multicolumn{4}{c|}{$N$=500}\\ \hline $p$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.252(0.089) & 0.142(0.071) & 0.255(0.080) & 0.195(0.075) \\ 0.6& 0.212(0.110) & 0.148(0.073) & 0.222(0.099) & 0.186(0.077) \\ 0.9& 0.131(0.087) & 0.152(0.069) & 0.152(0.081) & 0.171(0.074) \\ \hline \hline \end{tabular} \caption{Fisher--Rao dissimilarity distance and $l^2$ distance for $\tau=0.4$} \end{table} \begin{table}[h!] \centering \begin{tabular}{|p{0.2in}|cc|cc|} \hline \hline \multicolumn{1}{|c|}{$\tau$=0.3} & \multicolumn{4}{c|}{$N$=200}\\ \hline $p$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.295(0.108) & 0.149(0.073) & 0.290(0.094) & 0.217(0.075) \\ 0.6& 0.260(0.126) & 0.138(0.066) & 0.261(0.115) & 0.191(0.072) \\ 0.9& 0.145(0.112) & 0.148(0.064) & 0.161(0.106) & 0.170(0.075) \\ \hline \multicolumn{1}{|c|}{$\tau$=0.3} & \multicolumn{4}{c|}{$N$=500}\\ \hline $p$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.281(0.093) & 0.141(0.070) & 0.279(0.080) & 0.214(0.072) \\ 0.6& 0.235(0.116) & 0.144(0.071) & 0.235(0.106) & 0.192(0.075) \\ 0.9& 0.115(0.086) & 0.154(0.070) & 0.140(0.088) & 0.174(0.077) \\ \hline \hline \end{tabular} \caption{Fisher--Rao dissimilarity distance and $l^2$ distance for $\tau=0.3$} \end{table} \begin{table}[h!] \centering \begin{tabular}{|p{0.2in}|cc|cc|} \hline \hline \multicolumn{1}{|c|}{$\tau$=0.2} & \multicolumn{4}{c|}{$N$=200}\\ \hline $p$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.317(0.105) & 0.140(0.069) & 0.307(0.086) & 0.242(0.072) \\ 0.6& 0.257(0.134) & 0.153(0.076) & 0.258(0.126) & 0.224(0.080) \\ 0.9& 0.123(0.114) & 0.153(0.075) & 0.144(0.106) & 0.172(0.084) \\ \hline \multicolumn{1}{|c|}{$\tau$=0.2} & \multicolumn{4}{c|}{$N$=500}\\ \hline $p$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.325(0.102) & 0.141(0.071) & 0.316(0.085) & 0.248(0.072) \\ 0.6& 0.260(0.130) & 0.138(0.068) & 0.256(0.119) & 0.208(0.074) \\ 0.9& 0.131(0.120) & 0.148(0.064) & 0.148(0.114) & 0.169(0.070) \\ \hline \hline \end{tabular} \caption{Fisher--Rao dissimilarity distance and $l^2$ distance for $\tau=0.2$} \end{table} \subsection{Second simulation setup} In the second setup, the simulation of the amplitude functions is similar to the procedure in the first setup, except that we used an ordinary VAR model instead of a switching-coefficient VAR model. The major difference is the simulation of the warping functions, which is discussed below. In this simulation setup, we used the same procedure to simulate a sequence of error warping functions, and the simulated warping functions are given by the recursion $$\gamma_{n+1}=\beta\gamma_n+(1-\beta)\gamma_n^e,$$ where $\beta$ takes values in $\{0.3, 0.5, 0.7\}$; a sketch of this recursion is given below.
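A compact sketch of this recursion on a common evaluation grid (function and variable names are illustrative); note that a convex combination of warping functions is again increasing with endpoints 0 and 1, so every $\gamma_n$ remains a valid warping function:
\begin{verbatim}
import numpy as np

def recursive_warpings(gamma0, errors, beta):
    """Second-setup warpings: gamma_{n+1} = beta*gamma_n + (1-beta)*e_n.
    gamma0 and each row of errors are warpings sampled on one grid."""
    out = [gamma0]
    for e in errors:
        out.append(beta * out[-1] + (1.0 - beta) * e)
    return np.asarray(out)
\end{verbatim}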
A smaller value of $\beta$ indicates higher phase variability. Figure 5 shows the simulated functional time series for different $\beta$'s. Based on the cross-validation results, the selected numbers of states of the amplitude and warping functions are 2 and 4, respectively. Tables 4--6 show the average $l^2$ prediction error and the amplitude distance between the predicted curves and the corresponding real curves for different values of $\lambda_1$, where, as before, $\lambda_1$ is defined as the largest eigenvalue of the coefficient matrix in the VAR model. \begin{figure}[!h] \centering \includegraphics[scale=0.35]{secondsim.png} \caption{Simulated curves for different $\beta$'s} \end{figure} \begin{table}[h!] \centering \begin{tabular}{|p{0.2in}|cc|cc|} \hline \hline \multicolumn{1}{|c|}{$\lambda_1=0.4$} & \multicolumn{4}{c|}{$N$=200}\\ \hline $\beta$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.378(0.198) & 0.142(0.069) & 0.361(0.157) & 0.340(0.097) \\ 0.5& 0.294(0.138) & 0.138(0.070) & 0.273(0.112) & 0.227(0.080) \\ 0.7& 0.212(0.101) & 0.148(0.071) & 0.186(0.086) & 0.173(0.076) \\ \hline \hline \multicolumn{1}{|c|}{$\lambda_1=0.4$} & \multicolumn{4}{c|}{$N$=500}\\ \hline $\beta$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.365(0.179) & 0.138(0.066) & 0.349(0.136) & 0.339(0.087) \\ 0.5& 0.285(0.140) & 0.137(0.065) & 0.270(0.119) & 0.220(0.077) \\ 0.7& 0.207(0.100) & 0.138(0.067) & 0.181(0.079) & 0.163(0.072) \\ \hline \hline \end{tabular} \caption{Fisher--Rao dissimilarity distance and $l^2$ distance for $\lambda_1=0.4$} \end{table} \begin{table}[h!] \centering \begin{tabular}{|p{0.2in}|cc|cc|} \hline \hline \multicolumn{1}{|c|}{$\lambda_1=0.6$} & \multicolumn{4}{c|}{$N$=200}\\ \hline $\beta$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.373(0.172) & 0.142(0.066) & 0.359(0.135) & 0.329(0.091) \\ 0.5& 0.303(0.144) & 0.146(0.067) & 0.284(0.123) & 0.234(0.075) \\ 0.7& 0.195(0.089) & 0.140(0.065) & 0.183(0.082) & 0.164(0.072) \\ \hline \hline \multicolumn{1}{|c|}{$\lambda_1=0.6$} & \multicolumn{4}{c|}{$N$=500}\\ \hline $\beta$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.380(0.185) & 0.139(0.065) & 0.356(0.148) & 0.324(0.085) \\ 0.5& 0.300(0.152) & 0.135(0.066) & 0.280(0.125) & 0.216(0.078) \\ 0.7& 0.190(0.088) & 0.142(0.066) & 0.174(0.077) & 0.168(0.071) \\ \hline \hline \end{tabular} \caption{Fisher--Rao dissimilarity distance and $l^2$ distance for $\lambda_1=0.6$} \end{table} \begin{table}[h!]
\centering \begin{tabular}{|p{0.2in}|cc|cc|} \hline \hline \multicolumn{1}{|c|}{$\lambda_1=0.8$} & \multicolumn{4}{c|}{$N$=200}\\ \hline $\beta$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.380(0.184) & 0.140(0.062) & 0.358(0.142) & 0.336(0.093) \\ 0.5& 0.298(0.149) & 0.145(0.066) & 0.281(0.119) & 0.220(0.079) \\ 0.7& 0.225(0.107) & 0.140(0.062) & 0.193(0.082) & 0.172(0.083) \\ \hline \hline \multicolumn{1}{|c|}{$\lambda_1=0.8$} & \multicolumn{4}{c|}{$N$=500}\\ \hline $\beta$ & $l^2(\text{SP})$ & \text{\rm FR}(\text{SP}) & $l^2(\text{AO})$ & \text{\rm FR}(\text{AO})\\ \hline 0.3& 0.372(0.182) & 0.139(0.065) & 0.358(0.145) & 0.334(0.087) \\ 0.5& 0.295(0.148) & 0.138(0.072) & 0.280(0.126) & 0.227(0.083) \\ 0.7& 0.202(0.102) & 0.143(0.067) & 0.184(0.086) & 0.170(0.068) \\ \hline \hline \end{tabular} \caption{Fisher--Rao dissimilarity distance and $l^2$ distance for $\lambda_1=0.8$} \end{table} In general, the SP method cannot outperform the amplitude-only method with respect to the $l^2$ mean squared error, as the amplitude-only method is designed to minimize the $l^2$ prediction error. What is attractive is that the prediction accuracy of the SP method remains competitive, and the shape-preserving advantage of the SP method is very pronounced, especially when the data show strong phase variability. \subsection{Robustness of the number of states} To show that the prediction by our SP method is robust to the number of states, we apply Monte-Carlo cross-validation on 1000 simulated curves with different numbers of states. In each case, the $l^2$ prediction error and the amplitude distance between the predicted functions and the corresponding actual functions are obtained. The number of states of the warping functions is 3, 4 or 5, and that of the amplitude functions is 1, 2 or 3. Tables 7 and 8 show the two kinds of errors under the first simulation setup ($p=0.9$, $\tau=0.2, 0.4$), and Tables 9 and 10 show the results for the second simulation setup ($\beta=0.3,0.7$). Each simulation run is repeated 10 times, and in each run, $80\%$ of the curves are randomly selected to predict the remaining $20\%$ of the curves; a sketch of this cross-validation loop is given below. It is noted that the prediction by the SP method is robust to the number of states. Since the correct number of states in the first simulation setup is known to be 4, there is a significant improvement when 4 prototypes are chosen for the warping functions.
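The following sketch outlines the Monte-Carlo cross-validation loop used for this robustness check; \texttt{fit\_sp}, \texttt{predict\_sp} and \texttt{amplitude\_dist} are hypothetical placeholders for the two-stage fitting/prediction routines of Algorithm 1 and for the amplitude distance defined above.
\begin{verbatim}
import numpy as np

def mc_cv_errors(curves, g, l, runs=10, train_frac=0.8, seed=0):
    """Average l^2 and amplitude errors for a candidate pair (g, l)
    of state numbers, via Monte-Carlo cross-validation over randomly
    placed blocks of consecutive training curves."""
    rng = np.random.default_rng(seed)
    N = len(curves)
    n_train = int(train_frac * N)
    l2_err, amp_err = [], []
    for _ in range(runs):
        start = rng.integers(0, N - n_train)      # consecutive block
        train = curves[start:start + n_train]
        test = curves[start + n_train:]
        model = fit_sp(train, g, l)               # placeholder
        for i, f_true in enumerate(test):
            f_hat = predict_sp(model, steps=i + 1)   # placeholder
            l2_err.append(np.linalg.norm(f_true - f_hat))
            amp_err.append(amplitude_dist(f_true, f_hat))  # placeholder
    return np.mean(l2_err), np.mean(amp_err)
\end{verbatim}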
\begin{table} \centering \fontsize{8}{10}\selectfont \begin{tabular}{|c|c|c|c|} \hline \hline \diagbox[width=8em,trim=l]{Amplitude}{Phase} & 3 & 4 & 5 \\ \hline 1 & 0.192(0.141) & 0.114(0.147) & 0.112(0.145) \\ 2 & 0.194(0.142) & 0.113(0.145) & 0.114(0.145) \\ 3 & 0.167(0.142) & 0.113(0.146) & 0.112(0.147) \\ \hline \hline \end{tabular}\vspace{0cm} \caption{Mean squared error (amplitude distance) for $\tau=0.2$} \end{table} \begin{table} \centering \fontsize{8}{10}\selectfont \begin{tabular}{|c|c|c|c|} \hline \hline \diagbox[width=8em,trim=l]{Amplitude}{Phase} & 3 & 4 & 5 \\ \hline 1 & 0.168(0.143) & 0.120(0.144) & 0.122(0.145) \\ 2 & 0.174(0.142) & 0.121(0.145) & 0.124(0.145) \\ 3 & 0.169(0.142) & 0.121(0.145) & 0.122(0.145) \\ \hline \hline \end{tabular}\vspace{0cm} \caption{Mean squared error (amplitude distance) for $\tau=0.4$} \end{table} \begin{table} \centering \fontsize{8}{10}\selectfont \begin{tabular}{|c|c|c|c|} \hline \hline \diagbox[width=8em,trim=l]{Amplitude}{Phase} & 3 & 4 & 5 \\ \hline 1 & 0.366(0.132) & 0.370(0.131) & 0.363(0.130) \\ 2 & 0.370(0.131) & 0.373(0.131) & 0.370(0.130) \\ 3 & 0.366(0.132) & 0.371(0.130) & 0.367(0.131) \\ \hline \hline \end{tabular}\vspace{0cm} \caption{Mean squared error (amplitude distance) for $\beta=0.3$} \end{table} \begin{table} \centering \fontsize{8}{10}\selectfont \begin{tabular}{|c|c|c|c|} \hline \hline \diagbox[width=8em,trim=l]{Amplitude}{Phase} & 3 & 4 & 5 \\ \hline 1 & 0.200(0.127) & 0.200(0.128) & 0.192(0.130) \\ 2 & 0.201(0.127) & 0.202(0.129) & 0.197(0.129) \\ 3 & 0.202(0.128) & 0.200(0.128) & 0.195(0.129) \\ \hline \hline \end{tabular}\vspace{0cm} \caption{Mean squared error (amplitude distance) for $\beta=0.7$} \end{table} \section{Analysis of the Ocean Surface Temperature} As oceans cover more than 70\% of the earth's surface, the temperature of the ocean surface plays an important role in the interaction between air and water, thus further influencing the atmosphere. As atmospheric greenhouse gas levels increase, the oceans absorb more heat and ocean surface temperatures increase. Ocean surface temperatures are therefore considered to be a good measure of changes in the climate system. For example, the El Ni\~no phenomenon can be detected from the ocean surface temperature. El Ni\~no is associated with a band of warm ocean water that develops in the central and east-central equatorial Pacific, which is the area between approximately the International Date Line and $120^\circ$W, including off the Pacific coast of South America. Meanwhile, La Ni\~na events are associated with abnormally low ocean surface temperatures (see Xie et al.\ (2016)). Therefore, it is important to develop statistical methods that give accurate predictions of ocean surface temperature curves. Our proposed method is inspired by this problem of high practical significance. \begin{figure}[!h] \centering \includegraphics[scale=0.5]{temperature.png} \caption{Smoothed annual trajectories, registered trajectories, warping functions, and prototype warping functions} \end{figure} The ocean surface temperature (SST) data for the Ni\~no 1+2 region are provided on the Climate Prediction Center website (\href{http://www.cpc.ncep.noaa.gov/data/indices/ersst3b.nino.mth.81-10.ascii}{http://www.cpc.ncep.noaa.gov/data/indices/ersst3b.nino.mth.81-10.ascii}). In our study, we used annual SST curves from 1950--2015 (66 years); each annual curve consists of monthly SST readings. We used 11 $B$-splines to transform the monthly records into smooth functions for each year; a sketch of this smoothing step is given below.
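A sketch of this smoothing step for one year of data (12 monthly values), via a least squares fit on a cubic $B$-spline design matrix; the knot placement is an assumption, and \texttt{scipy} is used here in place of the {\bf R} {\bf fda} routines employed for the analysis:
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

months = np.linspace(0, 1, 12)   # 12 monthly readings, rescaled to [0,1]
k = 3
# 15 clamped knots with degree 3 give 11 cubic basis functions
knots = np.r_[np.zeros(4), np.linspace(0, 1, 9)[1:-1], np.ones(4)]

def smooth_year(sst):
    """Least squares B-spline smoother for one year of monthly SSTs."""
    B = BSpline.design_matrix(months, knots, k).toarray()   # 12 x 11
    coef, *_ = np.linalg.lstsq(B, sst, rcond=None)
    return BSpline(knots, coef, k)   # callable smooth annual curve
\end{verbatim}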
Two curves with obviously different patterns were removed; thus, the dataset contains a total of 64 annual SST functions. In our preliminary analyses, we produced plots of the yearly curves, which clearly display natural phase variability (see Figure 1). This suggests the importance of using a statistical procedure that is able to separate amplitude and phase variability before prediction. As a first step, the phase and amplitude components were separated. As the sample size is not large enough, we did not consider the interaction between phase and amplitude. We chose 2 prototype warping functions to represent the phase variability. The smoothed curves, registered curves, warping functions, and prototype warping functions are shown in Figure 6. We applied the prediction method of Aue et al.\ (2015) to perform one-step-ahead prediction for the amplitude functions. The unknown coefficients were re-estimated for each prediction, that is, $f_{k},\ldots,f_{k+49}$ were used to fit the model for every $k$, where $k=1,\ldots,14$; then, the out-of-sample prediction for the value $f_{k+50}$ was made. Finally, the predictions were evaluated by the mean squared prediction error and the amplitude distance. The average $l^2$ prediction error and amplitude distance were computed as \begin{align*} \text{Shape-preserving method:}&\ d_{l^2}=0.894, d^{(m)}_{\rm FR}=0.176;\\ \text{Amplitude-only method:}&\ d_{l^2}=0.905, d^{(m)}_{\rm FR}=0.196. \end{align*} The SP method preserves the shape of the temperature trajectories better than the amplitude-only method, and its prediction accuracy is competitive. \section{Conclusions} In this paper, we developed the SP method, a new prediction method for stationary functional time series with a common pattern. It is the first method that incorporates functional registration into prediction, and thus the first method to consider phase variability in prediction. The prediction algorithm is a step-wise procedure: the amplitude and phase components are predicted jointly, and the two predicted components are combined to form the final prediction. The SP method has two main advantages. First, if the curves possess similar patterns and significant phase variability, a large number of principal components is needed to capture the pattern, which increases model complexity. In contrast, the new methodology first separates the amplitude and phase components, so the model can capture the shape better. Second, the method is ``natural'' in the following sense: (i) $S(\cdot)$ is a bijective transformation, so no further adjustment is needed to transform an SRSF back to a warping function, which avoids additional bias; (ii) the method does not directly apply linear models to non-linear objects, making the prediction natural and avoiding the extremely small values that can result from the ``logarithm''. The simulation study and the analysis of the annual ocean surface temperature data show that the SP method is superior to the amplitude-only method in capturing the common pattern of the trajectories, while producing predictions with competitive accuracy. \section*{Acknowledgements} The authors sincerely thank Prof.\ Alexander Aue for his help in finishing the paper.
\section{Introduction} The calibration of the detectors used in astronomy is fundamental for scientific studies. However, the processes at the basis of the calibrations are often complex, especially for instruments on board satellites. Indeed, their calibrations have to evolve quickly with time, as the space environment does not favour the stability of the instruments, for example due to impacts with micrometeorites, excessive irradiation by high-energy particles, or unexpected effects that were not observed on the ground. Focusing our attention on X-ray satellites, they are also highly affected by fluorescence emission lines produced by material in the satellite environment when irradiated by X-ray photons. Furthermore, the calibration of the instruments is also carried out using an X-ray source on board the satellite, which produces only a few emission lines and does not allow a precise calibration of the whole energy range. Here we focus our attention on the impact of the calibration of the energy scale of the EPIC-pn instrument \citep{struder01} on-board XMM-Newton \citep{jansen01}, when this instrument is operated in Timing Mode and observes bright sources ($>100$ count s$^{-1}$). It has been noticed that the large number of photons, i.e.\ the large amount of energy, which these bright sources deposit on the CCD may affect the calibration of the energy scale. In particular, it distorts the observed spectral shape of the sources, altering the scientific results. In order to account for this effect, two approaches were developed: one is calibrated on the spectrum in Pulse Invariant (PI) space, assuming an astrophysical model of the spectrum in the 1.5-3 keV energy band, while the other acts on the Pulse Height Analyser (PHA) information (Guainazzi 2013). The former, called Rate Dependent Charge Transfer Inefficiency (RDCTI or \textit{epfast}; Guainazzi et al. 2008, XMM-CAL-SRN-248\footnote{http://xmm2.esac.esa.int/docs/documents/CAL-SRN-0248-1-0.ps.gz}), was historically developed to correct for Charge Transfer Inefficiency (CTI) and X-ray loading (XRL), although their energy dependence is based on unverified assumptions (Guainazzi $\&$ Smith 2013, XMM-CAL-SNR-0302\footnote{http://xmm2.esac.esa.int/docs/documents/CAL-SRN-0302-1-5.pdf}). In the second approach, called Rate Dependent Pulse Height Amplitude (RDPHA), the energy scale is instead calibrated by fitting the peaks in derivative PHA spectra corresponding to the Si-K ($\sim1.7$ keV) and Au-M ($\sim2.3$ keV) edges of the instrumental response, where the gradient of the effective area is the largest. It also includes an empirical calibration at the energies of the transitions of the Iron $K_{\alpha}$ line (6.4-7.0 keV; Guainazzi, 2014, XMM-CAL-SRN-0312\footnote{http://xmm2.esac.esa.int/docs/documents/CAL-SRN-0312-1-4.pdf}). The RDPHA approach avoids any assumption of model dependency in the spectral range around the edges, and it is calibrated in PHA space before events are corrected for gain and CTI. In order to test the goodness of these corrections, bright sources with a number of features (in absorption or emission) are needed. To investigate the two approaches, we selected the bright source GX 13+1 as a test case. GX 13+1 is a low-mass X-ray binary (LMXB) hosting a well-known, persistently accreting neutron star (NS) at a distance of $7\pm1$ kpc. Its companion is an evolved K IV mass-donor giant star \citep{bandyopadhyay99}.
In particular, GX 13+1 is a dipping source \citep{corbet10,diaz12}, which has probably shown periodic dips, due to the orbital motion, during the last decades \citep{iaria14}. It has been suggested that the dips are most likely produced by optically thick material at the outer edge of the disc, created by the collision between the accretion flow from the companion star and the outer disc (e.g. \citealt{white82}), or by outflows in the outer disc. The orbital period of GX 13+1 is 24.52 days, making this source the LMXB with the second-longest orbital period, after GRS 1915+105. The continuum emission of GX 13+1 can be described with the combination of a multicolour blackbody plus a cold, optically thick comptonisation component (e.g. \citealt{homan04,diaz12}). Because the comptonisation is optically thick, it can also be approximated by a blackbody component, which simplifies the fit, given that the properties of the comptonising component (electron temperature and optical depth) are usually poorly constrained \citep{ueda01,sidoli02,ueda04}. The spectra also show the existence of several spectral features \citep{diaz12,dai14}. These are associated with a warm absorbing medium close to the source, produced by outflows from the outer regions of the accretion disc. The warm absorber is present throughout the orbital period and might become denser during the dip episodes, suggesting a cylindrical distribution around the source. It also appears more opaque, and probably cloudy, close to the plane of the disc. In addition, the absorption lines associated with the warm absorber are produced by highly ionised species and indicate bulk outflow velocities of $\sim 400$ km s$^{-1}$ \citep{ueda04,madej13,dai14}. Furthermore, a broad emission component at the energies of the K-shell transitions of Fe XXI-XXVI was found and interpreted as reflection of hard photons from the surface of the accretion disc \citep{diaz12}. It has been suggested that the width of the line is produced not by relativistic effects but by Compton broadening in the corona, although this interpretation has been questioned \citep{cackett13}. GX 13+1 is therefore an ideal candidate to test the RDCTI and RDPHA calibration approaches, as it shows a simple continuum characterised by a number of narrow absorption features and a broad emission line. \section{Data Reduction} \label{data_reduction} We carried out a spectral analysis of one \textit{XMM-Newton} observation (Obs.ID. 0122340901) of the accreting NS GX 13+1, taken in Timing mode. Data were reduced using the latest calibrations (as of April 25, 2014) and the Science Analysis Software (SAS) v.\ 13.5.0. At high count rates, X-ray loading and CTI effects have to be taken into account, as they affect the spectral shape and, in particular, produce a shift in the energy of the spectral features. In order to optimize the data reduction, we generated two EPIC-pn event files, one with the RDCTI corrections and one with the RDPHA corrections: for the RDCTI corrections, we made use of the standard \textit{epfast} corrections\footnote{http://xmm.esac.esa.int/sas/current/documentation/threads/EPIC$\_$reprocessing.shtml}, adopting the following command: ``{\sc epproc runepreject=yes withxrlcorrection=yes runepfast=yes}''; for the RDPHA corrections, we reprocessed the data using the command: ``{\sc epproc runepreject=yes withxrlcorrection=yes runepfast=no withrdpha=yes}''. The setting ``{\sc runepfast=no withrdpha=yes}'' has to be applied explicitly in order to avoid the combined use of both corrections.
We note that the adopted RDCTI (\textit{epfast}) task is the latest released version, and its effect on the data differs from that of \textit{epfast} versions older than May 23, 2012. Indeed, the older versions combined the XRL and rate-dependent CTI corrections into a single correction, while they are now applied separately, each with its appropriate calibration. Hence, the calibrations from different \textit{epfast} versions might provide different results. For each EPIC-pn event file, we extracted the spectra from events with {\sc pattern}$\leq 4$ (which allows for single and double pixel events), and we set `{\sc flag}=0', retaining events optimally calibrated for spectral analysis. Source and background spectra were then extracted selecting the ranges RAWX=[31:41] and RAWX=[3:5], respectively. We generated the auxiliary files using \textit{arfgen}, setting ``detmaptype=psf'' and ``psfmodel=EXTENDED'' and using the calibration file ``XRT3$\_$XPSF$\_$0016.CCF'' (Guainazzi et al., 2014; XMM-CAL-SRN-0313\footnote{http://xmm2.esac.esa.int/docs/documents/CAL-SRN-0313-1-3.pdf}). The EPIC-pn spectra were subsequently rebinned with an oversampling factor of 3 using \textit{specgroup}. Finally, all RGS spectra were extracted using the standard \textit{rgsproc} task, filtered for periods of high background, and grouped with a minimum of 25 counts per noticed channel. The RGS and EPIC-pn spectra were then fitted simultaneously using {\sc xspec} V. 12.8.1 \citep{arnaud96}, in the ranges 0.6-2.0 keV and 2.0-10.0 keV, respectively. We also compared the EPIC-pn data to a \textit{Chandra} observation, with Obs.ID 11814. In particular, we analysed the data of the High Energy Grating (HEG) instrument onboard \textit{Chandra}. Since the data reduction and extraction process is described in \citet{dai14}, we refer the reader to that paper for more details. \subsection{Pile-up} \begin{figure} \center \hspace{-0.63cm} \includegraphics[height=9.0cm,width=7.0cm,angle=270]{tot_1col_3col_5col_fit_simultaneo_estratti_correttamente.ps} \caption{EPIC-pn spectra where none (\textit{black}), one (\textit{red}), three (\textit{green}) and five (\textit{blue}) columns were excised in order to test for pile-up effects. We also show the residuals obtained adopting the model {\sc phabs*edge*(nthcomp + gauss)} and fitting the spectra simultaneously. We find consistency between the spectra obtained when removing three and five columns of the CCD.} \label{comparison_pile_up} \end{figure} The source has a mean count rate in the EPIC-pn detector of $\sim700$ cts s$^{-1}$, close to the nominal threshold for pile-up effects in Timing mode (800 cts s$^{-1}$). In order to test for the presence of pile-up effects in the EPIC data, we initially made use of the SAS tool \textit{epatplot}. It provided indications that the data are affected by pile-up, which can be largely corrected by excising the three brightest central columns (RAWX=[35:37]). However, the best test is to compare the residuals of the spectra with none, one, three and five columns excised, adopting the same spectral model for all of them. Hence, we selected the range 2.4-10 keV of the four EPIC-pn spectra (RDPHA corrected) and fitted them simultaneously with an absorbed {\sc nthcomp} model \citep{zdiarski96}, leaving only the relative normalizations of the spectra free to vary. We also added an absorption edge and Gaussian models to take into account some narrow absorption lines and a broad emission line (see next sections for more details).
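For orientation, a minimal PyXspec sketch of such a fit for a single spectrum (a sketch only: the file name is a placeholder, the simultaneous fit of the four spectra is omitted, and the parameter index refers to the {\sc phabs*edge*(nthcomp+gauss)} model as loaded here):
\begin{verbatim}
from xspec import Spectrum, Model, Fit

s = Spectrum("pn_rdpha_3col_excised.pha")  # placeholder file name
s.ignore("**-2.4")                         # keep the 2.4-10 keV band
s.ignore("10.0-**")

m = Model("phabs*edge*(nthcomp+gauss)")
m(1).values = 2.7                          # phabs nH in 10^22 cm^-2
Fit.perform()                              # chi^2 minimization
print(Fit.statistic, Fit.dof)              # fit statistic and dof
\end{verbatim}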
We obtained a reduced $\chi^2$ of 1.9 ($\chi^2=874.93$ for 451 degrees of freedom). We show the best fit and its residuals in Figure~\ref{comparison_pile_up}. Inspecting the residuals, the spectra extracted after excising three and five columns are consistent, confirming that removing only three CCD columns corrects for pile-up effects. Therefore, in the spectral analyses of the next sections, we will only consider the spectrum extracted after removing the three central columns. We also highlight that the same findings hold for the RDCTI corrected spectra. \section{Spectral analysis} \label{analysis} We analysed the spectra extracted from the RDPHA and RDCTI corrected event files, adopting the same continuum for both of them. The neutral absorption is described with the {\sc phabs} model, using the abundances of \citet{andres89}. The continuum is instead based on a blackbody component ({\sc bbody} in {\sc xspec}) plus a comptonisation component ({\sc nthcomp} in {\sc xspec}; \citealt{zdiarski96}). We note that leaving the seed photon temperature (kT$_{seed}$) free to vary makes this parameter totally unconstrained; therefore, we linked it to the blackbody temperature, assuming that the seed photons for comptonisation are provided by the inner regions of the accretion disc. \begin{figure} \center \hspace{-0.63cm} \includegraphics[height=9.0cm,width=7.0cm,angle=270]{rdpha_vs_rdcti.eps} \caption{Comparison of the unfolded ($Ef(E)$) EPIC-pn spectra in the 5-10 keV energy band, applying the RDPHA (\textit{black} spectrum) and RDCTI (\textit{red} spectrum) corrections. Both datasets are fitted with their corresponding best-fitting models, {\sc edge$\cdot$phabs$\cdot$(bbody+nthcomp+6 gaussian)}, shown in Tables~\ref{table_continuum_gauss_rdpha} and~\ref{gauss_rdpha} (the emission feature is taken into account). Below 5 keV, the spectra are generally consistent. Above this threshold, instead, clear discrepancies in the features and the continua are seen (see text).} \label{comparison_rdpha_rdcti} \end{figure} We introduced a multiplicative constant model in each fit in order to take into account the different calibrations of the RGS and EPIC instruments. We fixed the constant to 1 for the EPIC-pn spectrum and allowed the RGS constants to vary. In general, this parameter does not deviate by more than 10$\%$ from the EPIC-pn constant. \begin{table*} \footnotesize \begin{center} \caption{Best fit spectral parameters obtained with the absorbed {\sc bbody+nthcomp} model plus {\sc diskline} or {\sc reflionx}. The Gaussian lines and the absorption edge are always taken into account. Errors are given at the 90\% confidence level for each parameter.} \label{table_continuum_gauss_rdpha} \scalebox{0.85}{\begin{minipage}{18.0cm} \begin{tabular}{lllllllll} \hline Model & Component & \multicolumn{6}{c}{1 col. removed$^1$} \\ \\
\multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{2}{c}{(7)} & \multicolumn{1}{c}{(8)} \\ \\ & & RDPHA & RDCTI & RDPHA & RDCTI & \multicolumn{2}{c}{RDPHA$^{**}$} & RDCTI \\ \\ {\sc phabs} &N$_H$ (10$^{22}$ cm$^{-2}$)$^a$ &$2.67^{+0.05}_{-0.05}$ & $2.70^{+0.05}_{-0.05}$ & $2.68^{+0.06}_{-0.05}$ & $2.68^{+0.05}_{-0.05}$& $2.90^{+0.09}_{-0.06}$ &$3.66^{+0.06}_{-0.09}$ & $3.4^{+0.1}_{-0.1}$\\ \\ {\sc bbody} &kT$_{bb}$ (keV)$^b$& $0.55^{+0.02}_{-0.02}$ & $0.52^{+0.02}_{-0.02}$ & $0.53^{+0.02}_{-0.03}$ & $0.54^{+0.02}_{-0.02}$& $0.53^{+0.1}_{-0.2}$ & - &$0.33^{+0.03}_{-0.02}$\\ & Norm.$^c$ & $0.036^{+0.001}_{-0.01}$ & $0.037^{+0.001}_{-0.01}$ & $0.034^{+0.001}_{-0.002}$ & $0.037^{+0.001}_{-0.01}$& $0.002^{+0.003}_{-0.002}$& - &$0.022^{+0.009}_{-0.009}$\\ \\ {\sc nthcomp} & kT$_{e}$ (keV)$^d$&$1.21^{+0.04}_{-0.01}$& $1.16^{+0.01}_{-0.01}$ & $1.18^{+0.01}_{-0.03}$ &$1.17^{+0.02}_{-0.04}$& $1.20^{+0.01}_{-0.04}$& $1.20^{+0.02}_{-0.05}$ & $1.13^{+0.01}_{-0.01}$\\ &$\Gamma$$^e$&$1.0^{+0.2}_{-*}$& $1.0^{+0.2}_{-*}$ & $1.0^{+0.1}_{-*}$ & $1.0^{+0.2}_{-*}$& $1.44^{+0.1}_{-0.2}$& $1.40^{+0.3}_{-*}$ & $1.4^{+0.1}_{-*}$\\ &kT$_{0}$$^f$& = kT$_{bb}$& = kT$_{bb}$ & = kT$_{bb}$ & = kT$_{bb}$& = kT$_{bb}$& $1.2^{+0.6}_{-0.4}$ & = kT$_{bb}$\\ \\ {\sc diskline} & Energy (keV)$^g$& - & -&6.63$_{-0.03}^{+0.06}$ & 6.60$_{-0.03}^{+0.03}$&- & -& -\\ & Betor10 $^h$ & - & - &-2.37$_{-0.1}^{+0.09}$ & -2.43$_{-0.1}^{+0.2}$& -& -& -\\ & R$_{in}$ (R$_g$)$^i$ & -& -&16$_{-10}^{+7}$ & 6$_{-*}^{+2}$ & -& -& -\\ & Inclination (degree)$^l$ & -& - &90$_{-37}^{*}$ & $<21$& -& -& -\\ & Norm.$^m$ & -& - &1.1$_{-0.3}^{+0.4}$$\times10^{-2}$ & 6.91$_{-0.2}^{+0.09}$$\times10^{-3}$& -& -& -\\ \\ {\sc highecut} &cutoffE (keV)$^n$& -&-&-&-& \multicolumn{3}{c}{0.1 (frozen)}\\ &foldfE (keV)$^o$&- &- &-&-& \multicolumn{3}{c}{2.7$\times$kT$_{e}$ (frozen)}\\ \\ {\sc rdblur} & Betor10 $^h$& - & -& -& -& -2.82$_{-0.09}^{+0.1}$& -2.82$_{-0.4}^{+0.3}$&-2.57$_{-0.1}^{+0.1}$\\ & R$_{in}$ (R$_g$)$^i$ & - & -& -& -& 7.0$_{-1}^{+2}$& 10$_{-4}^{+5}$&15$_{-6}^{+4}$\\ & Inclination (degree)$^l$& - & -& -& -&66$_{-3}^{+6}$& 52$_{-5}^{+8}$&$>60$\\ \\ {\sc reflionx} & Fe/solar$^p$ & - & -& -& -& 3.0$_{-1.3}^{+*}$& 3.0$_{-0.5}^{+*}$&3.0$_{-1.0}^{+*}$\\ & $\xi$ (erg cm s$^{-1}$)$^q$& - & -& -& -& $<20$& $1060_{-86}^{+73}$&220$_{-10}^{+17}$\\ \hline &$\chi^2/dof$$^r$&904.01/836& 891.08/836&900.26/834& 885.94/834&923.56/835& 929.42/836&922.98/835\\ \end{tabular} \end{minipage}} \end{center} \begin{flushleft} $^1$ EPIC-pn spectrum corrected for pile-up removing the central, brightest column; $^a$ Column density; $^b$ Blackbody temperature and seed photon temperature of the {\sc nthcomp}; $^c$ Normalization of the {\sc bbody} component in units of $L_{39}/D_{10\,kpc}^2$, where $L_{39}$ is the luminosity in units of 10$^{39}$ erg s$^{-1}$ and $D_{10\,kpc}$ is the distance in units of 10 kpc; $^d$ Electron temperature of the corona; $^e$ Photon index; $^f$ Seed photon temperature (usually equal to kT$_{bb}$); $^g$ Energy of the relativistic line; $^h$ Power-law dependence of the emissivity; $^i$ Inner radius in terms of the gravitational radius R$_g$; $^l$ Inclination angle of the binary system; $^m$ Normalization of the model in {\sc xspec} units; $^n$ Low-energy cut-off of the reflection component; $^o$ High-energy cut-off of the reflection component, set to 2.7 times the electron temperature of the comptonisation model;
$^p$ Ratio of the iron to hydrogen abundance, relative to solar; $^q$ Ionisation parameter; $^r$ Reduced $\chi^2$ of the best fit including the absorption/emission features shown in Table~\ref{gauss_rdpha}. \\ $*$: the value pegged at its higher/lower limit; $^{**}$ Two alternative models (with and without the {\sc bbody} component) are shown for this spectrum. \end{flushleft} \end{table*} \begin{table} \footnotesize \begin{center} \caption{Best-fitting emission and absorption features, evaluated adopting Gaussian or {\sc xstar} models and introducing an absorption edge. Errors are given at the 90\% confidence level for each parameter.} \label{gauss_rdpha} \scalebox{0.8}{\begin{minipage}{9.0cm} \begin{tabular}{llll} Model & Component & \multicolumn{2}{c}{1 col. removed$^1$} \\ \\ & & RDPHA & RDCTI\\ \\ \hline \multicolumn{4}{c}{\sc emission feature}\\ \\ \textit{\sc Gaussian} & E$_{line}$ (keV)$^a$&$6.69_{-0.07}^{+0.07}$ & $6.33_{-0.09}^{+0.08}$\\ & $\sigma_v$ (keV)$^b$&$0.34_{-0.08}^{+0.1}$ & $0.24_{-0.08}^{+0.1}$\\ & Norm.$^c$ &$5.9^{+0.2}_{-0.2} \times10^{-3}$ & $1.2^{+0.1}_{-0.1} \times10^{-2}$\\ \hline \multicolumn{4}{c}{\sc absorption features}\\ \\ \textit{\sc Gaussian$^*$} & E$_{line}$ (keV)&$2.26_{-0.02}^{+0.02}$ & $2.22_{-0.08}^{+0.08}$ \\ & Norm. &$6.2^{+0.2}_{-0.2} \times10^{-3}$ & $1.6^{+0.2}_{-0.2} \times10^{-3}$\\ \\ \textit{\sc Gaussian} & E$_{line}$ (keV)&$6.70_{-0.02}^{+0.02}$ & $6.55_{-0.02}^{+0.02}$ \\ & Norm. &$1.9^{+0.4}_{-0.7} \times10^{-3}$ & $1.6^{+0.7}_{-0.4} \times10^{-3}$\\ \\ \textit{\sc Gaussian} & E$_{line}$ (keV)&$6.99_{-0.01}^{+0.01}$ & $6.83_{-0.02}^{+0.02}$\\ & Norm. &$2.6^{+0.4}_{-0.4} \times10^{-3}$ & $1.8^{+0.6}_{-0.5} \times10^{-3}$\\ \\ \textit{\sc Gaussian} & E$_{line}$ (keV)&$7.86_{-0.03}^{+0.04}$ & $7.69_{-0.04}^{+0.04}$\\ & Norm. &$1.1^{+0.3}_{-0.3} \times10^{-3}$ & $1.1^{+0.3}_{-0.3} \times10^{-3}$\\ \\ \textit{\sc Gaussian} & E$_{line}$ (keV)&$8.19_{-0.05}^{+0.05}$ &$7.99_{-0.03}^{+0.1}$\\ & Norm. &$9^{+3}_{-3} \times10^{-4}$ &$1.3^{+0.3}_{-0.3} \times10^{-3}$\\ \\ \textit{\sc edge} & E$_{edge}$ (keV)&$8.7_{-0.1}^{+0.2}$ & $8.38_{-0.08}^{+0.08}$\\ & $\tau$ &$0.13_{-0.05}^{+0.06}$ & $0.27_{-0.05}^{+0.05}$\\ \hline \hline \multirow{3}{*}{\sc xstar} & N$_H^{abs}$ (10$^{22}$ cm$^{-2}$)$^d$ & $60_{-30}^{+20}$&$4_{-1}^{+1}$ \\ & Log($\xi_{abs}$)$^e$& $4.2\pm0.1$ & $3.41\pm0.09$\\ & z$^{abs}$ (km s$^{-1}$)$^f$& $-330_{+500}^{-260}$ & $-2600_{+1300}^{-1800}$ \\ \hline \\ \end{tabular} \end{minipage}} \end{center} $^1$ EPIC-pn spectrum corrected for pile-up removing the central, brightest column; $^a$ Energy of the feature; $^b$ Line width in keV; $^c$ Normalization of the feature ({\sc xspec} units); $^d$ Column density of the warm absorber; $^e$ Ionisation parameter of the warm absorber; $^f$ Blueshift velocity of the warm absorber. $*$ This line can be associated with residuals of the calibration around the instrumental Au-M edge. \\ \end{table} \subsection{Broadband continuum and narrow absorption features} \label{absfeatures} In Table~\ref{table_continuum_gauss_rdpha} (columns $\#3$ and $\#4$), we show the best fit parameters obtained with the continuum model described in the previous section. The RDPHA and RDCTI corrections give similar continuum parameters: both spectra are well described by a {\sc bbody} with a temperature of $\sim 0.55$ keV, and the comptonisation component shows a marked roll-over within the \textit{XMM-Newton} bandpass. Indeed, the electron temperature (kT$_e$) is consistent with $\sim1.2$ keV, while the power-law photon index ($\Gamma$) lies at 1.0, although it is pegged at the lower limit.
Not surprisingly, several features (in absorption and emission) are clearly observed in the EPIC-pn spectra, and we initially modelled all of them with Gaussian components, following \citet{diaz12} and \citet{dai14}. For the absorption lines, we fixed the dispersion width ($\sigma_v$) at zero, as they are narrower than the energy resolution of the detector. The best-fitting parameters of the features are provided in Table~\ref{gauss_rdpha}, and they are all statistically significant for the corresponding spectrum. An absorption line at $\sim2.2-2.3$ keV is found in both spectra, which we identify as residuals of the calibration around the instrumental Au-M edge at 2.3 keV. We also highlight that the residuals associated with this feature are clearly stronger in the RDCTI data (more than 10$\sigma$) than in the RDPHA spectrum (less than 5$\sigma$), suggesting that the RDPHA calibrations provide a better correction at low energies. Other marginally statistically acceptable features may also be found in the RGS spectra, but they are not taken into account, as they are beyond the scope of this paper. In addition, an absorption edge at $\sim8-9$ keV (associated with highly ionised species of iron, Fe XXI - XXV) is observed, and it was therefore included in the fit. The absorption lines of the RDPHA corrected spectrum, ordered by energy as shown in Table~\ref{gauss_rdpha}, can be associated with Fe XXV $K_{\alpha}$ (6.70 keV), Fe XXVI $K_{\alpha}$ (6.99 keV), Fe XXV $K_{\beta}$ (7.86 keV) and Fe XXVI K$_\beta$ (8.19 keV), respectively. Notably, the centroid energies of these features are only marginally affected by the continuum and are compatible with zero shift, although the uncertainty on them is of the order of $\sim900$ km s$^{-1}$. In addition, we note marginal hints of the $K_{\alpha}$ lines of S XVI (2.64 keV), Ar XVIII (3.30 keV) and Ca XX (4.10 keV), but they are not statistically significant. On the other hand, the RDCTI corrected spectrum shows a number of features whose energies are not consistent with those found in the RDPHA corrected spectrum. Indeed, we detected absorption lines at 6.55 keV, 6.83 keV, 7.69 keV and 7.99 keV (see Table~\ref{gauss_rdpha}). In Figure~\ref{comparison_rdpha_rdcti}, we show the significant discrepancies between the centroid energies of the absorption lines in the RDCTI and RDPHA corrected spectra. Hence, those found in the RDCTI data could be either different line species or miscalibrations of the energy scale with one of the two corrections. We add that the line energies in the RDCTI spectrum do not appear to be consistent with known rest-frame absorption lines: we might claim that the highest-energy line (7.99 keV) can be associated with the same Fe XXVI K$_{\beta}$ line seen in the RDPHA corrected spectrum, but with a high redshift ($\sim 9500$ km s$^{-1}$). Adopting a similar argument for the lines at 6.55 keV and 6.83 keV, and associating them with Fe XXV and Fe XXVI, we would expect a redshift of $\sim6000$--$7000$ km s$^{-1}$. On the other hand, a similar approach can be applied to the line at 7.69 keV, which, if associated with Fe XXV $K_{\beta}$, would be redshifted by $\sim 4000$ km s$^{-1}$. Alternatively, the lines at 6.83 keV and 7.69 keV might be associated with blueshifted lines of Fe XXV and Fe XXVI with velocities of $\sim6000$ and $\sim30000$ km s$^{-1}$, respectively, but these are larger than the velocities commonly observed in dippers. However, these claims are only qualitative and might be misleading.
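For reference, the velocity shifts quoted above follow from the first-order Doppler relation $v \simeq c\,(E_0-E_{\rm obs})/E_0$, where $E_0$ is the rest-frame energy of the transition. For instance, associating the 7.99 keV line with Fe XXVI K$_{\beta}$ and assuming a rest-frame energy $E_0\simeq8.25$ keV (an assumed reference value; using the measured RDPHA centroid of 8.19 keV instead would give $\sim7300$ km s$^{-1}$), $$v \simeq 3\times10^{5}\ {\rm km\,s^{-1}}\times\frac{8.25-7.99}{8.25} \simeq 9.5\times10^{3}\ {\rm km\,s^{-1}}.$$ The other velocity estimates follow analogously from the corresponding rest energies.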
Hence, in order to better constrain the properties of the warm medium that is most likely responsible for the narrow absorption lines, we substitute the Gaussian models with an {\sc xstar} grid \citep{kallman01}. The selected {\sc xstar} grid depends on the column density of the warm absorber, its ionisation level, and its redshift/blueshift velocity. The dispersion velocity is not a variable parameter of the grid and is fixed to zero. In Table~\ref{gauss_rdpha}, we show that for the RDPHA spectrum the warm absorber column density is about an order of magnitude higher than that of the RDCTI spectrum ($60\times10^{22}$ cm$^{-2}$ vs $4\times10^{22}$ cm$^{-2}$). The ionisation is also clearly higher for the RDPHA spectrum ($\sim4.2$ vs $\sim3.4$). This latter result is consistent with the energies of the iron edges found in the two spectra. Indeed, the edge in the RDCTI corrected spectrum is at 8.4 keV, an energy that may be associated with a lower level of iron ionisation (Fe XXIII-XXIV) compared to the RDPHA corrected one ($\sim8.8$ keV, Fe XXV). In addition, we note that if the RDCTI edge is nonetheless associated with the Fe XXVI or XXV K-edge, we should invoke a large redshift ($>15000$ km s$^{-1}$). On the other hand, this redshift is not consistent with the blueshift of $2600$ km s$^{-1}$ found with the {\sc xstar} model (Table~\ref{gauss_rdpha}), unless we hypothesise that the edge is produced near the compact object and is affected by relativistic redshift. We finally mention that {\sc xstar} leaves stronger residuals around 6.5-7.2 keV in the RDCTI data, suggesting that, in this spectrum, either the lines are broad (as the dispersion velocity is fixed at zero) or {\sc xstar} cannot simultaneously model all the lines present in the spectrum well. For the RDPHA spectrum, the {\sc xstar} grid instead provides a blueshift of $\sim 300$ km s$^{-1}$, which better matches the blueshift of the absorption lines found in other \textit{XMM-Newton} and \textit{Chandra} observations of GX 13+1 (we discuss a direct comparison with the \textit{Chandra} data in Section~\ref{xmm-chandra}). \subsection{Broad emission line} \label{emissionline} The comparisons in the previous section can be further extended by investigating the properties of the broad iron emission line. We initially model it with a Gaussian component in which the $\sigma_v$ parameter is left free to vary, as the line is clearly broad; however, we limited its range to 0.7 keV in order to avoid unphysical values. \begin{figure} \center \hspace{-0.5cm}\includegraphics[height=8.5cm,width=6.5cm,angle=270]{diskline_zero.eps} \caption{Residuals of the best fit continuum model and absorption features, in the 2.0-10 keV energy range, for the RDPHA (\textit{black}) and RDCTI (\textit{red}) spectra. As expected, an emission line is clearly found at $\sim6-7$ keV and it displays a different shape for the two calibrations (see text).} \label{RDPHA_RDCTI_diskline_zero} \end{figure} In Figure~\ref{RDPHA_RDCTI_diskline_zero}, we show the residuals of the best fit continuum model and absorption features: a clear emission feature is seen, as expected, at $\sim6-7$ keV, and we note that its spectral shape differs between the two calibrations. This feature suggests that the RDCTI correction produces a marginally less physical energy for the iron line (6.3 keV), which is expected to lie between 6.4 and 7.0 keV (depending on the iron ionisation level), although the error bars make the feature consistent with 6.4 keV, i.e.
neutral Fe. We found that its width is $\sim0.3$ keV. On the other hand, the RDPHA corrections provide a line energy of 6.6 keV, which is well consistent with the energies found in \citet{diaz12} and \citet{dai14} and can be associated with Fe XXV. However, its width is larger (0.7 keV) than that of the RDCTI corrected spectrum. Its intensity is also about a factor of 4 stronger than in the RDCTI data. The discrepancy between the two spectra in the observed properties of the broad Gaussian profile could be due directly to the different calibrations of the energy scale, to differences in the underlying modeling of the continuum, or to a mismodeling of the line itself. \begin{figure*} \center \subfigure{\includegraphics[height=7.8cm,width=6.9cm,angle=270]{rdpha_diskline_eeuf.eps}} \subfigure{\includegraphics[height=7.8cm,width=6.9cm,angle=270]{rdcti_diskline_eeuf.eps}} \caption{Unfolded $Ef(E)$ EPIC-pn (\textit{black}) and RGS spectra (\textit{red} and \textit{green}), corrected with RDPHA (\textit{left}) and RDCTI (\textit{right}). The solid line represents the best fit model, the dashed \textit{blue} curve is the {\sc bbody} component, the dashed \textit{gray} curve is the {\sc nthcomp} component and the dashed \textit{purple} curve is the {\sc diskline} component. A number of absorption features is taken into account in the fit (see text). For display purposes, the RGS spectra have been rebinned at a minimum significance of 15$\sigma$.} \label{RDPHA_continuum+gauss-reflionx} \end{figure*} However, as the continuum parameters are insensitive to the detailed calibration of the energy scale, and as the width of the line also suggests relativistic smearing, we substitute the Gaussian model with a {\sc diskline} model \citep{fabian89} in order to describe a relativistic reflection line. This model depends on six parameters: the energy of the line, the inner and outer disc radii ($R_{in}$ and $R_{out}$) in units of gravitational radii, the inclination angle of the system, the power-law index (\textit{Betor10}) of the radial dependence of the emissivity, and finally the normalization. We do not allow the line energy to go outside the range 6.4-7.0 keV, which represents the lower and upper energy limits of the K$_{\alpha}$ iron emission lines over all possible ionisation states. In Fig.~\ref{RDPHA_continuum+gauss-reflionx}, we show the unfolded spectra corresponding to the best fit models. Although the shape of the lines appears different, in general the continuum parameters are consistent within the errors with those inferred adopting the simple Gaussian line (see Table~\ref{table_continuum_gauss_rdpha}). The emission line energy in both the RDCTI and RDPHA data is consistent with 6.6-6.7 keV (Fe XXV), while the emissivity index is constrained between -2.4 and -2.6. In addition, although the inferred inner disc radius ($R_{in}$) is poorly constrained, our results point towards $R_{in}\sim$ 6 R$_{g}$ for both the RDPHA and RDCTI corrected spectra, while the outer radius has been fixed to $10^4$ R$_g$ as it was unconstrained. The inclination angle is instead markedly different for the two corrections: a large inclination angle ($>50$\textdegree) is found for the RDPHA corrected spectrum, while for the RDCTI corrected spectrum the inclination angle is consistent with a value lower than $\sim30$\textdegree.
The latter result conflicts with the detection of dips in the lightcurves of GX 13+1, which are usually observed in sources with high inclination angles ($>65$\textdegree). \begin{figure*} \center \subfigure{\includegraphics[height=7.8cm,width=6.9cm,angle=270]{rdpha_reflionx_eeuf.eps}} \subfigure{\includegraphics[height=7.8cm,width=6.9cm,angle=270]{rdcti_reflionx_eeuf.eps}} \caption{Unfolded $Ef(E)$ EPIC-pn (\textit{black}) and RGS spectra (\textit{red} and \textit{green}), corrected with RDPHA (\textit{left}) and RDCTI (\textit{right}). The solid line represents the best fit model, the dashed \textit{blue} curve is the {\sc bbody} component, the dashed \textit{gray} curve is the {\sc nthcomp} component and the dashed \textit{purple} curve is the {\sc reflionx} component. A number of absorption features is taken into account in the fit (see text). For display purposes, the RGS spectra have been rebinned at a minimum significance of 15$\sigma$.} \label{RDPHA_continuum+gauss} \end{figure*} \subsection{Reflection component} The {\sc diskline} model, however, accounts only for the shape of a single emission line and does not describe the whole reflection emission; this may lead, for example, to a wrong estimate of the inclination angle. Hence, we refined our previous results by substituting the {\sc diskline} model with a full broadband self-consistent reflection model, i.e. the {\sc reflionx} model \citep{ross05}, which takes into account the reflection continuum and a set of discrete features. The reflection component should be affected by Doppler and relativistic effects in the inner regions close to the compact object, which are not included in the model. Hence, we multiplied the reflection component by the relativistic kernel {\sc rdblur}, which depends on the inner disc radius, the emissivity index ($Betor10$), the inclination angle, and the outer disc radius. The latter has again been fixed to $10^4$ R$_g$, as it turned out to be unconstrained. In addition, we introduced a {\sc highecut} component, which allows us to physically constrain the high-energy end of the reflected emission. We fixed the low energy cut-off at 0.1 keV, while the folding energy cut-off was tied to the electron temperature of the comptonising component as $2.7\times$kT$_e$ since, for saturated comptonisation, a Wien bump is formed at $\sim 3$ times the electron temperature. Finally, we linked the photon index of the {\sc nthcomp} model to that of the reflection component. In Figure~\ref{RDPHA_continuum+gauss} and Table~\ref{table_continuum_gauss_rdpha} (columns $\#7$ and 8), we show the best fit parameters for the RDPHA and RDCTI corrected spectra. The addition of the broadband reflection component modifies the spectral description of the continuum, i.e. the parameters of the blackbody and comptonised components. Not surprisingly, the photon index of the {\sc nthcomp} model pegs at (or very close to) 1.4, since the {\sc reflionx} model is not calculated for $\Gamma$ below 1.4. However, in the previous section we found that the photon index of the {\sc nthcomp} would prefer to settle close to 1. Therefore, it is important to mention that the use of {\sc reflionx} might force the fit to converge towards broadband continuum parameters that are affected by this assumption. We note that in the case of the RDPHA corrected spectrum, the {\sc nthcomp} component dominates over the whole bandpass, with the blackbody emission stronger than the reflection one at energies below $\sim4$ keV.
However, for this best fit, the ionisation parameter of the {\sc reflionx} is dramatically low ($\sim15$ erg cm s$^{-1}$) and, moreover, the normalization of the soft component is mostly unconstrained. This may suggest that the properties of the spectrum do not allow us to describe the overall continuum with both the {\sc reflionx} and {\sc bbody} components. Therefore, in Table~\ref{table_continuum_gauss_rdpha}, we also show an alternative fit without the {\sc bbody} component, whose spectral parameters appear physically more plausible. Indeed, the ionisation parameter is now increased up to $\sim1100$ erg cm s$^{-1}$, which is more acceptable than the previous $\sim15$ erg cm s$^{-1}$. Hereafter, we consider this fit as the reference for the RDPHA data. On the other hand, the RDCTI corrected spectrum does not suffer from this degeneracy in the spectral parameters. The {\sc bbody} component is well constrained, with the reflection component dominating over the blackbody emission and predominant at energies below 1.5 keV. The ionisation parameter is consistent with $\sim200$ erg cm s$^{-1}$. The discrepancies in the reflection properties inferred with {\sc reflionx} between the RDPHA and RDCTI corrected spectra again suggest that the shape of the iron emission line is different in the two spectra, as found with a simple {\sc gaussian} or {\sc diskline} model. We further note that the column density of the RDCTI fit has risen to $3.4\times10^{22}$ cm$^{-2}$, while that of the RDPHA fit (without a blackbody) settles at $\sim3.7\times10^{22}$ cm$^{-2}$. In addition, the abundance of iron relative to the solar value is high ($\sim3$, which was set as an upper limit in order to avoid unphysical values). The latter result is found also in the RDPHA data, although the error bars on the parameters of both spectra are large. Finally, the {\sc rdblur} parameters that account for the relativistic smearing of the reflection component give similar best fit values for the RDPHA and RDCTI data. In particular, the inner disc radius is consistent with $\sim 6-15$ R$_g$ and the inclination angle is larger than $\sim50$\textdegree, which would be consistent with the high inclination expected from the presence of dips. The spectral parameters obtained from the fits of the broad component clearly suggest that the RDCTI and RDPHA corrections provide different spectral shapes. However, they do not allow us to unambiguously discriminate between the quality of the two corrections; we can only note that different spectral results are obtained in the two cases. \subsection{Comparing \textit{XMM-Newton} and \textit{Chandra} data} \label{xmm-chandra} We found that the RDPHA and RDCTI corrections provide similar broadband continua, and the study of the broad iron emission line suggests that the inclination angle is also compatible with that inferred from the existence of dips in the lightcurve. However, we note that the most important discrepancy between the two corrections (beyond the ionisation level of the reflection component) turns out to be the centroid energies of the absorption features, which significantly differ between the two corrective approaches. Notably, the RDPHA corrected spectrum shows absorption features whose energies are largely consistent with those found in the \textit{Chandra} observation presented in \citet{dai14}.
To better assess this issue, we fitted simultaneously the HEG and RDPHA spectra, adopting a continuum model consisting of an absorbed {\sc nthcomp} and a {\sc diskline}. We added Gaussian models, with dispersion velocity fixed at 0, in order to fit the lines at 6.70 keV, 6.99 keV and 8.2 keV (the line at $\sim 7.8$ keV is not present in the \textit{Chandra} spectrum), plus an absorption edge at $\sim8.8$ keV. These features are the focus of the spectral comparison, as they are common to the \textit{Chandra} and EPIC-pn RDPHA corrected spectra. Furthermore, in the HEG data we also considered the significant Ca XX K$_{\alpha}$, Si XIV K$_{\alpha}$ and S XVI K$_{\alpha}$ lines at 4.10 keV, $\sim 2.0$ keV and $\sim 2.6$ keV, respectively \citep{dai14}, which are not (or at most marginally) seen in the EPIC-pn data. We left the continuum free to vary between the spectra because the HEG data can show spectral variability and are also affected by pile-up; since this is not taken into account, the continuum spectral shape can be different from that of the RDPHA spectrum. This should not be an issue for the absorption features, as we checked that they are largely independent of the continuum model and are usually found also at different levels of luminosity \citep{ueda04}. On the other hand, the line energies are linked between the spectra during the fit, while their normalisations are left free. In Figure~\ref{RDPHA_HEG}, we show the best fit obtained with this model. The best fit energies of the common lines are 6.702 ($\pm0.008$) keV, 6.98 ($\pm0.01$) keV and 8.19 ($\pm0.09$) keV, and the edge is at 8.84 ($\pm0.08$) keV, all broadly consistent with the values found in Section~\ref{absfeatures}. No significant residuals remain, except for a HEG point at $\sim 7$ keV, which presents a residual at $\sim4\sigma$, and the EPIC-pn absorption lines at $\sim7.9$ keV and $\sim2.3$ keV. We suppose that the former is produced by either a broadening of the line in the \textit{Chandra} data or a blueshift of the line that cannot be detected in the EPIC-pn spectrum; on the other hand, the $\sim7.9$ keV line is the Fe XXV K$_{\beta}$, which is not observed in the HEG data, while the 2.3 keV line is likely a residual in the calibration of the EPIC instrument around the Au edge. This may suggest that the RDPHA calibrations can be further improved. In any case, as the other absorption features are fully consistent between the HEG and EPIC-pn data, we strongly suggest that the RDPHA calibrations should be preferred to the RDCTI ones. \section{Discussion} \begin{figure} \center \includegraphics[height=8.5cm,width=7.5cm,angle=270]{rdpha_vs_chandre_pl_eeuf.eps} \caption{\textit{Top panel}: Unfolded $Ef(E)$ \textit{Chandra}-HEG (\textit{black}) and EPIC-pn RDPHA corrected spectra (\textit{red}), adopting an absorbed {\sc edge$\cdot$(nthcomp+diskline+gauss)} model (see text). The positions of the absorption features common to the two spectra are broadly consistent within the errors when fitted simultaneously. \textit{Bottom panel}: Ratio of the data to the best fit model. For display purposes, the HEG spectrum has been rebinned at a minimum significance of 30$\sigma$.} \label{RDPHA_HEG} \end{figure} In this work we have analysed a single \textit{XMM-Newton} observation, taken in \textit{Timing} mode, of the LMXB dipping source GX 13+1.
The main goal of this paper is to study the impact of the different calibrations (RDPHA and RDCTI) of the energy scale in EPIC-pn Timing Mode on the spectral modeling of the LMXB dipping source GX 13+1. The RDPHA is an empirical correction of the energy scale that does not assume any specific energy dependence, unlike the RDCTI correction. We have shown that RDPHA and RDCTI corrected data provide different spectral results, and we tentatively try to understand which correction offers the most plausible and physically acceptable scenario. We have shown that, in order to avoid spurious effects on the spectral analysis due to pile-up, it is necessary to remove the three brightest, central columns of the EPIC-pn CCD. We found that the broadband continuum can be well described, for both types of spectral data, by the combination of a soft blackbody and an optically thick, cold comptonising component. This result is consistent with that found for other accreting NSs and also for \textit{XMM-Newton} and \textit{Chandra} spectra of GX 13+1 (e.g. \citealt{diaz12,dai14} and references therein). The soft component provides an inner temperature consistent with $\sim 0.6$ keV for the two corrections. From its normalization, we infer an emission radius of $\sim40$ km, which prevents us from relating this emission to the NS surface and more likely associates it with the accretion disc. The parameters of the comptonising corona, which may possibly be produced close to the NS, are instead consistent with a cold ($\sim1.2$ keV) electron population, where we assumed for both spectra that the seed photons are provided by the inner disc regions. Notably, several features are clearly observed in the RDPHA and RDCTI corrected spectra, and it has been suggested that they are produced by a warm absorber during both the dips and the persistent epochs of GX 13+1. The warm absorber may be created by outflows at the outer regions of the disc, where the thermal pressure is stronger than the gravitational pull (e.g. \citealt{diaz12,dai14}). The narrow absorption features are the best gauge to estimate the accuracy of the energy scale yielded by the two aforementioned calibration methods. We compared our absorption lines, modeled with Gaussians, to the absorption lines observed in the \textit{Chandra}/HEG data presented in \citet{dai14}. In that work, the authors found lines at 2.6234, 4.118, 6.706, 6.978 and 8.273 keV, associated with S XVI, Ca XX, Fe XXV, Fe XXVI K$_{\alpha}$ and Fe XXVI K$_{\beta}$, respectively, with possible blueshifts between $\sim200$ and $1000$ km s$^{-1}$. We detected the same lines (although the first two are not statistically significant) only in the RDPHA data, with the addition of a line at 7.82 keV, most likely associated with Fe XXV K$_{\beta}$. Unfortunately, the error bars inferred with a simple Gaussian model are large and prevent us from constraining the possible blueshift of most of these features. This can be done only for the Fe XXVI K$_{\alpha}$ line, which is shifted with respect to the rest-frame energy (6.9662 keV), suggesting a blueshift of $\sim 1500\pm 300$ km s$^{-1}$. On the other hand, the lines in the RDCTI data are systematically different and shifted from those found in the RDPHA corrected spectrum.
These absorption features, if associated with the lines observed in the \textit{Chandra} data, would require excessively large redshifts ($>5000$ km s$^{-1}$; see Section~\ref{absfeatures}), which are physically implausible if produced by a warm absorber located at the outer edge of the disc. We note, however, that these associations might be misleading. Therefore, we tentatively tried to better constrain the properties of the warm medium that produces these absorption lines by adopting an {\sc xstar} grid. It showed that, for the RDPHA spectrum, the column density of the ionised medium is $\sim6\times10^{23}$ cm$^{-2}$, with an ionisation level of Log($\xi$) $\sim 4.2$, most likely ejected by the system with a velocity of $\sim 300$ km s$^{-1}$. On the other hand, for the RDCTI data, the column density and ionisation are an order of magnitude lower, while the blueshift velocity is instead a factor of 8-9 higher than in the RDPHA data. However, the blueshift velocity in the RDPHA corrected spectrum is highly consistent with those found in previous works \citep{ueda04,madej13,diaz12,dai14}. In addition, the {\sc xstar} grid is not completely able to model the lines at 6-7 keV in the RDCTI spectrum, suggesting either a broadening or a general mis-modeling of the lines. This result further supports the conclusion that the RDPHA corrections are generally more reliable than the RDCTI ones. We then found that the RDCTI and RDPHA corrections provide similar results for the continuum and the broad emission line, if the latter is described by a model more complex than a simple Gaussian. In fact, we found residuals that suggest a relativistic broadening of the line. For this reason, we initially introduced the {\sc diskline} model, which shows that the inclination angle is small ($<30$\textdegree) for the RDCTI data and not consistent with the dip episodes of GX 13+1, which instead point towards a large inclination angle ($60-85$\textdegree). This finding is, however, not confirmed when we model the disc reflection with the self-consistent disc reflection code {\sc reflionx} \citep{ross05} modified by a relativistic kernel. Indeed, for both spectra, we found that the inclination angle can be larger than {50\textdegree}, confirming that a single Gaussian or {\sc diskline} model is too simple for the quality of the data. However, we highlight that the results obtained with {\sc reflionx} can be affected by the limitations of the model, as {\sc reflionx} is not defined for photon indices lower than 1.4, while we found that both types of spectra can be fitted by an {\sc nthcomp} with $\Gamma\sim1.0$. In addition, {\sc reflionx} does not take into account the self-ionisation of the accretion disc, introducing a possible source of uncertainty in the description of the continuum. With that in mind, for the fit of the RDPHA spectrum, we observed a degeneracy in the spectral parameters of the {\sc bbody} and {\sc reflionx} components; in other words, we could find two best fits that are statistically comparable: in the first one, the normalization of the soft component is poorly constrained and the ionisation parameter of the {\sc reflionx} is close to its lower limit ($\sim 15$ erg cm s$^{-1}$); in the other, the soft component can be removed from the fit and the ionisation parameter converges towards more physical values ($\sim 1000$ erg cm s$^{-1}$).
The spectral degeneracy warns that the data are possibly not adequate to constrain the properties of the blackbody emission when using {\sc reflionx}, because of the complexity of the adopted model. We highlight that in the first fit the reflection systematically needs to converge towards a strong iron emission line in low ionisation conditions. This is in contrast with the existence of H-like and He-like iron absorption features, which can exist only for ionisation higher than 100 erg cm s$^{-1}$ \citep{kallman04}. On the other hand, this condition is satisfied in the second best fit, where the ionisation parameter is higher than 1000 erg cm s$^{-1}$. The discrepancy between the latter ionisation value and that found with the {\sc xstar} grid more likely suggests that the density of the material is different in the warm absorber and in the disc. Instead, for the RDCTI data, the ionisation parameter of the reflection component is lower ($\sim200$ erg cm s$^{-1}$) than in the second best fit of the RDPHA data and only marginally supports the value found with the {\sc xstar} grid ($>1000$ erg cm s$^{-1}$). Such a low ionisation level of the reflection component may conflict with the energies of the absorption lines, which, according to {\sc xstar}, would be expected at energies higher than those found in the RDCTI corrected spectrum. However, as the ionisation parameter depends on the density, we note again that the absorption features are more likely produced in a warm medium; this should have a density ($10^{22}$ cm$^{-2}$) lower than that in the accretion disc, where the reflection is produced. However, as the ionisation parameter also depends on the distance, we cannot exclude the effect of the latter on the estimates of the ionisation. We finally conclude that the reflection component cannot easily be used to infer the quality of one correction in comparison to the other; it clearly suggests, however, that the spectra, i.e. the shapes of the broad emission line, obtained with the two calibrations are different. The quality of the two corrections can instead be discriminated by studying the narrow absorption lines. Indeed, adopting simple continuum models and Gaussian models for the narrow absorption features, the RDPHA corrected spectrum of GX 13+1 provides more physical spectral parameters; in particular, the absorption lines are much more consistent with those inferred by \textit{Chandra}. In addition, the residuals at the energy of the instrumental Au edge (2.2-2.3 keV) are smaller in the RDPHA data, favouring a better calibration of that energy range with the RDPHA corrections. For these reasons, although the EPIC-pn calibrations can be further improved, we propose that the RDPHA corrections should be generally preferred to the standard RDCTI (or \textit{epfast}) corrections, especially in the case of spectra with a large number of absorption features. Hence, supported also by our results, RDPHA will be the default calibration of the next SAS version (SAS v.14), superseding RDCTI (i.e. \textit{epfast}), which was the default from SAS v.9 to SAS v.13.5. Our conclusions will be further tested on other accreting sources in order to put the RDPHA corrections on a more solid basis. \section{Conclusions} The accuracy of the energy scale calibration in \textit{XMM-Newton} data taken in Timing mode is extremely important when observing bright sources, where rate-dependent effects have to be taken into account.
For this reason, two calibration approaches have been developed: the new RDPHA and the standard RDCTI (\textit{epfast}) corrections. The aim of this work was to analyse and test the two calibrations on one EPIC-pn \textit{XMM-Newton} observation, taken in Timing mode, of the persistent accreting NS GX 13+1. This source is a dipper that has shown periodic dips, and its spectra are characterised by a number of absorption features. It also shows an emission line associated with the Fe XXV or XXVI K$_{\alpha}$ transition. Hence GX 13+1 is a suitable source for assessing the quality of the two calibrations, thanks to its several absorption features, its emission line, and its simple continuum. \noindent We found that: \begin{itemize} \item the continuum can be well described, in both spectra, by a blackbody of $\sim0.5$ keV and a high energy comptonisation with an electron temperature of $\sim1.1-1.2$ keV and a photon index of $\sim 1$; \item however, the two calibrations provide different results for the spectral features, as the absorption lines observed in the RDCTI and RDPHA spectra differ significantly. \item We suggest that the lines in the RDCTI spectrum could be associated with known atomic transitions only by assuming implausibly high inflow and outflow velocities for this source. On the other hand, the absorption lines in the RDPHA spectrum are more consistent with those already found in previous \textit{Chandra} and \textit{XMM-Newton} observations, and we readily associated them with Fe XXV $K_{\alpha}$ (6.70 keV), Fe XXVI $K_{\alpha}$ (6.99 keV), Fe XXV $K_{\beta}$ (7.86 keV) and Fe XXVI K$_\beta$ (8.19 keV). \item We also observed marginal differences in the shape of the broad emission line, either when fit with a {\sc diskline} or with a {\sc reflionx} model. However, such a component does not allow us to clearly assess the validity of the two corrections, because of the poor quality of the constraints on the best fit parameters. \item Finally, although we note that improvements can be made, especially at the energy of the instrumental Au edge ($\sim2.3$ keV), our results suggest that the RDPHA calibrations are more physically reliable than the RDCTI ones and, for this reason, they should be implemented as the default as of SAS v.14 (and the associated data reduction pipeline). \end{itemize} \section*{Acknowledgements} A. R. gratefully acknowledges the Sardinia Regional Government for the financial support (P. O. R. Sardegna F.S.E. Operational Programme of the Autonomous Region of Sardinia, European Social Fund 2007-2013 - Axis IV Human Resources, Objective l.3, Line of Activity l.3.1). This work was partially supported by the Regione Autonoma della Sardegna through POR-FSE Sardegna 2007-2013, L.R. 7/2007, Progetti di Ricerca di Base e Orientata, Project N. CRP-60529, and by the INAF/PRIN 2012-6. The High-Energy Astrophysics Group of Palermo acknowledges support from the Fondo Finalizzato alla Ricerca (FFR) 2012/13, project N. 2012-ATE-0390, funded by the University of Palermo. \addcontentsline{toc}{section}{Bibliography} \bibliographystyle{mn2e}
\section{Introduction} The human genome encodes more than 20,000 protein-coding genes, a large fraction of which do not have annotated function to date \citep{galperin2010complete}. Predicting unknown member genes of biological pathways/complexes and determining the function of poorly characterized genes are crucial for understanding biological processes and human diseases. It has been observed that functionally associated genes tend to be gained and lost together during evolution \citep{pellegrini1999assigning,kensche2008practical}. Identifying shared evolutionary history (also known as co-evolution) of genes can help predict functions for unstudied genes, reveal alternative functions for genes considered to be well characterized, propose new members of biological pathways, and provide new insights into human diseases. The concept of ``phylogenetic profiling'' was first introduced by \cite{pellegrini1999assigning} to characterize the phylogenetic distributions of genes. One can predict a gene's function based on its phylogenetic similarity to genes with known functions. Let the binary phylogenetic profile matrix $\mathbf{X}_{N\times S}$ denote the presence/absence of $N$ genes across $S$ species. \citet{pellegrini1999assigning} proposed to measure the ``degree'' of co-evolution of a pair of genes $i$ and $j$ as the Hamming distance \citep{hamming1950error} between the $i$th and $j$th rows of $\mathbf{X}$. A toy example is shown in Figure \ref{fig:hamming}. Various methods have since been developed (see \citealt{kensche2008practical} for a review) and applied with success in predicting components of prokaryotic protein complexes \citep{pellegrini1999assigning}; phenotypic traits such as pili, thermophily, and respiratory tract tropism \citep{jim2004cross}; cilia \citep{li2004comparative}; mitochondrial complex I \citep{ogilvie2005molecular,pagliarini2008amitochondrial}; and small RNA pathways \citep{tabach2013identification}. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.70]{./hamming} \par\end{centering} \caption{A toy example of a phylogenetic profile matrix for $N=6$ genes (G1, ..., G6) and $S=8$ species (S1, ..., S8). Blue and white squares respectively denote presence or absence of genes in the corresponding genomes. G1 and G2 have Hamming distance $1$, while G4 and G6 have Hamming distance $0$. \label{fig:hamming}} \end{figure} Currently there are more than 200 eukaryotic species with their genomes completely sequenced and about 2,000 species with full genomes being sequenced (JGI GOLD\footnote{JGI Genome Online Database: https://gold.jgi.doe.gov/}). The growing availability of genome sequences from diverse species provides us with unprecedented opportunities to chart the evolutionary history of human genes. However, existing phylogenetic profiling methods still suffer from some limitations \citep{kensche2008practical}. First, most available methods perform only pairwise comparisons between an input query gene and a candidate, and are thus unable to discover subtle patterns that show up only after aligning multiple input query genes. Such methods also cannot handle cases where members of the query gene set exhibit different phylogenetic profiles. Second, most methods ignore errors in phylogenetic profiles, which are often caused by inaccuracies in genome assembly, gene annotation, and detection of distant homologs \citep{trachana2011orthology}.
Third, most methods (with the exceptions of \cite{barker2005predicting,vert2002tree,von2003string,zhou2006inferring}) assume independence across input species, ignoring their phylogenetic relationships, e.g., the tree structure of their evolutionary history. These methods are rather sensitive to the selection of organisms included in the analysis. Currently available tree-based methods, however, are computationally cumbersome and hardly scalable for analyzing large input sets, let alone entire genomes \citep{barker2005predicting,barker2006constrained}. To cope with the aforementioned limitations, \cite{li2014expansion} introduced the two-step procedure {\it CLustering by Inferred Models of Evolution} (denoted by CLIME 1.0). In its Partition step, CLIME 1.0 clusters the input gene set $\mathcal{G}$ into disjoint evolutionarily conserved modules (ECMs), simultaneously inferring the number of ECMs and each gene's ECM membership. In the Expansion step, CLIME 1.0 scores and ranks genes not in $\mathcal{G}$ according to a log-likelihood-ratio (LLR) statistic for their likelihood of being new members of an inferred ECM. \cite{li2014expansion} systematically applied CLIME 1.0 to over 1,000 human canonical complexes and pathways, resulting in the discovery of unanticipated co-evolving components and new members of important gene sets. We here provide a full statistical account of CLIME 1.0 and its computational strategies, evaluate CLIME 1.0's performance with extensive simulations, extend it to incorporate uncertainties in the phylogenetic tree structure, and compare CLIME 1.0 with existing methods such as BayesTraits. Finally, we apply CLIME 1.0 to gene sets in OMIM (Online Mendelian Inheritance in Man) to reveal new insights into human genetic disorders. Compared with existing methods, by incorporating a coherent statistical model, CLIME 1.0 (1) takes proper account of the dependency between species; (2) automatically learns the number of distinct evolutionary modules in the input gene set $\mathcal{G}$; (3) leverages information from the entire input gene set to more reliably predict new genes that have arisen with a shared pattern of evolutionary gains and losses; and (4) uses the LLR statistic as a principled measure of co-evolution, in contrast to naive metrics (e.g., Hamming distance, Pearson correlation). Complementary to the original CLIME 1.0, we further provide an extended version, named CLIME 1.1, which inherits the Bayesian hidden Markov tree model from CLIME 1.0 but further accounts for the uncertainty of the input phylogenetic tree structure by incorporating a prior on the evolutionary tree. Instead of a single, fixed tree as in CLIME 1.0, CLIME 1.1 takes an empirical distribution of tree structures, in addition to the phylogenetic profiles of a given gene set, as input; it infers the posterior of the hidden evolutionary histories, the hidden cluster (ECM) labels and parameters, as well as the posterior of the evolutionary tree structure through Gibbs sampling; it eventually outputs the ECMs of the input gene set in the Partition step, and then classifies novel genes into the inferred ECMs in the Expansion step. Rather than using only a point estimate of the tree, CLIME 1.1 improves on the original CLIME 1.0 by allowing for estimation error in the tree-building process as well as variability of phylogenetic trees among genes, thus alleviating the risk of misspecification of the tree structure.
In practice, popular tree-building methods and software packages such as PhyML \citep{guindon2010new} and MrBayes \citep{ronquist2003mrbayes} characterize the uncertainty in the estimation with bootstrap or posterior tree samples. CLIME 1.1 can readily utilize such output samples as an empirical approximation of the tree prior distribution. We also compare CLIME 1.1 with CLIME 1.0 and other benchmark methods in extensive simulations and real data to showcase its features and strengths. We find that CLIME 1.1 is more robust and accurate when there is high uncertainty in tree estimation or gene-wise variability in the evolutionary tree structures. The rest of this article is organized as follows. In Section \ref{sec:model}, we introduce the tree-structured hidden Markov model (HMM) for genes' stochastic gain/loss events on a given phylogenetic tree, and the Dirichlet process mixture (DPM) model for clustering genes into modules with shared history. The Partition step of CLIME 1.0, which implements the Gibbs sampler to sample from the posterior distribution of the DPM model, is described in Section \ref{sec:partition}. The Expansion step is introduced in Section \ref{sub:expansion}. In Section \ref{sub:gain_null_est}, we briefly introduce the pre-processing of CLIME 1.0. The extended model and inference procedure of CLIME 1.1 are described in Section \ref{sec:clime+}. Simulation studies that compare CLIME 1.0 and CLIME 1.1 with hierarchical clustering are presented in Section \ref{sec:simstudy}. In Section \ref{sec:realdata}, we apply CLIME 1.0 and 1.1 to real data, and use leave-one-out cross-validation to compare the performance of CLIME 1.0 with hierarchical clustering on gene sets from the GO (Gene Ontology) and KEGG (Kyoto Encyclopedia of Genes and Genomes) databases. We conclude this paper with a discussion in Section \ref{sec:discussion}. \section{Bayesian mixture of HMM on a phylogenetic tree}\label{sec:model} \subsection{Notation} Let $\mathcal{G}$ denote the input gene set with $n$ genes, and $N$ be the total number of genes in the reference genome. Let $\mathbf{X}_i$ be the phylogenetic profile of gene $i$, $i=1, \dots, N$, and let $\mathbf{X}$ specifically denote the phylogenetic profile matrix of the input gene set. For example, $\mathcal{G}$ can be the set of 44 subunit genes of human mitochondrial complex I, and $\mathbf{X}$ is their phylogenetic profile matrix; for the reference genome, we have $N=20,834$ human genes with their phylogenetic profile matrix denoted by $\mathbf{X}_{1:N}$. For notational simplicity, we let $1,\dots,n$ index the $n$ genes in $\mathcal{G}$ and let $n+1,\dots,N$ index the rest of the genome. The input phylogenetic tree has $S$ living species indexed by $1,\dots,S$, and $S-1$ ancestral extinct species indexed by $S+1,\dots,2S-1$. The $2S-1$ living and extinct species are connected by the $2S-2$ branches of the tree. For simplicity, we assume that the phylogenetic tree is binary, although the model and algorithm can be easily modified for non-binary input trees. For each gene $i=1,\dots,N$, its {\it phylogenetic profile} is defined as the observed vector $\boldsymbol{X}_{i}=\left(X_{i,1},\dots,X_{i,S}\right)$, with $X_{i,j}=1$ or $0$ denoting the presence or absence of gene $i$ in each of the $S$ extant species. Let $\boldsymbol{H}_{i}=\left(H_{i,1},\dots,H_{i,2S-1}\right)$ denote gene $i$'s ancestral (unobserved) and extant presence/absence states in the $2S-1$ species. We call a cluster of genes with shared evolutionary history an evolutionarily conserved module (ECM).
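To make the notation introduced so far concrete, the following minimal Python sketch (with hypothetical values; it is not part of the CLIME software) encodes a small binary tree and a profile matrix $\mathbf{X}$, and computes the naive pairwise Hamming-distance measure of co-evolution mentioned in the Introduction.

\begin{verbatim}
import numpy as np

# Hypothetical encoding (0-based indices): leaves 0..S-1 are the living
# species, nodes S..2S-2 are ancestral, and parent[s] plays the role of
# sigma(s), the direct ancestor of s; the root (node 2S-2) has parent -1.
S = 4
parent = np.array([4, 4, 5, 5, 6, 6, -1])   # balanced tree, 2S-1 = 7 nodes

# A toy profile matrix X (n genes x S species); the rows are made up.
X = np.array([[1, 1, 1, 0],
              [1, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 0, 1, 0]])

# Naive pairwise co-evolution measure (Pellegrini et al., 1999):
hamming = (X[:, None, :] != X[None, :, :]).sum(axis=2)
print(hamming[0, 1], hamming[2, 3])  # 1 and 0, mimicking Figure 1's G1/G2
                                     # and G4/G6 pairs
\end{verbatim}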
Let $\boldsymbol{I}=\left(I_{1},\dots,I_{n}\right)$ denote the ECM assignment indicators of the genes, where $I_{i}=k$ indicates that gene $i$ is assigned to ECM $k$. We assume that each gene can only be ``gained'' once throughout the entire evolutionary history, which happens at branch $\lambda_{i}$, $i=1,\dots,N$. Let $\boldsymbol{\lambda}=\left(\lambda_{1},\dots,\lambda_{N}\right)$ denote the gain nodes of the $N$ genes, where $\lambda_{i}=s$ indicates that gene $i$ was gained at tree node $s$. With the available data, we can estimate $\boldsymbol{\lambda}$ in the pre-processing stage, as described in Section \ref{sub:gain_null_est}, with very small estimation error. We thus assume that $\boldsymbol{\lambda}$ is a known parameter throughout the main algorithm. \subsection{Tree-structured HMM for phylogenetic profiles}\label{sub:ecm_mixture_model} We introduce here a tree-structured HMM to model the presence/absence histories and phylogenetic profiles of genes. For each gene $i$, its complete evolutionary history $\boldsymbol{H}_{i}=\left(H_{i,1},\dots,H_{i,2S-1}\right)$ is only partially observed at the bottom level; i.e., the phylogenetic profile vector $\boldsymbol{X}_{i}=\left(X_{i,1},\dots,X_{i,S}\right)$ is the observation of the presence/absence states of only the living species, $H_{i,1},\dots,H_{i,S}$. Due to sequencing and genome annotation errors, there are also observation errors on the presence/absence of genes. In other words, $X_{i,1},\dots,X_{i,S}$ are noisy observations of $H_{i,1},\dots,H_{i,S}$. We assume that genes in ECM $k$ share the same set of branch-specific probabilities of gene loss for the $2S-2$ branches, denoted by $\boldsymbol{\theta}_{k}=\left(\theta_{k,1},\dots,\theta_{k,2S-2}\right)$. For genes in ECM $k$, the transition of presence/absence states from the direct ancestor of species $s$ to species $s$ is specified by the transition matrix $\mathbf{Q}_{k,s}$,\vspace{-10pt} \[ \mathbf{Q}_{k,s}=\begin{array}{c} \begin{array}{cc} 0 & \quad\;\;\:1\end{array}\\ \begin{array}{c} 0\\ 1 \end{array}\left[\begin{array}{cc} 1 & 0\\ \theta_{k,s} & 1-\theta_{k,s} \end{array}\right]. \end{array} \] Thus, for every evolutionary branch below the gain branch, there is a $\mathbf{Q}$ matrix. We assume that once a gene is lost, it cannot be re-gained, which is realistic for eukaryotic species. Therefore, the first row of $\mathbf{Q}_{k,s}$ indicates that the transition probability from absence to presence (re-gain) is $0$. The second row shows our parameterization: the transition probability from presence to absence (gene loss) is $\theta_{k,s}$, and from presence to presence is $1-\theta_{k,s}$. Let $\sigma\left(s\right)$ denote the direct ancestor of species $s$, and let the set $\mathcal{T}\left(s\right)$ contain all species in the sub-tree rooted at node $s$. Obviously, $H_{i,s}=0$ if species $s$ is not in $\mathcal{T}\left(\lambda_{i}\right)$. The likelihood function of the evolutionary history $\boldsymbol{H}_{i}$, conditional on gene $i$ being in ECM $k$, is \begin{eqnarray*} & & \text{Pr}\left(\boldsymbol{H}_{i}\mid\boldsymbol{\theta}_{k},I_{i}=k\right)\\ & = & \begin{cases} \prod_{s\in\mathcal{T}\left(\lambda_{i}\right)\backslash\lambda_{i}}\mathbf{Q}_{k,s}\left(H_{i,\sigma\left(s\right)},H_{i,s}\right), & \text{if }\,H_{i,s}=0\,\forall s\not\in\mathcal{T}\left(\lambda_{i}\right),\\ 0, & \text{otherwise}.
\end{cases} \end{eqnarray*} To account for errors in determining the presence/absence of a gene, we allow each component of the observed phylogenetic profile, $X_{i,s}$, to have an independent probability $q$ of being erroneous (i.e., different from the true state $H_{i,s}$). The error probability $q$ is low and assumed to be known. By default, we set $q=0.01$ based on our communication with biologists with expertise in genome sequencing and annotation. We note that estimating it in the MCMC procedure is straightforward, but a strong prior on $q$ is needed for its proper convergence and identifiability. For each gene $i$, the likelihood function of $\boldsymbol{X}_{i}$ given $\boldsymbol{H}_{i}$ is \begin{equation} \text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{H}_{i}\right)=\prod_{s=1}^{S}\text{Pr}\left(X_{i,s}\mid H_{i,s}\right)=\prod_{s=1}^{S}\left(1-q\right)^{\mathbb{I}\left\{ X_{i,s}=H_{i,s}\right\} }\left(q\right)^{\mathbb{I}\left\{ X_{i,s}\neq H_{i,s}\right\} },\label{eq:P(Xi|Hi)} \end{equation} where $\mathbb{I}\left\{ \cdot\right\} $ is the indicator function, equal to $1$ if the statement is true and $0$ otherwise. The complete likelihood for gene $i$ is \begin{eqnarray} & & \text{Pr}\left(\boldsymbol{X}_{i},\boldsymbol{H}_{i}\mid\boldsymbol{\theta},I_{i}\right)\nonumber \\ & = & \left[\prod_{s\in\mathcal{T}\left(\lambda_{i}\right)\backslash\lambda_{i}}\mathbf{Q}_{I_{i},s}\left(H_{i,\sigma\left(s\right)},H_{i,s}\right)\right]\left[\prod_{s=1}^{S}\left(1-q\right)^{\mathbb{I}\left\{ X_{i,s}=H_{i,s}\right\} }\left(q\right)^{\mathbb{I}\left\{ X_{i,s}\neq H_{i,s}\right\} }\right],\label{eq:model_i} \end{eqnarray} and the complete likelihood for all the genes is \begin{equation} \text{Pr}\left(\boldsymbol{X},\boldsymbol{H}\mid\boldsymbol{\theta},\boldsymbol{I}\right)=\prod_{i=1}^{n}\text{Pr}\left(\boldsymbol{X}_{i},\boldsymbol{H}_{i}\mid\boldsymbol{\theta},I_{i}\right). \end{equation} \subsection{Dirichlet process mixture of tree hidden Markov models} The number of ECMs, $K$, may be specified by the user to reflect prior knowledge about the data set. When such prior information is not available, we can estimate $K$ from the data by MCMC sampling with a Dirichlet process prior on $\boldsymbol{\theta}$ \citep{ferguson1973abayesian,neal2000markovchain}. For each gene $i\in\left\{ 1,\dots,n\right\} $, we let the prior distribution of $\boldsymbol{\theta}_{i}$ follow a Dirichlet process with concentration parameter $\alpha$ and base distribution $\mathcal{F}_{0}$, denoted by $\text{DP}\left(\mathcal{F}_{0},\alpha\right)$. This gives us the following Bayesian hierarchical model. For each gene $i=1,\dots,n$, \begin{equation} \begin{aligned}\boldsymbol{X}_{i}\mid\boldsymbol{H}_{i} & \;\:\sim\;\:P\left(\boldsymbol{X}_{i}\mid\boldsymbol{H}_{i}\right),\\ \boldsymbol{H}_{i}\mid\boldsymbol{\theta}_{i} & \;\:\sim\;\:P\left(\boldsymbol{H}_{i}\mid\boldsymbol{\theta}_{i}\right),\\ \boldsymbol{\theta}_{i}\mid\mathcal{F} & \;\:\sim\;\:\mathcal{F},\\ \mathcal{F} & \;\:\sim\;\:\text{DP}\left(\mathcal{F}_{0},\alpha\right),\\ \mathcal{F}_{0} & \;\:=\;\:\prod_{s=1}^{2S-2}\text{Beta}\left(a,b\right). \end{aligned} \label{eq:DP} \end{equation} The base distribution $\mathcal{F}_{0}$ is set as the product of a set of Beta distributions for the branch-specific gene loss probabilities.
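The generative side of this model is easy to state in code. The following is an illustrative sketch (the function names and tree encoding are our own, not part of CLIME): presence is propagated from the gain node down the tree with branch-specific loss probabilities, and the observed profile consists of the leaf states flipped independently with probability $q$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def preorder(children, root):
    """Yield the nodes of the subtree below root, parents before children."""
    stack = [root]
    while stack:
        s = stack.pop()
        yield s
        stack.extend(children.get(s, []))

def simulate_gene(children, S, gain, theta, q):
    """Simulate one gene's hidden history H and noisy profile X.
    children encodes the tree; gain is lambda_i; theta[s] is the loss
    probability on the branch leading to node s; q is the error rate."""
    H = np.zeros(2 * S - 1, dtype=int)
    H[gain] = 1                          # the gene is present at its gain node
    for s in preorder(children, gain):
        for c in children.get(s, []):
            if H[s] == 1:                # present parent: survive w.p. 1-theta
                H[c] = int(rng.random() > theta[c])
    X = H[:S].copy()                     # leaves are the living species
    flips = rng.random(S) < q            # independent observation errors
    X[flips] = 1 - X[flips]
    return H, X

# Example: balanced tree with S = 4 leaves, gene gained at the root (node 6).
children = {4: [0, 1], 5: [2, 3], 6: [4, 5]}
theta = np.full(7, 0.03)                 # 3% loss probability per branch
H, X = simulate_gene(children, S=4, gain=6, theta=theta, q=0.01)
\end{verbatim}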
We use the Chinese restaurant process representation \citep{aldous1985exchangeability,pitman1996somedevelopments} of the Dirichlet process and implement a Gibbs sampler \citep{gelfand1990sampling,liu2008montecarlo} to draw from the posterior distribution of the ECM assignments $\boldsymbol{I}=\left(I_{1},\dots,I_{n}\right)$. The Chinese restaurant process prior for cluster assignments is exchangeable \citep{aldous1985exchangeability}; therefore, the prior distribution for $\boldsymbol{I}$ is invariant to the ordering of the $n$ genes. More precisely, the mixture model in Eq (\ref{eq:DP}) can be formulated as follows: \begin{equation} \begin{aligned}\boldsymbol{X}_{i}\mid\boldsymbol{H}_{i} & \;\:\sim\;\:P\left(\boldsymbol{X}_{i}\mid\boldsymbol{H}_{i}\right),\quad i=1,2,\dots n,\\ \boldsymbol{H}_{i}\mid\boldsymbol{\theta}_{I_{i}} & \;\:\sim\;\:P\left(\boldsymbol{H}_{i}\mid\boldsymbol{\theta}_{I_{i}}\right),\quad i=1,2,\dots n,\\ \boldsymbol{\theta}_{k} & \;\:\sim\;\:\prod_{s=1}^{2S-2}\text{Beta}\left(a,b\right),\quad k=1,2,\dots\\ \text{Pr}\left(I_{i}=I_{j},\;j<i\mid I_{1},\dots,I_{i-1}\right) & \;\:=\;\:n_{i,j}/\left(i-1+\alpha\right),\quad i=1,2,\dots n,\\ \text{Pr}\left(I_{i}\neq I_{j},\;\forall j<i\mid I_{1},\dots,I_{i-1}\right) & \;\:=\;\:\alpha/\left(i-1+\alpha\right),\quad i=1,2,\dots n, \end{aligned} \label{eq:CRP} \end{equation} where $n_{i,j}=\sum_{l=1}^{i-1}\mathbb{I}\left\{ I_{l}=I_{j}\right\} $. \subsection{Dynamic programming for integrating out $\boldsymbol{H}$\label{sub:H_integration}} In Section \ref{sub:gibbs}, we will introduce the Gibbs sampler used to sample from the posterior distribution of $\boldsymbol{I}$. In the Gibbs sampler, we need to calculate the marginal probability of $\mathbf{X}_i$ given the HMM parameter $\boldsymbol{\theta}$, with gene $i$'s evolutionary history $\boldsymbol{H}_{i}$ integrated out. Suppose gene $i$ is in ECM $k$; then \[ \text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{\theta}_{k}\right)\,\,=\,\,\sum_{\boldsymbol{H}_{i}}\text{Pr}\left(\boldsymbol{X}_{i},\boldsymbol{H}_{i}\mid\boldsymbol{\theta}_{k}\right). \] We use the following tree version of the backward procedure to calculate this marginal probability. For gene $i$, define $\boldsymbol{X}_{i}^{s}$ as its phylogenetic profile in the sub-tree rooted at species $s$ (obviously, $\boldsymbol{X}_{i}^{2S-1}=\boldsymbol{X}_{i}$). We calculate the marginal probability by recursively computing the factors $\beta_{i,s}\left(h\right)$, defined as \[ \beta_{i,s}\left(h\right)\,\,\equiv\,\,\text{Pr}\left(\boldsymbol{X}_{i}^{s}\mid\boldsymbol{\theta}_{k},H_{i,s}=h\right). \] For a living species $s$, which is a leaf of the tree, \[ \beta_{i,s}\left(h\right)\,\,=\,\,\text{Pr}\left(X_{i}^{s}\mid\boldsymbol{\theta}_{k},H_{i,s}=h\right)\,\,=\,\,\left(1-q\right)^{\mathbb{I}\left\{ X_{i}^{s}=h\right\} }\left(q\right)^{\mathbb{I}\left\{ X_{i}^{s}\neq h\right\} }. \] Let $\delta_{1}\left(s\right)$ and $\delta_{2}\left(s\right)$ denote the two child species of $s$.
For an internal tree species $s$, we can factorize $\beta_{i,s}\left(h\right)$ as \begin{eqnarray*} \beta_{i,s}\left(h\right) & = & \sum_{h_{1},h_{2}\in\left\{ 0,1\right\} }\text{Pr}\left(\boldsymbol{X}_{i}^{s},H_{i,\delta_{1}\left(s\right)}=h_{1},H_{i,\delta_{2}\left(s\right)}=h_{2}\mid\boldsymbol{\theta}_{k},H_{i,s}=h\right)\\ & = & \sum_{h_{1},h_{2}\in\left\{ 0,1\right\} }\text{Pr}\left(\boldsymbol{X}_{i}^{\delta_{1}\left(s\right)}\mid\boldsymbol{\theta}_{k},H_{i,\delta_{1}\left(s\right)}=h_{1}\right)\cdot\text{Pr}\left(H_{i,\delta_{1}\left(s\right)}=h_{1}\mid\boldsymbol{\theta}_{k},H_{i,s}=h\right)\\ & & \cdot\text{Pr}\left(\boldsymbol{X}_{i}^{\delta_{2}\left(s\right)}\mid\boldsymbol{\theta}_{k},H_{i,\delta_{2}\left(s\right)}=h_{2}\right)\cdot\text{Pr}\left(H_{i,\delta_{2}\left(s\right)}=h_{2}\mid\boldsymbol{\theta}_{k},H_{i,s}=h\right)\\ & = & \left[\sum_{h_{1}\in\left\{ 0,1\right\} }\beta_{i,\delta_{1}\left(s\right)}\left(h_{1}\right)\mathbf{Q}_{k,\delta_{1}\left(s\right)}\left(h,h_{1}\right)\right]\left[\sum_{h_{2}\in\left\{ 0,1\right\} }\beta_{i,\delta_{2}\left(s\right)}\left(h_{2}\right)\mathbf{Q}_{k,\delta_{2}\left(s\right)}\left(h,h_{2}\right)\right]. \end{eqnarray*} For each gene $i$, we calculate the $\beta$'s recursively bottom-up along the tree, up to the gain branch $\lambda_{i}$, resulting in the marginal probability \begin{eqnarray} \text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{\theta}_{k}\right) & = & \sum_{h\in\left\{ 0,1\right\} }\text{Pr}\left(\boldsymbol{X}_{i}^{\lambda_{i}}\mid\boldsymbol{\theta}_{k},H_{i,\lambda_{i}}=h\right)\text{Pr}\left(H_{i,\lambda_{i}}=h\mid\boldsymbol{\theta}_{k}\right)\nonumber \\ & = & 0+\text{Pr}\left(\boldsymbol{X}_{i}^{\lambda_{i}}\mid\boldsymbol{\theta}_{k},H_{i,\lambda_{i}}=1\right)\:\stackrel{def}{=} \:\beta_{i,\lambda_{i}}\left(1\right).\label{eq:P(X|theta)} \end{eqnarray} \subsection{Dynamic programming for integrating out $\boldsymbol{\theta}$\label{sub:theta_integration}} In each step of the Gibbs sampler, we pull each gene out of its current ECM and either re-assign it to an existing ECM or create a new singleton ECM for it, according to the calculated conditional probability $\text{Pr}\left(I_{i}\mid\boldsymbol{X}_{i},\boldsymbol{H}_{i},\boldsymbol{\theta}\right)$. For each ECM $k$, its parameter $\boldsymbol{\theta}_{k}=\{\theta_{k,s}\}_{s=1}^{2S-2}$ is a vector containing $2S-2$ loss probabilities. Our real data set has $S=139$, which makes each $\boldsymbol{\theta}_{k}$ a $276$-dimensional vector. The high dimensionality of $\boldsymbol{\theta}_{1},\dots,\boldsymbol{\theta}_{K}$ adds a heavy computational burden and dramatically slows down the convergence of the Gibbs sampler. To overcome this difficulty, we develop a collapsed Gibbs sampler \citep{liu1994collapsed}, applying the predictive updating technique \citep{chen1996predictive} to improve the MCMC sampling efficiency.
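Before detailing the collapsed sampler, we give an illustrative Python sketch of the backward (pruning) recursion of Section \ref{sub:H_integration}; the tree encoding matches the earlier sketches, and all names are our own rather than part of CLIME. Each gene costs $O(S)$ operations.

\begin{verbatim}
import numpy as np

def postorder(children, root):
    """Nodes of the subtree below root, children before parents."""
    out, stack = [], [root]
    while stack:
        s = stack.pop()
        out.append(s)
        stack.extend(children.get(s, []))
    return reversed(out)

def marginal_likelihood(X_i, children, S, gain, theta, q):
    """Pr(X_i | theta) by the backward recursion: beta[s, h] is the
    probability of the observed profile below node s given H_{i,s} = h."""
    beta = np.ones((2 * S - 1, 2))
    for s in postorder(children, gain):
        if s < S:                        # leaf: emission with error rate q
            for h in (0, 1):
                beta[s, h] = 1 - q if X_i[s] == h else q
        else:                            # internal node: combine the children
            for h in (0, 1):
                val = 1.0
                for c in children[s]:
                    # row h of Q_{k,c} = [[1, 0], [theta[c], 1 - theta[c]]]
                    row = (1.0, 0.0) if h == 0 else (theta[c], 1.0 - theta[c])
                    val *= row[0] * beta[c, 0] + row[1] * beta[c, 1]
                beta[s, h] = val
    return beta[gain, 1]                 # the gene is present at its gain node
\end{verbatim}

With the toy tree used earlier, \texttt{marginal\_likelihood(X[0], children, S=4, gain=6, theta=theta, q=0.01)} returns the quantity $\beta_{i,\lambda_{i}}\left(1\right)$ of Eq.~(\ref{eq:P(X|theta)}) for the first toy gene.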
In particular, we integrate $\boldsymbol{\theta}_{k}$ out of the conditional probability $\text{Pr}\left(I_{i}=k\mid\boldsymbol{X}_{i},\boldsymbol{H}_{i},\boldsymbol{\theta}_{k}\right)$, so that \begin{eqnarray*} \text{Pr}\left(I_{i}=k\mid\boldsymbol{X}_{i},\boldsymbol{H},\boldsymbol{I}_{-i}\right) & = & \int\text{Pr}\left(I_{i}=k\mid\boldsymbol{X}_{i},\boldsymbol{H},\boldsymbol{\theta}_{k}\right)\text{Pr}\left(\boldsymbol{\theta}_{k}\mid\boldsymbol{X}_{i},\boldsymbol{H},\boldsymbol{I}_{-i}\right)d\boldsymbol{\theta}_{k}\\ & \propto & \text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{H}_{-i}^{k},I_{i}=k\right)\text{Pr}\left(I_{i}=k\mid\boldsymbol{I}_{-i}\right), \end{eqnarray*} where $\boldsymbol{H}^{k}=\left\{ \boldsymbol{H}_{j}:\,I_{j}=k,\,j=1,\dots,n\right\} $ denotes the evolutionary histories of the genes in ECM $k$, and $\boldsymbol{H}_{-i}^{k}=\boldsymbol{H}^{k}\backslash\left\{ \boldsymbol{H}_{i}\right\} $. Here, $\text{Pr}\left(I_{i}=k\mid\boldsymbol{I}_{-i}\right)=\sum_{j\neq i}\mathbb{I}\left\{ I_{j}=k\right\} /\left(n-1+\alpha\right)$ is the Chinese restaurant prior on $\boldsymbol{I}$, and $\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{H}_{-i}^{k},I_{i}=k\right)$ is the marginal likelihood of $\boldsymbol{X}_{i}$, conditional on gene $i$ being in ECM $k$, with $\boldsymbol{\theta}_{k}$ integrated out. We calculate $\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{H}_{-i}^{k},I_{i}=k\right)$ as follows. Conditional on $\boldsymbol{H}_{-i}^{k}$, the distribution of $\theta_{k,s}$, $s=1,\dots,2S-2$, is simply a conjugate Beta posterior distribution, \begin{eqnarray*} \theta_{k,s}\mid\boldsymbol{H}_{-i}^{k} & \sim & \text{Beta}\left(a+\sum_{j\neq i,I_{j}=k}\mathbb{I}\left\{ H_{j,\sigma\left(s\right)}=1,H_{j,s}=0\right\} ,\right.\\ & & \left.b+\sum_{j\neq i,I_{j}=k}\mathbb{I}\left\{ H_{j,\sigma\left(s\right)}=1,H_{j,s}=1\right\} \right). \end{eqnarray*} Integrating out $\boldsymbol{\theta}_{k}$ with respect to this distribution, we obtain the likelihood of $\boldsymbol{X}_{i}$ conditional on $\boldsymbol{H}_{-i}^{k}$: \begin{eqnarray} \text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{H}_{-i}^{k},I_{i}=k\right) & = & \int\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{\theta}_{k},I_{i}=k\right)\text{Pr}\left(\boldsymbol{\theta}_{k}\mid\boldsymbol{H}_{-i}^{k}\right)d\boldsymbol{\theta}_{k}\nonumber \\ & = & \int\beta_{i,\lambda_{i}}\left(1\right)\text{Pr}\left(\boldsymbol{\theta}_{k}\mid\boldsymbol{H}_{-i}^{k}\right)d\boldsymbol{\theta}_{k}\:=\:\bar{\beta}_{i,\lambda_{i}}\left(1\right),\label{eq:P(Xi|H,Ii=00003Dk)} \end{eqnarray} where $\bar{\beta}$ is defined as \[ \bar{\beta}_{i,s}\left(h\right)\equiv\mathbb{E}\left[\beta_{i,s}\left(h\right)\mid\boldsymbol{H}_{-i}^{k}\right]=\mathbb{E}\left[\text{Pr}\left(\boldsymbol{X}_{i}^{s}\mid\boldsymbol{\theta}_{k},H_{i,s}=h\right)\mid\boldsymbol{H}_{-i}^{k}\right]. \] For a leaf species $s$, $\bar{\beta}_{i,s}\left(h\right)=\beta_{i,s}\left(h\right)$. For an internal tree species $s$, $\bar{\beta}_{i,s}\left(h\right)$ can be calculated recursively from the bottom of the tree to the top as \begin{eqnarray*} & & \bar{\beta}_{i,s}\left(h\right) = \mathbb{E}\left[\beta_{i,s}\left(h\right)\mid\boldsymbol{H}_{-i}^{k}\right]\\ & = & \left[\sum_{h_{1}=0,1 }\bar{\beta}_{i,\delta_{1}\left(s\right)}\left(h_{1}\right)\bar{\mathbf{Q}}_{k,\delta_{1}\left(s\right)}\left(h,h_{1}\right)\right]\left[\sum_{h_{2}=0,1}\bar{\beta}_{i,\delta_{2}\left(s\right)}\left(h_{2}\right)\bar{\mathbf{Q}}_{k,\delta_{2}\left(s\right)}\left(h,h_{2}\right)\right],
\end{eqnarray*} where $\bar{\mathbf{Q}}_{k,s}$ is the expectation of the transition probability matrix $\mathbf{Q}_{k,s}$ conditional on $\boldsymbol{H}_{-i}^{k}$, \begin{eqnarray} \bar{\mathbf{Q}}_{k,s} & = & \mathbb{E}\left[\mathbf{Q}_{k,s}\mid\boldsymbol{H}_{-i}^{k}\right]=\left[\begin{array}{cc} 1 & 0\\ \mathbb{E}\left[\theta_{k,s}\mid\boldsymbol{H}_{-i}^{k}\right] & 1-\mathbb{E}\left[\theta_{k,s}\mid\boldsymbol{H}_{-i}^{k}\right] \end{array}\right],\label{eq:Q_bar} \end{eqnarray} and $\mathbb{E}\left[\theta_{k,s}\mid\boldsymbol{H}_{-i}^{k}\right]$ is simply the expectation of a conjugate Beta posterior distribution: \[ \mathbb{E}\left[\theta_{k,s}\mid\boldsymbol{H}_{-i}^{k}\right]=\frac{a+\sum_{j:\,I_{j}=k,\,j\neq i}\mathbb{I}\left\{ H_{j,\sigma\left(s\right)}=1,H_{j,s}=0\right\} }{a+b+\sum_{j:\,I_{j}=k,\,j\neq i}\mathbb{I}\left\{ H_{j,\sigma\left(s\right)}=1\right\} }. \] In the Gibbs sampler, we also need to compute the marginal probability that gene $i$ is in its own singleton group, i.e., $\text{Pr}\left(\boldsymbol{X}_{i}\mid I_{i}\neq I_{j},\,\forall j\neq i\right)$. By integrating out $\boldsymbol{H}_{i}$ and $\boldsymbol{\theta}_{i}$, we have \begin{eqnarray} \text{Pr}\left(\boldsymbol{X}_{i}\mid I_{i}\neq I_{j},\,\forall j\neq i\right) & = & \int\sum_{\boldsymbol{H}_{i}}\text{Pr}\left(\boldsymbol{X}_{i},\boldsymbol{\theta}_{i},\boldsymbol{H}_{i}\mid I_{i}\neq I_{j},\,\forall j\neq i\right)d\boldsymbol{\theta}_{i}\nonumber \\ & = & \int\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{\theta}_{i},I_{i}\neq I_{j},\,\forall j\neq i\right)d\mathcal{F}_{0}\left(\boldsymbol{\theta}_{i}\right)\nonumber \\ & = & \int\beta_{i,\lambda_{i}}\left(1\right)\text{Pr}\left(\boldsymbol{\theta}_{i}\right)d\boldsymbol{\theta}_{i}.\label{eq:Pr(Xi|Ii=00003DK+1)} \end{eqnarray} Note that (\ref{eq:Pr(Xi|Ii=00003DK+1)}) is a special case of (\ref{eq:P(Xi|H,Ii=00003Dk)}) with $\boldsymbol{H}_{-i}^{k}=\emptyset$; thus it can be calculated in the same recursive way with \begin{eqnarray*} \bar{\mathbf{Q}}_{k,s} & = & \mathbb{E}\left[\mathbf{Q}_{k,s}\mid\boldsymbol{H}_{-i}^{k}=\emptyset\right]=\left[\begin{array}{cc} 1 & 0\\ a/\left(a+b\right) & b/\left(a+b\right) \end{array}\right]. \end{eqnarray*} \subsection{ECM strength measurement} After partitioning the input gene set $\mathcal{G}$ into ECMs, it is of great interest to determine which ECMs carry more informative and coherent evolutionary histories than others, since the ranking of ECMs leads to different priorities for further low-throughput experimental investigations. In our Bayesian model-based framework, the strength of ECM $k$, denoted by $\phi_{k}$, is defined as the logarithm of the Bayes factor between two models, normalized by the number of genes in that ECM. The first model assumes that these genes have co-evolved in the same ECM and share the same $\boldsymbol{\theta}$ parameter; the second model assumes that each gene has evolved independently in its own singleton ECM with a different $\boldsymbol{\theta}$.
Specifically, with a partitioning configuration $\boldsymbol{I}$, the strength for ECM $k$ is defined as \begin{equation} \phi_{k}\,=\,\left\{ \log\left[\frac{\int\left[\prod_{i:\,I_{i}=k}\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{\theta}\right)\right]\text{Pr}\left(\boldsymbol{\theta}\right)d\boldsymbol{\theta}}{\prod_{i:\,I_{i}=k}\int\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{\theta}\right)\text{Pr}\left(\boldsymbol{\theta}\right)d\boldsymbol{\theta}}\right]\right\} \,/\:\sum_{i=1}^{n}\mathbb{I}\left\{ I_{i}=k\right\} . \end{equation} This strength measurement reflects the level of homogeneity among the evolutionary histories of genes in the ECM. A larger $\phi_{k}$ indicates that genes in ECM $k$ share a more similar and informative evolutionary history, with more branches having high loss probabilities. \section{Partition step: MCMC sampling and point estimators\label{sec:partition}} \subsection{Choice of hyper-parameters} Several hyper-parameters need to be specified, including the concentration parameter $\alpha$ in the Dirichlet process prior and the hyper-parameters $a,b$ for the Beta prior of the $\theta$'s. The concentration parameter $\alpha$ controls the prior belief about the number of components in the mixture model, as a larger $\alpha$ makes it easier to create a new ECM in each step of the Gibbs sampling. We set the Dirichlet process concentration parameter to the widely used value $\alpha=1$. To test the method's robustness to $\alpha$, we applied the algorithm to simulated and real data with $\alpha=1$, $\alpha=\log\left(n\right)$, and $\alpha=\sqrt{n}$, respectively, and observed no significant changes in the posterior distribution of $K$. The reason is that the histories of ECMs are often so different from each other that the likelihood function dominates the prior in determining $K$. We set the hyper-parameters $a=0.03$, $b=0.97$ to make the prior have mean $0.03$, which reflects our belief that, on average, a gene is lost $3\%$ of the time when evolving from one species to another along a branch of the tree. The $3\%$ average loss probability was determined based on the genome-wide average loss rate observed in our data. \subsection{Forward-backward sampling for $\boldsymbol{H}$\label{sub:H_sampling}} In the Gibbs sampler, we apply a tree version of the forward-summation-backward-sampling method \citep[Sec. 2.4]{liu2008montecarlo} to sample/impute the hidden evolutionary history states in $\boldsymbol{H}$. Conditional on gene $i$ being in ECM $k$, we want to sample $\boldsymbol{H}_{i}$ from the conditional distribution $\text{Pr}\left(\boldsymbol{H}_{i}\mid\boldsymbol{X}_{i},\boldsymbol{\theta}_{k}\right)$. Note that, by the Markovian structure of the tree HMM, $\text{Pr}\left(\boldsymbol{H}_{i}\mid\boldsymbol{X}_{i},\boldsymbol{\theta}_{k}\right)$ can be written as \begin{eqnarray*} & & \text{Pr}\left(\boldsymbol{H}_{i}\mid\boldsymbol{X}_{i},\boldsymbol{\theta}_{k}\right)\\ & = & \begin{cases} \prod_{s\in\mathcal{T}\left(\lambda_{i}\right)\backslash\lambda_{i}}\text{Pr}\left(H_{i,s}\mid H_{i,\sigma\left(s\right)},\boldsymbol{X}_{i},\boldsymbol{\theta}_{k}\right) & \text{if }H_{i,s}=0\,\,\forall s\not\in\mathcal{T}\left(\lambda_{i}\right),\\ 0 & \text{otherwise}.
\end{cases} \end{eqnarray*} This suggests a sequential sampling procedure: draw $H_{i,s}$ for each species $s\in\mathcal{T}\left(\lambda_{i}\right)\backslash\lambda_{i}$ top-down along the tree from $\text{Pr}\left(H_{i,s}\mid H_{i,\sigma\left(s\right)},\boldsymbol{X}_{i},\boldsymbol{\theta}_{k}\right)$, conditional on the previously drawn state $H_{i,\sigma\left(s\right)}$ of its ancestral species $\sigma\left(s\right)$. We first use the backward procedure described in Section \ref{sub:H_integration} to calculate the $\beta_{i,s}$ for all species $s\in\mathcal{T}\left(\lambda_{i}\right)\backslash\lambda_{i}$ bottom-up along the tree; then we have \begin{eqnarray*} \text{Pr}\left(H_{i,s}\mid H_{i,\sigma\left(s\right)},\boldsymbol{X}_{i},\boldsymbol{\theta}_{k}\right) & \propto & \text{Pr}\left(H_{i,s},\boldsymbol{X}_{i}^{s}\mid H_{i,\sigma\left(s\right)},\boldsymbol{\theta}_{k}\right)\\ & = & \text{Pr}\left(\boldsymbol{X}_{i}^{s}\mid H_{i,s},\boldsymbol{\theta}_{k}\right)\cdot\text{Pr}\left(H_{i,s}\mid H_{i,\sigma\left(s\right)},\boldsymbol{\theta}_{k}\right)\\ & = & \beta_{i,s}\left(H_{i,s}\right)\cdot\mathbf{Q}_{k,s}\left(H_{i,\sigma\left(s\right)},H_{i,s}\right). \end{eqnarray*} Similar to Section \ref{sub:theta_integration}, we integrate out $\boldsymbol{\theta}_{k}$ to derive \begin{eqnarray} & & \text{Pr}\left(\boldsymbol{X}_{i},\boldsymbol{H}_{i}\mid\boldsymbol{H}_{-i},I_{i}=k\right) \label{eq:P(X,H|H-i,I)} \\ & {=} & \int\text{Pr}\left(\boldsymbol{X}_{i},\boldsymbol{H}_{i}\mid\boldsymbol{\theta}_{k}\right)\text{Pr}\left(\boldsymbol{\theta}_{k}\mid\boldsymbol{H}_{-i},I_{i}=k\right)d\boldsymbol{\theta}_{k} \nonumber \\ & {=} & \left[\prod_{s\in\mathcal{T}\left(\lambda_{i}\right)\backslash\lambda_{i}} \!\bar{\mathbf{Q}}_{k,s}\left(H_{i,\sigma\left(s\right)},H_{i,s}\right)\right]\left[\prod_{s=1}^{S}\left(1-q\right)^{\mathbb{I}\left\{ X_{i,s}=H_{i,s}\right\} }q^{\mathbb{I}\left\{ X_{i,s}\neq H_{i,s}\right\} }\right], \nonumber \end{eqnarray} where $\bar{\mathbf{Q}}_{k,s}$ was defined in Eq (\ref{eq:Q_bar}). Eq (\ref{eq:P(X,H|H-i,I)}) takes the same form as the complete likelihood in Eq (\ref{eq:model_i}), with the transition probability matrices $\mathbf{Q}_{k,s}$ replaced by $\bar{\mathbf{Q}}_{k,s}$. The sequential strategy for sampling $\boldsymbol{H}_{i}$ from the conditional distribution $\text{Pr}\left(\boldsymbol{H}_{i}\mid\boldsymbol{X}_{i},\boldsymbol{H}_{-i},I_{i}=k\right)$ is therefore to start with $H_{i,\lambda_{i}}=1$ and draw $H_{i,s}$ for each species $s\in\mathcal{T}\left(\lambda_{i}\right)\backslash\lambda_{i}$ top-down along the tree from the distribution $\text{Pr}\left(H_{i,s}\mid H_{i,\sigma\left(s\right)},\boldsymbol{X}_{i},\boldsymbol{H}_{-i},I_{i}=k\right)$, conditional on the sampled state $H_{i,\sigma\left(s\right)}$ of its ancestral species $\sigma\left(s\right)$, with the matrices $\mathbf{Q}_{k,s}$ replaced by $\bar{\mathbf{Q}}_{k,s}$.
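The top-down pass then reduces to a short routine. The following minimal sketch reuses the illustrative conventions of the earlier sketch, with \texttt{betas} precomputed bottom-up and \texttt{Qbar[c]} the $2\times2$ matrix of Eq (\ref{eq:Q_bar}) for the branch entering node \texttt{c}; the names are assumptions of this sketch, not our released software.
\begin{verbatim}
import random

def sample_history(gain, betas, children, Qbar):
    H = {gain: 1}                  # the gene is present at its gain branch
    stack = [gain]
    while stack:
        s = stack.pop()
        for c in children.get(s, ()):
            # Pr(H_c = h | H_parent, X) is proportional to
            # beta_c(h) * Qbar_c(H_parent, h)
            w0 = betas[c][0] * Qbar[c][H[s]][0]
            w1 = betas[c][1] * Qbar[c][H[s]][1]
            H[c] = 0 if random.random() < w0 / (w0 + w1) else 1
            stack.append(c)
    return H
\end{verbatim}
The uncollapsed version simply uses $\mathbf{Q}_{k,s}$ in place of $\bar{\mathbf{Q}}_{k,s}$.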
\subsection{Gibbs sampling implementation\label{sub:gibbs}} In each step of the Gibbs sampling, we pull out each gene from its current ECM and assign it to an existing ECM or create a new singleton ECM for it according to the conditional distribution $\text{Pr}\left(I_{i}\mid\boldsymbol{X}_{i},\boldsymbol{H},\boldsymbol{I}_{-i}\right)$, calculated as \begin{eqnarray} & & \text{Pr}\left(I_{i}=k\mid\boldsymbol{X}_{i},\boldsymbol{H},\boldsymbol{I}_{-i}\right)\label{eq:P(I=00003Dk|X,H,I)}\\ & \propto & \begin{cases} \frac{\sum_{j:\,j\neq i}\mathbb{I}\left\{ I_{j}=k\right\} }{n-1+\alpha}\cdot\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{H}_{-i},I_{i}=k\right), & \exists j\neq i,\text{ s.t. }I_{j}=k,\\ \frac{\alpha}{n-1+\alpha}\cdot\text{Pr}\left(\boldsymbol{X}_{i}\mid I_{i}\neq I_{j},\,\forall j\neq i\right), & \text{otherwise}, \end{cases}\nonumber \end{eqnarray} where $\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{H}_{-i},I_{i}=k\right)$ and $\text{Pr}\left(\boldsymbol{X}_{i}\mid I_{i}\neq I_{j},\,\forall j\neq i\right)$ are calculated in Eqs (\ref{eq:P(Xi|H,Ii=00003Dk)}) and (\ref{eq:Pr(Xi|Ii=00003DK+1)}), respectively. We implement the collapsed Gibbs sampler to calculate the posterior distribution of $\boldsymbol{I}$ and $\boldsymbol{H}$. In each Gibbs sampler iteration, we conduct the following two steps: \begin{enumerate} \item Draw $\boldsymbol{H}_{i}\sim\text{Pr}\left(\boldsymbol{H}_{i}\mid\boldsymbol{X}_{i},\boldsymbol{H}_{-i},\boldsymbol{I}\right),\;i=1,\dots,n$, by the procedure in Section \ref{sub:H_sampling}. \item Draw $I_{i}\sim\text{Pr}\left(I_{i}\mid\boldsymbol{X}_{i},\boldsymbol{H},\boldsymbol{I}_{-i}\right),\;i=1,\dots,n$, as calculated in Eq (\ref{eq:P(I=00003Dk|X,H,I)}). \end{enumerate} Under this Gibbs sampling scheme, genes with similar evolutionary histories will be clustered into the same ECM, and genes without any close neighbor will stay in their own singleton ECMs. This automatically estimates the number of ECMs, $K$. We implemented this Gibbs sampler in C++ and tested its computational efficiency. On a typical input gene set with $\sim 100$ genes across $139$ species, the Gibbs sampler takes about 30 minutes to finish $1000$ iterations on a standard Linux server using a single CPU. For input gene sets of size $5000$, the Gibbs sampler takes less than 24 hours to finish $1000$ iterations. \subsection{Point estimator for ECM assignments $\boldsymbol{I}$\label{sub:ml_calc}} While the posterior distribution of $\boldsymbol{I}$ is calculated by the Gibbs sampler, users may prefer a single optimal solution for $\boldsymbol{I}$, as it is easier to interpret and to use in further experimental investigations. To obtain a point estimator of $\boldsymbol{I}$, we calculate the posterior probability $\text{Pr}\left(\boldsymbol{I}\mid\boldsymbol{X}\right)$ at the end of each Gibbs sampling iteration. The {\it maximum a posteriori} (MAP) assignment, $\arg\max_{\boldsymbol{I}}\text{Pr}\left(\boldsymbol{I}\mid\boldsymbol{X}\right)$, is reported as the final estimate. Suppose we have $M$ MCMC samples, denoted by $\boldsymbol{I}^{\left(1\right)},\dots,\boldsymbol{I}^{\left(M\right)}$; then the MAP assignment can be approximated by \[ \hat{\boldsymbol{I}}\,=\,\underset{\boldsymbol{I}^{\left(m\right)}:\,m=1,\dots,M}{\arg\max}\:\text{Pr}\left(\boldsymbol{I}^{\left(m\right)}\mid\boldsymbol{X}\right).
\] We know that \[ \text{Pr}\left(\boldsymbol{I}\mid\boldsymbol{X}\right)\,\propto\,\text{Pr}\left(\boldsymbol{X}\mid\boldsymbol{I}\right)\text{Pr}\left(\boldsymbol{I}\right), \] where $\text{Pr}\left(\boldsymbol{I}\right)$ is the Chinese restaurant process prior (here with $\alpha=1$), \[ \text{Pr}\left(\boldsymbol{I}\right)=\frac{\prod_{k=1}^{K}\left(n_{k}-1\right)!}{n!},\quad\,\text{where }\,\,n_{k}=\sum_{i=1}^{n}\mathbb{I}\left\{ I_{i}=k\right\} , \] and $\text{Pr}\left(\boldsymbol{X}\mid\boldsymbol{I}\right)=\prod_{k=1}^{K}\text{Pr}\left(\boldsymbol{X}_{k}\mid\boldsymbol{I}\right)$, where $\boldsymbol{X}_{k}=\left\{ \boldsymbol{X}_{i}:\,I_{i}=k,i=1,\dots,n\right\} $ and $\text{Pr}\left(\boldsymbol{X}_{k}\mid\boldsymbol{I}\right)$ is the marginal probability for the phylogenetic profiles of genes in ECM $k$, i.e., \begin{eqnarray*} \text{Pr}\left(\boldsymbol{X}_{k}\mid\boldsymbol{I}\right) & = & \int\left[\prod_{i:I_{i}=k}\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{\theta}_{k}\right)\right]\text{Pr}\left(\boldsymbol{\theta}_{k}\right)d\boldsymbol{\theta}_{k}. \end{eqnarray*} This integral has no closed-form solution, but we can approximate this marginal likelihood by the method in \citet{chib1995marginal} using samples obtained by the Gibbs sampler. In particular, the following identity holds for any $\boldsymbol{\theta}_{k}^{*}=\left(\theta_{k,1}^{*},\dots,\theta_{k,2S-2}^{*}\right)$: \begin{equation} \log\text{Pr}\left(\boldsymbol{X}_{k}\mid\boldsymbol{I}\right) \! = \! \sum_{i:I_{i}=k}\log \text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{\theta}_{k}^{*}\right)+\log \text{Pr}\left(\boldsymbol{\theta}_{k}^{*}\right)-\log \text{Pr}\left(\boldsymbol{\theta}_{k}^{*}\mid\boldsymbol{X}_{k},\boldsymbol{I}\right). \label{eq:lnp(X_k|I)} \end{equation} In the equation above, the prior probability $\text{Pr}\left(\boldsymbol{\theta}_{k}^{*}\right)$ can be calculated directly, and the likelihood $\text{Pr}\left(\boldsymbol{X}_{i}\mid\boldsymbol{\theta}_{k}^{*}\right)$ can be calculated by dynamic programming with computational complexity $O\left(S\right)$. We approximate $\text{Pr}\left(\boldsymbol{\theta}_{k}^{*}\mid\boldsymbol{X}_{k},\boldsymbol{I}\right)$ by running additional Gibbs sampling. Let $\boldsymbol{H}_{k}=\left\{ \boldsymbol{H}_{i}:\,I_{i}=k,\,i=1,\dots,n\right\} $. We fix the ECM assignments at $\boldsymbol{I}$ and re-run the Gibbs sampler for $M$ iterations to draw samples $\left\{ \boldsymbol{H}_{k}^{\left(1\right)},\dots,\boldsymbol{H}_{k}^{\left(M\right)}\right\} $ from $\text{Pr}\left(\boldsymbol{H}_{k}\mid\boldsymbol{X}_{k},\boldsymbol{I}\right)$; then $\text{Pr}\left(\boldsymbol{\theta}_{k}^{*}\mid\boldsymbol{X}_{k},\boldsymbol{I}\right)$ can be approximated as \begin{equation} \text{Pr}\left(\boldsymbol{\theta}_{k}^{*}|\boldsymbol{X}_{k},\boldsymbol{I}\right)=\sum_{\boldsymbol{H}_{k}}\text{Pr}\left(\boldsymbol{\theta}_{k}^{*}|\boldsymbol{H}_{k}\right)\text{Pr}\left(\boldsymbol{H}_{k}|\boldsymbol{X}_{k},\boldsymbol{I}\right)\approx\frac{1}{M}\sum_{m=1}^{M}\text{Pr}\left(\boldsymbol{\theta}_{k}^{*} | \boldsymbol{H}_{k}^{\left(m\right)}\right),\label{eq:p(theta_k|X_k, I)} \end{equation} where \begin{eqnarray*} \text{Pr}\left(\boldsymbol{\theta}_{k}^{*}\mid\boldsymbol{H}_{k}^{\left(m\right)}\right) &{=}& \prod_{s=1}^{2S-2}\text{Be}\left(\theta_{k,s}^{*}\left\vert a+\sum_{i:\,I_{i}=k}\mathbb{I}\left\{ H_{i,\sigma\left(s\right)}^{\left(m\right)}=1,H_{i,s}^{\left(m\right)}=0\right\} ,\right.\right.\\ & & \quad\quad\quad\quad\quad\left.
b+\sum_{i:\,I_{i}=k}\mathbb{I}\left\{ H_{i,\sigma\left(s\right)}^{\left(m\right)}=1,H_{i,s}^{\left(m\right)}=1\right\} \right). \end{eqnarray*} Here $\text{Be}(\theta \mid a,b)$ denotes the Beta density function. Plugging Eq (\ref{eq:p(theta_k|X_k, I)}) into Eq (\ref{eq:lnp(X_k|I)}), we obtain the approximation for the marginal likelihood $\text{Pr}\left(\boldsymbol{X}_{k}\mid\boldsymbol{I}\right)$. Though the approximation is consistent for any $\boldsymbol{\theta}_{k}^{*}$, as pointed out by \citet{chib1995marginal}, the choice of $\boldsymbol{\theta}_{k}^{*}$ determines the efficiency of the approximation. The approximation is likely to be more precise with a $\boldsymbol{\theta}_{k}^{*}$ that is close to the true $\boldsymbol{\theta}_{k}$. A natural choice for $\boldsymbol{\theta}_{k}^{*}$ is the posterior mean estimator of $\boldsymbol{\theta}_{k}$ as calculated in Eq (\ref{eq:point_theta}). \subsection{Point estimator for loss probabilities $\boldsymbol{\theta}$} In the implementation of the Gibbs sampler, we integrate out the $\boldsymbol{\theta}$'s from the model and run the collapsed Gibbs sampler, which improves the MCMC sampling efficiency. After obtaining the final partitioning $\hat{\boldsymbol{I}}$, we want to calculate the point estimators of the $\boldsymbol{\theta}$'s for the $K$ ECMs defined in $\hat{\boldsymbol{I}}$, denoted by $\left\{ \hat{\boldsymbol{\theta}}_{1},\dots,\hat{\boldsymbol{\theta}}_{K}\right\} $. For each ECM $k$, the branches with estimated high loss probabilities $\hat{\theta}_{k,s}$ are the evolutionary signature of ECM $k$ and distinguish it from other ECMs. In Section \ref{sub:expansion}, we plug the estimated parameters $\left\{ \hat{\boldsymbol{\theta}}_{1},\dots,\hat{\boldsymbol{\theta}}_{K}\right\} $ into the likelihood ratio statistics to identify novel genes that are not in $\mathcal{G}$ but share a close history with any of the $K$ ECMs. The point estimator of $\theta_{k,s}$ is defined as the posterior mean of $\theta_{k,s}$ conditional on $\mathbf{X}$ and $\hat{\boldsymbol{I}}$, i.e., $\hat{\theta}_{k,s}\,=\,\mathbb{E}\left[\theta_{k,s}\mid\mathbf{X},\hat{\boldsymbol{I}}\right].$ To compute $\hat{\theta}_{k,s}$, we re-run the Gibbs sampler conditional on $\hat{\boldsymbol{I}}$ to draw $M=1000$ samples $\boldsymbol{H}_{k}^{\left(1\right)},\dots,\boldsymbol{H}_{k}^{\left(M\right)}$ from $\text{Pr}\left(\boldsymbol{H}_{k}\mid\mathbf{X},\hat{\boldsymbol{I}}\right)$, where $\boldsymbol{H}_{k}=\left\{ \boldsymbol{H}_{i}:\,I_{i}=k,i=1,\dots,n\right\} $. $\hat{\theta}_{k,s}$ is approximated by the following Rao-Blackwellized estimator \citep{liu1994covariance}: \begin{equation} \hat{\theta}_{k,s} \approx \frac{1}{M}\sum_{m=1}^{M}\mathbb{E}\left[\theta_{k,s}\mid\boldsymbol{H}_{k}^{\left(m\right)},\hat{\boldsymbol{I}}\right],\label{eq:point_theta} \end{equation} where \begin{equation} \mathbb{E}\left[\theta_{k,s}\mid\boldsymbol{H}_{k}^{\left(m\right)},\hat{\boldsymbol{I}}\right]=\frac{a+\sum_{i:\hat{I}_{i}=k}\mathbb{I}\left\{ H_{i,\sigma\left(s\right)}^{\left(m\right)}=1,H_{i,s}^{\left(m\right)}=0\right\} }{a+b+\sum_{i:\hat{I}_{i}=k}\mathbb{I}\left\{ H_{i,\sigma\left(s\right)}^{\left(m\right)}=1\right\} }.\label{eq:point_theta_pm} \end{equation} \section{Expansion step: identifying novel genes co-evolved with each ECM\label{sub:expansion}} In the Partition step, CLIME 1.0 clusters the input set $\mathcal{G}$ into disjoint evolutionarily conserved modules (ECMs), simultaneously inferring the number of ECMs and each gene's ECM membership.
The second step of CLIME 1.0, the Expansion step, identifies novel genes that are not in the input gene set $\mathcal{G}$ but share evolutionary history with an ECM $k$ identified in the Partition step. The Expansion step is essential to CLIME 1.0, as its main goal is to identify novel genes that co-evolved with a subset of $\mathcal{G}$. The underlying logic is that if an ECM $k$ consists of a large number of genes from $\mathcal{G}$, then genes outside $\mathcal{G}$ that share evolutionary history with ECM $k$ are likely functionally associated with $\mathcal{G}$. For each candidate gene $g$ and ECM $k$, $g=1,\dots,N$ and $k=1,\dots,K$, we calculate the log-likelihood ratio (LLR), \[ \text{LLR}_{g,k}\,=\,\log{\Pr}\left(\boldsymbol{X}_{g}\mid\hat{\boldsymbol{\theta}}_{k}\right)-\log\text{Pr}\left(\boldsymbol{X}_{g}\mid\hat{\boldsymbol{\theta}}_{0}\right), \] where the background null model $\hat{\boldsymbol{\theta}}_{0}$ is defined as the estimated genome-wide average loss probabilities over all $N=20,834$ human genes. The estimation of $\hat{\boldsymbol{\theta}}_{0}$ is straightforward and is described in Section \ref{sub:gain_null_est}. In the LLR, the first term $\log\text{Pr}(\boldsymbol{X}_{g}\mid\hat{\boldsymbol{\theta}}_{k})$ quantifies the likelihood that $\boldsymbol{X}_{g}$ was generated from the HMM of ECM $k$, and the second term $\log\text{Pr}(\boldsymbol{X}_{g}\mid\hat{\boldsymbol{\theta}}_{0})$ quantifies the likelihood that $\boldsymbol{X}_{g}$ was generated from the background null HMM. A high value of $\text{LLR}_{g,k}$ indicates that the HMM of ECM $k$ explains the phylogenetic profile $\boldsymbol{X}_{g}$ much better than the background null model, which suggests that gene $g$ is more likely to share the same evolutionary history with the genes in ECM $k$ than a randomly selected gene in the human genome. For each ECM, CLIME 1.0 scores all $N-n$ human genes, ranks them by LLR scores, and reports the list of genes with LLR $>$ 0 (denoted by ECM+). Compared to naïve metrics (e.g., Hamming distance or Pearson correlation between phylogenetic profiles), this LLR statistic measures co-evolution more appropriately and achieves substantially higher prediction sensitivity and specificity (see Section \ref{sub:leave-one-out}). \section{Pre-processing: estimation of gain branches $\boldsymbol{\lambda}$ and background null model $\boldsymbol{\theta}_{0}$\label{sub:gain_null_est}} In the pre-processing stage, CLIME 1.0 infers the gain branch $\lambda_{i}$ for each gene $i$ and estimates the background null model parameter $\hat{\boldsymbol{\theta}}_{0}$ for gene loss events from the phylogenetic profiles of all human genes in the input matrix. The null model is an ECM-independent HMM whose branch-specific loss probabilities are averaged over all genes in the human genome. We estimate $\boldsymbol{\theta}_{0}$ under the model that all $N=20,834$ human genes share the same loss probability vector $\boldsymbol{\theta}_{0}$, i.e., $\boldsymbol{\theta}_{1}=\boldsymbol{\theta}_{2}=\cdots=\boldsymbol{\theta}_{N}=\boldsymbol{\theta}_{0}$, and implement a Gibbs sampler to sample from the posterior distribution $\text{Pr}\left(\boldsymbol{\theta}_{0},\boldsymbol{\lambda}\mid\mathbf{X}_{1:N}\right)$. We start the Gibbs sampler from the initial state with $\boldsymbol{\theta}_{0}=\left(0.03,\dots,0.03\right)$ and $\boldsymbol{\lambda}=\left(2S-1,\dots,2S-1\right)$.
In each iteration of the Gibbs sampler, we conduct the following steps: \begin{enumerate} \item Draw $\lambda_{i}\sim\text{Pr}\left(\lambda_{i}\mid\boldsymbol{X}_{i},\boldsymbol{\theta}_{0}\right),\;i=1,\dots,N$. \item Draw $\boldsymbol{H}_{i}\sim\text{Pr}\left(\boldsymbol{H}_{i}\mid\boldsymbol{X}_{i},\boldsymbol{\lambda},\boldsymbol{\theta}_{0}\right),\;i=1,\dots,N$, by the forward-backward procedure. \item Draw $\boldsymbol{\theta}_{0}\sim\text{Pr}\left(\boldsymbol{\theta}_{0}\mid\mathbf{H}_{1:N},\boldsymbol{\lambda}\right)$. \end{enumerate} Both conditional distributions $\text{Pr}\left(\lambda_{i}\mid\boldsymbol{X}_{i},\boldsymbol{\theta}_{0}\right)$ and $\text{Pr}\left(\boldsymbol{\theta}_{0}\mid\mathbf{H}_{1:N},\boldsymbol{\lambda}\right)$ are straightforward to sample from. $\text{Pr}\left(\lambda_{i}\mid\boldsymbol{X}_{i},\boldsymbol{\theta}_{0}\right)$ is a discrete distribution, and for $s=1,\dots,2S-1$, \[ \text{Pr}\left(\lambda_{i}=s\mid\boldsymbol{X}_{i},\boldsymbol{\theta}_{0}\right)\,\propto\,\text{Pr}\left(\boldsymbol{X}_{i}\mid\lambda_{i}=s,\boldsymbol{\theta}_{0}\right)\text{Pr}\left(\lambda_{i}=s\right). \] We adopt a uniform prior $\text{Pr}\left(\lambda_{i}=s\right)=1/\left(2S-1\right)$ and calculate the likelihood $\text{Pr}\left(\boldsymbol{X}_{i}\mid\lambda_{i}=s,\boldsymbol{\theta}_{0}\right)$ with the dynamic programming outlined in Eq (\ref{eq:P(X|theta)}). $\text{Pr}\left(\boldsymbol{\theta}_{0}\mid\mathbf{H}_{1:N},\boldsymbol{\lambda}\right)$ is simply a product of Beta distributions, and each $\theta_{0,s}$, $s=1,\dots,2S-2$, can be drawn independently. Similar to Eq (\ref{eq:point_theta}), we define the point estimator of $\boldsymbol{\theta}_{0}$ as $\hat{\boldsymbol{\theta}}_{0}=\mathbb{E}\left[\boldsymbol{\theta}_{0}\mid\mathbf{X}_{1:N}\right]$ and approximate it with MCMC samples. Suppose we have $M$ MCMC samples of $\boldsymbol{\lambda}$, denoted by $\boldsymbol{\lambda}^{\left(1\right)},\dots,\boldsymbol{\lambda}^{\left(M\right)}$. For each gene $i=1,\dots,N$, we define $\hat{\lambda}_{i}$ as the {\it maximum a posteriori} (MAP) estimator approximated by the MCMC samples, \[ \hat{\lambda}_{i}\,\,=\,\,\underset{s}{\arg\max}\sum_{m=1}^{M}\mathbb{I}\left\{ \lambda_{i}^{\left(m\right)}=s\right\} . \] In both the Partition and the Expansion steps of CLIME 1.0, the gain branches $\boldsymbol{\lambda}=\left(\lambda_{1},\dots,\lambda_{N}\right)$ are treated as known and fixed. An alternative way of estimating the gain branch for each gene $i$ in $\mathcal{G}$ is to update $\lambda_{i}$ in the Gibbs sampler of the Partition step and calculate its posterior distribution. There are two reasons why we chose to estimate the gain branch for each gene in the Pre-processing step and keep it fixed in the later two steps. First, the gain branches can usually be estimated reliably with little uncertainty. For example, if a gene $i$ was truly gained at node $s$, then most likely we will observe its presence only in $\boldsymbol{X}_{i}^{s}$, which informs us that the gain event happened at node $s$. Second, by estimating the gain branches at the Pre-processing step, we reduce the computational complexity compared to a full model that updates $\boldsymbol{\lambda}$ at each MCMC iteration of the Partition step. \section{The extended model with uncertainty in the phylogenetic tree}\label{sec:clime+} \subsection{The extended model of CLIME 1.1} Here we introduce the model of CLIME 1.1, which extends CLIME 1.0 by incorporating the uncertainty in phylogenetic trees.
We keep the same notation as in the original CLIME 1.0. Conditioning on the tree structure $T$, we follow the same specification as in Eq (\ref{eq:CRP}). Additionally, we assume that the tree structure follows a prior $T\sim \mathcal{F}_T$, so that jointly we have: \begin{align*} \boldsymbol{X}_i|\boldsymbol{H}^T_i, T &\sim P(\boldsymbol{X}|\boldsymbol{H}^T_{i}), \quad i=1,2,\dots,n,\\ \boldsymbol{H}^T_{i}|\boldsymbol{\theta}_k^T, I_i=k, T & \sim P(\boldsymbol{H}^T_{i}|\boldsymbol{\theta}_{I_i}^T), \quad i=1,2,\dots,n,\\ \boldsymbol{\theta}_k^T &\sim \prod_{s=1}^{2S-2} \text{Beta}(a,b), \quad k=1,2,\dots, \\ I_i &\sim \text{CRP}(\alpha), \quad i=1,2,\dots,n,\\ T &\sim \mathcal{F}_T. \end{align*} Here, the superscript $T$ indicates dependency on the tree structure, which will be suppressed in the following derivations for simplicity. In practice, we utilize bootstrap samples or posterior draws of trees from the output of tree-construction software to approximate the prior distribution $\mathcal{F}_T$. That is, given $N_T$ sampled tree structures $\{T_1, \dots, T_{N_T}\}$, we assume that $\mathcal{F}_T = \frac{1}{N_T}\sum_{i=1}^{N_T}\delta_{T_i}$, where $\delta_{T_i}$ is the Dirac point mass at $T_i$. This distribution is derived from a probabilistic model of evolution and can well characterize the variability in the estimation of the evolutionary tree. \subsection{Posterior inference of CLIME 1.1 with Gibbs sampler}\label{CLIME+Gibbs} We implement a collapsed Gibbs sampler \citep{liu1994collapsed} to draw from the posterior distribution, which cycles through sampling the hidden evolutionary histories $\boldsymbol{H}$, the tree structure $T$, and the ECM labels $\boldsymbol{I}$. The high-dimensional parameter vector $\boldsymbol{\theta}$ is integrated out throughout the process, as in CLIME 1.0, to improve the sampling efficiency. \begin{enumerate} \item Sampling $[\boldsymbol{H}\mid \boldsymbol{X}, \boldsymbol{I},T]$: For each gene $i$, we sample its evolutionary history $\boldsymbol{H}_i$ from $\text{Pr}(\boldsymbol{H}_i | \boldsymbol{X}, \boldsymbol{H}_{-i}, T, \boldsymbol{I})$, which can be achieved by the procedure described in Section \ref{sub:H_sampling}, conditioning on the tree structure $T$. \item Sampling $[\boldsymbol{I}\mid \boldsymbol{X}, \boldsymbol{H},T]$: For each gene $i$, we sample its cluster label $I_i$ from $\text{Pr}(I_i|\boldsymbol{I}_{-i}, \boldsymbol{X}, \boldsymbol{H}_{-i}, T)$, which, conditioning on the tree structure $T$, can be calculated similarly as in Eq (\ref{eq:P(I=00003Dk|X,H,I)}). \item Sampling $[T \mid \boldsymbol{X}, \boldsymbol{I}]$: We sample $T$ based on the posterior $$ \text{Pr}(T|\boldsymbol{X}, \boldsymbol{I}) \propto \mathcal{F}_T(T) \text{Pr}(\boldsymbol{X}|T,\boldsymbol{I}). $$ Since the prior $\mathcal{F}_T$ is taken as the empirical distribution $\frac{1}{N_T}\sum_{i=1}^{N_T}\delta_{T_i}$, we sample $T=T_i$ with probability proportional to $\text{Pr}(\boldsymbol{X}|T_i,\boldsymbol{I})$, where $\text{Pr}(\boldsymbol{X}|T_i,\boldsymbol{I})$ can be approximated by the method of \citet{chib1995marginal} as in Eq (\ref{eq:lnp(X_k|I)}). Note that the conditional distribution $\text{Pr}(\boldsymbol{X}|T_i,\boldsymbol{I})$ will be used again in the Partition step for calculating $\arg\max_{\boldsymbol{I}}\text{Pr}(\boldsymbol{I}|\boldsymbol{X})$, and in the Expansion step for calculating the LLR of novel genes.
\end{enumerate} \subsection{Partition Step of CLIME 1.1} We are mainly interested in estimating the ECM clustering labels of all input genes. Similar to CLIME 1.0, we adopt the MAP estimator $\hat{\boldsymbol{I}}=\arg\max_{\boldsymbol{I}}\text{Pr}(\boldsymbol{I}|\boldsymbol{X})$, approximated by searching through all MCMC samples of $\boldsymbol{I}$, i.e., \[ \hat{\boldsymbol{I}}\,=\,\underset{\boldsymbol{I}^{\left(m\right)}:\,m=1,\dots,M}{\arg\max}\: \text{Pr} \left(\boldsymbol{I}^{\left(m\right)}\mid\boldsymbol{X}\right). \] Specifically, \begin{align*} \text{Pr}(\boldsymbol{I}|\boldsymbol{X}) \propto \int \text{Pr}(\boldsymbol{X},\boldsymbol{I}|T)\mathcal{F}_T(T) dT = \text{Pr}(\boldsymbol{I})\sum_{T_i} \frac{1}{N_T} \text{Pr}(\boldsymbol{X}|\boldsymbol{I},T_i), \end{align*} where the conditional distribution $\text{Pr}(\boldsymbol{X}|\boldsymbol{I},T_i)$ has been calculated in Step 3 of the Gibbs sampler in Section~\ref{CLIME+Gibbs}, and the prior $\text{Pr}(\boldsymbol{I})$ is assumed to be the Chinese restaurant process. \subsection{Expansion step of CLIME 1.1} Suppose gene $g$ has phylogenetic profile $\boldsymbol{X}_g$ ($g=1,\dots,N$). We calculate its LLR for all ECMs, $k=1,\dots, K$, similarly to CLIME 1.0, i.e., $$ \text{LLR}_{g,k} = \log \text{Pr}(\boldsymbol{X}_g|I_g=k, \boldsymbol{X}, \hat{\boldsymbol{I}}) - \log \text{Pr}(\boldsymbol{X}_g|I_g=0, \boldsymbol{X}, \hat{\boldsymbol{I}}), $$ where $I_g=0$ indicates the background null model. We calculate the predictive likelihood by integrating out $\boldsymbol{\theta}_k$ and $T$: \begin{align*} \text{Pr}(\boldsymbol{X}_g|I_g=k, \boldsymbol{X}, \hat{\boldsymbol{I}}) &=\int \text{Pr}(\boldsymbol{X}_g|I_g=k, \boldsymbol{\theta}_k, T)\text{Pr}(\boldsymbol{\theta}_k,T|\boldsymbol{X}, \hat{\boldsymbol{I}})dTd\boldsymbol{\theta}_k\\ &= \int \text{Pr}(\boldsymbol{X}_g|I_g=k, \boldsymbol{\theta}_k, T)\text{Pr}(\boldsymbol{\theta}_k|T, \hat{\boldsymbol{I}}, \boldsymbol{X}) \text{Pr}(T | \boldsymbol{X}, \hat{\boldsymbol{I}}) dT d\boldsymbol{\theta}_k \\ &\propto\int \text{Pr}(\boldsymbol{X}_g|I_g=k, \boldsymbol{\theta}_k, T)\text{Pr}(\boldsymbol{\theta}_k|T, \hat{\boldsymbol{I}}, \boldsymbol{X})\text{Pr}(\boldsymbol{X}|T, \hat{\boldsymbol{I}})\mathcal{F}_T(T) dTd\boldsymbol{\theta}_k. \end{align*} Note that $\mathcal{F}_T = \frac{1}{N_T}\sum_{i=1}^{N_T}\delta_{T_i}$, and \begin{align*} \text{Pr}(\boldsymbol{\theta}_k|T, \hat{\boldsymbol{I}}, \boldsymbol{X}) &= \int \text{Pr}(\boldsymbol{\theta}_k|T, \hat{\boldsymbol{I}}, \boldsymbol{H}) \text{Pr}(\boldsymbol{H}|\boldsymbol{X}, T, \hat{\boldsymbol{I}}) d\boldsymbol{H}, \end{align*} which can be approximated using the Gibbs sampling draws as \begin{align*} \text{Pr}(\boldsymbol{\theta}_k|T, \hat{\boldsymbol{I}}, \boldsymbol{X}) &\approx \frac{1}{M} \sum_{m=1}^M \text{Pr}(\boldsymbol{\theta}_k|T, \hat{\boldsymbol{I}}, \boldsymbol{H}^{(m)}).
\end{align*} Plugging this into the foregoing integral, we obtain the approximation \begin{align*} \text{Pr}(\boldsymbol{X}_g|I_g=k, \boldsymbol{X}, \hat{\boldsymbol{I}}) &\approx \frac{1}{N_T}\sum_{i=1}^{N_T} \left[\frac{1}{M}\sum_{m=1}^{M} \text{Pr}(\boldsymbol{X}_g|I_g=k, \bar{\boldsymbol{\theta}}_k^{(i,m)}, T_i)\text{Pr}(\boldsymbol{X}|T_i,\hat{\boldsymbol{I}}) \right], \end{align*} where $\bar{\boldsymbol{\theta}}_k^{(i,m)} = \mathbb{E}(\boldsymbol{\theta}_k|T_i, \hat{\boldsymbol{I}},\boldsymbol{H}^{(m)})$ can be calculated from the conjugate Beta distribution as in Eq (\ref{eq:point_theta_pm}); the predictive likelihood $\text{Pr}(\boldsymbol{X}_g|I_g=k, \bar{\boldsymbol{\theta}}_k^{(i,m)}, T_i)$ can then be calculated by the dynamic programming introduced in Section \ref{sub:H_integration}; and the likelihood of the input gene set, $\text{Pr}(\boldsymbol{X}|T_i,\hat{\boldsymbol{I}})$, has been calculated previously in Step 3 of the Gibbs sampler in Section~\ref{CLIME+Gibbs}. \section{Simulation studies\label{sec:simstudy}} We simulated phylogenetic profile data from two models: a tree-based hidden Markov model and a tree-independent model under which the model of CLIME 1.0 and CLIME 1.1 is mis-specified. The simulated input gene sets contain 50 genes, comprising a mixture of 5 ECMs, each with 10 genes, whose phylogenetic profiles were generated using the tree-based and tree-independent models. We analyzed the data with four methods: (1) CLIME 1.0; (2) CLIME 1.1; (3) hierarchical clustering based on Hamming distance \citep{pellegrini1999assigning}; (4) hierarchical clustering based on squared anti-correlation distance \citep{glazko2004detection}, where the distance between genes $i$ and $j$ is defined as $d_{i,j}=1-\left[\text{corr}\left(\boldsymbol{X}_{i},\boldsymbol{X}_{j}\right)\right]^{2}$. For the tree-based hidden Markov model, we first used MrBayes \citep{ronquist2003mrbayes} to obtain 100 phylogenetic trees generated from the posterior distribution of the tree structure model based on 16 highly conserved proteins of 138 eukaryotic species \citep{bick2012evolutionary} and an additional prokaryote outgroup (139 species in total). For each simulation, we randomly picked one of the 100 tree structures, and generated the phylogenetic profiles and ECM assignments based on the tree-based HMM and the picked tree structure. Note that here we simulated uncertainties in the tree structure. Thus, the original CLIME 1.0, with a single phylogenetic tree (the consensus) as input, runs the risk of tree misspecification for these simulated data. For each ECM, we first randomly selected one branch in the evolutionary tree to be the gain branch, and then, along its sub-tree, selected $N_{L}$ branches to be the potential gene loss branches and assigned $P_{L}$ as their gene loss probability to generate the phylogenetic profile of each gene. A higher $P_{L}$ leads to a more similar evolutionary history among the simulated genes in the same ECM, and a lower $P_{L}$ makes the underlying histories of genes less similar and adds more difficulty for the algorithms. We simulated observation error with rate $q=0.02$, which differs from the value $q=0.01$ pre-specified in the model of CLIME 1.0 and CLIME 1.1. In addition, we simulated $N_{S}\in\left\{ 0,10,20,50\right\} $ singleton ECMs with one gene in each ECM as background noise. In total, each input dataset contains a $(50+N_{S})\times 139$ binary matrix indicating the presence or absence of each gene in each species.
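A minimal sketch of this generating process for a single gene is given below; the function and variable names are illustrative assumptions, not the exact simulation code used for the figures.
\begin{verbatim}
import random

def simulate_profile(root, gain, children, loss_branches, P_L, q=0.02):
    # Tree-based HMM simulation (illustrative sketch): the gene appears on
    # its gain branch, may be lost with probability P_L on each designated
    # loss branch, and absence is inherited down the tree.
    H = {root: 1 if root == gain else 0}
    stack = [root]
    while stack:
        s = stack.pop()
        for c in children.get(s, ()):
            if c == gain:
                H[c] = 1                      # gain event
            elif H[s] == 1 and c in loss_branches and random.random() < P_L:
                H[c] = 0                      # independent loss event
            else:
                H[c] = H[s]                   # inherit the parent's state
            stack.append(c)
    # observed profile at the leaves, flipped with observation error q
    leaves = [s for s in H if s not in children]
    return {s: H[s] if random.random() > q else 1 - H[s] for s in leaves}
\end{verbatim}
Estimated partitionings can then be scored against the true ECM labels, e.g., with \texttt{sklearn.metrics.adjusted\_rand\_score} for the ARI comparisons reported below.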
For comparison, under the tree-independent generating model, the $N_{L}$ potential gene losses were randomly selected from the 139 species without any reference to their evolutionary relations. Note that such a tree-independent model is equivalent to a tree-based model in which all losses are constrained to happen exclusively on leaf branches. We varied $ P_{L} \in \{0.6,0.7,0.8,0.9\}$ and $N_{L}\in \{4,6,8,10\}$ for both the tree-based model and the tree-independent model. A higher $N_{S}$ gives more noise, while higher $P_{L}$ and $N_{L}$ produce more independent loss events across the ECMs and thus a stronger signal. For each set of parameters, we simulated phylogenetic profile matrices 20 times, applied all four methods, and adopted the average adjusted Rand index (ARI) \citep{hubert1985comparing} between the estimated and true partitioning over these 20 simulated datasets to evaluate clustering accuracy. For CLIME 1.0, to be consistent with the online software, we used the consensus phylogenetic tree built from 16 highly conserved proteins of 138 species \citep{bick2012evolutionary} with one outgroup prokaryote species as the single input tree structure, shown in Figure \ref{fig:evotree}. For CLIME 1.1, we included the 100 MrBayes samples described above as the input for the empirical prior of the tree structure, to account for estimation uncertainty. For hierarchical clustering, we used $10\%$ singleton genes as the cutoff for clustering, as adopted in \citet{glazko2004detection}. The complete simulation results for the tree-based model and the tree-independent model are reported in Figures \ref{fig:sim_results} and \ref{fig:sim_results_nt}, respectively. \begin{figure} \centering{}\includegraphics[width=\textwidth]{./simRI_bppt_sim.pdf}\caption{Simulation study results under the tree-based model. Comparison of clustering accuracy (ARI) between CLIME 1.0 (black solid line), CLIME 1.1 (red dashed), hierarchical clustering by Hamming distance (green dotted), and hierarchical clustering by anti-correlation (blue dot-dashed). $N_{L}$: number of tree branches for each ECM with non-zero loss probability. $P_{L}$: loss probability for the $N_{L}$ branches. $N_{S}$: number of singleton ECMs for each dataset.\label{fig:sim_results}} \end{figure} \begin{figure} \centering{}\includegraphics[width=\textwidth]{./simRI_bppt_simnt.pdf}\caption{Simulation study results under the tree-independent model. Comparison of clustering accuracy (ARI) between CLIME 1.0 (black solid line), CLIME 1.1 (red dashed), hierarchical clustering by Hamming distance (green dotted), and hierarchical clustering by anti-correlation (blue dot-dashed). $N_{L}$: number of tree branches for each ECM with non-zero loss probability. $P_{L}$: loss probability for the $N_{L}$ branches. $N_{S}$: number of singleton ECMs for each dataset.\label{fig:sim_results_nt}} \end{figure} As shown in Figure \ref{fig:sim_results}, when phylogenetic profiles were generated from a tree-based model of evolution with the risk of tree misspecification, CLIME 1.1, with tree uncertainty taken into account, dominates all other clustering methods in terms of accuracy. CLIME 1.0, in general, also holds the lead over the hierarchical clustering methods. The advantages of our tree-based Markov model are even more significant in scenarios where more loss events are present along the evolutionary history, i.e., more loss branches ($N_L \geq 6$) with higher loss probabilities ($P_L \geq 0.7$), which provide stronger signals for our tree-based model.
Another feature of our methods is their robustness against a varying number of singleton ECMs, i.e., the noise in clustering. As the noise level ($N_S$) increases, both CLIME 1.0 and CLIME 1.1 maintain consistent clustering accuracy, while the hierarchical clustering methods show severe decay in performance. Notably, by incorporating the uncertainty of the tree structure and weighting the clustering toward the more probable tree structures, CLIME 1.1 further boosts the clustering accuracy over CLIME 1.0, which draws inference solely from a single, possibly incorrect, input tree. Simulations under the tree-independent model put all four methods on more even footing. Still, both CLIME 1.0 and CLIME 1.1 outperformed the other benchmark methods in most of the simulation settings. Specifically, CLIME 1.1 maintained its dominance over all other methods, sustaining the benefit of incorporating tree-structure variability. With a distribution of possible evolutionary trees to integrate over, CLIME 1.1 takes advantage of model averaging through posterior updates of the tree structure, and adapts more successfully to the change of the generative model. Both CLIME 1.0 and CLIME 1.1 maintained consistent performance across the varying simulation settings, while the hierarchical methods, especially the one based on Hamming distance, are very sensitive to the noise level ($N_S > 0$). \section{Application to real data\label{sec:realdata}} We next apply both CLIME 1.0 and CLIME 1.1 to several real datasets, including two selected gene sets (mitochondrial complex I and proteinaceous extracellular matrix), as well as 409 manually curated gene sets from OMIM (Online Mendelian Inheritance in Man) \citep{hamosh2005online}, where each gene set consists of genes known to be associated with a specific genetic disease. We show that CLIME 1.0 and CLIME 1.1 enjoy advantages in clustering accuracy over existing methods. Furthermore, the results of the clustering and expansion analyses by CLIME 1.0 and CLIME 1.1 on these gene sets agree with established biological findings and also shed light on potential biological discoveries about gene functions and pathway compositions. \subsection{Phylogenetic tree and matrix} To facilitate the following analyses by CLIME 1.0, we used a single, consensus species tree published in \citet{bick2012evolutionary}, consisting of 138 diverse, sequenced eukaryotes with an additional prokaryote outgroup. For the analyses by CLIME 1.1, we used 100 posterior samples obtained by MrBayes \citep{ronquist2003mrbayes} based on the 16 highly conserved proteins of 138 species used by \citet{bick2012evolutionary}. We used the phylogenetic profile matrix in \citet{li2014expansion} for all $N=20,834$ human genes across the 139 species. A greater diversity of the organisms in the input tree often leads to greater power for CLIME 1.0 and CLIME 1.1, through the increased opportunity for independent loss events. \begin{figure} \centering{}\includegraphics[scale=0.565]{./tree_138}\caption{Phylogenetic tree in use, with 138 eukaryotic species \citep{bick2012evolutionary}. The tree consists of species in four different eukaryotic kingdoms (Protists, Plants, Fungi and Animals), labeled in four different colors.
Human is the rightmost species on the tree.\label{fig:evotree}} \end{figure} \subsection{Leave-one-out cross validation\label{sub:leave-one-out}} We compared CLIME 1.0 with the Hamming distance method and BayesTraits (BT) \citep{barker2005predicting, pagel2007bayestraits}, another phylogenetic-tree-based method for gene co-evolution analysis. We conducted leave-one-out cross-validation analyses on two selected pathway/gene sets (mitochondrial complex I and proteinaceous extracellular matrix) to evaluate the clustering accuracy of the three methods. Note that here we focus on the performance of CLIME 1.0, considering the computational demands of CLIME 1.1. In Sections \ref{sec:complex1} and \ref{sec:OMIM}, we show that CLIME 1.0 and CLIME 1.1 give relatively consistent results in real pathway-based data analysis. For each gene set, we applied CLIME 1.0 to all but one gene within a specific pathway as the test set for ECM identification, and then expanded the identified ECMs with the remaining human genes ($\sim 20,000$ candidate genes). We varied the LLR threshold in the Expansion step of CLIME 1.0 and repeated this leave-one-out procedure for all genes in the gene set to calculate the average sensitivity and specificity of the algorithm. Note that true positive calls (sensitivity) are made when the left-out gene is included in the expansion list, and false positive calls are made when genes outside the pathway are included in the expansion list of any established ECM. For comparison, we also conducted the same experiment with the Hamming distance method \citep{pellegrini1999assigning} and BayesTraits. BayesTraits is computationally intensive, as it evaluates genetic profiles in a pairwise manner (estimated $\sim 244$-hour CPU time for the $44\times 20,000$ pairwise calculations of one leave-one-out experiment on a $44$-gene pathway test set, versus CLIME 1.0's $\sim 2$-hour CPU time). For computational efficiency, we subsampled only $500$ genes from the remaining ($\sim 20,000$) human genes as the candidate set for gene set expansion. We calculated the pairwise co-evolution p-values by BayesTraits between all genes in the leave-one-out test set and the candidate set, and made a positive call if the minimal p-value between the candidate gene and the genes in the test set was below a certain threshold. Similarly, we varied the threshold to obtain the sensitivity and specificity of the algorithm. We applied all these methods to two gene sets, mitochondrial respiratory chain complex I (44 genes) and proteinaceous extracellular matrix (194 genes), and report the receiver operating characteristic (ROC) curves (true positive rate (TPR) versus false positive rate (FPR)) of all methods in Figures \ref{fig:loo_CI} and \ref{fig:loo_PEM}, respectively. Both CLIME 1.0 and BayesTraits dominated the Hamming-distance-based method, showing the strong advantage of incorporating information from phylogenetic trees in functional pathway analysis based on genetic profiles. CLIME 1.0 performed slightly better than BayesTraits over the majority of the evaluation range of the ROC curve. In particular, CLIME 1.0 dominated BayesTraits in both experiments when false positive rates were under $0.2\%$, indicating CLIME 1.0's strength in providing accurate gene clustering with controlled mis-classification errors. \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{{./CI_loo_ROC}.pdf} \caption{Real data leave-one-out cross-validation on gene set: mitochondrial respiratory chain complex I.
Comparison of ROC curves between CLIME 1.0, BayesTraits, and Hamming distance. \label{fig:loo_CI}} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{{./PEM_loo_ROC}.pdf} \caption{Real data leave-one-out cross-validation on gene set: proteinaceous extracellular matrix. Comparison of ROC curves between CLIME 1.0, BayesTraits, and Hamming distance. \label{fig:loo_PEM}} \end{figure} \subsection{Human complex I}\label{sec:complex1} We compared CLIME 1.0 and CLIME 1.1 on a set of 44 human genes encoding complex I, the largest enzyme complex of the mitochondrial respiratory chain, essential for the production of ATP \citep{balsa2012ndufa4}. CLIME 1.0 partitioned the 44 genes into five non-singleton ECMs, and CLIME 1.1 gave a nearly identical clustering (ARI: 0.962), as shown in Figure \ref{fig:complex1}, except that CLIME 1.1 combines the two CLIME 1.0 ECMs that are related to nuclear-DNA-encoded subunits of the alpha subcomplex (with prefix NDUF) \citep{mimaki2012understanding}. Both CLIME 1.0 and CLIME 1.1 identified an ECM containing only the subunits encoded by mitochondrial DNA (ECM1: ND1, ND4 and ND5; ECM strength by CLIME 1.0: $\phi=30.1$, CLIME 1.1: $\phi=30.1$), and an ECM comprising solely the core components of the N module in complex I (ECM2: NDUFV1 and NDUFV2; ECM strength by CLIME 1.0: $\phi=6.2$, CLIME 1.1: $\phi=6.7$) \citep{mimaki2012understanding}. A detailed report on the largest ECM (indexed ECM3; ECM strength by CLIME 1.0: $\phi=5.0$, CLIME 1.1: $\phi=5.8$) by both methods and their respective top extended gene sets (ECM3+) is shown in Table \ref{tab:complex1}. ECM3 mainly contains the nuclear-DNA-encoded subunits of complex I, including all four core subunits in the Q module of complex I (marked by asterisks). Among the top extended genes in ECM3+, multiple complex I assembly factors and core subunits are identified (marked by boldface). \begin{figure} \centering \includegraphics[width=\textwidth]{./human_CI_0_100_tr1.pdf} \caption{Partition of 44 human Complex I genes by CLIME 1.0 and CLIME 1.1. Genes with the same colored blocks are included in the same non-singleton ECMs. Grey color indicates singleton genes.}\label{fig:complex1} \end{figure} \begin{table}[ht] {\centering \setlength{\tabcolsep}{4pt} \resizebox{\textwidth}{!}{ \begin{tabular}{r|llll||llll} \hline & \multicolumn{4}{c||}{CLIME 1.0}& \multicolumn{4}{c}{CLIME 1.1} \\ \hline \multirow {4}{*} {ECM3}& NDUFS7* & NDUFA9 & NDUFS3* & NDUFS4 & NDUFS7* & NDUFA9 & NDUFS3* & NDUFS4 \\ & NDUFS6 & NDUFS2* & NDUFS1 & NDUFA6 & NDUFS6 & NDUFS2* & NDUFS1 & NDUFA6 \\ & NDUFA12 & NDUFS8* & NDUFA13 & NDUFB9 & NDUFA12 & NDUFS8* & NDUFA13 & NDUFB9 \\ &\multicolumn{4}{c||}{}& NDUFA5 & NDUFA8 & NDUFA2 &\\ \hline \hline \multirow {5}{*} {ECM3+}& \textbf{NDUFAF5} & GAD1 & GADL1 & \textbf{NDUFAF7} & GAD1 & \textbf{NDUFAF7} & GADL1 & \textbf{NDUFAF5} \\ & DDC & HDC & IVD & \textbf{NDUFAF6} & DDC & HDC & HSDL2 & CSAD \\ & \textbf{NDUFV1} & ACADL & \textbf{NDUFV2} & CSAD & \textbf{NDUFAF1} & CPSF6 & IVD & GAD2 \\ & \textbf{NDUFAF1} & CPSF6 & GAD2 & HSDL2 & ACADL & \textbf{NDUFAF6} & HPDL & HPD \\ & RHBDL1 & MCCC2 & HPDL & ACADVL & \textbf{NDUFV1} & \textbf{NDUFV2} & RHBDL1 & MCCC2 \\ \hline \end{tabular}} \label{tab:complex1}} \caption{ECM3 and its extension ECM3+ derived from the set of 44 human Complex I genes by CLIME 1.0 and CLIME 1.1.
Asterisk indicates core subunits of complex I; boldface indicates predictions with recent experimental support for functional association with the input set.} \end{table} \subsection{Gene sets related to human genetic diseases}\label{sec:OMIM} We performed the CLIME 1.0 analysis on 409 manually curated gene sets from OMIM (Online Mendelian Inheritance in Man) \citep{hamosh2005online}, where each gene set consists of genes known to be associated with a specific genetic disease. CLIME 1.0 identified non-singleton ECMs in 52 of these 409 gene sets (see \url{http://www.people.fas.harvard.edu/~junliu/CLIME/} for complete results). Figure~\ref{fig:omim} shows the top 20 disease-associated gene sets with the highest-strength ECMs. For gene sets related to diseases such as Leigh syndrome, mitochondrial complex I deficiency, and congenital disorder of glycosylation, multiple high-strength ECMs were identified by CLIME 1.0, which suggests that functionally distinct sub-groups may exist in these gene sets. We note that among the top five gene sets, three are related to human ciliary disease (highlighted in red). Specifically, the only non-singleton ECM ($\phi=13.2$) for ciliary dyskinesia, characterized by more than 15 independent loss events, is fully displayed in Figure \ref{fig:omim}B. The expansion list contains 100 novel genes with $\text{LLR}>0$. As illustrated by the heat map in Figure \ref{fig:omim}B, all genes in the ciliary dyskinesia ECM and its expansion list share a remarkable consensus in their phylogenetic profiles. Furthermore, about 50 of the 100 expansion genes belong to the Ciliome database \citep{inglis2006piecing}, an aggregation of data from seven large-scale experimental and computational studies, demonstrating the strong functional relevance of CLIME 1.0's expansion predictions. \begin{figure} \centering{}\includegraphics[scale=0.72]{./CLIME_cilia}\caption{(A) Top 20 OMIM gene sets with highly informative ECMs by CLIME 1.0, ranked by strength of the top ECM. All non-singleton ECMs are shown as separate dots. Three gene sets related to human ciliary dysfunction are highlighted in red. (B) ECM 1 for the ciliary dyskinesia gene set. The inferred gain/loss events are indicated by blue and red tree branches. Blue/white and green/white matrices show phylogenetic profiles of ECM and expanded genes, respectively. Green arrows indicate predicted new genes that are supported by the Ciliome database. \label{fig:omim}} \end{figure} We next compared CLIME 1.1 with CLIME 1.0 on this ciliary dyskinesia gene set. The ECM partition by CLIME 1.1 is identical to that of CLIME 1.0, providing strong support for such a subgroup structure among the ciliary-dyskinesia-related genes. We further compared the extended gene sets (ECM+) obtained by CLIME 1.0 and CLIME 1.1. Among the top 100 predicted genes, 89 are shared by CLIME 1.0 and CLIME 1.1, with the top 20 reported in Table \ref{tab:OMIM}. The majority of the new members predicted by CLIME 1.0 and CLIME 1.1 can be validated as having functional association with cilia (cross-referenced by GeneCards: \url{https://www.genenames.org/}). In addition, the top four predicted genes have been found to be related to primary ciliary dyskinesia \citep{horani2016genetics}, further demonstrating the promising power of CLIME 1.0 and CLIME 1.1 in predicting functional relevance.
\begin{table}[ht] \centering \scriptsize \setlength{\tabcolsep}{4pt} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{r|llll||llll} \hline & \multicolumn{4}{c||}{CLIME 1.0}& \multicolumn{4}{c}{CLIME 1.1} \\ \hline \multirow {2}{*} {ECM} & RSPH4A & HEATR2 & RSPH9 & CCDC39 & RSPH4A & HEATR2 & RSPH9 & CCDC39 \\ & CCDC40 & DNAAF2 & & & CCDC40 & DNAAF2 & & \\ \hline \hline \multirow {5}{*} {ECM+} & \textbf{RSPH6A}* & \textbf{CCDC65}* & \textbf{RSPH3}* & \textbf{C6orf165}* & \textbf{RSPH6A}* & \textbf{CCDC65}* & \textbf{C6orf165}* & \textbf{RSPH3}* \\ & \textbf{DRC1}* & \textbf{SPEF1} & \textbf{PIBF1} & SPATA4 & \textbf{CCDC113} & \textbf{DRC1}* & \textbf{SPEF1} & \textbf{PIH1D3}* \\ & \textbf{MAATS1} & \textbf{CCDC113} & \textbf{CCDC147} & ODF3 & SPATA4 & \textbf{MAATS1} & \textbf{CCDC147} & \textbf{PIBF1} \\ & \textbf{C21orf59}* & \textbf{SPAG16} & \textbf{IQUB} & RIBC2 & ODF3 &\textbf{IQUB} & \textbf{CCDC135} & CCDC146 \\ & CCDC146 & \textbf{CCDC135} & \textbf{CCDC63} & \textbf{PIH1D3}* & \textbf{TTC26} & \textbf{SPAG16} & \textbf{CEP164} & \textbf{CCDC13} \\ \hline \end{tabular}} \caption{The non-singleton ECM and its extension ECM+ of the ciliary dyskinesia gene set by CLIME 1.0 and CLIME 1.1. For ECM+, boldface indicates predicted functional association with the input set; asterisk indicates direct association with ciliary dyskinesia supported by recent experimental or human genetic evidence.}\label{tab:OMIM} \end{table} \section{Discussion\label{sec:discussion}} Instead of integrating the pairwise co-evolution information of the genes in the input gene set in an {\it ad hoc} way, CLIME 1.0 explicitly models multiple genes in a functional gene set as a set of disjoint gene modules, each with its own evolutionary history. Leveraging information from multiple genes and modeling profile errors are critical because phylogenetic profiles are often noisy due to incomplete assemblies/annotations and errors in detecting distant homologs. Furthermore, CLIME 1.0 automatically infers the number of modules and the gene assignments to each module. As an extension, CLIME 1.1 inherits these strengths of CLIME 1.0 and enhances its robustness and accuracy by incorporating the uncertainty of evolutionary trees. CLIME 1.1 thereby takes into account the error in the tree estimation process, as well as the variability of phylogenetic relationships among genes. Simulation studies and leave-one-out cross-validations on real data showed that CLIME 1.0 achieved significantly improved accuracy in detecting shared evolution compared with the benchmark methods we tested. CLIME 1.1 further adds to CLIME 1.0 with improved robustness and precision. Applications of CLIME 1.0 and CLIME 1.1 to real data testified to the algorithms' excellent power in predicting functional associations between genes and in providing guidance for further biological studies (see \citet{li2014expansion} for more details). Based on our exemplary pathway/gene set data, CLIME 1.0 and 1.1 showed great consistency in identifying evolutionarily conserved subsets of genes, and demonstrated high accuracy in recovering and predicting functionally connected gene groups. CLIME 1.1 further contributed discoveries with improved robustness and relevance. Specifically, in our investigations of the 44 complex-I-encoding genes, both CLIME 1.0 and 1.1 were able to identify subgroups of genes encoding different functional modules of complex I, and to connect assembly factors associated with each module.
CLIME 1.1 added to CLIME 1.0 by combining the two subgroups with nuclear-DNA-encoded subunits, further improving the biological interpretation of the clusters. This helps provide insights into the complete picture of complex I's catalyzing process and mechanism. We also applied our methods to more than 400 gene sets related to human genetic diseases, where CLIME 1.0 and 1.1 showed great potential in predicting genes' functional associations with disease. Focusing on ciliary dyskinesia, both CLIME methods established novel connections between classic disease-driven genes and other cilia-related genes from the human genome. CLIME 1.1 furthered prediction relevance with 5\% more cilia-related genes among the top predictions. Most notably, the top four predicted genes by both CLIME methods have been validated by recent studies on primary ciliary dyskinesia. This gives biologists great confidence in using CLIME as a powerful tool and in following up CLIME's findings for further experimental validations and studies on such human genetic diseases. In exchange for the gain in prediction accuracy, CLIME 1.0 demands comparatively high computational capacity. The computational complexity is about $O(Sn^{2})$ per MCMC iteration in the Partition step. For CLIME 1.1, with the incorporation of tree uncertainty, the step-wise computational complexity is about $O(N_TSn^2)$. In practice, to ensure computational efficiency, we recommend running CLIME 1.0 first for a general, large-scale exploration and CLIME 1.1 for more focused, follow-up analyses and validations, as demonstrated in Sections \ref{sec:complex1} and \ref{sec:OMIM}. As shown in the simulation studies, CLIME 1.0 and CLIME 1.1 gain most of their prediction power from the abundance of independent gene loss events throughout the evolutionary process. In fact, independent gene losses create variability in the phylogenetic profiles of distinct gene clusters, thus providing a strong signal for CLIME 1.0 and 1.1 to draw inference from. Similarly, in real data we observe that the power of CLIME 1.0 and 1.1 derives from the diversity of species genomes, as it provides the opportunity to observe more shared loss events. In recent years, the availability of completely sequenced eukaryotic genomes has been increasing dramatically. With the growing abundance and quality of eukaryotic genome sequences, the power of CLIME 1.0 and 1.1 will increase, as evolutionarily distant species are more likely to possess abundant gene loss events, and thus stronger signals for CLIME 1.0 and 1.1 to extract. Further improvements of the model are possible. Currently, we do not estimate $q$ but set it to $0.01$ based on our prior knowledge of the observation error rate. Though we observe that the model is robust to $q$, it would be more statistically rigorous to estimate $q$ from the data. Furthermore, as there is variation in the quality of sequenced genomes, we can further assume that different species have different mis-observation rates with independent priors, which can be estimated through posterior updating. Admittedly, point estimates of the cluster labels by MAP provide an interpretable representation of the posterior results, which is especially convenient for scientists conducting follow-up analyses or experiments. We may also consider alternative representations of the posterior over cluster assignments, for example, the co-assignment probabilities for pairs of genes.
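Such co-assignment probabilities are directly available from the MCMC output. As a minimal sketch (the function name and array layout are our own illustrative choices, not part of the released software), given $M$ posterior samples of $\boldsymbol{I}$:
\begin{verbatim}
import numpy as np

def coassignment(I_samples):
    # I_samples: array of shape (M, n), one sampled label vector per row.
    # Entry (i, j) of the output estimates Pr(I_i = I_j | X).
    I = np.asarray(I_samples)
    return (I[:, :, None] == I[:, None, :]).mean(axis=0)
\end{verbatim}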
The results, C++ software implementing the proposed method, and an online analysis portal are freely available at \url{http://www.gene-clime.org}. The website was previously introduced in \citep{li2014expansion}. \section*{Acknowledgment} This research was supported in part by the NSF Grant DMS-1613035, NIGMS Grant R01GM122080, and NIH Grant R35 GM122455-02. VKM is an Investigator of the Howard Hughes Medical Institute. \bibliographystyle{imsart-nameyear}
\section{Introduction} We explore the performance of a reinforcement learning (RL) process and a new whole-body impedance and force controller for robust dynamic locomotion on a full biped humanoid dynamical model. Full-bodied 3D humanoid dynamic walking based on inverted pendulum (IP) dynamics and RL has not been studied to date. To enable the RL process to run efficiently, we found that utilizing phase-space planning (PSP) \cite{Zhao:2012de} provides a space of practical parameters that enables the transition function to operate in the reduced inverted pendulum manifold. A key advantage of using an IP model is that it generalizes locomotion to many types of systems with only a light dependence on their concrete kinematic structure. Using an IP model not only reduces the search space but also enables the same procedure to be used across different types of full-bodied bipedal humanoid robots. A work closely related to ours, \cite{MacAlpine:2012vp}, utilizes a trajectory parametrization of the dynamic locomotion process \cite{Graf:2009uz} specifically customized for a particular robot, the NAO. As such, our study can be viewed as a generalization of this type of work to the more general models typically used in the dynamic walking community \cite{dynamic-walking-2017} -- e.g. inverted pendulums. In addition, the previous line of work on NAO robots has remained quiet with respect to quantifying robustness. In the work presented here, we address robustness as a main thrust, performing detailed simulations of large, unplanned disturbances at moderate walking speeds. Since IP dynamics live in a reduced manifold, we propose a new type of whole-body controller that is highly robust and effective at transferring the IP-based locomotion process to the full humanoid model. In particular, we incorporate new efficiently-computed feedforward terms, momentum and balancing tasks, and more accurate contact models, while maintaining the key capability of task prioritization. In addition, we completely reformulate the control structures with respect to our previous work on whole-body control. We accommodate the new models and also achieve high computational efficiency. As such, we build on our long history of devising whole-body controllers, this time around making significant algorithmic changes. We believe these transformations constitute a quantum leap with respect to whole-body controllers with dynamic locomotion capabilities. The combination of RL-IP-PSP for locomotion pattern generation achieves significant robustness by training a neural network through an actor-critic process with many possible center-of-mass states, representing potential disturbances, and then learning successful step timing and foot position policies for recovery. Utilizing both step timing and foot positions is not typically explored because simultaneously varying both of these parameters results in nonlinear system dynamics. \cite{Herdt:2010bh} proposed a model predictive control (MPC) method for synthesizing walking patterns based on desired foot positions, kinematic limits, and given step timing. \cite{Kryczka:2015ck} formulated a nonlinear optimization problem solving two walking steps ahead of time to reduce computational cost. \cite{Khadiv:2016hm} linearized an optimization problem by searching for a solution one step ahead of time.
In contrast, instead of relying on runtime optimizations, we train a control policy offline using an IP model, and use it afterwards for real-time control of full humanoid robots being physically disturbed. Therefore, our learned locomotion planning generator can plan hundreds of steps ahead of time in an instant compared to the stepping time scale. As such, speed is a key characteristic of the proposed planning and control framework compared to the state of the art. A new robust dynamic locomotion generator is, by itself, insufficient for direct use on full humanoid robots. Therefore, building on our expertise in this area, we devise a new type of whole-body controller (WBC), which we call the whole-body locomotion controller (WBLC), that focuses on speed, unilateral contact constraints, and fast prioritized task control. The proposed WBC algorithm enables the efficient computation of projection-based hierarchical task controllers \cite{Sentis:COl04Q1j} while at the same time incorporating contact inequality constraints, which have previously been handled via quadratic programming (QP) \cite{koolen2013summary,feng2015optimization,kuindersma2014efficiently} and hierarchical quadratic programming (HQP) \cite{saab2013dynamic}. While QP-based controllers have been very successful in field applications, their computational cost is considerably higher than that of projection-based methods. Conversely, projection-based methods have not previously incorporated inequality constraints such as unilateral contact and friction constraints. Our proposed WBC algorithm combines, for the first time, QP and projection-based methods. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{A_valkyrie_figure} \caption{{\bf Type of humanoid platform our controller explores.} The left image shows NASA's Valkyrie humanoid robot, with 135.9 $\si[per-mode=symbol]{\kilo\gram}$ weight and 1.83 $\si[per-mode=symbol]{\meter}$ height. The right image shows our dynamic simulation of Valkyrie using the physics-based simulator srLib.} \label{fig:valkyrie_figure} \vspace{-1mm} \end{figure} In our study, we introduce a centroidal angular momentum (CAM) task to improve agile locomotion performance, and we examine its effect when used in the whole-body control hierarchy. For example, when a robot quickly shakes or rotates its body, we reduce undesirable arm movements that result from angular momentum compensation by introducing an arm motion task with higher priority than the CAM task. In the proposed WBLC, we devise new projection-based recursive structures that incorporate unilateral contact and friction constraints, yielding the desired reduction in computational cost compared to other QP-based algorithms. However, we do not achieve computational efficiency only by combining QP and projection-based methods; we also achieve it through important improvements in the computation of the projection-based operations themselves. Indeed, in conventional projection-based methods, the computational cost of some operations is considerably high. For instance, one well-known WBC algorithm that uses joint accelerations, \cite{siciliano1991general}, includes costly terms such as the time derivative of a null space projection matrix. In addition, many WBC algorithms contain computations of Coriolis/centrifugal and gravitational forces projected onto the operational task space \cite{sentis2005synthesis,mansard2009unified}, which are costly to calculate, especially as the number of control tasks increases. Our WBLC algorithm eliminates these problems.
We devise an analytic solution for the time derivatives of Jacobians by employing Lie group operators, and implement it using the Rigid Body Dynamics Library. We also eliminate the need to compute Coriolis/centrifugal terms for every task priority. \begin{figure} \centering \includegraphics[width=\columnwidth]{A_phase_planner_concept} \caption{{\bf PIPM and CoM phase plots.} (a) PIPM 3D position moving on a variable height surface. (b) Overlapped PIPM phase plots corresponding to CoM paths during dynamic walking. In the sagittal plane, we can see multiple parabolas connected to each other, corresponding to the various walking steps. Parabolas in the frontal plane produce limit cycles.} \label{fig:psp_concept} \end{figure} Overall, the main contributions of our study are as follows. First, we devise a novel learning framework for robust dynamic locomotion under push disturbances, achieving virtually instantaneous re-planning of an order of magnitude more steps than the state of the art. Second, we devise an elegant method to introduce steering capabilities into phase-space planning for dynamically moving in all directions. Third, we devise a new whole-body locomotion controller, which combines the benefits of QP-based computation of reaction forces and projection-based prioritized task control. Due to many optimizations, we believe that this controller is one of the fastest WBCs that fulfills both prioritization and practical inequality constraints. Lastly, we integrate all of these algorithms into comprehensive software and conduct thorough testing of robust dynamic locomotion under large push disturbances in physics-based simulations of Valkyrie. \begin{figure*} \centering \includegraphics[width = 1.95\columnwidth]{A_psp_explanation} \caption{{\bf Phase Space Planner (PSP).} (a) shows the method to find a switching time and a lateral foot placement position given a forward step location and an apex velocity. In the sagittal phase plot, we can see that the given current CoM state and the apex state uniquely define the switching state \myswitch. From an initial state, the planner computes the switching and apex times. These two timing values are used to find the next step location in the lateral plane. (b) shows the process of steering the robot's walking direction. When changing the walking direction, we first align the orientation of the next local frame with the direction the robot intends to go, and second we project the current CoM state into the next local frame. } \label{fig:psp_exp} \end{figure*} The paper is organized as follows. In Section~\ref{sec:rl_psp}, we describe the RL-IP-PSP process. We then introduce the formulation of WBLC in Section~\ref{sec:wblc}, together with the methods to efficiently obtain the time derivatives of the Jacobians for the motion control and CAM tasks. In Section~\ref{sec:result}, we study the effects of our framework in agile tasks such as shaking the robot's body, walking while steering, and recovering from large pushes while walking. We do all of this using a model of NASA's Valkyrie robot and the srLib physics-based engine\footnote{Seoul National University Robotics Library. Physics-based simulation. Open-source \url{http://robotics.snu.ac.kr/srlib/}} (Fig.~\ref{fig:valkyrie_figure}). A more exhaustive review of previous work can be found in Appendix~\ref{sec:related_work}.
\section{Reinforcement Learning based \\Phase Space Planner} \label{sec:rl_psp} We devise an RL process around a PSP framework; the latter significantly enhances learning efficiency by exploiting the inherent directional walking constraints of PSP. PSP generates effective step switching information using simplified models such as the prismatic inverted pendulum model (PIPM). In Fig.~\ref{fig:psp_concept}, we show phase plots across multiple walking steps of the CoM sagittal and lateral phase portraits based on PIPM dynamics. In the sagittal plane, the path consists of connected parabolas, while in the frontal plane, the walking path follows semi-periodic parabolas in a closed cycle. For convenience, we will use $x$ for the sagittal plane and $y$ for the frontal plane. \subsection{Phase Space Planner} \label{sec:psp_explain} Leading step planning generators, such as the Divergent Component of Motion \cite{Englsberger:2015jp}, find CoM paths given step positions and their timing as input information. The ZMP Preview Control method \cite{Kajita:2003iq} has different mechanics, but its output can be interpreted as finding the CoM path given step position and timing input information. In contrast, PSP finds the step switching time and lateral foot positions, given sagittal foot positions and apex velocities (Fig.~\ref{fig:psp_exp}). The apex states are those at the instant when the sagittal CoM velocity is at its minimum; equivalently, they can be considered as states where the sagittal CoM position is zero in a local frame attached to the stance foot, i.e. when the stance foot is directly below the CoM in the sagittal direction. In Fig.~\ref{fig:psp_exp}(a), we can see that the robot's current CoM state and the next desired apex state uniquely define a switching state \myswitch, a switching time, and an apex time. These timings are used to find the next lateral foot position, $p_{y,2}$. Note that the resultant locomotion trajectory is a straight forward walk if $\dot{x}_{apex}$ is a positive number and $\dot{y}_{apex} =0$. In contrast, the proposed algorithm applies a simple and elegant modification that allows us to dynamically steer the biped in any walking direction (see Fig.~\ref{fig:psp_exp}(b)). When we need to change the walking direction, we re-initialize the orientation of the local frame $\{b\}$ to the new direction and project the current state into the new frame. The original PSP algorithm devised locomotion trajectories via numerical integration. However, for algorithmic speed purposes, the methods presented here assume that the CoM height follows a piecewise linear surface, allowing us to exploit an analytical solution (see Appendix~\ref{sec:append_psp}). Considering a one-step-ahead plan, an initial CoM state, and desired future states $[p_x,~\dot{x}_{apex},~\dot{y}_{apex}]^{\top}$, PSP finds the next step position and timing, $\begin{bmatrix} p_y, &t_{switch}\end{bmatrix}^{\top}$. Notice that the walking direction is indicated using the apex velocities, $[\dot{x}_{apex}, ~\dot{y}_{apex}]^{\top}$. We will now see that our formulation of PSP makes the RL problem more efficient by reducing the dimensionality of the learned state variables. \subsection{The Reinforcement Learning Problem} \label{sec:rl_process} As mentioned before, a central part of our walking methodology is to achieve robustness via reinforcement learning. The technique we use is the Actor-Critic with Eligibility Traces method. We summarize this process in Algorithm~\ref{code:rl_explain}, which is an adaptation of \cite{sutton2011reinforcement}.
\begin{algorithm}[t] \KwIn{$\hat{v}(\mathbf{s}, \mathbf{w})$, $\forall \mathbf{s} \in \mathcal{S}$, $\mathbf{w} \in \mathbb{R}^{18 \times 30 \times 56 + 1}$} \KwIn{$\pi( \mathbf{a}| \mathbf{s}, \bm{\theta})$, $\forall \mathbf{a} \in \mathcal{A}$, $\mathbf{s} \in \mathcal{S}$, $\bm{\theta} \in \mathbb{R}^{(18 \times 30 \times 56 + 1) \times 6}$} % \vspace{1.5mm} % \KwResult{$\bm{\theta}$, $\mathbf{w}$} \vspace{1.5mm} % Initialize policy weights $\bm{\theta}$ and state-value weights $\mathbf{w}$\vspace{1.5mm} \While{(variance of the policy is large)}{\vspace{1mm} Randomly pick $\mathbf{s}$ in $\mathcal{S}$\\ $\mathbf{e}^{\bm{\theta}} \gets \mathbf{0}$ (eligibility trace of policy parameters)\\ $\mathbf{e}^{\mathbf{w}} \gets \mathbf{0}$ (eligibility trace of value parameters)\\ $I \gets 1$\\ \vspace{1mm} \While{($\mathbf{s}$ is not terminal)}{\vspace{1mm} $\mathbf{a} \sim \pi(\cdot | \mathbf{s}, \bm{\theta})$\\ $\mathbf{s'}, R \gets T( \mathbf{s}, \ \mathbf{a})$\\ $\delta \gets R + \gamma \hat{v}(\mathbf{s'}, \mathbf{w}) - \hat{v}(\mathbf{s}, \mathbf{w})$ \\ $\mathbf{e}^{\mathbf{w}} \gets \lambda^{\mathbf{w}}\mathbf{e}^{\mathbf{w}} + I\nabla_{\mathbf{w}} \hat{v}(\mathbf{s}, \mathbf{w})$ \\ $\mathbf{e}^{\bm{\theta}} \gets \lambda^{\bm{\theta}}\mathbf{e}^{\bm{\theta}} + I\nabla_{\bm{\theta}} \log \pi(\mathbf{a}|\mathbf{s}, \bm{\theta})$ \\ $\mathbf{w}\gets \mathbf{w} + \beta \delta \mathbf{e}^{\mathbf{w}}$\\ $\bm{\theta}\gets \bm{\theta} + \alpha \delta \mathbf{e}^{\bm{\theta}}$\\ $I \gets \gamma I$ \\ $\mathbf{s} \gets \mathbf{s'}$\\ } } \caption{Actor-Critic with Eligibility Traces}\label{code:rl_explain} \end{algorithm} We define the state $\mathbf{s}$ as the CoM apex state, $\mathbf{s} \triangleq \begin{bmatrix} y_{apex}, &\dot{x}_{apex}, &\dot{y}_{apex} \end{bmatrix}^{\top}$. Notice that $\mathbf{s}$ does not include the variable $x_{apex}$ because it is zero in the local frame by definition. We define the actions, $\mathbf{a} \triangleq \begin{bmatrix} p_x, &\dot{x}_{apex}, &\dot{y}_{apex} \end{bmatrix}^{\top}$, as input parameters to the PSP process. A transition function, $T(\mathbf{s},~\mathbf{a})$, computes the next apex state, $\mathbf{s}'$, and the instantaneous reward value. In Fig.~\ref{fig:transition_fn}, we show the transition function, which consists of two stages: 1) finding the step timing and position values via PSP, and 2) computing the next apex state via an analytic solution of the linear inverted pendulum model (LIPM). The first stage is described in Appendix~\ref{sec:append_psp} and allows us to find $t_{switch}$, $t_{apex}$, and $p_y$ from the current apex state and the chosen action. The second stage finds the next apex state using the analytic solution of the CoM dynamics (see Eq.~\eqref{eq:x_state}). In Algorithm \ref{code:Phase_Space}, the process of finding the switching times and the next apex states is explained in detail. The next item, $\hat{v}(\mathbf{s}, \bm{w})$, corresponds to the value function -- similar to the cost-to-go function in Dynamic Programming. We store its learned values using a radial basis function (RBF) neural network~\cite{cualinmultidimensional}. The network uses a three-dimensional input vector consisting of the CoM apex state, whose components are bounded as follows: \begin{equation}\label{eq:state_range} \begin{split} & \cdot ~ -0.14 \leq y_{apex} \leq 0.2 ~(\si[per-mode=symbol]{\meter}),\\[1mm] & \cdot ~ \ \ 0.03 \leq \dot{x}_{apex} \leq 0.61 ~(\si[per-mode=symbol]{\meter\per\second}), \\[1mm] & \cdot ~ -0.55 \leq \dot{y}_{apex} \leq 0.55 ~(\si[per-mode=symbol]{\meter\per\second}).
\end{split} \end{equation} The hidden layer consists of a bias term and $18\times 30 \times 56$ Gaussian functions with centers on a grid with 2 $\si[per-mode=symbol]{\centi\meter}$ spacing along each input dimension. The policy function also consists of an RBF neural network, but it differs slightly from the value function because actions are chosen stochastically. \begin{figure} \includegraphics[width=\columnwidth]{A_transition_fn} \caption{{\bf Transition Function.} The transition function relies on two models: PSP and LIPM. Given an apex state and a considered action, PSP computes the step timing and location information that serve as inputs to the LIPM model. Then, the LIPM solves for the next apex state based on the given state and the provided inputs.} \label{fig:transition_fn} \end{figure} Fig.~\ref{fig:rb_policy} shows that the outputs of the RBF network are the means and standard deviations of truncated normal distributions, $\pi(\mathbf{a}|\mathbf{s}, \bm{\theta})$. The ranges of the distributions are selected by considering the desired walking speed and step length limits as follows: \begin{equation} \begin{split} & \cdot ~ 0.1 \leq p_x \leq 0.5 ~ (\si[per-mode=symbol]{\meter}),\\[1mm] & \cdot ~ 0.03 \leq \dot{x}_{apex} \leq 0.37 ~(\si[per-mode=symbol]{\meter\per\second}),\\[1mm] & \cdot ~ -0.25 \leq \dot{y}_{apex} \leq 0.25 ~(\si[per-mode=symbol]{\meter\per\second}). \end{split} \end{equation} The network's outputs are linearly weighted by $\bm{\theta}$; thus, the purpose of RL is to find the weights $\bm{\theta}$ yielding actions that minimize the desired cost. The instantaneous reward is defined by the forward velocity error and the lateral step size error: \begin{equation} R = -(\dot{x}_{apex}^{nom} - \dot{x}_{apex})^2 - 15 \times (p_y^{nom} - p_y)^2 -(\dot{y}_{apex})^2. \end{equation} The target of the learning process is to achieve recovery behaviors that maintain a straight forward direction, $\dot{y}_{apex} = 0$, while keeping a nominal lateral step size. We choose $\dot{x}_{apex}^{nom} = 0.2\si[per-mode=symbol]{\meter\per\second}$ and $p_{y}^{nom} = 0.3\si[per-mode=symbol]{\meter}$. The reward comes from the transition function described before, given the current apex state and actions selected from the truncated distributions. If the next predicted apex state incurs a terminal condition, the transition function gives a negative reward of $-5.0$, and the process terminates and starts a new iteration. The set of safe conditions (i.e. the opposite of the terminal conditions) is the intersection of the following predicates: \begin{equation}\label{eq:terminal_cond} \begin{split} &\cdot ~ t_{apex} > 0.12 \ (\si[per-mode=symbol]{\second}),\\[1mm] &\cdot ~ t_{switch} > 0.12 \ (\si[per-mode=symbol]{\second}),\\[1mm] &\cdot ~ 0.1 < p_{y} < 0.5 \ (\si[per-mode=symbol]{\meter}), \end{split} \end{equation} which reflect the robot's leg swing capabilities and the lateral step length limits. Notice that we do not include a predicate on the sagittal step length because it is already bounded by the allowable action range. The learning process ends when the variance of the learned policy becomes small enough ($<0.07$ units in our case). The usual number of iterations required to complete the learning process is about 30,000, and the process typically takes about 1 $\si[per-mode=symbol]{\minute}$ to compute on a dual-core, 3.0 GHz, Intel i7 processor thanks to the speed of our analytic PSP method.
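To make the updates of Algorithm~\ref{code:rl_explain} concrete, the following minimal Python sketch implements actor-critic with eligibility traces for a Gaussian policy with linear RBF-feature approximation. The one-dimensional toy transition stands in for the PSP/LIPM transition function of Fig.~\ref{fig:transition_fn}, an untruncated Gaussian replaces the truncated one, and all names and constants are illustrative rather than taken from our implementation:

\begin{verbatim}
# Minimal actor-critic with eligibility traces (cf. Algorithm 1).
import numpy as np

rng = np.random.default_rng(0)
centers = np.linspace(-1.0, 1.0, 15)       # RBF centers on a 1-D grid

def features(s):                           # RBF feature vector plus bias
    return np.append(np.exp(-0.5 * ((s - centers) / 0.2) ** 2), 1.0)

def transition(s, a):                      # toy stand-in for PSP/LIPM
    s_next = 0.9 * s + 0.1 * a + 0.05 * rng.standard_normal()
    return s_next, -(s_next ** 2)          # reward: drive state to zero

n = centers.size + 1
w = np.zeros(n)                            # critic (value) weights
th_mu, th_sig = np.zeros(n), np.zeros(n)   # actor weights: mean, log-std
alpha, beta, gamma, lam = 0.01, 0.05, 0.98, 0.9

for episode in range(200):
    s, I = rng.uniform(-1, 1), 1.0
    e_w, e_mu, e_sig = np.zeros(n), np.zeros(n), np.zeros(n)
    for step in range(50):
        phi = features(s)
        mu, sig = th_mu @ phi, np.exp(th_sig @ phi)
        a = rng.normal(mu, sig)            # sample action from policy
        s2, R = transition(s, a)
        delta = R + gamma * (w @ features(s2)) - w @ phi   # TD error
        e_w = lam * e_w + I * phi                          # critic trace
        # grad log pi of a Gaussian policy, per parameter head:
        e_mu = lam * e_mu + I * ((a - mu) / sig**2) * phi
        e_sig = lam * e_sig + I * (((a - mu)**2 / sig**2) - 1.0) * phi
        w += beta * delta * e_w
        th_mu += alpha * delta * e_mu
        th_sig += alpha * delta * e_sig
        I, s = gamma * I, s2
\end{verbatim}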
\begin{figure} \centering \includegraphics[width=\columnwidth]{A_radial_basis_exp} \caption{ {\bf Radial Basis Function Neural Network for Walking Policy Representation.} The outputs of the neural network are the means and standard deviations of each action value. The truncated normal distributions defined by the outputs are used to stochastically pick actions.} \label{fig:rb_policy} \end{figure} \subsection{Evaluation of Learned Policy} Fig.~\ref{fig:rl_check} shows that the performance of the RL-based planner increases with the number of iterations. By inspecting the posture of the robot at different CoM states, we choose the nominal apex state to be $\begin{bmatrix} y_{apex}, &\dot{x}_{apex}, &\dot{y}_{apex} \end{bmatrix}=[~0.056, ~0.2, ~0~]^{\top}$. We proceed by simulating push disturbances to the CoM based on various external forces and directions. We use the mean values of the final learned policy as the desired actions, rather than randomly picking actions from the normal distributions. The results in Fig.~\ref{fig:rl_check} show the learned policies obtained after increasing numbers of iterations and their enhancement of the walking patterns. In this figure, our simulated robot initially stands with the right foot on the ground, and we simulate push disturbances in the left, right, and forward directions of its body. For example, the \mypostimpulse ~post-impulse apex state, $[~0.05, ~0.39, ~0.33~]^{\top}$, is the result of an impulse applied in the left-forward direction of the robot's body. Red lines are interrupted within a few walking steps, indicating that the initial policies fail to find proper actions. In contrast, pink lines correspond to the final learned policy, which achieves an unlimited number of walking steps without falling given the initial push disturbances. \section{Whole Body Locomotion Control} \label{sec:wblc} We devise a new whole-body locomotion control algorithm, dubbed WBLC, that specifies tasks using a hierarchy of accelerations and uses quadratic programming to determine contact forces. Fig. \ref{fig:wblc} describes the overall process for computing the torque commands. The details are described below. \begin{figure} \centering \includegraphics[width = \columnwidth]{A_rl_check} \caption{{\bf Phase Plots of Sequential Steps from Learned Policies.} The initial state considered here corresponds to an impulsive disturbance. The candidate phase trajectories generated by the policy function reach terminal states if they are unsuccessful. As learning proceeds, the policy function finds better actions which avoid terminal states. The final policy achieves an infinite number of steps without reaching terminal conditions. } \label{fig:rl_check} \end{figure} \subsection{Acceleration-based Formula with Hierarchy} Task-level controllers are computed in operational space as acceleration commands and converted to joint accelerations using differential forward kinematics, \begin{equation} \begin{split} \dot{\mathbf{x}}_{1} &= \bm{J}_{1} \dot{\mathbf{q}},\\ \ddot{\mathbf{x}}_{1} &= \bm{J}_{1}\ddot{\mathbf{q}} + \dot{\bm{J}}_{1}\dot{\mathbf{q}}, \end{split} \end{equation} where $\mathbf{x} \in \mathbb{R}^{n}$ and $\mathbf{q} \in \mathbb{R}^{m}$ represent the task's operational coordinates and the joint positions, respectively, and $\bm{J}$ is the corresponding Jacobian matrix.
Then, the joint acceleration for a desired task acceleration, $\ddot{\mathbf{x}}_{1}^{d}$, can be resolved as \begin{equation}\label{eq:ddot_q_first} \ddot{\mathbf{q}}_{1} = \overline{\bm{J}}_{1} \left( \ddot{\mathbf{x}}_{1}^{d} - \dot{\bm{J}}_{1} \dot{\mathbf{q}} \right) = \overline{\bm{J}}_{1} \ddot{\mathbf{e}}_{1}^{d}, \end{equation} where $\overline{\bm{J}}_1$ indicates the dynamically consistent inverse of $\bm{J}_1$, i.e. \begin{equation} \overline{\bm{J}}_{1} = \bm{A}^{-1} \bm{J}_{1}^{\top} \left( \bm{J}_{1} \bm{A}^{-1}\bm{J}_{1}^{\top} \right)^{-1}, \end{equation} where $\bm{A} \in \mathbb{R}^{m \times m}$ indicates the mass/inertia matrix of the rigid body model of the robot. If we now consider the mapping of two operational tasks $\ddot{\mathbf{x}}_{1}^{d}$ and $\ddot{\mathbf{x}}_{2}^{d}$, we propose the following task hierarchy mapping \begin{equation} \ddot{\mathbf{q}} = \overline{\bm{J}}_{1} \ddot{\mathbf{e}}_{1}^{d} + \overline{\bm{J}_{2|1}} \left( \ddot{\mathbf{e}}_{2}^{d} - \bm{J}_{2} \ddot{\mathbf{q}}_{1} \right), \label{eq:two_task} \end{equation} where $\overline{\bm{J}_{2|1}}\triangleq \overline{\left( \bm{J}_{2} \bm{N}_{1} \right)}$ represents the Jacobian associated with the second task, $\bm{J}_2$, projected into the null space of the first task, $\bm{N}_1=\bm{I} - \overline{\bm{J}}_{1} \bm{J}_{1}$, which by definition is orthogonal to the Jacobian associated with the first task, $\bm{J}_{1}$. Equation (\ref{eq:two_task}) can be extended to the general $n$-task case, using the following hierarchy \begin{equation} \ddot{\mathbf{q}}_{[task]} = \overline{\bm{J}}_{1} \ddot{\mathbf{e}}_{1}^{d} + \sum_{k=2}^{n} \ddot{\mathbf{q}}_{k},\quad (n\geq 2) \label{eq:n_tasks} \end{equation} with \begin{equation} \begin{split} &\ddot{\mathbf{q}}_{k} = \overline{\bm{J}}_{k|prec(k)} \left( \ddot{\mathbf{e}}_{k}^{d} - \bm{J}_{k} \sum_{i=1}^{k-1} \ddot{\mathbf{q}}_{i} \right),\\ &\bm{J}_{k|prec(k)} = \bm{J}_{k} \bm{N}_{prec(k)}, \\ &\bm{N}_{prec(k)} = \prod_{s=1}^{k-1} \bm{N}_{s|s-1} \quad (k\geq 2, \quad \bm{N}_{1|0} = \bm{N}_1), \\ &\bm{N}_{s|s-1} = \bm{I} - \overline{\bm{J}}_{s|prec(s)} \bm{J}_{s|prec(s)} \quad (s \geq 2) \textrm{.} \end{split} \end{equation} This task hierarchy is similar, albeit not identical, to \cite{siciliano1991general}. Compared to it, our proposed method is more concise, resulting in fewer computations for similar control specifications. In particular, we do not require the computation of time derivatives of prioritized Jacobians. Details on the similarities and differences between these two works are discussed in Appendix \ref{append_b}. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{A_block_diagram} \caption{{\bf Block Diagram of the Proposed Whole-Body Locomotion Controller.} In WBLC, motion commands are combined into joint accelerations based on null-space projection methods. The CM task specification is used to compute reaction forces via QP optimization including unilateral contact and friction cone constraints.
The computed joint acceleration and reaction forces are used to solve for the torque commands, which are the final output of WBLC.} \label{fig:wblc} \end{figure} \subsection{Optimizing Reaction Forces of Underactuated Robots} Based on the desired joint acceleration given in Eq.~\eqref{eq:n_tasks}, WBLC finds torque commands via the following equation: \begin{equation} \label{eq:multi_dyn} \bm{A}(\mathbf{q}) \ddot{\mathbf{q}}^{d} + \mathbf{b}(\mathbf{q},\dot{\mathbf{q}}) + \mathbf{g} (\mathbf{q}) + \bm{J}_{r}^{\top}\mathbf{F}_{r} = \bm{U}^{\top} \bm{\tau}, \end{equation} where $\mathbf{q} \in \mathbb{R}^{n+6}$, and $\mathbf{b}(\mathbf{q},\dot{\mathbf{q}})$ and $\mathbf{g}(\mathbf{q})$ are the joint-space Coriolis/centrifugal and gravity terms, respectively. $\mathbf{F}_{r}$ and $\bm{J}_{r}$ represent the reaction forces and the corresponding contact Jacobian. $\bm{\tau} \in \mathbb{R}^{n}$ and $\bm{U}^{\top} \in \mathbb{R}^{(n+6) \times (n)}$ represent the actuator torque commands and the selection matrix mapping actuated torques to the floating-base dynamics. Note that $\ddot{\mathbf{q}}^d$ is chosen as \begin{equation}\label{eq:qqd} \ddot{\mathbf{q}}^d = \ddot{\mathbf{q}}_{[task]} + \bm{N}_{n|prec(n)} \ddot{\mathbf{q}}_{res}, \end{equation} with \begin{equation} \bm{N}_{n|prec(n)} = \bm{N}_{prec(n)} \bm{N}_{n|n-1} \end{equation} where $\ddot{\mathbf{q}}_{res}$ is a residual joint acceleration command. To find the reaction forces $\mathbf{F}_r$, we specify a centroidal momentum (CM) operational task. A CM task consists of linear and angular momentum portions. The linear part corresponds to the robot's CoM behavior, $\mathbf{F}_{cm,lin}$, and is typically used for locomotion planning. On the other hand, the angular behavior, the so-called CAM, $\mathbf{F}_{cm,ang}$, is typically set to zero. Setting the angular task to zero creates conflicts with other tasks, such as body rotational tasks. We circumvent this problem by assigning the angular behavior a lower priority than the body rotational tasks, as we will soon see. In addition, it is sometimes not possible to simultaneously fulfill the linear and angular momentum specifications. For that reason, we specify the CoM behavior as a hard constraint while relaxing the angular behavior, i.e. \begin{equation} \begin{split} \min_{\mathbf{F}_r}\quad & \mathbf{F}_{r}^{\top} \bm{Q} \mathbf{F}_{r} + \| \mathbf{F}_{cm,ang}^d - \bm{W}_{ang} \mathbf{F}_{r} \|^{2} \\[1.5mm] \textrm{subject to} \quad& \mu |\mathbf{F}_{r,z} | \geq |\mathbf{F}_{r,x}| \\ & \mu |\mathbf{F}_{r,z} | \geq |\mathbf{F}_{r,y}| \\ & \mathbf{F}_{cm,lin}^{d} - \bm{W}_{lin}\mathbf{F}_{r} = \mathbf{0} \end{split} \end{equation} where $\mathbf{F}_{cm,lin}^{d}$ and $\mathbf{F}_{cm,ang}^{d}$ are the desired linear and angular parts of the CM, $\mu$ represents the friction coefficient of the contact surfaces, $\bm{Q}$ is a weighting matrix, and $\bm{W}_{ang}$ and $\bm{W}_{lin}$ are mappings from reaction forces to angular and linear momentum behaviors. Based on the result of this optimization, $\mathbf{F}_{r}$, the desired value of the CAM task can be calculated as follows: \begin{equation}\label{eq:cm-definition} \bm{I}_{cm} \ddot{\mathbf{x}}_{cm}^{d} = \mathbf{F}_{cm}^{d} = \left[ \begin{array} {cc} \mathbf{F}_{cm,lin}^{d} & \bm{W}_{ang} \mathbf{F}_{r} \end{array} \right]^{\top}, \end{equation} where $\bm{I}_{cm}$ is a spatial inertia term.
Notice that the term $\bm{W}_{ang} \mathbf{F}_{r}$ might be different from $\mathbf{F}_{cm,ang}^{d}$ since the desired angular behavior might violate the friction cone constraints. From the above equation, we extract the desired CM acceleration command $\ddot{\mathbf{x}}_{cm}^{d}$ for use in the controller hierarchy. More concretely, $\ddot{\mathbf{x}}_{cm}^{d} = \left ( \ddot{\mathbf{x}}_{CoM}^{d} \, \mathbf{\alpha}_{ang}^{d} \right)$, where the first term within the parentheses is the desired CoM acceleration and the second term is the desired angular acceleration. Both of these commands are used separately in the hierarchy defined in Eq. \eqref{eq:n_tasks} to produce the joint acceleration command $\ddot{\mathbf{q}}_{[task]}$, which in turn yields $\ddot{\mathbf{q}}^d$ via Eq.~\eqref{eq:qqd}. Plugging this last term into Eq.~\eqref{eq:multi_dyn}, we obtain \begin{equation} \bm{A}\left( \ddot{\mathbf{q}}_{[task]} + \bm{N}_{n|prec(n)} \ddot{\mathbf{q}}_{res} \right) +\mathbf{b} + \mathbf{g} + \bm{J}_{r}^{\top}\mathbf{F}_{r} = \bm{U}^{\top} \bm{\tau}, \end{equation} which can be written in matrix form as \begin{equation} \label{eq:final_step} \left[\begin{array}{cc} \bm{U}^{\top} & -\bm{A}\bm{N}_{n|prec(n)} \end{array}\right] \left[ \begin{array}{c} \bm{\tau} \\ \ddot{\mathbf{q}}_{res} \end{array}\right] = \bm{\tau}_{[task]}, \end{equation} where we have defined the term \begin{equation} \label{eq:task_torque} \bm{\tau}_{[task]} \triangleq \bm{A}\ddot{\mathbf{q}}_{[task]} + \mathbf{b} + \mathbf{g} + \bm{J}_r^{\top}\mathbf{F}_r. \end{equation} We now have an underdetermined matrix system, which can be solved via pseudo-inversion as \begin{equation} \left[ \begin{array}{c} \bm{\tau} \\ \ddot{\mathbf{q}}_{res} \end{array}\right] = \left[\begin{array}{cc} \bm{U}^{\top} & -\bm{A}\bm{N}_{n|prec(n)} \end{array}\right]^{+} \bm{\tau}_{[task]} \label{eq:final_cmd} \end{equation} where $(.)^{+}$ represents the Moore-Penrose pseudo-inverse operation. \section{Time Derivative of Jacobian} \label{sec:jdot} \begin{figure} \centering \includegraphics[width=\columnwidth]{A_jdot_linkage} \caption{{\bf Tracking performance comparison with and without the term $\dot{\bm{J}}\dot{\mathbf{q}}$.} A three-DoF planar manipulator is used to control its end effector to follow a vertical line (red lines) with a 2 Hz frequency (blue dashed lines are the end-effector path). The tracking results demonstrate that the (a) controller, which accounts for $\dot{\bm{J}}\dot{\mathbf{q}}$, outperforms the (b) controller.} \label{fig:jdot_linkage} \end{figure} The ability to efficiently compute the time derivative of Jacobian operators for fast operational space control has been overlooked. However, it plays an important role in robustifying fast movements. Fig.~\ref{fig:jdot_linkage} shows that the tracking performance of a simple serial manipulator is significantly enhanced by using the term $\dot{\bm{J}}\dot{\mathbf{q}}$ in Operational Space Control (OSC), where $\bm{J}$ is the Jacobian of the end effector and $\dot{\mathbf{q}}$ is the vector of joint velocities. Notice that $\dot{\bm{J}}$ is used in our WBLC in Equation~\eqref{eq:ddot_q_first}. The commanded task is to follow a vertical line defined by the function $\mathbf{x}^{d} = [~0.62, ~0.23 \sin(4\pi t)~]^{\top}$. The OSC input is \begin{equation} \ddot{\mathbf{x}} = \begin{bmatrix} 0 \\ -36.32 \sin(4\pi t) \end{bmatrix} + K_p (\mathbf{x}^{d} - \mathbf{x}) + K_v(\dot{\mathbf{x}}^{d} - \dot{\mathbf{x}}).
\end{equation} We will use Lie group theory to compute the derivatives of point task Jacobians~\cite{kimlie}. We implement this functionality using the Rigid Body Dynamics Library\footnote{Open-source \url{https://rbdl.bitbucket.io} }, which is a popular open-source dynamics toolbox. In addition, we devise a new method to compute the time derivative of the CM Jacobian, which cannot be obtained using Lie group theory. \subsection{Time Derivative of Point Jacobian} \label{sec:time_der_pt_jacobian} Lie group operators provide convenient analytic derivations for Jacobian computations. The $SE(3)$ orientation and position representation of a rigid body in three-dimensional space consists of an orientation matrix ($\bm{R}$) and a position vector ($\mathbf{p}$). It can also be represented via the $4\times 4$ homogeneous transformation, \begin{equation} \bm{T}_{g,i} = \left[ \begin{array}{cc} \bm{R}_{g,i} & \mathbf{p}_{g,i} \\ 0 & 1 \end{array} \right], \end{equation} where $\bm{R}_{g,i}$ and $\mathbf{p}_{g,i}$ represent the orientation and position of the $i^{th}$ frame in global coordinates, respectively (see Fig.~\ref{fig:openchain}). The velocity representation in $se(3)$ consists of the 6-dimensional vector, $[\mathbf{w}, \mathbf{v} ]^{\top}$, and can be written in the $4\times 4$ homogeneous form, \begin{equation} \bm{V}_i \triangleq \left[ \begin{array}{cc} [\mathbf{w}_i]^{\times} & \mathbf{v}_i \\ 0 & 0 \end{array} \right], \end{equation} where \begin{equation} [\mathbf{w}_i]^{\times} \triangleq \begin{bmatrix} 0 & -w_{i,3} & w_{i,2} \\ w_{i,3} & 0 & -w_{i,1} \\ -w_{i,2} & w_{i,1} & 0 \end{bmatrix}. \end{equation} Here, $w_{i,1}, w_{i,2}, w_{i,3}$ are the relative angular velocities about the three Cartesian axes, and $\mathbf{v}_i$ is the linear velocity. It can be shown that $\bm{V}_i =\bm{T}_{g, i}^{-1} \dot{\bm{T}}_{g,i}$, which corresponds to the generalized velocity seen from the $i^{th}$ frame. The velocity in the global frame associated with $\bm{V}_i$ can be obtained via the adjoint map, \begin{equation} \begin{split} {\rm Ad}_{{T}_{g,i}} \left( \bm{V}_i\right) &= \bm{T}_{g,i} \bm{V}_i \bm{T}_{g,i}^{-1} \\ &= \bm{T}_{g,i} \bm{T}_{g,i}^{-1} \dot{\bm{T}}_{g,i} \bm{T}_{g,i}^{-1}\\ & = \dot{\bm{T}}_{g,i} \bm{T}_{g,i}^{-1}. \end{split} \end{equation} The adjoint mapping operator is defined as \begin{equation}\label{eq:adjoint_mtx} {\rm Ad}_{T_{i,j}} \triangleq \begin{bmatrix} \bm{R}_{i,j} & 0 \\ [\mathbf{p}_{i,j}]^{\times} \bm{R}_{i,j} & \bm{R}_{i,j} \end{bmatrix}, \end{equation} where $\bm{R}_{i,j}$ and $\mathbf{p}_{i,j}$ are the relative rotation and position between frames $i$ and $j$. The generalized velocity of point $p$ in local coordinates (see Fig.~\ref{fig:openchain}) can be represented as \begin{equation} \begin{split} \mathbf{V}_p & = {\rm Ad}_{{T}_{p,n}} \mathbf{V}_n \\ &= {\rm Ad}_{{T}_{p,n-1}}\mathbf{V}_{n-1} + {\rm Ad}_{{T}_{p,n}}\mathbf{S}_n \dot{\mathbf{q}}_n \\ & \qquad \vdots \\ &= {\rm Ad}_{{T}_{p,0}}\mathbf{V}_{0} + \sum_{i=1}^{n} {\rm Ad}_{{T}_{p,i}}\mathbf{S}_{i} \dot{\mathbf{q}}_{i}. \end{split} \end{equation} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{A_linkage_explain} \caption[Multi-DoF Openchain]{{\bf Multi-DoF Openchain.} The open chain consists of $n$ joints.
At the end of the chain, the end-effector is attached to link $n$.} \label{fig:openchain} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{A_body_shaking} \caption{{\bf{Body shaking test using different task hierarchies.}} Whole-body movements generated via WBLC change depending on the task hierarchy. (a) contains simulations using two task hierarchy sets. (b) shows the quaternion tracking error of the body orientation task. (c) shows the torque norms generated by WBLC in each case.} \label{fig:shaking_body} \end{figure*} Because frame $\{0\}$ is the global (inertial) frame, $\mathbf{V}_{0}$ is equal to zero. On the other hand, $\mathbf{S}_{i}$ maps joint velocities to $\mathbb{R}^{6}$; e.g., $\mathbf{S}_i = [~0, 0, 1, 0, 0, 0~]^{\top}$ means that joint $i$ is a revolute joint rotating about the local $z$ axis. The first three entries of $\mathbf{S}_{i}$ represent rotational axes, while the last three entries represent prismatic axes. It can be shown that the Jacobian of a point $p$ is equal to \begin{equation} \bm{J}_p = \begin{bmatrix} {\rm Ad}_{{T}_{p,1}} \mathbf{S}_1 & {\rm Ad}_{{T}_{p,2}}\mathbf{S}_2 &\cdots & {\rm Ad}_{{T}_{p,n}}\mathbf{S}_n \end{bmatrix}. \end{equation} Furthermore, let us break down the adjoint operators into the following chain operation \begin{equation} {\rm Ad}_{{T}_{p,i}} \mathbf{S}_i = {\rm Ad}_{{T}_{p, p'}} {\rm Ad}_{{T}_{p', n}} {\rm Ad}_{{T}_{n, i}} \mathbf{S}_i. \end{equation} Here $p'$ is a virtual point at the position of $p$ but with its orientation fixed to frame $n$; as such, it represents just a position offset. In this case $\bm{T}_{p', n}$ is constant, and the $i$-th column of the time derivative of the Jacobian can be resolved as \begin{equation} \begin{split} \dot{\bm{J}}_{p,i} & = \dot{\overbrace{\left\{{\rm Ad}_{{T}_{p,i}}\mathbf{S}_i \right\}}} \\ & = \dot{\overbrace{\left\{{\rm Ad}_{{T}_{p, p'}}\right\}}} {\rm Ad}_{{T}_{p', n}} {\rm Ad}_{{T}_{n, i}} \mathbf{S}_i \\ & \quad + {\rm Ad}_{{T}_{p, p'}} {\rm Ad}_{{T}_{p', n}} \dot{\overbrace{\left\{ {\rm Ad}_{{T}_{n, i}} \mathbf{S}_{i} \right\}}} \\ &= {\rm Ad}_{{T}_{p,p'}} {\rm ad}_{{V}_{p,p'}} {\rm Ad}_{{T}_{p', n}} {\rm Ad}_{{T}_{n, i}} \mathbf{S}_i \\ &\quad + {\rm Ad}_{{T}_{p,p'}} {\rm Ad}_{{T}_{p',n}} \left\{{\rm Ad}_{{T}_{n,i}} {\rm ad}_{{V}_{n,i}} \mathbf{S}_i + {\rm Ad}_{{T}_{n,i}} (\dot{\mathbf{S}}_i ) \right\}. \end{split} \end{equation} Here we have used the identity $\dot{{\rm Ad}}_{{T}} = {\rm Ad}_{{T}}\, {\rm ad}_{\bm{V}}$ with $\bm{V} = \bm{T}^{-1}\dot{\bm{T}}$, where ${\rm ad}_{\bm{V}}$ denotes the Lie bracket operator, ${\rm ad}_{\bm{V}}\, \bm{X} \triangleq \bm{V}\bm{X} - \bm{X}\bm{V}$. \subsection{Time Derivative of the Centroidal Momentum Jacobian} The previous equations for the time derivative of point Jacobians are not applicable to the CM Jacobian. The latter can be obtained from the CM task definition of Eq.~\eqref{eq:cm-definition}: its linear part is simply the weighted sum of the time derivatives of each link's CoM Jacobian. However, the angular part is not straightforward.
Instead of finding $\dot{\bm{J}}_{cm}$, we can find the product of $\dot{\bm{J}}_{cm}$ and the joint velocities, $\dot{\mathbf{q}}$, via operational space dynamics: \begin{equation} \bm{\mathit{\Lambda}}_{cm}(\mathbf{q})\ddot{\mathbf{x}} + \bm{\mu}_{cm}(\mathbf{q}, \dot{\mathbf{q}}) + \mathbf{p}_{cm}(\mathbf{q}) = \mathbf{F}_r. \end{equation} Here, $\bm{\mathit{\Lambda}}_{cm}$, $\bm{\mu}_{cm}$, and $\mathbf{p}_{cm}$ are the inertia matrix, the Coriolis and centrifugal forces, and the gravitational force of the CM operational task, respectively. Since there are no Coriolis and centrifugal effects in CM space, $\bm{\mu}_{cm}$ is zero. Thus, $\dot{\bm{J}}_{cm}\dot{\mathbf{q}}$ must be equal to $\bm{J}_{cm}\bm{A}^{-1}\mathbf{b}$: \begin{equation} \dot{\bm{J}}_{cm}\dot{\mathbf{q}} = \bm{J}_{cm}\bm{A}^{-1}\mathbf{b}. \end{equation} All terms in $\bm{J}_{cm}\bm{A}^{-1}\mathbf{b}$ are easily computable using off-the-shelf dynamics libraries. \section{Results} \label{sec:result} To verify the performance of the proposed methods, we conduct three demonstrations: 1) body orientation control while changing the task hierarchy, 2) dynamic locomotion with directional change, and 3) push recovery from various directions while walking. Toward these investigations, we implement our algorithms on a simulation of the Valkyrie humanoid robot and test them using the physics-based simulator srLib. Because our focus is on locomotion, we fix the finger and wrist joints, bringing the total number of joints to 28. To incorporate floating-body dynamics, prismatic and ball joints are introduced to connect Valkyrie's pelvis to a fixed frame. In the simulation environment, we use a friction coefficient of 0.8 between the ground and the robot's feet. On the other hand, the friction coefficient in the friction cone constraints used in WBLC is set to 0.65 to be conservative. In case our contact control solver fails to find proper reaction forces, we allow for solutions that violate the friction constraints by relaxing the friction coefficient to a value of 1.75. The resulting control solution implies that slip occurs, but only for very short times (in general less than 0.005 $\si[per-mode=symbol]{\second}$). This simple technique does not incur an increase in computational complexity while greatly enhancing the robustness of WBLC with respect to external disturbances. \subsection{Body Orientation Control with Various Task Hierarchies} Body shaking is a difficult skill that we use here to study the dynamic performance of WBLC tasks. In traditional humanoid control methods, the CoM and CAM tasks are controlled within the same priority level. We propose to split them via WBLC into different hierarchy levels. To demonstrate this feature, we define the following six tasks: \begin{itemize} \item[$\cdot$] $\ddot{\mathbf{x}}_{1} \in \mathbb{R}^{3}$: Linear CoM \item[$\cdot$] $\ddot{\mathbf{x}}_{2} \in \mathbb{R}^{3}$: Centroidal Angular Momentum (CAM) \item[$\cdot$] $\ddot{\mathbf{x}}_{3} \in \mathbb{R}^{3}$: Body Orientation \item[$\cdot$] $\ddot{\mathbf{x}}_{4} \in \mathbb{R}^{22}$: Partial Joint Posture (all joints except shoulder pitch, shoulder roll, and knee pitch) \item[$\cdot$] $\ddot{\mathbf{x}}_{5} \in \mathbb{R}^{3}$: Pelvis Orientation \item[$\cdot$] $\ddot{\mathbf{x}}_{6} \in \mathbb{R}^{28}$: Full Joint Posture \end{itemize} Most of the tasks above are self-explanatory. We introduce a partial joint posture task consisting of keeping the initial joint positions for all robot joints except the shoulder pitch and roll and the knee pitch.
This task is used for the sole purpose of testing performance when multiple tasks conflict. In particular, the partial joint posture conflicts with the CAM task within the above task set, and vice versa. For our test, we use two hierarchies: \begin{equation} \begin{split} \mathbb{H}_{1} &= \left\{ \ddot{\mathbf{x}}_{1} \rightarrow \ddot{\mathbf{x}}_{2} \rightarrow \ddot{\mathbf{x}}_{3} \rightarrow \ddot{\mathbf{x}}_{4} \rightarrow \ddot{\mathbf{x}}_{5} \rightarrow \ddot{\mathbf{x}}_{6} \right\} \\ \mathbb{H}_{2} &= \left\{ \ddot{\mathbf{x}}_{1} \rightarrow \ddot{\mathbf{x}}_{3} \rightarrow \ddot{\mathbf{x}}_{4} \rightarrow \ddot{\mathbf{x}}_{5} \rightarrow \ddot{\mathbf{x}}_{2} \rightarrow \ddot{\mathbf{x}}_{6} \right\} \end{split} \end{equation} The second hierarchy, $\mathbb{H}_{2}$, is more appropriate than the first one, $\mathbb{H}_{1}$, for achieving accurate control of the body shaking (orientation) task. This is accomplished by assigning higher priority to the body orientation task and moving the CAM task backwards in the hierarchy. As shown in Fig. \ref{fig:shaking_body} (a), changing the hierarchy levels causes different whole-body motions. Fig. \ref{fig:shaking_body} (b) shows the body orientation task error for the two task hierarchies. The body orientation performance of $\mathbb{H}_{2}$ is better than that of $\mathbb{H}_{1}$. This can be seen in the interval from 4.0 $\si[per-mode=symbol]{\second}$ to 4.5 $\si[per-mode=symbol]{\second}$ in Fig. \ref{fig:shaking_body} (b). In addition, the different hierarchies cause not only different movements but also different torque profiles. As shown in Fig. \ref{fig:shaking_body} (c), higher torques are needed for $\mathbb{H}_{2}$ than for $\mathbb{H}_{1}$. \subsection{Dynamic Walking with Directional Change} Walking can be broken down into three phases: double contact, right foot contact, and left foot contact. To represent these phases, we define the following task hierarchy in WBLC: \begin{itemize} \item[$\cdot$] $\ddot{\mathbf{x}}_{1} \in \mathbb{R}^{3}$: Linear CoM position \item[$\cdot$] $\ddot{\mathbf{x}}_{2} \in \mathbb{R}^{3}$: Pelvis Orientation \item[$\cdot$] $\ddot{\mathbf{x}}_{3} \in \mathbb{R}^{3}$: Body Orientation \item[$\cdot$] $\ddot{\mathbf{x}}_{4} \in \mathbb{R}^{3}$: (for the single contact phases) Foot Orientation \item[$\cdot$] $\ddot{\mathbf{x}}_{5} \in \mathbb{R}^{3}$: (for the single contact phases) Foot Position \item[$\cdot$] $\ddot{\mathbf{x}}_{6} \in \mathbb{R}^{6}$: Neck and Torso Joint Posture \item[$\cdot$] $\ddot{\mathbf{x}}_{7} \in \mathbb{R}^{3}$: Centroidal Angular Momentum \item[$\cdot$] $\ddot{\mathbf{x}}_{8} \in \mathbb{R}^{10}$: Arms Joint Posture \end{itemize} To produce swing foot trajectories, we define third-degree B-splines, which guarantee acceleration continuity. The orientation coordinates of the robot's body, pelvis, and feet are described using quaternions. For each step, these orientation tasks are commanded to smoothly switch from the current frame to the next one. Given the initial CoM states, our locomotion planner computes the foot positions and their timing while satisfying the desired walking directional changes. In our test, shown in Fig. \ref{fig:turn}, Valkyrie first takes 12 steps while continuously changing its walking direction by $18.8^{\si[per-mode=symbol]{\degree}}$ per step. After that, Valkyrie takes 5 forward steps with no directional change. Then, Valkyrie takes another 12 steps while changing direction by $-18.8^{\si[per-mode=symbol]{\degree}}$ per step.
The user only specifies the walking directions, while RL-PSP automatically finds the foot positions and their timing using the learned policy. The learned policy consists only of switching states and step locations. The desired position, velocity, and acceleration of the CoM are computed at runtime with the analytic equations of the LIP model. \begin{figure} \centering \includegraphics[width=0.85\columnwidth]{A_turning_result} \caption{{\bf Continuous walking directional change.} Valkyrie shows a complex dynamic walking pattern involving changes of walking direction. (a) shows a top view of Valkyrie and its walking path. (b) shows how the robot's CoM state is mapped to the next local frame. Local frames rotate with the desired walking direction. For each step, the stance foot becomes the origin of the local frame, and the orientation of the frame is aligned with the desired walking direction. The previous switching CoM state is projected into the current local frame, and the planner finds the foot placement with the new state.} \label{fig:turn} \end{figure} \newcommand*\myswitchyellow{{\protect \includegraphics[width=0.8em]{A_st_state_yellow}}} \subsection{Push Recovery while Walking} \begin{figure} \centering \includegraphics[width = \columnwidth]{A_push_recovery} \caption{{\bf Robustness study.} (a) shows a walking behavior without external disturbance. (b) When an external impulse of 520 $\si[per-mode=symbol]{\newton}$ and 0.1 $\si[per-mode=symbol]{\second}$ duration is exerted on the robot's pelvis, Valkyrie replans its walking trajectories using the learned policy and maintains its balance without stopping.} \label{fig:replan} \end{figure} To validate push recovery, we conduct simulated experiments under large external disturbances applied in various directions. Although WBLC is robust to small deviations of the CoM trajectory, for larger external disturbances we rely on the learned recovery policies described in the previous sections. When the norm of the CoM state error, \begin{equation} \mathbf{error}= \begin{bmatrix} \mathbf{x}^{d} - \mathbf{x}\\ 0.5 (\dot{\mathbf{x}}^{d} - \dot{\mathbf{x}}) \end{bmatrix}, \end{equation} exceeds a threshold of $0.05~\si[per-mode=symbol]{\meter}$ for longer than 0.02 $\si[per-mode=symbol]{\second}$, our planner computes a new trajectory starting from the current CoM state. Instead of setting the new CoM control goal to the current (disturbed) CoM state, we have found that it is better to define a controller goal, $\mathbf{x}^{new}$, equal to \begin{equation} \mathbf{x}^{new} = \gamma\mathbf{x}^{d} + (1-\gamma)\mathbf{x}, \end{equation} where $\gamma$ can be selected heuristically; we use a value of 0.8. In our tests, we push Valkyrie while it is dynamically walking, using various disturbance forces applied for a duration of 0.1 $\si[per-mode=symbol]{\second}$. The maximum disturbance that we apply to Valkyrie is a 520 $\si[per-mode=symbol]{\newton}$ force sustained for 0.1 $\si[per-mode=symbol]{\second}$. The results are shown in Fig.~\ref{fig:replan}, compared to the undisturbed trajectories. The CoM phase trajectory in the lateral plane shown in Fig.~\ref{fig:replan} (b) shows that the planner is able to find a new trajectory after an external impulse is applied. The time to compute 15 steps after the disturbance is less than 1 $\si[per-mode=symbol]{\milli \second}$ using a dual-core 3.0 GHz Intel i7 processor.
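As a minimal illustration of this replanning trigger and goal blending, the sketch below (Python; the threshold, hold time, and $\gamma$ follow the values in the text, while all function and variable names are hypothetical) checks the CoM error norm and computes the blended goal:

\begin{verbatim}
# Sketch of the replanning trigger and blended CoM goal described above.
# x_des, x are CoM positions; xd_des, xd are CoM velocities (numpy arrays).
import numpy as np

ERR_THRESHOLD = 0.05   # m, error-norm threshold from the text
HOLD_TIME = 0.02       # s, the error must persist this long
GAMMA = 0.8            # blending weight gamma for the new goal

def com_error_norm(x_des, x, xd_des, xd):
    # Norm of the stacked error [position error; 0.5 * velocity error].
    err = np.concatenate([x_des - x, 0.5 * (xd_des - xd)])
    return np.linalg.norm(err)

def blended_goal(x_des, x, gamma=GAMMA):
    # New controller goal between the nominal and the disturbed CoM state.
    return gamma * x_des + (1.0 - gamma) * x

# Example: a lateral push has displaced the CoM by 8 cm.
x_des, x = np.array([0.0, 0.0]), np.array([0.0, 0.08])
xd_des, xd = np.array([0.2, 0.0]), np.array([0.2, 0.1])
if com_error_norm(x_des, x, xd_des, xd) > ERR_THRESHOLD:
    # (In the controller this condition must also hold for HOLD_TIME.)
    print("replan from goal:", blended_goal(x_des, x))
\end{verbatim}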
At the moment the replanning process occurs, we also find a new swing foot trajectory that transitions from the original swing trajectory to the new goal. \begin{figure*} \includegraphics[width=2.0\columnwidth]{A_recovery_list} \caption{{\bf Details of push recovery given various external forces.} In this test we demonstrate the ability of Valkyrie to recover from pushes of various magnitudes and directions. In (a), Valkyrie's feet collide with each other, but the planner finds another path that allows it to recover.} \label{fig:multi_test} \end{figure*} Fig.~\ref{fig:multi_test} shows the results of push recoveries while dynamically walking when subjected to various external forces. In all cases, Valkyrie succeeds in sustaining the disturbances and continues walking without stopping. The robustness capabilities in this test are competitive with the results of \cite{Khadiv:2017th}, which is not based on statistical learning. In contrast to this state of the art, thanks to our use of offline learning, our planner is able to come up with numerous steps almost instantaneously with respect to the walking time frame. \section{Conclusion and Discussion} In this paper, we propose an RL-based robust locomotion planner and a new WBC, dubbed WBLC. By utilizing PSP in the RL formulation, we can quickly find locomotion policies for 3D walking. The newly developed WBLC takes into account realistic contact and friction cone constraints. At the same time, WBLC maintains task priorities using projection operators, a capability missing in previous QP-based WBCs. Overall, WBLC simultaneously exploits the benefits of QP-based and projection-based WBCs, achieving versatility and computational efficiency. Another benefit of our methods is the planning speed. Our locomotion planner finds a multistep walking trajectory almost instantaneously, faster than the state of the art. By devising the replanning process during dynamic walking, robots can quickly react to external forces and achieve significant robustness. One interesting aspect of our planning algorithm is the value function used in the learning process. In the future, we could use this value function as an indicator of walking risk given the disturbed states. Many researchers have suggested indicators of locomotion quality. For example, the ZMP \cite{VUKOBRATOVIC:2004ej} and the CP \cite{Pratt:2006ct} are indicators of balance stability, but they do not take into account other important information such as kinematic constraints or swing time limits. Recently, an allowable CoM acceleration region \cite{Caron:2015cm} has been proposed for multi-contact stability. However, it gives no indication of kinematic or dynamic limitations such as step size or swing time. In contrast, our value function takes into account some kinematic and dynamic constraints, which could ultimately make it a versatile metric for walking quality evaluation. In the future, we will experiment with more complex functions to represent the learned values and policies (e.g., deep neural networks). In this paper, we have focused on finding simple walking patterns. However, complex neural networks, which can represent highly nonlinear and abstract behaviors, can enable more versatile planners. For instance, future planners may be able to traverse rough terrain by exploiting various locomotion modes such as walking, running, or jumping. We also plan to implement the proposed algorithms on real systems and evaluate their performance.
In our previous work \cite{Kim:2016jg}, we showed agile bipedal balancing with a point-foot biped driven by series elastic actuators. Since that system is highly unstable by nature, we did not apply external disturbances. We believe that the robustness capabilities we have outlined in this paper may allow us to accomplish sophisticated behaviors on real testbeds. \appendices \section{Analytic Solution of the Phase Space Planner} \label{sec:append_psp} When we constrain the PIPM dynamics to a piecewise linear height surface, $z = a(x-p_x) + b$, we can find $t_{switch}$ and $p_y$ without numerical integration or bisection search because the system of equations becomes linear, resulting in the following CoM behavior: \begin{equation} \label{eq:x_state} \begin{split} x(t) & = A e^{\omega t} + B e^{-\omega t} + p_x, \\ \dot{x}(t) & = \omega (A e^{\omega t} - B e^{-\omega t} ), \end{split} \end{equation} where \begin{equation} \begin{split} \omega &= \sqrt{\frac{g}{a p_x + b}}, \\[2mm] A &= \frac{1}{2}\Big( (x_{0} - p_x) + \frac{1}{\omega}\dot{x}_{0} \Big), \\[2mm] B &= \frac{1}{2}\Big( (x_{0} - p_x) - \frac{1}{\omega}\dot{x}_{0} \Big). \end{split} \end{equation} Note that these equations take the same form in the $y$ direction. Based on Eq.~\eqref{eq:x_state}, we can find an analytical solution for PSP, summarized in Algorithm \ref{code:Phase_Space}. $\mathbf{x}_{1}$, $\mathbf{y}_{1}$, $\mathbf{x}_{apex,2}$, and $\mathbf{x}_{switch}$ are vector quantities corresponding to the variables $(x_1,\dot{x}_1)$, $(y_1,\dot{y}_1)$, $(x_{apex,2}, \dot{x}_{apex,2})$, and $(x_{switch},\dot{x}_{switch})$. \begin{algorithm} \caption{Computation of $t_{switch}$, $p_y$}\label{code:Phase_Space} \SetKwFunction{FindXSwitchingState}{Find\_Switching\_State} \SetKwFunction{GetTimeAtState}{Get\_Time} \SetKwFunction{FindPy}{Find\_Py} \SetKwFunction{Integration}{GetState} \KwIn{ $\mathbf{x}_1, \mathbf{y}_1, p_x, \dot{x}_{apex}, \dot{y}_{apex}$} \KwResult{ $(t_{switch}, p_y)$ } \vspace{1.5mm} $ \mathbf{x}_{switch} \gets$ \FindXSwitchingState{$\mathbf{x}_1, p_x, \dot{x}_{apex}$} ;\\ \tcp*[f]{Eq.\eqref{eq:vel_x}, \eqref{eq:x_switch}} \vspace{1mm}\\ $ t_{switch} \gets $ \GetTimeAtState{$\mathbf{x}_1, \mathbf{x}_{switch}$} \tcp*[r]{Eq.\eqref{eq:t_eqn}} \vspace{1.0mm} $ t_{apex} \gets$ \GetTimeAtState{$\mathbf{x}_{switch}, p_x, \mathbf{x}_{apex}$} \tcp*[r]{Eq.\eqref{eq:t_eqn}} \vspace{1.0 mm} $\mathbf{y}_{switch} \gets$ \Integration{$\mathbf{y}_1, t_{switch}$} \tcp*[r]{Eq.\eqref{eq:x_state}} \vspace{1.0 mm} $p_y$ $\gets$ \FindPy{$\mathbf{y}_{switch}, \dot{y}_{apex}, t_{apex}$} \tcp*[r]{Eq.\eqref{eq:yp}} \end{algorithm} Let us focus on obtaining the step switching time. We can easily manipulate Eq.~\eqref{eq:x_state} to analytically solve for the time variable, \begin{equation} \begin{split} & x + \frac{1}{\omega}\dot{x} = 2A e^{\omega t} + p_x, \\[2mm] & x + \frac{1}{\omega}\dot{x} - p_x = 2A e^{\omega t}, \end{split} \end{equation} which renders \begin{equation} \label{eq:t_eqn} t = \frac{1}{\omega} \ln \Big( \frac{x + \frac{1}{\omega}\dot{x} - p_x}{2 A} \Big). \end{equation} To find the dynamics, $\dot{x} = f(x)$, which will lead to the switching state solution, let us remove the $t$ term by plugging Eq.~\eqref{eq:t_eqn} into Eq.~\eqref{eq:x_state}.
\begin{equation} x = A \frac{x + \frac{\dot{x}}{\omega} - p_x}{2 A} + B \frac{2 A}{x + \frac{\dot{x}}{\omega} - p_x } + p_x \end{equation} \begin{align} \frac{1}{2}(x - p_x - \frac{\dot{x}}{\omega}) &= \frac{2 AB}{x + \frac{\dot{x}}{\omega} - p_x} \\ (x - p_x)^2 - \Big(\frac{\dot{x}}{\omega}\Big)^2 &= 4 AB \end{align} By performing some algebra we get, \begin{equation} \begin{split} \dot{x}^2 &= \omega^2 ( (x - p_x)^2 - 4 AB ), \\[2mm] \dot{x}^2 &= \omega^2 \Big( (x - p_x)^2 - (x_{0} - p_x)^2 \Big) + \dot{x}_0^2, \end{split} \end{equation} which yields, \begin{equation} \label{eq:vel_x} \dot{x} = \pm \sqrt{\omega^2 \Big( (x - p_x)^2 - (x_{0} - p_x)^2 \Big) + \dot{x}_{0}^2 }. \end{equation} Given two phase trajectories associated with consecutive walking steps, $p_{x,1}$ and $p_{x,2}$, and assuming the robot walks forward, i.e., $\dot{x}_{switch}$ is positive, we calculate the phase space intersection point via continuity of velocities from Eq.~\eqref{eq:vel_x}: \begin{equation} \label{eq:x_switch} \begin{split} x_{\rm switch}&=\frac{1}{2}\Big( \frac{C}{p_{x,2} - p_{x,1}} + (p_{x,1} + p_{x,2}) \Big)\\ C&=(x_{{0},1}-p_{x,1})^2 - (x_{{0},2} - p_{x,2})^2 + \frac{\dot{x}_{{0},2}^2 - \dot{x}_{{0},1}^2}{\omega^2} \end{split} \end{equation} We can now find the step switching time by plugging the computed switching position into Eqs.~\eqref{eq:vel_x} and~\eqref{eq:t_eqn}. In addition, we can obtain the time at which the apex velocity occurs from Eq.~\eqref{eq:t_eqn}. The final step is to find the $y$-directional foot placement. We first calculate $\mathbf{y}_{switch}$ by plugging $t_{switch}$ into the $y$-directional state equation, which has the same form as Eq.~\eqref{eq:x_state}. Then, by using the equality $\dot{y} (t_{\rm apex})=\dot{y}_{apex}$, we can find $p_{y}$, \begin{equation} \label{eq:yp} \begin{split} p_y &= \frac{\dot{y}_{apex}-C}{D},\\[2mm] C &= \frac{\omega}{2} \big( (y_{switch}+\frac{\dot{y}_{switch}}{\omega})e^{\omega t_{apex}} - \\ &\quad\quad\quad\quad (y_{switch}-\frac{\dot{y}_{switch}}{\omega})e^{-\omega t_{apex}}\big)\\ D &= \frac{\omega}{2}(e^{-\omega t_{apex}} - e^{\omega t_{apex}}) \end{split} \end{equation} After calculating $p_y$, we can easily get $y_{apex}$ and $\dot{y}_{apex}$ by using Eq.~\eqref{eq:x_state}. \section{Equivalent Hierarchy-based Joint Acceleration} \label{append_b} The joint velocity associated with an operational task ${\mathbf{x}}_{1}$ is \begin{equation} \begin{split} \dot{\mathbf{q}} = \bm{J}_{1}^{+} \dot{\mathbf{x}}_{1} + \bm{N}_{1} \dot{\mathbf{q}}_{0}. \label{eq:qdot_first} \end{split} \end{equation} The definition of the null-space projection matrix in terms of the pseudoinverse, together with its time derivative, yields the following expressions: \begin{equation} \begin{split} \bm{N}_{1} = \bm{I} - \bm{J}_{1}^{+} \bm{J}_{1} &\Rightarrow \dot{\bm{N}}_{1} = -\dot{\bm{J}}_{1}^{+} \bm{J}_{1} - \bm{J}_{1}^{+} \dot{\bm{J}}_{1}.
\end{split} \end{equation} The resulting joint acceleration can be obtained by differentiating equation (\ref{eq:qdot_first}) with respect to time, as described in \cite{siciliano1991general}: \begin{equation} \begin{split} \ddot{\mathbf{q}} &= \bm{J}_{1}^{+} \ddot{\mathbf{x}}_{1} + \dot{\bm{J}}_{1}^{+}\dot{\mathbf{x}}_{1} + \dot{\bm{N}}_{1} \dot{\mathbf{q}}_{0} + \bm{N}_{1}\ddot{\mathbf{q}}_{0} \\ &= \bm{J}_{1}^{+} \ddot{\mathbf{x}}_{1} + \dot{\bm{J}}_{1}^{+} \bm{J}_{1} \dot{\mathbf{q}} + \dot{\bm{N}}_{1} \dot{\mathbf{q}}_{0} + \bm{N}_{1}\ddot{\mathbf{q}}_{0}. \end{split} \end{equation} Using the equality $\dot{\bm{J}}_{1}^{+} \bm{J}_{1} = - \dot{\bm{N}}_{1} - \bm{J}_{1}^{+} \dot{\bm{J}}_{1}$, we get \begin{equation} \begin{split} \ddot{\mathbf{q}} &= \bm{J}_{1}^{+} \ddot{\mathbf{x}}_{1} -\bm{J}_{1}^{+} \dot{\bm{J}}_{1} \dot{\mathbf{q}} - \dot{\bm{N}}_{1} \dot{\mathbf{q}} + \dot{\bm{N}}_{1} \dot{\mathbf{q}}_{0} + \bm{N}_{1}\ddot{\mathbf{q}}_{0}. \\ \end{split} \label{eq:ddotq_1} \end{equation} This allows us to simplify equation (\ref{eq:ddotq_1}) to \begin{equation} \begin{split} \ddot{\mathbf{q}} &= \bm{J}_{1}^{+}\left( \ddot{\mathbf{x}}_{1} - \dot{\bm{J}}_{1} \dot{\mathbf{q}}\right) -\dot{\bm{N}}_{1} \bm{J}_{1}^{+}\dot{\mathbf{x}}_{1} + \bm{N}_{1} \ddot{\mathbf{q}}_0. \end{split} \label{eq:qdot_middle} \end{equation} If we consider a secondary task $\mathbf{x}_2$, the term $\dot{\mathbf{q}}_0$ becomes \begin{equation} \dot{\mathbf{q}}_{0} = \left( \bm{J}_{2} \bm{N}_{1} \right)^{+} \left( \dot{\mathbf{x}}_{2} - \bm{J}_{2} \bm{J}_{1}^{+} \dot{\mathbf{x}}_{1} \right). \label{eq:qdot_0} \end{equation} Because it can be shown that $\bm{N}_{1} \left( \bm{J}_{2} \bm{N}_{1} \right)^{+} = \left( \bm{J}_{2} \bm{N}_{1} \right)^{+}$, we get \begin{equation} \begin{split} \dot{\mathbf{q}}_{0} &= \bm{N}_{1} \dot{\mathbf{q}}_{0} \\ \ddot{\mathbf{q}}_0 &= \bm{N}_{1} \ddot{\mathbf{q}}_0 + \dot{\bm{N}}_{1} \dot{\mathbf{q}}_0 \textrm{.} \label{eq:qdot_nqdot} \end{split} \end{equation} From Eq. (\ref{eq:qdot_0}), the term $\ddot{\mathbf{q}}_{0}$ becomes \begin{equation} \begin{split} \ddot{\mathbf{q}}_{0} &= \bm{J}_{2|1}^{+} \left( \ddot{\mathbf{x}}_{2} - \dot{\bm{J}}_{2} \bm{J}_{1}^{+} \dot{\mathbf{x}}_{1} - \bm{J}_{2} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} - \bm{J}_{2} \bm{J}_{1}^{+} \ddot{\mathbf{x}}_{1} \right) \\ &+ \dot{\bm{J}}_{2|1}^{+}\left( \dot{\mathbf{x}}_{2} - \bm{J}_{2}\bm{J}_{1}^{+}\dot{\mathbf{x}}_{1} \right), \end{split} \end{equation} where $ \bm{J}_{2|1}\triangleq \bm{J}_{2} \bm{N}_{1}$ and \begin{equation} \dot{\bm{J}}_{2|1}^{+} = - \bm{J}_{2|1}^{+} \dot{\bm{J}}_{2|1} \bm{J}_{2|1}^{+}.
\end{equation} Then, we can manipulate the above expression for $\ddot{\mathbf{q}}_{0}$ to yield \begin{equation} \begin{split} \ddot{\mathbf{q}}_{0} &= \bm{J}_{2|1}^{+} \left( \ddot{\mathbf{x}}_{2} - \dot{\bm{J}}_{2} \bm{J}_{1}^{+} \dot{\mathbf{x}}_{1} - \bm{J}_{2} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} - \bm{J}_{2} \bm{J}_{1}^{+} \ddot{\mathbf{x}}_{1} \right)\\ & - \bm{J}_{2|1}^{+} \dot{\bm{J}}_{2|1} \dot{\mathbf{q}}_{0}\\ &= \bm{J}_{2|1}^{+} \left\{ \ddot{\mathbf{x}}_{2} - \dot{\bm{J}}_{2} \dot{\mathbf{q}} - \bm{J}_{2} \bm{J}_{1}^{+} \left(\ddot{\mathbf{x}}_{1} - \dot{\bm{J}}_{1} \dot{\mathbf{q}} \right) \right\} \\ & + \bm{J}_{2|1}^{+} \left( \dot{\bm{J}}_{2} \dot{\mathbf{q}}_{0} - \bm{J}_{2} \bm{J}_{1}^{+} \dot{\bm{J}}_{1} \dot{\mathbf{q}} - \bm{J}_{2} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} - \dot{\bm{J}}_{2|1} \dot{\mathbf{q}}_{0} \right). \end{split} \end{equation} For simplicity, we define $ \bm{X}\triangleq\ddot{\mathbf{x}}_{2} - \dot{\bm{J}}_{2} \dot{\mathbf{q}} - \bm{J}_{2} \bm{J}_{1}^{+} \left(\ddot{\mathbf{x}}_{1} - \dot{\bm{J}}_{1} \dot{\mathbf{q}} \right)$. Then the expression for $\ddot{\mathbf{q}}_{0}$ can be further expanded as \begin{equation} \begin{split} \ddot{\mathbf{q}}_{0} &= \bm{J}_{2|1}^{+} \bm{X} + \bm{J}_{2|1}^{+} \left(-\bm{J}_{2} \bm{J}_{1}^{+} \dot{\bm{J}}_{1} \dot{\mathbf{q}} - \bm{J}_{2} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} - \bm{J}_{2} \dot{\bm{N}}_{1} \dot{\mathbf{q}}_{0} \right) \\ &= \bm{J}_{2|1}^{+} \bm{X} + \bm{J}_{2|1}^{+} \left\{ -\bm{J}_{2} \bm{J}_{1}^{+} \dot{\bm{J}}_{1} \left( \bm{J}_{1}^{+}\dot{\mathbf{x}}_{1}+ \dot{\mathbf{q}}_{0} \right) \right. \\ & \left. - \bm{J}_{2} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} + \bm{J}_{2}\left(\dot{\bm{J}}_{1}^{+} \bm{J}_{1} + \bm{J}_{1}^{+} \dot{\bm{J}}_{1} \right) \dot{\mathbf{q}}_{0} \right\} \\ &= \bm{J}_{2|1}^{+} \bm{X} + \bm{J}_{2|1}^{+} \left( -\bm{J}_{2} \bm{J}_{1}^{+} \dot{\bm{J}}_{1} \bm{J}_{1}^{+}\dot{\mathbf{x}}_{1} - \bm{J}_{2} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} +\bm{J}_{2} \dot{\bm{J}}_{1}^{+} \bm{J}_{1}\dot{\mathbf{q}}_{0} \right). \end{split} \end{equation} Because $\bm{J}_{1} \dot{\mathbf{q}}_{0} = \bm{0}$, the previous equation becomes \begin{equation} \begin{split} \ddot{\mathbf{q}}_{0} &= \bm{J}_{2|1}^{+} \bm{X} + \bm{J}_{2|1}^{+} \left( -\bm{J}_{2} \bm{J}_{1}^{+} \dot{\bm{J}}_{1} \bm{J}_{1}^{+}\dot{\mathbf{x}}_{1} - \bm{J}_{2} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} \right)\\ &= \bm{J}_{2|1}^{+} \bm{X} + \bm{J}_{2|1}^{+} \left( \bm{J}_{2} \bm{J}_{1}^{+} \bm{J}_{1} \dot{\bm{J}}_{1}^{+}\dot{\mathbf{x}}_{1} - \bm{J}_{2} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} \right)\\ &=\bm{J}_{2|1}^{+} \bm{X} + \bm{J}_{2|1}^{+} \bm{J}_{2} \left( \bm{J}_{1}^{+} \bm{J}_{1} - \bm{I} \right) \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1}\\ &=\bm{J}_{2|1}^{+} \bm{X} - \bm{J}_{2|1}^{+} \bm{J}_{2} \bm{N}_{1} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} \textrm{.} \end{split} \end{equation} Let us develop the term below using the above expression, \begin{equation} \begin{split} -\dot{\bm{N}}_{1} \bm{J}_{1}^{+}\dot{\mathbf{x}}_{1} + \bm{N}_{1} \ddot{\mathbf{q}}_0 &= -\dot{\bm{N}}_{1} \bm{J}_{1}^{+}\dot{\mathbf{x}}_{1} + \ddot{\mathbf{q}}_0 \\ &=\bm{J}_{2|1}^{+} \bm{X} - \dot{\bm{N}}_{1} \bm{J}_{1}^{+}\dot{\mathbf{x}}_{1} - \bm{J}_{2|1}^{+} \bm{J}_{2} \bm{N}_{1} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} \\ &= \bm{J}_{2|1}^{+} \bm{X} + \left(\bm{I}- \bm{J}_{2|1}^{+} \bm{J}_{2} \bm{N}_{1} \right) \bm{N}_{1} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1} \\ &= \bm{J}_{2|1}^{+} \bm{X} + \bm{N}_{2|1} \bm{N}_{1}
\dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1}. \end{split} \end{equation} Thus, equation (\ref{eq:qdot_middle}) becomes \begin{equation} \begin{split} \ddot{\mathbf{q}} &= \bm{J}_{1}^{+}\left( \ddot{\mathbf{x}}_{1} - \dot{\bm{J}}_{1} \dot{\mathbf{q}}\right) + \bm{J}_{2|1}^{+} \bm{X} + \bm{N}_{2|1} \bm{N}_{1} \dot{\bm{J}}_{1}^{+} \dot{\mathbf{x}}_{1}\\ &= \bm{J}_{1}^{+}\left( \ddot{\mathbf{x}}_{1} - \dot{\bm{J}}_{1} \dot{\mathbf{q}}\right) + \bm{J}_{2|1}^{+} \bm{X} + \bm{N}_{2|1} \bm{N}_{1} \ddot{\mathbf{q}}_{res}. \end{split} \label{eq:qdot_final} \end{equation} \section{Related Work} \label{sec:related_work} \subsection{Reinforcement Learning based Locomotion Planner} One of the main challenges for learning robust dynamic locomotion policies is handling the high number of continuous variables describing the motion and force interactions of full humanoid robots. To deal with this dimensionality problem, we review previous work that has greatly inspired us. The work in \cite{Morimoto:2007eh} solves a periodic locomotion generation problem via RL on a planar biped robot. We advance upon this work by solving the learning problem for 3D robots, avoiding reliance on human walking trajectories, and generating policies for non-periodic gaits. Other important works treat learning as an optimization problem over known locomotion trajectories. In \cite{Sugimoto:2011ge}, a periodic locomotion problem is solved by optimizing known stable central pattern generator (CPG) walking trajectories using RL. However, no focus is given to dealing with large external disturbances. In addition, our focus is on the generation of trajectories from scratch, without prior stable locomotion patterns. In \cite{Missura:2014kn}, walking trajectories robust to external pushes are achieved based on capture point trajectory optimization via gradient-based learning updates. In that work, the capture point method is used as an analytic controller to initiate the learning process with information about foot placement, step timing, and ZMP controls. Although the authors also show learning of push recovery strategies without previously generated capture point trajectories, our focus is more strongly on autonomously learning the locomotion process without reliance on already stable walking gaits. As such, we believe our algorithm is able to learn recovery strategies from scratch in a more generic sense, for instance to recover from pushes in any direction while walking. As in our work, autonomous learning of periodic gaits has been explored before in passive dynamic walkers \cite{Tedrake:2004ip}. Once more, our focus is on gait generators that can produce non-periodic gaits and tolerate large push disturbances in all directions of motion. The dynamic locomotion community has previously used online optimization methods instead of RL, such as model predictive control (MPC). The main problem with these approaches is their high computational cost. To mitigate this problem, researchers have made significant efforts to develop efficient computational processes. \cite{Erez:2013cl} used the gradient of a cost function to solve the MPC problem efficiently. \cite{Khadiv:2017th} linearized the planning problem by optimizing over one step ahead of time. Our approach, relying on learned neural networks, removes the need for complex online computations, enabling the generation of hundreds of steps in an instant compared to the stepping time scales.
\cite{Whitman:2009im} proposed a controller for a 12-DoF biped system by using dynamic programming and a lookup table obtained offline from simple models. The multiple policies obtained from each simple model were combined to control the target system. In contrast, our work relies on the generic inverted pendulum locomotion model and a versatile full-humanoid-body controller, i.e., WBLC. \subsection{Whole-Body Control} WBC \cite{sentis2005synthesis} is a family of prioritized, multi-task-space trajectory controllers for humanoid robots that rely on floating-base dynamics and computed-torque commands as inputs to the plant. It yields asymptotically stable control policies for multiple tasks with simultaneous control of operational forces when needed. Priorities address resource allocation when two or more task trajectories cannot physically be tracked by the robotic system. WBC naturally integrates equality constraints such as biarticular transmission constraints \cite{sentis2013implementation}. Other groups have explored richer versions of WBC with inequality constraints such as joint limit avoidance \cite{flacco2012motion,mansard2009unified,lee2012intermediate}, collision avoidance \cite{kanoun2011kinematic}, and singularity avoidance \cite{moe2015stability}. Several groups have used evolved and more practical versions of WBC, such as the controllers used in the DARPA Robotics Challenge of 2013 and 2014. For instance, \cite{koolen2013summary, johnson2015team} incorporate reaction forces as inequality constraints by solving a quadratic programming optimization problem with desired center of mass trajectories. Treatment of reaction forces as inequality constraints in the WBC community dates back to the work of \cite{stephens2010dynamic}, and it showcases one of the weaknesses of our group's formulation of WBC: in early versions \cite{sentis2010compliant} we treated reaction forces as equality constraints. Such treatment corresponds to bilateral contact constraints, i.e., assuming that the floor contacts are actually rigid anchors, which is obviously an inaccurate model. One of the main objectives of this paper is to use a realistic unilateral contact model for WBC while maintaining one of its main strengths, efficient prioritized control. Bipedal and quadrupedal walking capabilities have been devised using WBC. \cite{hutter2014quadrupedal} demonstrated locomotion of a quadrupedal robot by utilizing hierarchical tasks based on least-squares problems. The versatile capture point (CP) has been integrated as an operational space of WBC and controlled either as a constraint or as a task for bipedal humanoid robots \cite{ramos2014whole}. The robot's Center of Gravity (CoG) has long been used as a controlled task, for instance in \cite{mistry2007task}. Walking pattern generators have been incorporated into WBC in multiple instances, such as in \cite{carpentier2016versatile}. During the DARPA Robotics Challenge, several top participants incorporated WBCs into their strategy for achieving mobile dexterous capabilities. For instance, high-level trajectory optimization and low-level optimization with inverse dynamics were integrated into the framework of \cite{feng2015optimization}. As stated before, during the DRC several humanoid robots were controlled via WBCs that included QP solvers for dealing with reaction forces.
By introducing QP and task hierarchy (HQP), whole-body motion of humanoid robots can be controlled with the intrinsic reactive advantages of task prioritization \cite{escande2014hierarchical}. Compared with projection-based WBC algorithms, optimization-based WBCs, such as HQP, can incorporate multiple inequality constraints \cite{saab2013dynamic}, which are useful for describing contact conditions such as friction cones \cite{abe2007multiobjective}. Overall, optimization-based WBCs have been successful in practical applications \cite{koolen2013summary,feng2015optimization,kuindersma2014efficiently}. However, their computational cost remains a challenge, especially when they are considered as models for motion planning, such as in model predictive control. Therefore, the efficiency of our newly proposed whole-body controller, dubbed WBLC, is a key consideration of this paper. To achieve the speed boost, we rely on a projection-based formulation. However, it is difficult to incorporate inequality constraints into analytical projection-based methods; thus, our goal is to combine both approaches while maintaining the desired task hierarchy capabilities. The proposed WBLC incorporates an efficient QP, whose dimension depends only on the number of contact points, and a joint-acceleration-level controller that relies only on projection operators, thus yielding the speed efficiency that we advocate. \section*{Acknowledgment} The authors would like to thank the members of the Human Centered Robotics Laboratory at The University of Texas at Austin for their great help and support. This work was supported by the Office of Naval Research, ONR Grant [grant \#N000141512507] and NASA Johnson Space Center, NSF/NASA NRI Grant [grant \#NNX12AM03G]. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} Crowdsourcing involves using the power of the crowd to perform a task \cite{brabham2008crowdsourcing}. The sheer power of involving the masses to distribute a job of large proportions makes this idea successful at performing various kinds of tasks, skilled or not-so-skilled, technical or creative \cite{kittur2013future}. The aim of this study is to exercise the power of crowdsourcing for tasks like collaborative story writing and creative plot building, activities which cannot be automated by machines. As people fill in their text descriptions to make stories, we record their input in the form of creative links between story elements represented by images (depicting scenarios). Like any crowdsourcing platform, this one thrives on the abundance of data. As the number of people interacting with the interface increases, the accuracy, diversity, and content on the platform also rise. To employ this idea for creative plot building, we first studied the existing collaborative editors and gained insights, which we then used to design a platform that provides an image-based interaction. Stories are written by connecting images into sequences, termed Image Chains. This creates a universal platform to merge together the ideas of different crowd workers. It has the capability to create stories that grow and evolve over time as the number of users increases and is, thus, a step toward organized story writing. \section{Related Work} Creative crowdsourcing is currently a highly exercised concept, with many small start-ups using it to accomplish tasks and attract users. Platforms like DesignHill \cite{designhill} exploit inputs from crowd workers to help design logos for postings made by people. Another popular platform, SquadHelp \cite{squadhelp}, employs crowdsourcing to name products and ideas. Graphic design is also done using crowd inputs by platforms like 99designs.com \cite{99designs}. However, these platforms work by selecting only one from multiple inputs provided by the crowd contributors. They essentially pick the best out of a pool, with the crowd helping to fill that pool. CorpWiki is a self-regulating wiki system for effective acquisition of high-quality knowledge content from corporate employees \cite{Lykourentzou2010}. However, such platforms are not meant for creative tasks. Collabowriters is a platform that turns crowdsourced inputs into novels. People are allowed to enter lines of at most 140 characters, which are subsequently voted on to decide the most popular one. The highest-voted line is then added as the next sentence of a novel \cite{collabowriters}. This short-lived project tried to build a well-written, coherent story of the size of a book with the help of the crowd. We, however, aim at making short stories at first, the idea being to link creative thoughts together. There are also some wiki-based interfaces for collaborative story writing. One such platform asks students to edit on a common platform, with an interface like Microsoft Word, and builds new stories through posting a discussion \cite{lspseminar}. The users of this platform have reported that the interface is not receptive to multiple people editing a document simultaneously. This platform also suffers from the problem of content deletion by other users whenever a new story is being formed. The users have also noticed the lack of an interactive way to add ideas to a story.
Some platforms \cite{storybird}, \cite{inklewriter}, \cite{wattpad} allow users to write complete stories online, where other people can view them. An online audience provides continuous feedback to the writers, helping them guide the story and improve the content. This in turn also provides readers with a place to read new stories written by crowd workers. However, these sites work on adding complete stories and are focused more on providing an online platform to judge and read new stories. We merge the working principles of platforms like \cite{storybird} and \cite{lspseminar} to provide users a place to link stories together as well as vote for new ones. Our platform does not rely on people being expert storytellers, because people have multiple roles to fulfill. Most of the aforementioned approaches, however, are unorganized. There are several crowd-powered models that serve the purpose of organized creative writing too. Motif is a recent platform which guides users through adding video snippets from a journey or incident and adding story-like descriptions to each `scene' they add \cite{kim2015motif}. These are joined together to form coherent stories. Motif thus generates good-quality stories from the inputs of novices and experts alike by providing an organized platform for creation. Another platform by Kim et al., Storia \cite{kim2016storia}, works to link social media updates about an event into a coherent story about a particular incident. The motive remains linking social media updates, but the approach involves asking the crowd to generate summaries from the inputs. Storia hence takes short social media updates from Twitter, Facebook, etc. as nodes of the story, which are to be linked to form a well-written story. A crowd-powered model by Kim et al., Mechanical Novel \cite{kim2018mechanical}, attempts to microtask the two facets of story writing, choosing the target for a story and writing independent scenes of the story, through mTurk. That work focuses on using the crowd to break down a high-level goal, such as creating a story, into microtasks which can be self-managed by the crowd to fulfill or extend the primary goal. It allows the crowd to decide on the current state of a story and how it can be improved or added to. The crowd workers then propose the changes which should be made to a story, and these changes are voted upon by the others. We, however, allow people to merge two story paths together and to branch one story into a completely new one. The task of writing text for an Image Chain, which finally becomes a piece of a story, allows people to create what they feel is the best narrative for a given set of images. These story pieces are voted on by others to choose the best story for a given sequence. Mechanical Novel does not allow users to continue a current story in a direction they want unless the whole crowd decides on it. Our platform aims to provide the flexibility of growing stories in any manner users want. \section{Motivational Insights} The current paper aims at building a platform which allows users to add content to a creative story. There are several reasons why a common document-editing platform (e.g., Google Docs) will not serve this purpose. Human coordination can be managed by many existing platforms, but challenges remain: the lack of an organized structure, the possible inclusion of noise, chaotic editing, inconsistent results, etc. The main challenges that we have observed are listed below.
\begin{itemize} \item \textbf{Lack of organization:} If the platform is just an open document that everyone can edit, there is a lot of chaotic input. \item \textbf{Preservation of content:} People can even delete each other's inputs, and a lot of good ideas get wasted as a result (log files can be ignored by others in the long run). \item \textbf{Absence of role distribution:} If there is no distribution of roles among the people, people end up overriding each other's functions at any instant. \item \textbf{Recency bias:} Recent edits get more priority than older edits. \item \textbf{Arguments related to ownership and content deletion:} Since people can delete others' inputs, unnecessary arguments arise between the collaborators. \end{itemize} \section{Platform Design} A text-only platform initially seems like a good idea for connecting plots with the help of a crowd. However, as the size of a story increases (i.e., as the number of scenarios added to a story increases), the effort of reading through the existing story elements to decide which ones to connect grows. The lack of images makes it difficult for people to visualize what others are creating without having to read through whole paragraphs. This gives rise to the idea of an even more organized approach with better user interaction. We have already pointed out several limitations of existing approaches, and these informed the design of our platform. In our platform, a sequence of images which depicts a flow of narrative is defined as an Image Chain. A starting image from which such Image Chains are formed is referred to as a \textbf{Base Image}. A crowd worker can start a story (with a Base Image), continue a story (by extending an Image Chain), write or edit a story (on an existing Image Chain), and finally vote for a story (see Fig.~\ref{Fig:Publish}). All these steps, as listed hereunder, are optional. \begin{figure} \centering \includegraphics[scale=0.3]{Snapshot.png} \caption{A snapshot of the page where people write their stories or vote for other stories.} \label{Fig:Publish} \end{figure} \begin{enumerate} \item \textbf{Starting a story:} A crowd worker can start a story by uploading an image with a description of it or by choosing an image already existing in the database. \item \textbf{Continuing a story:} A crowd worker can continue a story by selecting a particular Image Chain (an ordered chain of images depicting a flow of events created by a crowd worker). Note that the selected Image Chain also starts with a Base Image. The crowd worker can either upload one or more images to continue it or select an image from the existing database of images. An image uploaded by a user is added to our pool of images immediately so that the next crowd worker can use it as and when required. \item \textbf{Publishing a story:} A crowd worker is entitled to write a story based on the Image Chain he has formed. Every crowd worker who has contributed to the same Image Chain can write a story on their own or take help from other contributors. Suppose a particular crowd worker has written something about an Image Chain. Subsequent crowd workers writing for the same Image Chain can view what the former has written and use those insights to create another version of the story. The former crowd worker can, in turn, use the insights of the latter to create a revised version of the story.
\item \textbf{Voting for a story:} A crowd worker can select a particular Image Chain to vote on from the set of all the story chains formed till then. He can select from all the stories written for that Image Chain and vote for his favorite. In this way, a story is voted upon. Internally, votes for an Image Chain are also registered when a crowd worker creates an Image Chain already created by another crowd worker. In that case, instead of creating a redundant Image Chain, we increase the number of votes for that Image Chain. Image Chains with higher votes have a greater probability of being included in the recommendation list. \end{enumerate} \section{Empirical Analysis} A total of 25 crowd workers (male = 16, female = 9, mean age = 21.8 years) took part in the deployment session, connecting through computers and mobile phones. None of them are story writers or storytellers by profession. Most of them had used crowdsourcing platforms earlier, albeit without knowing they were crowdsourced. They used the platform for 10-72 min (mean time of use = 45 min) in total. During this time, they used the platform to add images and build stories, and also gave feedback about the usability, interface, and their interest via a feedback form. From a starting pool of 30 images (provided as Base Images), the platform grew to 64 images by the end of the experimental period of about a month. In total, 34 Image Chains (images selected by the crowd workers depicting an ordered flow of thoughts) were formed, and the users contributed 22 independent Story Texts. We have analyzed the Image Chains to study their average length (the number of images they contain) and how likely people are to extend chains of a particular length. The average length of an Image Chain was found to be 4.67 after the experimental session, the maximum length being 11. To get an idea about whether users prefer to extend smaller chains or bigger ones, we divided these Image Chains into two groups based on a length threshold of 5 (since the average length was 4.67). Between these groups, the average number of chains of length at most ($\leq$) 5 images was found to be 5.5, and of length greater than 5 images was found to be 2. Putting these two populations under a t-test, we found them to be significantly different from each other ($p$-value = 0.0086; t-test). A possible reason could be that the majority of crowd workers extended Image Chains of size 1-3 and added 2-3 more images. Hence, even when a crowd worker is adding images to an Image Chain of size $> 7$, the inclination is to extend it by only 1-2 more images. These crowd workers would then end the chain and start writing a story for it. \begin{table} \scriptsize{ \caption{Analysis of the length of Image Chains and votes obtained by them.} \centering \begin{tabular}{|c|c|} \hline Average length of Image Chains & 4.67 \\\hline Average number of Image Chains of length $\leq 5$ & 5.5 \\\hline Average number of Image Chains of length $> 5$ & 2 \\\hline Average number of votes for a story text & 3.18 \\\hline Average votes for story texts for Image Chains of length $\leq 5$ & 2.4 \\\hline Average votes for story texts for Image Chains of length $> 5$ & 3.833 \\\hline \end{tabular} \label{Table:Length} } \end{table} To check whether larger Image Chains obtain more votes, we again compare the two groups of Image Chains (as listed above, with the chain-length threshold of 5).
The average numbers of votes obtained from the users for Image Chains are reported in Table~\ref{Table:Length}. The comparison of the two groups of Image Chains (segregated on the basis of their lengths) was put to a t-test, which shows that longer Image Chains obtain a significantly higher number of votes ($p$-value = 0.0366; t-test). Hence, people are more inclined to alter or grow Image Chains of shorter length ($\leq 5$, in our data), which gives an increased concentration at the lower lengths, while people opt to vote more for chains of longer lengths, maybe because they appear more complete as stories. \section{Conclusion} Content filtering is one of the primary concerns of a crowdsourcing platform. A system to filter the content, as well as the activity, on the platform needs to be present to ensure quality. The proposed platform attempts to do that through majority approval and by storing many versions of one Image Chain, with the argument that any chain can be extended later. Additionally, the recommendation facility should be tuned to the genre interests of the user. For this, image descriptions have to be categorized into buckets of similar tastes so that a user selecting images from one bucket is shown images and Image Chains pertaining to the same or similar buckets (buckets with similar kinds of genres). Recommendations can also be based on the nature of contributions, since a crowd worker may extend the work of another crowd worker. So far, the recommendation facility gives importance only to the voting procedure: highly voted Image Chains and their corresponding texts are shown in the recommendation section. A balance of votes, genres, contributors, and the submission time of a story should make for a much better recommendation system. Better incentives are also an important concern. We did not use any means to incentivize the crowd workers except for providing encouragement through a leaderboard. Any such platform would need some form of fund generation or fund collection mechanism to financially support the competent crowd workers. \section{Acknowledgement} This publication is an outcome of the R\&D work undertaken in the project under the Visvesvaraya PhD Scheme of Ministry of Electronics \& Information Technology, Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia). \bibliographystyle{aaai}
\section{Introduction} \label{intro} The study of hadron properties in strange asymmetric matter at finite temperature is of considerable interest for understanding the QGP phase diagram of strong-interaction physics \cite{Tolos2020,Holzenkamp1989,Petschauer2016,Haidenbauer2019,Kumar2015,Chhabra2017,Chhabra2018,Mishra2009, Kumar2011}. Experimentally, heavy-ion collisions (HICs) play an important role in the study of hot and dense matter in the non-perturbative regime of QCD. In HICs, two asymmetric nuclei collide with each other and, for a short interval of time, a fireball containing quark gluon plasma (QGP) comes into existence. Within a very short interval, the fireball expands and converts into an ensemble of particles consisting of nucleons, hyperons, and mesons, collectively known as hadronic matter \cite{Kumar2019}. In a hadronic medium, the baryons and mesons are the degrees of freedom; therefore, only non-perturbative methods can be applied here. Furthermore, due to the presence of strange particles in the medium, it is very important to include the strangeness fraction while studying the properties of mesons in hadronic matter. Significant progress in understanding the properties of strange dense matter at moderate temperature is expected with the construction of future experiments such as CBM at the Facility for Antiproton and Ion Research (FAIR), the Nuclotron-based Ion Collider Facility (NICA) at Dubna, Russia, and J-PARC in Japan \cite{Kumar2019,Rapp2010}. The in-medium study of the light vector mesons ($\rho$, $\omega$ and $\phi$) is of interest to theoretical and experimental researchers \cite{Kim2020,Mishra2015,Mishra2019a,Shivam2019,Leupold2010,Hayano2010, Krein2016,Martinez2016,Ko1992,Li1995} because of their role in dilepton production in HICs. Dilepton production is considered a promising observable because dileptons interact only weakly with baryons and mesons \cite{Xiong1990,Xiong1990a,Ko1992,Korpa1990,Ko1989,Gale1987,Xia1988}. Among these mesons, there is a particular interest in the $\phi$ meson because of its strong interaction with nucleons and $u$/$d$ quarks, even though it has pure strange quark content ($s\bar s$) \cite{Martinez2016}. Due to its strange nature, it is also imperative to study its interactions with strange baryons and mesons. In the literature, the $\phi$ meson has been used, through QCD van der Waals forces, to test multi-gluon exchange theory \cite{Sibirstev2006}, and it has implications for understanding dark matter \cite{Gubler2014,Bottino2002,Ellis2008} as well, which lies beyond the regime of QCD physics. There is also a possibility of the formation of $\phi$-mesic nuclei due to the negative mass shift of the $\phi$ in nuclear matter \cite{Martinez2017,Martinez2016,Buhler2010,Ohnishi2014,Jlab}. The strong interaction of the $\phi$ with $u$/$d$ quarks occurs through the interplay with $K\bar K$ pairs; therefore, the in-medium properties of the kaon and antikaon play a crucial role. It was Kaplan and Nelson who initiated the study of the in-medium properties of kaons and antikaons \cite{Kaplan1986}. They observed a negative mass shift of antikaons in neutron star matter and suggested the possibility of antikaon condensation. The downward mass shift comes from the strongly attractive interaction of antikaons with nucleons: due to this attractive interaction, the effective energy of the mesons is lowered through the attractive scalar field.
The annihilation of $K \bar K$ into a dilepton proceeds mainly through the $\phi$ meson; therefore, dilepton production in HICs helps us understand the properties of the $\phi$ meson in the dense hadronic medium \cite{Xiong1990,Xiong1990a,Ko1992,Korpa1990,Ko1989,Gale1987,Xia1988}. Due to the relevance of the $K$ and $\bar K$ isospin doublets in heavy-ion collisions, many theoretical \cite{Mishra2009,Li1997,Ko2001,Pal2001,Cassing1997, Bratkovskaya1997,Cassing1999,Lutz1998,Lutz2002,Lutz2002a,Ramos2000,Tolos2002} and experimental \cite{Laue1999,Menzel2000,Sturm2001,Forster2002} investigations have been carried out. The free-space antikaon-nucleon scattering amplitudes have been obtained from covariant unitarized chiral coupled-channel approaches which systematically include the method of partial waves \cite{Lutz1998,Ramos2000,Tolos2001,Tolos2006,Lutz2008}. The evaluation of the kaon self-energy within this mechanism has successfully described the $K^-$ meson interaction in hadronic matter. At nuclear saturation density, an attractive potential of about 40 to 60 MeV is obtained from these calculations. Using the chiral SU(3) model, the properties of kaons and antikaons have been studied in isospin asymmetric nuclear matter at finite temperature in Ref. \cite{Mishra2008}, and in strange matter at zero temperature \cite{Mishra2009}. In these articles, the in-medium self-energies of the $K$ and $\bar K$ mesons are studied and an attractive in-medium optical potential is found. The in-medium mass and decay width of the $\phi$ meson have been studied extensively in the literature. Several authors have predicted a small downward mass shift and a broadening of the decay width of the $\phi$ meson \cite{Ko1992,Klingl1998,Hatsuda1991,Hatsuda1996,Oset2000,Cabrera2002,Martinez2016}. By considering the contributions of the kaon-antikaon loop to the self-energy, Ko et al.~\cite{Ko1992} used chiral perturbation theory to calculate the density-dependent kaon mass and found that at nuclear saturation density, $\rho_0$, the $\phi$ meson mass decreases very little (at most $2\%$), while the width reaches $\approx 25$~MeV. They also observed that for large densities the decay width broadens substantially. In Ref.~\cite{Klingl1998}, at $\rho_0$, Klingl et al. report a small downward shift of the $\phi$ mass ($< 1\%$) and a broadened decay width of 45 MeV. Using the QCD sum rule approach with the linear density approximation, Hatsuda and Lee computed the in-medium $\phi$ mass and predicted a small decrease at nuclear saturation density~\cite{Hatsuda1991,Hatsuda1996}. A large broadening of the $\phi$ decay width is also predicted by other investigators: in Ref.~\cite{Oset2000}, Oset et al. predicted a decay width of 22 MeV, and in Ref.~\cite{Cabrera2002} a decay width of 30 MeV was obtained. More recently, in Ref.~\cite{Martinez2016}, Martinez et al. reported a downward mass shift of $25$ MeV and a large broadened width of 32.8 MeV for a cut-off parameter of 3000 MeV at nuclear saturation density. In most of the experiments, a large broadening of the in-medium decay width has been observed \cite{Muto2005,Ishikawa2004,Mibe2007,Qian2009}. At nuclear saturation density, the KEK-E325 collaboration measured a decrease in the mass ($3.4\%$) and an increase in the in-medium decay width ($\approx 14.5$ MeV) of the $\phi$ meson~\cite{Muto2005}.
On the other hand, in Ref.~\cite{Ishikawa2004} SPring8 reported a large in-medium $\phi N$ cross-section, which results in a decay width of 35 MeV, in close agreement with the experiments \cite{Mibe2007,Qian2009}. To obtain a clearer picture, further experimental efforts are needed to understand the $\phi$ meson in the medium. In the present article, we report results for the in-medium $\phi$ meson mass and decay width in hot asymmetric strange hadronic matter, taking into account the medium-modified kaon and antikaon masses. The in-medium $K$ and $\bar K$ properties are incorporated through a chiral effective Lagrangian using the chiral SU(3) model \cite{Mishra2009,Kumar2020}. We calculate the in-medium masses of the $K$ and $\bar K$ mesons at finite temperature in strange hadronic matter and use them as input to calculate the in-medium mass and decay width of the $\phi$ meson. The chiral model is a non-perturbative, hadron-based model that describes the in-medium properties of hadronic matter \cite{Papazoglou1999,Kumar2014,Kumar2019a,Mishra2019, Mishra2004a,Mishra2004,Mishra2006,Mishra2008, Kumar2020a,Reddy2018,Dhale2018,Kumar2015,Chhabra2017,Chhabra2018,Kumar2010}. It has also been applied to study the effect of magnetic fields on the in-medium properties of quarkonia \cite{Kumar2019,Kumar2019a} and open charm mesons \cite{Kumar2020,Kumar2020a}. To study the $K \bar K$ loop contributions to the $\phi$ meson decay, we use an effective Lagrangian of $\phi K \bar K$ interactions and solve the loop integral using regularization techniques \cite{Martinez2016,Krein2010}. In the current work, we include the contributions of the $K$ and $\bar K$ loop by utilizing the in-medium masses $m_{K^+}^{*}$, $m_{K^0}^{*}$, $m_{K^-}^{*}$ and $m_{\bar K^0}^{*}$, which differs from previous works: in Refs. \cite{Martinez2016,Martinez2017}, the $\bar K$ contribution to the $K \bar K$ loop was treated by equating the masses via the relation $m^*_K$=$m^*_{\bar K}$. The layout of the present paper is as follows: in the next subsection \ref{subsec2.1}, we concisely discuss the methodology used to obtain the in-medium scalar and vector fields in hyperonic matter. In subsection \ref{subsec2.2}, we calculate the medium-modified masses of the kaon and antikaon via their interactions with the chiral model fields. The theoretical approach used to calculate the in-medium mass and decay width of the $\phi$ meson is discussed in subsection \ref{subsec2.3}. In section \ref{sec:3}, the quantitative results of the present work are discussed, and finally, we conclude in section \ref{sec:4}. \section{Methodology} We use the chiral SU(3) model to study the impact of isospin asymmetry and strangeness fraction on the scalar and vector fields, which are further used to calculate the in-medium masses of the kaon and antikaon. Moreover, the $\phi$ meson mass and decay width are calculated from a self-consistent Lagrangian approach. In the following subsections, we briefly describe the formalism. \subsection{THE HADRONIC CHIRAL SU(3) MODEL} \label{subsec2.1} The hadronic chiral effective Lagrangian is given as \begin{equation} {\cal L}_{chiral} = {\cal L}_{kin} + \sum_{ M =S,V} {\cal L}_{BM} + {\cal L}_{vec} + {\cal L}_0 + {\cal L}_{SB}.
\label{genlag} \end{equation} As a description of hadronic matter, the model incorporates fundamental QCD features such as the trace anomaly and the non-linear realization of chiral symmetry \cite{Weinberg1968,Coleman1969,Zschiesche1997,Bardeen1969, Kumar2020,Papazoglou1999,Kumar2019}. In this model, the isospin asymmetry of the matter is introduced through the scalar-isovector field $\delta$ and the vector-isovector field $\rho$ \cite{Kumar2020}, and the impact of strangeness is accounted for by the scalar field $\zeta$ and the vector field $\phi$. The broken scale invariance of QCD is incorporated through the scalar dilaton field $\chi$ \cite{Papazoglou1999,Kumar2020}. For simplicity, the effects of fluctuations near phase transitions are neglected by using the mean-field approximation \cite{Kumar2020,Reddy2018}. This model has been used successfully to study nuclear matter, hypernuclei, finite nuclei, and neutron stars \cite{Weinberg1968,Coleman1969,Zschiesche1997,Bardeen1969, Kumar2020,Papazoglou1999,Kumar2019}. In Eq.~(\ref{genlag}), ${\cal L}_{kin}$ denotes the kinetic energy term, and ${\cal L}_{BM}$ is the baryon-meson interaction term, where $S$ and $V$ represent the scalar and vector mesons, respectively. The term $ {\cal L}_{vec}$ generates the vector meson masses through interactions with the scalar mesons and contains the quartic self-interaction terms, $ {\cal L}_{0}$ describes the spontaneous chiral symmetry breaking, and ${\cal L}_{SB} $ describes the explicit chiral symmetry breaking. Both $D$-type (symmetric) and $F$-type (antisymmetric) couplings exist for the baryon-vector meson interaction terms. Here we use the antisymmetric coupling \cite{Mishra2009,Kumar2015} because, according to the vector meson dominance model, the $D$-type coupling should be small, which also follows from the universality principle \cite{Sakurai1969}. Moreover, we choose the medium parameters \cite{Mishra2009} so as to decouple the nucleons from the strange field $ \phi_\mu\sim\bar{s} \gamma_\mu s $, which leads to ideal mixing between $\phi$ and $\omega$.
By using the Euler--Lagrange equations for the $\sigma$, $\zeta$, $\delta$, $\omega$, $\rho$, $\phi$ and $\chi$ mesonic fields of the chiral model, we obtain the following coupled equations of motion: \begin{eqnarray} k_{0}\chi^{2}\sigma-4k_{1}\left( \sigma^{2}+\zeta^{2} +\delta^{2}\right)\sigma-2k_{2}\left( \sigma^{3}+3\sigma\delta^{2}\right) -2k_{3}\chi\sigma\zeta \nonumber\\ -\frac{d}{3} \chi^{4} \bigg (\frac{2\sigma}{\sigma^{2}-\delta^{2}}\bigg ) +\left( \frac{\chi}{\chi_{0}}\right) ^{2}m_{\pi}^{2}f_{\pi} =\sum g_{\sigma i}\rho_{i}^{s} , \label{sigma} \end{eqnarray} \begin{eqnarray} k_{0}\chi^{2}\zeta-4k_{1}\left( \sigma^{2}+\zeta^{2}+\delta^{2}\right) \zeta-4k_{2}\zeta^{3}-k_{3}\chi\left( \sigma^{2}-\delta^{2}\right)\nonumber\\ -\frac{d}{3}\frac{\chi^{4}}{\zeta}+\left(\frac{\chi}{\chi_{0}} \right) ^{2}\left[ \sqrt{2}m_{K}^{2}f_{K}-\frac{1}{\sqrt{2}} m_{\pi}^{2}f_{\pi}\right] =\sum g_{\zeta i}\rho_{i}^{s} , \label{zeta} \end{eqnarray} \begin{eqnarray} k_{0}\chi^{2}\delta-4k_{1}\left( \sigma^{2}+\zeta^{2}+\delta^{2}\right) \delta-2k_{2}\left( \delta^{3}+3\sigma^{2}\delta\right) +2k_{3}\chi\delta \zeta \nonumber\\ + \frac{2}{3} d \chi^4 \left( \frac{\delta}{\sigma^{2}-\delta^{2}}\right) =\sum g_{\delta i}\tau_3\rho_{i}^{s} , \label{delta} \end{eqnarray} \begin{eqnarray} \left (\frac{\chi}{\chi_{0}}\right) ^{2}m_{\omega}^{2}\omega+g_{4}\left(4{\omega}^{3}+12{\rho}^2{\omega}\right) =\sum g_{\omega i}\rho_{i}^{v} , \label{omega} \end{eqnarray} \begin{eqnarray} \left (\frac{\chi}{\chi_{0}}\right) ^{2}m_{\rho}^{2}\rho+g_{4}\left(4{\rho}^{3}+12{\omega}^2{\rho}\right)=\sum g_{\rho i}\tau_3\rho_{i}^{v} , \label{rho} \end{eqnarray} \begin{eqnarray} \left (\frac{\chi}{\chi_{0}}\right) ^{2}m_\phi^2\phi+8g_4\phi^3&=& \sum g_{\phi i}\rho_{i}^{v}, \label{phi} \end{eqnarray} and \begin{eqnarray} k_{0}\chi \left( \sigma^{2}+\zeta^{2}+\delta^{2}\right)-k_{3} \left( \sigma^{2}-\delta^{2}\right)\zeta + \chi^{3}\left[1 +{\rm {ln}}\left( \frac{\chi^{4}}{\chi_{0}^{4}}\right) \right] +(4k_{4}-d)\chi^{3} \nonumber\\ -\frac{4}{3} d \chi^{3} {\rm {ln}} \Bigg ( \bigg (\frac{\left( \sigma^{2} -\delta^{2}\right) \zeta}{\sigma_{0}^{2}\zeta_{0}} \bigg ) \bigg (\frac{\chi}{\chi_0}\bigg)^3 \Bigg )+ \frac{2\chi}{\chi_{0}^{2}}\left[ m_{\pi}^{2} f_{\pi}\sigma +\left(\sqrt{2}m_{K}^{2}f_{K}-\frac{1}{\sqrt{2}} m_{\pi}^{2}f_{\pi} \right) \zeta\right] \nonumber\\ -\frac{\chi}{{{\chi_0}^2}}(m_{\omega}^{2} \omega^2+m_{\rho}^{2}\rho^2) = 0. \label{chi} \end{eqnarray} In the above equations, the model parameters $k_i$ ($i=1$ to $4$) are fitted to reproduce the vacuum values of the scalar fields \cite{Kumar2010}, and $m_\pi$, $m_K$, $f_\pi$ and $f_K$ denote the masses and decay constants of pions and kaons, respectively. The values of these parameters, along with the other coupling constants fitted in the model to reproduce the vacuum masses of the baryon octet, are tabulated in \cref{ccc}.
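At a given baryonic density, temperature, isospin asymmetry, and strangeness fraction, Eqs.~(\ref{sigma})--(\ref{chi}) constitute a coupled nonlinear system that must be solved numerically. As a rough illustration of the procedure (a minimal sketch, not the code used to produce the results of this paper), the following Python fragment solves a drastically reduced two-field ($\sigma$, $\zeta$) version of the system, with $\delta=0$, the dilaton frozen at $\chi=\chi_0$, the logarithmic $d$-terms dropped, and the source terms $\sum_i g_{\sigma i}\rho_i^{s}$ and $\sum_i g_{\zeta i}\rho_i^{s}$ supplied by hand as toy numbers:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameter values in MeV units (cf. Table 1)
k0, k1, k2, k3 = 2.53, 1.35, -4.77, -2.77
chi0 = 409.8                      # dilaton frozen at its vacuum value
m_pi, f_pi = 139.0, 93.29
m_K, f_K = 494.0, 122.14
sigma0, zeta0 = -93.29, -106.8    # vacuum values, used as initial guess

def field_equations(fields, src_sigma, src_zeta):
    """Reduced sigma/zeta equations of motion (delta = 0, d-terms
    dropped, chi = chi0); src_* stand in for the baryon source terms."""
    sigma, zeta = fields
    eq_sigma = (k0 * chi0**2 * sigma
                - 4.0 * k1 * (sigma**2 + zeta**2) * sigma
                - 2.0 * k2 * sigma**3
                - 2.0 * k3 * chi0 * sigma * zeta
                + m_pi**2 * f_pi            # (chi/chi0)^2 = 1 here
                - src_sigma)
    eq_zeta = (k0 * chi0**2 * zeta
               - 4.0 * k1 * (sigma**2 + zeta**2) * zeta
               - 4.0 * k2 * zeta**3
               - k3 * chi0 * sigma**2
               + np.sqrt(2.0) * m_K**2 * f_K
               - m_pi**2 * f_pi / np.sqrt(2.0)
               - src_zeta)
    return [eq_sigma, eq_zeta]

# Toy source terms (MeV^3); in the full calculation they follow from
# the scalar densities defined below and must be iterated together
# with the fields to self-consistency.
sigma_star, zeta_star = fsolve(field_equations, x0=[sigma0, zeta0],
                               args=(1.0e6, 2.0e5))
print("sigma* = %.2f MeV, zeta* = %.2f MeV" % (sigma_star, zeta_star))
\end{verbatim}
In the full calculation, the source terms themselves depend on the effective baryon masses and hence on the fields, so the fields and the densities defined below have to be iterated to self-consistency at every density and temperature point.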
In the equations of motion above, $\rho^{v}_{i}$ and $\rho^{s}_{i}$ denote the vector and scalar densities of the $i^{th}$ baryon ($i=p,n, \Lambda, \Sigma ^\pm, \Sigma ^0, \Xi ^-, \Xi ^0$) \cite{Kumar2019,Mishra2009}, defined through the relations \begin{eqnarray} \rho_{i}^{v} = \gamma_{i}\int\frac{d^{3}k}{(2\pi)^{3}} \Bigg(\frac{1}{1+\exp\left[\beta(E^{\ast}_i(k) -\mu^{*}_{i}) \right]}-\frac{1}{1+\exp\left[\beta(E^{\ast}_i(k) +\mu^{*}_{i}) \right]} \Bigg), \label{rhov0} \end{eqnarray} and \begin{eqnarray} \rho_{i}^{s} = \gamma_{i}\int\frac{d^{3}k}{(2\pi)^{3}} \frac{m_{i}^{*}}{E^{\ast}_i(k)} \Bigg(\frac{1}{1+\exp\left[\beta(E^{\ast}_i(k) -\mu^{*}_{i}) \right]}+\frac{1}{1+\exp\left[\beta(E^{\ast}_i(k) +\mu^{*}_{i}) \right]} \Bigg), \label{rhos0} \end{eqnarray} respectively, where $\beta = \frac{1}{kT}$ and $\gamma_i$ is the degeneracy factor. Through the above relations, the temperature dependence enters the values of the scalar and vector fields, hence the masses of the kaons and antikaons, and further the mass and decay width of the $\phi$ meson. In addition, in this model the isospin asymmetry and strangeness of the medium are incorporated through the parameters $\eta = -\frac{\Sigma_i \tau_{3i} \rho^{v}_{i}}{2\rho_{B}}$ and $f_s = \frac{\Sigma_i \vert s_{i} \vert \rho^{v}_{i}}{\rho_{B}}$, respectively, where $\tau_3$, $\vert s_{i} \vert$, and $\rho_B$ denote the third component of the isospin quantum number, the number of strange quarks, and the total baryonic density, respectively. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $g_{\sigma n(p)}$ & $g_{\zeta n(p) }$ & $g_{\delta n(p) }$ & $g_{\omega n(p)}$ & $g_{\rho n(p)}$ & $g_{\sigma \Lambda}$ &$g_{\zeta \Lambda}$& $g_{\delta \Lambda}$& $g_{\sigma \Sigma}$& $g_{\zeta \Sigma}$\\ \hline 10.56 & -0.46 & 2.48 & 13.35 & 5.48 &7.52&5.8&0&6.13&5.8 \\ \hline $g_{\delta \Sigma}$ & $g_{\delta \Sigma^0}$ & $g_{\sigma \Xi}$ & $g_{\zeta \Xi}$ & $g_{\delta \Xi}$&$g_{\omega \Lambda}$&$g_{\omega \Sigma}$&$g_{\rho \Sigma}$&$g_{\rho \Sigma^0}$&$g_{\omega \Xi}$\\ \hline 6.79 &0&3.78&9.14 &2.36 &$\frac{2}{3}$ $g_{\omega N}$&$\frac{2}{3}$ $g_{\omega N}$&$\frac{2}{3}$ $g_{\omega N}$&0& $\frac{1}{3}$ $g_{\omega N}$ \\ \hline $g_{\rho \Lambda}$ & $g_{\rho \Xi}$ & $g_{\phi \Lambda}$ & $g_{\phi \Sigma}$ & $g_{\phi \Xi}$&$\sigma_0$ (MeV)& $\zeta_0$(MeV) & $\chi_0$(MeV) & $d$ & $\rho_0$ ($\text{fm}^{-3}$) \\ \hline 0 &$\frac{1}{3}$ $g_{\omega N}$&$\frac{1}{3}$ $g_{\omega N}$&-$\frac{\sqrt 2}{3}$ $g_{\omega N}$&-$\frac{2 \sqrt 2}{3}$ $g_{\omega N}$ &-93.29 & -106.8 & 409.8 & 0.064 & 0.15 \\ \hline $m_\pi $(MeV) &$ m_K$ (MeV)&$ f_\pi$(MeV) & $f_K$(MeV) & $g_4$&$k_0$ & $k_1$ & $k_2$ & $k_3$ & $k_4$ \\ \hline 139 & 494 & 93.29 & 122.14 & 79.91 &2.53 & 1.35 & -4.77 & -2.77 & -0.218 \\ \hline $m_\omega$ (MeV) & $m_\rho$ (MeV) & $m_\phi$ (MeV) & & &&&&& \\ \hline 783 & 783 & 1020 & & &&&&& \\ \hline \end{tabular} \caption{Various parameters used in the present calculations in strange hadronic matter \cite{Kumar2011}.} \label{ccc} \end{table} \subsection{KAON AND ANTIKAON INTERACTIONS IN THE CHIRAL MODEL} \label{subsec2.2} In this subsection, we evaluate the in-medium mass of the $K (\bar K)$ meson via the dispersion relation \cite{Mao1999} in hot asymmetric strange hadronic matter \cite{Mishra2006,Kumar2015}. As discussed earlier, the in-medium masses of kaons and antikaons within the chiral SU(3) model have previously been studied in asymmetric nuclear matter at finite temperature, but in strange matter only at zero temperature.
In the present work, we study these properties in strange hadronic matter at finite temperature. The scalar and vector fields modify the scalar and vector densities of the baryons, which in turn modify the self-energies of the kaons and antikaons. The interaction Lagrangian density for kaons and antikaons can be written as \cite{Mishra2009} \begin{eqnarray} \cal L _{KB} & = & -\frac {i}{4 f_K^2} \Big [\Big ( 2 \bar p \gamma^\mu p +\bar n \gamma ^\mu n -\bar {\Sigma^-}\gamma ^\mu \Sigma ^- +\bar {\Sigma^+}\gamma ^\mu \Sigma ^+ - 2\bar {\Xi^-}\gamma ^\mu \Xi ^- - \bar {\Xi^0}\gamma ^\mu \Xi^0 \Big) \nonumber \\ & \times & \Big(K^- (\partial_\mu K^+) - (\partial_\mu {K^-}) K^+ \Big ) \nonumber \\ & + & \Big ( \bar p \gamma^\mu p + 2\bar n \gamma ^\mu n +\bar {\Sigma^-}\gamma ^\mu \Sigma ^- -\bar {\Sigma^+}\gamma ^\mu \Sigma ^+ - \bar {\Xi^-}\gamma ^\mu \Xi ^- - 2 \bar {\Xi^0}\gamma ^\mu \Xi^0 \Big) \nonumber \\ & \times & \Big(\bar {K^0} (\partial_\mu K^0) - (\partial_\mu {\bar {K^0}}) K^0 \Big ) \Big ] \nonumber \\ &+ & \frac{m_K^2}{2f_K} \Big [ (\sigma +\sqrt 2 \zeta+\delta)(K^+ K^-) + (\sigma +\sqrt 2 \zeta-\delta)(K^0 \bar { K^0}) \Big ] \nonumber \\ & - & \frac {1}{f_K}\Big [ (\sigma +\sqrt 2 \zeta +\delta) (\partial _\mu {K^+})(\partial ^\mu {K^-}) +(\sigma +\sqrt 2 \zeta -\delta) (\partial _\mu {K^0})(\partial ^\mu \bar {K^0}) \Big ] \nonumber \\ &+ & \frac {d_1}{2 f_K^2}(\bar p p +\bar n n +\bar {\Lambda^0}{\Lambda^0} +\bar {\Sigma ^+}{\Sigma ^+} +\bar {\Sigma ^0}{\Sigma ^0} +\bar {\Sigma ^-}{\Sigma ^-} +\bar {\Xi ^-}{\Xi ^-} +\bar {\Xi ^0}{\Xi ^0} )\nonumber \\ &\times & \big ( (\partial _\mu {K^+})(\partial ^\mu {K^-}) +(\partial _\mu {K^0})(\partial ^\mu {\bar {K^0}}) \big ) \nonumber \\ &+& \frac {d_2}{2 f_K^2} \Big [ (\bar p p+\frac {5}{6} \bar {\Lambda^0}{\Lambda^0} +\frac {1}{2} \bar {\Sigma^0}{\Sigma^0} +\bar {\Sigma^+}{\Sigma^+} +\bar {\Xi^-}{\Xi^-} +\bar {\Xi^0}{\Xi^0} ) (\partial_\mu K^+)(\partial^\mu K^-) \nonumber \\ &+ &(\bar n n +\frac {5}{6} \bar {\Lambda^0}{\Lambda^0} +\frac {1}{2} \bar {\Sigma^0}{\Sigma^0} +\bar {\Sigma^-}{\Sigma^-} +\bar {\Xi^-}{\Xi^-} +\bar {\Xi^0}{\Xi^0} ) (\partial_\mu K^0)(\partial^\mu {\bar {K^0}}) \Big ]. \label{lagd} \end{eqnarray} The first term, obtained from the kinetic part of the interaction Lagrangian, is the vectorial Weinberg--Tomozawa interaction term. The second term is obtained from the explicit symmetry breaking, and the third term arises from the kinetic terms of the pseudoscalar mesons of the chiral effective Lagrangian \cite{Mishra2006,Mishra2008}. The fourth and fifth terms are called range terms; they arise from the baryon-meson interaction Lagrangian of the chiral model \cite{Mishra2004a,Mishra2006} and are given as \begin{equation} {\cal L }_{d_1}^{BM} =\frac {d_1}{2} Tr (u_\mu u ^\mu)Tr( \bar B B), \end{equation} and \begin{equation} {\cal L }_{d_2}^{BM} =d_2 Tr (\bar B u_\mu u ^\mu B), \end{equation} where $B$ denotes the baryon octet.
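The self-energies constructed from this Lagrangian below take the scalar and vector densities of Eqs.~(\ref{rhov0}) and (\ref{rhos0}) as inputs. As an illustration (again a schematic sketch rather than our production code), these finite-temperature Fermi integrals can be evaluated for a single baryon species by standard quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def baryon_densities(m_star, mu_star, T, gamma=2.0):
    """Vector and scalar densities of one baryon species at temperature
    T; m_star, mu_star, T in MeV, densities returned in MeV^3."""
    beta = 1.0 / T

    def fermi(E, mu):
        x = np.clip(beta * (E - mu), -700.0, 700.0)  # avoid exp overflow
        return 1.0 / (1.0 + np.exp(x))

    def integrand_v(k):                   # baryon minus antibaryon
        E = np.hypot(k, m_star)
        return k**2 * (fermi(E, mu_star) - fermi(E, -mu_star))

    def integrand_s(k):                   # baryon plus antibaryon
        E = np.hypot(k, m_star)
        return k**2 * (m_star / E) * (fermi(E, mu_star) + fermi(E, -mu_star))

    pref = gamma / (2.0 * np.pi**2)       # gamma/(2 pi)^3 times 4 pi
    rho_v = pref * quad(integrand_v, 0.0, np.inf)[0]
    rho_s = pref * quad(integrand_s, 0.0, np.inf)[0]
    return rho_v, rho_s

# e.g., a nucleon-like species with toy effective mass and chemical
# potential (illustrative values only)
print(baryon_densities(m_star=650.0, mu_star=950.0, T=100.0))
\end{verbatim}
In the actual calculation, these integrals are evaluated for all eight baryon species and fed back into the field equations and the self-energies below.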
The dispersion relation for the kaon and antikaon is obtained by a Fourier transformation of the interaction Lagrangian and is given by \begin{equation} -\omega^2+ {\vec k}^2 + m_{K (\bar K)}^2 -\Pi^*(\omega, |\vec k|)=0, \label{drk} \end{equation} where $\Pi^*$ denotes the in-medium self-energy of the kaon or antikaon; for the kaon isospin doublet, ($K^+$,$K^0$), it is explicitly given as \begin{eqnarray} \Pi^*_K (\omega, |\vec k|) &= & -\frac {1}{4 f_K^2}\Big [3 (\rho^v_p +\rho^v_n) \pm (\rho^v_p -\rho^v_n) \pm 2 (\rho^v_{\Sigma^+}-\rho^v_{\Sigma^-}) -\big ( 3 (\rho^v_{\Xi^-} +\rho^v_{\Xi^0}) \pm (\rho^v_{\Xi^-} -\rho^v_{\Xi^0}) \big) \Big ] \omega\nonumber \\ &+&\frac {m_K^2}{2 f_K} (\sigma ' +\sqrt 2 \zeta ' \pm \delta ') \nonumber \\ & +& \Big [- \frac {1}{f_K} (\sigma ' +\sqrt 2 \zeta ' \pm \delta ') +\frac {d_1}{2 f_K ^2} (\rho_s ^p +\rho_s ^n +{\rho^s} _{\Lambda^0}+{\rho^s} _{\Sigma^+}+{\rho^s} _{\Sigma^0} +{\rho^s} _{\Sigma^-} +{\rho^s} _{\Xi^-} +{\rho^s} _{\Xi^0} )\nonumber \\ &+&\frac {d_2}{4 f_K ^2} \Big (({\rho^s} _p +{\rho^s} _n) \pm ({\rho^s} _p -{\rho^s} _n) +{\rho^s} _{\Sigma ^0}+\frac {5}{3} {\rho^s} _{\Lambda^0} + ({\rho^s} _{\Sigma ^+}+{\rho^s} _{\Sigma ^-}) \pm ({\rho^s} _{\Sigma ^+}-{\rho^s} _{\Sigma ^-})\nonumber \\ &+ & 2 {\rho^s} _ {\Xi^-}+ 2 {\rho^s} _ {\Xi^0} \Big ) \Big ] (\omega ^2 - {\vec k}^2), \label{sek} \end{eqnarray} where the $\pm$ signs refer to $K^+$ and $K^0$, respectively. In the above expression, the fluctuations $\sigma'(=\sigma-\sigma _0)$, $\zeta'(=\zeta-\zeta_0)$ and $\delta'(=\delta-\delta_0)$ denote the deviations of the field expectation values from their vacuum values. Also, $m_{K(\bar K)}$ in Eq.~(\ref{drk}) denotes the vacuum mass of the kaon (antikaon). In a similar manner, the in-medium self-energy for the antikaon isospin doublet, ($K^-$,$\bar {K^0}$), is evaluated as \begin{eqnarray} \Pi^*_{\bar K} (\omega, |\vec k|) &= & \frac {1}{4 f_K^2}\Big [3 (\rho^v_p +\rho^v_n) \pm (\rho^v_p -\rho^v_n) \pm 2 (\rho^v_{\Sigma^+}-\rho^v_{\Sigma^-}) - \big ( 3 (\rho^v_{\Xi^-} +\rho^v_{\Xi^0}) \pm (\rho^v_{\Xi^-} -\rho^v_{\Xi^0}) \big) \Big ] \omega\nonumber \\ &+&\frac {m_{\bar K}^2}{2 f_{\bar K}} (\sigma ' +\sqrt 2 \zeta ' \pm \delta ') \nonumber \\ & +& \Big [- \frac {1}{f_{\bar K}} (\sigma ' +\sqrt 2 \zeta ' \pm \delta ') +\frac {d_1}{2 f_{\bar K}^2} (\rho_s ^p +\rho_s ^n +{\rho^s} _{\Lambda^0}+{\rho^s} _{\Sigma^+}+{\rho^s} _{\Sigma^0} +{\rho^s} _{\Sigma^-} +{\rho^s} _{\Xi^-} +{\rho^s} _{\Xi^0} )\nonumber \\ &+&\frac {d_2}{4 f_K ^2} \Big (({\rho^s} _p +{\rho^s} _n) \pm ({\rho^s} _p -{\rho^s} _n) +{\rho^s} _{\Sigma ^0}+\frac {5}{3} {\rho^s} _{\Lambda^0} + ({\rho^s} _{\Sigma ^+}+{\rho^s} _{\Sigma ^-}) \pm ({\rho^s} _{\Sigma ^+}-{\rho^s} _{\Sigma ^-})\nonumber \\ &+ & 2 {\rho^s} _ {\Xi^-}+ 2 {\rho^s} _ {\Xi^0} \Big ) \Big ] (\omega ^2 - {\vec k}^2), \label{sekb} \end{eqnarray} with the $\pm$ signs for $K^-$ and $\bar {K^0}$, respectively. In strange hadronic matter, the in-medium mass of the $K (\bar K)$ meson is evaluated by solving Eq.~(\ref{drk}) under the condition $m_{K(\bar K)}^*=\omega(|\vec k|=0)$. The parameters $d_1$ and $d_2$ in the expressions for the self-energies are taken as $ 2.56/m_K $ and $ 0.73/m_K $, respectively \cite{Mishra2019}, fitted to the empirical values of the kaon-nucleon ($KN$) scattering length \cite{Barnes1994}. In the present work, the vacuum mass of $K^+(K^-)$ is taken as 494 MeV, whereas for $K^0(\bar K^0)$ it is taken as 498 MeV.
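Note that at $|\vec k|=0$ the dispersion relation, Eq.~(\ref{drk}), is a quadratic equation in $\omega$, since the self-energies of Eqs.~(\ref{sek}) and (\ref{sekb}) have the generic structure $\Pi^*(\omega, 0)=\alpha\,\omega+\beta+\gamma\,\omega^2$, where $\alpha$, $\beta$ and $\gamma$ collect the density- and field-dependent combinations written above. A minimal sketch of the corresponding root finding, with purely illustrative coefficients rather than our fitted medium values, reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def effective_mass(m_vac, alpha, beta, gamma):
    """Solve -w^2 + m_vac^2 - Pi(w, k=0) = 0 for the positive root,
    with the schematic self-energy Pi(w) = alpha*w + beta + gamma*w^2."""
    def dispersion(w):
        return -w**2 + m_vac**2 - (alpha * w + beta + gamma * w**2)
    return brentq(dispersion, 1.0, 2.0 * m_vac)  # bracket physical root

# toy coefficients: an attractive (antikaon-like) and a repulsive
# (kaon-like) case, illustrative values only
print(effective_mass(494.0, alpha=60.0, beta=3.0e4, gamma=0.05))
# -> m* below the vacuum mass (attraction)
print(effective_mass(494.0, alpha=-30.0, beta=-2.0e4, gamma=0.02))
# -> m* above the vacuum mass (repulsion)
\end{verbatim}
In the full calculation, $\alpha$, $\beta$, and $\gamma$ are rebuilt at every density-temperature point from the vector and scalar densities and the field shifts $\sigma'$, $\zeta'$, and $\delta'$.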
\subsection{\label{subsec2.3} IN-MEDIUM MASS AND DECAY WIDTH OF $\phi$ MESON } \begin{figure}[h] \includegraphics[scale=0.1]{loop1.eps} \caption{ $\phi K \bar{K}$ interaction at one loop level.} \label{loop} \end{figure} In this subsection, we first compute the in-medium self-energy of the $\phi$ meson for the decay process $\phi$ $\rightarrow$ $K \bar K$ at one loop level (see \cref{loop}). The interaction Lagrangian $\mathcal{L}_{int}$ \cite{{Ko1992},{Klingl1996}} is given as \begin{equation} \label{eqn:Lint} \mathcal{L}_{int} = \mathcal{L}_{\phi K \bar K}, \end{equation} with \begin{equation} \label{eqn:phikk} \mathcal{L}_{\phi K \bar {K}} = i g_{\phi}\phi^{\mu} \left[ \bar K(\partial_{\mu} K)-(\partial_{\mu} \bar K)K\right]. \end{equation} In the above, $g_\phi$ is the coupling constant, and $K \left(\begin{array}{c} K^{+} \\ K^{0} \end{array} \right)$ and $\bar K \left(K^{-}\;\overline{K}^{0}\;\right)$ are the isospin doublets of kaons and antikaons. In the present work we have not considered interactions of the type $\phi\phi K \bar{K}$, since their contribution to the in-medium mass and decay width is very small compared with that of the $\phi K \bar K$ interaction \cite{Martinez2016}. In the rest frame of the $\phi$ meson, the scalar part of the in-medium self-energy for the loop diagram can be written as \begin{equation} \label{eqn:phise} i\Pi^*_{\phi}(p)=-\frac{8}{3}g_{\phi}^{2}\int \frac {d^4q}{(2\pi)^4} \vec{q}^{\,2} D_{K}(q)D_{\bar K}(q-p) \, , \end{equation} where $D_{K}(q)$=$\left(q^{2}-m_{K}^{*^{2}}+i\epsilon\right)^{-1}$ is the kaon propagator and $D_{\bar K}(q$-$p)$=$\left((q-p)^{2}-m_{\bar K}^{*^{2}}+i\epsilon\right)^{-1}$ is the antikaon propagator; $p=(p^{0}=m^*_{\phi},\vec{0})$ is the $\phi$ meson four-momentum vector, with $m^*_{\phi}$ denoting the in-medium $\phi$ meson mass; $m^*_{K}$(=$\frac{m_{K^+}^{*}+m_{K^0}^{*}}{2}$) and $m^*_{\bar K}$(=$\frac{m_{K^-}^{*}+m_{\bar K^0}^{*}}{2}$) are the average masses of the kaons and antikaons, respectively. The values of $m_{K^+}^{*}$, $m_{K^0}^{*}$, $m_{K^-}^{*}$ and $m_{\bar K^0}^{*}$ are obtained by solving Eq.(\ref{drk}), and the in-medium mass of the $\phi$ is determined from the real part of $\Pi^*_{\phi}(p)$ by the following relation \cite{Martinez2016} \begin{equation} \label{eqn:phimassvacuum} m_{\phi}^{*^{2}}=\left(m_{\phi}^{0}\right)^{2}+\Re\Pi^*_{\phi}(m_{\phi}^{*^{2}}), \end{equation} where $m_{\phi}^{0}$ is the bare mass of the $\phi$ meson. The real part of the self-energy can be written as \cite{Martinez2016} \begin{equation} \label{eqn:repiphi} \Re\Pi^*_{\phi}=-\frac{4}{3}g_{\phi}^{2} \, \mathcal{P}\!\! \int \frac {d^3q} {(2\pi)^3} \vec{q}^{\,2}\frac{(E^*_K+E^*_{\bar K})}{E^*_{K} E^*_{\bar K} ((E^*_K+E^*_{\bar K})^2-m_{\phi}^{*^2})} \, , \end{equation} where $\mathcal{P}$ denotes the principal value of the integral in Eq.~(\ref{eqn:phise}), $E^*_{K}=(\vec{q}^{\,2}+m_{K}^{*^2})^{1/2}$ and $E^*_{\bar K}=(\vec{q}^{\,2}+m_{\bar K}^{*^2})^{1/2}$. The integral in Eq.(\ref{eqn:repiphi}) is divergent; to avoid the singularities, we regularize it with the help of a phenomenological form factor with a cut-off parameter $\Lambda_{c}$ \cite{Krein2010}, whose value is taken as 3 GeV in the present investigation. The regularized integral is given as \begin{equation} \label{eqn:regphi} \Re\Pi^*_{\phi}=-\frac{4}{3}g_{\phi}^{2} \, \mathcal{P}\!\!
\int^{\Lambda_c}_{0} \frac {d^3q} {(2\pi)^3} \vec{q}^{\,4}\left( \frac{\Lambda^2_c+m_{\phi}^{*^2}}{\Lambda^2_c+4E_{K}^{*^2}}\right)^4 \frac{(E^*_K+E^*_{\bar K})}{E^*_{K} E^*_{\bar K} ((E^*_K+E^*_{\bar K})^2-m_{\phi}^{*^2})} \, . \end{equation} The value of the coupling constant $g_{\phi}$ is determined as 4.539 from the empirical width of the $\phi$ meson in vacuum \cite{PDG2015}. The bare mass of the $\phi$ is fixed through the constant $g_\phi$ and the vacuum mass of the $\phi$ meson, which is taken as 1019.461 MeV \cite{PDG2015}. The decay width of the $\phi$ meson is calculated from the imaginary part of the self-energy, $\Im\Pi^*_{\phi}$, and is given in terms of the $\phi$, $K$ and $\bar K$ masses as \cite{Li1995} \begin{equation} \label{eqn:phidecaywidth} \Gamma^*_{\phi} = \frac{g_{\phi}^{2}}{24\pi } \frac{1}{m_{\phi}^{*^5}} \left((m_{\phi}^{*^2}-(m_{K}^{*}+m_{\bar K}^{*})^2)(m_{\phi}^{*^2}-(m_{K}^{*}-m_{\bar K}^{*})^2)\right)^{3/2} \, . \end{equation} \section{Results and Discussions} \label{sec:3} In this section, we discuss the numerical results obtained in the present work. First, we discuss the in-medium behavior of the scalar fields in subsection \ref{subsec:3.1}. In subsection \ref{subsec:3.2}, the density, temperature, isospin asymmetry and strangeness fraction dependence of the kaon and antikaon masses is presented, and finally subsection \ref{subsec:3.3} is devoted to the results for the $\phi$ meson mass and decay width. \subsection{Scalar Fields of the Chiral Model in Strange Hadronic Matter} \label{subsec:3.1} As discussed in subsection \ref{subsec2.1}, within the chiral model we have solved the coupled equations of motion of the $\sigma$, $\zeta$, $\delta$, $\chi$, $\omega$, $\rho$ and $\phi$ mesonic fields. In \cref{fieldsT0}, we plot the variation of the scalar fields $\sigma$ and $\zeta$ as a function of baryonic density at finite values of temperature. We also examine the effect of strangeness fraction and isospin asymmetry in this plot. For all combinations of $\eta$ and $f_s$, we see that the magnitudes of the $\sigma$ and $\zeta$ fields decrease linearly up to nuclear saturation density ($\rho_0$) and afterward decrease more slowly with a further increase in baryonic density. However, the $\zeta$ field changes much less than the $\sigma$ field; for example, in a symmetric and non-strange medium at zero temperature and $\rho_B$=4$\rho_0$, the value of the $\sigma$ field changes by 67$\%$ (compared with its vacuum value), whereas the $\zeta$ field changes by only 14$\%$. Furthermore, considering the effect of isospin asymmetry as a function of density at a given temperature, the scalar-isoscalar $\sigma$ field shows a clear $\eta$ dependence, whereas the strange scalar-isoscalar $\zeta$ field shows a negligible $\eta$ dependence. This follows from the quark content of the respective fields: the former contains $u$ and $d$ quarks, which are sensitive to the isospin asymmetry of the medium, whereas the latter contains a strange quark pair ($s\bar s$). This picture is reversed in the strange medium: in the presence of hyperons, the strange $\zeta$ field shows appreciable modifications, whereas the non-strange $\sigma$ field shows very little variation. For example, in a symmetric and strange medium at zero temperature and $\rho_B$=4$\rho_0$, the value of the $\zeta$ field changes by 16$\%$ (compared with its value at $f_s$=0), whereas the $\sigma$ field changes by only 2$\%$.
In the symmetric medium, moving from zero to non-zero temperature, we observe that in the high-density regime the value of the $\sigma$ field changes appreciably in both the strange and non-strange medium; at a given value of density, the magnitude of the $\sigma$ field increases with an increase in temperature. For the non-strange medium, the strange $\zeta$ field shows a negligible $T$ dependence, but the dependence becomes appreciable in the strange medium. This is because the presence of strange content in the medium (hyperons) modifies the scalar densities of the hyperons (which depend upon the Fermi distribution functions \cite{Kumar2019}) and therefore the strange scalar field $\zeta$. As discussed earlier, in the present investigation the scalar and vector fields are calculated from the coupled equations of motion, which contain the expressions for the scalar and vector densities of baryons. In \cref{fieldsT100}, the variation of the in-medium $\delta$ and $\chi$ fields is shown for the same medium attributes. As discussed earlier, the asymmetry dependence is introduced in this model by the incorporation of the $\delta$ field and the $\eta$ parameter; therefore the $\delta$ field shows appreciable variations in asymmetric matter but no modification in symmetric matter. At $\eta$=0.5 in the non-strange medium, the magnitude of $\delta$ increases with density, and this increase is more pronounced in the presence of strange baryons. At finite baryonic density, the $\delta$ field undergoes a smaller drop at high temperature than in the $T$=0 situation. The dilaton field $\chi$, which is introduced in the model to mimic the trace anomaly property of QCD \cite{Kumar2010}, varies only weakly with an increase in baryonic density. The magnitude of the $\chi$ field decreases as a function of baryonic density, and it shows variations due to asymmetry and strangeness in the high-density regime. This is because the $\chi$ field is solved simultaneously in the coupled equations of motion along with the other medium fields \cite{Kumar2020}. In the present chiral SU(3) model, the modifications of the scalar fields with the medium's temperature are consistent with the results of the chiral quark mean-field model \cite{Wang2001}. \begin{figure}[h] \includegraphics[width=16cm,height=21cm]{sg.eps} \caption{(Color online) The in-medium $\sigma$ and $\zeta$ fields in nuclear and hyperonic matter. } \label{fieldsT0} \end{figure} \begin{figure}[h] \includegraphics[width=16cm,height=21cm]{dc.eps} \caption{(Color online) The in-medium $\delta$ and $\chi$ fields in nuclear and hyperonic matter. } \label{fieldsT100} \end{figure} \subsection{Kaons and Antikaons in Strange Matter at Finite Temperature} \label{subsec:3.2} In this subsection, we discuss the numerical results for the kaon and antikaon masses in strange hadronic matter. We use the medium-induced scalar and vector densities of baryons in the dispersion relation (Eq.(\ref{drk})) to calculate the in-medium masses of these mesons. In \cref{tablems}, we list the in-medium masses of the kaons and antikaons for distinct medium parameters. We plot the in-medium masses of $K^+$ and $K^0$ as a function of baryonic density in \cref{ms_k}, for different values of asymmetry, strangeness and temperature. In a non-strange medium, the masses of the $K^+$ and $K^0$ mesons increase with an increase in baryonic density. They increase almost linearly at high temperature but comparatively slowly at lower temperatures.
As discussed earlier, the various terms of Eq.(\ref{lagd}) describe the kaon(antikaon)-baryon interactions, and the first term of Eq.(\ref{lagd}) (known as the Weinberg-Tomozawa term) gives repulsive contributions to the masses of the $K^+$ and $K^0$ mesons \cite{Mishra2009,Mishra2008}. The meson exchange term arising from the $\sigma$ and $\delta$ fields is attractive for both the $K^+$ and $K^0$ mesons. In isospin symmetric matter (at a given value of $f_s$), the $K^0$ mass shows exactly the same behaviour as $m^*_{K^+}$, because the $K^+$ and $K^0$ mesons belong to the same isospin doublet. However, in asymmetric nuclear matter, the masses of these mesons do not remain the same, because of the asymmetric terms ($\rho_i-\rho_j$; $i\neq j$) present in the Weinberg-Tomozawa term and the isospin-dependent range ($d_2$) terms. The self-energy of the $K^+$ meson depends largely upon the scalar and vector densities of the baryons with positive $\tau_3$ values, whereas the self-energy of the $K^0$ depends largely upon the densities of baryons with negative $\tau_3$ values; due to this, the Weinberg-Tomozawa term becomes more repulsive for $K^0$ mesons and suppresses the attractive contributions from the $d_2$ term. When we move from the non-strange to the strange medium, the mass of the $K^+$ meson first increases slightly and then decreases as a function of baryonic density, for all temperatures. This is due to the fact that, in going from the non-strange to the strange medium, the $d_1$ range term becomes more negative whereas the $d_2$ term becomes less negative. The attractive $d_1$ term therefore dominates the range terms, and overall the masses of $K^+$ and $K^0$ decrease in the strange medium. The temperature effects on the masses of the $K^+$ and $K^0$ mesons are small in non-strange asymmetric nuclear matter, whereas in strange asymmetric matter they show appreciable modifications. In the high-density regime, the masses of the $K$ mesons decrease with decreasing temperature. This behaviour can be explained on the basis of the temperature dependence of the baryon scalar densities: as we increase the medium's temperature, the attractive contributions from the range terms start decreasing. In a similar manner, we plot the in-medium masses of the antikaons $K^-$ and $\bar K^0$ in \cref{ms_kb}. For the non-strange medium, we observe the opposite behavior in the masses of the antikaons compared to the kaons, because of the attractive contributions from the Weinberg-Tomozawa term. Furthermore, since the $K^-$ and $\bar K^0$ mesons also belong to an isospin doublet, their masses do not differ in symmetric nuclear matter (for a given value of strangeness). The masses of the $K^-$ and $\bar K^0$ mesons show little impact of temperature in non-strange but asymmetric baryonic matter, whereas in the strange asymmetric medium they show an appreciable variation with temperature. The explanation lies in the same facts as discussed for the $K$ mesons in the previous paragraph. Using the quark-meson coupling model, the mass of the $K$ meson was studied in non-strange symmetric nuclear matter at zero temperature in Ref. \cite{Martinez2016}, where the authors observed that the kaon mass decreases as a function of baryonic density. Moreover, in Ref. \cite{Li1995}, using a relativistic transport model, Li \textit{et al.} studied the mass of the $K(\bar K)$ meson, along with the masses of the $\rho$ and $\phi$ mesons, in a hadronic medium consisting of nucleons, pions and deltas.
They observed that the mass of the kaons increases with increasing baryonic density, while the antikaon mass decreases. Using the kaon and antikaon in-medium masses, one can also calculate the in-medium optical potential at finite momentum via the relation $U^*_{K (\bar K)}(\omega, k) = \omega (k) - m_{K (\bar K)}$ \cite{Mishra2008,Mishra2009}. In Refs. \cite{Lutz1998,Ramos2000,Tolos2001,Tolos2006,Lutz2008}, vacuum antikaon-nucleon scattering amplitudes obtained from coupled-channel techniques, including the method of partial waves, were used to calculate the self-energies of the kaons in hadronic matter, and an attractive potential in the range of 40-60 MeV was found. \begin{figure} \includegraphics[width=16cm,height=21cm]{ms_k.eps} \caption{(Color online) The in-medium masses of the isospin doublet $(K^+,K^0)$ in nuclear and hyperonic matter. } \label{ms_k} \end{figure} \begin{figure} \includegraphics[width=16cm,height=21cm]{ms_kb.eps} \caption{(Color online) The in-medium masses of the isospin doublet $(K^-,\bar K^0)$ in nuclear and hyperonic matter. } \label{ms_kb} \end{figure} \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{4}{c|}{T=50 MeV} & \multicolumn{4}{c|}{T=150 MeV} \\ \cline{3-10} &$f_s$ & \multicolumn{2}{c|}{$\eta$=0} & \multicolumn{2}{c|}{$\eta$=0.5 }& \multicolumn{2}{c|}{$\eta$=0}& \multicolumn{2}{c|}{$\eta$=0.5 }\\ \cline{3-10} & &$\rho_0$&$4\rho_0$ &$\rho_0$ &$4\rho_0$ & $\rho_0$ &$4\rho_0$&$\rho_0$&$4\rho_0$ \\ \hline $ m^{*}_{{K}^+}$& 0&524&565.1&509.5&527&526.2&587.9&510.6&530.17\\ \cline{2-10} &0.5&503.9&470.3&506.6&498.2&506&491.7&498.9&475.2 \\ \cline{1-10} $m^*_{{K}^0}$&0&524&565.1&532.7&613.4&526.2&587.9&534.5&625.1\\ \cline{2-10} &0.5&503.9&470.4&495.64&426.9&506&491.7&506.7&490.3 \\ \cline{1-10} $ m^{*}_{K^-}$&0&458.7&314.9&466.7&352.2&461.8&326.2&468.5&355.1 \\ \cline{2-10} &0.5&473.2&344.1&453.6&302.1&475.6&360.3&470&353 \\ \hline $ m^*_{\bar K^0}$&0&458.7&314.9&444.5&283.3&461.8&326.2&446.6&287.7\\ \cline{2-10} &0.5&473.2&344.1&487.5&366.8&475.6&360.3&474.3&347.96 \\ \cline{1-10} \end{tabular} \caption{In-medium masses of the $K^+$, $K^0$, $K^-$ and $\bar K^0$ mesons (in MeV) for different medium parameters.} \label{tablems} \end{table} \subsection{ In-Medium Mass and Decay Width of $\phi$ Meson} \label{subsec:3.3} The $\phi$ meson mass is calculated through the in-medium self-energy of the $\phi$ meson at one loop level (see subsection \ref{subsec2.3}). In the previous works of Refs. \cite{Martinez2016,Martinez2017}, the loop integral was solved under the assumption $m^{*}_{K }$=$m^{*}_{\bar K}$, but in the present work the self-energy of the $\phi$ meson is calculated by solving the regularized loop integral in the presence of medium-modified kaons and antikaons, which behave differently, as discussed in subsection \ref{subsec:3.2}. Note that the temperature dependence of the $\phi$ meson mass and decay width in our present calculations enters through the temperature dependence of the kaon and antikaon masses. In \cref{mphi}, we show the in-medium mass of the $\phi$ meson as a function of baryonic density, considering the effects of isospin asymmetry, strangeness and temperature. The medium-induced mass for different parameter combinations is tabulated in \cref{tabledw1}. From \cref{mphi}, we observe that the mass of the $\phi$ meson decreases as a function of medium density.
For all combinations of $\eta$ and $f_s$, we see that if the temperature of the medium is decreased, the mass of the $\phi$ meson also decreases, reflecting the temperature dependence of the kaon and antikaon masses. Furthermore, the effect of temperature is more visible in symmetric matter than in asymmetric matter. Moreover, an increase of the strange content of the hadronic medium leads to a larger decrease in $m^*_\phi$. The in-medium behaviour of the $\phi$ meson mass reflects the in-medium masses of $K$ and $\bar K$, as the self-energy of the $\phi$ meson loop is calculated using these medium-modified quantities. In Ref. \cite{Martinez2016}, using the in-medium $\phi$ self-energy in non-strange symmetric nuclear matter, the medium-induced mass of the $\phi$ was studied. In that article, Martinez \textit{et al.} observed that the mass of the $\phi$ meson decreases with increasing nucleonic density; they plotted the results for three different choices of the cut-off parameter $\Lambda_c$, i.e., 1, 2 and 3 GeV, and found that $m^*_\phi$ decreases more for larger $\Lambda_c$. At $\Lambda_c$=3 GeV and $\rho_B=\rho_0$, they observed a 25 MeV decrease in the $\phi$ meson mass, whereas we observe a 2.59 MeV drop. This is because in our calculation kaons and antikaons behave differently in the medium; for example, as discussed earlier, the Weinberg-Tomozawa term gives a repulsive contribution to the $K$ meson mass and an attractive one to the $\bar{K}$ meson mass. The values of $m^*_K$ and $m^*_{\bar K}$ are found to be 522.49 and 456.75 MeV in symmetric nuclear matter at density $\rho_0$ and temperature $T = 0$. However, in Ref. \cite{Martinez2016}, the difference between the in-medium masses of kaons and antikaons is not taken into account, and at $\rho_0$ the mass $m^*_K$ is simply taken as $430$ MeV. In our work, we observe a smaller downward shift in the $K$ and $ \bar K$ masses and therefore a smaller drop in the $\phi$ mass. Using QCD sum rules at zero temperature, Klingl \textit{et al.} calculated a 1 $\%$ drop in the $\phi$ mass at nuclear saturation density \cite{Klingl1998}. Furthermore, combining the chiral SU(3) model with QCD sum rules, the in-medium mass of the $\phi$ meson was studied in asymmetric strange matter at zero temperature in Ref. \cite{Mishra2015}, where a rather small drop, a mass-shift of about 20 MeV at a density of 5$\rho_0$ in the nuclear medium, was reported. In \cref{gphi}, we plot the in-medium partial decay width of the $\phi$ meson decaying into $K \bar K$ pairs. The formula for the decay width was derived by extracting the imaginary part of the loop integral. The medium-modified values of $\Gamma^*_\phi$ are also listed in \cref{tabledw1}, along with $m^*_\phi$. In this figure, we observe that the partial decay width increases (broadens) with increasing baryonic density for all cases of strangeness, temperature and isospin asymmetry: the decay width shows the opposite behavior to that observed for $m^*_\phi$. In asymmetric matter, we observe the temperature effects to be less appreciable, as was also the case for $m^*_\phi$. However, in the case of the decay width, at a given value of baryonic density the in-medium decay width increases more for low temperature, whereas the opposite holds for the mass of the $\phi$ meson. The strangeness of the medium leads to a larger increase of the decay width with baryonic density, because of the distinct behavior of the $K$ and $\bar{K}$ mesons in strange matter. This highlights the importance of studying the effects of strangeness on the properties of $\phi$ mesons.
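As a cross-check of Eq. (\ref{eqn:phidecaywidth}), the following minimal Python sketch (an illustration, not the code used for the present results) evaluates $\Gamma^*_\phi$ directly from the tabulated in-medium masses; with the $T=50$ MeV, $\eta=0$, $f_s=0$, $\rho_B=\rho_0$ entries of \cref{tablems} and \cref{tabledw1}, it reproduces the quoted width of about 4.8 MeV:
\begin{verbatim}
import numpy as np

g_phi = 4.539    # coupling fixed from the vacuum phi width (Sec. 2.3)

def gamma_phi(m_phi, m_K, m_Kbar):
    # Partial width of phi -> K Kbar, Eq. (phidecaywidth); masses in MeV.
    plus  = (m_K + m_Kbar)**2
    minus = (m_K - m_Kbar)**2
    return (g_phi**2/(24.0*np.pi)/m_phi**5
            * ((m_phi**2 - plus)*(m_phi**2 - minus))**1.5)

# T = 50 MeV, eta = 0, f_s = 0, rho_B = rho_0 (tables above):
# m*_K = 524 MeV, m*_Kbar = 458.7 MeV, m*_phi = 1017.6 MeV
print(gamma_phi(1017.6, 524.0, 458.7))   # ~ 4.8 MeV
\end{verbatim}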
We have also calculated the decay width for different values of $\Lambda_c$ and observed that in strange matter the trend of the decay width with baryonic density remains the same, but its broadening decreases as we lower the value of the cut-off parameter, which is consistent with the observations of Ref. \cite{Martinez2016}. As for the other theoretical and experimental investigations discussed in the introduction, the decrease (increase) of the $\phi$ meson mass (decay width) found in the present investigation is consistent with the existing literature, with some distinctions. The cause of these distinctions may lie in the different estimations of the kaon-antikaon loop contributions in the various approaches. On the application side, by utilizing the decay width, the production of $\phi$ mesons in $pN$ collisions can be modeled \cite{Polyanskiy2011}. The comparison of experimental data with model calculations will help to constrain the absorption, production and momentum dependence of the $\phi$ meson in the hadronic medium \cite{Paryev2018}. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{4}{c|}{T=50 MeV} & \multicolumn{4}{c|}{T=150 MeV} \\ \cline{3-10} &$f_s$ & \multicolumn{2}{c|}{$\eta$=0} & \multicolumn{2}{c|}{$\eta$=0.5 }& \multicolumn{2}{c|}{$\eta$=0}& \multicolumn{2}{c|}{$\eta$=0.5 }\\ \cline{3-10} & &$\rho_0$&$4\rho_0$ &$\rho_0$ &$4\rho_0$ & $\rho_0$ &$4\rho_0$&$\rho_0$&$4\rho_0$ \\ \hline $m^*_\phi$&0&1017.6&999.9&1016.3&1001.3&1018.7&1006.1&1017&1003.3 \\ \cline{2-10} &0.5&1016.3&987&1015&984&1017&993&1016&990\\ \cline{1-10} $\Gamma^*_\phi$ &0&4.8&26.5&5.8&24.4&3.9&18&5.2&21.6 \\ \cline{2-10} &0.5&5&47.5&6&53.1&5&35.9&6&41.5 \\ \cline{1-10} \hline \end{tabular} \caption{Medium-induced $\phi$ meson mass and partial decay width (in MeV) for the $\phi$ $\rightarrow$ $K\bar K$ process, for different parameters of the medium.} \label{tabledw1} \end{table} \section{SUMMARY} \label{sec:4} To summarize, using an effective Lagrangian approach we calculated the medium-modified mass and decay width of the $\phi$ meson by employing in-medium $K$ and $\bar K$ masses from the chiral SU(3) model. We calculated these properties up to four times the nuclear saturation density, considering nucleons and hyperons as the degrees of freedom of the medium. At finite temperature, we observed an appreciable effect of strangeness on the in-medium $K$ and $\bar K$ mesons. The kaon-baryon interactions in a strange medium lead to a decrease in the $K$ and $ \bar K$ masses, with the antikaon masses decreasing more appreciably than the kaon masses. Despite the significant drop in the $K$ and $ \bar K$ masses, we observed only a small downward mass-shift of the in-medium $\phi$ meson. The impact of temperature is smaller in the asymmetric baryonic medium and larger in the symmetric medium. The decay width, on the other hand, shows a broadening, which is enhanced with increasing strange content of the medium. In an extension of this work, we will study further effects on the $\phi$ meson, such as strangeness enhancement in $\phi$-mesic nuclei \cite{Jparc} and the absorption and production of the $\phi$ meson in the hadronic medium \cite{Polyanskiy2011,Paryev2018}. Future experimental efforts are required to understand the medium-induced changes of the $\phi$ meson properties in a strange medium. A more systematic study of the mass-shift of vector mesons with higher statistics is planned by the J-PARC E16 collaboration \cite{E16}.
There is also a proposal at J-Lab (following the 12 GeV upgrade) to study the binding of helium nuclei with the $\phi$ and $\eta$ mesons \cite{Jlab}. Furthermore, the $K^+/K^0$ and $K^-/\bar K^0$ ratios for different isospins of the beam and target are promising observables for studying asymmetry effects in the CBM experiment at the future FAIR project at GSI, Germany, and at the Rare Isotope Accelerator (RIA) laboratory in the USA \cite{Mishra2009}. \begin{figure} \includegraphics[width=16cm,height=16cm]{mphi.eps} \caption{(Color online) The in-medium mass of the $\phi$ meson in nuclear and hyperonic matter. } \label{mphi} \end{figure} \begin{figure} \includegraphics[width=16cm,height=16cm]{gphi.eps} \caption{(Color online) The in-medium decay width of the $\phi$ $\rightarrow$ $K \bar K$ channel in nuclear and hyperonic matter. } \label{gphi} \end{figure} \section*{Acknowledgement} One of the authors (R.K.) sincerely acknowledges the support received for this work from the Ministry of Science and Human Resources Development (MHRD), Government of India, via an institute fellowship of the National Institute of Technology Jalandhar.
\section{Introduction} As an essential problem in network science, network cluster detection is significant for computer science\cite{1}, biology\cite{2,7,24}, communication and social networks\cite{3,4}, marketing strategy\cite{5}, and so on, and it has gained much attention from researchers in related fields. In recent years especially, as networks have come to be understood more and more deeply, the study of network cluster detection has produced rich results\cite{6}. Many detection algorithms and evaluation criteria have been proposed. Some of the algorithms are based on operations on the network structure\cite{7,8,9,23}, some on spectral analysis\cite{10,11}, some on network dynamics\cite{6,12,24}, and so on. The criteria range from the Q function\cite{13} to association quality, overlapping quality\cite{14}, benchmark graphs\cite{25}, etc. In terms of categories, there are overlapping clustering\cite{15} and non-overlapping clustering\cite{7,8,9}. \\\indent The new network cluster detection method we put forward is based on a measure on the associated bigraph (AG). In this paper, discrete equidistant imbedding (DEI) and continuous imbedding (CI) provide two different measures and two corresponding methods. \section{AG and THE TWO METHODS} In this section, we define the associated bigraph (AG) and, based on it, propose two methods, the DEI method and the CI method. To test the methods, we give partitions of a computer-generated network, the Zachary network\cite{17} and the Dolphin network\cite{18}. Finally, we compare the results of our methods with those of the modularity method in Gephi\cite{22}. \subsection{Definition of AG and DEI} Suppose a graph $G=(V,E)$, where $V$ refers to its vertex set, $E$ refers to its edge set and $|V|=N$. Then the associated bigraph of $G$ is $G_A=(V_1\bigcup V_2, E_A)$. If $V=\{v_1,v_2,\cdots,v_N\}$, then $V_1=\{v_{11},v_{12},\cdots,v_{1N}\}$ and $V_2=\{v_{21},v_{22},\cdots,v_{2N}\}$, where for any $i$, $v_i$ corresponds to $v_{1i}$ and $v_{2i}$, and $(v_{1i},v_{2j})\in E_A$ if and only if $(v_i,v_j)\in E$. It is easy to see that $G_A$ is a bigraph with $V_1$ and $V_2$ as its two parts and $|E_A|=|E|$ (treating undirected edges as bidirectional). If we merge the corresponding vertexes in $V_1$ and $V_2$, $G_A$ is equal to $G$. See FIG.\ref{fg:1}. \begin{figure}[!ht]\centering \includegraphics[scale=0.22]{FIG1.eps} \caption{A graph and its AG, equidistantly imbedded on two lines.}\label{fg:1} \end{figure} We place the vertexes of the AG as in FIG.\ref{fg:1}: with equal intervals, we place the vertexes of the sets $V_1$, $V_2$ on two parallel lines $(L_1\&L_2)$ and let those with corresponding labels be at corresponding positions. We call the placing pattern described above discrete equidistant imbedding (DEI). For a given graph, the AG has $N!/2$ different DEIs (fewer if the graph has some symmetry). Without loss of generality, we let the allowed coordinates of the vertexes in a DEI successively be $1,2,3\dots N$; thus, the distance between adjacent vertexes is $1$. \subsection{DEI method} Now, we consider simple graphs (undirected, non-weighted, without self-loops or multiple edges). If there are cluster structures in such graphs, then among the different DEIs of an AG there is at least one in which the vertexes are arrayed in the sequence of the clusters. That is to say, vertexes of the same cluster will be placed together. In detail, different clusters will be placed nearer if they have closer relations.
In the interior of a cluster, vertexes with closer relations are placed nearer. This arrangement is called the optimal DEI. \\\indent We define the distance in a DEI as follows: the distance between $v_{1i}$ and $v_{1j}$ is $|x_i-x_j|$, where $x_i$ and $x_j$ are the coordinates of $v_{1i}$ and $v_{1j}$. If the edge $(v_{1i},v_{2j})$ exists, we define the length of $(v_{1i},v_{2j})$ as $|x_i-x_j|$. Let \begin{equation} Z = \sum\limits_{ij} {{a_{ij}}| {{x_i} - {x_j}}|}. \label{eq:1} \end{equation} We treat $Z$ as an objective function and minimize it under the condition of DEI; the solution is the optimal DEI. \\\indent If an edge $a$ connects $v_{1i}$ and $v_{2j}$ (suppose $i<j$), we can find it in the interval between $k$ and $k+1$ (FIG.\ref{fg:1}), where $i\le k<k+1\le j$. The number of edges found in the interval between $k$ and $k+1$ is defined as the cross of $k$. Letting the numbers of crosses be $\{m_1,m_2,\cdots,m_{N-1}\}$, it is easy to see that $Z=\sum\limits_i {{m_i}}$, which means the optimal DEI corresponds to \textquoteleft the minimum sum of crosses\textquoteright. \\\indent This optimization is equivalent to the following operation on the adjacency matrix $A$: $|x_i-x_j|$ is the absolute difference between the column and row indices of the element $a_{ij}$, which measures the distance of the element from the main diagonal. In order to minimize $Z$, we move the non-zero elements as near as possible to the main diagonal by swapping vertexes. \\\indent Actually, this definition of $Z$ may give \textquoteleft greater weight\textquoteright~ to vertexes with larger degree. In order to minimize $Z$, some vertexes with large degree may draw the vertexes connected to them close to themselves, which may drown out the structures of the other vertexes. In other words, the edges of some vertexes form so large a proportion of the total edges that these vertexes affect the arrangement too strongly, while the effect of the other vertexes becomes unimportant. In order to avoid this, we have to correct $Z$. A natural correction is to normalize the contribution of each vertex by its degree: \begin{equation} Z = \sum\limits_{ij} {\frac{{{a_{ij}}| {{x_i} - {x_j}}|}}{{{k_i}}}}. \label{eq:2} \end{equation} Since the graph is undirected, Eq.(\ref{eq:2}) is equivalent to \begin{equation} Z = \frac{1}{2}\sum\limits_{ij} {{a_{ij}}(\frac{1}{{{k_i}}} + \frac{1}{{{k_j}}})\left| {{x_i} - {x_j}} \right|}. \label{eq:3} \end{equation} Although we correct $Z$ by $1/k$, people may have different opinions on whether this correction is reasonable: some may believe that a vertex with large degree should have a greater effect. Our view is that the definition of clustering need not be unique; different definitions should be allowed in different cases, and it is more important that a good definition match the practical problem. The simulation results suggest that $Z$ corrected by $1/k$ has a higher resolution power (FIG.\ref{fg:2}). \begin{figure}[!ht] \centering \includegraphics[scale=0.35]{128_xin_xin_xin.eps} \caption{R matrices of the computer-generated network without correction and with the $1/k$ correction. \\ The network is generated as follows: first generate four ER networks of 32 vertexes with $p_1=0.6$, then randomly construct edges among the different networks with $p_2=0.2$\cite{40}. In this picture, vertexes are arranged in the order of the four networks.
We can see that our methods can uncover the four clusters correctly.} \end{figure} \begin{figure*} \centering \includegraphics[scale=0.42]{2_he.eps} \caption{(a) R matrices of the Zachary network without correction and with the $1/k$ correction. The numbers on the left refer to vertex labels. Historically, the club divided into two parts; the R matrices correctly distinguish them. \\ (b) R matrices of the Dolphin network without correction and with the $1/k$ correction, with clusters labeled by colors.}\label{fg:2} \end{figure*} \begin{figure*}[!ht]\centering \includegraphics[scale=0.54]{R4_he_xin_xin_xin.eps} \caption{We detect an overlapping vertex by comparing its average correlation coefficients with the other vertexes in different clusters. The Dolphin network is partitioned into four clusters. Vertexes $21$, $51$ and $62$ are overlapping by the method without correction; vertexes $21$ and $51$ are overlapping by the method with the $1/k$ correction.}\label{fg:3} \end{figure*} We use a simulated annealing algorithm to find the solution of minimum $Z$. \\\indent A consequent question is: even though we have found the optimal DEI, how can we know which vertexes belong to the same cluster? An intuitive idea is to count the crosses between adjacent vertexes: the sum of crosses between adjacent vertexes belonging to the same cluster will be larger than that between adjacent vertexes belonging to different clusters (considering that a large-degree vertex may drown out the information of other vertexes, we can give each edge a weight $1/k$). However, in the simulations we find that this method is not so effective and cannot show the \textquoteleft bond' of a real cluster well. Thus, we adopt the following movement correlation method. \\\indent Suppose we move some vertex pair $(v_{1i},v_{2i})$ on $L_1$ and $L_2$; some other vertexes then have to move in order to keep $Z$ as small as possible. If a vertex always follows another, there is a movement correlation between them. We use the strength of the movement correlation, measured by the Pearson correlation coefficient matrix $R$, to partition clusters. Vertexes in the same cluster have strong correlations, where \[R_{ij} = \frac{{Cov({x_i},{x_j})}}{{\sqrt {Cov({x_i},{x_i})} \sqrt {Cov({x_j},{x_j})} }}.\] In the simulation, we randomly fix a small part of the vertexes (e.g. 5\%) and minimize $Z$. Repeating this operation several times, we obtain many optimal DEIs and calculate the coordinate correlations of the different vertexes, which gives the matrix $R$. It is worth mentioning that overlap is allowed in this method (FIG.\ref{fg:3}). \\\indent Although we obtain $R$, we do not have a clear-cut criterion for clustering. For example, in FIG.{\ref{fg:2}}, if we set different resolutions, we can get different clustering results: maybe two clusters, maybe three, maybe four. Which partition is reasonable is worth discussing. Furthermore, we have done an elementary analysis. We can consider the partition criterion from two aspects: 1. an amplitude criterion; 2. a step criterion. The first criterion means that we set a threshold value and, when an element of the matrix $R$ is less than this value, we set it to zero; in the end, each non-zero diagonal block is a cluster (overlapping allowed). The second criterion means that we consider the step (difference) between adjacent elements: we can identify the \textquoteleft bond' of a cluster by finding a position with a large step, which can be achieved by high-pass filtering. Criterion 2 is seriously affected by the ordering of the vertexes in the figure of the matrix $R$.
In fact, \textquoteleft how many clusters are there' is a question with more than one answer: when the criterion is \textquoteleft loose', there may be two clusters, and when the criterion is \textquoteleft strict', there may be four clusters. \\\indent Actually, $Z$ in Eq.(\ref{eq:1}) and Eq.(\ref{eq:2}) is based on the $L^1$ norm. We can define the measure more generally based on the $L^p$ norm. Here we write two possible definitions for the $L^2$ norm. \\ Uncorrected: \[Z = \sum\limits_{ij} {{a_{ij}}(} {x_i} - {x_j}{)^2}.\] Corrected: \[Z = \sum\limits_{ij} {\frac{{{a_{ij}}{{({x_i} - {x_j})}^2}}}{{{k_i}}}} \quad {\rm or} \quad Z = \sum\limits_{ij} {\frac{{{a_{ij}}{{({x_i} - {x_j})}^2}}}{{{k_i}^2}}} .\] On the two real networks above, the $L^2$ norm gives partitions similar to those of the $L^1$ norm, but the resolution of the $L^2$ norm method is lower. \subsection{CI method} We can change DEI to continuous imbedding (CI). CI means that a vertex can be placed at any point on the line, while a vertex pair with the same label must still have the same coordinate. Comparing CI with DEI, the objective function $Z$ does not change, but the feasible region changes from all arrangements of $\{1,2,\dots,N\}$ to $R^N$. Corresponding to the discrete imbedding, we set the following constraints for the continuous imbedding: \\ for the $L^1$ norm: $1.\sum\limits_i {{x_i}} = 0$, $2.\sum\limits_i {\left| {{x_i}} \right|} = 1$;\\ for the $L^2$ norm: $1.\sum\limits_i {{x_i}} = 0$, $2.{\sum\limits_i {{x_i}} ^2} = 1$. \\ There is a special relation between the $L^2$ norm CI method and spectral methods\cite{10,11}. Next, we discuss only the $L^1$ case with the $1/k$ correction. \\\indent For the $L^1$ case with the $1/k$ correction, in simulations, the vertexes at minimum $Z$ are always scattered in two groups (FIG.{\ref{fg:4}}). \begin{figure}[!ht]\centering \includegraphics[scale=0.28]{he_xiaotu.eps} \caption{99-chain is a network of 99 vertexes linked one by one like a chain; 99-ring is a ring of 99 vertexes. The optimal CI divides the 99-chain into two groups from the middle and divides the 99-ring into two chains with equal numbers of vertexes. 100-3 equal branches is a graph consisting of three 34-chains with a common vertex; the optimal CI randomly puts two branches into one group. 3-Ring of 33 Fully Connected Graphs is a 99-vertex graph consisting of three fully connected subgraphs of 33 vertexes each, linked to each other to form a ring with three-fold symmetry; the optimal CI randomly puts two subgraphs into one group. The interval of each bar is 0.002.}\label{fg:4} \end{figure} Thus, we can put forward a CI method measured by the $L^1$ norm with the $1/k$ correction. The algorithm for a given network G is as follows: \begin{description} \item[Step A] Minimize $Z$ of G and obtain the optimal solution X; the subgraph induced by the vertexes with positive components is called G1, and the subgraph induced by the remaining vertexes is called G2. \item[Step B] Redo Step A on G1 and G2, respectively, until each induced subgraph is a single vertex. \end{description} This process generates a binary tree called the cluster tree. \\\indent How can we take advantage of the cluster tree to uncover the clusters? \\\indent Criteria are needed. Here, we adopt the $Q$ function: among all partitions of the leaves consistent with the cluster tree, we choose the one that maximizes $Q$, and each part is a cluster.
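To make these quantities concrete, the following minimal Python sketch (assuming numpy; an illustration rather than the code used for the figures) evaluates the corrected objective $Z$ of Eq.(\ref{eq:2}), minimizes it with a toy simulated annealing over DEI orderings of the kind mentioned above, and computes the $Q$ function (in its standard Newman-Girvan form) used to select the cut of the cluster tree:
\begin{verbatim}
import numpy as np

def Z_corrected(A, x):
    # L^1 objective of Eq. (2): Z = sum_ij a_ij |x_i - x_j| / k_i
    # (A: symmetric adjacency matrix, no isolated vertexes; x: coordinates).
    k = A.sum(axis=1)
    return np.sum(A * np.abs(x[:, None] - x[None, :]) / k[:, None])

def anneal_DEI(A, n_steps=20000, T0=1.0, cool=0.999, seed=0):
    # Toy simulated annealing over DEI orderings (coordinates 1..N):
    # propose a random pair swap, accept with the Metropolis rule.
    rng = np.random.default_rng(seed)
    N = len(A)
    x = rng.permutation(N).astype(float) + 1.0
    z, T = Z_corrected(A, x), T0
    for _ in range(n_steps):
        i, j = rng.integers(N), rng.integers(N)
        x[i], x[j] = x[j], x[i]
        z_new = Z_corrected(A, x)
        if z_new < z or rng.random() < np.exp((z - z_new) / T):
            z = z_new                      # accept the swap
        else:
            x[i], x[j] = x[j], x[i]        # reject: swap back
        T *= cool
    return x, z

def modularity_Q(A, labels):
    # Newman-Girvan modularity Q = sum_c (e_cc - a_c^2) of a hard partition.
    m = A.sum() / 2.0                      # number of undirected edges
    k = A.sum(axis=1)
    labels = np.asarray(labels)
    Q = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        e_cc = A[np.ix_(idx, idx)].sum() / (2.0 * m)  # edge-end fraction in c
        a_c = k[idx].sum() / (2.0 * m)                # degree fraction in c
        Q += e_cc - a_c**2
    return Q
\end{verbatim}
For the overlapping-cluster variant, one instead fixes a small random subset of vertexes in each annealing run and correlates the resulting coordinates across runs to build the matrix $R$, as described in the previous subsection.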
\begin{figure}[!ht] \centering \subfloat[]{\includegraphics[scale=0.3]{Znetwork_5_ab_he.eps}} \subfloat[] {\includegraphics[scale=0.3]{5_cd_he.eps}} \caption{(a) The cluster tree of the Zachary network by the $L^1$ norm CI method with the $1/k$ correction. The partition indicated by the dashed line has the maximum $Q$. $\square$ and $\bigcirc$ refer to vertexes of the two clubs into which the network divided historically. Different colors mark different clusters of the Zachary network. (b) The Dolphin network clustered by the $L^1$ norm CI method with the $1/k$ correction. Different colors mark different clusters. We used the modularity tool of Gephi to detect clusters on the Dolphin network, with random mode and resolution 1.}\label{fg:5} \end{figure} Since our method is based on numerical optimization, among the solutions of different runs a little difference is usually observed, in the overlapping vertexes or between clusters with close relations. \\\indent It should be pointed out that we cannot conclude that our method is worse than that of Gephi (based on \cite{22}) just because our $Q$ is less than the $Q$ of Gephi. No one can prove that the $Q$ function is the most appropriate evaluation criterion; in fact, there are different opinions about $Q$\cite{19}. Nevertheless, $Q$ has partial validity, which has been verified on many real networks, but focusing on the little difference between the $Q$s of two methods is meaningless. \section{Discussion} Theoretically, we can extend the method to directed graphs and weighted graphs, since being \textquoteleft undirected' or \textquoteleft unweighted\textquoteright~ is not a necessary condition. For a weighted graph, we just need to replace the adjacency matrix with the weighted adjacency matrix. \\\indent Based on DEI and movement correlation, we can put forward a different method as follows: \begin{enumerate} \item Derive the optimal DEI. \item Extend the feasible region of the objective function $Z$ to $R^N$; the solution vector is the list of coordinates of all vertexes. Randomly select a small part of the vertexes and, with a certain probability, add a small displacement to the optimal DEI coordinates of the selected vertexes (e.g. with a uniform distribution in $-0.1\thicksim0.1$). Fix these vertexes and calculate the gradient direction of $Z$, taking the remaining vertexes as arguments. Multiply the minus gradient direction by the total displacement and add it to the solution vector. \item Repeat step 2. Each time we get an $N$-dimensional column vector, and together they form a matrix. Calculating the correlation coefficients between all pairs of rows of this matrix, we get the correlation matrix $R$. \item Rearrange the vertexes corresponding to the elements of $R$ in the order of the optimal DEI. \end{enumerate} This method will lessen the time complexity greatly compared with the method mentioned above; its validity should be tested in future work. \\\indent In this paper, we imbed the AG in a 1-D \textquoteleft line' space, but we can imbed it in high-dimensional spaces or in spaces with very different topological structures (e.g. a 1-D \textquoteleft ring\textquoteright space \cite{20}). What the optimum structure is remains a question. Considering high-dimensional \textquoteleft line' spaces, maybe there is some $M$ such that in any \textquoteleft line' space whose dimension is higher than $M$, the configurations of the optimal imbedding are the same; such an imbedding is called a faithful imbedding. $M_0$, the infimum of such $M$, is able to reflect the complexity of the clustering structure (e.g., we can define the quantity $M_0/(N-1)$).
Furthermore, we can coarse-grain the configurations of the faithful imbedding: each grain is a cluster, and the coarse-grained topology shows the network's skeleton. There are many related questions worth considering and studying.
\section{Introduction} Molecular clouds are the birthplaces of stars, which form the visible backbones of galaxies. Therefore, it is important to investigate the molecular gas mass, its distribution in galaxies and its physical properties to understand galaxy evolution. Molecular gas in galaxies mainly consists of H$_2$ molecules. However, H$_2$ molecules do not radiate line emission in cold environments such as molecular clouds, whose temperature is typically a few tens of K, since H$_2$ is a homonuclear diatomic molecule (i.e., it has no dipole moment). Molecular gas mass is usually estimated indirectly from observations of heteronuclear diatomic molecules such as $^{12}$CO and its isotopologues, since they are the second most abundant molecules after H$_2$ in the interstellar medium. Therefore, the $^{12}$CO$(J=1-0)$ line ($^{12}$CO, hereafter) has been commonly used to observe molecular gas in external galaxies as well as in Galactic objects. It has been observationally shown that the $^{12}$CO line luminosity correlates with the molecular gas mass estimated in several different ways, such as virial techniques, dust emission, extinction mapping and gamma-ray observations (\cite{Bolatto+2013} and references therein). The correlation between $^{12}$CO luminosity and molecular gas mass is explained theoretically under the assumptions that 1) molecular clouds are virialized, 2) the masses of the clouds are dominated by H$_{2}$, 3) the clouds follow the size-line width relation and 4) they have a constant temperature (\cite{Bolatto+2013} and references therein). According to the observational evidence and the theoretical explanation above, the mass and distribution of molecular gas in nearby galaxies have been investigated with observed $^{12}$CO maps by assuming a constant CO-to-H$_2$ conversion factor over a whole galaxy (e.g., BIMA SONG, \cite{Helfer+2003}; Nobeyama CO atlas, \cite{Kuno+2007}, hereafter K07; Heracles, \cite{Leroy+2009}). For a more accurate estimation of the molecular gas mass, multiple CO isotopologues should be used in a complementary manner. The mass of virialized molecular clouds can be measured with $^{12}$CO, and the higher opacity of $^{12}$CO allows us to trace molecular gas with lower densities, on the order of $10^2$ cm$^{-3}$. On the other hand, the opacity of $^{12}$CO is too high to estimate the column density through a molecular cloud, unlike more optically thin lines such as $^{13}$CO$(J=1-0)$ and C$^{18}$O$(J=1-0)$ ($^{13}$CO and C$^{18}$O, hereafter). However, the line intensities of these CO isotopologues are usually much weaker than that of $^{12}$CO (by factors of $\sim1/5-1/20$ for $^{13}$CO and $\lesssim1/20$ for C$^{18}$O, respectively) and require a lot of telescope time to be detected. Therefore, $^{13}$CO or C$^{18}$O mapping observations toward nearby galaxies have been limited to only a handful of cases \citep{Huttemeister+2000,Paglione+2001,Tosaki+2002,Hirota+2010,Watanabe+2011}. Some studies have investigated the relation between the physical states of molecular gas and galactic structures such as arms and bars from CO multi-isotopologue observations (e.g. \cite{Huttemeister+2000,MeierTurner2004}). \citet{Watanabe+2011} (hereafter W11) observed NGC~3627 in $^{13}$CO and showed high $^{12}$CO/$^{13}$CO intensity ratios in the bar region.
They concluded that the high $^{12}$CO/$^{13}$CO ratio indicates the existence of gravitationally unbound diffuse gas, resulting from the strong streaming motion in the bar region. Spiral arms are also likely to affect not only the dynamics of molecular clouds but also their internal physical conditions. The molecular gas in spiral galaxies is expected to be accumulated in the arm regions through galactic shocks (e.g. \cite{Fujimoto1968,Roberts1969,Egusa+2011}) and sheared out in the interarm regions due to the differential rotation. However, there are only a few studies that focus on the physical states of molecular gas in interarm regions, since even the $^{12}$CO emission there is quite weak \citep{Tosaki+2002}. To understand the effects of spiral arms on molecular clouds, it is necessary to detect the emission of CO isotopologues in the first place and then investigate the physical states of the molecular gas in the interarm region. The aim of this paper is to detect the weak $^{13}$CO emission and to investigate the relationship between the galactic structures and the properties of molecular gas by comparing $^{12}$CO and $^{13}$CO spectra in the characteristic regions of spiral galaxies. We stacked the $^{12}$CO and $^{13}$CO spectra of a nearby barred spiral galaxy, NGC~3627\footnote{ NGC~3627 is classified as SABb in the Third Reference Catalog of Bright Galaxies (RC3, \cite{deVaucouleurs+1991}) and has slightly asymmetric spiral arms. This asymmetric feature of the spiral arms is thought to have arisen from a past interaction with a neighboring galaxy, NGC~3628 (\cite{Haynes+1979,Zhang+1993}). }, obtained in previous studies (K07\nocite{Kuno+2007} and W11\nocite{Watanabe+2011}), after shifting the velocity axis so that the zero of the spectra corresponds to the local mean $^{12}$CO velocity. This stacking method was originally proposed by \citet{Schruba+2011}, in which the local mean velocity of HI is adopted as the zero velocity of the $^{12}$CO spectra in the outer regions of galaxies. This allows us to improve the signal-to-noise ratios (S/N) of the spectra. We then discuss the physical properties of the molecular gas in the interarm region by comparing the $^{12}$CO and $^{13}$CO spectra. The structure of this paper is as follows: the data and method are explained in section \ref{DandA}. We show the results of the stacking analysis of the $^{12}$CO and $^{13}$CO spectra in the different regions of NGC~3627 in section \ref{Results}. We compare the $^{12}$CO with the $^{13}$CO stacked spectra in section \ref{Analyses} and discuss the physical properties of the molecular gas in the different regions of NGC~3627 in section \ref{Discussion}. Finally, we summarize this study in section \ref{Summary}. \section{Data \& stacking technique}\label{DandA} We first summarize the data we used and the stacking method in the following sub-sections. \subsection{Data} \begin{table*} \caption{Summary of the $^{12}$CO and $^{13}$CO observations.} \begin{center} \begin{tabular}{lcc} \hline &$^{12}$CO&$^{13}$CO\\ \hline \hline Date & April in 2004 & May in 2007 and April in 2008\\ Telescope & the 45-m telescope at NRO & the 45-m telescope at NRO\\ $\eta_{\rm mb}$ & 0.4 & 0.31\\ Receiver & BEARS & BEARS\\ Backend & digital spectrometers & digital spectrometers\\ Band width (MHz) & 512 & 512\\ Grid spacing & $10''.3$ & $10''.3$\\ r.m.s.
(mK, $T_{\rm mb}$) & $40-100$ & $6-16$\\ Velocity resolution (km s$^{-1}$) & 5 & 20\\ Reference & \cite{Kuno+2007} (K07) & \cite{Watanabe+2011} (W11)\\ \hline \end{tabular} \end{center} \label{tab-1} \end{table*}% The $^{12}$CO and $^{13}$CO mapping data of NGC~3627 were both obtained with the 25-BEam Array Receiver System (BEARS), which is a 25 $(5\times5)$ beam SIS receiver mounted on the 45-m radio telescope at the Nobeyama Radio Observatory (NRO) (K07\nocite{Kuno+2007}; W11\nocite{Watanabe+2011}). The rest-frame frequencies of $^{12}$CO and $^{13}$CO are adopted as $115.27120$ GHz and $110.20135$ GHz, respectively. For the backend, 25 digital spectrometers \citep{Sorai+2000} were used with a total bandwidth of 512 MHz and a frequency resolution of 605 kHz, centered on the frequency corresponding to the local standard of rest (LSR) receding velocity. We adopted $16''$ as the half-power beam width (HPBW) of both data sets in the same way as W11\nocite{Watanabe+2011}, which corresponds to $\sim$ 800 pc assuming a distance of 11.1 Mpc to NGC~3627 \citep{Saha+1999}\footnote{To be exact, the beam sizes at the frequencies of $^{12}$CO and $^{13}$CO are $\sim16''$ and $\sim17''$, respectively, but no correction is applied since the error due to this is expected to be small, especially for spatially extended sources ($\sim10\%$ even for a point source, W11\nocite{Watanabe+2011}).}. The grid spacing of the map is $10''.3$ ($\sim550$ pc). The typical r.m.s. noise temperatures (in $T_{\rm mb}$ scale) of the $^{12}$CO and $^{13}$CO data are $40-100$ mK at 5 km s$^{-1}$ resolution and 6$-$16 mK at 20 km s$^{-1}$ resolution, respectively. The profile maps of $^{12}$CO and $^{13}$CO are shown in figure \ref{fig:ProfileMap}. The observations are summarized in table \ref{tab-1}. More detailed information on the observations is given in the original studies, K07\nocite{Kuno+2007} for $^{12}$CO and W11\nocite{Watanabe+2011} for $^{13}$CO. \subsection{Stacking analysis of $^{12}$CO and $^{13}$CO spectra with Velocity-axis Alignment (VA)} Schruba et al. (2011, 2012\nocite{Schruba+2011,Schruba+2012}) improved the sensitivity to $^{12}$CO emission in the outer regions of galaxies by up to about one order of magnitude over previous studies with the stacking method they invented. One way to reduce the r.m.s. noise temperature is to stack the spectra from various regions in the galaxy. However, because the systemic velocities of different parts of the galaxy are different due to galactic rotation, simple stacking will result in a smeared spectrum and may not yield the highest S/N. The method adopted by \citet{Schruba+2011} overcomes this problem by shifting the spectra along the velocity axis, before stacking, so that they are aligned with the local mean HI velocity. The S/N of the integrated intensity increases by a factor of $\sqrt{\Delta V_{\rm emi}/\Delta V_{\rm emi, VA}}$, taking into account that the error of the integrated intensity can be expressed as $\Delta T \sqrt{\Delta V_{\rm emi} \Delta v}$, where $\Delta V_{\rm emi}$ is the velocity range to be integrated to calculate the integrated intensity of the spectra obtained with normal stacking, $\Delta V_{\rm emi, VA}$ is that with stacking after the velocity-axis alignment, $\Delta T$ is the r.m.s. noise temperature and $\Delta v$ is the velocity resolution of the data \citep{Schruba+2012}. Here $\Delta V_{\rm emi, VA}$ is narrower than $\Delta V_{\rm emi}$ thanks to the velocity-axis alignment.
Another reason is the reduction of the frequency-dependent noise produced by systematic effects of weather, receiver instabilities and standing waves occurring during the transmission of the signal. These sources of additional noise are canceled out if the spectra are stacked after applying different velocity shifts for each pixel \citep{Schruba+2011}. \citet{Schruba+2011} stacked $^{12}$CO spectra of the outer HI-dominated region after shifting the velocity axis so that the zero velocity of the $^{12}$CO spectra corresponds to the local mean HI velocity. In this paper, we adopt the intensity-weighted mean velocity of $^{12}$CO as the reference velocity for the velocity-axis alignment procedure and apply the stacking method of \citet{Schruba+2011} to improve the S/N of the $^{13}$CO spectra as well as that of the $^{12}$CO spectra. First, the velocity field of NGC~3627 is estimated with the $^{12}$CO data. The intensity-weighted mean velocity of each pixel of the $^{12}$CO map is given by \begin{equation} \overline{v_{\rm ^{12}CO}} = \frac{\int v T_{\rm mb}(v) dv}{\int T_{\rm mb}(v) dv}. \end{equation} The obtained first-moment map is shown in figure \ref{fig:ProfileMap} (d). Then the shifted velocity $v_{\rm VA}$ of each spectrum is defined as \begin{equation} v_{\rm VA} = v_{\rm LSR} - \overline{v_{\rm ^{12}CO}}, \end{equation} where $v_{\rm LSR}$ is the original velocity of the spectrum. Hereafter, we refer to this procedure as velocity-axis alignment (VA). The averaged spectra of six different regions, 1) center, 2) bar, 3) bar-end, 4) offset\footnote{The offset region denotes an area where the emission runs off toward the leading side of the stellar bar (W11\nocite{Watanabe+2011}).}, 5) arm and 6) interarm, were obtained by stacking the spectra after the VA procedure\footnote{ We integrated the full velocity range for the calculation of the local mean velocities of all the regions but the offset region. The mean velocities of the spectra in the offset region, which is shown as an orange region in figure \ref{fig:ProfileMap}, are estimated by integrating over the range from $487.5$ to $687.5$ km s$^{-1}$, since we could not obtain adequate values with the full-range integration due to the poor quality of the baseline.}. The regions 1) $-$ 4) were determined according to W11\nocite{Watanabe+2011}. We visually defined the 5) arm and 6) interarm regions by dividing the ``other'' region of W11\nocite{Watanabe+2011} according to optical and near-infrared (NIR) images. The NIR image ($3.6$ $\mu$m) from the SIRTF Nearby Galaxies Survey (SINGS, \cite{Kennicutt+2003}) is shown in figure \ref{fig:ProfileMap}c. Each area is illustrated in a different color in figures \ref{fig:ProfileMap}a and \ref{fig:ProfileMap}b in the following way: the center in red, bar in green, bar-end in blue, offset in orange, arm in purple and interarm regions in yellow. Finally, we averaged the velocity-axis-aligned spectra with equal weights and obtained a stacked spectrum in each region. \begin{figure*} \includegraphics[width=168mm]{profile_map-01.eps} \vspace{0cm} \caption{(a) $^{12}$CO and (b) $^{13}$CO profile maps of NGC~3627 obtained with the 45-m telescope at NRO (K07\nocite{Kuno+2007}; W11\nocite{Watanabe+2011}). The mapping area is $3'.2\times3'.2$ centered on $(\alpha, \delta)_{\rm J2000} = (11^{\rm h}20^{\rm m} 15^{\rm s}.027, +12^\circ 59' 29''.58)$. The grid size of both data is $10''.3$, corresponding to $\sim550$ pc in linear scale.
The center, bar, bar-end, offset, arm and interarm regions are colored red, green, blue, orange, purple, and yellow, respectively. (c) Spitzer/IRAC $3.6$ $\mu$m image \citep{Kennicutt+2003}. (d) First-moment map of the $^{12}$CO emission.} \label{fig:ProfileMap} \end{figure*} \section{Results}\label{Results} \begin{figure*} \includegraphics[width=150mm]{Spectra_stacking-01.eps} \vspace{0cm} \caption{(a) $^{12}$CO stacked spectra without the velocity-axis alignment (VA), (b) $^{12}$CO stacked spectra with VA, (c) $^{13}$CO stacked spectra without VA and (d) $^{13}$CO stacked spectra with VA. The vertical axis of each spectrum is the main-beam temperature in mK. The velocity resolution of the $^{12}$CO and $^{13}$CO spectra is 20 km s$^{-1}$. The Gaussian fitting results for the stacked spectra with VA are also plotted with red lines on each spectrum. } \label{fig:StackedSpectra} \end{figure*} \begin{table*} \caption{Properties of the stacked spectra in the different regions.} \label{tab-2} \begin{minipage}{\textwidth} \begin{center} \begin{tabular}{@{}lcccc} \hline Line& $^{12}$CO & $^{12}$CO & $^{13}$CO & $^{13}$CO\\ VA procedure & no & yes & no & yes\\ \hline \hline Center & & & & \\ $I_{\rm CO}$ (K km s$^{-1}$)& $70.0\pm2.0$ (36)\footnotemark[$\ast$] & $70.6\pm2.2$ (32) & $3.37\pm0.53$ (6) & $3.59\pm0.38$ (10) \\ FWHM\footnotemark[$\ast\ast$] (km s$^{-1}$) & $225\pm11$ & $194\pm6$ & $209\pm36$ & $195\pm21$ \\ $T_{\rm peak}$ (mK) & $308\pm20$ (15) & $347\pm22$ (15) & $22.3\pm5.3$ (4) & $18.2\pm3.9$ (5) \\ \hline Bar & & & & \\ $I_{\rm CO}$ (K km s$^{-1}$)& $45.1\pm1.6$ (29) & $45.2\pm1.6$ (28) & $1.17\pm0.24$ (5) & $1.42\pm0.09$ (16)\\ FWHM (km s$^{-1}$) & $287\pm34$ & $147\pm7$ & -- & $126\pm14$ \\ $T_{\rm peak}$ (mK) & $241\pm17$ (14) & $323\pm19$ (17) & $8.28\pm2.64$ (3) & $13.4\pm1.1$ (12) \\ \hline Bar-end & & & & \\ $I_{\rm CO}$ (K km s$^{-1}$)& $45.5\pm1.3$ (36) & $46.6\pm0.6$ (74) & $3.54\pm0.22$ (16) & $3.56\pm0.11$ (33)\\ FWHM (km s$^{-1}$) & $87\pm5$, $126\pm14$\footnotemark[$\ast\ast\ast$] & $110\pm4$ & $112\pm9$, $76\pm14$ & $100\pm4$ \\ $T_{\rm peak}$ (mK) & $220\pm14$ (15), $215\pm14$ (15) & $400\pm9$ (44) & $18.1\pm2.5$ (7), $20.8\pm2.5$ (8) & $34.5\pm1.6$ (22) \\ \hline Offset & & & & \\ $I_{\rm CO}$ (K km s$^{-1}$)& $20.6\pm1.1$ (19) & $20.1\pm0.9$ (23) & $1.46\pm0.25$ (6) & $1.52\pm0.23$ (7)\\ FWHM (km s$^{-1}$) & $67\pm5$ & $67\pm3$ & $50\pm6$ & $54\pm7$ \\ $T_{\rm peak}$ (mK) & $276\pm20$ (14) & $279\pm16$ (17) & $28.7\pm4.5$ (6) & $26.9\pm4.4$ (6) \\ \hline Arm & & & & \\ $I_{\rm CO}$ (K km s$^{-1}$) & $29.5\pm1.0$ (31) & $28.6\pm0.6$ (51) & $2.24\pm0.27$ (8) & $2.01\pm0.11$ (19)\\ FWHM (km s$^{-1}$) & $206\pm11$ & $104\pm4$ & $168\pm16$ & $97\pm8$ \\ $T_{\rm peak}$ (mK) & $133\pm11$ (12) & $248\pm7$ (36) & $13.7\pm3.2$ (4) & $19.1\pm1.3$ (15) \\ \hline Interarm & & & & \\ $I_{\rm CO}$ (K km s$^{-1}$) & $22.4\pm0.9$ (25) & $20.4\pm1.0$ (20) & $1.02\pm0.27$ (4) & $0.824\pm0.118$ (7)\\ FWHM (km s$^{-1}$) & $268\pm23$ & $159\pm7$ & $170\pm55$ & $94\pm13$ \\ $T_{\rm peak}$ (mK) & $88\pm10$ (9) & $130\pm13$ (10) & $6.89\pm2.82$ (2) & $8.24\pm1.52$ (5) \\ \hline \end{tabular} \end{center} \footnotetext[$\ast$]{The signal-to-noise ratios of $I_{\rm CO}$ and $T_{\rm peak}$ are shown in parentheses after each value.} \footnotetext[$\ast\ast$]{The FWHM is estimated with Gaussian fitting.} \footnotetext[$\ast\ast\ast$]{ For the stacking results without the VA procedure in the bar-end region, the FWHM and $T_{\rm peak}$ values of the two velocity components in the
spectra are shown separately.} \end{minipage} \end{table*} The stacked spectra with and without the VA procedure of $^{12}$CO and $^{13}$CO are shown in figure \ref{fig:StackedSpectra}. The $^{12}\rm{CO}$ spectra are binned so that the velocity resolution matches that of the $^{13}\rm{CO}$ spectra (i.e., 20 km s$^{-1}$). The smoothed $^{12}$CO spectra are employed for the following analysis. The r.m.s. noise temperatures of the stacked $^{12}$CO and $^{13}$CO spectra per $20$ km s$^{-1}$ are typically reduced to $7-22$ mK and $1.1-4.4$ mK, respectively. The error of $T_{\rm peak}$ listed in table \ref{tab-2} is the r.m.s. noise temperature, which is calculated within a narrow velocity range outside the emission line. This is because we must calculate the r.m.s. noise temperature within a baseline range in which the same number of spectra are stacked as in the emission-line range; the number of stacked spectra decreases as the velocity offset from the line center increases. In table \ref{tab-2}, some errors of $T_{\rm peak}$ and $I_{\rm CO}$ of the stacked spectra with VA are slightly larger than those of the stacked spectra without VA. This may be partly attributed to the poor statistics in the estimation of the r.m.s. noise temperature. The increase of the peak temperature and the reduction of the noise level by the stacking analysis after the VA procedure allow us to detect $^{13}$CO emission with an S/N of 5 even in the interarm region, where the emission was detected neither in the individual pixel spectra of the original data nor in the stacked spectrum without the VA procedure (S/N $=2$). The fitting results with a Gaussian for the stacked spectra after the VA procedure are also plotted with red lines on each spectrum in figure \ref{fig:StackedSpectra}. The integrated intensity ($I_{\rm CO}$), full width at half maximum (FWHM) and peak temperature ($T_{\rm peak}$) of $^{12}$CO and $^{13}$CO with and without the VA procedure in the different regions of NGC~3627 are listed in table \ref{tab-2}. Although the line profiles of the stacked spectra, especially those without the VA procedure, are not strictly Gaussian, the FWHM and its error are determined by a Gaussian fit and its fitting error, respectively; if the FWHM were measured literally as the width at half of the peak intensity, the derived value would be more susceptible to the r.m.s. noise than one obtained from a global fit. In table \ref{tab-2}, we also separately present the FWHM and $T_{\rm peak}$ values of the two velocity components in the bar-end spectrum obtained without the VA procedure. The validity of this method is examined by comparing the integrated intensities of the stacked spectra with and without the VA procedure. Both are averaged spectra over the same region, so the integrated intensities estimated in the two ways should agree. In table \ref{tab-2}, the two $I_{^{12}\rm{CO}}$ values of each region are indeed consistent within the errors, confirming the validity of the stacking method with the VA procedure. Moreover, the S/N of $I_{\rm CO}$ of each spectrum is improved by a factor of up to 3.2. \section{Analyses}\label{Analyses} \subsection{Surface density of molecular gas mass} We estimate the surface density of molecular gas mass in the six regions of NGC~3627 from the $^{12}$CO and $^{13}$CO spectra.
$I_{\rm ^{12}CO}$ and $I_{\rm ^{13}CO}$ estimated from Gaussian fitting are used for the calculation hereafter, since the line profiles of the stacked spectra after the VA procedure are well fitted with a Gaussian (see figure \ref{fig:StackedSpectra})\footnote{ The $^{13}$CO emission line of the stacked spectra in the center region has a blueward wing that is not seen in the $^{12}$CO spectra, which have higher S/N than the $^{13}$CO spectra. Therefore, the wing is likely not a real feature but an artifact of poor baselines of the $^{13}$CO spectra. The Gaussian fitting is not expected to be severely affected by this feature, because the FWHM of $^{13}$CO is consistent with that of $^{12}$CO within the margin of error. }. The integrated intensities estimated with the Gaussian fitting agree with the values listed in table \ref{tab-2} within the margin of error. The column density of H$_2$, $N_{\rm H_2}$ (cm$^{-2}$), for extragalactic objects is commonly estimated from $I_{\rm ^{12}CO}$ with a CO-to-H$_2$ conversion factor, $X_{\rm CO}$ (cm$^{-2}$ [K km $\rm s^{-1}$]$^{-1}$), under the premise that $^{12}$CO is optically thick. If $^{12}$CO is optically thick, the brightness temperature mainly reflects not the column density of the gas but the excitation temperature at the $\tau_{\rm ^{12}CO}\sim1$ surface of the virialized molecular clouds \citep{Bolatto+2013}. $N_{\rm H_2}$ is described as \begin{equation} N_{\rm H_2} = X_{\rm CO} I_{\rm ^{12}CO}. \end{equation} We can also estimate $N_{\rm H_2}$ from the column density of $^{13}$CO, $N_{\rm ^{13}CO}$, as long as the $^{13}$CO emission is optically thin. Under the local thermal equilibrium (LTE) approximation, $N_{\rm ^{13}CO}$ can be calculated as \begin{equation} N_{\rm ^{13}CO} = \frac{3 k_{\rm B}}{4 \pi^3 \mu^2 \nu_{\rm ^{13}CO}}\exp{\left(-\frac{h\nu_{\rm ^{13}CO}J}{2 k_{\rm B} T_{\rm k}} \right)}\frac{I_{\rm ^{13}CO}}{1-\exp{\left(-\frac{h \nu_{\rm ^{13}CO}}{k_{\rm B} T_{\rm k}} \right)}}\ \ \ \rm{cm^{-2}}, \label{eq-4} \end{equation} where $k_{\rm B}$ is the Boltzmann constant, $\mu$ is the dipole moment of $0.11\times10^{-18}$ esu cm, $h$ is the Planck constant, $\nu_{\rm ^{13}CO}$ is the rest-frame frequency of $^{13}$CO, $J$ is the rotational quantum number of the lower energy state and $T_{\rm k}$ is the kinetic temperature. We obtain $(N_{\rm H_2}/{\rm cm}^{-2})=8.07\times10^{20} (I_{\rm ^{13}CO}/\rm{K\ km\ s}^{-1})$ by assuming an $N_{\rm H_2}/N_{\rm ^{13}CO}$ ratio of $7.5\times10^5$ \citep{Frerking+1982} and $T_{\rm k}=20$ K. It is useful to estimate a lower limit of $N_{\rm H_2}$ by assuming that the $^{12}$CO line has a small optical depth ($\tau_{\rm ^{12}CO} \ll 1.0$). The minimum H$_2$ column density is calculated in the same manner as equation (\ref{eq-4}) with $I_{\rm ^{12}CO}$. We obtain $(N_{\rm H_2}/{\rm cm^{-2}})=9.88\times10^{18} (I_{\rm ^{12}CO}/\rm{K\ km\ s^{-1}})$ with an $N_{\rm H_2}/N_{\rm ^{12}CO}$ ratio of $1.0\times10^4$ \citep{YoungScoville1991} and $T_{\rm k}=20$ K. We utilize this conversion factor in the discussion in section \ref{Discussion}. The surface density of the molecular gas, $\Sigma_{\rm mol}$, is calculated as, \begin{equation} \Sigma_{\rm mol}=1.36\times 2 \times m_{\rm H} N_{\rm H_2} \cos{(i)}, \end{equation} where 1.36 is a factor to account for the contribution of He by mass, $m_{\rm H}$ is the mass of the hydrogen atom and $i$ is the inclination of NGC~3627 ($52^\circ$).
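The numerical coefficients of the conversions given in equations (\ref{eq:mol12thick})--(\ref{eq:mol12thin}) below can be cross-checked with a few lines of Python. The following is a minimal sketch using only the column-density conversions quoted above; the constants and function names are ours and not part of the original analysis pipeline. \begin{verbatim}
import numpy as np

M_SUN = 1.989e33            # g
PC    = 3.086e18            # cm
M_H   = 1.6726e-24          # g, mass of the hydrogen atom
INCL  = np.deg2rad(52.0)    # inclination of NGC 3627

def sigma_coeff(n_h2_per_ico):
    """Msun pc^-2 per (K km s^-1), including He (x1.36) and cos(i)."""
    g_per_cm2 = 1.36 * 2.0 * M_H * n_h2_per_ico * np.cos(INCL)
    return g_per_cm2 * PC**2 / M_SUN

print(sigma_coeff(1.0e20))    # 12CO, optically thick -> ~1.34
print(sigma_coeff(8.07e20))   # 13CO, optically thin  -> ~10.8
print(sigma_coeff(9.88e18))   # 12CO, optically thin  -> ~0.133
\end{verbatim}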
Then we can obtain the surface density of the molecular gas for the case of optically thick $^{12}$CO emission as, \begin{equation} \left( \frac{\Sigma_{\rm mol,12,thick}}{M_\odot\ \rm{pc^{-2}}} \right ) = 1.34\ \left( \frac{I_{\rm ^{12}CO}}{\rm K\ km\ s^{-1}} \right), \label{eq:mol12thick} \end{equation} for the case of optically thin $^{13}$CO emission as, \begin{equation} \left( \frac{\Sigma_{\rm mol,13,thin}}{M_\odot\ \rm{pc^{-2}}} \right ) = 10.8\ \left( \frac{I_{\rm ^{13}CO}}{\rm K\ km\ s^{-1}} \right), \label{eq:mol13thin} \end{equation} and for the case of optically thin $^{12}$CO emission as, \begin{equation} \left( \frac{\Sigma_{\rm mol,12,thin}}{M_\odot\ \rm{pc^{-2}}} \right ) = 0.133\ \left( \frac{I_{\rm ^{12}CO}}{\rm K\ km\ s^{-1}} \right), \label{eq:mol12thin} \end{equation} where $\Sigma_{\rm mol,12,thick}$, $\Sigma_{\rm mol,13,thin}$ and $\Sigma_{\rm mol,12,thin}$ are the surface densities of molecular gas estimated from $^{12}$CO (optically thick), $^{13}$CO (optically thin) and $^{12}$CO (optically thin), respectively. We adopt $X_{\rm CO}$ of $1\times10^{20}$ cm$^{-2}$ $[\rm K\ km\ s^{-1}]^{-1}$ \citep{NakaiKuno1995}. The $\Sigma_{\rm mol,12,thick}$, $\Sigma_{\rm mol,13,thin}$ and $\Sigma_{\rm mol,12,thin}$ values and the $\Sigma_{\rm mol,12,thick}/\Sigma_{\rm mol,13,thin}$ ratios in all the regions are listed in table \ref{tab-3}. In table \ref{tab-3}, there is a wide variety in the $\Sigma_{\rm mol,12,thick}/\Sigma_{\rm mol,13,thin}$ ratios among the six regions. Here we focus on the relative difference of the $\Sigma_{\rm mol,12,thick}/\Sigma_{\rm mol,13,thin}$ ratios among different regions rather than the discrepancy between $\Sigma_{\rm mol,12,thick}$ and $\Sigma_{\rm mol,13,thin}$ in each region. The errors of $\Sigma_{\rm mol}$ in table \ref{tab-3} are calculated only from the errors of the integrated intensities. However, the $\Sigma_{\rm mol}$ values calculated here implicitly include the following assumptions: 1) the observed molecular clouds mainly consist of H$_2$ molecules, are virialized, follow the size--line-width relation and have a constant temperature, and the $^{12}$CO emission from them is optically thick (for the constant CO-to-H$_2$ conversion factor); 2) the kinetic temperature and the abundance ratio of $^{13}$CO to H$_2$ are free parameters that must be assumed (for the LTE estimate). In particular, the coefficient in equation (\ref{eq:mol13thin}) varies from 6.16 to 25.1 for $T_{\rm k}=10$ and $50$ K, respectively. We estimate $T_{\rm k}$ from the comparison between $\Sigma_{\rm mol, 12, thick}$ and $\Sigma_{\rm mol, 13, thin}$ by assuming that both $^{12}$CO and $^{13}$CO emissions are radiated from the same region, that the abundance ratio of $^{13}$CO to H$_2$ does not vary within the galaxy, and that the value of $\Sigma_{\rm mol, 12, thick}$ estimated with $X_{\rm CO}$ is correct. In table \ref{tab-3}, we can see that the $\Sigma_{\rm mol, 12, thick}/\Sigma_{\rm mol, 13, thin}$ ratios in the center, bar and interarm regions are $\sim3$ while those in the other regions are as small as $\sim1.6$. We obtain $T_{\rm k}\sim50$ K for the center, $\sim90$ K for the bar, $\sim70$ K for the interarm, and $\sim30-40$ K for the other regions. \citet{Galametz+2012} derived the dust temperature distribution of NGC~3627 with a grid size of $18''$, which corresponds to half of the effective spatial resolution of $36''$.
They found a radial temperature gradient declining from $\sim25$ K to $\sim17$ K from their SED fitting using the dust temperature and emissivity index as free parameters (figure 4 of \cite{Galametz+2012}). In their plot, the highest temperature ($\sim25$ K) is found in the center and bar-end regions and the lowest values are seen in the interarm region ($\sim17$ K). In the center region, which contains a nuclear starburst and an active galactic nucleus \citep{Krips+2008}, the high $\Sigma_{\rm mol, 12, thick}/\Sigma_{\rm mol, 13, thin}$ ratio may be partly attributed to a high $T_{\rm k}$, although an average temperature of $\sim50$ K over the 800-pc beam seems too high. However, it is quite unlikely that the temperatures of the molecular gas in the bar and interarm regions are higher than those in the arm, bar-end and offset regions where stars are actively forming. Therefore, the high $\Sigma_{\rm mol, 12, thick}/\Sigma_{\rm mol, 13, thin}$ ratios in the bar and the interarm regions likely reflect an overestimation of $\Sigma_{\rm mol, 12, thick}$, which is derived from $I_{\rm ^{12}CO}$ with a constant CO-to-H$_2$ conversion factor. We compare the $^{12}$CO and $^{13}$CO spectra of each region in the following subsections to physically explain the high $\Sigma_{\rm mol, 12, thick}/\Sigma_{\rm mol, 13, thin}$ ratios in the bar and interarm regions. \begin{table*} \caption{The surface densities of molecular gas $\Sigma_{\rm mol}$ estimated under the assumption of optically thick $^{12}$CO (equation (\ref{eq:mol12thick})), optically thin $^{13}$CO (equation (\ref{eq:mol13thin})) and optically thin $^{12}$CO (equation (\ref{eq:mol12thin})).} \label{tab-3} \begin{minipage}{\textwidth} \begin{center} \begin{tabular}{@{}lcccccc} \hline & Center & Bar & Bar-end & Offset & Arm & Interarm\\ \hline \hline $\Sigma_{\rm mol,12,thick}$ ($M_\odot$ pc$^{-2})$& $93.7\pm3.7$ & $60.2\pm3.7$ & $63.4\pm3.0$ & $27.6\pm1.8$ & $36.0\pm1.8$ & $28.1\pm1.6$\\ $\Sigma_{\rm mol,13,thin}$\footnotemark[$\ast$] ($M_\odot$ pc$^{-2}$)& $38.4\pm5.5$ & $17.5\pm2.6$ & $39.5\pm2.3$ & $17.4\pm2.8$ & $20.5\pm2.3$ & $8.32\pm1.51$\\ $\Sigma_{\rm mol,12,thin}$\footnotemark[$\ast$] ($M_\odot$ pc$^{-2}$)& $9.30\pm0.37$ & $5.98\pm0.36$ & $6.29\pm0.30$ & $2.74\pm0.18$ & $3.58\pm0.18$ & $2.79\pm0.15$ \\ $\Sigma_{\rm mol,12,thick}/\Sigma_{\rm mol,13,thin}$\footnotemark[$\ast\ast$] & $2.4\pm0.4$ & $3.4\pm0.6$ & $1.6\pm0.1$ & $1.6\pm0.3$ & $1.8\pm0.2$ & $3.0\pm0.6$\\ \hline \end{tabular} \end{center} \footnotetext[$\ast$]{$\Sigma_{\rm mol}$ is estimated with $T_{\rm k}=20$ K.} \footnotetext[$\ast\ast$]{The $\Sigma_{\rm mol,12,thick}/\Sigma_{\rm mol,13,thin}$ ratio is also shown.} \end{minipage} \end{table*} \subsection{Integrated intensity, FWHM, and peak temperature ratios of the $^{12}$CO and $^{13}$CO spectra}\label{subsec-Discussion2} We show the $I_{^{12}\rm{CO}}$/$I_{^{13}\rm{CO}}$, FWHM$_{^{12}\rm{CO}}$/FWHM$_{^{13}\rm{CO}}$ and $T_{\rm peak, ^{12}CO}/T_{\rm peak, ^{13}CO}$ ratios in table \ref{tab-4}. We find for the first time that the $I_{^{12}\rm{CO}}$/$I_{^{13}\rm{CO}}$ ratio in the interarm region is almost twice as high as those in the bar-end, offset and arm regions. The high ratios in the bar and center regions reported in W11\nocite{Watanabe+2011} are confirmed by the stacking analysis.
The $I_{^{12}\rm{CO}}$/$I_{^{13}\rm{CO}}$ ratios obtained in the bar-end, offset and arm regions are consistent with the values obtained by \citet{Paglione+2001}, who observed 17 nearby galaxies in $^{12}$CO and $^{13}$CO along their major axes and obtained $I_{^{12}\rm{CO}}/I_{^{13}\rm{CO}}$ ratios of $4-22.8$ ($45''$ spatial resolution). The bar region shows a higher value of $T_{\rm peak, ^{12}CO}/T_{\rm peak, ^{13}CO}=24.0$ than the other regions ($10-16$), and the center region shows an intermediate value between them ($19.1$). It is noteworthy that the FWHM$_{^{12}\rm{CO}}$/FWHM$_{^{13}\rm{CO}}$ ratio in the interarm region is $1.7$ whereas the ratios in the other regions are almost unity. In other words, the line width of $^{12}$CO is larger than that of $^{13}$CO in the interarm region. To investigate local effects on this trend, we separate the interarm region into Northeastern (NE) and Southwestern (SW) parts, produce the averaged spectra in each part, measure the FWHM ratios, and check whether this trend still holds or not. The FWHMs estimated with Gaussian fitting are presented in table \ref{tab-5}. Although the S/N (the peak temperature-to-noise ratio) of the data is not so high ($\sim4$), the trend that the FWHM of $^{12}$CO is larger than that of $^{13}$CO persists. The FWHM ratios estimated separately from the NE and SW interarm spectra are consistent with the interarm ratio of $\sim1.7$ within the margin of error. Thus we conclude that the difference in FWHM between the $^{12}$CO and $^{13}$CO spectra is a characteristic feature of the interarm region of NGC~3627 rather than a local feature unrelated to the galactic structure. \begin{table*} \caption{The $^{12}$CO$/$$^{13}$CO ratios of intensity, FWHM and $T_{\rm peak}$ estimated from Gaussian fitting in each region.} \label{tab-4} \begin{tabular}{@{}lcccccc} \hline & Center & Bar & Bar-end & Offset & Arm & Interarm\\ \hline \hline $I_{^{12}\rm{CO}}$/$I_{^{13}\rm{CO}}$ & $19.7\pm1.4$ & $31.8\pm3.9$ & $13.1\pm0.5$ & $13.3\pm1.4$ & $14.2\pm0.6$ & $24.7\pm2.8$\\ FWHM$_{^{12}\rm{CO}}$/FWHM$_{^{13}\rm{CO}}$ & $0.99\pm0.11$ & $1.17\pm0.14$ & $1.10\pm0.06$ & $1.24\pm0.17$ & $1.07\pm0.10$ & $1.69\pm0.24$\\ $T_{\rm peak, ^{12}CO}/T_{\rm peak, ^{13}CO}$ & $19.1\pm2.8$ & $24.0\pm3.8$ & $11.6\pm0.7$ & $10.4\pm1.0$ & $13.0\pm0.7$ & $15.8\pm2.4$ \\ \hline \end{tabular} \end{table*} \begin{table*} \caption{FWHM estimated with Gaussian fitting to the stacked spectra of the interarm region.} \begin{center} \begin{tabular}{lccc} \hline &FWHM$_{^{12}\rm{CO}}$&FWHM$_{^{13}\rm{CO}}$&FWHM$_{^{12}\rm{CO}}$/FWHM$_{^{13}\rm{CO}}$\\ & (km s$^{-1}$)&(km s$^{-1}$)&\\ \hline \hline All & $159\pm7$ & $94\pm13$ & $1.69\pm0.24$\\ Northeastern (NE) & $145\pm9$ & $66\pm12$ & $2.20\pm0.43$\\ Southwestern (SW) & $175\pm8$ & $123\pm16$ & $1.42\pm0.19$\\ \hline \end{tabular} \end{center} \label{tab-5} \end{table*}% \subsubsection{The radial trends of $I_{\rm ^{12}CO}$/$I_{\rm ^{13}CO}$, FWHM$_{\rm ^{12}CO}$/FWHM$_{\rm ^{13}CO}$ and $T_{\rm peak, ^{12}CO}$/$T_{\rm peak, ^{13}CO}$} Each region has a different mean galactocentric distance, and therefore the variations in $I_{\rm ^{12}CO}$/$I_{\rm ^{13}CO}$, FWHM$_{\rm ^{12}CO}$/FWHM$_{\rm ^{13}CO}$ and $T_{\rm peak, ^{12}CO}$/$T_{\rm peak, ^{13}CO}$ found above may be attributed to a radial trend.
To investigate the radial gradient of $I_{\rm ^{12}CO}$/$I_{\rm ^{13}CO}$, FWHM$_{\rm ^{12}CO}$/FWHM$_{\rm ^{13}CO}$ and $T_{\rm peak, ^{12}CO}$/$T_{\rm peak, ^{13}CO}$, we produced the stacked spectra of $^{12}$CO and $^{13}$CO in five concentric annuli (r1$-$r5, from the galaxy center) with a width of $\sim1.1$ kpc in the galactic plane. The fitting results with a Gaussian to the stacked spectra after the VA procedure of the five annuli are summarized in table \ref{tab-6}. The radial profiles of the $I_{\rm ^{12}CO}$/$I_{\rm ^{13}CO}$, FWHM$_{\rm ^{12}CO}$/FWHM$_{\rm ^{13}CO}$ and $T_{\rm peak, ^{12}CO}$/$T_{\rm peak, ^{13}CO}$ ratios are shown in figure \ref{fig:RadialDistributions} with black lines. In these plots, we also show, for each annulus, the fraction of pixels belonging to each of the six morphologically defined regions relative to the total number of pixels in the annulus. The colors of these lines are the same as those of the six regions in figure \ref{fig:ProfileMap}. Grey points and lines (``others'') represent the areas that are not categorized into regions 1)$-$6) but are included in the concentric annuli. We find a radial gradient in the $T_{\rm peak, ^{12}CO}/T_{\rm peak, ^{13}CO}$ ratio plot (figure \ref{fig:RadialDistributions}a) and a weak gradient in the $I_{^{12}\rm{CO}}$/$I_{^{13}\rm{CO}}$ ratio plot (figure \ref{fig:RadialDistributions}c). The radial gradient of $I_{^{12}\rm{CO}}$/$I_{^{13}\rm{CO}}$ reported in W11\nocite{Watanabe+2011} is confirmed with the data obtained with the stacking analysis. The $T_{\rm peak, ^{12}CO}/T_{\rm peak, ^{13}CO}$ and $I_{^{12}\rm{CO}}$/$I_{^{13}\rm{CO}}$ ratios tend to be higher at smaller galactocentric distance $D$, although the highest values of both ratios are found in the bar and interarm regions, which lie outside r1. In figure \ref{fig:RadialDistributions}b, we do not see any systematic radial gradient in the FWHM$_{^{12}\rm{CO}}$/FWHM$_{^{13}\rm{CO}}$ plot, but there is a bump at r2$-$r4. This radial range contains the interarm region, and the highest contribution from the interarm region is found in r2$-$r3. Additionally, the FWHM$_{^{12}\rm{CO}}$/FWHM$_{^{13}\rm{CO}}$ ratios at the bump ($\sim1.3$) are smaller than that of the interarm region alone ($\sim1.7$), as expected if the interarm contribution is diluted by the other regions within the annuli. Hence, we conclude that the high FWHM$_{\rm ^{12}CO}$/FWHM$_{\rm ^{13}CO}$ ratios at r2$-$r4 are due to the inclusion of the interarm region. Accordingly, the differences in the $I_{\rm ^{12}CO}$/$I_{\rm ^{13}CO}$, FWHM$_{\rm ^{12}CO}$/FWHM$_{\rm ^{13}CO}$ and $T_{\rm peak, ^{12}CO}$/$T_{\rm peak, ^{13}CO}$ ratios among the morphologically defined regions are not simply a reflection of radial trends.
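The annuli are defined by the deprojected galactocentric radius of each pixel. The following is a minimal sketch of this binning, assuming the inclination of $52^\circ$ quoted above and the linear scale implied by $10''.3\approx550$ pc; the position angle used here is purely illustrative (it is not quoted in this paper), and the actual pipeline may differ. \begin{verbatim}
import numpy as np

PC_PER_ARCSEC = 550.0 / 10.3   # from the quoted 10".3 ~ 550 pc scale

def galactocentric_radius(x, y, pa_deg, incl_deg=52.0):
    """Deprojected radius (kpc) for offsets x, y (arcsec) from the center."""
    pa, incl = np.deg2rad(pa_deg), np.deg2rad(incl_deg)
    xp = x * np.cos(pa) + y * np.sin(pa)         # along the major axis
    yp = -x * np.sin(pa) + y * np.cos(pa)        # along the minor axis
    r_arcsec = np.hypot(xp, yp / np.cos(incl))   # undo the projection
    return r_arcsec * PC_PER_ARCSEC / 1.0e3

# demo: 19 x 19 grid of pixel offsets with 10".3 spacing
off = (np.arange(19) - 9) * 10.3
x, y = np.meshgrid(off, off)
r = galactocentric_radius(x, y, pa_deg=173.0)    # pa_deg is illustrative
annuli = np.digitize(r, bins=np.arange(0.0, 5.6, 1.1))  # 1..5 -> r1..r5
\end{verbatim}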
\begin{table*} \caption{Gaussian fitting results of the stacked spectra after the VA procedure in different concentric annuli.} \begin{center} \begin{tabular}{lccc} \hline Line & $^{12}$CO & $^{13}$CO & $^{12}$CO/$^{13}$CO\\ \hline \hline r1: $D=0-1.1$ (kpc)&&&\\ $I_{\rm CO}$ (K km s$^{-1}$) & $72.7\pm2.8$ & $3.07\pm0.43$ & $23.7\pm3.4$\\ FWHM (km s$^{-1}$) & $191\pm5$ & $184\pm20$ & $1.03\pm0.11$\\ $T_{\rm peak}$ (mK) & $357\pm9$ & $15.6\pm1.4$ & $22.8\pm2.2$\\ \hline r2: $1.1-2.2$ (kpc)&&&\\ $I_{\rm CO}$ (K km s$^{-1}$) & $35.4\pm1.3$ & $1.76\pm0.20$ & $20.1\pm3.4$\\ FWHM (km s$^{-1}$) & $167\pm5$ & $149\pm13$ & $1.12\pm0.10$\\ $T_{\rm peak}$ (mK) & $199\pm5$ & $11.1\pm0.8$ & $17.9\pm1.4$\\ \hline r3: $2.2-3.3$ (kpc)&&&\\ $I_{\rm CO}$ (K km s$^{-1}$) & $25.7\pm1.5$ & $1.25\pm0.11$ & $20.5\pm2.2$\\ FWHM (km s$^{-1}$) & $144\pm6$ & $108\pm7$ & $1.33\pm0.11$\\ $T_{\rm peak}$ (mK) & $167\pm6$ & $10.9\pm0.7$ & $15.3\pm1.1$\\ \hline r4: $3.3-4.4$ (kpc)&&&\\ $I_{\rm CO}$ (K km s$^{-1}$) & $30.4\pm1.2$ & $1.80\pm0.11$ & $16.9\pm1.2$\\ FWHM (km s$^{-1}$) & $94.1\pm2.8$ & $70.7\pm3.1$ & $1.33\pm0.1$\\ $T_{\rm peak}$ (mK) & $304\pm8$ & $23.9\pm0.9$ & $12.7\pm0.6$\\ \hline r5: $4.4-5.5$ (kpc)&&&\\ $I_{\rm CO}$ (K km s$^{-1}$) & $20.2\pm1.0$ & $1.56\pm0.09$ & $13.0\pm1.0$\\ FWHM (km s$^{-1}$) & $119\pm4$ & $118\pm5$ & $1.01\pm0.06$\\ $T_{\rm peak}$ (mK) & $160\pm5$ & $12.5\pm0.5$ & $12.8\pm0.7$\\ \hline \end{tabular} \end{center} \label{tab-6} \end{table*}% \begin{figure*} \includegraphics[width=170mm]{RadialDistribution-b-01.eps} \vspace{0cm} \caption{Radial distributions of a) $T_{\rm peak, ^{12}CO}/T_{\rm peak, ^{13}CO}$, b) FWHM$_{^{12}\rm{CO}}$/FWHM$_{^{13}\rm{CO}}$ and c) $I_{^{12}\rm{CO}}$/$I_{^{13}\rm{CO}}$. These ratios are shown as a black line in each graph. Red, green, blue, orange, purple, yellow, and grey lines represent the fractions of spectra of the center, bar, bar-end, offset, arm, interarm regions and others included in each concentric annulus. The error bars of the three ratios (vertical axis) are estimated from the fitting error (1$\sigma$). The horizontal bar of each point represents the radial range within which the stacked spectrum is calculated. } \label{fig:RadialDistributions} \end{figure*} \subsection{The intensity ratios of $T_{^{12}\rm{CO}}/T_{^{13}\rm{CO}}$ in the different regions} \label{subsec:slope} We compare the $^{12}\rm{CO}$ and $^{13}\rm{CO}$ spectra in each region in figure \ref{fig:SpectraComparison1213}. Column (a) of this figure shows the $^{12}$CO spectra together with the $^{13}$CO spectra multiplied by a fixed factor of $10$, for comparison of the heights of the $^{12}$CO and $^{13}$CO emission lines in each region. Column (b) shows the $^{12}$CO spectra and the $^{13}$CO spectra rescaled to match the peak of the $^{12}$CO, in order to compare the widths of the two emission lines. The emission ranges of the spectra are indicated as the unshaded regions and the baseline ranges as the shaded regions in columns (a) and (b). Hereafter we refer to them as the ``emission'' and ``baseline'' ranges, respectively. In column (c) of this figure, we plot the $T_{\rm mb}$ of $^{12}$CO versus $^{13}$CO for all the velocity channels in each region. Grey and black points in this plot represent the data points of the ``baseline'' and ``emission'' ranges of the spectra. The linear fits to the data within the ``emission'' range (black filled circles) are also shown; the fits are constrained to pass through the origin.
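A through-origin least-squares fit of this kind can be sketched in a few lines of Python; the weighting by the $^{12}$CO channel noise is our assumption, and the exact fitting procedure used for the figures may differ (e.g., in how errors on both axes are treated). \begin{verbatim}
import numpy as np

def slope_through_origin(t13, t12, sigma12):
    """Least-squares slope of T12 = a*T13 constrained through the origin,
    weighted by the 12CO channel noise; returns a and its 1-sigma error."""
    w = 1.0 / sigma12**2
    a = np.sum(w * t13 * t12) / np.sum(w * t13**2)
    return a, np.sqrt(1.0 / np.sum(w * t13**2))

# t13, t12: channel temperatures within the "emission" range (hypothetical)
# a, a_err = slope_through_origin(t13, t12, sigma12=rms12)
\end{verbatim}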
Column (a) in figure \ref{fig:SpectraComparison1213} visually confirms that the $^{12}$CO spectra of the center, bar and interarm regions are still much higher than $10\times{}^{13}$CO, unlike those in the three other regions. This difference is quantitatively expressed by the different slopes of the fitting lines in column (c) in figure \ref{fig:SpectraComparison1213}. The slope of 22.7 in the bar region is almost twice as high as those in the bar-end, offset and arm regions ($11-13$), and the slopes in the center and interarm regions take intermediate values ($17-18$). \begin{figure*} \includegraphics[width=110mm]{hikaku1213-01.eps} \vspace{+1.0cm} \caption{Comparison between the $^{12}$CO and $^{13}$CO spectra. (a) $^{12}$CO and $10\times ^{13}$CO spectra for a comparison of $T_{\rm mb}$. (b) $^{12}$CO and $^{13}$CO spectra scaled by appropriate factors so that the peak temperatures of $^{12}$CO and $^{13}$CO become comparable, for a comparison of the FWHM. The ``emission'' range of the spectra is indicated as an unshaded region and the ``baseline'' ranges are indicated as shaded regions. (c) $T_{\rm ^{12}CO}-T_{\rm ^{13}CO}$ correlation plots. Grey and black points in the plot represent the data points of the ``baseline'' and ``emission'' ranges of the spectra. The linear fitting results of data within the ``emission'' range (black points) are also shown as a black solid line. The error bars represent the r.m.s. noise temperatures of the $^{12}$CO and $^{13}$CO spectra ($1\sigma$). The fits are constrained to pass through the origin. A magnified figure of the dotted rectangle region on the plot of the interarm region is shown in figures \ref{fig:SpectraComparison1213Interarm}b and \ref{fig:SpectraComparison1213Interarm}c. } \label{fig:SpectraComparison1213} \end{figure*} The most notable result in this study is that the width of the $^{12}$CO emission line is larger than that of $^{13}$CO in the interarm region, whereas both lines in the other regions have comparable widths (column (b) in figure \ref{fig:SpectraComparison1213}). We separate the ``emission'' range of the interarm spectra into the ``peak'' and the ``outskirt'' ranges, which are indicated with green and blue horizontal lines, respectively, in figure \ref{fig:SpectraComparison1213Interarm}a. In figure \ref{fig:SpectraComparison1213Interarm}b, we show a magnified plot of the dotted rectangle region of figure \ref{fig:SpectraComparison1213}c. Figure \ref{fig:SpectraComparison1213Interarm}c is the same as figure \ref{fig:SpectraComparison1213Interarm}b, but the data points of the ``peak'' and the ``outskirt'' ranges of the spectra are indicated as green and blue points, respectively. The blue and green solid lines represent the linear fitting results of the data within the ``outskirt'' and the ``peak'' ranges, respectively. In figure \ref{fig:SpectraComparison1213Interarm}c, there are distinct spectral features of 1) a large gradient ($26.4\pm5.3$) at $T_{\rm ^{13}CO}\lesssim3$ mK and 2) a small gradient ($7.2\pm1.9$) at $T_{\rm ^{13}CO}\gtrsim3$ mK. The two slopes are significantly different even taking the errors into account. This result suggests that there are two gas components with different velocity widths, and that the component with the broader line width has the highest $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ ratio ($26.4$) among all the regions.
\begin{figure*} \includegraphics[width=150mm]{hikaku1213_interarm-01.eps} \caption{Comparison between the $^{12}$CO and $^{13}$CO spectra in the interarm region. (a) $^{12}$CO spectra and $^{13}$CO spectra multiplied by 15. The ``baseline,'' ``peak,'' and ``outskirt'' ranges are indicated as the shaded regions and the green and blue horizontal lines, respectively. (b) A magnified figure of the dotted rectangle region in the original plot in figure \ref{fig:SpectraComparison1213}c. (c) The same plot as (b), but with the data points shown in different colors and symbols: grey filled circles for ``baseline,'' blue filled circles for ``outskirt,'' and green open squares for ``peak.'' The blue and green solid lines on this plot represent the fitting results of the data within the ``outskirt'' and ``peak'' ranges, respectively. The error bars in (b) and (c) represent the r.m.s. noise temperatures of the $^{12}$CO and $^{13}$CO spectra ($1\sigma$). } \label{fig:SpectraComparison1213Interarm} \end{figure*} \subsection{Brief summary of the analyses} The surface densities of molecular gas were calculated with the $^{12}$CO and $^{13}$CO spectra in the six regions of NGC~3627. We found that the bar and interarm regions have higher $\Sigma_{\rm mol,12,thick}/\Sigma_{\rm mol,13,thin}$ (i.e., $I_{\rm ^{12}CO}/I_{\rm ^{13}CO}$) ratios than the other regions. The $I_{^{12}\rm{CO}}/I_{^{13}\rm{CO}}$ ratio in the bar region is high because the $T_{^{12}\rm{CO}}/T_{^{13}\rm{CO}}$ ratio is fairly high ($22.7$) within the velocity range of the emission line. For the interarm region, the high $I_{^{12}\rm{CO}}/I_{^{13}\rm{CO}}$ ratio is attributed to the broader line width of the $^{12}$CO spectra compared to the $^{13}$CO spectra. The difference between the FWHMs of $^{12}$CO and $^{13}$CO suggests the existence of two molecular gas components with different FWHMs in the interarm region. The $T_{^{12}\rm{CO}}/T_{^{13}\rm{CO}}$ ratio of the broader-FWHM component in the interarm region is higher than in any other region. \section{Discussion}\label{Discussion} The most important new result of our study is that the $I_{\rm ^{12}CO}/I_{\rm ^{13}CO}$ ratio in the interarm region is high because FWHM$_{\rm ^{12}CO}$ is broader than FWHM$_{\rm ^{13}CO}$. The spectra obtained in this study are the emissions from an ensemble of giant molecular clouds (GMCs), giant molecular associations (GMAs) and/or ambient components, since the typical sizes of those structures, $\lesssim40$ pc (GMCs) and $\sim200$ pc (GMAs), are much smaller than the beam size of $\sim800$ pc. Therefore, the observed FWHMs of the $^{12}$CO and $^{13}$CO spectra likely represent not random motions within a cloud but random motions between clouds, since the FWHM of the former ($\lesssim10$ km s$^{-1}$; \cite{Sanders+1985,Scoville+1987,Solomon+1987}) is much smaller than that of the latter. If the molecular gas (or clouds) within a beam has almost the same $^{12}$CO to $^{13}$CO intensity ratio in each velocity channel, the FWHMs of the $^{12}$CO and $^{13}$CO spectra are expected to be almost the same. This is the case for every region except the interarm region. The difference in FWHM between the $^{12}$CO and $^{13}$CO lines in the interarm region can hardly be explained by an ensemble of molecular gas with uniform physical states within the beam.
We discuss the physical conditions of molecular gas in each region of NGC~3627 in the following subsections. \subsection{Differences in the physical states of molecular gas among different regions} \label{subsec:physicalstate} The brightness temperature of line radiation, $T_{\rm B}(\nu)$, under the assumption of local thermal equilibrium (LTE), is described as \begin{equation} T_{\rm{B}}(\nu) = \Phi [ J_\nu (T_{\rm ex}) - J_\nu (T_{\rm bg}) ] [1 - \exp{(-\tau_\nu)}], \end{equation} where $\Phi$ is the beam-filling factor, $J_\nu (T)$ is the Planck function expressed in temperature units $(J_\nu (T) = [h\nu/k_{\rm B}] [\exp{(h\nu/k_{\rm B} T)} - 1]^{-1})$, $T_{\rm ex}$ is the excitation temperature, which equals the kinetic temperature $T_{\rm k}$ under LTE conditions, $T_{\rm bg}$ is the temperature of the cosmic microwave background ($2.73$ K) and $\tau_\nu$ is the optical depth. Under the assumptions that 1) both $^{12}$CO and $^{13}$CO lines are emitted from the same cloud (same $\Phi$ for $^{12}$CO and $^{13}$CO), and 2) both molecules are thermalized (same $T_{\rm ex}=T_{\rm k}$ for $^{12}$CO and $^{13}$CO), the line ratio ${T_{^{12}\rm{CO}}}/{T_{^{13}\rm{CO}}}$ can be described as a function of the optical depth of $^{12}$CO as, \begin{equation} \frac{T_{^{12}\rm{CO}}}{T_{^{13}\rm{CO}}} \approx \frac{1-\rm{exp}(-\tau_{^{12}\rm{CO}})}{1-\rm{exp}(-\tau_{^{13}\rm{CO}})} \approx \frac{1-\rm{exp}(-\tau_{^{12}\rm{CO}})}{1-\rm{exp}(-\frac{\tau_{^{12}\rm{CO}}}{\it R_{\rm 12/13}})}, \label{eq:12} \end{equation} where $R_{12/13}$ is the abundance ratio of $^{12}$C to $^{13}$C. $R_{12/13}$ can be used as a proxy for the $N_{\rm ^{12}CO}/N_{\rm ^{13}CO}$ ratio as long as the isotope fractionation is negligible\footnote{ To be precise, the $N_{\rm ^{12}CO}/N_{\rm ^{13}CO}$ ratio may be affected by the isotope fractionation in response to the competing processes of the isotope exchange reaction \citep{Watson+1976,SmithAdams1980} and the selective dissociation \citep{BallyLanger1982,vanDishoeck+1988,Kopp+1996}. }. The $T_{^{12}\rm{CO}}/T_{^{13}\rm{CO}}$ ratio as a function of $\tau_{\rm ^{12}CO}$ is shown in figure \ref{fig:tau12CO}. According to equation (\ref{eq:12}), a larger $\tau_{\rm ^{12}CO}$ gives a lower $T_{^{12}\rm{CO}}/T_{^{13}\rm{CO}}$ at fixed $R_{12/13}$, and a lower $R_{12/13}$ likewise gives a lower $T_{^{12}\rm{CO}}/T_{^{13}\rm{CO}}$. \citet{Milam+2005} showed a radial dependence of $R_{12/13}$ of our Galaxy as \begin{equation} R_{12/13}(D) = 6.21\ D +18.71, \label{eq-R1213} \end{equation} where $D$ is the galactocentric distance in kpc. With the assumption that the profile of the emission line is expressed by a Gaussian function, the optical depth of the $^{12}$CO line can be described as \begin{equation} \tau_{\rm ^{12}CO} \approx \frac{4 \pi^3 \nu_{\rm ^{12}CO} \mu^2 N_{\rm ^{12}CO}}{3 k_{\rm B} T_{\rm ex} \Delta v} \exp{\left(\frac{-h \nu_{\rm ^{12}CO} J}{2 k_{\rm B} T_{\rm ex}}\right)} \left\{ 1 - \exp{\left(\frac{-h \nu_{\rm ^{12}CO}}{k_{\rm B} T_{\rm ex}}\right)}\right\}, \end{equation} where $N_{\rm ^{12}CO}$ is the column density of $^{12}$CO and $\Delta v$ is the line width. If $h \nu \ll k_{\rm B} T_{\rm ex}$, the optical depth can be described as \begin{equation} \tau_{\rm ^{12}CO} \propto \frac{N_{\rm ^{12}CO}}{\Delta v\ T_{\rm k}^2}. \label{eq:tau} \end{equation} Therefore, the $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ ratio is a function of $N_{\rm ^{12}CO}$, $\Delta v$ and $T_{\rm k}$ as well as $R_{12/13}(D)$.
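For a fixed $R_{12/13}$, equation (\ref{eq:12}) can be inverted numerically for $\tau_{\rm ^{12}CO}$ given an observed line ratio. The following is a minimal sketch of that inversion (function names are ours); the printed values can be compared with the estimates quoted in the next paragraph. \begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def line_ratio(tau12, r1213):
    """T12/T13 as a function of tau_12CO and R_12/13 (equation above)."""
    return np.expm1(-tau12) / np.expm1(-tau12 / r1213)

def tau_from_ratio(ratio, r1213):
    """Invert the ratio for tau_12CO; no solution if ratio >= R_12/13."""
    return brentq(lambda t: line_ratio(t, r1213) - ratio, 1e-6, 50.0)

R = lambda d_kpc: 6.21 * d_kpc + 18.71   # Milam et al. (2005) gradient

print(tau_from_ratio(17.1, R(0.5)))   # center, D ~ 0.5 kpc -> tau ~ 0.5
print(tau_from_ratio(26.4, 68.0))     # interarm "outskirt", local-ISM R
\end{verbatim}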
We can estimate $\tau_{\rm ^{12}CO}$ from the $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ ratio of each region, if we assume that the $R_{12/13}$ dependence on the galactocentric distance of NGC~3627 is the same as that of the Galaxy. Table \ref{tab-7} presents the $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ ratios estimated by the linear fits in figures \ref{fig:SpectraComparison1213} and \ref{fig:SpectraComparison1213Interarm}, together with the radial extent, expected $R_{12/13}$ and $\tau_{\rm ^{12}CO}$ of each region. The $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ ratios of each region are overplotted on the $\tau_{\rm ^{12}CO}-T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ plot in figure \ref{fig:tau12CO}. Figure \ref{fig:tau12CO} and table \ref{tab-7} indicate a variety of $\tau_{\rm ^{12}CO}$ among the different regions in NGC~3627. The bar-end, offset and arm regions seem to have high values of $\tau_{\rm ^{12}CO}$, larger than $\sim3$. On the other hand, the $\tau_{\rm ^{12}CO}$ values in the center region, in the bar region and for the larger-FWHM component in the interarm region are expected to be $\sim0.2-0.9$, $\sim0.2-0.7$ and $\sim0.3-1.5$, respectively. If we adopt the local interstellar medium value of $R_{12/13}\sim68$ \citep{Milam+2005}, the optical depths in these three regions are expected to be $\sim4.0$, $\sim2.9$ and $\sim2.4$. Therefore, the higher $T_{^{12}\rm{CO}}/T_{^{13}\rm{CO}}$ ratios in the center, bar, and interarm regions are likely due to a lower optical depth of $^{12}$CO than in the other regions, as long as we assume the Galactic $R_{12/13}(D)$. \begin{figure} \includegraphics[width=70mm]{tau_12CO-01.eps} \vspace{0cm} \caption{Brightness temperature ratio $\frac{T_{\rm ^{12}CO}}{T_{\rm ^{13}CO}}$ as a function of $\tau_{\rm ^{12}CO}$ for $R_{12/13}=68,\ 60,\ 50,\ 40,\ 30,\ 20$. The horizontal lines show $\frac{T_{\rm ^{12}CO}}{T_{\rm ^{13}CO}}$ in the center (red), bar (green), bar-end (blue), offset (orange), arm (purple) regions and the ``outskirt'' component of the $^{12}$CO spectrum in the interarm region (yellow). } \label{fig:tau12CO} \end{figure} \begin{table*} \caption{The $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ ratios, radial extents, expected $R_{12/13}$ and $\tau_{\rm ^{12}CO}$ of each region.} \label{tab-7} \begin{minipage}{\textwidth} \begin{center} \begin{tabular}{@{}lcccccc} \hline & Center & Bar & Bar-end & Offset & Arm & Interarm$^\ast$\\ \hline \hline $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$$^{\ast\ast}$& $17.1$ & $22.7$ & $12.2$ & $11.2$ & $12.9$ & $26.4$\\ $D$ (kpc) & $0-1$ & $1-2$ & $2-5$ & $3-6$ & $3-6$ & $2-5$\\ $R_{12/13}$ & $19-25$ & $25-31$ & $31-50$ & $37-56$ & $37-56$ & $31-50$ \\ $\tau_{\rm ^{12}CO}$ & $0.2-0.9$ & $0.2-0.7$ & $2.4-4.2$ & $3.3-5.2$ & $2.8-4.5$ & $0.3-1.5$\\ \hline \end{tabular} \end{center} \footnotetext[$\ast$]{``outskirt'' range.} \footnotetext[$\ast\ast$]{The slopes obtained in figures \ref{fig:SpectraComparison1213} and \ref{fig:SpectraComparison1213Interarm}.} \end{minipage} \end{table*} \subsubsection{The cause of the low $\tau_{\rm ^{12}CO}$ in the bar and center regions} According to equation (\ref{eq:tau}), the $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ ratio is a function of $R_{12/13}(D)$, $N_{\rm ^{12}CO}$, $\Delta v$ and $T_{\rm k}$. The physical states of molecular gas in the bar and center regions are discussed in W11\nocite{Watanabe+2011}.
They concluded that the higher values of $I_{\rm ^{12}CO}/I_{\rm ^{13}CO}$ in the center and bar regions are due to differences in the environments: the high $T_{\rm k}$, which is due to the starburst or nuclear activity in the center region, and the low $N_{\rm CO}/\Delta v$, which is due to a streaming motion in the bar region. A moderate value of $\tau_{\rm ^{12}CO}$ in the central region has also been reported in a study of our Galaxy \citep{Oka+1998}. \subsubsection{The possible origin of the low $\tau_{\rm ^{12}CO}$ gas component in the interarm region} Previous studies of GMCs in the Milky Way and in extragalactic objects have proposed an evolutionary scenario for GMCs in galactic disks. \citet{Sawada+2012a} investigated the structure and physical conditions of molecular gas in the Milky Way, including the arm and interarm regions, using data taken with the 45-m telescope at NRO (spatial resolution of $\sim0.5$ pc). They concluded that when faint and diffuse molecular gas in the interarm region enters the spiral arm, it develops into bright and compact structures in the arm, and once the gas leaves the arm, it returns to a diffuse state \citep{Sawada+2012b}. \citet{Koda+2009} utilized CARMA data of M~51 (spatial resolution of $\sim160$ pc) and showed that molecular clouds with masses of $\sim 10^{7-8}$ $M_\odot$ are found only in the arm region, while those with $\sim 10^{5-6}$ $M_\odot$ are found in the interarm as well as the arm regions. They claimed that massive clouds accumulated in the arm region are not fully dissociated into atomic gas, but are dissolved into small clouds as they pass through the arm due to the shear motion in the interarm region. Recently, \citet{Colombo+2014} confirmed the results of \citet{Koda+2009} quantitatively with a large number of GMC samples in M~51 (spatial resolution of $\sim50$ pc) by comparing the GMC mass functions in different regions. However, these extragalactic studies mainly treat individual, discrete clouds and might miss diffuse and extended components. Our result suggests the existence of a diffuse, non-optically thick $^{12}$CO component ($\tau_{\rm ^{12}CO}\sim0.3-1.5$) in the interarm region. This is consistent with the conclusion given in Sawada et al. (2012a; 2012b)\nocite{Sawada+2012a,Sawada+2012b}, although they did not analyze the optical depth of the ``diffuse'' component. \citet{Polk+1988} also reported that a significant contribution to the large-scale CO emission from the Galaxy is made by diffuse gas, as indicated by the extremely high $I_{\rm ^{12}CO}/I_{\rm ^{13}CO}$ ratio of $\sim20-50$ \citep{KnappBowers1988}. The diffuse component observed in our study might be a result of the dissolution of massive molecular clouds in the interarm region, as suggested by \citet{Koda+2009} and \citet{Colombo+2014}. The relatively low $^{12}$CO optical depth of the interarm region may result from a low $N_{\rm ^{12}CO}/\Delta v$ due to shear motion. Therefore, it is possible that GMCs formed at the arm are dissolved into smaller GMCs and a diffuse molecular component in the interarm region. This picture is consistent with the prediction from a recent numerical simulation of the gas component under a spiral potential in a disk galaxy \citep{DobbsPringle2013}. Some of the GMCs at the arm may be dissociated into atomic gas in the interarm region by star-formation feedback (e.g., \cite{Dobbs+2006}).
However, a detailed understanding of the mechanism of the dissolution of GMCs or the dissociation of molecular gas requires molecular and atomic gas observations at high resolution and sensitivity with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Square Kilometre Array (SKA). There may be a contribution of the bar structure to the low $\tau_{\rm ^{12}CO}$ value in the interarm region, since the interarm region that we define in this study includes the neighboring areas on the sides of the bar region. The bar consists of many characteristic stellar orbits, the most prominent of which is the x1 orbit, which shapes the elongated structure. Since the gas component, unlike the stars, is viscous, its orbits around the bar structure deviate from the sequence of stellar x1 orbits. \citet{Wada1994} provided an analytical model, the damped orbit model, for the orbits of the gas component in a bar structure. In this model, some gas orbits in the bar, especially at the dust lane, gradually deviate from the bar region (see, for example, figure 10 of \cite{Sakamoto+1999}). Hence, some gas components orbiting the bar structure may be categorized in this study as interarm gas components. Furthermore, the orbits that are crowded at the bar-end region become sparse on the sides of the bar region. The gap between orbits widens, perhaps reducing ${N_{\rm ^{12}CO}}/\Delta v$ and consequently decreasing $\tau_{\rm ^{12}CO}$ on the sides of the bar. Unfortunately, no study so far has investigated this effect in detail; it remains an issue for a future paper. \subsection{Non-universal CO-to-H$_2$ conversion factor in a galaxy?} \label{subsec:conversionfactor} Most previous studies of nearby galaxies have adopted a universal conversion factor for the entire galaxy and investigated the distribution of molecular gas and the star-formation efficiency (SFE), defined as the ratio of the star-formation rate to the molecular gas mass (e.g., \cite{Helfer+2003,Kuno+2007,Leroy+2009}). Some studies claimed that the SFE in the bar region is lower than in the other regions and suggested that intense phenomena such as streaming motions inhibit star formation in the bar (e.g., \cite{ReynaudDowns1998}). W11\nocite{Watanabe+2011} compared the SFE obtained from the $^{12}$CO data and from the $^{13}$CO data of NGC~3627 and concluded that the SFE in the bar region is comparable to that of the arm region if the $^{13}$CO data are used to estimate the molecular gas mass. A lower conversion factor in the bar region than in the other disk regions is also suggested by the $^{12}$CO data of Maffei II combined with an LVG (large velocity gradient) analysis \citep{Sorai+2012}. In this study, we detect $^{13}$CO emission from the interarm region of NGC~3627 for the first time and find a $^{12}$CO component with a broad line width and a high $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ ratio, indicating that it is not optically thick. It is difficult to estimate the molecular gas mass in the interarm region with $^{13}$CO alone, since a part of the molecular gas in the interarm region is expected to be too diffuse to emit detectably in $^{13}$CO. To infer the impact of the non-optically thick $^{12}$CO component on the observed $^{12}$CO integrated intensity and the CO-to-H$_2$ conversion factor, we use the stacked $^{12}$CO spectrum to estimate the fraction of the emission from the diffuse component and the molecular gas mass in the interarm region.
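A minimal sketch of the two-Gaussian decomposition described in the next paragraph is given below, with the optically thick component pinned to the $^{13}$CO fit; the pinned values are taken from the text and tables above, while the variable names and initial guesses are illustrative and not the actual fitting code. \begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, t_peak, v0, fwhm):
    return t_peak * np.exp(-4.0 * np.log(2.0) * (v - v0)**2 / fwhm**2)

# Optically thick component fixed to the 13CO Gaussian fit; its peak
# is set by an assumed T12/T13 ratio (here 10).
T13_PEAK, V0, FWHM13 = 8.24e-3, -13.7, 94.1   # K, km/s, km/s
RATIO = 10.0

def model(v, t_thin, v_thin, fwhm_thin):
    return (gauss(v, RATIO * T13_PEAK, V0, FWHM13)
            + gauss(v, t_thin, v_thin, fwhm_thin))

# v_chan, t12: channels of the stacked interarm 12CO spectrum (hypothetical)
# popt, _ = curve_fit(model, v_chan, t12, p0=[0.05, 0.0, 160.0])
# each component's I_CO then follows from 1.064 * T_peak * FWHM
\end{verbatim}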
We decomposed the $^{12}$CO spectra with two Gaussians for the $^{12}$CO optically thick and non-optically thick components. For the optically thick component, we adopted the center velocity of $-13.7$ km s$^{-1}$ and the FWHM of $94.1$ km s$^{-1}$ from the Gaussian fitting results of the $^{13}$CO spectra, and assumed three cases for $T_{\rm peak}$, corresponding to $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ = 5, 10 and 13. The $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ ratios are determined by reference to the typical $I_{\rm ^{12}CO}/I_{\rm ^{13}CO}$ ratio of Galactic GMCs of $5-7$ \citep{Solomon+1979,Polk+1988} and the values which we found in the bar-end, offset and arm regions of NGC~3627. As a result, the fractions of the non-optically thick $^{12}$CO emission with respect to the total $^{12}$CO flux, $f_{\rm thin}$, are $82\pm11$ \%, $64\pm12$ \%, and $52\pm13$ \% if we assume $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ = 5, 10 and 13, respectively. The fitting results for these three cases are shown in figure \ref{fig:DualGaussians} and summarized in table \ref{tab-8}. The surface density of the molecular gas $\Sigma_{\rm mol}$ in the interarm region is described with the integrated intensities of the optically thick and thin components, $I_{\rm ^{12}CO, thick}$ and $I_{\rm ^{12}CO, thin}$, using equations (\ref{eq:mol12thick}) and (\ref{eq:mol12thin}) as\footnote{ CO absorption observations indicate that $N_{\rm ^{12}CO}/N_{\rm H_2}$ in diffuse gas is smaller than in GMCs \citep{Sonnentrucker+2007,Liszt2007,Burgh+2007,Shetty+2008}. However, the $N_{\rm ^{12}CO}/N_{\rm H_2}$ values reported in those studies have a large dispersion ($\sim 10^{-7}-10^{-5}$). Therefore we adopt the GMC value of $10^{-4}$ \citep{YoungScoville1991} throughout this paper. } \begin{equation} \Sigma_{\rm mol,interarm}\approx 1.34\ I_{\rm ^{12}CO, thick} + 0.133\ I_{\rm ^{12}CO, thin}\ \ \ M_\odot\ \rm{pc^{-2}}. \label{eq-11} \end{equation} We obtain surface densities of $7.4\pm2.3$ $M_\odot$ pc$^{-2}$ ($T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ = 5), $12.0\pm2.6$ $M_\odot$ pc$^{-2}$ (10) and $14.8\pm3.0$ $M_\odot$ pc$^{-2}$ (13), which are lower than the value calculated by assuming that all the $^{12}$CO emission is optically thick by factors of $3.8\pm1.2$, $2.3\pm0.6$ and $1.9\pm0.5$, respectively. Here we assume $N_{\rm ^{12}CO}/N_{\rm H_2}=10^{-4}$ and $T_{\rm k}=20$ K. It should be noted that this is an extreme case in which the non-optically thick component is optically thin ($\tau_{\rm ^{12}CO}\ll 1$), so these factors are only upper limits. A radial gradient of $X_{\rm CO}$ in nearby galaxies has been investigated observationally, but no consensus has been reached \citep{Sandstrom+2013,Blanc+2013}. \citet{Blanc+2013} investigated the dependence of $X_{\rm CO}$ of NGC~628 on the metallicity, gas surface density, and UV radiation field, which all affect the balance between the shielding and the dissociation of CO molecules in the photodissociation regions at the edges of molecular clouds. The conversion factor is expected to vary according not only to the radiative transfer of UV photons between the star-forming regions and the irradiated clouds in the galaxy, but also to the radiative transfer of the $^{12}$CO line between the clouds and the observer. The previous studies have mainly focused on the factors that affect the former radiative transfer and investigated the dependence of $X_{\rm CO}$ on the metallicity, gas surface density, and UV radiation field.
Here we show that the optical depth of $^{12}$CO, which influences the latter radiative transfer, may differ among the regions of NGC~3627, resulting in different conversion factors. Careful treatment is needed when estimating the molecular gas mass, not only in regions with very different metallicity, ISM density and UV radiation field, but also in regions where $^{12}$CO is not expected to be optically thick. Accurate estimation of the molecular gas mass is also important to evaluate other physical parameters of galaxies such as the SFE. \begin{figure*} \includegraphics[width=140mm]{DualGaussians.eps} \vspace{0cm} \caption{Fitting results with two Gaussians of the $^{12}$CO spectrum in the interarm region. Blue and green solid lines show the fitting results for the non-optically thick and optically thick components, respectively. The red solid line shows their sum.} \label{fig:DualGaussians} \end{figure*} \begin{table*} \caption{Integrated intensity estimated with a dual-Gaussian fitting$^\ast$ to the stacked $^{12}$CO spectra of the interarm region.} \begin{minipage}{\textwidth} \begin{center} \begin{tabular}{lcccc} \hline $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ & $T_{\rm peak, ^{12}CO, thick}$ & $I_{\rm ^{12}CO, thick}$ & $I_{\rm ^{12}CO, thin}$ & $f_{\rm thin}$\\ & (mK) & (K km s$^{-1}$) & (K km s$^{-1}$) & (\%)\\ \hline \hline 5 & $38.2\pm4.5$ & $3.8\pm0.7$ & $17.2\pm1.7$ & $82\pm11$\\ 10 & $76.5\pm9.0$ & $7.7\pm1.4$ & $13.3\pm2.0$ & $64\pm12$\\ 13 & $99.4\pm11.7$ & $10.0\pm1.8$ & $10.8\pm2.2$ & $52\pm13$\\ \hline \end{tabular} \end{center} \footnotetext[$\ast$]{ Fitting with two Gaussians: one for the optically thick component and the other for the non-optically thick component. The free parameters for fitting are $T_{\rm peak}$, the line center velocity and the FWHM of the non-optically thick component. The center velocity and FWHM of the optically thick component are fixed to $-13.7$ km s$^{-1}$ and 94.1 km s$^{-1}$, respectively, which are estimated by fitting the $^{13}$CO spectra with a Gaussian. } \end{minipage} \label{tab-8} \end{table*}% \section{Summary}\label{Summary} We obtained the averaged spectra of $^{12}$CO and $^{13}$CO in the center, bar, bar-end, offset, arm and interarm regions of NGC~3627 with the stacking analysis after the velocity-axis alignment (VA) procedure, according to the velocity field estimated from the $^{12}$CO mapping data. We successfully detected the $^{13}$CO spectrum in the interarm region of NGC~3627, where the emission does not have sufficient S/N in the original data. The main results of this paper are as follows: \begin{enumerate} \item A weak $^{13}$CO emission in the interarm region of NGC~3627 is successfully detected for the first time with the stacking analysis after the VA procedure (figure \ref{fig:StackedSpectra} of section \ref{Results}). \item The validity of the stacking method with VA is confirmed by comparing the integrated intensities of the stacked spectra with and without the VA procedure. Moreover, the S/N of the stacked spectra with VA is improved by a factor of up to 3.2 compared to those without VA (table \ref{tab-2} of section \ref{Results}). \item The integrated intensity ratios $I_{^{12}\rm{CO}}/I_{^{13}\rm{CO}}$ in the bar and interarm regions are almost two times higher than those in the other regions. $I_{^{12}\rm{CO}}/I_{^{13}\rm{CO}}$ in the center region takes an intermediate value between them.
The high values of $I_{^{12}\rm{CO}}/I_{^{13}\rm{CO}}$ in the bar and center regions are attributed to intensity ratios ($T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$) higher than in the other regions, while the high value in the interarm region is attributed to a FWHM$_{^{12}\rm{CO}}/$FWHM$_{^{13}\rm{CO}}$ ratio higher than in the other regions. The difference in the line width between $^{12}$CO and $^{13}$CO suggests two gas components in the interarm region, one with a narrow ($\sim$ FWHM$_{\rm ^{13}CO}$) and the other with a broad ($\sim$ FWHM$_{\rm ^{12}CO}$) line width (tables \ref{tab-4} and \ref{tab-5} of section \ref{subsec-Discussion2}). \item $T_{\rm ^{12}CO}/T_{\rm ^{13}CO}$ in the center and bar regions and of the broad-line-width component in the interarm region are $17.1$, $22.7$ and $26.4$, respectively, indicating that the $^{12}$CO lines are not completely optically thick in those regions if we assume the same $^{12}$C/$^{13}$C radial gradient as that of our Galaxy (figures \ref{fig:SpectraComparison1213}, \ref{fig:SpectraComparison1213Interarm} and \ref{fig:tau12CO} of sections \ref{subsec:slope} and \ref{subsec:physicalstate}). \item More than half of the $^{12}$CO emission from the interarm region is likely to be radiated from the diffuse gas component, if the $^{12}$CO spectrum is decomposed into two Gaussians, one with $\sim$ FWHM$_{^{13}\rm{CO}}$ and the other with $\sim$ FWHM$_{^{12}\rm{CO}}$ (figure \ref{fig:DualGaussians} of section \ref{subsec:conversionfactor}). \item The existence of a non-optically thick component of $^{12}$CO in the center, bar, and interarm regions indicates a lower CO-to-H$_2$ conversion factor compared to the other regions. It is necessary to take into account a non-universal conversion factor within a galaxy when comparing the molecular gas distribution and SFE among different regions. Otherwise, the molecular gas mass and the SFE may be overestimated and underestimated, respectively, by factors of a few, as in the case of the interarm region of NGC~3627 (section \ref{subsec:conversionfactor}). \end{enumerate} \bigskip We would like to thank an anonymous referee for very productive comments. KMM thanks Shuuro Takano, Tetsuhiro Minamidani, Tomoki Morokuma, Junichi Baba, Daisuke Iono, Jin Koda and all members of NRO for their support and fruitful discussions. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section*{Abstract} {\bf We present a method to reduce the variance of stochastic trace estimators used in quantum typicality (QT) methods via a randomized low-rank approximation of the finite-temperature density matrix $e^{-\beta H}$. The trace can be evaluated with higher accuracy in the low-rank subspace while using the QT estimator to approximate the trace in the complementary subspace. We present two variants of the trace estimator and demonstrate their efficacy using numerical experiments. The experiments show that the low-rank approximation outperforms the standard QT trace estimator at moderate to low temperatures. We argue this is due to the low-rank approximation accurately representing the density matrix at low temperatures, allowing for accurate results for the trace. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} Quantum typicality (QT) methods are powerful tools used for studying finite-temperature physics with exact diagonalization (ED); more broadly, they belong to a class of stochastic trace estimators with applications in many other fields~\cite{drabold93,jaklic94,aichhorn03,long03,weise06,avron11,sugiura13,hanebaum14,hyuga14,roosta-khorasani15,saibaba17,sugiura17,okamoto18,schnack20}. More sophisticated applications of QT range from dynamic quantum typicality (DQT) for calculating real-time dynamics at finite temperature~\cite{bartsch09,elsayed13}, to minimally entangled typical thermal states (METTS) used for calculating finite-temperature physics with matrix and tensor product states~\cite{white09,stoudenmire10,wietek19}. QT methods generally fall into two categories, each using a different approximation of the trace: \begin{gather} {\rm Tr}\left(O e^{-\beta H}\right)\approx \frac{1}{M}\sum_{i=1}^M \langle z_i |e^{-\frac{\beta}{2} H} O e^{-\frac{\beta}{2} H} |z_i\rangle\label{eq:QT_sym}\\ {\rm Tr}\left(O e^{-\beta H}\right)\approx \frac{1}{M}\sum_{i=1}^M \langle z_i | O e^{-\beta H} |z_i\rangle.\label{eq:QT_asym} \end{gather} Here, $H$ is the Hamiltonian of the system, $\beta$ is the inverse temperature, and $|z_i\rangle$ are independent identically distributed random vectors in the relevant Hilbert space. To evaluate $e^{-\tau H} |z_i\rangle$, one employs Lanczos or some other method that can efficiently capture the action of the matrix exponential. The variance of the QT trace estimator is a major factor in the effectiveness of a given QT method. When the variance is high, more samples are required to obtain a certain precision for the trace~\cite{schnack20,sugiura13}. When applying QT with ED, the variance can depend very strongly on the temperature and even differ between Eqs.~\eqref{eq:QT_sym} and \eqref{eq:QT_asym}. At high temperatures, both Eqs.~\eqref{eq:QT_asym} and \eqref{eq:QT_sym} have a similar variance, but Eq.~\eqref{eq:QT_sym} has a smaller variance as the temperature decreases~\cite{aichhorn03}. We will refer to Eq.~\eqref{eq:QT_sym} and Eq.~\eqref{eq:QT_asym} as low-temperature quantum typicality (LTQT) and high-temperature quantum typicality (HTQT), respectively. In the case of METTS, the variance is reduced by picking $|z_i\rangle$ as product states and using Markov chain Monte Carlo to sample product states with the largest weights~\cite{white09,stoudenmire10}. However, for ED, the Markov chain method for sampling $|z_i\rangle$ is too expensive due to the significant computational effort required to evolve a state in imaginary time.
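For concreteness, the following is a minimal dense-matrix sketch of the LTQT estimator of Eq.~\eqref{eq:QT_sym}; in practice one would replace the dense matrix exponential with Lanczos, as noted above, and the function names here are ours. \begin{verbatim}
import numpy as np
from scipy.linalg import expm

def ltqt_trace(H, O, beta, m, rng):
    """Monte Carlo estimate of Tr(O e^{-beta H}) with the symmetric
    (LTQT) estimator, using a dense matrix exponential for clarity."""
    n = H.shape[0]
    U = expm(-0.5 * beta * H)              # e^{-beta H / 2}
    acc = 0.0
    for _ in range(m):
        # complex Gaussian vector with E[z z^dagger] = identity
        z = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)
        w = U @ z                          # thermal pure state
        acc += np.real(np.vdot(w, O @ w))  # <z| U O U |z>
    return acc / m

# rng = np.random.default_rng(0)
# est = ltqt_trace(H, O, beta=1.0, m=100, rng=rng)   # H, O Hermitian
\end{verbatim}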
While the random vectors act as a means to approximate the trace, one can also interpret QT as an approximation of the density matrix $\rho\propto e^{-\beta H}$. This observation becomes more apparent by inserting a complete set of states into Eq.~\eqref{eq:QT_sym} and rearranging the terms to form the trace over a density matrix defined as a statistical mixture of thermal pure states~\cite{hyuga14,sugiura17}. Some recent work proposed using a randomized low-rank approximation to reduce the variance of stochastic trace estimators~\cite{lin17,saibaba17,meyer20}. Randomized low-rank approximations are a class of algorithms that use random vectors and the action of the matrix-vector product to create a low-rank approximation of a given matrix~\cite{halko12}. This paper will show that one can use thermal pure states to generate a randomized low-rank approximation of the canonical density matrix. We then show how to use the low-rank approximation to construct new trace estimators for QT and DQT. We call the new variants low-rank quantum typicality (LR-QT) and low-rank dynamic quantum typicality (LR-DQT). To demonstrate the advantage of LR-QT and LR-DQT, we use numerical experiments on the spin-$1/2$ XXZ chain. We show that our low-rank versions of QT perform as well as or better than standard QT methods using a similar amount of computational effort. We also show that our new estimators work exceptionally well at low to moderate temperatures where standard QT methods have difficulties, making our low-rank variants preferable to regular QT. The rest of the paper is organized as follows: In Sec.~\ref{sec:trace} we discuss the low-rank approximation applied to the trace estimator. Then, in Sec.~\ref{sec:lrqt} we use those formal results to construct LR-QT and LR-DQT. In Sec.~\ref{sec:exp} we present the results of the numerical experiments on the spin-$1/2$ XXZ chain. Finally, in Sec.~\ref{sec:con}, we end the paper with a discussion of possible applications of LR-QT. \section{Low-Rank Trace Estimator} \label{sec:trace} In this section we briefly outline the algorithm presented in Ref.~\cite{meyer20}. First, we define a low-rank approximation of order $r$ for an $N\times N$ matrix $A$ as: \begin{equation} A\approx Q\left[A\right]_r Q^\dagger, \end{equation} where the columns of $Q$ are a set of $r$ orthonormal vectors and $[A]_r = Q^\dagger A Q$ is the $r\times r$ matrix representing $A$ in this basis. There are many algorithms available to construct such a low-rank approximation; here, the low-rank approximation is generated using a so-called randomized low-rank approximation. To construct $Q$ using a randomized low-rank approximation of $A$, one generates a set of $r$ independent identically distributed random vectors as columns in an $N\times r$ matrix $S$. Next, by applying the matrix $A$ to $S$, we obtain a new matrix $Y=AS$. It has been proven that the span of the vectors in $Y$ closely approximates the dominant rank-$r$ subspace of $A$; therefore, one may obtain $Q$ by generating an orthonormal basis from the vectors stored in $Y$ using any number of methods~\cite{halko12}. The trace of $A$ can be estimated using this low-rank approximation by breaking the calculation into two parts: the trace over the low-rank subspace spanned by $Q$ and the trace over the subspace that is complementary to the span of $Q$. In the limit where the low-rank approximation becomes exact, the contribution from the complementary subspace will be small~\cite{meyer20}.
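A minimal sketch of this randomized range finder follows; \texttt{apply\_a} stands for any user-supplied routine implementing the matrix--(block of vectors) product, and the names are ours. \begin{verbatim}
import numpy as np

def randomized_range(apply_a, n, r, rng):
    """Randomized range finder: returns Q (n x r, orthonormal columns)
    whose span approximates the dominant rank-r subspace of A."""
    S = rng.normal(size=(n, r))   # random sketch matrix
    Y = apply_a(S)                # Y = A S via one block matvec
    Q, _ = np.linalg.qr(Y)        # orthonormalize the columns (QR in
    return Q                      # place of Gram-Schmidt or Cholesky)
\end{verbatim}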
As $r\ll N$, it is feasible to evaluate the trace of $A$ in the low-rank subspace exactly, but we can only estimate the trace over the complementary subspace. To accomplish this, we use the stochastic trace estimator: we take a set of $r$ independent identically distributed random vectors as the columns of a matrix $G$ and project them onto the complementary subspace of $Q$ by evaluating: \begin{equation} \tilde{G} = G - QQ^\dagger G.\label{eq:G_proj} \end{equation} Using $\tilde{G}$ and $Q$ the trace can be estimated as follows~\cite{meyer20}: \begin{equation} {\rm Tr}(A) \approx {\rm Tr}\left(Q^\dagger A Q\right) + \frac{1}{r}{\rm Tr}\left(\tilde{G}^\dagger A\tilde{G}\right).\label{eq:lr_trace_A} \end{equation} The first term is the trace over the low-rank approximation of $A$, while the second term corresponds to the stochastic trace estimator applied to the projection of $A$ onto the complementary subspace~\cite{meyer20}. The same analysis applies to a matrix of the form $A=B^2$; if $B$ is Hermitian, we can modify the expression to make it symmetric: \begin{equation} {\rm Tr}(B^2) \approx {\rm Tr}\left((BQ)^\dagger (BQ)\right) + \frac{1}{r}{\rm Tr}\left((B\tilde{G})^\dagger(B\tilde{G})\right).\label{eq:lr_trace_B} \end{equation} We close this section with some notes on generating the orthonormal basis $Q$ from $Y$. While Gram-Schmidt can be numerically unstable, it may be a good method for generating $Q$ because it processes the columns of $Y$ one at a time. Thus, it is possible to iteratively generate the states in $Q$ until the error of the low-rank approximation falls below a predefined tolerance~\cite{halko12}. It is also possible to use a Cholesky decomposition of the overlap matrix $Y^\dagger Y$ to generate $R$ by noting that: \begin{equation} Y^\dagger Y = (Q R)^\dagger (Q R) = R^\dagger R = LL^\dagger. \end{equation} Then, one can obtain $Q$ by inverting $R=L^\dagger$ and solving $Y=QR$. While this approach might be appealing due to the significant decrease in the computational overhead in generating $R$, it fails when two or more vectors in $Y$ are linearly dependent; in that case the Cholesky decomposition is ill-defined because $Y^\dagger Y$ is no longer positive definite. \section{Low-rank Quantum Typicality} \label{sec:lrqt} If in Eqs.~\eqref{eq:lr_trace_A} and \eqref{eq:lr_trace_B} we replace $A\rightarrow e^{-\beta H}$ and $B\rightarrow e^{-\frac{\beta}{2} H}$, we obtain two different approximations for the partition function. To calculate expectation values we could apply the trace estimator directly to $Oe^{-\beta H}$ or $e^{-\frac{\beta}{2} H}Oe^{-\frac{\beta}{2} H}$. However, we can instead approximate the trace of observables with the same vectors used for estimating the partition function, similar to standard QT. Let us rewrite Eqs.~\eqref{eq:lr_trace_A} and \eqref{eq:lr_trace_B} in terms of the states stored in the columns of $\tilde{G}\rightarrow |\tilde{g}_i\rangle$ and $Q\rightarrow |q_i\rangle$. Let us also replace $A\rightarrow e^{-\beta H}$ and $B\rightarrow e^{-\frac{\beta}{2} H}$: \begin{gather} {\rm Tr}\left(e^{-\beta H}\right)\approx \sum_{i=1}^r \langle q_i|e^{-\beta H}|q_i\rangle+\frac{1}{r}\sum_{i=1}^r\langle \tilde{g}_i|e^{-\beta H}|\tilde{g}_i\rangle\\ {\rm Tr}\left(\left(e^{-\frac{\beta}{2} H}\right)^2\right)\approx \sum_{i=1}^r \langle q_i|e^{-\frac{\beta}{2} H}e^{-\frac{\beta}{2} H}|q_i\rangle+\frac{1}{r}\sum_{i=1}^r\langle \tilde{g}_i|e^{-\frac{\beta}{2} H}e^{-\frac{\beta}{2} H}|\tilde{g}_i\rangle \end{gather} Now the similarities between LR-QT and standard QT are more pronounced.
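Putting the pieces of Sec.~\ref{sec:trace} together, the generic estimator of Eq.~\eqref{eq:lr_trace_A} takes only a few lines of code. The following dense-linear-algebra sketch is ours and is included for clarity only; in practice the products $AS$, $AQ$, and $A\tilde{G}$ would be evaluated with Krylov or Chebyshev methods rather than with an explicit matrix.
\begin{verbatim}
import numpy as np
from scipy.linalg import qr

def low_rank_trace(A, r, seed=None):
    """Estimate Tr(A): exact trace in a randomized rank-r subspace plus
    a stochastic estimate over the complementary subspace."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    S = rng.standard_normal((N, r))   # random sketch vectors
    Y = A @ S                         # span(Y) ~ dominant rank-r subspace
    Q, _ = qr(Y, mode='economic')     # orthonormal basis of span(Y)
    G = rng.standard_normal((N, r))
    G_t = G - Q @ (Q.conj().T @ G)    # projection onto complement of span(Q)
    t_lr = np.trace(Q.conj().T @ (A @ Q))           # low-rank piece
    t_st = np.trace(G_t.conj().T @ (A @ G_t)) / r   # stochastic remainder
    return t_lr + t_st
\end{verbatim}
Substituting $A=e^{-\beta H}$ reproduces the first of the two partition-function estimates above, with the columns of $Q$ and $\tilde{G}$ playing the roles of $|q_i\rangle$ and $|\tilde{g}_i\rangle$.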
The structure of these expressions is identical to that of standard QT, except that LR-QT uses different types of vectors in the two parts of the trace estimator. Drawing inspiration from regular QT, we can write down expressions for tracing over an operator in terms of the original vectors used in calculating the partition function: \begin{gather} {\rm Tr} \left(Oe^{-\beta H}\right) \approx \sum_{i=1}^r \langle q_i|Oe^{-\beta H}|q_i\rangle+\frac{1}{r}\sum_{i=1}^r\langle \tilde{g}_i|Oe^{-\beta H}|\tilde{g}_i\rangle,\label{eq:lr_asym}\\ {\rm Tr} \left(e^{-\frac{\beta}{2} H}Oe^{-\frac{\beta}{2} H}\right) \approx \sum_{i=1}^r \langle q_i|e^{-\frac{\beta}{2} H}Oe^{-\frac{\beta}{2} H}|q_i\rangle+\frac{1}{r}\sum_{i=1}^r\langle \tilde{g}_i|e^{-\frac{\beta}{2} H}Oe^{-\frac{\beta}{2} H}|\tilde{g}_i\rangle.\label{eq:lr_sym} \end{gather} Based on their structure, we will refer to Eqs.~\eqref{eq:lr_sym} and \eqref{eq:lr_asym} as low-rank low-temperature quantum typicality (LR-LTQT) and low-rank high-temperature quantum typicality (LR-HTQT), respectively. To extend these equations to the study of real-time dynamics, simply replace $O\rightarrow O(t)$ and evolve the vectors $|q_i\rangle$ and $|\tilde{g}_i\rangle$ in the same way as one would do for DQT. Both LR-QT and LR-DQT can be summarized in the following steps: \begin{enumerate} \item[$(1)$] Generate a random set of $r$ column vectors, $S$, and calculate $Y = e^{-\beta H} S$ using Krylov, Chebyshev, or similar methods. \item[$(2)$] Orthogonalize $Y$ to express $Y = QR$. \item[$(3)$] Generate another set of $r$ random column vectors, $G$, and calculate $\tilde{G} = G - QQ^\dagger G$. \item[$(4)$] Calculate the partition function and ${\rm Tr} \left(Oe^{-\beta H}\right)$ using Eq.~\eqref{eq:lr_asym} or Eq.~\eqref{eq:lr_sym} by applying $e^{-\beta H}$ or $e^{-\frac{\beta}{2} H}$, respectively, to each vector in $Q$ and $\tilde{G}$, evolving in real time for LR-DQT. \end{enumerate} The computational cost of LR-LTQT and LR-HTQT can be broken into three main pieces: $(i)$ calculating the action of the matrix exponential on a vector, $(ii)$ calculating the trace in Eq.~\eqref{eq:lr_sym} or Eq.~\eqref{eq:lr_asym}, and $(iii)$ performing the QR decomposition and calculating $\tilde{G}$. Steps $(i)$ and $(ii)$ are the same steps required in standard QT, while $(iii)$ is unique to LR-QT. At first glance, step $(iii)$ may appear to significantly increase the amount of effort required to calculate expectation values at multiple temperatures. For standard QT methods, one can obtain results for multiple temperatures using a single Lanczos basis per random vector with no extra effort~\cite{wietek19,krishnakumar19,schnack20}. For LR-QT, an intermediate step involves orthogonalizing the vectors in $Y$ to obtain $Q$, followed by the re-application of $e^{-\tau H}$ to $Q$. So the question becomes: is it possible to generate $e^{-\tau H}Q$ without explicitly applying the matrix exponential to the vectors in $Q$? As we will show, the answer is yes, with a small amount of computational overhead. First, let us assume that we have two families of vectors $Y(\tau)=e^{-\tau H} Y$ and $G(\tau)=e^{-\tau H} G$, with $Y$ and $G$ being two sets of $r$ independent identically distributed random vectors. Recall that for a given value of $\beta$, we must orthogonalize the vectors in $Y(\beta)$ in order to obtain $Q(\beta)$, leading to the decomposition: \begin{equation} Y(\beta) = Q(\beta) R(\beta). \end{equation} Inverting this equation allows one to construct $Q(\beta)$ in terms of $Y(\beta)$.
One might opt to use an RQ decomposition instead of a QR decomposition, as it can be numerically more stable to invert the equation $Y^\dagger = R Q^\dagger$; regardless, the mechanics are the same for both methods. After evolving $Q(\beta)$ in imaginary time, we find: \begin{equation} e^{-\tau H}Q(\beta)=e^{-\tau H}Y(\beta)R(\beta)^{-1}=Y\left(\beta+\tau\right)R(\beta)^{-1}. \end{equation} To calculate the vectors required for the trace over the complementary subspace we use Eq.~\eqref{eq:G_proj} to write: \begin{equation} e^{-\tau H}\tilde{G} = G\left(\tau\right) - \left(e^{-\tau H}Q(\beta)\right) Q(\beta)^\dagger G(0). \end{equation} Thus, we have overcome the issue of having to re-apply the matrix exponential for each temperature. The only extra cost here is the orthogonalization step for each temperature, as well as having to evolve states to $\tau_{\rm max}=1.5\beta_{\rm max}$ for LR-LTQT and $\tau_{\rm max}=2\beta_{\rm max}$ for LR-HTQT. If one uses a Cholesky decomposition of $Y^\dagger Y$ with the above procedure, it is possible to calculate the LR-QT estimators without explicitly calculating $Q$, making LR-QT feasible for METTS as well as large-scale ED calculations. \section{Numerical Experiments} \label{sec:exp} QT methods are powerful because Eq.~\eqref{eq:QT_asym} and Eq.~\eqref{eq:QT_sym} can give very accurate results with $M$ much smaller than the size of the Hilbert space. This behavior is due to the sample-to-sample variance of expectation values in thermal pure states decaying exponentially with increasing system size~\cite{sugiura13}. So far, we have only focused on the estimates for the trace; however, an expectation value requires taking the ratio of two trace estimates. To make our results practically relevant, we will focus on the variance of expectation values, averaged over independent realizations\footnote{By "variance" we are not discussing the variance of individual pure states; we are interested in the variance of the estimate coming from a collection of pure states.}. To make a fair comparison between LR-QT and QT, we choose rank $r$ for the former and $M=3r$ for the latter, so that the number of times we call the matrix exponential function is the same for each method. We focus primarily on the matrix exponential, as that is the most expensive part of the calculation, making the two methods roughly equal in terms of computational complexity. In our numerical experiments we study the behavior of both thermodynamic and time-dependent expectation values in the spin-1/2 $XXZ$ chain with periodic boundary conditions: \begin{equation} H = \sum_{i=1}^L (1+\Delta)S^z_i S^z_{i+1} + S^x_iS^x_{i+1}+S^y_iS^y_{i+1},\label{eq:H} \end{equation} where we have set the units such that $\hbar=k_B=1$. The observable we will measure is the nearest-neighbor $S^z S^z$ correlator, \begin{equation} C = \frac{1}{L}\sum_{i=1}^L S^z_i S^z_{i+1}. \label{eq:nn_corr} \end{equation} For all cases considered, the trace is calculated over the sector with $\sum_{i}S^z_{i}=0$, and we fix the length of the chain to be $L=14$. We use full diagonalization to compute the matrix exponential, and we sample random vectors by drawing each entry from the normal distribution. \begin{figure}[t] \centering \includegraphics[width=0.99\textwidth]{static.pdf} \caption{We show statistics of the trace estimator for all four QT methods applied to the nearest-neighbor $S^z S^z$ correlator with $\Delta=0$.
Panel $(a)$ shows the mean value of the four QT trace estimators with $r=10$; the error bars correspond to the variance. In panel $(b)$, we plot the variance as a function of temperature, $T$, with rank $r=100$. Finally, panel $(c)$ shows the variance as a function of $r$ with $T=1$. The dotted and dashed lines in panel $(c)$ correspond to power laws proportional to $1/\sqrt{r}$ and $1/r$, respectively. These lines represent the predictions for the scaling of the variance based on the probabilistic error bounds derived in Ref.~\cite{meyer20}. All calculations shown are averaged over $1000$ independent realizations.} \label{fig:static} \end{figure} To begin, we show in Fig.~\ref{fig:static}$(a)$ the finite-temperature expectation value of Eq.~\eqref{eq:nn_corr} for $\Delta=0$ using both QT and LR-QT with $r=10$ as a function of temperature, $T$. The error bars correspond to the variance of the trace estimator calculated from $1000$ independent realizations of each method. All four methods converge to the exact result when averaged over independent random realizations. As expected, HTQT has the largest variance at low temperatures, while the error bars are smaller than the markers for LTQT and the LR-QT methods. Next, we direct our focus to the variance of the trace estimators as a function of $T$ and $r$. In Fig.~\ref{fig:static}$(b)$ we show results as a function of $T$ for $r=100$ and $\Delta=0$. At high temperatures, the variance for the LR-QT methods is slightly larger than for both LTQT and HTQT; however, as the temperature decreases, the variances of both LR-LTQT and LR-HTQT decrease monotonically. Compare this to the standard variants of QT, which have a greater variance as the temperature decreases. Another question is: how does the variance of LR-QT depend on $r$ compared to standard QT? In Fig.~\ref{fig:static}$(c)$, we plot the variance as a function of $r$ with $T=1$, again with $\Delta=0$. We also plot two lines corresponding to the scaling laws based on the probabilistic error bounds presented in Ref.~\cite{meyer20}. While these error bounds are derived assuming a different distribution for the random vectors and a different trace estimator\footnote{Compare Eqs.~\eqref{eq:lr_asym} and \eqref{eq:lr_sym} to Eq.~\eqref{eq:lr_trace_A}, which is the formal result of Ref.~\cite{meyer20}.}, the numerical results follow the scaling remarkably well. If those theoretical bounds apply here, this would prove that LR-QT scales better with $r$ than standard QT. The numerical results are promising; however, a rigorous proof is still lacking. \begin{figure}[t] \centering \includegraphics[width=0.99\textwidth]{quench_100_scipost.pdf} \caption{Here we show the dynamics of the nearest-neighbor $S^z S^z$ correlator calculated as a function of time after a quench of the anisotropy parameter, $\Delta$, in the XXZ spin chain. We prepare the initial state at $T=1/2$ with $\Delta=0$, and then the Hamiltonian is quenched to $\Delta=4$. We show results using full diagonalization, LR-QT, and DQT. In panel $(a)$ we plot the exact solution in blue along with 20 individual realizations calculated using DQT shown as dashed orange lines. Panel $(b)$ shows a plot similar to panel $(a)$ but for LR-LTQT instead. Panel $(c)$ shows the variance of the DQT and LR-QT methods as a function of time using $100$ independent realizations.}\label{fig:dyn} \end{figure} To benchmark LR-DQT, we study a protocol where we initialize the XXZ chain with $\Delta=0$ at $T=1/2$ and quench to $\Delta=4$.
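For reference, the model and observable of Eqs.~\eqref{eq:H} and \eqref{eq:nn_corr} can be constructed densely in a few lines. The sketch below is ours and uses plain Kronecker products over the full Hilbert space for transparency; the actual calculations were performed with QuSpin in the $\sum_i S^z_i=0$ sector, which this sketch does not implement.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.diag([0.5, -0.5])

def two_site(a, b, i, j, L):
    """Embed operator a on site i and operator b on site j of L sites."""
    ops = [np.eye(2)] * L
    ops[i], ops[j] = a, b
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def xxz_hamiltonian(L, Delta):
    """Periodic spin-1/2 XXZ chain as defined in the text."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L):
        j = (i + 1) % L
        H += (1 + Delta) * two_site(sz, sz, i, j, L)
        H += two_site(sx, sx, i, j, L) + two_site(sy, sy, i, j, L)
    return H.real

def nn_correlator(L):
    """Nearest-neighbor correlator C = (1/L) sum_i Sz_i Sz_{i+1}."""
    C = sum(two_site(sz, sz, i, (i + 1) % L, L) for i in range(L))
    return np.real(C) / L
\end{verbatim}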
As before, we calculate expectation values of $C$, but now as a function of time after the quench. We present three different results from our calculations of both LR-DQT and DQT with $r=100$ in Fig.~\ref{fig:dyn}. In panel $(a)$ we plot the exact solution on top of $20$ realizations of DQT based on LTQT. Panel $(b)$ is similar to panel $(a)$ but plots the $20$ realizations of LR-DQT based on LR-LTQT. Finally, in Fig.~\ref{fig:dyn}$(c)$ we show the variance of the trace estimator for $\langle C(t)\rangle$ as a function of time for both DQT and LR-DQT averaged over $100$ realizations. Just as we observed in the previous results for thermodynamic quantities, the LR-DQT method has a smaller variance than DQT. We specifically chose an example in which the variance was large enough to see by eye. However, varying $T$ and $r$, we observed that LR-DQT is on par with or outperforms DQT. More importantly, when calculating real-time dynamics, the LR-DQT variant has the edge over standard DQT because one only has to evolve $2r$ vectors in real time versus $3r$ vectors for DQT. Given that the most computationally expensive part of this calculation is the evolution in real time, the $1/3$ reduction in cost is significant on top of the reduced variance. \section{Conclusion} \label{sec:con} In this paper, we have introduced LR-QT and LR-DQT to drastically improve the convergence of QT methods. The enhanced convergence is due to the use of a randomized low-rank approximation of the density matrix. We have shown how to construct the low-rank approximation using the existing thermal pure states from standard QT methods. Using numerical experiments on the spin-$1/2$ XXZ chain, we have shown that LR-QT outperforms standard QT when calculating thermodynamic quantities in the low- to moderate-temperature regime, and we have shown that LR-DQT can give better results with less computational effort than standard DQT. We argue that the computational overhead required for LR-QT methods is small and generally worth the effort, given the significant increase in the precision of expectation values. \begin{figure}[t] \centering \includegraphics[width=2.5in]{trace.pdf} \caption{Relative errors for two different estimates of the partition function $Z={\rm Tr}\left(e^{-\beta H}\right)$ for various temperatures. The solid lines correspond to the trace of the low-rank approximation of $e^{-\beta H}$; the dotted lines correspond to the error in the exact trace of $e^{-\beta H}$ truncated at the $r$-th eigenvalue. The solid lines plotted here are averaged over $100$ independent realizations of the randomized low-rank approximation.}\label{fig:trace_err} \end{figure} An intuitive explanation for why LR-QT methods have a lower variance at low temperatures is related to the spectrum of the density matrix, $e^{-\beta H}$. In this regime, the eigenvalues decay exponentially, allowing for a better low-rank approximation~\cite{halko12}. The increase in accuracy of the low-rank approximation implies that the contribution from the stochastic piece of the trace in Eq.~\eqref{eq:lr_asym} decreases. Therefore, any fluctuations that occur from sample to sample are suppressed. To support this argument, we plot the relative error between the trace of the low-rank approximation of the density matrix and the exact trace as a function of rank $r$, and compare it to the relative error of the trace truncated at the $r$-th eigenvalue.
In Fig.~\ref{fig:trace_err} the solid lines correspond to the error of a single realization of the low-rank approximation, while the dotted lines correspond to the truncated trace error. The plot clearly shows that the truncated trace and the low-rank approximation follow the same trend, supporting our argument. This argument is not the only reason for the success of LR-QT. While we do not have a formal proof, the numerical results in this work indicate that the LR-QT estimators follow the probabilistic error bounds derived for Eq.~\eqref{eq:lr_trace_A} in Ref.~\cite{meyer20}, indicating that LR-QT should, on average, scale better than QT as $r$ increases. Other than directly using LR-QT to study particular models, a natural application would be to combine LR-QT with numerical linked cluster expansions (NLCE)~\cite{rigol06,rigol07_1,rigol07_2,tang12}. In recent work, QT methods have successfully been applied to NLCE; however, one of the major obstacles was the high precision required to accurately subtract out the contributions from all sub-clusters. In Refs.~\cite{richter19,richter19_2} the calculation was possible in 1D because the series is particularly simple, requiring the subtraction of only one sub-cluster. In Ref.~\cite{krishnakumar19} the authors overcame the issue of precision by using Lanczos with full orthogonalization. The Lanczos procedure generates a low-rank approximation based on the Hamiltonian's extremal eigenvalues, which map directly onto the density matrix's largest eigenvalues and provide an accurate approximation of the trace at low temperatures, much like LR-QT. Finally, it is also worth noting that randomized low-rank approximations exist for a wide variety of matrix decompositions~\cite{halko12}. It may be helpful to apply these methods to other numerical techniques beyond QT. \section{Acknowledgements} This work was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences Grant Number DE-SC0019275. The authors would like to thank M. Bukov, A. Feiguin, and P. Patil for their useful comments, and D. Hendry for pointing out the relationship between the QR and Cholesky decompositions. All the calculations presented here were done using the QuSpin exact diagonalization package~\cite{weinberg17,weinberg19}. \bibliographystyle{SciPost_bibstyle}
\section{Introduction} It is common knowledge that the zero-point and thermal fluctuations of the electromagnetic field are responsible for several interesting physical phenomena which have attracted much experimental and theoretical attention in the last few years. The best-known example is the fluctuation-induced force acting between two closely spaced uncharged bodies in vacuum. At separations below a few nanometers one can neglect the presence of retardation, and this force is known as the van der Waals force. At larger separations, where the effects of retardation come into play, it is conventional to speak about the Casimir force (see the recent reviews \cite{1,2,3}). The theoretical description of the van der Waals and Casimir forces is given by the Lifshitz theory \cite{4}, which is derivable from the fluctuation-dissipation theorem of quantum statistical physics, or from the scattering approach, or by summing up the oscillator free energies in the framework of quantum field theory \cite{5,5a,6,7}. The Lifshitz theory allows computation of the van der Waals and Casimir energies, free energies, and forces, provided that the frequency-dependent dielectric permittivities of the interacting bodies are available. These permittivities are obtained from the complex index of refraction, which has been measured for a number of materials over some frequency regions \cite{8}. The characteristic feature of the van der Waals and Casimir forces is that their calculation requires a knowledge of the dielectric permittivities over very wide frequency regions, including zero frequency. The latter contributes significantly to the final result. Because of this, it is necessary to extrapolate the available optical data down to zero frequency on theoretical grounds. As an example, for metals extrapolations of this kind are usually made by means of the Drude model, which takes into account the electron-phonon interaction at low frequencies. The physical significance and properties of the Drude model are discussed in detail in Ref.~\cite{8a}. Surprisingly, in a number of experiments performed by two different groups \cite{9,10,11,12,13,14,15,16,17,18,18a} it was found that the measurement results exclude the theoretical predictions of the Lifshitz theory obtained using an extrapolation of the optical data by means of the Drude model. The same results turned out to be in agreement with theory if the optical data reflecting the role of core (bound) electrons are extrapolated down to zero frequency by the free-electron plasma model \cite{9,10,11,12,13,14,15,16,17,18,18a}. An important role in these comparisons is played by the contribution of the zero-frequency term of the Lifshitz formula, which depends heavily on the extrapolation used. It seems quite unusual that the Lifshitz theory is excluded by the measurement data if the actual relaxation properties of conduction electrons at low frequencies are taken into account but agrees with the data if these properties are disregarded. Taking into consideration that the difference between the two alternative theoretical predictions at the experimental separations of Refs.~\cite{9,10,11,12,13,14,15,16,17,18,18a} was below a few percent, attempts to solve the problem by invoking background effects or computational inaccuracies have been undertaken (see, e.g., Refs.~\cite{19,20,21,22}).
The experimental situation was finally cleared up by employing the differential force measurements proposed in Ref.~\cite{23}, where the theoretical predictions of the Lifshitz theory obtained with the help of the Drude- and plasma-model extrapolations of the optical data differ by up to a factor of 1000. The experiment of this kind \cite{24} conclusively excluded an extrapolation by means of the Drude model and turned out to be in agreement with the theoretical results using the plasma model. A disagreement between the theoretical predictions of the Lifshitz theory obtained using the physically justified Drude model and the measurement results from many experiments is considered puzzling \cite{25}. The roots of the Casimir puzzle are directly related to the fact that according to the Drude model there is no contribution to the Casimir force from the transverse electric mode at zero frequency \cite{25a}. As a result, at large separations between the interacting plates the predicted Casimir force is only one half of that predicted using the plasma model. The single experiment performed at large separations up to $7.3~\mu$m \cite{25b} was interpreted as being in agreement with the Drude model prediction. In this experiment, however, it was not the Casimir force alone that was measured but a force up to an order of magnitude larger, presumably originating from so-called patch potentials. The Casimir force itself was obtained indirectly by subtracting a large contribution described by an analytic expression containing two fitting parameters. According to Refs.~\cite{25c,25d}, this makes the results of Ref.~\cite{25b} uncertain. Various aspects of the Casimir puzzle are discussed at length in Refs.~\cite{1,3,7,8a,18a,24,25,25e,25f,25g}. Taking into account that a fundamental understanding of the Casimir puzzle is still missing, it seems warranted to reconsider the response of metals to the low-frequency electromagnetic field used in the Lifshitz theory. In the low-frequency range, the electromagnetic response is determined by the intraband part, which is essentially governed by the behavior of conduction electrons. Experimental studies using a variety of spectroscopic techniques show that much insight can be gained into the band-structure properties of the materials as well as the scattering processes carriers exhibit in their dynamics \cite{26}. The relaxation parameter $\gamma_{ep}$ of the standard Drude model is determined by electron-phonon scattering. However, at low frequencies electron-electron, electron-impurity, electron-surface, and other interactions contribute to the total relaxation parameter as well (see Ref.~\cite{26a} and the review \cite{27}); in clean metallic systems electron-electron scattering is the major addition to the electron-phonon one. It is known that for noble metals the contribution of electron-electron scattering $\gamma_{ee}$ to the relaxation parameter can be described by the Gurzhi formula \cite{27,28,29,30}, which contains both frequency- and temperature-dependent terms. Replacing $\gamma_{ep}$ in the dielectric permittivity of the Drude model with $\gamma_{ep}+\gamma_{ee}$, one obtains the so-called extended Drude, or Gurzhi, model for the dielectric permittivity of metals. In this paper, we investigate possible applications of the Gurzhi model in the Lifshitz theory for calculations of the Casimir force.
We explore the analytic properties of the Gurzhi dielectric permittivity and demonstrate that it violates the causality condition, which precludes its use over the entire frequency axis. Next, we consider the dielectric permittivities of the Gurzhi, Drude, and plasma models in combination with the measured optical data. It is confirmed that over some frequency region below 2~eV the Gurzhi model provides a better analytic approximation to the measured imaginary part of the dielectric permittivity of Au than the Drude model. The Casimir pressure between two parallel plates made of Au is computed using the optical data extrapolated down to zero frequency by means of the Gurzhi, Drude, and plasma models, and the obtained results are compared. This allows an estimation of the possible role of electron-electron interactions in the Casimir force. The Casimir pressures computed with different models of the dielectric permittivity are compared with precise experiments on measuring the Casimir interaction. It is shown that although the Gurzhi model provides a better analytic approximation to the optical data in some frequency range than the Drude one, it does not resolve the Casimir puzzle. The paper is organized as follows. In Sec.~II we describe the main features of the Gurzhi model and consider its analytic properties in connection with the causality principle. Section~III is devoted to a comparison between different analytic models of the dielectric permittivity of Au combined with the measured optical data. In Sec.~IV the Casimir interaction computed using different permittivities is compared with the measurement results of two precise experiments. In Sec.~V the reader will find our conclusions and a discussion. \section{The Gurzhi dielectric permittivity and its analytic properties} It is well known that at sufficiently low frequencies the response of metals to an electromagnetic field is essentially described by the dielectric permittivity of the Drude model \begin{equation} \veD(\omega,T)=1- \frac{\omega_p^2}{\omega[\omega+i\gamma_{ep}(T)]}, \label{eq1} \end{equation} \noindent where $\omega_p$ is the plasma frequency and $\gamma_{ep}(T)$ is the temperature-dependent relaxation parameter determined by the process of electron-phonon scattering. An extended version of the Drude dielectric permittivity, which is sometimes called the Gurzhi model, takes a similar form \begin{equation} \veG(\omega,T)=1- \frac{\omega_p^2}{\omega[\omega+i\gamma(\omega,T)]}. \label{eq2} \end{equation} \noindent Here, however, the relaxation parameter consists of two terms \begin{equation} \gamma(\omega,T)=\gamma_{ep}(T)+\gamma_{ee}(\omega,T) \label{eq3} \end{equation} \noindent taking into account the processes of electron-phonon and electron-electron scattering. The theoretical expression for $\gamma_{ee}$ was derived in Ref.~\cite{28} (see also Refs.~\cite{27,29,30}) based on the quantum Boltzmann equation for the electronic Fermi liquid and the Kubo formula, which relates the conductivity to the current-current correlation function: \begin{equation} \gamma_{ee}(\omega,T)=D\left[(k_BT)^2+ \left(\frac{\hbar\omega}{2\pi}\right)^2\right]. \label{eq4} \end{equation} \noindent Here, the coefficient $D=\pi^3\Gamma\Delta/(12\hbar E_F)$, where $E_F$ is the Fermi level of the metal under consideration, $k_B$ is the Boltzmann constant, $\Delta=0.75$ is the fractional umklapp scattering, and $\Gamma=0.55$ is the scattering probability averaged over the Fermi surface.
Note that Eq.~(\ref{eq4}) has been verified in several experiments for noble metals \cite{31,32,33,34} in the near-infrared frequency range up to the interband absorption frequencies. In these experiments, the temperature-dependent contribution in Eq.~(\ref{eq4}) is small compared to the frequency-dependent one. For Au, which is the metal of our interest below, one has $D=0.94~\mbox{fs}^{-1}\mbox{eV}^{-2}$ \cite{30}. Substituting Eq.~(\ref{eq4}) in Eq.~(\ref{eq3}), the relaxation parameter taking into account both the electron-phonon and electron-electron scattering can be written in the form \begin{equation} \gamma(\omega,T)=C(T)+B\omega^2, \label{eq5} \end{equation} \noindent where \begin{equation} C(T)=\gamma_{ep}(T)+D(k_BT)^2, \quad B=D\left(\frac{\hbar}{2\pi}\right)^2. \label{eq6} \end{equation} The dielectric permittivity of the Gurzhi model (\ref{eq2}), (\ref{eq5}), besides the singular point at $\omega=0$, has poles in the plane of complex frequencies determined by the roots of the equation \begin{equation} iB\omega^2+\omega+iC(T)=0. \label{eq7} \end{equation} \noindent By solving this equation, one obtains \begin{equation} \omega^{(1,2)}=i\xi^{(1,2)}=\frac{i}{2B}\left[1\pm\sqrt{1+4BC(T)}\right], \label{eq8} \end{equation} \noindent where $\omega^{(1)}$ and $\omega^{(2)}$ belong to the upper and lower half-planes, respectively. Along the imaginary frequency axis $\omega=i\xi$ the Gurzhi dielectric permittivity (\ref{eq2}), (\ref{eq4}) takes real values \begin{equation} \veG(i\xi)=1+\frac{\omega_p^2}{\xi\left[\xi+C(T)-B\xi^2\right]}. \label{eq9} \end{equation} \noindent As an example, in Fig.~\ref{fg1} $\veG$ is shown as a function of $\xi$ for Au at room temperature using the parameters of the Gurzhi model indicated above and the experimental parameters of the Drude model $\hbar\omega_p=8.68$~eV and $\hbar\gamma_{ep}(T=295\,\mbox{K})=30.3$~meV \cite{30}. From Eq.~(\ref{eq9}) and Fig.~\ref{fg1} it is seen that the dielectric permittivity $\veG$ reaches the minimum value $\veG(i\xi^{(m)})=1.1267$ at the point \begin{equation} \hbar\xi^{(m)}=\frac{\hbar}{3B}\left[1+\sqrt{1+3BC(T)}\right]= 42.2094\,\mbox{eV} \label{eq10} \end{equation} \noindent and has a discontinuity at the point $\hbar\xi^{(1)}=63.3217\,\,$eV defined in Eq.~(\ref{eq8}). It is important to note that in the region from $\hbar\xi^{(1)}$ to $\hbar\xi^{(0)}\approx64.47$~eV the Gurzhi permittivity takes negative values and vanishes at $\xi=\xi^{(0)}$: $\veG(i\xi^{(0)})=0$. For $\xi>\xi^{(0)}$, $\veG$ increases monotonically and tends to unity as $\xi$ goes to infinity. These properties are anomalous for commonly employed dielectric permittivities, which must meet some necessary physical conditions. It is well known that the electric displacement {\boldmath$D$}$(t)$ is determined by the values of the electric field {\boldmath$E$}$(t)$ at all {\it previous} moments of time \cite{35} \begin{equation} \mbox{\boldmath$D$}(t)=\mbox{\boldmath$E$}(t)+ \int_0^{\infty}f(\tau)\mbox{\boldmath$E$}(t-\tau)d\tau, \label{eq11} \end{equation} \noindent where the function of time $f(\tau)$ is finite at all $\tau$, depends on the properties of the medium, and defines the frequency-dependent dielectric permittivity \begin{equation} \ve(\omega)=1+ \int_0^{\infty}f(\tau)e^{i\omega\tau}d\tau. \label{eq12} \end{equation} \noindent Equations (\ref{eq11}) and (\ref{eq12}) constitute the mathematical formulation of the principle of causality, stating that the future has no effect on the past.
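Before turning to the constraints imposed by causality, we note that the numbers quoted after Eq.~(\ref{eq10}) are straightforward to reproduce numerically. The following short script is ours; it assumes the Au parameters given above ($\hbar\omega_p=8.68$~eV, $\hbar\gamma_{ep}=30.3$~meV, $D=0.94~\mbox{fs}^{-1}\mbox{eV}^{-2}$) and works throughout in eV units with $\hbar=0.6582$~eV\,fs, so small deviations from the quoted values may arise from rounding of these constants.
\begin{verbatim}
import numpy as np

hbar = 0.6582119          # eV fs
wp   = 8.68               # hbar*omega_p            [eV]
gep  = 0.0303             # hbar*gamma_ep(295 K)    [eV]
D    = 0.94               # Gurzhi coefficient      [1/(fs eV^2)]
kT   = 8.617e-5 * 295.0   # k_B T at T = 295 K      [eV]

C = gep + hbar * D * kT**2         # hbar*C(T)  [eV]
B = hbar * D / (4.0 * np.pi**2)    # B/hbar     [1/eV]

xi_pole = (1.0 + np.sqrt(1.0 + 4.0*B*C)) / (2.0*B)  # hbar*xi^(1), ~63 eV
xi_min  = (1.0 + np.sqrt(1.0 + 3.0*B*C)) / (3.0*B)  # hbar*xi^(m), ~42 eV

def eps_G(xi):
    """Gurzhi permittivity on the imaginary frequency axis; xi in eV."""
    return 1.0 + wp**2 / (xi * (xi + C - B * xi**2))

print(xi_pole, xi_min, eps_G(xi_min))   # pole, minimum position, minimum value
\end{verbatim}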
From Eq.~(\ref{eq12}) it is seen that in the upper half-plane (${\rm Im}\,\omega>0$) the integral converges and, thus, $\ve(\omega)$ has no singular points \cite{35}. This statement is a consequence of the principle of causality. It is easily seen also that in the upper half-plane (including the real frequency axis) the dielectric permittivity cannot vanish \cite{35}. All of the above results in the Kramers-Kronig relations, which link the real and imaginary parts of the dielectric permittivity to one another. {}From Fig.~\ref{fg1} and the related discussion it is seen that the dielectric permittivity of the Gurzhi model does not satisfy the principle of causality and the Kramers-Kronig relations. Several attempts to recover the Kramers-Kronig relations have been undertaken (see, for instance, Refs.~\cite{26,31,36}) by introducing the so-called {\it memory function}, but its explicit form remains unavailable. In fact, to cancel the first-order pole at the point $i\xi^{(1)}$ in the upper half-plane of complex frequencies it would be necessary to replace the plasma frequency squared in Eq.~(\ref{eq9}) with the frequency- and temperature-dependent quantity \begin{equation} {\tilde{\omega}}_p^2(\omega,T)=\left[2iB\omega+1+\sqrt{1+4BC(T)}\right] g(\omega)\omega_p^2, \label{eq13} \end{equation} \noindent where $g(\omega)$ is any analytic function in the upper half-plane. In this case, however, the physically meaningful term of the order of $\omega^2$ in Eq.~(\ref{eq4}) would be lost. Thus, one can conclude that the Gurzhi model can be used only in some restricted region of low frequencies as a more or less good phenomenological description of the dielectric permittivity of noble metals. In this respect it would be interesting to compare it with other analytic models used in computations of the Casimir force and with the experimental permittivity obtained from the measured optical data. \section{Different models of the dielectric permittivity and the optical data} It is well known that the Lifshitz formulas for the Casimir free energy and pressure are most conveniently expressed via the dielectric permittivity of the plate materials along the imaginary frequency axis. The latter quantity, in turn, can be found by means of the Kramers-Kronig relations \begin{equation} \ve(i\xi)=1+\frac{2}{\pi}\int_0^{\infty} \frac{\omega\,{\rm Im}\ve(\omega)}{\omega^2+\xi^2}d\omega \label{eq14} \end{equation} \noindent or \begin{equation} \ve(i\xi)=1+\frac{2}{\pi}\int_0^{\infty} \frac{\omega\,{\rm Im}\ve(\omega)}{\omega^2+\xi^2}d\omega +\frac{\omega_p^2}{\xi^2} \label{eq15} \end{equation} \noindent expressing $\ve(i\xi)$ through the imaginary part of $\ve$ defined along the real frequency axis. Equation (\ref{eq14}) is valid for permittivities that are regular at zero frequency or have a first-order pole \cite{7,35}, whereas Eq.~(\ref{eq15}) is obeyed by permittivities having a second-order pole and the residue $\omega_p^2$ at zero frequency \cite{7,37}. Here we compare the imaginary parts of the dielectric permittivities of Au found from the measured optical data \cite{8} and given by the Drude and Gurzhi models in the frequency region below 2~eV, i.e., below the first interband absorption frequency. In Figs.~\ref{fg2}(a) and \ref{fg2}(b) the imaginary part of the dielectric permittivity of Au is shown by dots using the values of the real and imaginary parts of the complex index of refraction of Au measured at frequencies above 0.125~eV \cite{8}.
The solid and dashed lines demonstrate the imaginary part of the dielectric permittivity of Au given by the Gurzhi (\ref{eq2}) and Drude (\ref{eq1}) models, respectively. In Fig.~\ref{fg2}(a) the experimental values of the parameters $\hbar\omega_p=8.68$~eV and $\hbar\gamma_{ep}(T=295\,\mbox{K})=30.3$~meV \cite{30} have been used. These parameters, however, are sample-dependent \cite{19}. Because of this, in Fig.~\ref{fg2}(b) ${\rm Im}\ve$ given by the Gurzhi and Drude models is plotted using the values $\hbar\omega_p=9.0$~eV and $\hbar\gamma_{ep}(T=295\,\mbox{K})=35.0$~meV, which were found most appropriate for the Au films employed in precise measurements of the Casimir force \cite{9,10,11,12,13,14,15,16,17,18,18a,24}. As is seen in both Figs.~\ref{fg2}(a) and \ref{fg2}(b), the Gurzhi model reproduces the optical data better than the Drude model over the frequency region from 0.3 to 2~eV. From the comparison of Fig.~\ref{fg2}(a) and Fig.~\ref{fg2}(b) it is seen also that the values of $\omega_p$ and $\gamma$ used in experiments on measuring the Casimir force result in a better agreement between ${\rm Im}\ve$ obtained from the optical data and from the Gurzhi and Drude models than the values of Ref.~\cite{30}. This result is readily illustrated by comparing the insets in Figs.~\ref{fg2}(a) and \ref{fg2}(b), where the frequency region from 0.06 to 0.2~eV is shown on an enlarged scale. {}From the inset in Fig.~\ref{fg2}(a) one can see that there is a discontinuity between the values of ${\rm Im}\ve$ found from the optical data at the minimum frequency where they are available and from the analytic models. By contrast, in the inset to Fig.~\ref{fg2}(b) there is a smooth extrapolation of the optical data by the Gurzhi and Drude models in the frequency region from 0 to 0.125~eV. As mentioned in Sec.~I, one of the approaches to the calculation of the Casimir force consists in using the optical data for ${\rm Im}\ve$ over the entire frequency range where they are available (from 0.125 to 9919~eV), extrapolated below 0.125~eV (i.e., in the region from 0 to 0.125~eV) by means of the imaginary part of the Drude model [i.e., by the dashed line in Fig.~\ref{fg2}(b)]. In doing so, the values of $\ve(i\xi)$ are obtained from Eq.~(\ref{eq14}) (there is no need to extrapolate the optical data to the region above 9919~eV). This is the so-called Drude model approach to calculating the Casimir force, which takes into account all real processes involving conduction and core electrons at frequencies above 0.125~eV and the electron-phonon interaction occurring at lower frequencies. Another approach to calculating the Casimir force uses ${\rm Im}\ve$ determined by the optical data of Au only over the frequency range from 2 to 9919~eV related to interband transitions. It is assumed that ${\rm Im}\ve=0$ within the frequency region from 0 to 2~eV, i.e., all the processes involving conduction electrons are disregarded. It is assumed also that the dielectric permittivity has a pole of second order and a residue equal to $\omega_p^2$ at zero frequency. Then the dielectric permittivity along the imaginary frequency axis is found from Eq.~(\ref{eq15}). This is called the plasma model approach to the calculation of the Casimir force. As described in Sec.~I, the Casimir puzzle lies in the fact that all precise experiments on measuring the Casimir force at separations below $1~\mu$m exclude the Drude model approach just described and are in good agreement with the plasma model one (see also Sec.~IV).
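Both prescriptions are simple to implement once ${\rm Im}\,\ve(\omega)$ has been assembled from the tabulated data and a low-frequency extrapolation. The following schematic sketch is ours: the file name, the grids, and the Drude parameters \texttt{wp} and \texttt{gam} are placeholders, and the trapezoidal rule stands in for whatever quadrature one prefers.
\begin{verbatim}
import numpy as np

def eps_imag_axis(xi, omega, im_eps):
    """eps(i xi) from tabulated Im eps(omega) via the Kramers-Kronig
    transform; omega and xi must be in the same frequency units."""
    integrand = omega * im_eps / (omega**2 + xi**2)
    # for the plasma model approach one adds wp**2/xi**2 on top
    return 1.0 + (2.0 / np.pi) * np.trapz(integrand, omega)

# Hypothetical usage: tabulated data above 0.125 eV plus a Drude tail below.
# w_tab, im_tab = np.loadtxt('au_optical_data.dat', unpack=True)
# w_low  = np.linspace(1e-5, 0.125, 2000)
# im_low = wp**2 * gam / (w_low * (w_low**2 + gam**2))  # Im eps of Drude
# eps_l  = eps_imag_axis(xi_l, np.concatenate([w_low, w_tab]),
#                        np.concatenate([im_low, im_tab]))
\end{verbatim}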
In the frequency region where the Gurzhi dielectric permittivity is in reasonably good agreement with the optical data (i.e., below 2~eV), it could also be applied for calculating the Casimir force. For this purpose, at $\hbar\omega>2$~eV ${\rm Im}\ve$ is obtained from the optical data, and at $\hbar\omega<2$~eV it is given by the imaginary part of the Gurzhi model. Then the dielectric permittivity along the entire imaginary frequency axis is found from Eq.~(\ref{eq14}). This could be called the Gurzhi model approach to the Casimir force. In the frequency region from 0.125 to 2~eV it takes into account analytically both the electron-phonon and electron-electron interactions (which are taken into account via the optical data in the Drude model approach). Both of these processes are accounted for also at $\hbar\omega<0.125$~eV [see the solid line in Fig.~\ref{fg2}(b)], whereas the Drude model approach disregards the electron-electron scattering within this frequency region. At the end of this section, we particularly emphasize that the type of singularity of $\ve$ at zero frequency has a profound effect on its values at pure imaginary frequencies. As shown in Ref.~\cite{38}, the behavior of $\ve(i\xi)$ over the entire axis $0<\xi<\infty$ can be found theoretically using the available optical data for the complex index of refraction without any extrapolation. This is achieved through the application of the so-called weighted Kramers-Kronig transform, which suppresses the contribution of the frequency regions where the optical data are not available, and assumes the presence of either a first- or a second-order pole of $\ve$ at zero frequency. \section{Calculation of the Casimir pressure in different approaches and comparison with experiments} The Casimir pressure between two parallel metallic plates of more than 100~nm thickness at temperature $T$, separated by a vacuum gap of width $a$, is the same as between two semispaces. It is given by the Lifshitz formula \cite{1,2,3,4,5,7} \begin{eqnarray} && P(a,T)=-\frac{k_BT}{\pi}\sum_{l=0}^{\infty}{\vphantom{\sum}}^{\prime} \int_0^{\infty}\!\!\!q_lk_{\bot}dk_{\bot} \nonumber \\ &&~~~~~~~\times \sum_{\alpha} \left[r_{\alpha}^{-2}(i\xi_l,k_{\bot})e^{2aq_l}-1\right]^{-1}, \label{eq16} \end{eqnarray} \noindent where the prime on the first summation sign corresponds to dividing the term with $l=0$ by 2, $k_{\bot}$ is the magnitude of the projection of the wave vector on the plane of the plates, $\alpha$ implies a summation over the transverse magnetic ($\alpha={\rm TM}$) and transverse electric ($\alpha={\rm TE}$) polarizations of the electromagnetic field, $\xi_l=2\pi k_BTl/\hbar$ with $l=0,\,1,\,2,\,\ldots$ are the Matsubara frequencies, and $q_l=(k_{\bot}^2+{\xi_l^2}/{c^2})^{1/2}$. The reflection coefficients in Eq.~(\ref{eq16}) are defined as \begin{equation} r_{\rm TM}(i\xi_l,k_{\bot})=\frac{\ve_lq_l-k_l}{\ve_lq_l+k_l}, \quad r_{\rm TE}(i\xi_l,k_{\bot})=\frac{q_l-k_l}{q_l+k_l}, \label{eq18} \end{equation} \noindent where \begin{equation} \ve_l\equiv\ve(i\xi_l), \quad k_l=\left(k_{\bot}^2+\ve_l\frac{\xi_l^2}{c^2}\right)^{1/2}. \label{eq19} \end{equation} \noindent {}From Eq.~(\ref{eq4}) it is seen that at the first Matsubara frequency one has $\gamma_{ee}(i\xi_1,T)=0$. This is the so-called {\it first-Matsubara-frequency} rule \cite{27}.
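A direct transcription of Eq.~(\ref{eq16}) into code is also compact. The sketch below is ours and fixes conventions only: \texttt{eps\_of\_xi} is any function returning $\ve(i\xi)$ (for instance, the output of the Kramers-Kronig transform above), the $l=0$ term is approximated by evaluating at a very small $\xi$ in place of the analytic limit, and the $k_\bot$ integration is cut off where the exponential suppression makes the integrand negligible.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K
c    = 2.99792458e8      # m/s

def matsubara_term(a, xi, eps):
    """k_perp integral of the Lifshitz formula at one Matsubara
    frequency xi (rad/s); eps = eps(i xi) of the plates."""
    def integrand(kp):
        q = np.sqrt(kp**2 + (xi / c)**2)
        k = np.sqrt(kp**2 + eps * (xi / c)**2)
        rTM = (eps * q - k) / (eps * q + k)
        rTE = (q - k) / (q + k)
        s = 0.0
        for r in (rTM, rTE):
            if r != 0.0:
                s += 1.0 / (np.exp(2 * a * q) / r**2 - 1.0)
        return q * kp * s
    return quad(integrand, 0.0, 100.0 / a, limit=400)[0]

def casimir_pressure(a, T, eps_of_xi, l_max):
    """Casimir pressure (Pa) between two thick parallel plates."""
    xi1 = 2.0 * np.pi * kB * T / hbar   # first Matsubara frequency
    P = 0.5 * matsubara_term(a, 1e-8 * xi1, eps_of_xi(1e-8 * xi1))  # l = 0
    for l in range(1, l_max + 1):
        P += matsubara_term(a, l * xi1, eps_of_xi(l * xi1))
    return -(kB * T / np.pi) * P
\end{verbatim}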
We have calculated the ratio of the Casimir pressure (\ref{eq16}) at $T=295~$K to that between two ideal metal plates at zero temperature, $P_{0}(a)=-{\pi^2\hbar c}/(240a^4)$, as a function of the separation between the plates, using the three theoretical approaches described at the end of Sec.~III, i.e., the plasma, Drude, and Gurzhi model approaches. The computational results are presented in Fig.~\ref{fg3} by the three solid lines counted from top to bottom. These lines are obtained by using extrapolations of the optical data below 2~eV by means of the plasma model with $\hbar\omega_p=9.0$~eV, below 0.125~eV by means of the Drude model with $\hbar\omega_p=9.0$~eV, $\hbar\gamma_{ep}=35$~meV, and below 2~eV by means of the Gurzhi model with $\hbar\omega_p=8.68$~eV, $\hbar\gamma_{ep}(T=295\,\mbox{K})=30.3$~meV, respectively. The dashed line is found by extrapolating the optical data below 2~eV using the Gurzhi model with $\hbar\omega_p=9.0$~eV and $\hbar\gamma_{ep}=35$~meV. As is seen in Fig.~\ref{fg3}, the Drude and Gurzhi approaches lead to rather close results for the Casimir pressure, especially if the Gurzhi model employs the same Drude parameters as the Drude model (see the dashed line). At the same time, the computational results found within the plasma model approach are quite different. This is explained by the distinct behaviors of the dielectric permittivities at zero frequency (the first-order pole in the cases of the Drude and Gurzhi models and the second-order one for the plasma model). To quantify the role of the optical data below 2~eV and at higher frequencies in the region of absorption bands, in Table~I we present several computational results found with the partial or total exclusion of the optical data in favor of the simple plasma or Drude models. Columns 3, 5, and 8 of Table~I contain the magnitudes of the Casimir pressure computed using the plasma, Drude, and Gurzhi model approaches, respectively, at the separation distances indicated in column~1. These computations are performed with the optical data, as described in the explanations of Fig.~\ref{fg3}, with the Drude parameters $\hbar\omega_p=9.0$~eV and $\hbar\gamma_{ep}=35$~meV. In column 2 of Table~I, we present the mean measured magnitudes of the Casimir pressure and their total experimental errors determined at the 95\% confidence level from the results of Refs.~\cite{11,12}. In columns 4 and 7, the magnitudes of the Casimir pressure computed using the simple plasma and Drude models are presented, respectively, i.e., \begin{equation} \ve_p(i\xi_l)=1+\frac{\omega_p^2}{\xi_l^2}, \quad \veD(i\xi_l)=1+\frac{\omega_p^2}{\xi_l[\xi_l+\gamma_{ep}(T)]}. \label{eq21} \end{equation} \noindent Finally, column 6 contains the computational results using the optical data of Au at $\hbar\omega>2$~eV and the simple Drude model at $\hbar\omega\leq 2$~eV. We note that computations of the Casimir pressure using the simple Gurzhi model (\ref{eq2}) applied over the entire frequency range would be inconsistent. The reason is that in the region from $\hbar\xi^{(1)}=63.3217$~eV to $\hbar\xi^{(0)}=64.47$~eV one has $\veG(i\xi)<0$ (see Sec.~II). The width of the frequency interval where $\veG(i\xi)$ is negative is almost independent of the values of the Drude parameters. For instance, for $\hbar\omega_p=9.0$~eV and $\hbar\gamma_{ep}=35$~meV one obtains $\hbar\xi^{(1)}=63.3261$~eV in place of $63.3217$~eV. As a result, at room temperature ($T=295~$K) one finds that $\veG(i\xi_l)$ with $397\leq l\leq 403$ takes negative values.
This leads to complex $k_l$ in Eq.~(\ref{eq19}) within some interval of $k_{\bot}$ and, finally, to complex reflection coefficients and Casimir pressures in Eq.~(\ref{eq16}). It may be argued that the Matsubara terms with such high $l$ do not contribute to the pressure at the separations considered. The presence of complex-valued terms (even though they are negligibly small in magnitude) is, however, quite impermissible theoretically. Furthermore, at sufficiently short separations between the plates it is necessary to take into account a much larger number of Matsubara terms in order to calculate the Casimir pressure with sufficient precision. Usually one should include all terms up to $15\omega_c=15c/(2a)$ \cite{7}. As a result, at $a=15~$nm the first 650 Matsubara terms should be included at room temperature. This makes it apparent that the simple Gurzhi model cannot be used over the entire frequency range, not only theoretically but from the practical standpoint as well. Now we discuss the correlation between the magnitudes of the Casimir pressure in columns 3--8 of Table~I. We note that these values are burdened by errors of approximately 0.5\% determined by inaccuracies in the optical data and in the values of the parameters of the models used \cite{7}. The interrelationship between the values in columns 3, 5, and 8 (obtained using the plasma, Drude, and Gurzhi approaches, respectively) is the same as already discussed above for the respective lines in Fig.~\ref{fg3}. By comparing column 4 with column 3, it is seen that the use of the simple plasma model (column 4) results in slightly smaller magnitudes of the Casimir pressure, and the impact of the optical data becomes more pronounced with decreasing separation between the plates. If one uses the simple Drude model at all frequencies below 2~eV (column 6), slightly smaller magnitudes of the Casimir pressure are obtained as compared to column 5, where the optical data are extrapolated down to zero frequency by the simple Drude model in the region $\hbar\omega<0.125$~eV. When the simple Drude model is applied over the entire frequency axis (column 7), even smaller values for the magnitudes of the Casimir pressure are obtained. With increasing separation, however, the differences between the Casimir pressures in columns 5, 6, and 7 become negligibly small, which reflects the decreasing impact of the optical data in the region of absorption bands on the computational results. Note also that the Gurzhi model approach to the calculation of the Casimir force (column 8) leads to almost the same (but slightly larger) pressure magnitudes as those in column 6 obtained using the simple Drude model below 2~eV and the optical data at all higher frequencies. A comparison between the Casimir pressures found with the Gurzhi model approach (column 8) and by means of the Drude model below 2~eV (column 6) allows an estimation of the role of electron-electron scattering in the Casimir interaction. At $a=0.2~\mu$m it contributes only about 0.16\% of the pressure, and its contribution decreases with increasing separation. By comparing the experimental Casimir pressures in column 2 with the theoretical ones in columns 3--8, one can conclude that within the limits of the experimental and theoretical errors the measurement data are in agreement with the theoretical predictions made using the plasma model approach and exclude the predictions of all other approaches.
This conclusion can be made quantitative by taking into account that all precise experiments on measuring the Casimir interaction have been performed in the sphere-plate geometry (rather than in the plate-plate one) and that the test bodies have some surface roughness, which is not taken into account in the theoretical results of columns 3--8. In the experiment of Refs.~\cite{11,12}, performed by means of a micromechanical oscillator, the immediately measured quantity was the gradient of the Casimir force acting between a sphere of $R=150~\mu$m radius and a plate. This quantity can be recalculated into the magnitude of the Casimir pressure between two parallel plates, presented in column 2 of Table~I, using the proximity force approximation \cite{1,7} \begin{equation} |P(a,T)|=\frac{1}{2\pi R}\,\frac{\partial F_{sp}(a,T)}{\partial a}. \label{eq22} \end{equation} \noindent The relative corrections to the approximate expression (\ref{eq22}), which are less than $a/R$ \cite{21,22}, are negligibly small in this experiment. Small corrections due to the surface roughness have been taken into account perturbatively \cite{1,7,39} in the Casimir pressure $P_{\rm theor}$ (note that the surface roughness plays a more important role at very short separations between the test bodies \cite{40}). In Fig.~\ref{fg4}, we plot the differences between the theoretical Casimir pressures computed using the plasma, Drude, and Gurzhi model approaches and the mean experimental pressures measured in Refs.~\cite{11,12} (three sets of dots counted from bottom to top, respectively) as functions of separation. The Drude parameters in the Gurzhi model are chosen as (a) $\hbar\omega_p=8.68$~eV and $\hbar\gamma_{ep}=30.3$~meV and (b) $\hbar\omega_p=9.0$~eV and $\hbar\gamma_{ep}=35$~meV. The solid lines are formed by the borders of the confidence intervals found at each separation by combining the total experimental and theoretical errors determined at the 95\% confidence probability. As is seen in Figs.~\ref{fg4}(a) and \ref{fg4}(b), both the Drude model approach and the Gurzhi model one used with either set of the Drude parameters are excluded by the measurement data at the 95\% confidence level, whereas the plasma model approach is experimentally consistent. We also compare the theoretical predictions of all three approaches with the recently measured gradient of the Casimir force acting between the Au-coated surfaces of a sphere and a plate refined by means of UV and Ar-ion cleaning \cite{18}. In this experiment, performed by means of an atomic force microscope, the sphere radius was reduced to $R=43~\mu$m, and the corrections due to the use of the proximity force approximation have been taken into account through the results of Ref.~\cite{22}. The corrections due to the surface roughness were also included in the theoretical gradients of the Casimir force \cite{18}. In Fig.~\ref{fg5} the differences between the theoretical gradients of the Casimir force computed within the plasma, Drude, and Gurzhi model approaches and the mean experimental gradients \cite{18} (the sets of dots counted from top to bottom, respectively) are shown as functions of separation. The Drude parameters in the Gurzhi model are again chosen as (a) $\hbar\omega_p=8.68$~eV and $\hbar\gamma_{ep}=30.3$~meV and (b) $\hbar\omega_p=9.0$~eV and $\hbar\gamma_{ep}=35$~meV. The solid lines indicate the borders of the confidence intervals determined in this experiment at the 67\% confidence probability by combining the total experimental and theoretical errors.
{}From Figs.~\ref{fg5}(a) and \ref{fg5}(b) it is seen that both the Drude model approach and the Gurzhi model approach are excluded by the measurement data, which are consistent with the plasma model approach to the calculation of the Casimir force. \section{Conclusions and discussion} In the foregoing, we have discussed the extended Drude model or, as it is also named, the Gurzhi model, which describes the relaxation properties of conduction electrons originating from electron-phonon and electron-electron scattering. Although this model is often used in condensed matter physics and, specifically, in the theory of high-temperature superconductors, its applications in the theory of Casimir forces had not been considered so far. Taking into account that the Casimir puzzle has remained unsolved for 20 years already (see Sec.~I), an investigation of possible extensions of the Drude model in connection with the Lifshitz theory is a subject of much current interest. We have considered the analytic properties of the dielectric permittivity of the Gurzhi model. It is shown that this permittivity has a first-order pole in the upper half-plane of complex frequencies and, thus, violates the causality principle. Additionally, within some interval along the pure imaginary frequency axis, the Gurzhi dielectric permittivity takes negative values. One thus concludes that for calculating the Casimir force it can be used only in the frequency region below the absorption bands of a metal, in combination with the dielectric permittivity obtained from the measured optical data at higher frequencies. Next, we have considered the imaginary part of the dielectric permittivity of the Gurzhi model for Au at frequencies below 2~eV, which can be used to calculate the Casimir pressure by means of the Lifshitz formula. It was found to be in closer agreement with ${\rm Im}\ve$ obtained from the optical data than the Drude model, and it leads to almost the same extrapolation down to zero frequency, i.e., to the region where the optical data are not available. The concept of the Gurzhi model approach to the Casimir force is introduced by analogy with the Drude and plasma model approaches, using the respective models combined with the optical data. As discussed in Sec.~I, the two latter approaches are the subject of a considerable literature in connection with the Casimir puzzle. The Casimir pressure between two Au plates was calculated using the Drude, plasma, and Gurzhi model approaches, as well as by using the simple Drude and plasma models, and also by means of the Drude model applied in the region from zero frequency to 2~eV and supplemented by the optical data at higher frequencies. The obtained results are compared with the data of two precise experiments on measuring the Casimir interaction. The contribution of the electron-electron interaction to the Casimir force is estimated to be less than 0.16\%. The Gurzhi model approach is shown to be excluded by the measurement data, as was demonstrated earlier for the Drude model approach. An agreement of the plasma model approach with the measurement data at separations below $1~\mu$m is confirmed. Although the above results do not solve the Casimir puzzle, they attach special significance to novel experiments on measuring the Casimir force in the micrometer separation range proposed in Refs.~\cite{41,42,43}.
\section*{Acknowledgments} The work of G.L.K.~and V.M.M.~was partially supported by the Peter the Great Saint Petersburg Polytechnic University in the framework of the Program ``5--100--2020''. V.M.M.~was partially funded by the Russian Foundation for Basic Research, Grant No.\ 19-02-00453 A. His work was also partially supported by the Russian Government Program of Competitive Growth of Kazan Federal University. L.M.W.\ acknowledges financial support from the US Department of Energy under Grant No.\ DE-FG02-06ER46297.
\section{Introduction} The search for superconductivity (SC) with high critical temperature $T_c$ has been the dream of the condensed-matter community for decades. It is generally believed that the right route to seek high-$T_c$ SC (HTCS) is to acquire strong spin fluctuations via proximity to antiferromagnetically ordered phases, with the cuprates and the iron-based superconductors as two well-known examples \cite{DJScalapino12}. Along this route, a new research area has emerged recently: graphene-based SC. Among the early attempts in this area, the most famous idea might be to generate d+id HTCS \cite{Chubukov,Qianghua,Thomale} in monolayer graphene in proximity to the spin-density-wave (SDW) ordered state \cite{TaoLi,Qianghua} at quarter-doping. However, such a high doping concentration is hardly accessible in experiment. The newly discovered SC in the magic-angle-twisted bilayer graphene \cite{Cao1} in close proximity to the ``correlated insulator'' phase \cite{Cao2} opened a new era in this area. It is proposed that the ``correlated insulator'' in this material is an SDW insulator \cite{Yang, Xu}, and that the SC is driven by SDW spin fluctuations \cite{Yang,Xu,Ashvin,Fu}. However, due to the greatly reduced Fermi energy ($\approx10$ meV) in this material, the $T_c\approx 1.7$ K might not be far from its upper limit. Here we propose another graphene-based material, i.e., octagraphene \cite{Sugang}, which has a square-octagon lattice structure with each site accommodating one single $2p_z$ orbital. This system has a large Fermi energy, and we predict that slightly doping this material will induce HTCS, driven by SDW spin fluctuations. Octagraphene is a two-dimensional (2D) material formed by a monolayer of carbon atoms arranged into a square-octagon lattice as shown in Fig.~\ref{fig:lattice}. This lattice is $C_{4v}$-symmetric and each unit cell contains four sites forming a square enclosed by the dotted lines shown in Fig.~\ref{fig:lattice}. First-principles calculations indicate that such a planar structure is kinetically stable at low temperature \cite{Sugang,Pod} and that its energy is a local minimum \cite{Sugang}, which suggests that the material can potentially be synthesized in laboratories. Actually, this lattice structure has attracted a lot of research interest recently because it is not only hosted by quite a few real materials \cite{CaVO,KFeSe1,KFeSe2,YZhang} but also exhibits various intriguing phases that have been revealed by theoretical calculations \cite{Scalettar,Troyer,White,Sachdev,Zheng_Weihong,Bose,Manuel,Farnell,Kwai,Fiete,Yamashita,Yanagi,Yamada14,Wu,Iglovikov,Long_Zhang,Gong,Bao}. Here we notice another remarkable property of this 2D lattice: its band structure can have perfect Fermi-surface (FS) nesting in a wide parameter regime at half filling, which easily leads to antiferromagnetic SDW order. When the system is slightly doped, the SDW order will be suppressed and the remnant SDW fluctuation will mediate HTCS. In this paper, we study a possible pairing state in the single-orbital Hubbard model on the square-octagon lattice with only nearest-neighbor hopping terms. To treat this Hubbard model in different limits of the coupling strength, we adopt three distinct approaches, i.e., the random-phase approximation (RPA), the slave-boson mean-field (SBMF) theory, and the variational Monte Carlo (VMC) method, which are suitable for the weak, the strong, and the intermediate coupling strengths, respectively.
All three approaches consistently identify the $s^\pm$-wave pairing as the leading pairing symmetry. We propose octagraphene as a possible material realization of the model. Our VMC calculation adopting a realistic interaction strength yields a pairing gap amplitude of about 50 meV, which is comparable with that of the cuprates, implying a comparable $T_c$ between the two families. Our study also applies to other materials with similar lattice structures. \section{Material, Model, and Approaches} \begin{figure} \begin{center} \subfigure[]{\includegraphics[width=1.6in]{lattice.pdf}\label{fig:lattice}} \subfigure[]{\includegraphics[width=1.5in]{bandn100t212.pdf}\label{fig:band}} \\ \subfigure[]{\includegraphics[width=1.55in]{FSn100t212.pdf}\label{fig:FSn100}} \subfigure[]{\includegraphics[width=1.5in]{FSn110t212.pdf}\label{fig:FSn110}} \caption{(a) Sketch of the square-octagon lattice and illustration of the intrasquare nearest-neighbor hopping $t_1$ and the intersquare nearest-neighbor hopping $t_2$. The dotted square denotes the unit cell. (b) Band structure of the TB model (\ref{TB}) along the high-symmetry lines in the first Brillouin zone. Panels (c) and (d) show the FSs of the undoped and $10\%$ electron-doped cases, respectively. The site contributions on the FS sheets are shown by color: the red (green) represents that the weights contributed by the sublattices 1 and 3 (2 and 4) are dominant. The TB parameters are $t_1=1,t_2=1.2$ throughout this work.} \end{center} \end{figure} According to density-functional theory (DFT) calculations \cite{Sugang}, each carbon atom in octagraphene is $\sigma$-bonded to its three surrounding atoms via $sp^2$ hybridization. The low-energy degrees of freedom near the Fermi level are dominated by the $2p_z$ orbitals, which form $\pi$ bonds similar to those in graphene. With each carbon atom contributing one electron in one $2p_z$ orbital, the resulting band structure can be well captured by the following single-orbital TB model: \begin{eqnarray} H_{\text{TB}}=-t_1 \sum_{\langle i,j \rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + H.c. \right) - t_2 \sum_{[i,j],\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + H.c. \right).\nonumber\\\label{TB} \end{eqnarray} Here $c^{\dagger}_{i\sigma} \left(c_{i\sigma}\right)$ creates (annihilates) an electron with spin $\sigma$ at site $i$. The terms with coefficients $t_1$ ($\approx 2.5$ eV) and $t_2$ ($\approx 2.9$ eV) describe the intrasquare nearest-neighbor ($NN$) and intersquare $NN$ hoppings, respectively, as shown in Fig.~\ref{fig:lattice}. In the following, we set $t_1$ as the energy unit and $t_2/t_1=1.2$. The band structure of this TB model along the high-symmetry lines in the first Brillouin zone is presented in Fig.~\ref{fig:band}. In the half-filled case, the bands $\varepsilon_2(\mathbf{k})$ and $\varepsilon_3(\mathbf{k})$ cross the Fermi level to form a hole pocket ($\alpha$) centered around the $\Gamma$ point and an electron pocket ($\beta$) centered around the $M$ point, as shown in Fig.~\ref{fig:FSn100}. The red (green) color indicates that sites 1 and 3 (2 and 4) dominate the band weights. Remarkably, the two pockets are identical, connected by the perfect nesting vector $\mathbf{Q}=(\pi,\pi)$. Such perfect FS nesting is robust at half filling in the parameter regime $0 < \left|\frac{t_2}{t_1}\right| \le 2$, where the FS exists.
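To make the TB model concrete, the following minimal Python sketch diagonalizes a $4\times4$ Bloch Hamiltonian for model (\ref{TB}). The sublattice labeling and phase convention adopted here are one common choice and may differ from those behind Fig.~\ref{fig:band} by a gauge transformation; with this convention one can check numerically that the middle two bands satisfy $\varepsilon_3(\mathbf{k}+\mathbf{Q})=-\varepsilon_2(\mathbf{k})$, i.e., the perfect nesting discussed above.
\begin{verbatim}
import numpy as np

t1, t2 = 1.0, 1.2   # intrasquare and intersquare NN hoppings

def bloch_h(kx, ky):
    # Sites 0..3 are the corners of the small square; each bond is set
    # once and the matrix is then Hermitized.
    h = np.zeros((4, 4), dtype=complex)
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:  # intrasquare t1 bonds
        h[i, j] = -t1
    h[0, 2] += -t2 * np.exp(1j * kx)    # intersquare bond along +x
    h[1, 3] += -t2 * np.exp(1j * ky)    # intersquare bond along +y
    return h + h.conj().T

# Bands along Gamma -> X -> M -> Gamma.  At half filling the two lowest
# bands are filled, and the middle two bands cross the Fermi level,
# giving the hole pocket around Gamma and the electron pocket around M.
path = [(0.0, 0.0), (np.pi, 0.0), (np.pi, np.pi), (0.0, 0.0)]
for (ax, ay), (bx, by) in zip(path, path[1:]):
    for s in np.linspace(0.0, 1.0, 40, endpoint=False):
        kx, ky = ax + s * (bx - ax), ay + s * (by - ay)
        print(round(kx, 3), round(ky, 3),
              np.round(np.linalg.eigvalsh(bloch_h(kx, ky)), 3))
\end{verbatim}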
However, upon doping, the perfect FS nesting is broken, leaving a remnant nesting at a nesting vector shifted from $\mathbf{Q}$, as shown in Fig.~\ref{fig:FSn110}. Due to the screening effect in the doped compound, the strong Coulomb repulsions between the $2p_z$ electrons in the graphene-based material can be approximated as the Hubbard interaction \cite{Neto}. Therefore, we obtain the following well-known (repulsive) Hubbard model: \begin{equation} H =H_{\text{TB}}+H_{\text{int}}=H_{\text{TB}}+U \sum_i \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}.\label{model} \end{equation} Although there is a rough estimate of $U\approx10$ eV for the graphene-based material, an accurate value of $U$ is hard to obtain \cite{Neto}. Therefore, in the following, we first employ three different approaches, i.e., the RPA, the SBMF, and the VMC, to treat the model in different limits of $U$ and check the $U$-dependence of the pairing symmetry. As we shall see, they yield consistent results. Then, we fix $U=10$ eV and adopt the VMC approach, which is suitable for this $U$, to estimate $T_c$. \section{Theoretical solutions and numerical results} \subsection{Results for the random-phase approximation} We adopt the standard multi-orbital RPA approach \cite{KKubo07,SGraser09,QLLuo10,TAMaier11,FengLiu13,TXMa14,XXWu15,LDZhang15,HKontani98,HKondo01,KKuroki02} to treat the weak-coupling limit of the model (\ref{model}). Strictly speaking, this is an ``intra-unit-cell multisite model'' without orbital degrees of freedom, which is simpler because of the absence of inter-orbital Coulomb interactions and Hund's coupling. This approach handles the interactions at the RPA level, from which we determine the properties of the magnetism and the SC for interactions above or below the critical interaction strength $U_c$, respectively. Generally, the RPA approach only works well for weak-coupling systems. Let us define the following bare susceptibility for $U=0$: \begin{align} \chi^{(0)l_1l_2}_{l_3l_4} \left(\mathbf{q},i\omega_n\right) \equiv \frac{1}{N} \int_0^{\beta} d\tau e^{i\omega_n\tau} \sum_{\mathbf{k}_1\mathbf{k}_2} \big\langle T_{\tau} c^{\dagger}_{l_1}(\mathbf{k}_1,\tau) \nonumber \\ \times c_{l_2}(\mathbf{k}_1+\mathbf{q},\tau) c^{\dagger}_{l_3}(\mathbf{k}_2+\mathbf{q},0) c_{l_4}(\mathbf{k}_2,0) \big\rangle_0. \end{align} Here $l_i$ ($i=1,\ldots,4$) denote the sublattice indices. The largest eigenvalue $\chi(\mathbf{q})$ of the static susceptibility matrix $\chi^{(0)}_{lm}(\mathbf{q}) \equiv \chi^{(0)l,l}_{m,m}(\mathbf{q},i\omega=0)$ for each $\mathbf{q}$ represents the eigensusceptibility in the strongest channel, while the corresponding eigenvector $\xi(\mathbf{q})$ provides information on the fluctuation pattern within the unit cell. The information about the distribution of $\chi(\mathbf{q})$ over the Brillouin zone, as well as the fluctuation pattern for the peak momentum, is shown in Fig.~\ref{fig:chi} for different dopings. Figure \ref{fig:chi0x00} illustrates the distribution of $\chi(\mathbf{q})$ over the Brillouin zone for the undoped case, which sharply peaks at $\mathbf{Q}=(\pi,\pi)$, reflecting the perfect FS nesting at that wave vector, as shown in Fig.~\ref{fig:FSn100}. On the other hand, the eigenvector $\xi(\mathbf{Q})=(\frac{1}{2},-\frac{1}{2},\frac{1}{2},-\frac{1}{2})$ reflects the intra-unit-cell fluctuation pattern, which is shown in Fig.~\ref{fig:magnetism} together with the inter-unit-cell pattern for this momentum; together they suggest a N\'eel pattern.
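To illustrate why $\chi(\mathbf{q})$ peaks at $\mathbf{Q}=(\pi,\pi)$ at half filling, the sketch below evaluates a simplified static Lindhard function on a coarse $\mathbf{k}$-grid, working in the band basis and dropping the sublattice matrix elements entering the full $\chi^{(0)l_1l_2}_{l_3l_4}$; it is therefore only a qualitative stand-in for the quantity plotted in Fig.~\ref{fig:chi}.
\begin{verbatim}
import numpy as np

t1, t2, T, mu, L = 1.0, 1.2, 0.05, 0.0, 24
# mu = 0 at half filling: the two middle bands are particle-hole
# symmetric about zero energy for this model.

def bloch_h(kx, ky):
    h = np.zeros((4, 4), dtype=complex)
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
        h[i, j] = -t1
    h[0, 2] += -t2 * np.exp(1j * kx)
    h[1, 3] += -t2 * np.exp(1j * ky)
    return h + h.conj().T

ks = 2 * np.pi * np.arange(L) / L
eps = np.array([[np.linalg.eigvalsh(bloch_h(kx, ky)) for ky in ks]
                for kx in ks])                 # shape (L, L, 4)
nF = lambda e: 1.0 / (np.exp((e - mu) / T) + 1.0)

def chi0(qi, qj):   # band-basis Lindhard function at q = 2*pi*(qi,qj)/L
    total = 0.0
    for i in range(L):
        for j in range(L):
            e1 = eps[i, j][:, None]
            e2 = eps[(i + qi) % L, (j + qj) % L][None, :]
            de = e1 - e2
            total += np.sum((nF(e2) - nF(e1)) /
                            np.where(np.abs(de) > 1e-9, de, 1e-9))
    return total / L**2

print("chi0 at (pi,pi):  ", chi0(L // 2, L // 2))
print("chi0 at (pi/2,0): ", chi0(L // 4, 0))   # markedly smaller
\end{verbatim}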
Upon doping, the peak in the distribution of $\chi(\mathbf{q})$ splits into four and deviates from $\mathbf{Q}=(\pi,\pi)$ to $\mathbf{Q}_{x}=(\pi\pm\delta,\pi\pm\delta)$, as shown in Fig.~\ref{fig:chi0x10} for $x=10\%$ electron doping as an example. The relation between $\delta$ and $x$ shown in Fig.~\ref{fig:delta} suggests a linear dependence, revealing an incommensurate inter-unit-cell fluctuation pattern, just like the Yamada relation in the cuprates \cite{Yamada98}. In the meantime, the eigenvectors $\xi(\mathbf{Q}_{x})$ remain nearly unchanged, and thus the intra-unit-cell fluctuation pattern is still approximately described by Fig.~\ref{fig:magnetism}. \begin{figure} \flushleft \subfigure[]{\includegraphics[width=1.63in]{chi0n100t212.pdf}\label{fig:chi0x00}} \subfigure[]{\includegraphics[width=1.63in]{chi0n110t212.pdf}\label{fig:chi0x10}} \\ \hspace{0.02in} \subfigure[]{\includegraphics[width=1.35in]{delta_t212.pdf}\label{fig:delta}} \hspace{0.25in} \subfigure[]{\includegraphics[width=1.45in]{magnetism.pdf}\label{fig:magnetism}} \caption{{Panels (a) and (b) show the $\bf{q}$ dependence of the eigensusceptibilities $\chi(\mathbf{q})$ in the first Brillouin zone, corresponding to the undoped and $10\%$ electron-doped compounds, respectively. The temperature is set as $T = 0.001$. (c) The incommensurability $\delta$ as a function of doping $x$. (d) The AFM-ordered spin pattern in octagraphene.} \label{fig:chi}} \end{figure} For $U>0$, we obtain the following renormalized spin (s) and charge (c) susceptibilities at the RPA level, \begin{align} \chi^{(s/c)}\left(\mathbf{q},i\omega_n \right) = \left[I \mp \chi^{(0)}\left(\mathbf{q},i\omega_n\right)U\right]^{-1} \chi^{(0)}\left(\mathbf{q},i\omega_n\right). \label{eq:RPA} \end{align} Here $\chi^{(s/c)}\left(\mathbf{q},i\omega_n\right)$, $\chi^{(0)}\left(\mathbf{q},i\omega_n\right)$ and $U$ are used as $4^2 \times 4^2$ matrices and $I$ is the unit matrix. In our model, $U^{l_1 l_2}_{l_3 l_4} = U \delta_{l_1=l_2=l_3=l_4}$. For $U>0$, the spin fluctuations dominate the charge fluctuations; thus the fluctuation pattern illustrated in Fig.~\ref{fig:magnetism} actually describes the spin fluctuations. Note that the RPA approach only works for $U<U_c$, with the critical interaction strength $U_c$ determined by $\det\left[I - \chi^{(0)}\left(\mathbf{q},0\right)U\right]=0$. For $U>U_c$ the spin susceptibility diverges, which suggests that long-range SDW order with the pattern shown in Fig.~\ref{fig:magnetism} emerges. The doping dependence of $U_c$ is shown in Fig.~\ref{fig:Uc}, where one finds $U_c=0$ for $x=0$ due to the perfect FS nesting, which means that an arbitrarily weak repulsive interaction will cause SDW order. For $x>0$, we have $U_c>0$. In such cases, the SDW order persists in some doping regime where $U_c<U$, but with the wave vector shifting to incommensurate values $\mathbf{Q}_{x}=(\pi\pm\delta,\pi\pm\delta)$. \begin{figure} \centering \subfigure[]{\includegraphics[height=1.3in]{Uc.pdf}\label{fig:Uc}} \subfigure[]{\includegraphics[height=1.3in]{lambda_U.pdf}\label{fig:lambda}} \\ \subfigure[]{\includegraphics[height=1.3in]{lambda_x.pdf}\label{fig:lambda_x}} \subfigure[]{\includegraphics[height=1.33in]{delta_singlet.pdf}\label{fig:pairing1}} \caption{{(a) $U_c/t_1$ as a function of the electron doping density $x$. The largest pairing eigenvalues $\lambda$ in four different pairing symmetry channels as a function of (b) $U/t_1$ and (c) $x$. (d).
The {\bf k}-dependent superconducting order parameter $\Delta_{\alpha}({\mathbf{k}})$ projected onto the FS for the leading $s^\pm$-wave pairing. The doping density for panels (b) and (d) is $x=10\%$. The interaction parameter adopted is $U=1.8t_1$.}} \end{figure} When the doping concentration $x$ further increases so that $U<U_c$, the long-range SDW order is suppressed. In such a parameter regime, the remnant SDW fluctuation will mediate an effective pairing potential $V^{\alpha\beta}(\mathbf{k},\mathbf{k}^\prime)$ \cite{FengLiu13,XXWu15} between the Cooper pairs. Then we can solve the following linearized gap equation to determine the leading pairing symmetry: \begin{align} -\frac{1}{(2\pi)^2}\sum_\beta \oint_{FS} d\mathbf{k}^\prime_{\parallel} \frac{V^{\alpha\beta}(\mathbf{k},\mathbf{k}^\prime)}{v^\beta_F(\mathbf{k}^\prime)} \Delta_\beta(\mathbf{k}^\prime) = \lambda\Delta_\alpha(\mathbf{k}). \label{eq:gap} \end{align} Here $v^\beta_F(\mathbf{k})$ is the Fermi velocity and $\mathbf{k}^\prime_{\parallel}$ denotes the component along the FS. The pairing eigenvalue $\lambda$ is related to $T_c$ through $T_c\approx W_{D} e^{-1/\lambda}$, with the ``Debye frequency'' $W_D$ of the spin fluctuations being about an order of magnitude lower than the bandwidth, and the pairing symmetry is determined by the eigenfunction $\Delta_\alpha(\mathbf{k})$ corresponding to the largest $\lambda$. \begin{figure}[htbp] \centering \subfigure[]{\includegraphics[height=1.25in]{SBMF-a.pdf}\label{SBMF-a}} \hspace{0.05in} \subfigure[]{\includegraphics[height=1.25in]{SBMF-b.pdf}\label{SBMF-b}} \\ \subfigure[]{\includegraphics[height=1.3in]{SBMF-c.pdf}\label{SBMF-c}} \subfigure[]{\includegraphics[height=1.25in]{SBMF-d.pdf}\label{SBMF-d}} \caption{(color online). The SBMF results. (a) Doping dependence of the energy (per unit cell) difference between the $s$-wave pairing and the $d$-wave one, $\Delta E\equiv E_s-E_d$, in units of $t_1$. (b) Doping dependence of the four SBMF order parameters for the $s$-wave solution. (c) The $s$-wave gap function projected on the FS. (d) Doping dependence of the superconducting order parameter.} \label{SBMF} \end{figure} The $U$-dependence of the largest $\lambda$ for each pairing symmetry is shown in Fig.~\ref{fig:lambda} for a typical doping $x=10\%$. Obviously, $\lambda$ grows rapidly with increasing $U$ due to the enhancement of spin fluctuations. The leading pairing symmetry turns out to be the $s$-wave. In Fig.~\ref{fig:lambda_x}, the doping dependence of the largest $\lambda$ for each pairing symmetry is shown for a typical $U=1.8t_1$. After a prompt drop near the critical doping (about $\pm5\%$), the $\lambda$'s for the four pairing symmetries vary smoothly over a wide doping range up to $20\%$, within which the $s$-wave SC dominates all the other pairings. Figures \ref{fig:lambda} and \ref{fig:lambda_x} illustrate the robustness of the $s$-wave SC against parameter variations. The $C_{4v}$-symmetric distribution of the pairing gap function $\Delta({\mathbf{k}})$ of the obtained $s$-wave SC is shown on the FS in Fig.~\ref{fig:pairing1}. Remarkably, this gap function keeps the same sign within each pocket and changes sign between the two pockets. Therefore, we have established here a one-orbital realization of the standard $s^\pm$ SC, which was previously realized in the multi-orbital Fe-based superconductor family.
Note that the interaction parameter $U=1.8t_1\approx4.5$ eV adopted here is considerably weaker than the realistic value of $U\approx10$ eV \cite{Neto}; due to the weak-coupling perturbative character of the RPA, it is unreasonable to adopt a stronger $U$. In the next section, we adopt the SBMF approach to treat the strong-coupling limit. \subsection{The slave-boson mean-field results} We start from the following effective $t$-$J$ model to study the strong-coupling limit of the Hubbard model (\ref{model}): \begin{align} H=H_{\text{TB}}+J_1\sum _{\left \langle i,j \right \rangle }\bm{\widehat{S}}_{i}\cdot \bm{\widehat{S}}_{j} +J_2\sum _{\left [ i,j \right ] }\bm{\widehat{S}}_{i}\cdot \bm{\widehat{S}}_{j}.\label{tJ} \end{align} Here the intrasquare $NN$ ($J_{1}$) and intersquare $NN$ ($J_{2}$) effective superexchange coupling constants are generated in the strong-coupling limit, and they roughly satisfy $J_2/J_1\approx(t_2/t_1)^2\approx1.4$. In the following, we adopt $J_1=0.5t_1$ and $J_2=0.7t_1$. This Hamiltonian should be understood as acting on the subspace of empty (doubly-occupied) and singly-occupied sites for the hole-doped (electron-doped) system. In the SBMF approach \cite{Kotliar}, we decompose the electron operator $c_{i\sigma}$ as $c_{i\sigma}\to f_{i\sigma}b^{\dagger}_i$, with the bosonic holon (doublon) operator $b^{\dagger}_i$ and the fermionic spinon operator $f_{i\sigma}$ subject to the no-double-occupancy constraint $b^{\dagger}_ib_i+\sum_{\sigma}f^{\dagger}_{i\sigma}f_{i\sigma}=1$. This constraint is treated at the mean-field level in the SBMF, and at zero temperature the condensation of the bosonic $b^{\dagger}_i$ leads to $b^{\dagger}_i\to \sqrt{x}$, so that we are left with only the fermionic $f_{i\sigma}$ degrees of freedom. The quartic term of $f_{i\sigma}$ in $H$ is further mean-field decomposed in the following two order-parameter channels: \begin{align}\label{sce} \kappa _{(i,j)} =&\left \langle f^{\dagger}_{j\uparrow }f_{i\uparrow} \right \rangle=\left \langle f^{\dagger}_{j\downarrow }f_{i\downarrow} \right \rangle, \nonumber\\ \Delta _{(i,j)}=&\left \langle f_{j\downarrow }f_{i\uparrow}-f_{j\uparrow }f_{i\downarrow}\right \rangle. \end{align} Here we actually have two mean-field $\kappa _{(i,j)}$ ($\Delta _{(i,j)}$) parameters, i.e., $\kappa _{1}$ ($\Delta _{1}$) for intrasquare $NN$ and $\kappa _{2}$ ($\Delta _{2}$) for intersquare $NN$ $(i,j)$, respectively, which are obtained by solving the mean-field equations self-consistently. Our SBMF results are shown in Fig.~\ref{SBMF}. Here we have tried two different pairing symmetries, i.e., the $s$-wave and the $d$-wave, with their total energy difference $\Delta E\equiv E_s-E_d$ shown in Fig.~\ref{SBMF-a}, where the $s$-wave SC gains more energy and becomes the ground state. The doping dependence of the four order parameters $\kappa _{1,2}$ and $\Delta _{1,2}$ for the $s$-wave pairing is shown in Fig.~\ref{SBMF-b}, where the intersquare order parameters obviously dominate the intrasquare ones. Figure \ref{SBMF-c} shows the projection of the gap function onto the FS, where one clearly verifies the standard $s^\pm$-pairing state, which is well consistent with the gap function obtained by the RPA shown in Fig.~\ref{fig:pairing1}. The doping dependence of the superconducting order parameter $\Delta^{(c)} _{(i,j)}=\left \langle c_{j\downarrow }c_{i\uparrow}-c_{j\uparrow }c_{i\downarrow}\right \rangle=x \Delta _{(i,j)}$ is shown in Fig.~\ref{SBMF-d}, which illustrates a dome shape similar to that of the cuprates.
If we use the BCS relation $2J\Delta^{(c)}/T_c\approx3.53$ to roughly estimate $T_c$, we get the highest $T_c\approx180$ K near $x=10\%$ for our choice of $J_1$ and $J_2$. However, as the effective superexchange parameters $J_1$ and $J_2$ for the real material with intermediate $U$ are hard to estimate, the $T_c$ obtained here might not be accurate. In the following, we adopt the VMC approach to study the problem. \subsection{The variational Monte Carlo results} The above weak-coupling RPA and strong-coupling SBMF approaches consistently yield the $s^\pm$-wave pairing. However, to obtain a more reasonable estimate of $T_c$, we should adopt a realistic interaction parameter $U$. The realistic $U\approx10$ eV is comparable with the total bandwidth, and thus belongs to the intermediate-coupling regime. We therefore adopt the VMC approach here, which is suitable for intermediate coupling strengths. We adopt the following partially Gutzwiller-projected BCS wave function \cite{YangVMC} in our VMC study, \begin{align}\label{wave} \left |G \right \rangle=g^{\sum_{i} n_{i\uparrow}n_{i\downarrow}} (\sum_{\bf{k}\alpha}\frac{v^{\alpha }_{\bm{k}}}{u^{\alpha }_{\bm{k}}} c^{\dagger}_{\bf{k}\alpha\uparrow}c^{\dagger}_{\bf{-k}\alpha \downarrow})^{\frac{N_e}{2}}\left |0 \right \rangle. \end{align} Here $g\in(0,1)$ is the penalty factor for double occupancy, $N_e$ is the total number of electrons, and \begin{align*} \frac{v^{\alpha }_{\bm{k}}}{u^{\alpha }_{\bm{k}}}=\frac{\Delta^{\alpha}_{\bm{k}}}{\varepsilon_{\alpha }(\bm{k})+\sqrt{\varepsilon^2_{\alpha }(\bm{k})+\left |\Delta ^{\alpha }_{\bm{k}}\right |^{2} }}, \end{align*} where $\Delta ^{\alpha }_{\bm{k}}=\Delta ^{\alpha }f(\bm{k})$ is the superconducting gap function. Here we only consider intra-band pairing on the $\alpha=2,3$ bands crossing the FS, with $\Delta^{2}=\Delta^{3}\equiv\Delta$. The following four different form factors $f(\bm{k})$ are considered in our calculations, \begin{align}\label{factor} f(\bm{k})=\left\{\begin{matrix} \cos k_{x}+\cos k_{y}\quad &(s^\pm)\\ \cos k_{x}\cos k_{y}\quad &(s^{++})\\ \cos k_{x}-\cos k_{y}\quad &(d_{x^{2} -y^{2}})\\ \sin k_{x}\sin k_{y}\quad &(d_{xy}) \end{matrix}\right. \end{align} There are three variational parameters in our trial wave function for each pairing channel, i.e., $g$, the variational chemical potential $\mu_c$, and $\Delta$. \begin{figure} \subfigure[]{\includegraphics[width=1.68in]{ttu.pdf}\label{ttu}} \subfigure[]{\includegraphics[width=1.68in]{ttmu.pdf}\label{ttmu}} \caption{(a) The VMC results for the energy per unit cell as a function of $\Delta$ for the four different gap form factors $s^\pm$, $s^{++}$, $d_{x^{2} -y^{2}}$ and $d_{xy}$, with $g$ and $\mu_c$ optimized for each $\Delta$. (b) The $\bm{k}$-dependent superconducting order parameter $\Delta(\bm{k})$ projected on the FS for the 10\% electron-doped compound. The interaction parameter adopted is $U=4t_1=10$ eV.} \end{figure} We employ the VMC approach to calculate the expectation value $E$ of the Hubbard Hamiltonian (\ref{model}) \cite{YangVMC} and optimize the variational parameters. The $\Delta$-dependence of the energy per unit cell for each form factor is shown in Fig.~\ref{ttu} for $U=4t_1=10$~eV at a typical doping $x=10\%$, with $g$ and $\mu_c$ optimized for each $\Delta$. Note that the optimized $g=0.5475$ is almost equal to the optimized value without SC, and that $\mu_c$ is almost equal to the value obtained in the mean-field calculation.
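As a quick illustration of the four pairing channels in Eq.~(\ref{factor}), the sketch below evaluates each form factor near the centers of the two pockets; only $s^\pm$ is both sizable near each pocket and of opposite sign on the two pockets, matching the gap structure shown in Fig.~\ref{ttmu}.
\begin{verbatim}
import numpy as np

# The four gap form factors f(k) of the VMC trial state.
factors = {
    "s+-":     lambda kx, ky: np.cos(kx) + np.cos(ky),
    "s++":     lambda kx, ky: np.cos(kx) * np.cos(ky),
    "d_x2-y2": lambda kx, ky: np.cos(kx) - np.cos(ky),
    "d_xy":    lambda kx, ky: np.sin(kx) * np.sin(ky),
}
# Representative momenta near the hole pocket alpha (around Gamma)
# and the electron pocket beta (around M).
points = {"near Gamma": (0.3, 0.1), "near M": (np.pi - 0.3, np.pi - 0.1)}

for name, f in factors.items():
    vals = {p: round(float(f(*k)), 3) for p, k in points.items()}
    print(f"{name:8s}", vals)
# s+- gives ~ +1.95 near Gamma and ~ -1.95 near M: the same sign within
# each pocket, opposite signs between the two pockets.
\end{verbatim}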
From Fig.~\ref{ttu}, one finds that the $s^\pm$-wave pairing causes the most energy gain among the four gap form factors, with the optimized gap amplitude at $\Delta=0.022t_1\approx 50$ meV, comparable with that of the cuprates, implying a similar $T_c$ between them. The gap function of the obtained $s^\pm$-wave SC is shown on the FS in Fig.~\ref{ttmu}, which is well consistent with that obtained in the RPA calculation. Note that we have not included antiferromagnetic order in our trial wave function, as we mainly focus on the SC here. Generally, such antiferromagnetic order will be favored at low dopings and decay with further doping. In the framework of the VMC, the antiferromagnetic order possibly coexists with the SC at low dopings. We leave this topic for future studies. \section{Discussion and Conclusion} The synthesis of octagraphene is under way. Recently, graphene-like nanoribbons periodically embedded with four- and eight-membered rings have been synthesized \cite{Zhong}. A scanning tunneling microscopy and atomic force microscopy study revealed that four- and eight-membered rings are formed between adjacent perylene backbones with a planar configuration. This 2D material can be taken as an intermediate between graphene and the octagraphene studied here. Most probably, octagraphene will be synthesized in the near future, which would provide a material basis for the present study. In conclusion, we have studied possible pairing states in the single-orbital Hubbard model on the square-octagon lattice with only nearest-neighbor hopping terms. Due to the perfect FS nesting in the undoped system, slight doping would induce HTCS, driven by strong incommensurate SDW fluctuations. Our combined RPA-, SBMF-, and VMC-based calculations, suitable for the weak, strong, and intermediate coupling strengths, respectively, consistently yield standard $s^\pm$-wave SC in this simple one-orbital system. The smoking-gun evidence of this intriguing pairing state would be the pronounced subgap spin resonance mode emerging upon the superconducting transition, which can be detected by inelastic neutron scattering. We propose octagraphene as a possible material realization of the model, and our VMC calculations adopting a realistic interaction parameter for this material yield a pairing gap amplitude of about 50 meV, comparable with that of the cuprates, which implies a comparable $T_c$ between the two systems. Our study also applies to other materials with similar lattice structures. Our results, if confirmed, would start a new stage in the discovery of high-$T_c$ SC. \begin{acknowledgments} F.Y. acknowledges the support from NSFC under the Grants No. 11674025, No. 11334012, and No. 11274041. Y.-T.K. and D.-X.Y. are supported by NKRDPC Grants No. 2017YFA0206203, No. 2018YFA0306001, NSFC-11574404, and NSFG-2015A030313176, the Special Program for Applied Research on Super Computation of the NSFC-Guangdong Joint Fund, the National Supercomputer Center in Guangzhou, and the Leading Talent Program of Guangdong Special Projects. \end{acknowledgments} \null\vskip-8mm \bibliographystyle{apsrev4-1}
\section{Introduction} In this paper we prove new results in two areas of theoretical computer science that have received a lot of attention recently: \emph{streaming algorithms} and \emph{locally decodable codes}. \emph{Streaming algorithms} constitute a model of computation introduced by Alon, Matias and Szegedy~\cite{alon1999space} (for which they won the G{\"o}del Prize in 2005) in order to understand the space complexity of approximation algorithms. In the last decade, there have been several results in the direction of proving upper and lower bounds for streaming algorithms for combinatorial optimization problems~\cite{verbin2011streaming,goel2012communication,kapralov2014streaming,guruswami2017streaming,kapralov20171+,kapralov2019optimal,DBLP:conf/approx/GuruswamiT19,DBLP:conf/focs/ChouGV20,chou2021approximabilityA,chou2021approximabilityB,assadi2021simple,chen2021almost}. The goal here is to obtain a $1/\gamma$ approximation (for some $\gamma \leq 1$) of the optimum value of the combinatorial optimization problem with as little space as possible. One favourite problem considered by many works is the well-known \emph{Max-Cut}, or its generalization over large alphabets $\ensuremath{\mathbb{Z}}_r$, \emph{Unique Games}. Here, a $2$-approximation algorithm for Max-Cut on $n$ vertices can be implemented in logarithmic space, while a sequence of works~\cite{kapralov2014streaming,kapralov20171+,kapralov2019optimal} showed that getting a $(2-\varepsilon)$-approximation requires linear space, matching the upper bound of~\cite{ahn2012graph}. A similar, but less optimized, scenario was observed for Unique Games, i.e., there is a threshold behaviour in complexity going from $r$- to $(r-\varepsilon)$-approximation. Curiously, many of these lower bounds were proven via variants of a problem called \emph{Boolean Hidden Matching} ($\ensuremath{\mathsf{BHM}}$), and it is well known that $\ensuremath{\mathsf{BHM}}$ can be solved using logarithmic \emph{quantum space}, so a natural question is: could quantum space help in solving these combinatorial optimization problems? One corollary of~\cite{kapralov2014streaming,shi2012limits} is that obtaining the \emph{strong} $(1+\varepsilon)$-approximation factor for Max-Cut and Unique Games streaming algorithms is quantum-hard. However, in the widely-studied, weaker regime of $(2-\varepsilon)$-approximation (for Max-Cut) or $(r-\varepsilon)$-approximation (for Unique Games over $\ensuremath{\mathbb{Z}}_r$) algorithms, it is still unclear whether there could be any savings in the quantum setting. \emph{Locally decodable codes} ($\mathsf{LDC}$s) are error-correcting codes $C:\Sigma^n\rightarrow\Gamma^N$ (for alphabets $\Sigma,\Gamma$) that allow transmission of information over noisy channels. By querying a few locations of a noisy codeword $\tilde{C}(x)$, one needs to reconstruct an arbitrary coordinate of $x\in \Sigma^n$ with probability at least $1/|\Sigma|+\varepsilon$. The main goal in this field is to understand the trade-offs between $N$ and $n$. $\mathsf{LDC}$s have found several applications in pseudorandom generators, hardness amplification, private information retrieval schemes, cryptography, and complexity theory (refer to~\cite{yekhanin2012locally,gopi2018locality} for a detailed exposition). Despite their ubiquity, $\mathsf{LDC}$s are not well understood, even in the simplest case of \emph{$2$-query $\mathsf{LDC}$s}.
For the case when $\Sigma=\Gamma=\{0,1\}$, exponential lower bounds of $N=2^{\Omega(n)}$ were established over two decades ago~\cite{goldreich2002lower,DBLP:journals/jcss/KerenidisW04,dvir2007locally}. In contrast, a breakthrough result of Dvir and Gopi~\cite{dvir20162} in 2015 showed how to construct $2$-query $\mathsf{LDC}$s with \emph{subexponential} length in the regime when $\Sigma=\{0,1\}$ and $\Gamma$ is a finite field $\ensuremath{\mathbb{F}}_N$. Despite these results, our knowledge of such $N$ and $n$ trade-offs for $2$-query $\mathsf{LDC}$s is still lacking, especially for the not-so-well-studied case when $\Sigma=\Gamma=\ensuremath{\mathbb{Z}}_r$. Prior works that handled simpler versions of the questions above used one technical tool successfully: \emph{hypercontractivity} for real-valued functions over the Boolean cube. Since we are concerned with proving quantum lower bounds for streaming algorithms and establishing lower bounds for $\mathsf{LDC}$s when the input alphabet is over $\ensuremath{\mathbb{Z}}_r$, we are led to the following main question: \emph{Is there a version of hypercontractivity for matrix-valued functions over~$\ensuremath{\mathbb{Z}}_r$?} \subsection{Our results} Summarizing our main contributions, we first prove a version of hypercontractivity for matrix-valued functions $f:\ensuremath{\mathbb{Z}}_r^n\rightarrow\mathbb{C}^{m\times m}$. The proof of this crucially relies on proving uniform convexity for trace norms of $r$ matrices, which in turn generalizes the powerful $2$-uniform convexity of Ball, Carlen and Lieb~\cite{ball1994sharp}. Using this new hypercontractivity theorem, we prove our two applications. First, we prove a quantum space lower bound for streaming algorithms. It is easy to see that a $2$-approximation algorithm for Max-$k$-Cut on $n$ vertices in the classical streaming model can be implemented in $O(\log n)$ space, and we show that obtaining a $1.99$-approximation algorithm in the adversarial model requires $\Omega(n^{1-2/t})$ quantum space or $\Omega(n^{1-1/t})$ classical space. As far as we are aware, this is the first quantum space lower bound for an optimization problem. Although our lower bounds apply to the adversarial model, while the prior work of Kapralov, Khanna and Sudan~\cite{kapralov2014streaming} and the mathematical tour-de-force result of Kapralov and Krachun~\cite{kapralov2019optimal} obtained an $\Omega(n)$ classical space lower bound for $(2-\varepsilon)$-approximation in the \emph{random} model, our proofs are significantly simpler. We further generalize our results to the case of $t$-hyperedge hypergraphs with vertices taking values over $\ensuremath{\mathbb{Z}}_r$. These hypergraphs can naturally be viewed as instances of Unique Games wherein the constraints are over $\ensuremath{\mathbb{Z}}_r$. Here again, we prove that an $r$-approximation algorithm can be obtained with $O(\log n)$ classical space, whereas obtaining an $(r-\varepsilon)$-approximation algorithm requires $\Omega(n^{1-1/t})$ classical space or $\Omega(n^{1-2/t})$ quantum space. Second, we show an $N= 2^{\Omega(n/r^4)}$ lower bound for (even non-linear) $\mathsf{LDC}$s over $\ensuremath{\mathbb{Z}}_r$. In particular, for all $r=o(n^{1/4})$, we obtain a super-polynomial in $n$ lower bound for $\mathsf{LDC}$s over $\ensuremath{\mathbb{Z}}_r$.
Previous main results in this direction were by Goldreich et al.~\cite{goldreich2002lower} for $r=2$ and linear $\mathsf{LDC}$s, Kerenidis and de Wolf~\cite{DBLP:journals/jcss/KerenidisW04} for $r=2$ and \emph{non-linear} $\mathsf{LDC}$s, Wehner and de Wolf~\cite{wehner2005improved} for non-linear $\mathsf{LDC}$s from $\{0,1\}^n\to\ensuremath{\mathbb{Z}}_r^N$, and finally by Dvir and Shpilka~\cite{dvir2007locally} for $r>2$ but linear $\mathsf{LDC}$s. Apart from the result of~\cite{dvir2007locally}, we are not aware of any lower bounds for non-linear $\mathsf{LDC}$s from $\ensuremath{\mathbb{Z}}_r^n\to\ensuremath{\mathbb{Z}}_r^N$, even though it is a very natural question with connections to other fundamental problems, such as polynomial identity testing~\cite{dvir2007locally}, private information retrieval~\cite{katz2000efficiency,goldreich2002lower}, additive combinatorics~\cite{briet2016outlaw} and quantum complexity theory~\cite{aaronson2018pdqp}, to name a few. Furthermore, we are not aware of a formal reduction between $\mathsf{LDC}$s with $\Sigma=\{0,1\}$ and $\Sigma=\ensuremath{\mathbb{Z}}_r$, especially with recovery probability $1/|\Sigma| + \varepsilon$. Moreover, some past works define $\mathsf{LDC}$s over general $\Sigma$ with success probability $\geq \operatorname{Pr}[\text{wrong output}] + \varepsilon$~\cite{gopi2018locality}, $\geq 1/2 + \varepsilon$~\cite{goldreich2002lower} or $\geq 1-\varepsilon$~\cite{dvir2011matrix}. These alternative definitions are encompassed by ours by taking $\varepsilon$ to be a large enough constant. In the remaining part of the introduction, we describe these contributions in more detail. \subsection{Matrix hypercontractive inequality (over large alphabets)} \paragraph{Fourier analysis on the Boolean cube.} We first discuss the basics of Fourier analysis before stating our result. Let $f:\{0,1\}^n\rightarrow \ensuremath{\mathbb{R}}$ be a function; then the Fourier decomposition of $f$ is $$ f(x)=\sum_{S\in\{0,1\}^n} \widehat{f}(S)(-1)^{S\cdot x}, $$ where $S\cdot x = \sum_{i=1}^n S_ix_i$ (with the sum taken modulo 2) and the \emph{Fourier coefficients} of $f$ are defined as $\widehat{f}(S)=\mathbb{E}_x[f(x) (-1)^{S\cdot x}]$, the expectation taken over uniformly random $x\in\{0,1\}^n$. One of the main technical tools in theoretical computer science is the hypercontractivity theorem proven by Bonami and Beckner~\cite{bonami1970etude,beckner1975inequalities}. In order to understand the hypercontractivity theorem, we first need to define the noise operator: for a noise parameter $\rho \in [-1,1]$, let $\operatorname{T}_\rho$ be an operator on the space of functions $f:\{0,1\}^n\rightarrow \ensuremath{\mathbb{R}}$ defined as $$ ({\operatorname{T}}_\rho f)(x)=\operatorname*{\mathbb{E}}_{y\sim \mathcal{N}_\rho(x)}[f(y)], $$ where $y\sim \mathcal{N}_\rho(x)$ denotes that the random string $y\in\{0,1\}^n$ is drawn as $y_i=x_i$ with probability $\frac{1}{2} + \frac{1}{2}\rho$ and as $y_i=x_i\oplus 1$ with probability $\frac{1}{2} - \frac{1}{2}\rho$ for each $i\in[n]$ independently. One can show that the Fourier expansion of ${\operatorname{T}}_\rho f$ can be written as $$ ({\operatorname{T}}_\rho f)(x)=\sum_{S\in\{0,1\}^n}\rho^{|S|}\widehat{f}(S)(-1)^{S\cdot x}. $$ One way to view this expression intuitively is that ``large-weight'' Fourier coefficients are damped by an exponentially small factor while ``small-weight'' Fourier coefficients remain approximately the same.
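For small $n$, the noise operator and its Fourier expansion can be checked directly by brute force; the following Python sketch does so for an arbitrary random function on $\{0,1\}^3$.
\begin{verbatim}
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, rho = 3, 0.6
cube = list(product([0, 1], repeat=n))
f = {x: rng.normal() for x in cube}      # an arbitrary f: {0,1}^n -> R

def chi(S, x):                           # the character (-1)^{S.x}
    return (-1) ** sum(s * xi for s, xi in zip(S, x))

# Fourier coefficients f^(S) = E_x[f(x) (-1)^{S.x}]
fhat = {S: np.mean([f[x] * chi(S, x) for x in cube]) for S in cube}

def T_rho_direct(x):
    # (T_rho f)(x) = E_{y ~ N_rho(x)}[f(y)], computed exactly: each bit
    # of y agrees with the corresponding bit of x w.p. (1+rho)/2.
    total = 0.0
    for y in cube:
        w = 1.0
        for xi, yi in zip(x, y):
            w *= (1 + rho) / 2 if xi == yi else (1 - rho) / 2
        total += w * f[y]
    return total

def T_rho_fourier(x):
    # ... and via the Fourier side: sum_S rho^{|S|} f^(S) (-1)^{S.x}
    return sum(rho ** sum(S) * fhat[S] * chi(S, x) for S in cube)

assert all(abs(T_rho_direct(x) - T_rho_fourier(x)) < 1e-12 for x in cube)
print("Fourier expansion of T_rho f verified on all of {0,1}^3.")
\end{verbatim}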
From this expansion, it is not hard to see that $\|{\operatorname{T}}_\rho f\|_p\leq \|f\|_p$ for every $p\geq 1$, where $\|f\|_p:=\big(\mathbb{E}_x[|f(x)|^p]\big)^{1/p}$ is the standard normalized $p$-norm of the function $f$. The main hypercontractivity theorem states that the previous inequality holds true even if we replace the norm on the left-hand side by a larger one (meaning that the noise operator is not just contractive, but \emph{hypercontractive}), i.e., for every $p\in [1,2]$ and $\rho \leq \sqrt{p-1}$, we have that $\|{\operatorname{T}}_\rho f\|_2\leq \|f\|_p$,\footnote{The hypercontractivity theorem can be stated for arbitrary $1\leq p \leq q$ and $\rho \leq \sqrt{(p-1)/(q-1)}$; here we state it for $q=2$ since we will be concerned with this setting.} which can alternatively be written as \begin{align} \label{eq:basichypercontractivity} \left(\sum_{S\in \{0,1\}^n} \rho^{2|S|} \widehat{f}(S)^2\right)^{1/2} \leq \left(\frac{1}{2^n}\sum_{x\in\{0,1\}^n} |f(x)|^p\right)^{1/p}. \end{align} This inequality has found several applications in approximation theory~\cite{khot2007optimal,dinur2005hardness}, expander graphs~\cite{hoory2006expander}, circuit complexity~\cite{linial1993constant}, coding theory~\cite{carlen1993optimal}, and quantum computing~\cite{gavinsky2007exponential,DBLP:journals/qic/Montanaro11} (for more applications we refer the reader to~\cite{de2008brief,o2014analysis,montanaro2012some}). All these applications deal with understanding the effect of noise on real-valued functions on the Boolean cube. \noindent \textbf{Generalizations of hypercontractivity.} There are two natural generalizations of hypercontractivity: $(i)$ a hypercontractivity statement for arbitrary product probability spaces. In this direction, it is possible to prove a similar hypercontractive inequality: for every $p\in [1,2]$ and $f\in L^2(\Omega_1\times\cdots\times\Omega_n, \pi_1\otimes\cdots\otimes\pi_n)$, we have \begin{align} \label{eq:genhyperFr} \|{\operatorname{T}}_\rho f\|_2 \leq \|f\|_p \hspace{1mm}\text{ for }\hspace{1mm} \rho \leq \sqrt{p-1}\cdot\lambda^{1/p-1/2}, \end{align} where $\lambda$ is the smallest probability in any of the finite probability spaces $(\Omega_i,\pi_i)$ (see~\cite[Chapter~10]{o2014analysis}). As a corollary, one gets a hypercontractive inequality for $f:\ensuremath{\mathbb{Z}}_r^n\to\mathbb{R}$; $(ii)$ a hypercontractivity statement for matrix-valued functions $f:\{0,1\}^n\rightarrow \mathbb{C}^{m\times m}$, where the Fourier coefficients $\widehat{f}(S)=\mathbb{E}_x[f(x)(-1)^{S\cdot x}]$ are now $m\times m$ complex matrices. This was considered by Ben-Aroya, Regev and de Wolf~\cite{ben2008hypercontractive}, who proved a hypercontractivity statement by using the powerful inequality of Ball, Carlen and Lieb~\cite{ball1994sharp}. However, is there a generalization of hypercontractivity in both directions, i.e., a matrix-valued hypercontractivity for functions over $\ensuremath{\mathbb{Z}}_r$? As far as we are aware, this was open, and resolving it is our first main technical result.
\begin{result} \label{result:hyper} For any $f:\ensuremath{\mathbb{Z}}_r^n\rightarrow \mathbb{C}^{m\times m}$, $p\in[1,2]$ and $\rho\leq \sqrt{\frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}}$, % \begin{align} \label{eq:resulthypereq} \left(\sum_{S\in \ensuremath{\mathbb{Z}}_r^n} \rho^{2|S|} \|\widehat{f}(S)\|_p^2\right)^{1/2} \leq \left(\frac{1}{r^n}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\|f(x)\|_p^p\right)^{1/p}, \end{align} % where $\|M\|_p :=\big(\sum_i\sigma_i(M)^p\big)^{1/p}$ is the Schatten $p$-norm defined from the singular values $\{\sigma_i(M)\}_i$ of the matrix $M$ and $|S|:=|\{i\in[n]: S_i\neq 0\}|$ is the Hamming weight of $S\in\ensuremath{\mathbb{Z}}_r^n$. \end{result} The above result can be seen as an analogue of Eq.~\eqref{eq:basichypercontractivity} where the absolute values are replaced with Schatten norms. We now make a couple of remarks. First, when $m=1$ our result compares to the one in Eq.~\eqref{eq:genhyperFr} for $f:\ensuremath{\mathbb{Z}}_r^n\to\mathbb{R}$, but with a slightly worse $\rho$ parameter compared to the $(1/r)^{1/p-1/2}$ factor. Second, for $r=2$ we recover the same inequality as in~\cite{ben2008hypercontractive}. The proof of this result is rather technical and not very intuitive. As in the proofs of other hypercontractive inequalities~\cite{o2014analysis,ben2008hypercontractive}, our result follows by induction on $n$. It so happens that the base case is the most non-trivial step in the proof. So for now, let us assume $n=1$, i.e., our goal is to prove Eq.~\eqref{eq:resulthypereq} for $n=1$. We now consider two special \emph{simple} cases of the~inequality. \emph{(i)} $r=2$ and $\mathbb{C}^{m\times m}$ is replaced with real numbers: in this case, this is the well-known two-point inequality of Gross~\cite{gross1975logarithmic}, used in the study of logarithmic Sobolev inequalities. A proof of this inequality can also be easily viewed from a geometric perspective. As far as we are aware, there is no generalized $r$-point inequality for $r>2$. \emph{(ii)} $r=2$ and $\mathbb{C}^{m\times m}$ are arbitrary matrices: in this case, we only need to deal with two matrices $f(0),f(1)$, and Eq.~\eqref{eq:resulthypereq} is exactly a powerful inequality in functional analysis, called the \emph{$2$-uniform convexity} of trace norms, $$ \left(\frac{\|X+Y\|_p^p + \|X-Y\|_p^p}{2}\right)^{2/p} \geq \|X\|_p^2 + (p-1)\|Y\|_p^2. $$ This inequality was first proven for certain values of $p$ by Tomczak-Jaegermann~\cite{tomczak1974moduli} before being extended to all $p\in[1,2]$ by Ball, Carlen and Lieb~\cite{ball1994sharp} in 1994. Since then it has found several applications, e.g.\ an optimal hypercontractivity inequality for Fermi fields~\cite{carlen1993optimal}, regularized convex optimization~\cite{duchi2010composite} and metric embeddings~\cite{lee2004embedding,naor2016spectral}. $2$-uniform convexity can also be used to prove a variety of other inequalities, for example, the Khintchine inequality~\cite{tomczak1974moduli,davis1984complex} and Hoeffding- and Bennett-style bounds~\cite{pinelis1994optimum,howard2020time}. Moreover, the above result could be seen as a corollary of Hanner's inequality for matrices (originally proven for Lebesgue spaces $L_p$~\cite{hanner1956uniform}); unfortunately, however, Hanner's inequality for Schatten trace ideals is only proven for $p\leq 4/3$ (see more in~\cite{ball1994sharp}). As far as we are aware, a generalization of the above inequality to $r$ matrices was unknown.
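Although the proof of this inequality is delicate, the statement itself is easy to probe numerically; the following Python sketch checks the $2$-uniform convexity inequality on random complex matrices (at $p=2$ it reduces to the parallelogram law, so the gap there vanishes up to rounding).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def schatten(M, p):
    # Schatten p-norm from the singular values of M.
    return np.sum(np.linalg.svd(M, compute_uv=False) ** p) ** (1.0 / p)

def gap(X, Y, p):
    # LHS - RHS of the 2-uniform convexity inequality; should be >= 0.
    lhs = ((schatten(X + Y, p) ** p + schatten(X - Y, p) ** p) / 2) ** (2 / p)
    return lhs - (schatten(X, p) ** 2 + (p - 1) * schatten(Y, p) ** 2)

m = 6
for p in [1.0, 1.3, 1.7, 2.0]:
    gaps = []
    for _ in range(200):
        X = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
        Y = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
        gaps.append(gap(X, Y, p))
    print(f"p = {p:.1f}: min gap over 200 random trials = {min(gaps):.4f}")
\end{verbatim}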
One contribution of our work is the following generalization of the result of Ball, Carlen and Lieb~\cite{ball1994sharp} (note that it also implies a generalization of the two-point inequality), which we believe may be of independent interest. \begin{result} \label{res:bcl} Let $r\in\mathbb{Z}$, $r\geq 2$. Let $\omega_r := e^{2i\pi/r}$, $A_0,\ldots,A_{r-1}\in \ensuremath{\mathbb{C}}^{n\times n}$ and $p\in [1,2]$, then % \begin{align} \left(\frac{1}{r} \sum_{k=0}^{r-1}\left\|\sum_{j=0}^{r-1} \omega_r^{jk} A_j\right\|_p^p\right)^{2/p} \geq\left\|A_0 \right\|_p^2 + \frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}\sum_{k=1}^{r-1} \left\| A_k\right\|_p^2.\label{eq:eqball-intro} \end{align} \end{result} Now that we have established Result~\ref{res:bcl}, the proof of Result~\ref{result:hyper} is a simple induction argument on $n$: for the base case $n=1$, Result~\ref{result:hyper} is exactly Result~\ref{res:bcl}, and proving the induction step requires an application of Minkowski's inequality. Since this proof is very similar to the one in~\cite{ben2008hypercontractive}, we omit the details here. \subsection{Application 1: Streaming algorithms} Approximation algorithms for combinatorial optimization problems have been a rich area of study in theoretical computer science. One of the most famous approximation algorithms is by Goemans and Williamson~\cite{goemans1995improved}, who proved that one can obtain a $1/0.878$-approximation algorithm in \emph{polynomial time} for Max-Cut using semi-definite programming, and this is believed to be optimal assuming the Unique Games conjecture is true~\cite{khot2007optimal}. In the past few years there has been a sequence of works~\cite{kapralov2014streaming,guruswami2017streaming,DBLP:conf/approx/GuruswamiT19,kapralov2019optimal,DBLP:conf/focs/ChouGV20,chou2021approximabilityA,chou2021approximabilityB} that tried to prove \emph{unconditional} hardness of combinatorial optimization (e.g.\ the Max-Cut problem) in the well-known streaming model of computation of Alon, Matias and Szegedy~\cite{alon1999space}. In the streaming model, the goal is to optimize the amount of \emph{space} needed to solve a problem rather than time, and to output a value which is at least a fraction $\gamma$ of the optimum value with high probability. Many recent works referenced above have shown interesting threshold theorems; for example, for the Max-Cut problem, getting a $2$-approximation algorithm using $O(\log n)$ classical space is easy: one simply counts the number of edges in the graph (which requires only a counter of size $2\log n$) and outputs half this count. Moreover, one can obtain a graph sparsifier using $O(n/\varepsilon^2)$ space~\cite{ahn2012graph} and, from it, a $(1+\varepsilon)$-approximation of the Max-Cut value. On the other hand, Kapralov et al.~\cite{kapralov2014streaming} initiated the study of proving streaming lower bounds for Max-Cut in the random-edge model (where inputs arrive randomly, and not necessarily adversarially); in this work they showed that one requires $\Omega(\sqrt{n})$ space for $(2-\varepsilon)$-approximations in an $n$-vertex graph, together with a classical lower bound of $\Omega(n^{1-\varepsilon})$ for $(1+\varepsilon)$-approximations in the adversarial model (their proof, together with Result~\ref{res:lowerHHM} below, immediately implies a similar quantum lower bound for $(1+\varepsilon)$-approximation).
After a sequence of works, Kapralov and Krachun~\cite{kapralov2019optimal} finally obtained an $\Omega(n)$ space lower bound for $(2-\varepsilon)$-approximations even in the random-edge model. A common technique to prove streaming lower bounds is via communication complexity. To see this, suppose a problem $P$ has inputs $(X,Y)$ and the goal is to find space-efficient streaming algorithms to compute $P(X,Y)$ when $X,Y$ are presented in a stream (i.e., presented bit-by-bit). Then, one way to lower bound the \emph{space complexity} is to prove lower bounds on the following problem: consider the one-way communication problem where Alice gets the input $X$ and Bob gets the input $Y$; their goal is to compute $P(X,Y)$, and only Alice is allowed to communicate to Bob. One can show that any lower bound for randomized one-way communication implies an equivalent lower bound for streaming algorithms. This technique has been used by a sequence of papers to prove lower bounds on the space complexity of Max-Cut~\cite{kapralov2014streaming,kapralov2019optimal,DBLP:conf/approx/GuruswamiT19}, matching~\cite{goel2012communication}, Max-CSP~\cite{guruswami2017streaming,DBLP:conf/focs/ChouGV20,chou2021approximabilityA,chou2021approximabilityB} and counting cycles~\cite{verbin2011streaming,assadi2021simple}. One problem that is often used in this direction is a variant of the Boolean Hidden Matching problem. \subsubsection{Hidden Matching and its variants} The Boolean Hidden Matching ($\ensuremath{\mathsf{BHM}}$) problem was introduced by Bar-Yossef et al.~\cite{bar2004exponential} (which was in turn inspired by Kerenidis and de Wolf~\cite{DBLP:journals/jcss/KerenidisW04} for proving $\mathsf{LDC}$ lower bounds) in order to prove exponential separations between quantum and classical one-way communication complexities. Below we describe the generalized Hidden Matching problem over larger alphabets and hypermatchings. The $r$-ary Hidden Hypermatching ($r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$) problem is a two-party communication problem between Alice and Bob: Alice is given $x\in \ensuremath{\mathbb{Z}}_r^n$ and Bob is given a string $w\in \ensuremath{\mathbb{Z}}_r^{\alpha n/t}$ and $\alpha n/t$-many disjoint $t$-tuples (for $\alpha\in(0,1]$), i.e., the hyperedges of an $\alpha$-partial hypermatching, which can also be viewed as an incidence matrix $M\in \{0,1\}^{\alpha n/t\times n}$ (each row corresponding to a hyperedge). In the $\ensuremath{\mathsf{YES}}$ instance it is promised that $w=Mx$ (over $\ensuremath{\mathbb{Z}}_r$), while in the $\ensuremath{\mathsf{NO}}$ instance it is promised that $w$ is uniformly random, and the goal is to decide which is the case using a message sent from Alice to Bob.
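To make the definition concrete, the following Python sketch samples an $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ instance, representing the hypermatching by its list of disjoint $t$-tuples (the rows of $M$).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def rhh_instance(n, r, t, alpha, yes):
    # Bob's alpha-partial hypermatching: alpha*n/t disjoint t-tuples,
    # i.e., the rows of the incidence matrix M.
    m = int(alpha * n) // t              # number of hyperedges
    perm = rng.permutation(n)
    edges = [tuple(perm[i * t:(i + 1) * t]) for i in range(m)]
    x = rng.integers(0, r, size=n)       # Alice's input
    if yes:                              # YES instance: w = Mx over Z_r
        w = np.array([x[list(e)].sum() % r for e in edges])
    else:                                # NO instance: w uniformly random
        w = rng.integers(0, r, size=m)
    return x, edges, w

n, r, t, alpha = 24, 3, 4, 0.5
x, edges, w = rhh_instance(n, r, t, alpha, yes=True)
print("first hyperedges:", edges[:2])
print("w = Mx holds:", all(x[list(e)].sum() % r == wi
                           for e, wi in zip(edges, w)))
\end{verbatim}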
There have been a few lines of work on understanding the Hidden Hypermatching problem: $(i)$ the seminal works of Bar-Yossef et al.~\cite{bar2004exponential} and Gavinsky et al.~\cite{gavinsky2007exponential} showed that, for $r=t=2$, $\ensuremath{\mathsf{BHM}}$ can be solved using $O(\log n)$ qubits but requires $\Omega(\sqrt{n})$ classical bits of communication; $(ii)$ Verbin and Yu~\cite{verbin2011streaming} considered the problem where $r=2$ and $t\geq 2$ (which in fact inspired many follow-up works on using hypermatchings for classical streaming lower bounds) and showed a classical lower bound of $\Omega(n^{1-1/t})$, which was subsequently generalized to an $\Omega(n^{1-2/t})$ quantum lower bound by Shi, Wu and Yu~\cite{shi2012limits}; $(iii)$ Guruswami and Tao~\cite{DBLP:conf/approx/GuruswamiT19} studied the problem for $t=2$ and $r\geq 2$, proving a classical $\Omega(\sqrt{n})$ lower bound. A natural question is then: what are the quantum and classical communication complexities for $r,t\geq2$? In this paper, we give both upper and lower bounds for the Hidden Hypermatching problem for every $r$ and $t$. \paragraph{Upper bounds on Hidden Hypermatching.} For a given $t\geq2$, the same classical communication protocol as for $r=2$ can be used for general $r>2$. The idea is that Alice picks $O((n/\alpha)^{1-1/t})$ entries of $x$ uniformly at random to send to Bob. By the Birthday Paradox, with high probability Bob will obtain all the values from one of his hyperedges $i$, and thus can compare $(Mx)_i$ with the corresponding $w_i$. If they are equal, he outputs $\ensuremath{\mathsf{YES}}$, otherwise he outputs $\ensuremath{\mathsf{NO}}$, which leads to a one-sided error of $O(1/r)$. The total amount of communication is $O(\log{(rn)}(n/\alpha)^{1-1/t})$ bits.\footnote{One can further improve this complexity to $O(\log{(n\log{r})} + (\log r)\cdot (n/\alpha)^{1-1/t})$ by Newman's theorem~\cite{newman1991private}.} The situation is more interesting in the quantum setting. For $t=2$, we prove that Hidden Hypermatching can be solved using only a logarithmic number of qubits for every $r=\poly(n)$. \begin{result} \label{res:upperBHM} There is a protocol for $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,2,n)$ with one-sided error $1/3$ using $O(\log{(nr)}/\alpha)$~qubits. \end{result} The above bound uses a non-trivial procedure that allows one to learn the sum of two numbers modulo $r$ by using just one ``query'', and it crucially uses the knowledge of the string $w$: given a suitable superposition of two numbers, one can obtain their sum with one-sided error by using one measurement. As far as we are aware, such a statement was not known prior to our work. However, the knowledge of $w$ is vital, which means that the protocol does not work for more general settings where there is no promise on the inputs (e.g.\ a relational version of the $r$-ary Hidden Hypermatching problem where Bob must output one hyperedge $i$ and its corresponding value $(Mx)_i$), and it also cannot be used as a building block for the general case $t,r>2$. The current upper bound on the quantum communication complexity of the $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ problem with $t,r> 2$ thus matches the classical one. In view of the lower bounds stated below, we hence make the following conjecture. \begin{conjecture} If $t,r>2$, there is a protocol for $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ using $O(\log{(rn)}(n/\alpha)^{1-1/\lceil t/2\rceil})$~qubits.
\end{conjecture} \paragraph{Lower bounds on Hidden Hypermatching.} The standard approach for proving a lower bound on the amount of communication required to solve the Hidden Hypermatching problem is via Fourier analysis. In the classical proofs of Gavinsky et al.~\cite{gavinsky2007exponential}, Verbin and Yu~\cite{verbin2011streaming} and Guruswami and Tao~\cite{DBLP:conf/approx/GuruswamiT19}, the total variation distance between the probability distributions arising from the $\ensuremath{\mathsf{YES}}$ and $\ensuremath{\mathsf{NO}}$ instances is bounded using the inequality of Kahn, Kalai and Linial~\cite{kahn1989influence} (which can be seen as a corollary of the hypercontractivity inequality). On the other hand, Shi, Wu and Yu~\cite{shi2012limits} obtained a quantum lower bound by bounding the Schatten $1$-norm between the possible density matrices received by Bob in both the $\ensuremath{\mathsf{YES}}$ and $\ensuremath{\mathsf{NO}}$ instances via the matrix-valued hypercontractivity of Ben-Aroya, Regev and de Wolf~\cite{ben2008hypercontractive}. We follow a similar approach by using our \emph{generalized matrix-valued hypercontractive} inequality from Result~\ref{result:hyper} in order to obtain the following lower bound (note that, for $r=2$, our lower bound is exponentially better in $\alpha$ compared to~\cite{shi2012limits}). \begin{result} \label{res:lowerHHM} Every constant-bias protocol for the $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ problem with $t,r\geq 2$ requires at least $\Omega((n/t)^{1-2/t}/\alpha^{2/t})$ qubits of communication or $\Omega((n/t)^{1-1/t}/\alpha^{1/t})$ bits of communication. \end{result} \subsubsection{Relation to streaming lower bounds} As mentioned at the start of this section, using one-way communication complexity lower bounds has been a common technique in several recent works~\cite{verbin2011streaming,kapralov2014streaming,guruswami2017streaming,DBLP:conf/approx/GuruswamiT19,DBLP:conf/focs/ChouGV20} to prove streaming lower bounds. Using our classical and quantum communication lower bounds, we present two lower bounds for streaming problems. There are a few natural generalizations of Max-Cut. One is Max-$k$-Cut, i.e., finding the maximum cut value on a hypergraph with $k$-sized hyperedges. Clearly, the lower bound of~\cite{kapralov2019optimal} holds true for Max-$k$-Cut, but could one prove a better lower bound depending on $k$? Another is the Unique Games problem, a constraint satisfaction problem defined on a graph, where a linear constraint (a permutation) over $\ensuremath{\mathbb{Z}}_r$ is specified on each edge and the goal is to find a vertex assignment over $\ensuremath{\mathbb{Z}}_r$ that maximizes the number of satisfied constraints. When $r=2$, Unique Games reduces to Max-Cut. Guruswami and Tao~\cite{DBLP:conf/approx/GuruswamiT19} studied the streaming complexity of the Unique Games problem and proved a lower bound of $\Omega(\sqrt{n})$ in the adversarial model by using a reduction to Hidden Matching over $\ensuremath{\mathbb{Z}}_r$, and the same bound was obtained in~\cite{chou2021approximabilityB} for a larger set of problems including Unique Games. Here we join both directions, i.e., Max-$k$-Cut and the standard Unique Games problem, into a generalized version of Unique Games defined on a hypergraph, and obtain streaming classical and quantum lower bounds in the adversarial model for any values $k,r\geq 2$.
\begin{result} Every streaming algorithm giving an $(r-\varepsilon)$-approximation for Unique Games on $k$-hyperedge $n$-vertex hypergraphs over $\ensuremath{\mathbb{Z}}_r$ uses $\Omega(n^{1-2/k})$ quantum space or $\Omega(n^{1-1/k})$ classical space. \end{result} The above result clearly generalizes the work of Guruswami and Tao~\cite{DBLP:conf/approx/GuruswamiT19}. Compared to Kapralov and Krachun~\cite{kapralov2019optimal}: on the one hand, our results are for the weaker adversarial model and their linear lower bound is stronger; on the other hand, their result does not immediately generalize to $\ensuremath{\mathbb{Z}}_r$ and their proof is purely classical (in fact, we remark that our classical lower bound is significantly simpler than theirs). As far as we are aware, these are the first quantum lower bounds for Unique Games and Max-$k$-Cut in the streaming model. \subsection{Application 2: Locally decodable codes} A locally decodable code ($\mathsf{LDC}$) is an error correcting code that allows one to retrieve a single symbol of the original message (with high probability) by only examining a few symbols of a corrupted codeword. More formally, a $(q,\delta,\varepsilon)$-$\mathsf{LDC}$ was defined by Katz and Trevisan~\cite{katz2000efficiency} as a function $C:\ensuremath{\mathbb{Z}}_r^n\rightarrow \ensuremath{\mathbb{Z}}_r^N$ for which there exists an algorithm $\ensuremath{\mathcal{A}}$ satisfying the following: for all $x\in \ensuremath{\mathbb{Z}}_r^n$, $i\in [n]$ and $y\in\ensuremath{\mathbb{Z}}_r^N$ such that $d(C(x),y)\leq \delta$ (i.e., at most a $\delta$-fraction of the elements of $C(x)$ are corrupted), $\ensuremath{\mathcal{A}}$ makes $q$ queries to $y$ non-adaptively and outputs $\mathcal{A}^{y}(i)\in \ensuremath{\mathbb{Z}}_r$ such that $\Pr[\mathcal{A}^{y}(i)=x_i]\geq 1/r+\varepsilon$ (where the probability is over the randomness of $\ensuremath{\mathcal{A}}$). Over $\{0,1\}$, $\mathsf{LDC}$s have found several applications in private information retrieval~\cite{chor1995private}, multiparty computation~\cite{ishai2004hardness}, data structures~\cite{chen2009efficient} and average-case complexity theory~\cite{trevisan2004some}. The natural question in constructing $\mathsf{LDC}$s is the trade-off between $N$ and $n$. A well-known $2$-query $\mathsf{LDC}$ is the Hadamard encoding that maps $x\in \ensuremath{\mathbb{Z}}_r^n$ into the string $C(x)=(\langle x,y\rangle)_{y\in \{0,1\}^n}$: on input $i\in [n]$, a decoding algorithm queries $C(x)$ at a uniformly random $y$ and at $y\oplus e_i$, where $e_i = 0^{i-1}10^{n-i}$, and recovers $x_i$ from the difference $\langle x,y\oplus e_i\rangle-\langle x,y\rangle$, which equals $x_i$ if $y_i=0$ and $-x_i$ if $y_i=1$. Here the encoding length is $N=2^n$, and an important question is: are there $2$-query $\mathsf{LDC}$s with $N\ll 2^n$? For the case $r=2$, Goldreich et al.~\cite{goldreich2002lower} showed a lower bound $N= 2^{\Omega(n)}$ for \emph{linear codes}, which was later improved by Obata~\cite{obata2002optimal}. Later, Kerenidis and de Wolf~\cite{DBLP:journals/jcss/KerenidisW04} proved an exponential lower bound for \emph{non-linear codes} using a quantum argument!\footnote{For simplicity in exposition, we omit the dependence on $\delta,\varepsilon$ in these lower bounds.} This left open the setting where $r>2$.
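For concreteness, here is a toy run of the Hadamard decoder just described (the numbers are chosen purely for illustration): let $r=5$, $n=3$, $x=(2,4,1)$ and $i=2$, and suppose the decoder samples $y=(1,0,1)$. Since $y_2=0$, we have $y\oplus e_2 = (1,1,1)$ and
\begin{align*}
\langle x,y\oplus e_2\rangle-\langle x,y\rangle \equiv 7 - 3 \equiv 4 \equiv x_2 \pmod{5},
\end{align*}
so two queries to the uncorrupted codeword recover $x_2$; when a $\delta$-fraction of $C(x)$ is corrupted, a union bound shows that both queried positions are uncorrupted with probability at least $1-2\delta$, since each query is individually uniformly distributed.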
Following these works, for $2$-query \emph{non-linear} $\mathsf{LDC}$s $C:\{0,1\}^n\to\ensuremath{\mathbb{Z}}_r^N$ (note the inputs are over $\{0,1\}$ and not $\ensuremath{\mathbb{Z}}_r$), Wehner and de Wolf~\cite{wehner2005improved} proved the lower bound $N=2^{\Omega(n/r^2)}$. On the other hand, Dvir and Shpilka~\cite{dvir2007locally} showed a lower bound of $N= 2^{\Omega(n)}$ for every $2$-query \emph{linear} $\mathsf{LDC}$ $C:\ensuremath{\mathbb{Z}}_r^n\to\ensuremath{\mathbb{Z}}_r^N$, independent of the field size. To prove their result, they crucially observed that, given a linear $\mathsf{LDC}$ over $\ensuremath{\mathbb{Z}}_r$, one can construct a linear $\mathsf{LDC}$ over $\{0,1\}$ (with almost the same parameters), and then invoked the result of Goldreich et al.~\cite{goldreich2002lower}. This reduction, however, fails for non-linear codes and motivates the question of whether there are \emph{non-linear} $\mathsf{LDC}$s $C:\ensuremath{\mathbb{Z}}_r^n\to\ensuremath{\mathbb{Z}}_r^N$ with $N\ll 2^n$. The main contribution here is a lower bound for \emph{non-linear} $\mathsf{LDC}$s over $\ensuremath{\mathbb{Z}}_r$ that scales as $2^{\Omega(n/r^4)}$ and gives a super-polynomial lower bound for $r=o(n^{1/4})$. Our lower bound comes from using our hypercontractive inequality in Result~\ref{result:hyper}. The idea is similar to the one from~\cite{ben2008hypercontractive}, but more technical as a result of optimizing the dependence on $r$. A rank-$1$ matrix of dimension $r^2N\times r^2N$ is constructed from a given $2$-query $\mathsf{LDC}$. Considering its Fourier transform over $\ensuremath{\mathbb{Z}}_r$, one finds various entries of the form $\mathbb{E}_x\big[\omega_r^{k_1C(x)_{j_1} + k_2C(x)_{j_2} - x_i}\big]$, whose absolute values are bounded by a technical result generalizing a few different ideas from~\cite{katz2000efficiency,DBLP:journals/jcss/KerenidisW04,ben2008hypercontractive}. It is then possible to lower bound the Schatten norm of the Fourier-transformed matrix. On the other hand, since the original matrix has rank $1$, its Schatten norm has a simple expression. The hypercontractive inequality connects the two quantities and leads to the following final result. \begin{result} \label{res:ldchyper} If $C:\ensuremath{\mathbb{Z}}_r^n\to\ensuremath{\mathbb{Z}}_r^N$ is a $(2,\delta,\varepsilon)$-$\mathsf{LDC}$, then $N=2^{\Omega(\delta^2\varepsilon^4 n/r^4)}$. \end{result} We briefly mention that, if one requires the success probability to be larger than, for example, $1/2+\varepsilon$ instead of $1/r+\varepsilon$, so that a plurality vote can be used and the success probability amplified, then the advantage over random guessing becomes a constant bounded away from $0$ (if $r>2$) and our lower bound no longer depends on $\varepsilon$. \paragraph{Further applications (Private information retrieval)} Katz and Trevisan~\cite{katz2000efficiency}, and Goldreich et al.~\cite{goldreich2002lower} established a nice connection between $\mathsf{LDC}$s and private information retrieval (\textsf{PIR}) protocols. We do not define these $\textsf{PIR}$ schemes here and refer the reader to Section~\ref{sec:PIR}. Almost as a black-box, using Result~\ref{res:ldchyper}, we get the following lower bound for $\textsf{PIR}$ schemes over $\ensuremath{\mathbb{Z}}_r$. \begin{result} A classical $2$-server PIR scheme with query size $t$, answer size $a$ and recovery probability $1/r + \varepsilon$ satisfies $t= \Omega\big(\delta^2\varepsilon^4 n/r^4 - a\big)$.
\end{result} \paragraph{After completion of this work.} After completing this work, Chou et al.~\cite{chou2021linear} posted an online preprint in which they improve our classical streaming lower bound to $\Omega(n)$ for a broad class of problems, including Unique Games. As far as we are aware, our quantum streaming lower bound is the first for hypergraphs over $\ensuremath{\mathbb{Z}}_r$. Additionally, after completion, Jop Bri\"et (private communication) gave an alternate proof of $N=2^{\Omega(n/r^2)}$ for $2$-query LDCs over $\ensuremath{\mathbb{Z}}_r$ using the non-commutative Khintchine~inequality. \subsection{Future work} Our work opens up a few directions for research. \emph{\textbf{1.}} \emph{Proving $\mathsf{LDC}$ lower bounds.} The first natural open question is: can we prove a lower bound of $N= 2^{\Omega(n/r)}$ for $\mathsf{LDC}$s over $\ensuremath{\mathbb{Z}}_r$, or, more ambitiously, prove that $N= 2^{\Omega(n)}$? As far as we are aware, there are no super-polynomial lower bounds for $N$ even for $r=\omega(\sqrt{n})$. Similarly, can one also prove a lower bound of $N= 2^{\Omega(n\log r)}$ for \emph{non-linear} locally-\emph{correctable} codes over $\ensuremath{\mathbb{Z}}_r$ (thereby matching a similar lower bound for the linear case~\cite{bhattacharyya2011tight})? \emph{\textbf{2.}} \emph{Communication complexity of $r$-ary Hidden Hypermatching.} Our communication protocol behind Result~\ref{res:upperBHM} relies on the promise on the inputs, i.e., on the string $w\in\mathbb{Z}_r^{\alpha n/t}$ that either satisfies $w=Mx$ or is uniformly random. Is there a protocol with the same complexity which does not explicitly use $w$? More generally, what is the communication complexity of a relational version of the $r$-$\ensuremath{\mathsf{HH}}(\alpha,2,n)$ problem in which Bob outputs a hyperedge and the corresponding entry of $Mx$? Moreover, is it possible to match the quantum lower bounds from Result~\ref{res:lowerHHM}? \emph{\textbf{3.}} \emph{Better bounds on streaming algorithms.} What is the quantum space complexity of approximating Max-Cut or Unique Games? Is it possible to obtain some saving in space complexity, e.g.\ an upper bound of $O(n^{1-2/t})$ that matches our lower bound, or is the quantum space complexity $\Omega(n)$? The former would be interesting because known advantages in quantum space complexity are only a handful (and for contrived problems), and the latter would require proving new quantum lower bounds for the communication problems introduced in~\cite{kapralov20171+,kapralov2019optimal,chou2021approximabilityA,chou2021approximabilityB,chou2021linear}. \emph{\textbf{4.}} \emph{Generalized hypercontractivity.} Another open question regards our main Result~\ref{result:hyper}, which shows a form of $(2,p)$-hypercontractivity, since the result works for all Schatten $p$-norms with $p\in [1,2]$. Can we prove a general $(q,p)$-hypercontractive statement for matrices, first for matrix-valued functions over $\{0,1\}$, and then further generalize it to functions over $\ensuremath{\mathbb{Z}}_r$? Proving this might also require a generalization of the powerful inequality of Ball, Carlen and Lieb~\cite{ball1994sharp} in a different direction. \paragraph{Acknowledgements.} SA first thanks T.S.\ Jayram for introducing him to this problem (and for several discussions thereafter) on proving quantum bounds for streaming algorithms while participating in the program ``Quantum Wave in Computing'' held at the Simons Institute for the Theory of Computing.
We thank Jop Bri\"et and Ronald de Wolf for many clarifications and discussions regarding hypercontractivity and $\mathsf{LDC}$s, and Mario Szegedy for discussions during the initial stages of this project. We are also very thankful to Keith Ball and Eric Carlen for their help in understanding their proof of uniform convexity for trace ideals. JFD was supported by the Singapore National Research Foundation, the Prime Minister's Office, Singapore and the Ministry of Education, Singapore under the Research Centres of Excellence programme under research grant R 710-000-012-135. \section{Preliminaries} \label{sec:sec2} Let $[n]:=\{1,\ldots,n\}$. For $r\in\ensuremath{\mathbb{Z}}$, $r\geq 2$, we let $\ensuremath{\mathbb{Z}}_r:=\{0,\ldots,r-1\}$ be the ring with addition and multiplication modulo $r$, and let $\omega_r := e^{2\pi i/r}$. Given $S\in\ensuremath{\mathbb{Z}}_r^n$, we write $|S|:= |\{i\in[n]: S_i\neq 0\}|$ for its Hamming weight. Let $\operatorname{D}(\mathbb{C}^m)$ be the set of all quantum states over $\mathbb{C}^m$, i.e., the set of positive semi-definite matrices with trace $1$. For a matrix $M\in \ensuremath{\mathbb{C}}^{m\times m}$, the (unnormalized) Schatten $p$-norm is defined as $\|M\|_p := (\operatorname{Tr}|M|^p)^{1/p} = \big(\sum_i\sigma_i(M)^p\big)^{1/p}$, where $\{\sigma_i(M)\}_i$ are the singular values of $M$, i.e., the eigenvalues of the positive semi-definite operator $|M|:=\sqrt{M^\dagger M}$. We also define the normalized Schatten $p$-norm as $\|M\|_p := \big(\frac{1}{m}\operatorname{Tr}|M|^p\big)^{1/p} = \big(\frac{1}{m}\sum_i\sigma_i(M)^p\big)^{1/p}$. Throughout the paper we shall use the unnormalized Schatten norm, unless stated otherwise. Given a vector $v\in\mathbb{C}^m$, its $p$-norm is $\|v\|_p := \big(\sum_{i=1}^m |v_i|^p\big)^{1/p}$. Given two probability distributions $P$ and $Q$ on the same finite set, their total variation distance is $\|P-Q\|_{\text{tvd}} := \sum_i |P(i) - Q(i)|$ (we might abuse notation and use random variables instead of their probability distributions in $\|\cdot\|_{\text{tvd}}$). For a probability $p = 1/r + \varepsilon$ with fixed $r\in\ensuremath{\mathbb{Z}}$, we refer to $\varepsilon$ as its \emph{advantage}, and to $2\varepsilon$ as its \emph{bias}. The Fourier transform of a matrix-valued function $f:\ensuremath{\mathbb{Z}}_r^n\to\mathbb{C}^{m\times m}$ is a function $\widehat{f}:\ensuremath{\mathbb{Z}}_r^n\to\mathbb{C}^{m\times m}$ defined by \begin{align*} \widehat{f}(S) = \frac{1}{r^n}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}f(x)\omega_r^{-S\cdot x}, \end{align*} where $S\cdot x = \sum_{i=1}^n S_ix_i$ is a sum over $\ensuremath{\mathbb{Z}}_r$. Here the Fourier coefficients $\widehat{f}(S)$ are $m\times m$ complex matrices and we can write $f:\ensuremath{\mathbb{Z}}_r^n\to\mathbb{C}^{m\times m}$ as \begin{align*} f(x) = \sum_{S\in\ensuremath{\mathbb{Z}}_r^n}\widehat{f}(S)\omega_r^{S\cdot x}. \end{align*} We will need the Holevo-Helstrom theorem~\cite{helstrom1976quantum}, which characterizes the optimal success probability of distinguishing between two quantum states. \begin{fact}[{\cite[Theorem~3.4]{watrous2018theory}}] \label{lem:lem3.5.c3} Let $\rho_0,\rho_1$ be two quantum states that appear with probabilities $p$ and $1-p$, respectively. The optimal success probability of predicting which state it is by a POVM~is % \begin{align*} \frac{1}{2} + \frac{1}{2}\|p\rho_0 - (1-p)\rho_1\|_1.
\end{align*} \end{fact} \section{Hypercontractive Inequality} In this section we prove our main result, a hypercontractive inequality for matrix-valued functions over $\ensuremath{\mathbb{Z}}_r$, generalizing a result from~\cite{ben2008hypercontractive}. The proof is by induction on $n$; the base case $n=1$, proven in Section~\ref{sec:sec3.1}, is a generalization of the inequality of Ball, Carlen and Lieb~\cite{ball1994sharp} to $r$ matrices. After this, the induction is fairly straightforward and is described in Section~\ref{sec:sec3.2}. \subsection{Generalizing Ball, Carlen and Lieb} \label{sec:sec3.1} We first state the powerful inequality of Ball, Carlen and Lieb~\cite[Theorem~1]{ball1994sharp}. \begin{theorem}[Optimal 2-uniform convexity] \label{thr:thr1} Let $A,B\in\mathbb{C}^{n\times n}$, and $p\in[1,2]$. Then % $$ \left(\frac{\|A+B\|_p^p + \|A-B\|_p^p}{2}\right)^{2/p} \geq \|A\|_p^2 + (p-1)\|B\|_p^2. $$ \end{theorem} As previously mentioned in the introduction, this inequality was first proven by Tomczak-Jaegermann~\cite{tomczak1974moduli} for $p\leq 4/3$, before being generalized by Ball, Carlen and Lieb~\cite{ball1994sharp} to all $p\in [1,2]$ in 1994. Since then it has found several applications~\cite{carlen1993optimal,duchi2010composite,lee2004embedding,naor2016spectral}. The above result can be recast in a slightly different way. \begin{theorem} \label{thr:alternativeBCL} Let $p\in[1,2]$ and $Z,W\in\mathbb{C}^{n\times n}$ be such that $\operatorname{Tr}[|Z|^{p-1}ZW^\dagger] =\operatorname{Tr}[|Z|^{p-1}WZ^\dagger] = 0$ (where $|Z|^{p-1}=(ZZ^\dagger)^{(p-1)/2}$).~Then % \begin{align*} \|Z+W\|_p^2 \geq \|Z\|_p^2 + (p-1)\|W\|_p^2. \end{align*} \end{theorem} Theorem~\ref{thr:alternativeBCL} is implicit in the proof of~\cite[Theorem~1]{ball1994sharp}, and it is where most of the difficulty lies, while the reduction from Theorem~\ref{thr:thr1} to Theorem~\ref{thr:alternativeBCL} is done by defining \begin{align*} Z = \begin{bmatrix} A & 0 \\ 0 & A \end{bmatrix}, \qquad W = \begin{bmatrix} B & 0 \\ 0 & -B \end{bmatrix}. \end{align*} Nonetheless, Theorem~\ref{thr:alternativeBCL} holds more generally for any $Z,W\in\mathbb{C}^{n\times n}$ that satisfy $\operatorname{Tr}[|Z|^{p-1}ZW^\dagger] =\operatorname{Tr}[|Z|^{p-1}WZ^\dagger]=0$. By using this result, we can prove the following generalization of Theorem~\ref{thr:thr1}. \begin{theorem}[A generalization of~\cite{ball1994sharp}] \label{eq:conjectureballgen} Let $r\in\mathbb{Z}$, $r\geq 2$. Let $\omega_r = e^{2\pi i/r}$, $A_0,\ldots,A_{r-1}\in \ensuremath{\mathbb{C}}^{n\times n}$ and $p\in [1,2]$. Then % \begin{subequations} \begin{align} \left(\frac{1}{r} \sum_{j=0}^{r-1}\|A_j\|_p^p\right)^{2/p} &\geq \left\| \frac{1}{r}\sum_{j=0}^{r-1} A_j \right\|_p^2 + \frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}\sum_{k=1}^{r-1} \left\| \frac{1}{r}\sum_{j=0}^{r-1} \omega_r^{-jk}A_j \right\|_p^2,\\ \left(\frac{1}{r} \sum_{k=0}^{r-1}\left\|\sum_{j=0}^{r-1} \omega_r^{jk} A_j\right\|_p^p\right)^{2/p} &\geq\left\|A_0 \right\|_p^2 + \frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}\sum_{k=1}^{r-1} \left\| A_k\right\|_p^2.\label{eq:genball} \end{align} \end{subequations} \end{theorem} Notice that for $r=2$ we recover Theorem~\ref{thr:thr1}, since $\frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)} = p-1$. \begin{proof} In order to prove this theorem, first note that both inequalities are equivalent: just define $A'_k = \frac{1}{r}\sum_{j=0}^{r-1} \omega_r^{-jk}A_j \iff A_k = \sum_{j=0}^{r-1}\omega_r^{jk}A'_j$. Therefore we shall focus on Eq.~(\ref{eq:genball}).
In order to prove it, let us first define the $rn\times rn$ matrices % \begin{align} \label{eq:matricesY} Z_j := \operatorname{diag}(\{\omega_r^{jk}A_j\}_{k=0}^{r-1}) = \begin{bmatrix} A_j & 0 & 0 & \dots & 0\\ 0 & \omega_r^{j}A_j & 0 & \dots & 0\\ 0 & 0 & \omega_r^{2j}A_j & \dots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \dots & \omega_r^{(r-1)j}A_{j} \end{bmatrix} \end{align} % for $j\in\{0,\dots,r-1\}$. Now, since the trace is additive for block matrices, we have % \begin{align} \label{eq:YandAmatrices} \operatorname{Tr}\left|\sum_{j=0}^{r-1}Z_j\right|^p = \sum_{k=0}^{r-1}\operatorname{Tr}\left|\sum_{j=0}^{r-1} \omega_r^{jk} A_j\right|^p. \end{align} % Moreover, observe that % \begin{align*} \|Z_j\|_p^2 = \left(\sum_{k=0}^{r-1}\operatorname{Tr}|\omega_r^{jk}A_j|^p\right)^{2/p} = (r\operatorname{Tr}|A_j|^p)^{2/p} = r^{2/p}\|A_j\|_p^2. \end{align*} % Therefore we can rewrite Eq.~(\ref{eq:genball}) as % \begin{align*} \left\|\sum_{j=0}^{r-1}Z_j\right\|^2_p \geq \|Z_0\|_p^2 + \frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}\sum_{j=1}^{r-1}\|Z_j\|_p^2. \end{align*} % The above can be proven by repeated applications of Theorem~\ref{thr:alternativeBCL} as follows: consider a permutation $(k_1,\ldots,k_{r-1})$ of $[r-1]$. Note that $\operatorname{Tr}[|Z_j|^{p-1}Z_jZ_k^\dagger] = \operatorname{Tr}[|Z_j|^{p-1}Z_kZ_j^\dagger] = 0$ for any $j\neq k$, since the $\ell$-th diagonal block of these products carries a phase $\omega_r^{\pm(j-k)\ell}$ and $\sum_{\ell=0}^{r-1}\omega_r^{(j-k)\ell}=0$ whenever $j\neq k$. Hence (defining $k_0:=0$) \begin{align*} \operatorname{Tr}\left[|Z_{k_j}|^{p-1}Z_{k_j}\left(\sum_{l=j+1}^{r-1}Z_{k_l}\right)^\dagger\right] = \operatorname{Tr}\left[|Z_{k_j}|^{p-1}\left(\sum_{l=j+1}^{r-1}Z_{k_l}\right)Z_{k_j}^\dagger\right] = 0 \end{align*} % for every $j\in\{0,1,\dots,r-2\}$, meaning that Theorem~\ref{thr:alternativeBCL} can be applied, which implies % \begin{align*} \left\|\sum_{j=0}^{r-1}Z_j\right\|^2_p &\geq \|Z_0\|_p^2 + (p-1)\left\| \sum_{j=1}^{r-1}Z_j\right\|^2_p\\ &\geq \|Z_0\|_p^2 + (p-1)\|Z_{k_1}\|_p^2 + (p-1)^2\left\| \sum_{j=2}^{r-1}Z_{k_j}\right\|^2_p\geq \|Z_0\|_p^2 + \sum_{j=1}^{r-1}(p-1)^j\|Z_{k_j}\|_p^2. \end{align*} % Averaging the above inequality over all the $(r-1)!$ permutations of the set $[r-1]$, we obtain % \begin{align*} \left\|\sum_{j=0}^{r-1}Z_j\right\|^2_p &\geq \|Z_0\|_p^2 + \frac{1}{(r-1)!}\sum_{j=1}^{r-1}\|Z_j\|_p^2\sum_{k=1}^{r-1}(r-2)!(p-1)^k\\ &= \|Z_0\|_p^2 + \frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}\sum_{j=1}^{r-1}\|Z_j\|_p^2, \end{align*} % proving our theorem statement. \end{proof} \begin{remark} \label{rem:1} It is not hard to see that $\frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)} \geq \frac{p-1}{r-1}$ and $\lim_{p\to 2}\frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)} = 1$. \end{remark} \noindent Observe that $t\mapsto t^{p/2}$ is concave for $p\in[1,2]$, hence Theorem~\ref{eq:conjectureballgen} implies the seemingly weaker \begin{align} \label{eq:BCLweaker} \frac{1}{r} \sum_{k=0}^{r-1}\left\|\sum_{j=0}^{r-1} \omega_r^{jk} A_j\right\|_p^2 &\geq \left\|A_0 \right\|_p^2 + \frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}\sum_{k=1}^{r-1} \left\| A_k\right\|_p^2 \end{align} for $p\in[1,2]$. Nonetheless, the above inequality also implies Theorem~\ref{eq:conjectureballgen} (this fact was already pointed out for $r=2$ by~\cite{ball1994sharp}). Indeed, consider again the $rn\times rn$ matrices $Z_j$ from Eq.~\eqref{eq:matricesY}.
Then, similar to Eq.~\eqref{eq:YandAmatrices} (which corresponds to the $\ell=0$ case below), for any $\ell\in\ensuremath{\mathbb{Z}}_r$ we have \begin{align*} \operatorname{Tr}\left|\sum_{j=0}^{r-1}\omega_r^{j\ell} Z_j\right|^p = \sum_{k=0}^{r-1}\operatorname{Tr}\left|\sum_{j=0}^{r-1} \omega_r^{jk} A_j\right|^p \implies \left\|\sum_{j=0}^{r-1}\omega_r^{j\ell} Z_j\right\|_p^2 = \left(\sum_{k=0}^{r-1}\left\|\sum_{j=0}^{r-1} \omega_r^{jk} A_j\right\|^p_p\right)^{2/p}. \end{align*} Since $\|Z_j\|^2_p = r^{2/p}\|A_j\|^2_p$ for $j\in\ensuremath{\mathbb{Z}}_r$, Eq.~\eqref{eq:BCLweaker} implies (define $\zeta := \frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}$ for simplicity) \begin{align*} \left\|A_0 \right\|_p^2 + \zeta\sum_{k=1}^{r-1} \left\| A_k\right\|_p^2 = \frac{\left\|Z_0 \right\|_p^2}{r^{2/p}} + \zeta\sum_{k=1}^{r-1} \frac{\left\| Z_k\right\|_p^2}{r^{2/p}} \leq \frac{r^{-2/p}}{r} \sum_{\ell=0}^{r-1}\left\|\sum_{j=0}^{r-1} \omega_r^{j\ell} Z_j\right\|_p^2 = \left(\frac{1}{r}\sum_{k=0}^{r-1}\left\|\sum_{j=0}^{r-1} \omega_r^{jk} A_j\right\|^p_p\right)^{2/p}, \end{align*} which is exactly Theorem~\ref{eq:conjectureballgen}. \subsection{Proving the $(2,p)$-hypercontractive inequality over $\ensuremath{\mathbb{Z}}_r$} \label{sec:sec3.2} Having proven the base case of our main theorem statement, we are now ready to prove our hypercontractivity theorem for matrix-valued functions over $\ensuremath{\mathbb{Z}}_r$. \begin{theorem} \label{thm:genmathypercontractivity} Let $p\in [1,2]$. For every $f:\ensuremath{\mathbb{Z}}_r^n\rightarrow \mathbb{C}^{m\times m}$ and % \begin{align*} \rho \leq \sqrt{\frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}}, \end{align*} % we have $$ \left(\sum_{S\in \ensuremath{\mathbb{Z}}_r^n} \rho^{2|S|} \|\widehat{f}(S)\|_p^2\right)^{1/2} \leq \left(\frac{1}{r^n}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\|f(x)\|_p^p\right)^{1/p}, $$ where $|S| := |\{i\in[n]:S_i\neq 0\}|$. \end{theorem} \begin{proof} For ease of notation, define $\zeta := \frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}$. It suffices to prove the inequality for $\rho = \sqrt{\zeta}$. Our proof closely follows the one in~\cite{ben2008hypercontractive} and is by induction on $n$. For $n=1$, the desired statement is \begin{align} \label{eq:basecaseinduction} \sum_{S\in \ensuremath{\mathbb{Z}}_r} \zeta^{|S|} \|\widehat{f}(S)\|_p^2\leq \left(\frac{1}{r}\sum_{x\in\ensuremath{\mathbb{Z}}_r}\|f(x)\|_p^p\right)^{2/p}. \end{align} Consider the matrices $A_0,\dots,A_{r-1}$ such that $f(k)=\sum_{j=0}^{r-1}\omega_r^{j k}A_j$ for all $k\in\ensuremath{\mathbb{Z}}_r$, so that Eq.~\eqref{eq:basecaseinduction} can be written as \begin{align*} \left\|A_0 \right\|_p^2 + \zeta\sum_{k=1}^{r-1} \left\| A_k\right\|_p^2 \leq \left(\frac{1}{r}\sum_{k=0}^{r-1}\left\|\sum_{j=0}^{r-1}\omega_r^{j k}A_j\right\|_p^p\right)^{2/p}, \end{align*} using the fact that $\widehat{f}(j) = \frac{1}{r}\sum_{k=0}^{r-1}f(k)\omega_r^{-jk} = A_j$, which is precisely Theorem~\ref{eq:conjectureballgen}. We now assume the inequality holds for $n$ and prove it for $n+1$. Let $f:\ensuremath{\mathbb{Z}}_r^{n+1}\rightarrow \mathbb{C}^{m\times m}$ and, for $i\in \{0,\ldots,r-1\}$, let $g_i=f\vert_{x_{n+1}=i}$ be the function obtained by fixing the last coordinate of the input to $i$. By the induction hypothesis we have that, for every $i\in \{0,\ldots,r-1\}$ and $p\in [1,2]$, \begin{align*} \sum_{S\in \ensuremath{\mathbb{Z}}_r^n} \zeta^{|S|} \|\widehat{g_i}(S)\|_p^2\leq \left(\frac{1}{r^n}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\|g_i(x)\|_p^p\right)^{2/p}.
\end{align*} We now take the $\ell_p$ average of each of these $r$ inequalities to obtain \begin{align} \label{eq:ellqaverage} \left(\frac{1}{r}\sum_{i=0}^{r-1}\left(\sum_{S\in \ensuremath{\mathbb{Z}}_r^n} \zeta^{|S|} \|\widehat{g_i}(S)\|_p^2\right)^{p/2}\right)^{2/p}&\leq \left( \frac{1}{r}\sum_{i=0}^{r-1}\frac{1}{r^n}\sum_{x\in \ensuremath{\mathbb{Z}}_r^n}\|g_i(x)\|_p^p\right)^{2/p} = \left(\frac{1}{r^{n+1}}\sum_{x\in \ensuremath{\mathbb{Z}}_r^{n+1}} \|f(x)\|_p^p\right)^{2/p}. \end{align} The right-hand side of the inequality above is exactly the right-hand side of the desired hypercontractive inequality. Below, we show how to lower bound the left-hand side of the inequality above by the desired left-hand side. To do so, we will need the following form of Minkowski's inequality. \begin{lemma}[{Minkowski's inequality, \cite[Theorem 26]{hardy1952j}}] \label{lem:minkowski} For any $r_1\times r_2$ matrix whose rows are given by $u_1,\dots,u_{r_1}$ and whose columns are given by $v_1,\dots,v_{r_2}$, and any $1\leq q_1\leq q_2\leq \infty$, % \begin{align*} \left\|\left(\|v_1\|_{q_2},\dots,\|v_{r_2}\|_{q_2}\right)\right\|_{q_1} \leq \left\|\left(\|u_1\|_{q_1},\dots,\|u_{r_1}\|_{q_1}\right)\right\|_{q_2}. \end{align*} \end{lemma} Now, consider the $r^n\times r$ matrix whose entries are given by $ c_{S,i}=r^{n/2}\big\|\zeta^{|S|/2} \widehat{g_i}(S)\big\|_p $ for every $i\in \{0,\ldots,r-1\}$ and $S\in \ensuremath{\mathbb{Z}}_r^n$. Then the square root of the left-hand side of Eq.~\eqref{eq:ellqaverage} can be written as \begin{align} \label{eq:afterminkowski} \left(\frac{1}{r}\sum_{i=0}^{r-1}\left(\sum_{S\in \ensuremath{\mathbb{Z}}_r^n} \zeta^{|S|} \|\widehat{g_i}(S)\|_p^2\right)^{p/2}\right)^{1/p}&= \left(\frac{1}{r}\sum_{i=0}^{r-1}\left(\frac{1}{r^n}\sum_{S\in \ensuremath{\mathbb{Z}}_r^n} c_{S,i}^2\right)^{p/2}\right)^{1/p}\nonumber\\ &\geq \left(\frac{1}{r^n}\sum_{S\in \ensuremath{\mathbb{Z}}_r^n}\left(\frac{1}{r}\sum_{i=0}^{r-1} c_{S,i}^p\right)^{2/p}\right)^{1/2}\nonumber\\ &= \left(\sum_{S\in \ensuremath{\mathbb{Z}}_r^n}\zeta^{|S|}\left(\frac{1}{r}\sum_{i=0}^{r-1} \|\widehat{g_i}(S)\big\|_p^p\right)^{2/p}\right)^{1/2}, \end{align} where the first inequality follows from Lemma~\ref{lem:minkowski} with $q_1 = p$ and $q_2=2$. Now, for a fixed $S\in\ensuremath{\mathbb{Z}}_r^n$, we use the base case $n=1$, i.e., Eq.~\eqref{eq:basecaseinduction}, on the functions $h(i)=\widehat{g_i}(S)$ in order to get $$ \left(\frac{1}{r}\sum_{i=0}^{r-1}\|\widehat{g_i}(S)\|_p^p\right)^{2/p}\geq \sum_{i=0}^{r-1} \zeta^{|i|} \left\|\frac{1}{r}\sum_{j=0}^{r-1}h(j)\omega_r^{-ij}\right\|_p^2 = \sum_{i=0}^{r-1} \zeta^{|i|} \left\|\frac{1}{r}\sum_{j=0}^{r-1}\widehat{g_j}(S)\omega_r^{-ij}\right\|_p^2. $$ Plugging this back into Eq.~\eqref{eq:afterminkowski}, we have \begin{align*} \left(\sum_{S\in \ensuremath{\mathbb{Z}}_r^n}\zeta^{|S|}\left(\frac{1}{r}\sum_{i=0}^{r-1} \|\widehat{g_i}(S)\|_p^p\right)^{2/p}\right)^{1/2} &\geq \left(\sum_{S\in \ensuremath{\mathbb{Z}}_r^{n}}\sum_{i=0}^{r-1}\zeta^{|S|+|i|}\left\|\frac{1}{r}\sum_{j=0}^{r-1}\widehat{g_j}(S)\omega_r^{-i j}\right\|_p^2\right)^{1/2} \\ &= \left(\sum_{S\in \ensuremath{\mathbb{Z}}_r^{n+1}}\zeta^{|S|}\|\widehat{f}(S)\|_p^2\right)^{1/2}, \end{align*} where we used the fact that $g_j=f\vert_{x_{n+1}=j}$, so, for every $i\in\ensuremath{\mathbb{Z}}_r$ and $S\in\ensuremath{\mathbb{Z}}_r^n$, we have that $\widehat{f}(S,i)=\frac{1}{r}\sum_{j=0}^{r-1}\widehat{g_j}(S)\omega_r^{-i j}$.
The lower bound we obtained above is exactly the left-hand side of the desired hypercontractive inequality, which proves the theorem statement. \end{proof} \begin{remark}[Comparison with hypercontractivity for real numbers] \label{rem:rem_realfunctions} For real functions $f:\ensuremath{\mathbb{Z}}_r^n\rightarrow \ensuremath{\mathbb{R}}$, it is known that~\cite{latala2000between,wolff2007hypercontractivity} (see also~\cite[Theorem~10.18]{o2014analysis}) $$ \left(\sum_{S\in \ensuremath{\mathbb{Z}}_r^n} \rho^{2|S|}|\widehat{f}(S)|^2\right)^{1/2} \leq \left(\frac{1}{r^n}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}|f(x)|^p\right)^{1/p}, $$ where $\rho \leq \sqrt{\frac{(r-1)^{1-1/p} - (r-1)^{-(1-1/p)}}{(r-1)^{1/p} - (r-1)^{-1/p}}}$. Moreover, this bound on $\rho$ is perfectly sharp, meaning that our bound $\rho \leq \sqrt{\frac{(p-1)(1-(p-1)^{r-1})}{(r-1)(2-p)}}$ in Theorem~\ref{thm:genmathypercontractivity} can possibly be improved. \end{remark} \section{Hidden Hypermatching Problem} The Boolean Hidden Matching ($\mathsf{BHM}$) problem is a canonical problem in one-way communication complexity. Here, Alice is given a string $x\in\{0,1\}^n$, while Bob is given a string $w\in\{0,1\}^{\alpha n/2}$ and a sequence of $\alpha n/2$ disjoint pairs $(i_1,j_1),\dots,(i_{\alpha n/2}, j_{\alpha n/2})\in[n]^2$ (called an $\alpha$-partial matching), where $\alpha\in(0,1]$. Let $z\in\{0,1\}^{\alpha n/2}$ be the string defined as $z_\ell = x_{i_\ell}\oplus x_{j_\ell}$ for $\ell\in[\alpha n/2]$. It is promised that $z\oplus w = b^{\alpha n/2}$ for some $b\in\{0,1\}$. By sending a single message from Alice to Bob, their task is to output $b$, i.e., to decide whether $z\oplus w$ equals the all-$0$ string or the all-$1$ string. The $\mathsf{BHM}$ problem was proposed by Bar-Yossef \emph{et al.}~\cite{bar2004exponential}, where they showed a simple quantum protocol using only $O(\log{n})$ qubits of communication. Later, Gavinsky \emph{et al.}~\cite{gavinsky2007exponential}, by using Fourier techniques, in particular the inequality of Kahn, Kalai and Linial~\cite{kahn1989influence}, proved that any classical protocol needs to communicate $\Omega(\sqrt{n})$ bits in order to solve the problem. Since then, many generalizations of the $\mathsf{BHM}$ problem were proposed. Verbin and Yu~\cite{verbin2011streaming} extended the $\alpha n/2$ disjoint pairs received by Bob to $\alpha n/t$ disjoint $t$-tuples $(M_{1,1},\dots,M_{1,t}),\dots,(M_{\alpha n/t,1},\dots,M_{\alpha n/t,t})$ (called a ``hypermatching''). The main task now is to compute the parity $z_\ell = \bigoplus_{k=1}^t x_{M_{\ell,k}}$ of a ``hyperedge''. Verbin and Yu named the resulting problem Boolean Hidden Hypermatching ($2\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$),\footnote{We use the notation $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ for simplicity in exposition throughout.} proved a lower bound $\Omega(n^{1-1/t})$ on any classical communication protocol and used this to bound the amount of space required in streaming algorithms. A quantum lower bound $\Omega(n^{1-2/t})$ on the $2\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ problem was later proven by Shi, Wu and Yu~\cite{shi2012limits}. Subsequently, Kapralov, Khanna and Sudan~\cite{kapralov2014streaming} proposed the Boolean Hidden Partition, where Bob does not receive a matching anymore, but the edges of any graph $G$.
It is promised that either $Mx=w$, where $M$ is the edge incidence matrix of $G$, or $w$ is taken uniformly at random independently of $x$, and Alice and Bob's task is to decide which is the correct case. In another line of work, Guruswami and Tao~\cite{DBLP:conf/approx/GuruswamiT19} introduced the $r$-ary Hidden Matching ($r\text{-}\ensuremath{\mathsf{HH}}(\alpha,2,n)$) problem, where now $x$ and $w$ are over $\ensuremath{\mathbb{Z}}_r$ instead of $\{0,1\}$, Bob receives a matching $M$ (and not a general graph), and either $Mx=w$ or $w$ is drawn uniformly at random. Finally, Doriguello and Montanaro~\cite{doriguello2020exponential} expanded the $2\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ problem to computing a fixed Boolean function on the hyperedges of Bob's hypermatching instead of the Parity function. Here we shall consider the standard Hidden Hypermatching problem over a larger alphabet. In the following, an $\alpha$-partial $t$-hypermatching $M\in\mathcal{M}_{t,n}^\alpha$ on $n$ vertices is defined as a sequence of $\alpha n/t$ disjoint hyperedges $(M_{1,1},\dots,M_{1,t}),\dots,(M_{\alpha n/t, 1},\dots, M_{\alpha n/t, t})\in[n]^t$ with $t$ vertices each, where $\mathcal{M}_{t,n}^\alpha$ is the set of all such hypermatchings. If $\alpha = 1$, we shall write $\mathcal{M}_{t,n}$. \begin{definition} \label{def:hypermatching} Let $n,t\in\mathbb{N}$ be such that $t|n$ and $\alpha\in(0,1]$. In the $r$-ary Hidden Hypermatching $(r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n))$ problem, Alice gets $x\in\ensuremath{\mathbb{Z}}_r^n$, Bob gets an $\alpha$-partial $t$-hypermatching $M\in\mathcal{M}_{t,n}^\alpha$ and a string $w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}$. The hyperedges of $M$ are $(M_{1,1},\dots,M_{1,t}),\dots,(M_{\alpha n/t, 1},\dots, M_{\alpha n/t, t})$. Let $M\in\{0,1\}^{\alpha n/t \times n}$ also denote the incidence matrix of Bob's hypermatching. Consider the~distributions: % \begin{enumerate} \item under the $\ensuremath{\mathsf{YES}}$ distribution $\mathcal{D}^{\ensuremath{\mathsf{YES}}}$, $w=Mx$ (where the matrix product $Mx$ is over $\ensuremath{\mathbb{Z}}_r$); \item under the $\ensuremath{\mathsf{NO}}$ distribution $\mathcal{D}^{\ensuremath{\mathsf{NO}}}$, $w$ is uniformly random in $\ensuremath{\mathbb{Z}}_r^{\alpha n/t}$. \end{enumerate} % In the $r$-ary Hidden Hypermatching problem, Alice sends a message to Bob, who needs to decide with high probability if $w$ is drawn from $\mathcal{D}^{\ensuremath{\mathsf{YES}}}$ or $\mathcal{D}^{\ensuremath{\mathsf{NO}}}$. \end{definition} \subsection{Quantum protocol for $r$-ary Hidden Hypermatching} For $t=2$, we obtain an efficient quantum communication protocol to solve the $r$-ary Hidden Hypermatching problem. \begin{theorem} \label{thr:rary-upper} Given $\varepsilon > 0$, there is a protocol for the $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,2,n)$ problem with one-sided error $\varepsilon$ and $O(\frac{1}{\alpha}\log{(nr)}\log(1/\varepsilon))$ qubits of communication from Alice to Bob. \end{theorem} \begin{proof} Let $M\in\mathcal{M}^\alpha_{2,n}$ be Bob's matching with edges $(M_{1,1},M_{1,2}),\dots,(M_{\alpha n/2,1},M_{\alpha n/2,2})$. Alice sends the following state to Bob, % \begin{align*} \frac{1}{\sqrt{n}}\sum_{i=1}^{n} |x_i,i\rangle, \end{align*} % who measures it with the POVM $\{E_1,\dots,E_{\alpha n/2}, \mathbb{I}-\sum_{i=1}^{\alpha n/2} E_i\}$, where % \begin{align*} E_i := |M_{i,1}\rangle\langle M_{i,1}| + |M_{i,2}\rangle\langle M_{i,2}| \end{align*} % for $i\in \{1,\dots,\alpha n/2\}$.
With probability $1-\alpha$ the POVM returns the outcome associated with $\mathbb{I}-\sum_{i=1}^{\alpha n/2}E_i$ (note that each $E_i$ acts on the index register only and the matching covers an $\alpha$-fraction of the indices), and with probability $\alpha$ Bob obtains an outcome $E_i$ with $i\in[\alpha n/2]$, in which case he is left with the state % \begin{align*} |\psi\rangle := \frac{1}{\sqrt{2}}(|x_{M_{i,1}},M_{i,1}\rangle + |x_{M_{i,2}},M_{i,2}\rangle). \end{align*} % By repeating this procedure $O(1/\alpha)$ times (with Alice sending a fresh copy of the state each time), Bob obtains an outcome $i\in[\alpha n/2]$ with high probability. For ease of notation, we relabel $M_{i,1} = 0$ and $M_{i,2} = 1$ (note that Bob knows both $M_{i,1}$ and $M_{i,2}$ explicitly). Bob now attaches a $\lceil\log_2{r}\rceil$-qubit register in the state $|0\rangle$ to $|\psi\rangle$ and applies a Fourier transform $Q_r$ over $\ensuremath{\mathbb{Z}}_r$ to it to obtain \begin{align*} |0\rangle|\psi\rangle \to \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1} |k\rangle|\psi\rangle. \end{align*} From now on we shall consider a parameter $\ell\in\mathbb{Z}_r$ to be determined later. Let $X$ be the usual Pauli operator and let $S_\ell$ and $P$ be the shift and phase operators over $\ensuremath{\mathbb{Z}}_r$ defined as $S_\ell|k\rangle = |\ell-k\rangle$ and $P|k\rangle = \omega_r^{k}|k\rangle$ for $k\in\mathbb{Z}_r$. Let $C_\ell := PS_\ell P\otimes X$. Bob applies the controlled unitary $U_\ell$ defined as $U_\ell|k\rangle|\psi\rangle = |k\rangle C_\ell^k|\psi\rangle$ on his state, followed by an inverse Fourier transform $Q^\dagger_r$ on his first register to get % \begin{align*} \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1} U_\ell|k\rangle|\psi\rangle = \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1} |k\rangle C_\ell^k|\psi\rangle \stackrel{Q^\dagger_r\otimes \ensuremath{\mathbb{I}}}{\longrightarrow} \frac{1}{r}\sum_{j=0}^{r-1}\sum_{k=0}^{r-1}\omega_r^{-jk} |j\rangle C_\ell^k|\psi\rangle. \end{align*} % Let us calculate $C_\ell|\psi\rangle$ and $C_\ell^2|\psi\rangle$. We have % \begin{align} C_\ell|\psi\rangle &= \frac{1}{\sqrt{2}}(PS_\ell P \otimes X)(|x_0,0\rangle + |x_1,1\rangle)\nonumber\\ &= \frac{1}{\sqrt{2}}(PS_\ell \otimes \mathbb{I})(\omega_r^{x_0}|x_0,1\rangle + \omega_r^{x_1}|x_1,0\rangle)\nonumber\\ &= \frac{1}{\sqrt{2}}(P \otimes \mathbb{I})(\omega_r^{x_0}|\ell -x_0,1\rangle + \omega_r^{x_1}|\ell -x_1,0\rangle)\nonumber\\ &= \frac{\omega_r^{\ell}}{\sqrt{2}}(|\ell -x_1,0\rangle + |\ell -x_0,1\rangle)\label{eq:bobstate} \end{align} % and % \begin{align*} C_\ell^2|\psi\rangle &= \frac{\omega_r^{\ell}}{\sqrt{2}}(PS_\ell P \otimes X)(|\ell -x_1,0\rangle + |\ell -x_0,1\rangle)\\ &= \frac{\omega_r^{\ell}}{\sqrt{2}}(PS_\ell \otimes \mathbb{I})(\omega_r^{\ell - x_1}|\ell -x_1,1\rangle + \omega_r^{\ell - x_0}|\ell -x_0,0\rangle)\\ &= \frac{\omega_r^{\ell}}{\sqrt{2}}(P \otimes \mathbb{I})(\omega_r^{\ell - x_1}|x_1,1\rangle + \omega_r^{\ell - x_0}|x_0,0\rangle)\\ &= \omega_r^{2\ell}|\psi\rangle. \end{align*} % We can see from the above that $C_\ell^{2k}|\psi\rangle = \omega_r^{2k\ell}|\psi\rangle$. By defining $\Delta_\ell := \ell - (x_0+x_1)$ and $\delta_k =1$ if $k$ is odd and $0$ otherwise, Bob's final state is % \begin{align} \frac{1}{r}\sum_{j=0}^{r-1}\sum_{k=0}^{r-1}\omega_r^{k(\ell-j)}|j\rangle\frac{1}{\sqrt{2}}(|x_0 + \Delta_\ell\delta_k,0\rangle + |x_1 + \Delta_\ell\delta_k,1\rangle).\label{eq:bobfinal} \end{align} Now observe that, if $\ell=x_0+x_1$, then $C_\ell|\psi\rangle = \omega_r^{\ell}|\psi\rangle$ in Eq.~\eqref{eq:bobstate}. This means that Bob's state in Eq.~\eqref{eq:bobfinal} becomes $|x_0+x_1\rangle|\psi\rangle$, and if he measures his first register, he obtains $x_0+x_1~(\text{mod}~r)$ with certainty.
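To illustrate the case $\ell = x_0+x_1$ on a small instance (the numbers are purely illustrative), take $r=3$, $x_0=1$ and $x_1=2$, so that $x_0+x_1\equiv 0 \pmod{3}$, and suppose $\ell = 0$. Then $|\psi\rangle = \frac{1}{\sqrt{2}}(|1,0\rangle+|2,1\rangle)$ and, by Eq.~\eqref{eq:bobstate},
\begin{align*}
C_0|\psi\rangle = \frac{\omega_3^{0}}{\sqrt{2}}(|0-2,0\rangle + |0-1,1\rangle) = \frac{1}{\sqrt{2}}(|1,0\rangle+|2,1\rangle) = |\psi\rangle,
\end{align*}
i.e., $|\psi\rangle$ is an eigenvector of $C_\ell$ with eigenvalue $\omega_3^{\ell}=1$, so the phase kickback followed by the inverse Fourier transform leaves the first register deterministically in $|\ell\rangle = |x_0+x_1\rangle$.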
On the other hand, if $\ell\neq x_0+x_1$, then the probability of measuring the first register and obtaining the outcome $m\in\mathbb{Z}_r$ is % \begin{align*} \operatorname{Pr}[m] &= \frac{1}{2r^2}\sum_{k_1,k_2=0}^{r-1}\omega_r^{(\ell-m)(k_1-k_2)}(\langle x_0 + \Delta_\ell\delta_{k_2}|x_0 + \Delta_\ell\delta_{k_1}\rangle + \langle x_1 + \Delta_\ell\delta_{k_2}|x_1 + \Delta_\ell\delta_{k_1}\rangle)\\ &= \frac{1}{r^2}\sum_{k_1,k_2~\text{even}}^{r-1}\omega_r^{(\ell-m)(k_1-k_2)} + \frac{1}{r^2}\sum_{k_1,k_2~\text{odd}}^{r-1}\omega_r^{(\ell-m)(k_1-k_2)}\\ &= \left|\frac{1}{r}\sum_{k~\text{even}}^{r-1}\omega_r^{k(\ell-m)}\right|^2 + \left|\frac{1}{r}\sum_{k~\text{odd}}^{r-1}\omega_r^{k(\ell-m)}\right|^2. \end{align*} % It is not hard to see that the above probability is maximized when $m = \ell$, in which case % \begin{align*} \operatorname{Pr}[m=\ell] = \frac{1}{r^2}\left\lfloor\frac{r+1}{2}\right\rfloor^2 + \frac{1}{r^2}\left\lfloor\frac{r}{2}\right\rfloor^2 = \begin{cases} \frac{1}{2} \qquad &r~\text{even},\\ \frac{1}{2} + \frac{1}{2r^2} \qquad &r~\text{odd}. \end{cases} \end{align*} % Given the considerations above, Bob uses the following strategy: he picks $\ell$ as the corresponding entry $w_i$ from $w\in\mathbb{Z}_r^{\alpha n/2}$ given the measured hyperedge $(M_{i,1},M_{i,2})$. If the outcome $m$ from measuring his final state in Eq.~\eqref{eq:bobfinal} equals $w_i$, then he outputs $\mathsf{YES}$, otherwise he outputs $\mathsf{NO}$. Indeed, in the $\mathsf{YES}$ instance, $w_i = x_{M_{i,1}}+x_{M_{i,2}}$ and so $m$ equals $w_i$ with probability $1$, while in the $\mathsf{NO}$ instance, $m$ equals $w_i$ with probability at most $\frac{1}{2}+\frac{1}{2r^2}$. Thus the communication protocol has one-sided error at most $\frac{1}{2}+\frac{1}{2r^2}$, i.e., $\operatorname{Pr}[\text{error}|\mathsf{YES}] = 0$ and $\operatorname{Pr}[\text{error}|\mathsf{NO}] \leq \frac{1}{2}+\frac{1}{2r^2}$. By repeating the whole protocol $O(\log(1/\varepsilon))$ times, the one-sided error probability can be decreased to $\varepsilon$: if in any of the repetitions the final measurement outcome is different from $w_i$, then Bob knows that $\mathsf{NO}$ is the correct answer. \end{proof} \subsection{Quantum lower bound on $r$-ary Hidden Hypermatching} In this section we shall turn our attention to proving quantum and classical lower bounds on the amount of communication required by the $r$-$\ensuremath{\mathsf{HH}}(\alpha,t,n)$ problem, but first we need the following~lemma. \begin{lemma} \label{lem:hypercontractive} Let $f:\ensuremath{\mathbb{Z}}_r^n \to \operatorname{D}(\mathbb{C}^{2^m})$ be any mapping from strings in $\ensuremath{\mathbb{Z}}_r^n$ to $m$-qubit density matrices. Then for any $\delta\in[0,1/(r-1)]$, we have % \begin{align*} \sum_{S\in\ensuremath{\mathbb{Z}}_r^n} \delta^{|S|}\|\widehat{f}(S)\|^2_1 \leq 2^{2(r-1)\delta m}. \end{align*} \end{lemma} \begin{proof} Let $p := 1+(r-1)\delta$. First note that, given the eigenvalues $\sigma_1,\dots,\sigma_{2^m}$ of $f(x)$, which are non-negative reals that sum to $1$, we have % \begin{align*} \|f(x)\|_p^p = \sum_{i=1}^{2^m}\sigma_i^p \leq \sum_{i=1}^{2^m}\sigma_i = 1. \end{align*} % Using Theorem~\ref{thm:genmathypercontractivity} and Remark~\ref{rem:1}, we now get % \begin{align*} \sum_{S\in\ensuremath{\mathbb{Z}}_r^n}\left(\frac{p-1}{r-1}\right)^{|S|}\|\widehat{f}(S)\|_p^2 \leq \left(\frac{1}{r^n}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\|f(x)\|_p^p\right)^{2/p} \leq \left(\frac{1}{r^n}\cdot r^n\right)^{2/p} = 1.
\end{align*} % On the other hand, the normalized Schatten norm $2^{-m/p}\|\widehat{f}(S)\|_p$ is non-decreasing with $p$, since $p \leq q \implies \left(\frac{1}{2^m}\sum_{i=1}^{2^m}\sigma_i^p\right)^{1/p} \leq \left(\frac{1}{2^m}\sum_{i=1}^{2^m}\sigma_i^q\right)^{1/q}$ by H\"older's inequality, hence % \begin{align*} \sum_{S\in\ensuremath{\mathbb{Z}}_r^n}\left(\frac{p-1}{r-1}\right)^{|S|}2^{-2m/p}\|\widehat{f}(S)\|_p^2 \geq \sum_{S\in\ensuremath{\mathbb{Z}}_r^n}\left(\frac{p-1}{r-1}\right)^{|S|}2^{-2m}\|\widehat{f}(S)\|_1^2. \end{align*} % Rearranging the inequalities leads to % \[ \sum_{S\in\ensuremath{\mathbb{Z}}_r^n}\left(\frac{p-1}{r-1}\right)^{|S|}\|\widehat{f}(S)\|_1^2 \leq 2^{2m(1-1/p)} \leq 2^{2m(p-1)}. \qedhere \] \end{proof} We are now ready to state and prove our main quantum communication complexity lower bound for the $r$-ary Hidden Hypermatching problem. \begin{theorem} \label{thm:hypermatchinglowerbound} Any quantum protocol that achieves advantage $\varepsilon>0$ for the $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ problem with $t\geq 3$ and $\alpha \leq \min(1/2, (r-1)^{-1/2})$ requires at least $m = \Omega(r^{-(1+1/t)}(\varepsilon^2/\alpha)^{2/t}(n/t)^{1-2/t})$ qubits of communication from Alice to Bob. \end{theorem} Notice that for $r=2$ our lower bound reads $\Omega(\alpha^{-2/t}(n/t)^{1-2/t})$, which has a better dependence on $\alpha$ compared to the lower bound $\Omega(\log(1/\alpha)(n/t)^{1-2/t})$ from~\cite{shi2012limits}. Also, see Remark~\ref{rem:alphadependence} at the end of the section for an improvement on the requirement $\alpha \leq \min(1/2, (r-1)^{-1/2})$. \begin{proof} Consider an arbitrary $m$-qubit communication protocol. Such a protocol can be viewed as Alice sending an encoding of her input $x\in \ensuremath{\mathbb{Z}}_r^n$ into a quantum state, from which Bob must distinguish whether his $w$ was drawn from $\mathcal{D}^{\ensuremath{\mathsf{YES}}}$ or $\mathcal{D}^{\ensuremath{\mathsf{NO}}}$. Let $\rho:\ensuremath{\mathbb{Z}}_r^n\to \operatorname{D}(\mathbb{C}^{2^m})$ be Alice's encoding function. For our `hard' distribution, Alice and Bob receive $x\in\ensuremath{\mathbb{Z}}_r^n$ and $M\in\mathcal{M}_{t,n}^\alpha$, respectively, uniformly at random, while Bob's input $w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}$ is drawn from the distribution $\mathcal{D} := \frac{1}{2}\mathcal{D}^{\ensuremath{\mathsf{YES}}} + \frac{1}{2}\mathcal{D}^{\ensuremath{\mathsf{NO}}}$, i.e., with probability $1/2$ it comes from $\mathcal{D}^{\ensuremath{\mathsf{YES}}}$, and with probability $1/2$ it comes from $\mathcal{D}^{\ensuremath{\mathsf{NO}}}$. Let $p_x := r^{-n}$, $p_M := |\mathcal{M}^{\alpha}_{t,n}|^{-1}$ and $p_w := r^{-\alpha n/t}$; then our hard distribution $\mathcal{P}$ is % \begin{align} \label{eq:eq3.5.c3} \operatorname{Pr}[x,\ensuremath{\mathsf{YES}},M,w] = \frac{1}{2}p_x\cdot p_M \cdot [Mx = w], \quad \operatorname{Pr}[x,\ensuremath{\mathsf{NO}},M,w] = \frac{1}{2}p_x\cdot p_M\cdot p_w. \end{align} Conditioning on Bob's input $(M, w)$, from his perspective, Alice sends the message $\rho_x$ with probability $\operatorname{Pr}[x|M,w]$.
Therefore, conditioned on an instance of the problem ($\ensuremath{\mathsf{YES}}$ or $\ensuremath{\mathsf{NO}}$), Bob receives one of the following two quantum states $\rho_\ensuremath{\mathsf{YES}}^{M,w}$ and $\rho_\ensuremath{\mathsf{NO}}^{M,w}$, which appear with probabilities $\operatorname{Pr}[\ensuremath{\mathsf{YES}}|M,w]$ and $\operatorname{Pr}[\ensuremath{\mathsf{NO}}|M,w]$, respectively, % \begin{align} \label{eq:rhoyesrhono} \begin{aligned} \rho_{\ensuremath{\mathsf{YES}}}^{M,w} &= \sum_{x\in\ensuremath{\mathbb{Z}}_r^n} \operatorname{Pr}[x|\ensuremath{\mathsf{YES}},M,w]\cdot\rho_x = \frac{1}{\operatorname{Pr}[\ensuremath{\mathsf{YES}},M,w]} \sum_{x\in\ensuremath{\mathbb{Z}}_r^n} \operatorname{Pr}[x,\ensuremath{\mathsf{YES}},M,w] \cdot \rho_x,\\ \rho_{\ensuremath{\mathsf{NO}}}^{M,w} &= \sum_{x\in\ensuremath{\mathbb{Z}}_r^n} \operatorname{Pr}[x|\ensuremath{\mathsf{NO}},M,w]\cdot\rho_x = \frac{1}{\operatorname{Pr}[\ensuremath{\mathsf{NO}},M,w]} \sum_{x\in\ensuremath{\mathbb{Z}}_r^n} \operatorname{Pr}[x,\ensuremath{\mathsf{NO}},M,w] \cdot \rho_x. \end{aligned} \end{align} % Bob's best success probability in determining the distribution of $w$, conditioned on his input $(M,w)$, is no larger than his probability of distinguishing between the two quantum states $\rho_\ensuremath{\mathsf{YES}}^{M,w}$ and $\rho_\ensuremath{\mathsf{NO}}^{M,w}$. Now let $\varepsilon_{bias}$ be the bias of the protocol that distinguishes between $\rho_\ensuremath{\mathsf{YES}}^{M,w}$ and $\rho_\ensuremath{\mathsf{NO}}^{M,w}$. According to Fact~\ref{lem:lem3.5.c3}, the bias $\varepsilon_{bias}$ of any quantum protocol for a fixed $M$ and $w$ can be upper bounded as % \begin{align*} \varepsilon_{bias} \leq \big\|{\operatorname{Pr}}[\ensuremath{\mathsf{YES}}|M,w]\cdot\rho_\ensuremath{\mathsf{YES}}^{M,w} - \operatorname{Pr}[\ensuremath{\mathsf{NO}}|M,w]\cdot\rho_\ensuremath{\mathsf{NO}}^{M,w}\big\|_1. \end{align*} % We prove in Theorem~\ref{thr:raryhidden} below that, if $m \leq \frac{\gamma}{r^{1+1/t}}(\frac{\varepsilon^2}{\alpha})^{2/t} (n/t)^{1-2/t}$ for a universal constant $\gamma$, then the average bias over $M$ and $w$ is at most $\varepsilon^2$, i.e., % \begin{align*} \operatorname*{\mathbb{E}}_{(M,w)\sim\mathcal{P}_{M,w}}[\varepsilon_{bias}] \leq \varepsilon^2, \end{align*} % where $\mathcal{P}_{M,w}$ is the marginal distribution of $\mathcal{P}$. Therefore, by Markov's inequality, for at least a $(1-\varepsilon)$-fraction of $M$ and $w$, the bias in distinguishing between $\rho_\ensuremath{\mathsf{YES}}^{M,w}$ and $\rho_\ensuremath{\mathsf{NO}}^{M,w}$ is at most $\varepsilon$. Hence, Bob's advantage over randomly guessing the right distribution will be at most $\varepsilon$ (for the event that $M$ and $w$ are such that the distance between $\rho_\ensuremath{\mathsf{YES}}^{M,w}$ and $\rho_\ensuremath{\mathsf{NO}}^{M,w}$ is more than $\varepsilon$) plus $\varepsilon/2$ (for the advantage over random guessing when $\varepsilon_{bias} \leq \varepsilon$), and so $m = \Omega(r^{-(1+1/t)}(\varepsilon^2/\alpha)^{2/t}(n/t)^{1-2/t})$. \end{proof} \begin{theorem} \label{thr:raryhidden} For $x\in\ensuremath{\mathbb{Z}}_r^n$, $M\in\mathcal{M}_{t,n}^\alpha$, $w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}$ and $b\in\{\ensuremath{\mathsf{YES}},\ensuremath{\mathsf{NO}}\}$, consider the probability distribution $\mathcal{P}$ defined in Eq.~(\ref{eq:eq3.5.c3}).
Given an encoding function $\rho:\ensuremath{\mathbb{Z}}_r^n\to \operatorname{D}(\mathbb{C}^{2^m})$, consider the quantum states $\rho_\ensuremath{\mathsf{YES}}^{M,w}$ and $\rho_\ensuremath{\mathsf{NO}}^{M,w}$ from Eq.~(\ref{eq:rhoyesrhono}). If $\alpha \leq \min(1/2,(r-1)^{-1/2})$, there is a universal constant $\gamma>0$ (independent of $n$, $t$, $r$ and $\alpha$) such that, for all $\varepsilon > 0$, if $m \leq \frac{\gamma}{r^{1+1/t}}(\frac{\varepsilon^2}{\alpha})^{2/t} (n/t)^{1-2/t}$,~then % \begin{align*} \operatorname*{\mathbb{E}}_{(M,w)\sim\mathcal{P}_{M,w}}\left[\big\|\operatorname{Pr}[\ensuremath{\mathsf{YES}}|M,w]\cdot\rho_\ensuremath{\mathsf{YES}}^{M,w} - \operatorname{Pr}[\ensuremath{\mathsf{NO}}|M,w]\cdot\rho_\ensuremath{\mathsf{NO}}^{M,w}\big\|_1\right] \leq \varepsilon^2. \end{align*} \end{theorem} \begin{proof} For ease of notation, we shall write % \begin{align*} \varepsilon_{bias} := \operatorname*{\mathbb{E}}_{(M,w)\sim\mathcal{P}_{M,w}}\left[\big\|\operatorname{Pr}[\ensuremath{\mathsf{YES}}|M,w]\cdot\rho_\ensuremath{\mathsf{YES}}^{M,w} - \operatorname{Pr}[\ensuremath{\mathsf{NO}}|M,w]\cdot\rho_\ensuremath{\mathsf{NO}}^{M,w}\big\|_1\right]. \end{align*} % Therefore, we have that % \begin{align*} \displaybreak[0]\varepsilon_{bias} &= \sum_{M\in\mathcal{M}_{t,n}^\alpha}\sum_{w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}}\operatorname{Pr}[M,w]\cdot \big\|\operatorname{Pr}[\ensuremath{\mathsf{YES}}|M,w]\cdot\rho_\ensuremath{\mathsf{YES}}^{M,w} - \operatorname{Pr}[\ensuremath{\mathsf{NO}}|M,w]\cdot\rho_\ensuremath{\mathsf{NO}}^{M,w}\big\|_1\displaybreak[0]\\ &= \sum_{M\in\mathcal{M}_{t,n}^\alpha}\sum_{w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}}\Big\|\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\Big(\operatorname{Pr}[x,\ensuremath{\mathsf{YES}},M,w]\cdot\rho_x - \operatorname{Pr}[x,\ensuremath{\mathsf{NO}},M,w]\cdot\rho_x\Big)\Big\|_1\displaybreak[0]\\ &= \sum_{M\in\mathcal{M}_{t,n}^\alpha}\sum_{w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}}\Big\|\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\frac{1}{2}p_x\cdot p_M \left(\big[Mx = w\big] - p_w\right)\rho_x\Big\|_1 && \tag{By Eqs.~\eqref{eq:eq3.5.c3},~\eqref{eq:rhoyesrhono}}\displaybreak[0]\\ &= \sum_{M\in\mathcal{M}_{t,n}^\alpha}\sum_{w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}}\Big\|\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\frac{1}{2}p_x\cdot p_M \left(\big[Mx = w\big] - p_w\right)\cdot \sum_{S\in\ensuremath{\mathbb{Z}}_r^n}\widehat{\rho}(S)\omega_r^{S\cdot x}\Big\|_1\tag{Fourier decomposition of $\rho$}\displaybreak[0]\\ &= \sum_{M\in\mathcal{M}_{t,n}^\alpha}\sum_{w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}} \Big\|\sum_{S\in\ensuremath{\mathbb{Z}}_r^n} u(M,w,S) \widehat{\rho}(S)\Big\|_1\displaybreak[0]\\ &\leq \sum_{S\in\ensuremath{\mathbb{Z}}_r^n}\sum_{M\in\mathcal{M}_{t,n}^\alpha}\sum_{w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}}|u(M,w,S)|\cdot \|\widehat{\rho}(S)\|_1, \end{align*} % where we defined % \begin{align} \label{eq:defnofu} u(M,w,S) := \frac{1}{2}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}p_x\cdot p_M \cdot \omega_r^{S\cdot x} \left(\big[Mx = w\big] - p_w\right). \end{align} Next, we upper bound the quantity $u(M,w,S)$ using the lemma below. In the following lemma, given an $\alpha$-partial hypermatching $M\in\mathcal{M}_{t,n}^\alpha$, we can, without loss of generality, complete $M$ with $(1-\alpha)n/t$ remaining hyperedges and turn it into a perfect hypermatching, i.e., we can assume that $M\in\mathcal{M}_{t,n}$.
Moreover, we shall write $S|_{M_i} = S_{M_{i,1}} S_{M_{i,2}}\dots S_{M_{i,t}}\in\ensuremath{\mathbb{Z}}_r^t$ to denote the string $S$ restricted to the hyperedge $M_i = (M_{i,1},\dots,M_{i,t})$, where $S_{M_{i,j}}$ is the $M_{i,j}$-th entry of $S$. The same applies to $x\in\ensuremath{\mathbb{Z}}_r^n$. \begin{lemma} \label{lem:understandingu} Let $M\in\mathcal{M}_{t,n}$, $w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}$ and $S\in\ensuremath{\mathbb{Z}}_r^n$. Define the set \begin{align*} \Delta(M) = \{S\in\ensuremath{\mathbb{Z}}_r^n\setminus\{0^n\} ~|~ S_{M_{i,1}} = S_{M_{i,2}} = \cdots = S_{M_{i,t}} &\text{ for every } i\in[\alpha n/t]\\ \text{and}~ S|_{M_i} = 0^t &\text{ for every } i>\alpha n/t\}. \end{align*} Given $u(M,w,S)$ as defined in Eq.~(\ref{eq:defnofu}), we have $|u(M,w,S)|=\frac{1}{2}\cdot r^{-\alpha n/t}\cdot p_M$ if $S\in\Delta(M)$ and $u(M,w,S)=0$ if $S\notin\Delta(M)$. \end{lemma} \begin{proof} Recall the definition of $u$: $$ u(M,w,S) = \frac{1}{2}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}p_x\cdot p_M \cdot \omega_r^{S\cdot x} \left(\big[Mx = w\big] - p_w\right). $$ In order to understand this expression, we start with the following: % \begin{align*} \displaybreak[0]\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\omega_r^{S\cdot x}\big[Mx = w\big] &= \sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\omega_r^{S\cdot x} \prod_{i=1}^{\alpha n/t} \left[(Mx)_i = w_i\right]\displaybreak[0]\\ &= \sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\omega_r^{S\cdot x} \prod_{i=1}^{\alpha n/t} \left[\sum_{j=1}^t x_{M_{i,j}} \equiv w_i~(\text{mod}~r)\right]\displaybreak[0]\\ &= \sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\omega_r^{\sum_{i=1}^{n/t}\sum_{j=1}^t S_{M_{i,j}} x_{M_{i,j}}} \prod_{i=1}^{\alpha n/t} \left[\sum_{j=1}^t x_{M_{i,j}} \equiv w_i~(\text{mod}~r)\right]\displaybreak[0]\\ &= \sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\omega_r^{\sum_{i=1}^{n/t}\sum_{j=1}^t S_{M_{i,j}} x_{(i-1)t+j}} \prod_{i=1}^{\alpha n/t} \left[\sum_{j=1}^t x_{(i-1)t+j} \equiv w_i~(\text{mod}~r)\right], \end{align*} % where in the last step we relabeled the coordinates of $x$ according to the hyperedges of $M$. Therefore % \begin{align*} \sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\omega_r^{S\cdot x}\big[Mx = w\big] &= \left(\prod_{i=1}^{\alpha n/t}\sum_{x\in\ensuremath{\mathbb{Z}}_r^{t}}\omega_r^{\sum_{j=1}^t S_{M_{i,j}}x_j}\left[\sum_{j=1}^t x_{j} \equiv w_i~(\text{mod}~r)\right] \right)\left(\prod_{i>\alpha n/t}^{n/t}\sum_{x\in\ensuremath{\mathbb{Z}}_r^{t}}\omega_r^{\sum_{j=1}^t S_{M_{i,j}}x_j}\right)\\ &= r^{n(1-\alpha)}\prod_{i=1}^{\alpha n/t}\sum_{x\in\ensuremath{\mathbb{Z}}_r^t}\omega_r^{S|_{M_i}\cdot x}\left[\sum_{j=1}^t x_j \equiv w_i~(\text{mod}~r)\right], \end{align*} provided that $S|_{M_i} = 0^t$ for all $i > \alpha n/t$; otherwise the expression above is $0$. Now we use that % \begin{align*} \sum_{j=1}^t x_j \equiv w_i~(\text{mod}~r) \implies x_t \equiv w_i - \sum_{j=1}^{t-1} x_j ~(\text{mod}~r), \end{align*} % and so % \begin{align*} S|_{M_i}\cdot x = \sum_{j=1}^t S_{M_{i,j}}x_j = \sum_{j=1}^{t-1} S_{M_{i,j}}x_j + S_{M_{i,t}}\left(w_i - \sum_{j=1}^{t-1}x_j\right) = S_{M_{i,t}}w_i + \sum_{j=1}^{t-1}(S_{M_{i,j}} - S_{M_{i,t}})x_j \end{align*} % modulo $r$.
This leads to % \begin{align} \sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\omega_r^{S\cdot x}\big[Mx = w\big] &= r^{n(1-\alpha)}\prod_{i=1}^{\alpha n/t}\omega_r^{S_{M_{i,t}}w_i}\sum_{x\in\ensuremath{\mathbb{Z}}_r^{t-1}}\omega_r^{\sum_{j=1}^{t-1}(S_{M_{i,j}} - S_{M_{i,t}})x_j}= \frac{r^{n}}{r^{\alpha n/t}}\prod_{i=1}^{\alpha n/t}\omega_r^{S_{M_{i,t}}w_i}\label{eq:rary1} \end{align} % if, for all $i\in[\alpha n/t]$, $S_{M_{i,j}}$ is constant for all $j\in[t]$, i.e., if $S_{M_{i,1}} = S_{M_{i,2}} = \dots = S_{M_{i,t}}$ for every $i\in[\alpha n/t]$. Otherwise the above expression is $0$. % Thus, if $S_{M_{i,1}} = S_{M_{i,2}} = \dots = S_{M_{i,t}}$ for every $i\in[\alpha n/t]$ and $S|_{M_i}=0^t$ for $i>\alpha n/t$, then we can use Eq.~(\ref{eq:rary1}) to get (remember that $p_x := r^{-n}$ and $p_w := r^{-\alpha n/t}$) % \begin{align*} \displaybreak[0]|u(M,w,S)| = \frac{1}{2}\left|\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}p_xp_M \omega_r^{S\cdot x} \left(\big[Mx = w\big] - p_w\right)\right| &= \frac{1}{2}\frac{1}{r^{\alpha n/t}}p_M\left|\prod_{i=1}^{\alpha n/t}\omega_r^{S_{M_{i,t}}w_i} - [S=0^n]\right|\displaybreak[0] \\ &= \begin{cases} 0 &~\text{if}~ S=0^n,\\ \frac{1}{2} r^{-\alpha n/t} p_M &~\text{if}~S\neq 0^n. \end{cases} \end{align*} % Hence, we have % \begin{align*} |u(M,w,S)| = \begin{cases} 0 &\text{if}~ S=0^n,\\ \frac{1}{2} r^{-\alpha n/t} p_M &\text{if}~S_{M_{i,1}} = S_{M_{i,2}} = \dots= S_{M_{i,t}}~ \forall i\in[\alpha n/t] ~\text{and}~ S|_{M_i} = 0^t~\forall i>\alpha n/t,\\ 0 &\text{otherwise}, \end{cases} \end{align*} proving the lemma statement. \end{proof} We now proceed to upper bound $\varepsilon_{bias}$ using the expression for $|u(M,w,S)|$ from Lemma~\ref{lem:understandingu}. For $S\in\ensuremath{\mathbb{Z}}_r^n$, let $|S| := |\{i\in[n]: S_i\neq 0\}|$. Notice that, if $S\in\Delta(M)$, then $|S| = kt$ for some $k\in[\alpha n/t]$. Hence, we have that % \begin{align*} \varepsilon_{bias} \leq \frac{1}{2}\sum_{S\in\ensuremath{\mathbb{Z}}_r^n}\sum_{\substack{M\in\mathcal{M}^\alpha_{t,n} \\ S\in\Delta(M)}}p_M \sum_{w\in\ensuremath{\mathbb{Z}}_r^{\alpha n/t}}\frac{1}{r^{\alpha n/t}}\|\widehat{\rho}(S)\|_1 &= \frac{1}{2}\sum_{k=1}^{\alpha n/t}\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S| = kt}}\sum_{\substack{M\in\mathcal{M}^\alpha_{t,n} \\ S\in\Delta(M)}} p_M\|\widehat{\rho}(S)\|_1\\ &= \frac{1}{2}\sum_{k=1}^{\alpha n/t}\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S| = kt}}\operatorname*{Pr}_{M\sim\mathcal{M}_{t,n}^\alpha}[S\in\Delta(M)]\cdot\|\widehat{\rho}(S)\|_1, \end{align*} % using that % \begin{align*} \sum_{\substack{M\in\mathcal{M}^\alpha_{t,n} \\ S\in\Delta(M)}} p_M = \operatorname*{Pr}_{M\sim\mathcal{M}^{\alpha}_{t,n}}[S\in\Delta(M)]. \end{align*} % We now upper bound this probability using the following lemma. % \begin{lemma} \label{lem:probcomb} Let $t\in\ensuremath{\mathbb{Z}}$. Let $S\in\ensuremath{\mathbb{Z}}_r^n$ with $k_j := \frac{1}{t}\cdot |\{i\in[n]:S_i = j\}|\in\ensuremath{\mathbb{Z}}$ for $j\in\{1,\dots,r-1\}$. Let $k := \sum_{j=1}^{r-1} k_j$. For any $M\in\mathcal{M}_{t,n}^\alpha$, let $\Delta(M)$ be the set from Lemma~\ref{lem:understandingu}. Then % \begin{align*} \operatorname*{Pr}_{M\sim\mathcal{M}_{t,n}^\alpha}[S\in\Delta(M)] = \frac{\binom{\alpha n/t}{k}}{\binom{n}{kt}}\frac{k!}{(kt)!}\prod_{j=1}^{r-1}\frac{(k_jt)!}{k_j!}. \end{align*} \end{lemma} \begin{proof} We can assume without loss of generality that $S = 1^{k_1t}2^{k_2t}\dots (r-1)^{k_{r-1}t}0^{n-kt}$.
First note that the total number $|\mathcal{M}_{t,n}^\alpha|$ of $\alpha$-partial hypermatchings is $n!/\big((t!)^{\alpha n/t}(\alpha n/t)!(n-\alpha n)!\big)$. This can be seen as follows: pick a permutation of $[n]$, view the first $\alpha n/t$ tuples of length $t$ as $\alpha n/t$ hyperedges, and ignore the ordering within each hyperedge, the ordering of the $\alpha n/t$ hyperedges and the ordering of the last $n-\alpha n$ vertices. Now, given our particular $S$, notice that $S\in\Delta(M)$ if and only if, for $j\in[r-1]$, $M$ has exactly $k_j$ hyperedges in $$ \left\{1 + t\sum_{i=1}^{j-1}k_i,~2 + t\sum_{i=1}^{j-1}k_i,~3 + t\sum_{i=1}^{j-1}k_i,\ldots,(k_jt-1) + t\sum_{i=1}^{j-1}k_i,~t\sum_{i=1}^{j}k_i\right\}, $$ i.e., $k_1$ hyperedges in $\{1,\dots,k_1t\}$, $k_2$ hyperedges in $\{k_1t+1,\dots,(k_2+k_1)t\}$, etc., and also $\alpha n/t-k$ hyperedges in $[n]\setminus[kt]$. The number of ways to pick $k_j$ hyperedges in $\left\{1 + t\sum_{i=1}^{j-1}k_i,\dots,t\sum_{i=1}^{j}k_i\right\}$ is $(k_jt)!/((t!)^{k_j}k_j!)$. The number of ways to pick the remaining $\alpha n/t - k$ hyperedges in $[n]\setminus[kt]$ is $(n-kt)!/((t!)^{\alpha n/t - k} (\alpha n/t - k)! (n-\alpha n)!)$. Hence $\operatorname*{Pr}_{M\sim\mathcal{M}_{t,n}^\alpha}[S\in\Delta(M)]$ equals \[ \frac{\frac{(n-kt)!}{(t!)^{\alpha n/t - k} (\alpha n/t - k)! (n-\alpha n)!}}{\frac{n!}{(t!)^{\alpha n/t}(\alpha n/t)!(n-\alpha n)!}}\prod_{j=1}^{r-1}\frac{(k_jt)!}{(t!)^{k_j}k_j!} = \frac{(n-kt)!(\alpha n/t)!}{n!(\alpha n/t - k)!}\prod_{j=1}^{r-1}\frac{(k_jt)!}{k_j!} = \frac{\binom{\alpha n/t}{k}}{\binom{n}{kt}}\frac{k!}{(kt)!}\prod_{j=1}^{r-1}\frac{(k_jt)!}{k_j!}. \qedhere \] \end{proof} % By using Lemma~\ref{lem:probcomb} and the notation $|S|_i := |\{j\in[n]:S_j=i\}|$, we continue upper bounding $\varepsilon_{bias}$ as follows % \begin{align} \displaybreak[0]\varepsilon_{bias} &\leq \frac{1}{2}\sum_{k=1}^{\alpha n/t}\frac{\binom{\alpha n/t}{k}}{\binom{n}{kt}}\sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}}\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S|_i = k_it, ~i\in[r-1]}}\frac{k!}{(kt)!}\left(\prod_{j=1}^{r-1}\frac{(k_jt)!}{k_j!}\right) \|\widehat{\rho}(S)\|_1\nonumber\displaybreak[0]\\ &\leq \frac{1}{2}\sum_{k=1}^{\alpha n/t}\frac{\binom{\alpha n/t}{k}}{\binom{n}{kt}}\sqrt{\sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}}\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S|_i = k_it, ~i\in[r-1]}}\frac{k!^2}{(kt)!^2}\prod_{j=1}^{r-1}\frac{(k_jt)!^2}{k_j!^2}} \sqrt{\sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}}\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S|_i = k_it, ~i\in[r-1]}}\|\widehat{\rho}(S)\|_1^2}\label{eq:new_proof1}\displaybreak[0]\\ &\leq \frac{1}{2}\sum_{k=1}^{\alpha n/t}\frac{\binom{\alpha n/t}{k}}{\binom{n}{kt}}\sqrt{\sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}}\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S|_i = k_it, ~i\in[r-1]}}\frac{k!^2}{(kt)!^2}\prod_{j=1}^{r-1}\frac{(k_jt)!^2}{k_j!^2}} \sqrt{\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S| = kt}}\|\widehat{\rho}(S)\|_1^2}\nonumber\displaybreak[0]\\ &= \frac{1}{2}\sum_{k=1}^{\alpha n/t}\frac{\binom{\alpha n/t}{k}}{\sqrt{\binom{n}{kt}}}\sqrt{\sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}} \frac{k!^2}{(kt)!}\prod_{j=1}^{r-1}\frac{(k_jt)!}{k_j!^2}} \sqrt{\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S| = kt}}\|\widehat{\rho}(S)\|_1^2},\label{eq:new_proof2} \end{align} % where Eqs.~\eqref{eq:new_proof1} and~\eqref{eq:new_proof2} used the
Cauchy-Schwarz inequality and $\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S|_i = k_it}} 1 = \binom{n}{kt}(kt)!\prod_{j=1}^{r-1}\frac{1}{(k_jt)!}$, respectively. We now use the multinomial theorem in % \begin{align} \label{eq:alphadependence} \sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}} \frac{k!^2}{(kt)!}\prod_{j=1}^{r-1}\frac{(k_jt)!}{k_j!^2} &= \sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}} \frac{\binom{k}{k_1,\dots,k_{r-1}}^2}{\binom{kt}{k_1t,\dots,k_{r-1}t}} \leq \sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}} \binom{k}{k_1,\dots,k_{r-1}} = (r-1)^k, \end{align} % which leads to % \begin{align*} \varepsilon_{bias} \leq \frac{1}{2}\sum_{k=1}^{\alpha n/t}\alpha^k\frac{\binom{n/t}{k}}{\sqrt{\binom{n}{kt}}}(r-1)^{k/2} \sqrt{\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S| = kt}}\|\widehat{\rho}(S)\|_1^2}, \end{align*} % where we also used that $\binom{\alpha n/t}{k} \leq \alpha^k \binom{n/t}{k}$ for $\alpha\in[0,1]$. In order to compute the above sum, we shall split it into two parts: one in the range $1\leq k < 4rm$, and the other in the range $4rm \leq k \leq \alpha n/t$. \textbf{Sum I} ($1\leq k < 4rm$): in order to upper bound each term, pick $\delta = k/(4rm)$ in Lemma~\ref{lem:hypercontractive}, so % \begin{align*} \sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S| = kt}} \|\widehat{\rho}(S)\|_1^2 \leq \frac{1}{\delta^{kt}}\sum_{S\in\ensuremath{\mathbb{Z}}_r^n}\delta^{|S|} \|\widehat{\rho}(S)\|_1^2 \leq \frac{1}{\delta^{kt}}2^{2r\delta m} = \left(\frac{2^{1/(2t)}4rm}{k}\right)^{kt}. \end{align*} % Therefore, and by using that $m \leq \frac{\gamma}{r^{1+1/t}} (\frac{\varepsilon^2}{\alpha})^{2/t} (n/t)^{1-2/t}$ and $\binom{q}{s}^2\binom{\ell q}{\ell s}^{-1} \leq (\frac{s}{q})^{(\ell-2)s}$ (see~\cite[Appendix~A.5]{shi2012limits}) for $q=n/t,s=k,\ell=t$, we have % \begin{align*} \frac{1}{2}\sum_{k=1}^{4rm-1}\alpha^k\frac{\binom{n/t}{k}}{\sqrt{\binom{n}{kt}}}(r-1)^{k/2}\sqrt{\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S| = kt}} \|\widehat{\rho}(S)\|_1^2} &\leq \frac{1}{2}\sum_{k=1}^{4rm-1} \alpha^k (r-1)^{k/2}\left(\frac{kt}{n}\right)^{(1-2/t)kt/2}\left(\frac{2^{1/(2t)}4rm}{k}\right)^{kt/2}\\ &\leq \frac{1}{2}\sum_{k=1}^{4rm-1} \alpha^k (r-1)^{k/2}\left(\frac{2^{1/(2t)}4\gamma \varepsilon^{4/t}}{\alpha^{2/t}r^{1/t}k^{2/t}}\right)^{kt/2}\\ &\leq \frac{1}{2}\sum_{k=1}^{4rm-1}\left(\frac{2^{1/4}(4\gamma)^{t/2} \varepsilon^2 }{k}\right)^{k} \leq \frac{\varepsilon^2}{2} \end{align*} % for sufficiently small $\gamma$. \textbf{Sum II} ($4rm \leq k \leq \alpha n/t$): first we note that the function $g(k) := \alpha^k (r-1)^{k/2}\binom{n/t}{k}/\sqrt{\binom{n}{kt}}$ is non-increasing in the interval $1\leq k \leq \alpha n/t \leq n/(2t)$. That is because $\alpha \sqrt{r-1} \leq 1$, and so % \begin{align*} \displaybreak[0]\frac{g(k-1)}{g(k)} \geq \frac{\binom{n/t}{k-1}}{\sqrt{\binom{n}{kt-t}}}\frac{\sqrt{\binom{n}{kt}}}{\binom{n/t}{k}} = \sqrt{\frac{kt}{n-kt+t}\prod_{j=1}^{t-1}\frac{n-kt+j}{kt-j}} &\geq \sqrt{\frac{kt}{n-kt+t}\prod_{j=1}^{t-1}\frac{n-kt+j+1}{kt-j+1}}\displaybreak[0]\\ &= \sqrt{\prod_{j=1}^{t-2}\frac{n-kt+j+1}{kt-j}} \geq 1, \end{align*} % where we used that $\frac{a}{b} \geq \frac{a+s}{b+s}$ for all $a,b,s>0$ with $a\geq b$.
Hence, and with the aid once more of Lemma~\ref{lem:hypercontractive} with $\delta=1$ and the inequality $\binom{q}{s}^2\binom{\ell q}{\ell s}^{-1} \leq (\frac{s}{q})^{(\ell-2)s}$ (for $q=n/t$, $s=4rm$, $\ell=t$) in order to bound $g(4rm)$, % \begin{align} \frac{1}{2}\sum_{k=4rm}^{\alpha n/t}\alpha^k\frac{\binom{n/t}{k}}{\sqrt{\binom{n}{kt}}}(r-1)^{k/2}\sqrt{\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S| = kt}} \|\widehat{\rho}(S)\|_1^2} &\leq \frac{1}{2}g(4rm) \sum_{k=4rm}^{\alpha n/t}\sqrt{\sum_{\substack{S\in\ensuremath{\mathbb{Z}}_r^n \\ |S| = kt}} \|\widehat{\rho}(S)\|_1^2}\nonumber\\ &\leq \frac{1}{2}g(4rm)\sqrt{\frac{\alpha n}{t}}\sqrt{\sum_{S\in\ensuremath{\mathbb{Z}}_r^n} \|\widehat{\rho}(S)\|_1^2}\label{eq:eqfinalhidden}\\ &\leq \frac{1}{2}\left(\alpha \sqrt{r-1}\right)^{4rm}\left(\frac{4rm}{n/t}\right)^{2(t-2)rm}\sqrt{\frac{\alpha n}{t}}2^{(r-1)m}\nonumber\\ &\leq \frac{1}{2}\left(2^{1/4}\alpha \sqrt{r-1}\right)^{4rm} \left(\frac{(4\gamma)^{t/2}\varepsilon^2}{\alpha \sqrt{r}(n/t)}\right)^{4(1-2/t)rm}\sqrt{\frac{\alpha n}{t}}\nonumber\\ &\leq \frac{\varepsilon^2}{2},\nonumber \end{align} % where Eq.~(\ref{eq:eqfinalhidden}) comes from Cauchy-Schwarz, and in the last step we used that $m\geq 1 \implies 4(1-2/t)m \geq 1$ (so $n$ is in the denominator and $\varepsilon^{4(1-2/t)m} \leq \varepsilon$) and picked $\gamma$ sufficiently small. Finally, merging both results, we get that, if $m \leq \frac{\gamma}{r^{1+1/t}}(\frac{ \varepsilon^2}{\alpha})^{2/t} (n/t)^{1-2/t}$, then $\varepsilon_{bias} \leq \varepsilon^2$. \end{proof} A very similar classical communication lower bound for the $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ problem can be proven. \begin{theorem} \label{thr:thr_classicalhh} Any one-way classical protocol that achieves advantage $\varepsilon>0$ for the $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ problem with $t\geq 2$ and $\alpha \leq 1/2$ requires at least $\Omega(r^{-1}(\varepsilon^4/\alpha)^{1/t}(n/t)^{1-1/t})$ bits of communication. \end{theorem} The proof is very similar to that of past works~\cite{gavinsky2007exponential,verbin2011streaming,DBLP:conf/approx/GuruswamiT19} and we include it in Appendix~\ref{app:appB} for completeness. We now conclude this section with a remark that improves the $r$ dependence of the $\alpha$ parameter. \begin{remark} \label{rem:alphadependence} The dependence of $\alpha$ on $r$ can be improved. For example, we can improve the bound in Eq.~(\ref{eq:alphadependence}) by observing that $\binom{k}{k_1,\dots,k_{r-1}}^2\binom{kt}{k_1t,\dots,k_{r-1}t}^{-1} \leq 1$, which can be seen from the identity $\binom{k}{k_1,\dots,k_{r-1}} = \binom{k_1}{k_1}\binom{k_1+k_2}{k_2}\cdots\binom{k_1+k_2+\cdots+k_{r-1}}{k_{r-1}}$ and the inequality $\binom{q}{s}^2\binom{\ell q}{\ell s}^{-1} \leq (\frac{s}{q})^{(\ell-2)s} \leq 1$. Hence % \begin{align*} \sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}} \frac{k!^2}{(kt)!}\prod_{j=1}^{r-1}\frac{(k_jt)!}{k_j!^2} &= \sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}} \frac{\binom{k}{k_1,\dots,k_{r-1}}^2}{\binom{kt}{k_1t,\dots,k_{r-1}t}} \leq \sum_{\substack{k_1,\dots,k_{r-1}\geq 0 \\ \sum_{j=1}^{r-1} k_j = k}} 1 = \binom{k+r-2}{k}, \end{align*} % which is better than $(r-1)^{k}$.
By bounding % \begin{align*} \binom{k+r-2}{k} \leq e^k\left(1 + \frac{r-2}{k}\right)^{k}, \end{align*} % the new function $g(k) := \alpha^k \sqrt{\binom{k+r-2}{k}}\binom{n/t}{k}/\sqrt{\binom{n}{kt}}$ is still non-increasing in the interval $4rm \leq k \leq \alpha n/t \leq n/(2t)$ if now % \begin{align*} \alpha \leq e^{-1/2}\operatorname*{\min}_{4rm\leq k \leq \alpha n/t}\sqrt{\frac{k}{k+r-2}} = e^{-1/2}\sqrt{\frac{4rm}{4rm+r-2}}. \end{align*} % For $m\gg 1$, $\alpha$ is essentially independent of $r$, and hence $\alpha \leq \min(1/2,e^{-1/2}) = 1/2$. \end{remark} \subsection{Quantum streaming lower bound for Unique Games on hypergraphs} The Unique Games problem is a generalization of the classical Max-Cut problem and can in fact be viewed as a constraint satisfaction problem on a graph over a larger alphabet. Consider a graph on $n$ vertices $x_1,\ldots,x_n$ and edges in $E$. The constraint on an arbitrary edge $(i,j)\in E$ is specified by a permutation $\pi_{i,j}:\ensuremath{\mathbb{Z}}_r\rightarrow \ensuremath{\mathbb{Z}}_r$ and the goal is to find an assignment of $x_1,\ldots,x_n\in \ensuremath{\mathbb{Z}}_r$ that maximizes $$ \sum_{(i,j)\in E} [\pi_{i,j}(x_i)=x_j]. $$ In this section, we consider a generalization of Unique Games to hypergraphs. \begin{definition}[Unique Games instance on hypergraphs] \label{def:UGhyper} A hypergraph $H=(V,E)$ is defined on a vertex set $V$ of size $n$ with $t$-sized hyperedges $E$ (i.e., $t$-sized subsets of $V$). Given a linear constraint on each hyperedge $e\in E$, i.e., a predicate $\pi_e:\ensuremath{\mathbb{Z}}_r^t\rightarrow \{0,1\}$ indicating whether a linear equation over $\ensuremath{\mathbb{Z}}_r$ is satisfied, the goal is to compute $$ \max_{x\in \ensuremath{\mathbb{Z}}_r^n}\sum_{e\in E} \pi_{e}(x_e), $$ where $x_e$ denotes the assignment of the vertices in the hyperedge $e\in E$. \end{definition} \begin{definition} Let $H=(V,E)$ be a hypergraph and let $\operatorname{OPT}$ be the optimal value of the Unique Games instance on $H$. A randomized algorithm gives a $\gamma$-approximation to a Unique Games instance with failure probability $\delta\in[0,1/2)$ if, on any input hypergraph $H$, it outputs a value in the interval $[\operatorname{OPT}/\gamma, \operatorname{OPT}]$ with probability at least $1-\delta$. \end{definition} A uniformly random assignment of $x\in \ensuremath{\mathbb{Z}}_r^n$ to the vertex set $V$ will satisfy, in expectation, a $1/r$-fraction of the hyperedges, since each linear constraint $\pi_e(x_e)$ is satisfied with probability $1/r$. This gives a trivial $r$-approximation algorithm for the problem above. Below we show that any better-than-trivial approximation requires space that scales as $n^\beta$ for constant $\beta>0$. \begin{theorem} \label{thm:streaminguglowerbound} Let $r,t\geq 2$ be integers. Every quantum streaming algorithm giving an $(r-\varepsilon)$-approximation for Unique Games on hypergraphs (as in Definition~\ref{def:UGhyper}) with hyperedges of size at most $t$ and alphabet size $r$, with success probability at least $2/3$ over its internal randomness, needs $\Omega((n/t)^{1-2/t})$ space (which hides the dependence on $r,\varepsilon$). \end{theorem} The proof of this theorem combines techniques used by Guruswami and Tao~\cite{DBLP:conf/approx/GuruswamiT19} and Kapralov, Khanna and Sudan~\cite{kapralov2014streaming}. Akin to these works, we will construct, based on the Hidden Hypermatching problem, Unique Games instances on hypergraphs that are hard to solve space-efficiently in the streaming model.
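To make Definition~\ref{def:UGhyper} concrete before describing the hard input distributions, the following Python snippet is a minimal brute-force sketch of the objective being maximized; the instance, the function name and the variable names are our own illustration and not part of the construction. It evaluates linear constraints of the form $[\sum_{i\in e}x_i \equiv w_e~(\text{mod}~r)]$, the same form used in the distributions defined next, and is exponential in $n$, so it is meant only as an illustration.
\begin{verbatim}
import itertools

def ug_value(n, r, edges, targets):
    # Brute-force the hypergraph Unique Games objective for linear
    # constraints [sum_{i in e} x_i = w_e (mod r)].
    # edges: hyperedges as tuples of vertex indices in range(n);
    # targets: the values w_e in Z_r. Exponential in n.
    best = 0
    for x in itertools.product(range(r), repeat=n):
        sat = sum(1 for e, w in zip(edges, targets)
                  if sum(x[i] for i in e) % r == w)
        best = max(best, sat)
    return best

# A tiny satisfiable (YES-like) instance: targets planted from a hidden z.
n, r = 6, 3
edges = [(0, 1), (2, 3), (4, 5)]   # a perfect 2-hypermatching
z = [1, 0, 2, 2, 1, 1]
targets = [sum(z[i] for i in e) % r for e in edges]
assert ug_value(n, r, edges, targets) == len(edges)  # fully satisfiable
\end{verbatim}
Setting $x=z$ satisfies every planted constraint, which is exactly the behaviour of the $\ensuremath{\mathcal{Y}}$ distribution below.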
\paragraph{Input distributions.} To this end, we construct two distributions $\ensuremath{\mathcal{Y}}$ and $\ensuremath{\mathcal{N}}$ such that $\ensuremath{\mathcal{Y}}$ is supported on satisfiable Unique Games instances and $\ensuremath{\mathcal{N}}$ is supported on instances for which at most an $O(1/r)$-fraction of the constraints is satisfied. We now define these instances in a multi-stage way (using $k$ stages). First, sample $k$ independent $\alpha$-partial $t$-hypermatchings on $n$ vertices and then construct a hypergraph $G$ by putting together all the hyperedges from these $k$ stages. Note that $G$ still has $n$ vertices, while the number of hyperedges is $k\cdot \alpha n /t$ (each stage has $\alpha n/t$ hyperedges, and we allow repeated hyperedges should they be sampled). Now we specify the constraints $\pi_e$ in Definition~\ref{def:UGhyper} for the $\ensuremath{\mathcal{Y}},\ensuremath{\mathcal{N}}$ distributions: \begin{itemize} \item $\ensuremath{\mathcal{Y}}$ distribution: sample a uniformly random $z\in \ensuremath{\mathbb{Z}}_r^n$ and for each $e\in E$, let $\pi_e(x_e)=\big[\sum_{i\in e}x_i=\sum_{i\in e}z_i\big]$ (where by $i\in e$ we mean all the vertices in the hyperedge $e$). \item $\ensuremath{\mathcal{N}}$ distribution: for each $e\in E$, pick a uniform $q\in \ensuremath{\mathbb{Z}}_r$ and let $\pi_e(x_e)=\big[\sum_{i\in e}x_i=q\big]$. \end{itemize} It is clear that, in the $\ensuremath{\mathcal{Y}}$ distribution, the optimal solution is when all the $x_1,\ldots,x_n$ are just set to $z_1,\ldots,z_n$. Below we show that for the $\ensuremath{\mathcal{N}}$ distribution, the value of the optimal solution is at most $(1+\varepsilon)/r$ with high probability. \begin{lemma} \label{lem:NOdistribution} Let $\varepsilon\in (0,1)$. If $k=O(r(\log r)t/(\alpha\varepsilon^2))$, then for the Unique Games instance sampled from the $\ensuremath{\mathcal{N}}$ distribution above, the optimal fraction of satisfied constraints (i.e., the fraction of hyperedges $e\in E$ for which $\pi_e(\cdot)$ evaluates to $1$) over all possible vertex labellings is at most $(1+\varepsilon)/r$ with high probability. \end{lemma} \begin{proof} The proof of this lemma is similar to the proof in~\cite[Lemma~4.1]{DBLP:conf/approx/GuruswamiT19}. Fix an assignment $x\in \ensuremath{\mathbb{Z}}_r^n$. Let $X^\ell_{e}$ be the random variable that indicates that the hyperedge $e\in E$ appears in the $\ell$-th stage and is satisfied by $x$. Let $S=\sum_{\ell,e}X^{\ell}_e$. The expectation of $S$ is $k\alpha n/t \cdot 1/r$, since the total number of hyperedges is $\alpha n/t$ for each of the $k$ stages and the probability that a fixed $x$ satisfies a $t$-hyperedge (i.e., the probability that $\sum_{i\in e} x_i=q$ for a fixed $q$) is $1/r$. Using the same analysis as in~\cite{DBLP:conf/approx/GuruswamiT19}, we can show that the variables $X^\ell_{e}$ are negatively correlated. Indeed, first note that hyperedges from different stages are independent. Now suppose we know that the random variables $X^{\ell}_{e_1},\dots,X^{\ell}_{e_s}$ have value $1$, and consider another hyperedge $e\in E$. If $e\cap e_u \neq \emptyset$ for some $u\in[s]$, then $X^\ell_e=0$, since the hyperedges of a given stage form a matching. Otherwise, the conditional expectation of $X^\ell_e$ (conditioned on $e\cap e_u = \emptyset$ for all $u\in[s]$) is $\frac{\alpha n/t - s}{r}\binom{n-ts}{t}^{-1}$, which is less than its unconditional expectation of $\frac{\alpha n/t}{r}\binom{n}{t}^{-1}$.
Therefore, in all cases one has $\mathbb{E}[X^\ell_e|X^\ell_{e_1}=\cdots=X^\ell_{e_s} = 1] \leq \mathbb{E}[X^\ell_e]$, which means negative correlation. Hence, using a Chernoff bound for negatively correlated variables leads to $$ \Pr[S\geq (1+\varepsilon)(k\alpha n/t)/r]\leq \exp(-\varepsilon^2 k\alpha n /(3rt))=\exp(-\Omega(n\log r)), $$ where the equality used the choice of $k$. Applying a union bound over the set of $x\in \ensuremath{\mathbb{Z}}_r^n$ concludes the proof of the lemma. \end{proof} \paragraph{Reduction to Hypermatching.} The reduction to $r$-ary Hidden Hypermatching is similar to the analysis used by Guruswami and Tao~\cite{DBLP:conf/approx/GuruswamiT19}, but now it is from quantum streaming algorithms to one-way quantum communication complexity. The main lemma that we need is the following. \begin{lemma} \label{lem:reductionfromyesnotobhm} Let $\varepsilon>0$. If there is a streaming algorithm using at most $c$ qubits of space that distinguishes between the $\ensuremath{\mathcal{Y}}$ and $\ensuremath{\mathcal{N}}$ distributions on Unique Games instances (with $k$ stages) with bias $1/3$, then there is a $c$-qubit protocol that distinguishes between the $\mathsf{YES}$ and $\mathsf{NO}$ distributions of $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ with bias $\Omega(1/k)$. \end{lemma} In order to prove this lemma we need a few definitions and facts. First, let us assume there is a $c$-qubit streaming algorithm $\ensuremath{\mathcal{A}}$ as in Lemma~\ref{lem:reductionfromyesnotobhm}. During the execution of the streaming protocol on instances from the $\ensuremath{\mathcal{Y}}$ and $\ensuremath{\mathcal{N}}$ distributions, let the memory content after receiving the $i$th stage constraints be given by the $c$-qubit quantum states $\ket{\phi_i^\ensuremath{\mathcal{Y}}}$ and $\ket{\phi_i^\ensuremath{\mathcal{N}}}$, respectively.\footnote{Without loss of generality, we assume they are pure states -- this only affects the cost of the protocol by a constant factor (since one can always purify mixed quantum states by doubling the dimension).} Assume that $|\phi_0^\ensuremath{\mathcal{Y}}\rangle = |\phi_0^\ensuremath{\mathcal{N}}\rangle = \ket{0}$. Using the notion of informative index from~\cite[Definition~6.2]{kapralov2014streaming}, we say an index $j\in \{0,\ldots,k-1\}$ is $\delta$-informative if $$ \big\|\ket{\phi^\ensuremath{\mathcal{Y}}_{j+1}}-\ket{\phi^\ensuremath{\mathcal{N}}_{j+1}}\big\|_1\geq \big\|\ket{\phi^\ensuremath{\mathcal{Y}}_{j}}-\ket{\phi^\ensuremath{\mathcal{N}}_{j}}\big\|_1+\delta. $$ With this definition it is not hard to see the following fact, which follows from a simple triangle inequality. \begin{fact} \label{fact:informativeindex} If there exists a streaming protocol for distinguishing the $\ensuremath{\mathcal{Y}},\ensuremath{\mathcal{N}}$ distributions with advantage at least $1/3$, then there exists an $\Omega(1/k)$-informative index. \end{fact} Suppose $j^*$ is an $\Omega(1/k)$-informative index for the streaming protocol $\ensuremath{\mathcal{A}}$. Using this we devise a communication protocol for $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ with bias $\Omega(1/k)$ as follows: suppose Alice has a string $x\in \ensuremath{\mathbb{Z}}_r^n$ and Bob has $w\in \ensuremath{\mathbb{Z}}_r^{\alpha n/t}$ and a hypermatching $M\in\mathcal{M}_{t,n}^\alpha$.
\begin{enumerate} \item Alice samples $j^*$ many $\alpha$-partial $t$-hypermatchings and runs the streaming algorithm $\ensuremath{\mathcal{A}}$ on Unique Games constraints for the first $j^*$ stages that follow the $\ensuremath{\mathcal{Y}}$ distribution with $z=x$. She then sends the memory contents after these $j^\ast$ stages to Bob. \item Bob assigns the constraints $\sum_{i\in e}x_i=w_e$, where $e\in M$, according to his inputs $w,M$. He then continues running $\ensuremath{\mathcal{A}}$ on these constraints as the $(j^*+1)$th~stage. Let $\ket{s}$ be the quantum state that Bob gets after running $\mathcal{A}$. \item Let $\ket{\phi^\ensuremath{\mathsf{YES}}}$ and $\ket{\phi^\ensuremath{\mathsf{NO}}}$ be the resulting quantum states under the two cases, depending on $w$'s distribution (these can be computed by Bob since $\mathcal{A}$ is known). Bob can distinguish between $\ket{\phi^\ensuremath{\mathsf{YES}}}$ and $\ket{\phi^\ensuremath{\mathsf{NO}}}$ with bias $\frac{1}{2}\big\||\phi^\ensuremath{\mathsf{YES}}\rangle-|\phi^\ensuremath{\mathsf{NO}}\rangle\big\|_1$ by measuring the state $|s\rangle$ with a suitable POVM, according to Lemma~\ref{lem:lem3.5.c3}. \end{enumerate} We are now ready to prove Lemma~\ref{lem:reductionfromyesnotobhm}. \begin{proof}[Proof of Lemma~\ref{lem:reductionfromyesnotobhm}] We argue that the above protocol achieves an $\Omega(1/k)$ bias in distinguishing between the $\mathsf{YES}$ and $\mathsf{NO}$ distributions from $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$. To this end, let $U$ be the unitary that maps the quantum state after stage $j^*$ and the constraints of stage $j^*+1$ (which are classical) to the quantum state after stage $j^*+1$. Thus we have $\ket{\phi^\ensuremath{\mathsf{YES}}}=\ket{\phi^\ensuremath{\mathcal{Y}}_{j^*+1}}=U\ket{\phi^\ensuremath{\mathcal{Y}}_{j^*},C^\ensuremath{\mathcal{Y}}}$ and $\ket{\phi^\ensuremath{\mathsf{NO}}}=U\ket{\phi^\ensuremath{\mathcal{Y}}_{j^*},C^\ensuremath{\mathcal{N}}}$, where $C^\ensuremath{\mathcal{Y}}$ and $C^\ensuremath{\mathcal{N}}$ are the constraints corresponding to the $\ensuremath{\mathsf{YES}}$ and $\ensuremath{\mathsf{NO}}$ distributions, respectively, and, similarly, we have $\ket{\phi^\ensuremath{\mathcal{N}}_{j^*+1}}=U\ket{{\phi^\ensuremath{\mathcal{N}}_{j^*}},C^\ensuremath{\mathcal{N}}}$. Then, we have \begin{align*} \big\|\ket{\phi^\ensuremath{\mathsf{YES}}}-\ket{\phi^\ensuremath{\mathsf{NO}}}\big\|_1 &\geq \big\|\ket{\phi^\ensuremath{\mathcal{Y}}_{j^*+1}}-\ket{\phi^\ensuremath{\mathcal{N}}_{j^*+1}}\big\|_1- \big\|\ket{\phi^\ensuremath{\mathsf{NO}}}-\ket{\phi^\ensuremath{\mathcal{N}}_{j^*+1}}\big\|_1\\ &\geq \big\|\ket{\phi^\ensuremath{\mathcal{Y}}_{j^*+1}}-\ket{\phi^\ensuremath{\mathcal{N}}_{j^*+1}}\big\|_1 - \big\|\ket{\phi^\ensuremath{\mathcal{Y}}_{j^*}}-\ket{\phi^\ensuremath{\mathcal{N}}_{j^*}}\big\|_1 \geq \Omega(1/k), \end{align*} % where the second inequality used that $\big\|\ket{\phi^\ensuremath{\mathsf{NO}}}-\ket{\phi^\ensuremath{\mathcal{N}}_{j^*+1}}\big\|_1 = \big\|U\ket{\phi^\ensuremath{\mathcal{Y}}_{j^*},C^\ensuremath{\mathcal{N}}}-U\ket{\phi^\ensuremath{\mathcal{N}}_{j^*},C^\ensuremath{\mathcal{N}}}\big\|_1\leq \|\ket{\phi^\ensuremath{\mathcal{Y}}_{j^*}}-\ket{\phi^\ensuremath{\mathcal{N}}_{j^*}}\|_1$ (since unitaries preserve norms) and the third inequality is because $j^*$ is an informative index. Hence in Step (3) of the procedure above, the bias of Bob in obtaining the right outcome is~$\Omega(1/k)$.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:streaminguglowerbound}] Finally, by picking $k=O(r(\log r)t/(\alpha\varepsilon^2))$ in order to invoke Lemma~\ref{lem:NOdistribution} and using our lower bound in Theorem~\ref{thm:hypermatchinglowerbound} with $\alpha= O(1)$, we get our desired lower bound of \[ \Omega(r^{-(1+1/t)}(k^2\alpha)^{-2/t}(n/t)^{1-2/t})=\Omega((n/t)^{1-2/t}). \qedhere \] \end{proof} It is possible to prove a classical version of Theorem~\ref{thm:streaminguglowerbound}. \begin{theorem} \label{thm:streaminguglowerbound2} Let $r,t\geq 2$ be integers. Every classical streaming algorithm giving an $(r-\varepsilon)$-approximation for Unique Games on hypergraphs (as in Definition~\ref{def:UGhyper}) with hyperedges of size at most $t$ and alphabet size $r$, with success probability at least $2/3$ over its internal randomness, needs $\Omega((n/t)^{1-1/t})$ space (which hides the dependence on $r,\varepsilon$). \end{theorem} \begin{proof} Since the proof is very similar to the one of Theorem~\ref{thm:streaminguglowerbound}, we shall just point out the few required modifications. The main idea is still to reduce a streaming algorithm for Unique Games to a communication protocol for $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$. The distributions $\ensuremath{\mathcal{Y}}$ and $\ensuremath{\mathcal{N}}$ on the Unique Games inputs are the same. Let $S_i^\ensuremath{\mathcal{Y}}$ and $S_i^\ensuremath{\mathcal{N}}$ be the memory after receiving the $i$th stage constraints. The notion of informative index is similarly defined for $S_i^\ensuremath{\mathcal{Y}}$ and $S_i^\ensuremath{\mathcal{N}}$, i.e., an index $j\in \{0,\ldots,k-1\}$ is $\delta$-informative if $$ \big\|S_{j+1}^\ensuremath{\mathcal{Y}} - S_{j+1}^\ensuremath{\mathcal{N}}\big\|_{\text{tvd}}\geq \big\|S_j^\ensuremath{\mathcal{Y}}-S_j^\ensuremath{\mathcal{N}}\big\|_{\text{tvd}}+\delta. $$ The communication protocol for $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ is basically the same as in the quantum case, using an $\Omega(1/k)$-informative index $j^\ast$. At the end of Step (2), Bob gets the memory $s$. Let $S^\ensuremath{\mathsf{YES}}$ and $S^\ensuremath{\mathsf{NO}}$ be the resulting memory distributions under the two cases depending on $w$'s distribution. Bob outputs $1$ if $\operatorname{Pr}[S^\ensuremath{\mathsf{YES}} = s] \geq \operatorname{Pr}[S^\ensuremath{\mathsf{NO}} = s]$, and $0$ otherwise. The bias of distinguishing between $S^\ensuremath{\mathsf{YES}}$ and $S^\ensuremath{\mathsf{NO}}$ is $\frac{1}{2}\|S^\ensuremath{\mathsf{YES}} - S^\ensuremath{\mathsf{NO}}\|_{\text{tvd}}$, which can be shown to be at least $\Omega(1/k)$, similarly to the quantum case. Indeed, let $f$ be the function that maps the memory after stage $j^\ast$ and the constraints $C^\ensuremath{\mathcal{Y}}$ or $C^\ensuremath{\mathcal{N}}$ of stage $(j^\ast+1)$ to the memory after stage $(j^\ast+1)$. Then $S^\ensuremath{\mathsf{YES}} = S^\ensuremath{\mathcal{Y}}_{j^\ast+1} = f(S^\ensuremath{\mathcal{Y}}_{j^\ast},C^\ensuremath{\mathcal{Y}})$ and $S^\ensuremath{\mathsf{NO}} = f(S^\ensuremath{\mathcal{Y}}_{j^\ast},C^\ensuremath{\mathcal{N}})$.
By using Lemma~\ref{lem:functionbias} below, we can show that % \begin{align*} \big\|S^\ensuremath{\mathsf{YES}} - S^\ensuremath{\mathsf{NO}}\big\|_{\text{tvd}} &\geq \big\|S^\ensuremath{\mathcal{Y}}_{j^\ast+1} - S^\ensuremath{\mathcal{N}}_{j^\ast+1}\big\|_{\text{tvd}} - \big\|S^\ensuremath{\mathsf{NO}} - S^\ensuremath{\mathcal{N}}_{j^\ast+1}\big\|_{\text{tvd}}\\ &= \big\|S^\ensuremath{\mathcal{Y}}_{j^\ast+1} - S^\ensuremath{\mathcal{N}}_{j^\ast+1}\big\|_{\text{tvd}} - \big\|f(S^\ensuremath{\mathcal{Y}}_{j^\ast},C^\ensuremath{\mathcal{N}}) - f(S^\ensuremath{\mathcal{N}}_{j^\ast},C^\ensuremath{\mathcal{N}})\big\|_{\text{tvd}}\\ &\geq \big\|S^\ensuremath{\mathcal{Y}}_{j^\ast+1} - S^\ensuremath{\mathcal{N}}_{j^\ast+1}\big\|_{\text{tvd}} - \big\|S^\ensuremath{\mathcal{Y}}_{j^\ast} - S^\ensuremath{\mathcal{N}}_{j^\ast}\big\|_{\text{tvd}} \geq \Omega(1/k). \end{align*} % \begin{lemma}[{\cite[Claim 6.5]{kapralov2014streaming}}] \label{lem:functionbias} Let $X,Y$ be two random variables and $W$ be independent of $(X,Y)$. Then, for any function $f$, % \begin{align*} \|f(X,W) - f(Y,W)\|_{\operatorname{tvd}} \leq \|X-Y\|_{\operatorname{tvd}}. \end{align*} \end{lemma} Therefore Bob can distinguish between $S^\ensuremath{\mathsf{YES}}$ and $S^\ensuremath{\mathsf{NO}}$ with bias at least $\Omega(1/k)$, meaning that there is a $c$-bit protocol that distinguishes between the $\mathsf{YES}$ and $\mathsf{NO}$ distributions of $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$ with bias $\Omega(1/k)$. By picking $k=O(r(\log r)t/(\alpha\varepsilon^2))$ in order to invoke Lemma~\ref{lem:NOdistribution} and using the classical lower bound on $r\text{-}\ensuremath{\mathsf{HH}}(\alpha,t,n)$, we get our desired lower bound of \[ \Omega(r^{-1}(k^4\alpha)^{-1/t}(n/t)^{1-1/t})=\Omega((n/t)^{1-1/t}). \qedhere \] \end{proof} \section{Locally Decodable Codes} In this section we prove our lower bound on locally decodable codes over $\ensuremath{\mathbb{Z}}_r$. Before that, let us first formally define an $\mathsf{LDC}$. \begin{definition}[Locally decodable code] A $(q,\delta,\varepsilon)$-locally decodable code over $\ensuremath{\mathbb{Z}}_r$ is a function $C:\ensuremath{\mathbb{Z}}_r^n\rightarrow \ensuremath{\mathbb{Z}}_r^N$ that satisfies the following: there exists a (randomized) decoding algorithm $\ensuremath{\mathcal{A}}$ such that, for every $x\in \ensuremath{\mathbb{Z}}_r^n$, $i\in [n]$, and any input $y\in \ensuremath{\mathbb{Z}}_r^N$ that satisfies $d(y,C(x))\leq \delta N$, the algorithm makes $q$ queries to $y$ non-adaptively and outputs a number $\ensuremath{\mathcal{A}}^{y}(i)\in \ensuremath{\mathbb{Z}}_r$ that satisfies $\Pr[\ensuremath{\mathcal{A}}^{y}(i)=x_i]\geq 1/r+\varepsilon$ (where the probability is taken only over the randomness of $\ensuremath{\mathcal{A}}$). \end{definition} As is often the case when proving $\mathsf{LDC}$ lower bounds, we use the useful fact proven by Katz and Trevisan~\cite{katz2000efficiency} that, without loss of generality, one can assume that an $\mathsf{LDC}$ is smooth, i.e., the queries made by $\ensuremath{\mathcal{A}}$ have ``reasonable'' probability over all indices, and that $\ensuremath{\mathcal{A}}$ makes queries to a codeword (and not a corrupted codeword). We first formally define a smooth code below.
\begin{definition}[Smooth code] We say $C:\ensuremath{\mathbb{Z}}_r^n\rightarrow \ensuremath{\mathbb{Z}}_r^N$ is a $(q,c,\varepsilon)$-smooth code if there exists a decoding algorithm $\ensuremath{\mathcal{A}}$ that satisfies the following: for every $x\in \ensuremath{\mathbb{Z}}_r^n$ and $i\in [n]$, $\ensuremath{\mathcal{A}}$ makes at most $q$ non-adaptive queries to $C(x)$ and outputs $\ensuremath{\mathcal{A}}^{C(x)}(i)\in \ensuremath{\mathbb{Z}}_r$ such that $\Pr[\ensuremath{\mathcal{A}}^{C(x)}(i)=x_i]\geq 1/r+\varepsilon$ (where the probability is only taken over the randomness of $\ensuremath{\mathcal{A}}$). Moreover, for every $x\in \ensuremath{\mathbb{Z}}_r^n$, $i\in [n]$ and $j\in [N]$, on input $i$, the probability that $\ensuremath{\mathcal{A}}$ queries the index $j$ in $C(x)\in \ensuremath{\mathbb{Z}}_r^N$ is at most $c/N$. \end{definition} Crucially note that smooth codes only require a decoder to recover $x_i$ when given access to an actual codeword, unlike the standard definition of $\mathsf{LDC}$ where a decoder is given a noisy codeword. With this definition in hand, we state a theorem of Katz and Trevisan. \begin{theorem}[\cite{katz2000efficiency}] \label{thr:ldc-smoothcode-equivalence} A $(q,\delta,\varepsilon)$-$\mathsf{LDC}$ $C:\ensuremath{\mathbb{Z}}_r^n\rightarrow \ensuremath{\mathbb{Z}}_r^N$ is a $(q,q/\delta,\varepsilon)$-smooth code. \end{theorem} \noindent We remark that a converse to this theorem holds: a $(q, c, \varepsilon)$-smooth code is a $(q,\delta,\varepsilon - c\delta)$-$\mathsf{LDC}$, since the probability that the decoder queries one of $\delta N$ corrupted positions is at most $(c/N)(\delta N) = c\delta$. \subsection{Smooth codes over large alphabets} Katz and Trevisan~\cite{katz2000efficiency} observed that a $(q,c,\varepsilon)$-smooth code over $\{0,1\}$ is a $(q,q,\varepsilon^2/2c)$-smooth code that is good on \emph{average}, i.e., that there is a decoder $\ensuremath{\mathcal{A}}$ such that, for all $i\in[n]$, \begin{align*} \frac{1}{2^n}\sum_{x\in\{0,1\}^n}\operatorname{Pr}[\ensuremath{\mathcal{A}}^{C(x)}(i)=x_i] \geq \frac{1}{2} + \frac{\varepsilon^2}{2c}. \end{align*} This comes from the observation that a $q$-decoder can partition the set $[N]$ into $q$-tuples, pick one such tuple uniformly at random and continue as the original decoder by querying the elements of the picked tuple, at the cost of a slightly worse success probability. These ideas are formally explained in the result below, where we already generalize them to large alphabets $\ensuremath{\mathbb{Z}}_r$ (the overall presentation is inspired by~\cite[Theorem~15]{ben2008hypercontractive}). \begin{theorem} \label{thr:ldc1} Suppose $C: \ensuremath{\mathbb{Z}}_r^n \to \ensuremath{\mathbb{Z}}_r^N$ is a $(q,c,\varepsilon)$-smooth code. Then for every $i\in[n]$, there exists a set $M_i$ consisting of at least $\varepsilon N/(2cq)$ disjoint sets of at most $q$ elements of $[N]$ each such that, for every $Q\in M_i$, there exists a function $f_Q : \ensuremath{\mathbb{Z}}_r^{|Q|} \to \ensuremath{\mathbb{Z}}_r$ with the property % \begin{align*} \sum_{k=1}^{r-1}\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\left[\omega_r^{k(f_Q(C(x)_Q) - x_i)}\right] \geq \frac{r}{2}\varepsilon. \end{align*} % Here $C(x)_Q$ is the restriction of $C(x)$ to the bits in $Q$. \end{theorem} \begin{proof} Fix some $i\in[n]$.
In order to decode $x_i$, we can assume, without loss of generality, that the decoder $\ensuremath{\mathcal{A}}$ picks some set $Q\subseteq [N]$ (of at most $q$ indices) with probability $p(Q)$, queries those bits, and then outputs a random variable (not yet a function) $f_Q(C(x)_Q)\in\ensuremath{\mathbb{Z}}_r$ that depends on the query-outputs. Call such a $Q$ ``good'' if % \begin{align*} \frac{1}{r} + \frac{\varepsilon}{2} \leq \operatorname*{Pr}_{x\sim\ensuremath{\mathbb{Z}}_r^n}[f_Q(C(x)_Q) = x_i] = \operatorname*{\mathbb{E}}_{\substack{x\sim\ensuremath{\mathbb{Z}}_r^n \\ k\sim\mathbb{Z}_r}}\left[\omega_r^{k(f_Q(C(x)_Q) - x_i)}\right] \iff \frac{r}{2}\varepsilon \leq \sum_{k=1}^{r-1}\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\left[\omega_r^{k(f_Q(C(x)_Q) - x_i)}\right]. \end{align*} % Now construct the hypergraph $H_i = (V, E_i)$ with $V = [N]$ and edge-set $E_i$ consisting of all good sets $Q$. The probability that the decoder queries any $Q\in E_i$ is $p(E_i) := \sum_{Q\in E_i} p(Q)$. If it queries some $Q\in E_i$, then % \begin{align*} \operatorname*{Pr}_{x\sim\ensuremath{\mathbb{Z}}_r^n}[f_Q(C(x)_Q) = x_i] \leq 1 \iff \sum_{k=1}^{r-1}\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\left[\omega_r^{k(f_Q(C(x)_Q) - x_i)}\right] \leq r-1, \end{align*} % and if it queries some $Q\notin E_i$, then $\sum_{k=1}^{r-1}\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\big[\omega_r^{k(f_Q(C(x)_Q) - x_i)}\big] < \frac{r}{2}\varepsilon$. Given the smooth code property of outputting $x_i$ with probability at least $\frac{1}{r} + \varepsilon$ for every $x$, we have % \begin{align*} r\varepsilon \leq \sum_{k=1}^{r-1}\operatorname*{\mathbb{E}}_{x,Q}\left[\omega_r^{k(f_Q(C(x)_Q) - x_i)}\right] < p(E_i)(r-1) + (1-p(E_i))\frac{r}{2}\varepsilon = \frac{r}{2}\varepsilon + p(E_i)\left(r-1-\frac{r}{2}\varepsilon\right), \end{align*} % hence % \begin{align*} p(E_i) > \frac{\varepsilon}{2-2/r-\varepsilon} \geq \frac{\varepsilon}{2}. \end{align*} % Since $C$ is also smooth, for every $j\in[N]$ we have % \begin{align*} \sum_{Q\in E_i:j\in Q} p(Q) \leq \sum_{Q:j\in Q}p(Q) = \operatorname{Pr}[\ensuremath{\mathcal{A}}~\text{queries}~j] \leq \frac{c}{N}. \end{align*} % Let $M_i$ be a matching in $H_i$ of maximal size. We want to show that $|M_i|\geq \varepsilon N/(2cq)$. To do so, define $T := \bigcup_{Q\in M_i} Q$. Observe that the set $T$ has at most $q|M_i|$ elements, and intersects each $Q\in E_i$ (otherwise $M_i$ would not be maximal). The size of $M_i$ can be lower bounded as follows: % \begin{align*} \frac{\varepsilon}{2} < p(E_i) = \sum_{Q:Q\in E_i}p(Q) \overset{(a)}{\leq} \sum_{j\in T}\sum_{Q\in E_i: j\in Q} p(Q) \leq \frac{c|T|}{N} \leq \frac{cq|M_i|}{N}, \end{align*} % where (a) holds because each $Q\in E_i$ is counted exactly once on the left and at least once on the right (since $T$ intersects each $Q\in E_i$). Hence $|M_i| \geq \varepsilon N/(2cq)$. Finally, the random variables $f_Q(C(x)_Q)$ can be fixed to deterministic values in $\ensuremath{\mathbb{Z}}_r$ (making each $f_Q$ an actual function) without reducing the probability $\operatorname{Pr}_{x\sim\ensuremath{\mathbb{Z}}_r^n}[f_Q(C(x)_Q) = x_i]$. \end{proof} As previously mentioned, the above result tells us that a decoder can focus its queries on one of the $q$-tuples $Q$. We can go one step further and show that the decoder can restrict itself to computing a linear function of the queried bits while still maintaining a good correlation with the target bit $x_i$, at the cost of decreasing the average success probability.
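Both the proof above and the refinement below repeatedly exploit the standard fact that, over $\ensuremath{\mathbb{Z}}_r$, the indicator $[a=b]$ equals the average of $\omega_r^{k(a-b)}$ over $k\in\ensuremath{\mathbb{Z}}_r$, which lets success probabilities be rewritten as correlations with powers of $\omega_r$. As a quick numerical sanity check of this identity, here is a throwaway Python sketch (the value of $r$ and all names are our own illustration):
\begin{verbatim}
import cmath

r = 5
omega = cmath.exp(2j * cmath.pi / r)  # primitive r-th root of unity

def indicator(a, b):
    # [a == b (mod r)] written as an average of r-th roots of unity.
    return sum(omega ** (k * (a - b)) for k in range(r)) / r

for a in range(r):
    for b in range(r):
        expected = 1.0 if a % r == b % r else 0.0
        assert abs(indicator(a, b) - expected) < 1e-9
\end{verbatim}
The geometric sum vanishes whenever $a\not\equiv b~(\text{mod}~r)$, which is exactly why the $k=0$ term can be split off to pass between $\operatorname{Pr}[f_Q(C(x)_Q)=x_i]$ and the sums over $k\in\{1,\dots,r-1\}$ used above.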
\begin{theorem} \label{thr:ldc2} Suppose $C: \ensuremath{\mathbb{Z}}_r^n \to \ensuremath{\mathbb{Z}}_r^N$ is a $(q,c,\varepsilon)$-smooth code. Then for every $i\in[n]$, there exists a set $M_i$ consisting of at least $\varepsilon N/(2cq)$ disjoint sets of at most $q$ elements of $[N]$ each such that, for every $Q\in M_i$, % \begin{align*} \sum_{k=1}^{r-1}\sum_{S\in\ensuremath{\mathbb{Z}}_r^{|Q|}}\left|\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\left[\omega_r^{S\cdot C(x)_Q - kx_i}\right]\right| \geq \frac{\varepsilon r}{2}. \end{align*} % Here $C(x)_Q$ is the restriction of $C(x)$ to the bits in $Q$. \end{theorem} \begin{proof} Fix $i\in[n]$ and take the set $M_i$ produced by Theorem~\ref{thr:ldc1}. For every $Q\in M_i$ we have % \begin{align*} \sum_{k=1}^{r-1}\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\left[\omega_r^{k(f_Q(C(x)_Q) - x_i)}\right] \geq \frac{\varepsilon r}{2}. \end{align*} % For $k\in\{1,\dots,r-1\}$, define the function $h_{Q,k}:\ensuremath{\mathbb{Z}}_r^{|Q|}\to\mathbb{C}$ by $h_{Q,k}(x) = \omega_r^{kf_Q(x)}$. Consider its Fourier transform $\widehat{h}_{Q,k}: \ensuremath{\mathbb{Z}}_r^{|Q|}\to\mathbb{C}$. Hence we can write % \begin{align*} h_{Q,k}(x) = \sum_{S\in\ensuremath{\mathbb{Z}}_r^{|Q|}}\widehat{h}_{Q,k}(S)\omega_r^{S\cdot x}. \end{align*} % Finally, using that $|\widehat{h}_{Q,k}(S)|\in[0,1]$ for all $S\in\ensuremath{\mathbb{Z}}_r^{|Q|}$, we can upper bound $\varepsilon r/2$ by % \[ \sum_{k=1}^{r-1}\operatorname*{\mathbb{E}}_{x}\left[\omega_r^{k(f_Q(C(x)_Q) - x_i)}\right] = \sum_{S\in\ensuremath{\mathbb{Z}}_r^{|Q|}}\sum_{k=1}^{r-1}\widehat{h}_{Q,k}(S)\operatorname*{\mathbb{E}}_{x}\left[\omega_r^{S\cdot C(x)_Q-kx_i}\right] \leq \sum_{S\in\ensuremath{\mathbb{Z}}_r^{|Q|}}\sum_{k=1}^{r-1}\left|\operatorname*{\mathbb{E}}_{x}\left[\omega_r^{S\cdot C(x)_Q-kx_i}\right]\right|.\qedhere \] \end{proof} \subsection{An exponential lower bound for $\mathsf{LDC}$s} In this section, we use our results on matrix-valued hypercontractivity to obtain our lower bound for $\mathsf{LDC}$s over $\ensuremath{\mathbb{Z}}_r$. \begin{theorem} \label{thm:2querylowerboundhyper} If $C:\ensuremath{\mathbb{Z}}_r^n\to\ensuremath{\mathbb{Z}}_r^N$ is a $(2,\delta,\varepsilon)$-$\mathsf{LDC}$, then $N=2^{\Omega(\delta^2\varepsilon^4 n/r^4)}$. \end{theorem} \begin{proof} In this proof we shall use the normalized Schatten norm. Fix $x\in \ensuremath{\mathbb{Z}}_r^n$. Define the vector $v_x\in \ensuremath{\mathbb{C}}^{r^2N}$ by $$ v_x=\big(1,\dots,1,\omega_r^{C(x)_1},\ldots,\omega_r^{C(x)_N}, \omega_r^{2 C(x)_1},\ldots,\omega_r^{2 C(x)_N},\ldots,\omega_r^{(r-1) C(x)_1},\ldots,\omega_r^{(r-1) C(x)_N} \big), $$ where each sequence $\omega_r^{jC(x)_1},\dots,\omega_r^{jC(x)_N}$ is repeated $r$ times consecutively. Let $R:=r^2N$ and define the $R\times R$ symmetric matrix $f(x):=v_x^{\operatorname{T}}\cdot v_x$ whose $(N(r j_1+m_1)+\ell_1,N(rj_2+m_2)+\ell_2)$-entry is $\omega_r^{j_1C(x)_{\ell_1} + j_2C(x)_{\ell_2}}$, where $j_1,j_2,m_1,m_2\in\ensuremath{\mathbb{Z}}_r$ and $\ell_1,\ell_2\in[N]$ (note that there are $r$ repeated entries $\omega_r^{j_1C(x)_{\ell_1} + j_2C(x)_{\ell_2}}$ in each row and column). Since $f(x)$ has rank $1$ and its $R^2$ entries have absolute value $1$, its only non-zero singular value is $R$. Hence $\|f(x)\|_p^p = R^{p-1}$ for every $x\in\ensuremath{\mathbb{Z}}_r^n$. Fix $i\in[n]$.
For every $k\in\{1,\dots,r-1\}$ consider the $R\times R$ matrices $\widehat{f}(0^{i-1}k0^{n-i})$ that are the Fourier transform of $f$ at the strings in $\ensuremath{\mathbb{Z}}_r^n$ which are zero in all but the $i$th coordinate: % \begin{align*} \widehat{f}(0^{i-1}k0^{n-i}) = \frac{1}{r^n}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}f(x)\omega_r^{-kx_i}. \end{align*} % We shall lower bound $\sum_{k=1}^{r-1}\big\|\widehat{f}(0^{i-1}k0^{n-i})\big\|_p^p$. By Theorem~\ref{thr:ldc2}, there is a set $M_i$ consisting of at least $\delta \varepsilon N/8$ disjoint sets of indices in $[N]$, each with cardinality at most $2$,\footnote{Here we used Theorem~\ref{thr:ldc-smoothcode-equivalence} in order to invoke Theorem~\ref{thr:ldc2} with $c=q/\delta$.} such that $\sum_{k=1}^{r-1}\sum_{S\in\ensuremath{\mathbb{Z}}_r^{|Q|}}\big|{\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}}\big[\omega_r^{S\cdot C(x)_Q - kx_i}\big]\big| \in[\varepsilon r/2,r^{|Q|}(r-1)]$. Given $S=(S_1,S_2)$, consider $Q=(Q_1,Q_2)\in M_i$\footnote{If $Q$ is a singleton, take $Q=(Q_1,Q_1)$ and $S=(S_1,0)$.} and the following $2\times 2$ submatrix in $f(x)$ % \begin{align*} \begin{pmatrix} \omega_r^{2S_1 C(x)_{Q_1}} & \omega_r^{S_1 C(x)_{Q_1} + S_2 C(x)_{Q_2}}\\ \omega_r^{S_1 C(x)_{Q_1} + S_2 C(x)_{Q_2}} & \omega_r^{2S_2 C(x)_{Q_2}} \end{pmatrix}. \end{align*} % Observe that this submatrix clearly exists in $f(x)$, and comes from the rows and columns $N(rS_1 +m_1) + Q_1$ and $N(rS_2 +m_2) + Q_2$ for any $m_1,m_2\in\ensuremath{\mathbb{Z}}_r$. In particular, we can take $m_1=S_2$ and $m_2=S_1$, so that such a submatrix does not have overlapping rows or columns with any other submatrix similarly defined from different $S'$ or $Q'$. Hence the corresponding $2\times 2$ submatrix of $\widehat{f}(0^{i-1}k0^{n-i})$ is % \begin{align*} \begin{pmatrix} \alpha & \operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\big[\omega_r^{S\cdot C(x)_Q - kx_i}\big]\\ \operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\big[\omega_r^{S\cdot C(x)_Q - kx_i}\big] & \beta \end{pmatrix}, \end{align*} % for some $\alpha,\beta\in\mathbb{C}$ (in this proof we will not be concerned with the value of $\alpha,\beta$). Let $P$ be the $R\times R$ permutation matrix that, for every $Q=(Q_1,Q_2)$ and $S = (S_1,S_2)$, swaps rows $N(rS_1 + S_2) + Q_1$ and $N(rS_2 + S_1) + Q_2$. We define the matrices $F_i(k) := P\widehat{f}(0^{i-1}k0^{n-i})$ for $k\in\{1,\dots,r-1\}$. Because we previously chose $m_1=S_2$ and $m_2=S_1$, for each of the at least $\delta\varepsilon N/8$ sets $Q\in M_i$, $F_i(k)$ has diagonal entries $\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\big[\omega_r^{S\cdot C(x)_Q - kx_i}\big]$ for all $S\in\ensuremath{\mathbb{Z}}_r^{|Q|}$ (each entry is repeated twice). In other words, $F_i(k)$ has at least $\delta\varepsilon Nr^2/4$ entries $\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\big[\omega_r^{S\cdot C(x)_Q - kx_i}\big]$ for $Q\in M_i$ and $S\in\ensuremath{\mathbb{Z}}_r^{|Q|}$. The Schatten norm $\|\cdot\|_p$ is \emph{unitarily invariant}: $\|UAV\|_p = \|A\|_p$ for every matrix $A$ and unitaries $U,V$. We shall use the following lemma. Its proof is left to the end of the section. % \begin{lemma}[{\cite[Eq. (IV.52)]{bhatia2013matrix}}] \label{lem:ldc3} Let $\|\cdot\|$ be a unitarily-invariant norm on $\mathbb{C}^{d\times d}$.
If $A\in\mathbb{C}^{d\times d}$ and $\operatorname{diag}(A)$ is the matrix obtained from $A$ by setting its off-diagonal entries to $0$, then $\|{\operatorname{diag}}(A)\| \leq \|A\|$. \end{lemma} Using this lemma, we obtain % \begin{align*} \sum_{k=1}^{r-1}\left\|\widehat{f}(0^{i-1}k0^{n-i})\right\|_p^p = \sum_{k=1}^{r-1}\|F_i(k)\|_p^p &\geq \sum_{k=1}^{r-1}\|\text{diag}(F_i(k))\|_p^p \geq \frac{2}{R}\sum_{k=1}^{r-1}\sum_{Q\in M_i}\sum_{S\in\ensuremath{\mathbb{Z}}_r^{|Q|}}\left|\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\big[\omega_r^{S\cdot C(x)_Q - kx_i}\big]\right|^p, \end{align*} % but, by H\"{o}lder's inequality, % \begin{align*} \sum_{k=1}^{r-1}\sum_{S\in\ensuremath{\mathbb{Z}}_r^{|Q|}}\left|\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\big[\omega_r^{S\cdot C(x)_Q - kx_i}\big]\right|^p \geq \frac{1}{r^{3(p-1)}} \left(\sum_{k=1}^{r-1}\sum_{S\in\ensuremath{\mathbb{Z}}_r^{|Q|}}\left|\operatorname*{\mathbb{E}}_{x\sim\ensuremath{\mathbb{Z}}_r^n}\big[\omega_r^{S\cdot C(x)_Q - kx_i}\big]\right|\right)^p \geq \frac{1}{r^{3(p-1)}}\left(\frac{\varepsilon r}{2}\right)^{p}, \end{align*} % hence % \begin{align*} \sum_{k=1}^{r-1}\left\|\widehat{f}(0^{i-1}k0^{n-i})\right\|_p^p \geq \frac{2}{R}\frac{\delta\varepsilon N}{8}\frac{1}{r^{3(p-1)}}\left(\frac{\varepsilon r}{2}\right)^{p} = \frac{\delta\varepsilon}{4r^{2p-1}}\left(\frac{\varepsilon}{2}\right)^p, \end{align*} % which implies % \begin{align*} \sum_{k=1}^{r-1}\left\|\widehat{f}(0^{i-1}k0^{n-i})\right\|_p^2 \geq \frac{1}{r^{2/p-1}}\left(\sum_{k=1}^{r-1}\left\|\widehat{f}(0^{i-1}k0^{n-i})\right\|_p^p\right)^{2/p} \geq \frac{1}{r^3}\left(\frac{\delta \varepsilon}{4}\right)^{2/p} \left(\frac{\varepsilon}{2}\right)^2, \end{align*} % where we used H\"{o}lder's inequality again. Now, using the hypercontractive inequality, we have for any $p\in[1,2]$ that % \begin{align*} n(p-1)\frac{1}{r^4}\left(\frac{\delta \varepsilon}{4}\right)^{2/p}\left(\frac{\varepsilon}{2}\right)^2 \leq \sum_{i=1}^n\sum_{k=1}^{r-1} \frac{p-1}{r-1}\left\|\widehat{f}(0^{i-1}k0^{n-i})\right\|_p^2 \leq \left(\frac{1}{r^n}\sum_{x\in\ensuremath{\mathbb{Z}}_r^n}\|f(x)\|_p^p\right)^{2/p} = R^{2(p-1)/p}. \end{align*} % Choosing $p=1+1/\log{R}$ gives us % \begin{align*} \frac{n}{\log{R}}\frac{1}{r^4}\left(\frac{\delta \varepsilon}{4}\right)^{2}\left(\frac{\varepsilon}{2}\right)^2 \leq R^{2/(1+\log{R})} = 4^{\log{R}/(1+\log{R})} \leq 4 \implies R \geq 2^{\delta^2\varepsilon^4 n/(2^{8}r^4)} = 2^{\Omega(\delta^2\varepsilon^4 n/r^4)}. \end{align*} Since $R=r^2N$, we have the desired lower bound by adjusting the constant in the $\Omega(\cdot)$ in the exponent. \end{proof} \begin{myproof}{Lemma~\ref{lem:ldc3}} The proof sets the off-diagonal entries of $A$ to $0$ recursively without increasing its norm. Start with the off-diagonal entries in the $d$th row and column. Define $D_d$ to be the diagonal matrix with $D_{d,d} = -1$ and $D_{i,i}=1$ for $i<d$. Note that $D_dAD_d$ is the same as $A$, except that the off-diagonal entries of the $d$th row and column are multiplied by $-1$. Hence $A_{d-1} := (A+D_dAD_d)/2$ is the matrix obtained from $A$ by setting those entries to $0$ (this does not affect the diagonal). Since $D_d$ is unitary, by the triangle inequality % \begin{align*} \|A_{d-1}\| = \|(A+D_dAD_d)/2\| \leq \frac{1}{2}(\|A\| + \|D_dAD_d\|) = \|A\|.
\end{align*} % Continuing in this manner for $i=1,\dots,d-1$, we can set the off-diagonal entries in the $(d-i)$th row and column of $A_{d-i}$ to $0$ by using the diagonal matrix $D_{d-i}$, which has a $-1$ only in its $(d-i)$th position, without increasing the norm. \end{myproof} \section{2-server private information retrieval} \label{sec:PIR} As mentioned in the introduction, the connection between $\mathsf{LDC}$s and \textsf{PIR} has been well known since the results of~\cite{katz2000efficiency,goldreich2002lower}. In general, upper bounds on $\mathsf{LDC}$s are derived via \textsf{PIR} schemes, which in turn means that our $\mathsf{LDC}$ lower bounds translate to $\textsf{PIR}$ lower bounds, as we illustrate below. We first define the notion of private information retrieval. \begin{definition} A one-round, $(1-\delta)$-secure, $k$-server private information retrieval $\mathsf{(PIR)}$ scheme with recovery probability $1/r+\varepsilon$, query size $t$ and answer size $a$, consists of a randomized user and $k$ deterministic algorithms $S_1,\ldots,S_k$ (the servers) that satisfy the following: % \begin{enumerate} \item On input $i\in [n]$, the user produces $k$ queries $q_1,\ldots,q_k\in \ensuremath{\mathbb{Z}}_r^t$ and sends them to the $k$ servers respectively. The servers reply with a string $a_j=S_j(x,q_j)\in\ensuremath{\mathbb{Z}}_r^a$, and based on $a_1,\ldots,a_k$ and $i$, the user outputs $b\in \ensuremath{\mathbb{Z}}_r$. \item For every $x\in \ensuremath{\mathbb{Z}}_r^n$ and $i\in [n]$, the output $b$ of the user satisfies $\Pr[b=x_i]\geq 1/r+\varepsilon$. \item For every $x\in \ensuremath{\mathbb{Z}}_r^n$ and $j\in[k]$, the distributions over $q_j$ (over the user's randomness) are $\delta$-close for different $i\in[n]$. \end{enumerate} % We crucially remark that for the lower bounds that we present below, the function $S_j$ could be an arbitrary (not necessarily linear) function over $x_1,\ldots,x_n \in \ensuremath{\mathbb{Z}}_r$. \end{definition} Our \textsf{PIR} lower bound follows almost immediately from the following consequence of Goldreich et al.~\cite[Lemma~5.1]{goldreich2002lower}. In the following we shall assume $\delta = 0$. \begin{lemma}[\cite{goldreich2002lower}] \label{lem:goldreich} If there is a classical $2$-server $\mathsf{PIR}$ scheme with query size $t$, answer size $a$ and recovery probability $1/r+\varepsilon$, then there is a $(2, 3, \varepsilon)$-smooth code $C:\ensuremath{\mathbb{Z}}_r^n\rightarrow (\ensuremath{\mathbb{Z}}_r^a)^m$ with $m \leq 6r^t$. \end{lemma} We remark that Goldreich et al.~\cite{goldreich2002lower} state the lemma above only for $r=2$, but the exact same analysis carries over to the large alphabet case. We now get the following main theorem. \begin{theorem} A classical $2$-server $\mathsf{PIR}$ scheme with query size $t$, answer size $a$ and recovery probability $1/r + \varepsilon$ satisfies $t\geq \Omega\big((\varepsilon^4 n/r^4 - a)/\log{r}\big)$. \end{theorem} \begin{proof} By using Lemma~\ref{lem:goldreich}, there is a $(2,3,\varepsilon)$-smooth code $C:\ensuremath{\mathbb{Z}}_r^n\rightarrow (\ensuremath{\mathbb{Z}}_r^a)^m$ with $m\leq 6r^t$. In order to apply Theorem~\ref{thm:2querylowerboundhyper}, we form a new code $C'$ by transforming each old string $C(x)_j\in\mathbb{F}_r^a$ using the Hadamard code into $C'(x)_j\in\{0,1\}^{2^a}\subseteq\mathbb{Z}_r^{2^a}$. The total length of $C'$ is $m'=m 2^a$.
By using Theorem~\ref{thm:2querylowerboundhyper} on $C'$ (note that the theorem can be applied directly to a smooth code, with the smoothness parameter $c=3$ playing the role of $q/\delta$), this gives us $$ m'\geq 2^{\Omega(\varepsilon^4 n/r^4)}, $$ and since $m=O(r^t)$ and $m'=m2^a$, we get the desired lower bound in the theorem statement. \end{proof} \DeclareRobustCommand{\DE}[3]{#3} \bibliographystyle{alpha}
\section{Introduction} Sustained and secure supply of power is a vital component of a prosperous society. Other essential services, such as water supply, medical infrastructure, communication, transport and so on, are all dependent on the stability of their power supplies. Interruptions or outages in the power supply can have catastrophic consequences, as has been witnessed on many occasions (\textit{e.g.} August 14$^{th}$, 2003 in the U.S. and Canada; September 28$^{th}$, 2003 in Italy; and July 30$^{th}$, 2012 in India) \cite{list}. Characterizations of power grid instabilities and outages, therefore, have been active topics of research for decades in the engineering and physics communities (see \textit{e.g.}, \cite{anatomy,dobson2015,timme12,pahwa14,brummitt12,simonsen08,vaiman12,ji16}). An operating power grid, particularly near its permissible level of capacity, can suffer from large outages triggered by small initial fluctuations or disturbances. For example, a software failure in an early warning management system \cite{report}, a falling tree on a line \cite{report_italy} or overloading by users \cite{blackout_india} caused the above-mentioned blackouts affecting, respectively, about 55, 56 and 620 million people. An amplified response to a small scale perturbation is a prominent signature of system-wide correlations developed near a critical point. A clear example of correlated response in power grids is the distribution function of the outage sizes, determined, for example, by the number of customers left unserved during an outage. While a random failure probability would give an exponentially decaying distribution for outage sizes, in reality the probability $P(S)$ of an outage of size $S$ has a power law tail \cite{2007chaos}: $P(S)\sim S^{-\alpha}$, implying relatively higher probabilities for large outages. This is due to local correlations and causally connected cascades or avalanches of outage events. Statistical analysis of those avalanches, particularly the demonstration of the universality of the exponent value $\alpha$ across different countries \cite{2007chaos}, has led to the identification of the dynamics of power grid avalanches with that of self-organized criticality (SOC) \cite{dobson00,dobson04}. Indeed, a connected set of objects (grid lines) having finite failure thresholds, with drive (customer demand) and dissipation (load unserved), is a suitable system for showing universal collective behavior in a self-organized critical state. Drawing such a parallel allows for investigation of power grid dynamics using the standard tools of SOC developed over decades \cite{bak}. Furthermore, it also puts power grids in the generic class of driven dissipative systems having intermittent activities or avalanches with scale-free size distributions. For example, this is reminiscent of the Gutenberg-Richter-like law seen on various scales, originally for earthquake statistics \cite{gr}, and also for stressed brittle solids \cite{main_kun}, sheared granular media \cite{takahiro} and so on. While in the steady state the time series of the avalanches show scale-free size distributions with a universal exponent value, a common observation in these systems is the variation of this exponent value with the `load' on the system. Load is in general considered here to be the relevant driving field, e.g. tectonic stress for earthquakes, compressive stress for fracture experiments and so on.
Specifically, if only the events occurring at a higher stress are sampled, the magnitude of the exponent $\alpha$ is smaller than what is obtained for events occurring at a lower stress. This was first observed for sheared rocks \cite{scholtz}, where the exponent decreases linearly with the differential stress. Subsequently, it was observed in other failure dynamics \cite{amitrano}, including, famously, for the Gutenberg-Richter law exponent in earthquakes \cite{sch05,scholz15}. Although the magnitudes of earthquakes follow universal scaling, in some regions the exponent value tends to be lower, signaling a higher risk of large earthquakes \cite{obara}. Here we show that such lowering of the size distribution exponent with increased load, i.e. customers' demands, also takes place for power outage statistics. Using the data for the outages in the U.S. during 2002-2017, we found that the size distributions of the outages between night and day times, where the usage level changes by approximately 35\%, are significantly different. Such changes in the exponent are also observed for smaller regions in the U.S., where they can be indicative of the relative risks of outages. Indeed, there is a systematic variation of the exponent value with the load on the grid for different hours of a day and different months of a year. We are able to reproduce this feature using a minimal model, both for a realistic topology of the U.S. grid and for other simpler topologies. The load dependence of the size distribution exponent for outages opens a new path towards possible forecasting of regions prone to large outages. For power grids, such identifications are very advantageous, as focused mitigation efforts (e.g. upgrading lines) can help in preventing large outages. \section{Outages in the U.S. grid} \begin{figure}[t] \centering \includegraphics[width=7.6cm]{fig01.eps} \caption{The sizes of large-scale outages in the U.S. follow a power-law distribution, whose exponent changes with the load on the grid at the time of outage. (a) We demonstrate this by dividing outages into those occurring during the day and night, or the summer and off-peak/winter periods. The lower load cases (night and winter) show steeper power-law distributions than the higher load cases (day and summer). The summer/winter plots are shifted 10x up the y-axis, to aid visibility. The inset shows the national electricity consumption at different times (Pacific time) and months, for 2016. This robust anti-correlation between the load and exponent can be seen if the data are further subdivided according to (b) month (3-month rolling average) or (c) time of day (3-hour rolling average).} \label{fig1} \end{figure} The size distribution of the power outages in the U.S. has been studied, both in terms of the power left unserved and the number of customers affected \cite{clauset2009,dobson2015,hines09}. In fact, these quantities vary almost co-linearly, except in a few instances involving load shed affecting \textit{e.g.} one customer, such as may be the case for a large industrial facility. The two metrics also show power-law size distributions and are more or less equivalent in terms of the exponent values \cite{hines09}. For example, the cumulative distribution function for the number of customers affected during blackouts has been reported to follow a power law, with exponents variously estimated in the range of 0.8 - 1.3 \cite{clauset2009,hines09,dobson2015}.
Similar studies of outages in Sweden \cite{hol06}, Norway \cite{bakke06}, New Zealand \cite{ancell05} and China \cite{weng06} also show scale-free size distributions of power outages. Here we show that the exponent value of such distributions depends on the load carried by the power grid at the time of failure. For events with a power-law size distribution, the probability of an event of size $S$ scales as $p(S) \sim S^{-\alpha}$. When these events are arranged in descending order of magnitude, the resulting rank plot follows $k\sim S_k^{-B}$, where $S_k$ is the $k$-th largest event and the exponent $B = \alpha-1$ (see Methods). In Fig. \ref{fig1}(a) the rank plots, or cumulative size distributions, are given for the subsets of power outages occurring respectively during the day (08:00-20:00, local times), night (22:00-04:00), summer (July-October), or winter (October-May). These periods were chosen to correspond with the times of peak and off-peak loads, as measured in the national electricity demand during 2016 and shown in the inset to Fig. \ref{fig1}(a). All data are taken from the public reports of the U.S. Energy Information Administration \cite{USEIA}, which lists outage events affecting more than 50,000 consumers, or resulting in a load shedding of more than 300 MW, as well as the hourly electricity demand. The outage data used cover 1193 events in the years 2002-2016. A power-law fit of the whole data set gives an exponent $B = 1.30\pm0.02$, consistent with previous reports \cite{clauset2009,dobson2015,hines09}. However, with a day/night division these outages split into a shallower daytime distribution with $B = 1.15\pm 0.03$ and a steeper nighttime distribution with $B = 1.78 \pm 0.02$. While it is known that there are fewer outages at night than in the day \cite{hines09}, this result shows that those outages that do occur at night are generally also much less severe. Similarly, if the data are split seasonally, we find an exponent of $B = 1.22\pm 0.04$ in the months of peak summer usage, but an exponent of $B = 1.74\pm0.04$ during the off-peak winter months. To show that there is a significant relationship between the load on the grid and the exponent value of the outage size distribution, we have considered outages in rolling time windows. Fig. \ref{fig1}(b) shows the variations of the exponent $B$ and the load for different months of the year, using a three-month window. Similarly, Fig. \ref{fig1}(c) shows the load and exponents for different hours of the day, using a rolling 3-hour window. In both cases an anti-correlation between the load and the fitted exponent value can be seen. Like a variety of other driven disordered systems, including earthquakes \cite{sch05,scholz15}, laboratory-scale fracture \cite{amitrano} and sheared granular systems \cite{riviere2017}, we find that a higher load is associated with a smaller $B$ value, and hence a more extreme distribution of events. \begin{figure*}[tb] \centering \includegraphics[width=14.6cm]{fig02.eps} \caption{Day-night variations in outage size distributions are also seen regionally. Outages in the U.S. are divided according to their governing Regional Reliability Council (RRC). Shown are the rank size distributions for the three regions with the most reported events, (a) WECC, (b) RFC and (c) SERC, divided between day (10:00-20:00, local time) and night (00:00-06:00).
In each case the night-time outage distribution is steeper than the day-time distribution, indicating that outage events are generally more severe at higher-load times.} \label{fig2} \end{figure*} So far we have shown the temporal variation of the outage size distribution exponent over the entire data set. However, to identify the vulnerable areas or dangerous hot-spots on a grid, it is important to analyze such load dependence for different spatial segments as well. The U.S. grid is divided between 10 electricity regulatory authorities \cite{map}. We chose the largest three in terms of the number of events, performed the same analysis of splitting the data between day and night, and find a similar variation in the exponent values (see Fig. \ref{fig2}). This establishes that our method can be useful for identifying a vulnerable sub-volume of a larger grid. For that, however, one needs to calibrate the variation in the exponent against the risk of a large outage, which requires a large volume of fine-grained data. With the present data, in which an event must affect more than 50,000 people or shed more than 300 MW of load to be reported, such an analysis is not feasible. \section{Model} We now explore the load dependence of power grids \textit{via} a simple network model, with different topologies and loading conditions. There are several approaches to modeling the dynamics of a power grid, including examples of networks obeying circuit laws \cite{chen05,rios02,kirschen04,simonsen08}, sometimes incorporating phase information \cite{timme12,yang17}, as well as more abstract models \cite{demarco, stubna03}, alongside a large volume of literature on failures in complex networks in general (see e.g. \cite{boccaletti03,buldyrev10,goh01,panzieri}). Here we use a model of the power grid similar to that studied in Refs. \cite{pahwa10,yagan1,yagan2}, and demonstrate how the observed U.S. outage data match generic features of the load dependence of outage statistics. Specifically, we consider a set of elements, or nodes, having finite failure thresholds. The elements are either arranged on a regular grid, or connected to each other by the topology given in Ref. \cite{watts98}, which represents the Western Interconnection of the U.S. grid. The thresholds $\sigma_{th}^i$ and loads $\sigma_l^i$ of the $i$-th element are related by \begin{equation} \sigma_{th}^i=\sigma_l^i+s\epsilon_i. \end{equation} We assign a random load $\sigma_l^i$ to each element, drawn from a uniform distribution between zero and one. The second term on the right-hand side provides a buffer, or redundancy, for the elements. This ensures that the capacity of an element is always higher than its initial load. The random variables $\epsilon_i$ are also chosen from a uniform distribution on $[0,1]$. Therefore, on average, the network carries a load of $1/(1+s)$ relative to its maximum capacity. The dynamics of the model follows from randomly choosing an element and raising its load to its threshold, thereby triggering a failure event. This can happen due to external causes (storms, vandalism, etc.) on a grid, or due to a sudden surge in demand among the customers, and so on. The load carried by that element then has to be redistributed among the remaining surviving elements. That may, in turn, cause some of those elements to break, triggering an avalanche. We can quite reasonably assume a separation of time scales between successive triggering events and the internal redistribution of loads.
Therefore, during an avalanche the total load remains constant and the only dynamics is the redistribution of loads in successive steps following a breakdown. The avalanche size is then the number of elements breaking before a stable configuration is reached. After an avalanche, all the elements are restored with new random thresholds and loads chosen from the same distributions. This allows an average over disorder in the system. Clearly, the value of $s$ determines the relative stress on an element with respect to its carrying capacity: the higher the $s$, the lower the relative stress. It is important to note, however, that due to the long-range nature of the correlations developed in the system, a failure somewhere in the grid can trigger an avalanche or cascading failure at a different location. Specific examples of such events include the August 10, 1996 outage \cite{96outage}. Therefore, one version of our model is long range, taking into account the long-range nature of electrical current variation, so that a local disturbance can indeed trigger a remote event. In another version, we take the exact topology \cite{watts98} of the Western Interconnection. \begin{figure}[tb] \centering \includegraphics[width=7.6cm]{fig03.eps} \caption{The rank plot of the avalanches observed for the two-dimensional version of the model with power-law load redistribution, for various values of the stress $s$. In the range shown, the exponent value changes from $0.62$ to $1.80$, which includes the range seen in the data (Fig. \ref{fig1}).} \label{ava_stress_rank} \end{figure} While the statistics of the avalanches can depend on the particular topology considered, the qualitative observation of a lowering of the size distribution exponent (or that of the rank plot) should remain valid independently of it. To begin with, we consider the two-dimensional square lattice network. Following a local failure, the load carried by the element is redistributed to the entire remaining network. However, the share of the load received decays with distance from the failure point as a power law, $1/|x-x^{\prime}|^{\gamma}$ (considering periodic boundary conditions). For the simulations here we have taken $\gamma=2$, in keeping with the similar dependence of the current flow in the random fuse model \cite{lucilla}. The resulting avalanches are recorded over time, and the above-mentioned rank plots are shown in Fig. \ref{ava_stress_rank} for different values of the relative stress $s$. The exponent varies significantly within the range of $s$ studied. In particular, over the range $s=0.55-0.9$, the exponent value changes from $0.88$ to $1.87$. This covers the range observed in the data for the whole country ($1.2-1.80$) and for its parts ($1.1-1.4$). \begin{figure}[tb] \centering \includegraphics[width=7.6cm]{fig04.eps} \caption{The top panel shows the scatter plot of the exponent values from Fig. \ref{fig1} against the corresponding average load, with representative error bars. A linear decay of the exponent value is observed. The bottom panel shows the rank plot of the avalanches observed in the model with the same topological structure as the Western Interconnection of the U.S. power grid \cite{watts98}. For various values of the stress $s$ the exponent value changes from $0.1$ to $1.5$, which has substantial overlap with the range seen in the data (Fig. \ref{fig1}).} \label{ava_topology} \end{figure} The topology of the power grid is also an important factor in its dynamics.
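To make the dynamics concrete, the following is a minimal Python sketch of the two-dimensional lattice version of the model described above. It is an illustration rather than the code used for our simulations: the lattice size, the random seed, the single-trigger protocol and the unit floor on the distance (a guard against self-redistribution) are all illustrative choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def periodic_dist(L):
    """Pairwise Euclidean distances on an L x L lattice with periodic boundaries."""
    ix = np.arange(L * L)
    x, y = ix % L, ix // L
    dx = np.abs(x[:, None] - x[None, :]); dx = np.minimum(dx, L - dx)
    dy = np.abs(y[:, None] - y[None, :]); dy = np.minimum(dy, L - dy)
    return np.hypot(dx, dy)

def avalanche(dist, s=0.7, gamma=2.0):
    """One trigger-to-stability event; returns the number of failed elements."""
    n = dist.shape[0]
    load = rng.random(n)                # sigma_l ~ U(0, 1)
    thresh = load + s * rng.random(n)   # sigma_th = sigma_l + s * eps
    alive = np.ones(n, bool)
    start = rng.integers(n)
    load[start] = thresh[start]         # trigger: raise one load to its threshold
    failed = [start]
    while failed:
        for i in failed:
            alive[i] = False
        extra = np.zeros(n)
        for i in failed:
            # Redistribute the failed load with weight ~ 1/d^gamma; the unit
            # floor on d only guards against division by zero at d = 0.
            w = 1.0 / np.maximum(dist[i], 1.0) ** gamma
            w[~alive] = 0.0
            if w.sum() > 0:
                extra += load[i] * w / w.sum()
        load[alive] += extra[alive]
        failed = list(np.flatnonzero(alive & (load >= thresh)))
    return n - alive.sum()

d = periodic_dist(32)
sizes = [avalanche(d, s=0.7) for _ in range(500)]
print("mean avalanche size:", np.mean(sizes))
\end{verbatim}

Sweeping $s$ in such a sketch and rank-plotting the recorded sizes reproduces the qualitative trend reported here: a lower $s$ (higher relative stress) yields a shallower size distribution.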
The characterization of the network properties is a debated issue \cite{pagani13}, with claims of small-world and scale-free properties in the topologies of power grids in various countries. Nevertheless, the qualitative feature of stress dependence discussed above is not expected to change with the topology of the grid. To make our model more realistic, we have also studied it on the exact topology of the Western Interconnection as reported in Ref. \cite{watts98}. The load sharing in this case is confined to the connected neighbors. The resulting statistics of the outages (Fig. \ref{ava_topology}) for various values of the stress still show a substantial overlap with the observed exponent values. \section{Discussion and Conclusions} The intermittent dynamics of power grid outages and its association with self-organized criticality have been known for over a decade now. Characteristic signatures, including the scale-free distribution of outage sizes, are seen for outages in different countries, for example the U.S., Sweden, Norway and China. This connection of power grids with self-organized criticality enables comparisons of the statistics of power outages with those of other similar systems, for example earthquakes. Here we show that, as in earthquakes and several other driven disordered systems, for power grids as well the exponent value of the avalanche size distribution (in this case, the number of customers affected in an outage) depends on the load on the system. In particular, for higher values of the load, the magnitude of the exponent becomes smaller, indicating a relatively increased probability of larger outages. We demonstrated this anti-correlation between the load on the grid and the exponent value of the outage size distribution by analyzing the data for different months of the year and different hours of the day (Fig. \ref{fig1}). This property also holds for sub-regions, for example several Regional Reliability Council areas in the U.S. The scatter plot of the exponent values against the average load on the grid at the time of outage shows a clear decay of the exponent value with the load (Fig. \ref{ava_topology}), which is almost linear, as in the case of earthquakes \cite{scholtz}. Given the generic nature of the anti-correlation between the load and the avalanche size distribution exponent, we used a toy model of power grids to explore the effect of the load on the outage size distribution. The model is defined as a collection of nodes, each carrying a load lower than its randomly assigned capacity, connected in certain topologies. We tested the results on a square lattice topology with distance-dependent load redistribution, as well as on the exact topology of the Western Interconnection (Fig. \ref{ava_topology}). Depending on the average load on the system, the model reproduces the load dependence of the avalanche size distribution seen in the data, for the various topologies considered. In conclusion, we have demonstrated that the scale-free outage size distribution in power grids varies with the load on the grid at the time of outage. The variation of the exponent, an almost linear decay of its value with the load, is similar to that found in earthquake size distributions. Given sufficient resolution of the outage data, the method can be used to identify vulnerable regions of power grids, as is done for statistical forecasts of earthquakes. \section*{Methods} Data are collected from the U.S.
Energy Information Administration website \cite{USEIA}, over the period 2002-2016, inclusive. There are 1193 reported forced (i.e. unplanned) outages in this period affecting known, non-zero numbers of customers, which we consider. Outage times are given according to the appropriate local time zone. The load values used are for 2016, and are reported nationally using the Pacific time (PT) zone as a reference \cite{USEIA}. Hourly load data were taken from the 1st and 15th of every month, avoiding weekends and holidays (specifically, using January 4th/15th; May 2nd/13th; and October 3rd/14th), when load patterns would be different. Daily average loads were collected on each day throughout the year. Averages and standard deviations of the load were calculated from these data for each window of hours or months used. For fitting the outage size distributions, we use rank plots. The events $S$ can be arranged in descending order of their sizes: \begin{equation} S_1 \ge S_2 \ge S_3 \ge \dots \ge S_n. \nonumber \end{equation} The $k$-th ranked element has size $S_k$. For events with a probability distribution $p(S)$, the number of events having size greater than or equal to $S_k$ is \begin{equation} \int_{S_k}^{\infty} p(S)dS=k. \end{equation} If $p(S)\sim S^{-\alpha}$, one then has \begin{equation} k\sim S_k^{(1-\alpha)} = S_k^{-B}. \end{equation} The ranked data were fit in two ways. First, a maximum likelihood estimator (MLE) method \cite{clauset2009} was applied to measure the exponent $B$ and an event size cutoff. Second, we applied a least-squares fit to data that were binned to have equal widths on a log scale. For this an \textit{a priori} cutoff is required. This is taken as 50,000 (the requirement for reporting) or somewhat above, when there are other signatures of under-reporting (\textit{e.g.} the kink at 100,000 consumers in Fig. \ref{fig2}(b)). Error estimates were checked by repeating the fits on randomly subsampled data sets (100 trials on half-sampled data); the resulting spread in exponents is consistent with the stated fit errors. While the precise exponent values differ depending on the method of fitting used, they show the same trends with load. The fits given in the manuscript are least-squares fits.
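To illustrate the fitting procedure, the following minimal sketch generates synthetic power-law distributed sizes, builds the rank plot, and recovers the exponent both from a least-squares fit of the rank plot and from the MLE of \cite{clauset2009}. It is a simplified sketch assuming a known, fixed cutoff $S_{min}$, and it omits the logarithmic binning used for the least-squares fits in the manuscript.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sizes with p(S) ~ S^(-alpha) above S_min, via inverse-CDF sampling.
alpha, s_min, n = 2.3, 5.0e4, 1193
sizes = s_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))

# Rank plot: the k-th largest size obeys k ~ S_k^(-B) with B = alpha - 1.
S = np.sort(sizes)[::-1]
k = np.arange(1, n + 1)
B_ls = -np.polyfit(np.log(S), np.log(k), 1)[0]

# MLE for the density exponent above the fixed cutoff s_min.
alpha_mle = 1.0 + n / np.sum(np.log(sizes / s_min))

print(f"least-squares B = {B_ls:.2f} (expected {alpha - 1.0:.2f})")
print(f"MLE alpha = {alpha_mle:.2f}, implied B = {alpha_mle - 1.0:.2f}")
\end{verbatim}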
\section{Introduction} \label{sec:intro} Semi-supervised learning methods have been a cornerstone in addressing annotated data scarcity by taking advantage of the relatively larger amounts of \textit{unlabeled}\footnote{We use the descriptors ``(un)labeled'' and ``(un)supervised'' interchangeably throughout this draft.} data in the training process. Self-training is a relatively early instance of such methods \cite{1053799}. Conceptually, self-training is simple: first, a base model is trained using the limited labeled data. The base model is then used to predict labels for the unlabeled data. The generated labels are termed ``\textit{pseudo-labels}'' to signify their predicted nature, as opposed to gold supervised data. Finally, the pseudo-labels are combined with the initial seed supervised data to train a new model, and this process is repeated until no further improvement in performance is observed. Self-training, or pseudo-labeling interchangeably, has been shown to be effective in improving upon fully supervised baselines in low-resource settings for several sequence-to-sequence (seq2seq) tasks, such as machine translation (MT) \cite{zhang2018joint,DBLP:conf/iclr/HeGSR20,jiao2021self}, end-to-end speech recognition (ASR) \cite{xu20b_interspeech,park20d_interspeech,9054295,likhomanenko21b_interspeech}, and end-to-end speech translation (ST) \cite{pino20_interspeech}. In this work, we study pseudo-labeling for a recently proposed new setup, joint speech transcription and translation (STT)~\cite{anastasopoulos2018tied,sperber-etal-2020-consistent}: a setup that is of interest in use cases where both the transcript and translation of a speech signal are returned to the user. As we describe in detail later in \S\ref{subsec:stt}, the fully supervised data for modeling end-to-end joint transcription and translation consists of triples of the form $(s, tc, tl)$, where $s$ is the speech signal, $tc$ is the transcript, and $tl$ is the translation. As such data is especially costly to come by, STT also seems to have the potential to benefit from pseudo-labeling. Our investigations show that while pseudo-labeling is indeed helpful, the quality of the pseudo-labels that bring about the benefits is subpar. Upon inspecting the supervised and unsupervised sets, this proves unsurprising: with limited amounts of supervised data, it is likely that the supervised and unsupervised sets differ in domain, impacting the quality of the pseudo-labels. Specifically, in our case, we identify two causes leading to domain mismatch with out-of-distribution unlabeled data: differences between the sequence length ranges and between the vocabulary sets of the supervised and unsupervised sets. In this work, we ask \textit{if} we can specifically counteract the domain mismatch to reach a set of pseudo-labels of higher quality, and \textit{if} that higher quality, in turn, translates into a better overall performance of pseudo-labeling. First, we propose pseudo-label filtering, which is often a part of pseudo-labeling algorithms~\cite{9054295,park20d_interspeech,zhang2021flexmatch,likhomanenko21b_interspeech,zhang2022censer}. However, while filtering is usually based on the model prediction scores, we rely on data-centric criteria~\cite{likhomanenko21b_interspeech} that directly target the identified aspects of the domain mismatch. Second, we augment the supervised data by concatenating randomly picked samples to create new ones and adding them to the supervised set.
These two are essentially different in nature: while filtering increases the overall quality by removing samples with pseudo-labels that are likely to be faulty, augmentation does so by extending the supervised set and generating better labels in the first place. Our results confirm that this distinction in nature is indeed reflected in the different ways in which filtering and augmentation improve the performance of pseudo-labeling. The outline of this paper is as follows. We provide some background in \S\ref{sec:backgrnd} and detail the experimental setup in \S\ref{sec:exp}. Then, in \S\ref{sec:res}, we report and discuss the results from vanilla pseudo-labeling, the observation of domain mismatch, and the gains brought about by filtering and augmentation. Our \textbf{contributions} are: 1) We specifically focus on pseudo-labeling in the face of domain mismatch between the supervised and unsupervised sets; 2) We investigate the mitigation of the effect of domain mismatch through two approaches, pseudo-label filtering and augmentation by concatenation, and demonstrate how they improve pseudo-labeling in different ways. These approaches can be repurposed wherever pseudo-labeling is considered as a solution; 3) We apply pseudo-labeling modified with those approaches specifically to a novel setup, joint speech transcription and translation, and report gains on top of vanilla pseudo-labeling for STT. \section{Background} \label{sec:backgrnd} Our work studies a pseudo-labeling solution for end-to-end joint speech transcription and translation. In this section, we provide the background for the two components involved in the study, namely \emph{speech transcription and translation} and \emph{pseudo-labeling}. \begin{algorithm*} \caption{Pseudo-labeling}\label{alg:1} \begin{algorithmic}[1] \Require $L = \{x_i, y_i\}$ and $U = \{x_j\}$ \State Train base model $M$ on $L$ \While{The desired number of rounds or convergence has not been reached} \State Generate the pseudo-labeled set: $P = \{{x_j, M(x_j)} \mid x_j \in U\}$ \State Obtain $M^+$ by fine-tuning $M$ on $L \cup P$ \State Replace $M$ with $M^+$ \EndWhile \State \Return $M$ \end{algorithmic} \end{algorithm*} \subsection{Speech Transcription and Translation} \label{subsec:stt} Our task of speech transcription and translation (STT) is closely related to speech recognition (ASR) and speech translation (ST). ASR is the task of generating the text equivalent to an audio speech signal. Meanwhile, ST aims to generate the text equivalent to the signal in a target language other than the language of the speaker. In contrast, STT generates both the transcript and the translation \textit{jointly} in an end-to-end fashion. STT is particularly appealing in cases where both the transcript and translation are to be displayed to the user. Formally, STT can be modeled as follows: given a speech signal ($s$), the model generates the transcript ($tc$) and translation ($tl$) concatenated together in the output as one single sequence: $s \rightarrow tc \_ tl$ \cite{sperber-etal-2020-consistent}. This formulation is simple to implement, as it casts STT as an instance of the well-known seq2seq modeling and results in a single end-to-end model to be stored on device. Furthermore, as reported by \newcite{sperber-etal-2020-consistent}, this formulation results in reasonably consistent transcripts and translations, as the coupled inference ensures that translations are conditioned on transcripts. In our experiments, we adopt this STT formulation.
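As a concrete illustration of the output format, the following minimal Python sketch assembles one training target from a $(s, tc, tl)$ triple. The separator string, the field names and the audio path are hypothetical; the actual subword tokenization of our models (see \S\ref{subsec:model}) may represent the separator differently.

\begin{verbatim}
SEP = " _ "  # separator between transcript and translation (illustrative)

def make_stt_target(transcript: str, translation: str) -> str:
    """Build the single joint target sequence for s -> tc _ tl training."""
    return transcript + SEP + translation

# One (speech, transcript, translation) triple; the audio path is hypothetical.
sample = {
    "speech": "clips/sample_0001.wav",
    "tc": "the cat sat on the mat",
    "tl": "die Katze sass auf der Matte",
}
print(make_stt_target(sample["tc"], sample["tl"]))
\end{verbatim}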
However, the major challenge that such modeling presents is insufficient data resources: three-way parallel samples of the form $(s, tc, tl)$ are expensive to annotate. Annotation would require multilingual annotators and would be time-consuming. To alleviate this limitation, we study how pseudo-labeling can be employed effectively to combat data scarcity in this setting. We provide background on pseudo-labeling in the next section. \subsection{Pseudo-labeling} \label{subsec:pl} Pseudo-labeling, which is also often referred to as self-training in the literature, addresses the data insufficiency issue by taking advantage of much larger amounts of unsupervised data. More precisely, assume a labeled set $L = \{x_i, y_i\}$ and an unlabeled set $U = \{x_j\}$, where $|U| \geq |L|$, are available (note that in the case of STT, $y_i$ is actually a tuple consisting of the transcript and the translation: $y_i = (tc_i, tl_i)$). Pseudo-labeling starts with training an initial model $M$ in a supervised manner using $L$. Then, using $M$, it generates pseudo-labels (predictions) for $U$. It then incorporates the pseudo-labels to create a new model $M^+$, which hopefully supersedes $M$ in performance. $M^+$ can then replace $M$ to repeat this process for as many rounds as desired, or until no further gains are observed. Although conceptually simple, several key decisions need to be made before pseudo-labeling can be applied: \begin{itemize} \item \emph{How should $M^+$ be created?} $M^+$ can be trained from scratch (e.g., as done by \newcite{park20d_interspeech}) or alternatively obtained by continuously fine-tuning $M$ (e.g., as done by \newcite{xu20b_interspeech}) using the labeled set combined with the pseudo-labeled set. As we later report in \S\ref{sec:res}, in our preliminary experiments fine-tuning consistently outperforms training from scratch. Hence, we opt for fine-tuning in our experiments. \item \emph{Should pseudo-labeling be applied to supervised sets?} For the pseudo-labeling stage, we consider and experiment with labeling the supervised set in addition to the unsupervised set and monitor for any potential improvements. As we later show in \S\ref{sec:res}, using the pseudo-labels for the supervised set does not prove to be beneficial in our preliminary experiments. Therefore, we generate predictions only for the unlabeled set. \item \emph{In what way should the pseudo-labels be used to update existing models?} For instance, \newcite{DBLP:conf/iclr/HeGSR20}, at each round, first train a model from scratch on the pseudo-labeled set, and then fine-tune it on the labeled set to obtain the final model for that round. Alternatively, \newcite{xu20b_interspeech} combine the two sets and use a hyper-parameter to control the relative weight of the labeled portion against the pseudo-labeled portion. To keep our setup simple, we opt for combining the sets and treating them equally. \end{itemize} With the key factors outlined above, Algorithm~\ref{alg:1} shows how we carry out vanilla pseudo-labeling for our experiments; a minimal code sketch of the loop is given below. All results we report in \S\ref{subsec:vpl} follow this algorithm. \section{Experimental Setup} \label{sec:exp} We describe the supervised set and the unsupervised set for our experiments in \S\ref{subsec:data} and our model architecture in \S\ref{subsec:model}.
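For reference, the following minimal sketch mirrors Algorithm~\ref{alg:1}. The \texttt{ToyRegressor} is a stand-in used only to make the loop runnable; in our experiments, the model is the seq2seq architecture of \S\ref{subsec:model}, and fine-tuning resumes from the latest checkpoint.

\begin{verbatim}
from dataclasses import dataclass
from statistics import mean

@dataclass
class ToyRegressor:
    """Stand-in 'model'; it exists only to make the loop below runnable."""
    value: float = 0.0
    def fit(self, pairs): self.value = mean(y for _, y in pairs)
    def fine_tune(self, pairs):
        # Blend with the current value to mimic resuming from a checkpoint.
        self.value = 0.5 * (self.value + mean(y for _, y in pairs))
    def predict(self, x): return self.value

def pseudo_label(model, labeled, unlabeled, rounds=3):
    model.fit(labeled)                                       # train base model M on L
    for _ in range(rounds):
        pseudo = [(x, model.predict(x)) for x in unlabeled]  # P = {(x_j, M(x_j))}
        model.fine_tune(labeled + pseudo)                    # M+ <- fine-tune M on L u P
    return model

m = pseudo_label(ToyRegressor(), labeled=[(0, 1.0), (1, 3.0)], unlabeled=[2, 3, 4])
print(m.value)
\end{verbatim}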
\subsection{Data} \label{subsec:data} In this work, we use two publicly available multilingual speech translation datasets which, thanks to the nature of their creation, also include transcripts: CoVoST V2 \cite{wang2020covost} and MuST-C \cite{CATTONI2021101155}. CoVoST V2 is created by amending the validated audio clips and transcripts from the Common Voice crowd-sourced ASR corpus \cite{ardila-etal-2020-common} with professional translations. It covers translations from English into 15 languages and from 21 languages into English. MuST-C is created by automatically aligning the audio segments from TED talks to the corresponding manual transcripts and translations (available from the TED website), which are also aligned. It covers translations from English into 14 languages. We conduct our experiments across two language pairs: English--German (En--De) and English--Chinese (En--Zh), which are available in both CoVoST and MuST-C. In all our experiments, we designate CoVoST as the supervised set and MuST-C as the unsupervised set. Note that this means our objective is to reach the best performance possible on the CoVoST evaluation set. While we also have the gold transcripts and translations (labels in the STT problem) for MuST-C, we do not use them and practically treat MuST-C as an unlabeled set. We only use the MuST-C gold labels for analysis and pseudo-label quality assessment. We provide the statistics of our data in Table~\ref{tab:datastat}. \begin{table}[t] \centering \begin{tabular}{lcccc} \toprule & \multicolumn{2}{c}{CoVoST} & \multicolumn{2}{c}{MuST-C} \\ & Train & Eval & Train & Eval \\ \midrule En--De & 233k & 15.5k & 251k & 1.4k \\ En--Zh & 233k & 15.5k & 359k & 1.3k \\ \bottomrule \end{tabular} \caption{Amount of data available (number of sentences), per language pair and corpus.} \label{tab:datastat} \end{table} \subsection{Model} \label{subsec:model} Our model is an end-to-end Transformer \cite{NIPS2017_3f5ee243} with a hidden dimension of 1024, and five and three layers of encoder and decoder, respectively (following \newcite{sperber-etal-2020-consistent}). For the input, on the encoder side, we first use wav2vec 2.0 \textsc{Base} \cite{NEURIPS2020_92d1e1eb} as provided by \texttt{Hugging Face Transformers} \cite{wolf-etal-2020-transformers} (specifically, \texttt{facebook/wav2vec2-base-960h}) to extract speech representations before feeding them into the Transformer encoder. On the output side, as described in \S\ref{subsec:stt}, the decoder generates one sequence consisting of the transcript and the translation concatenated together. In terms of input preprocessing, we remove instances where the speech is either shorter than 0.5 seconds or longer than 15 seconds, or where either the transcript or the translation is longer than 50 words. After that, we use \texttt{SentencePiece} \cite{kudo-richardson-2018-sentencepiece} for subword tokenization. We use a vocabulary size of 1020 and 8188 in the case of En--De and En--Zh, respectively. The transcription and translation vocabulary is shared in both cases. The objective function during optimization is a weighted sum of the CTC loss \cite{10.1145/1143844.1143891} on the encoder side and the cross-entropy loss on the decoder side. Both when training a base model and when fine-tuning an existing checkpoint on the union of the labeled set and the pseudo-labeled set, we use the Adam optimizer \cite{DBLP:journals/corr/KingmaB14} with a peak learning rate of 0.0005 after 500 warmup steps, coupled with inverse square root learning rate scheduling.
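For concreteness, one common parameterization of warmup followed by inverse square root decay is sketched below; the exact implementation used in our runs may differ in minor details such as the handling of the very first steps.

\begin{verbatim}
def inv_sqrt_lr(step, peak=5e-4, warmup=500):
    """Linear warmup to `peak`, then inverse square root decay."""
    step = max(step, 1)
    return peak * min(step / warmup, (warmup / step) ** 0.5)

for s in (1, 250, 500, 2000, 8000):
    print(s, f"{inv_sqrt_lr(s):.2e}")
\end{verbatim}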
We train for a total of 100 epochs. For both language pairs, we use the dev sets provided by the corpora as the held-out evaluation sets. We remove diacritics and punctuation before scoring, and report our performance in terms of the WER of transcripts and the BLEU of translations, using a beam size of five, with \textsc{Sacre}BLEU.\footnote{Hash: case.lc+numrefs.1+smooth.4.0+tok.\{13a,zh\} for \{En--De,En--Zh\}.} Our implementation is built upon \texttt{PyTorch} \cite{Paszke_PyTorch_An_Imperative_2019}, \texttt{xnmt} \cite{neubig-etal-2018-xnmt}, and \texttt{Lightning} \cite{Falcon_PyTorch_Lightning_2019}. \begin{table*}[t] \centering \scalebox{0.85}{\begin{tabular}{clrrrrrrrrrrr} \toprule & & \multicolumn{5}{c}{En--De} & \multicolumn{6}{c}{En--Zh} \\ \cmidrule(r){3-7} \cmidrule(l){8-13} & & Base Model & R1 & R2 & R3 & Bound & Base Model & R1 & R2 & R3 & R4 & Bound \\ \cmidrule(r){3-7} \cmidrule(l){8-13} \multirow{2}{*}{\faSearch CoVoST} & WER $\downarrow$ & 15.4 & 15.4 & 15.0 & 15.0 & 14.4 & 14.8 & 14.6 & 14.8 & 14.7 & 14.6 & 13.7 \\ & BLEU $\uparrow$ & 22.8 & 23.8 & 24.5 & 24.5 & 25.5 & 28.7 & 29.4 & 30.0 & 30.5 & 30.7 & 31.9 \\ \cmidrule(r){1-7} \cmidrule(l){8-13} \multirow{2}{*}{MuST-C} & WER $\downarrow$ & 45.1 & 45.2 & 29.7 & 28.4 & 9.6 & 47.9 & 46.2 & 43.8 & 42.8 & 37.2 & 8.9 \\ & BLEU $\uparrow$ & 7.3 & 9.1 & 9.7 & 9.6 & 22.4 & 9.1 & 9.9 & 9.6 & 9.0 & 8.3 & 18.9 \\ \bottomrule \end{tabular}} \caption{Vanilla pseudo-labeling results over each round up to saturation. CoVoST, our supervised set, is marked with the \faSearch symbol to signify that it is the set on which we aim to improve performance.} \label{tab:baseres} \end{table*} \section{Results and Discussion} \label{sec:res} We present our results in this section in the following order: \S\ref{subsec:vpl} establishes the vanilla pseudo-labeling performance, which leads to our analysis of the domain mismatch between the supervised and unsupervised sets. \S\ref{subsec:filt} and \S\ref{subsec:aug} then describe the two categories of remedies we devise to mitigate the effect of the domain discrepancies on pseudo-labeling. As mentioned in \S\ref{subsec:pl}, all of this uses the best setting we were able to establish during our pilot experiments: at each pseudo-labeling round, we 1) label only the unsupervised data, and 2) fine-tune the existing checkpoint on the combination of the supervised and pseudo-labeled data. We conducted our pilot experiments on En--De and were able to confirm that the aforementioned setting consistently beats the rest over several rounds of pseudo-labeling. Figure~\ref{fig:pl_setting} illustrates the lead of the best setting over the others in the last round of our experiments. The same pattern holds across all rounds. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{pl_settings.png} \caption{Performance of different PL settings on the supervised set, CoVoST. The best setting fine-tunes the checkpoint from the last round on the supervised set and the pseudo-labels for the unsupervised set.} \label{fig:pl_setting} \end{figure} \subsection{Vanilla Pseudo-Labeling} \label{subsec:vpl} In Table~\ref{tab:baseres}, we include the results of vanilla pseudo-labeling, as in Algorithm~\ref{alg:1}, with no modifications. We report WER and BLEU for En--De and En--Zh across both corpora. To reiterate, CoVoST (distinguished by the magnifying glass symbol \faSearch) is our designated supervised set, and hence what we are trying to boost performance on.
MuST-C scores, on the other hand, are reported for the sake of analysis; those metrics assess the quality of the pseudo-labels. We report the performance of the initial model (the fully supervised baseline, model $M$ on line 1 of Algorithm~\ref{alg:1}) in the ``Base Model'' column. Scores from each pseudo-labeling round thereafter appear in the corresponding ``R'' column. To obtain an upper bound on what is possible with the collective data that we have, we run an experiment training a single model on both corpora in a supervised manner. Those numbers are provided in the ``Bound'' column. Note that this is the only case for which the MuST-C gold labels are used, solely to obtain an upper bound on performance: what would be possible if we predicted the labels for the unsupervised set perfectly? Is pseudo-labeling helping us close the gap between the base model and that upper bound for \faSearch CoVoST? First and foremost, in agreement with the literature, vanilla pseudo-labeling is effective. On \faSearch CoVoST, it is able to improve on the base model by 0.4\% absolute WER and 1.7 BLEU points on En--De, and by 0.2\% absolute WER and 2.0 BLEU points on En--Zh. However, a closer look at the quality of the pseudo-labels at each round (i.e., the MuST-C scores) makes it evident that the generated labels are far from ideal quality. Specifically, even if we train plain machine translation systems on the \faSearch CoVoST transcripts and translations (taking the audio out of the picture), the En--De system scores 12.4 BLEU on MuST-C En--De, and the En--Zh system scores 9.6 BLEU on MuST-C En--Zh. \begin{table*}[t] \centering \scalebox{0.9}{\begin{tabular}{lrrrrrrrr} \toprule & \multicolumn{4}{c}{En--De} & \multicolumn{4}{c}{En--Zh} \\ \cmidrule(r){2-5} \cmidrule(l){6-9} & \multicolumn{2}{c}{\faSearch CoVoST} & \multicolumn{2}{c}{MuST-C} & \multicolumn{2}{c}{\faSearch CoVoST} & \multicolumn{2}{c}{MuST-C} \\ & WER $\downarrow$ & BLEU $\uparrow$ & \multirow{2}{*}{WER $\downarrow$} & \multirow{2}{*}{BLEU $\uparrow$} & WER $\downarrow$ & BLEU $\uparrow$ & \multirow{2}{*}{WER $\downarrow$} & \multirow{2}{*}{BLEU $\uparrow$} \\ Bound & 14.4 & 25.5 & & & 13.7 & 31.9 & & \\ \cmidrule(r){1-5} \cmidrule(l){6-9} Vanilla PL & 15.4/\textbf{15.0} & 23.8/24.5 & 45.2/28.4 & 9.1/9.7 & 14.6/14.6 & 29.4/30.7 & 46.2/37.2 & 9.9/9.9 \\ \cmidrule(r){1-5} \cmidrule(l){6-9} Ratio to Gold & 15.3/15.0 & 24.1/24.7 & 22.8/15.8 & 9.6/10.4 & 14.5/14.2 & 29.5/30.5 & 23.2/17.4 & 10.0/10.2 \\ Ratio KDE & \textbf{15.1}/\textbf{15.0} & 24.2/24.5 & 30.5/27.1 & 9.4/10.1 & \textbf{14.3}/\textbf{14.2} & 29.8/30.7 & 30.8/21.7 & 10.8/10.8 \\ LASER & 15.2/\textbf{15.0} & 24.1/24.5 & 34.7/27.6 & 9.6/10.0 & 14.6/14.3 & 29.4/30.6 & 40.8/20.3 & 10.7/11.2 \\ \cmidrule(r){1-5} \cmidrule(l){6-9} Augmentation & 15.3/15.3 & \textbf{24.9}/\textbf{24.9} & 33.8/22.2 & 11.5/11.8 & 14.6/14.3 & \textbf{30.1}/\textbf{30.9} & 48.7/25.4 & 11.9/11.9 \\ \bottomrule \end{tabular}} \caption{Improved results using the recommended remedies. Each cell includes the performance obtained in the first round and the best performance obtained using the corresponding method (R1/Best). We also include the bounds from Table~\ref{tab:baseres} for \faSearch CoVoST for comparison.
We use bold font to mark the best performance on \faSearch CoVoST.} \label{tab:betterres} \end{table*} Our investigation into why that is the case points to two root causes, which indicate that \faSearch CoVoST and MuST-C differ significantly in \textit{domain} in the following aspects: \begin{itemize} \item Length mismatch between corpora: As shown in Figure~\ref{fig:length_kde}, MuST-C speech sequences are generally longer, which also results in longer transcripts and translations. \item Vocabulary mismatch between corpora: Additionally, we were able to identify discrepancies between the vocabularies of the two corpora. For instance, on the English side, MuST-C and CoVoST have roughly 64k and 121k unique types, respectively. Of those, only 38k types are in common, with CoVoST having more probability mass on rare (tail-end of the Zipfian distribution) vocabulary types. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{length_kde.png} \vspace{-0.8cm} \caption{The probability density function of input speech lengths estimated using kernel density estimation. MuST-C speech signals are longer in duration.} \label{fig:length_kde} \end{figure} Following this observation, we next demonstrate that it is possible to counteract the domain mismatch and enhance the quality of the labels to boost the effectiveness of pseudo-labeling. \subsection{Direction \#1: Data-Centric Filtering} \label{subsec:filt} Per \S\ref{subsec:pl}, in vanilla pseudo-labeling we use all the generated labels to update the model. Alternatively, the pseudo-labels can be filtered to remove predictions of lower quality. Recent works \cite{park20d_interspeech} rely on confidence scores from the model to filter the pseudo-labels, which requires careful and proper normalization. \newcite{9054295} use a combination of heuristic-based and confidence-based filtering. In our case, similar to \newcite{likhomanenko21b_interspeech}, we propose and rely only on data-centric metrics that specifically target the domain mismatch to select a subset of pseudo-labels to use in the next round: the transcript length to speech length ratio, and the cosine similarity of transcript and translation LASER embeddings. \subsubsection{Length Ratio Distribution} A known sign of flawed inference and faulty output in seq2seq models is looping \cite{ChorowskiJ17}: the model generates the same n-gram repeatedly. We were able to identify looping occurring frequently in the pseudo-labels as well, resulting in overly long transcripts. While the true lengths of the correct transcripts are unknown, the length of the input audio can be used as an indicator: heuristically, the shorter the input audio, the shorter the transcript. To take advantage of this signal with no supervision overhead, we estimate the probability density function (PDF), using kernel density estimation (KDE), of the joint probability distribution over the input speech lengths and the predicted transcript lengths. At each pseudo-labeling round, we then keep only the top 90\% of the most probable transcripts. Figure~\ref{fig:length_ratio_kde} visualizes the effect of such filtering. Instances that have the highest PDF values have a transcript length to speech length ratio similar to that of the gold transcripts. Hence, this can be a useful metric that needs no supervision.
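A minimal sketch of this selection step is given below, using \texttt{scipy.stats.gaussian\_kde} on synthetic stand-in lengths; the actual bandwidth choice and length units in our implementation may differ.

\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Stand-ins for (speech duration in seconds, predicted transcript length in words).
speech_len = rng.uniform(0.5, 15.0, 5000)
trans_len = np.clip(3.0 * speech_len + rng.normal(0.0, 2.0, 5000), 1.0, None)
trans_len[:250] = 200.0  # looping-like outliers: overly long transcripts

# Fit a 2-D KDE over the joint (speech length, transcript length) distribution
# and keep the top 90% of pseudo-labels by estimated density.
pts = np.vstack([speech_len, trans_len])
density = gaussian_kde(pts)(pts)
keep = density >= np.quantile(density, 0.10)
print(f"kept {keep.sum()} / {keep.size} pseudo-labels")
\end{verbatim}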
To gauge the maximum potential effectiveness of length ratio-based filtering, we also conduct experiments with filtering based on the ratio of the generated transcript length to the \textit{gold} transcript length, where we keep only those with a length between 0.9$\times$ and 1.1$\times$ that of the corresponding gold transcript. Note that this serves discussion purposes only, as it uses supervision in the form of access to the lengths of the gold transcripts. Table~\ref{tab:betterres} (rows ``Ratio to Gold'' and ``Ratio KDE'') shows how our length ratio-based filtering methods compare against plain vanilla pseudo-labeling. For each method, we run the same number of rounds as we did for vanilla pseudo-labeling in Table~\ref{tab:baseres}. We report the performance of the first round and of the best round (first round/best round in the table cells) of each method. Results from each separate round are comprehensively provided in Appendix~\ref{app:extres}. On \faSearch CoVoST, ``Ratio KDE'' speeds up gains relative to vanilla pseudo-labeling despite incorporating fewer labels (only 90\%): 15.1 vs. 15.4 WER and 24.2 vs. 23.8 BLEU in the first round in the case of En--De. The same pattern holds for En--Zh. Looking at the scores on MuST-C, it is evident that moderating the quality of the pseudo-labels in this way does indeed translate into better pseudo-labels in future rounds and improved performance on the supervised set. Also, ``Ratio to Gold'', benefiting from a form of supervision, expectedly results in better quality on the unsupervised set. However, on the supervised set it performs similarly to ``Ratio KDE'', demonstrating that ``Ratio KDE'' is effective enough at removing detrimental pseudo-labels. While ``Ratio KDE'' performs clearly better at earlier rounds, it saturates at the same performance as vanilla pseudo-labeling, which uses all the labels (being better only in the case of En--Zh, by 0.4\% absolute WER). It is thus especially beneficial when the available resources can only cover a small number of pseudo-labeling rounds. \begin{figure*} \centering \includegraphics[width=\textwidth]{length_ratio_kde.png} \vspace{-1cm} \caption{Plots of transcript lengths against input audio lengths (gold transcripts at the top and transcripts generated during pseudo-labeling at the bottom). Data points in the bottom plot are color-coded based on their PDF values as estimated by KDE, with lighter colors indicating higher values. The most probable mass forms a pattern similar to that of the gold transcripts. Therefore, PDF values can be effective for filtering out pseudo-labels of lower quality.} \label{fig:length_ratio_kde} \end{figure*} \subsubsection{LASER Score} Our second filtering method relies on the relationship between the generated translations and transcripts (in contrast to the previous method, which relied on the relationship between the generated transcripts and the speech signals). For this, we use LASER \cite{artetxe-schwenk-2019-massively}, a multilingual sentence encoder, to embed the generated transcripts and translations in a multilingual space, rank the pairs based on cosine similarity, and hold onto only the top 90\%. Given that LASER lies at the center of this method, the quality of the representations of different languages in its multilingual space can affect its degree of usefulness. Per Table~\ref{tab:betterres}, row ``LASER'', in our experiments LASER-based filtering improves performance on the unsupervised set (and hence the quality of the pseudo-labels) all across the board.
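A minimal sketch of this selection step is given below, assuming the sentence embeddings have already been computed; the random matrices merely stand in for LASER vectors (which are 1024-dimensional).

\begin{verbatim}
import numpy as np

def filter_by_similarity(emb_tc, emb_tl, keep_frac=0.9):
    """Keep the top `keep_frac` of (transcript, translation) pairs by the
    cosine similarity of their sentence embeddings (one pair per row)."""
    cos = np.sum(emb_tc * emb_tl, axis=1) / (
        np.linalg.norm(emb_tc, axis=1) * np.linalg.norm(emb_tl, axis=1))
    return cos >= np.quantile(cos, 1.0 - keep_frac)

rng = np.random.default_rng(0)
e_tc = rng.normal(size=(1000, 1024))           # random stand-ins for LASER vectors
e_tl = e_tc + 0.5 * rng.normal(size=(1000, 1024))
print(filter_by_similarity(e_tc, e_tl).sum(), "pairs kept")
\end{verbatim}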
The improvements from LASER-based filtering translate into better performance on the supervised set in the case of En--De. Importantly, the improvement pattern is similar to that of length ratio-based filtering: more gains at earlier rounds, saturating at the same performance as vanilla pseudo-labeling. Similarly, then, LASER-based filtering can be useful when only a few rounds can be run. \subsection{Direction \#2: Data Augmentation} \label{subsec:aug} Our filtering methods above remove pseudo-labels so that the remaining subset has a higher quality. However, if we can generate better labels to begin with, we need not discard any of the labels and can retain them all. Here, to improve the quality of the labels generated by the base model at no extra supervision cost, we use data augmentation by concatenation to directly target the length mismatch between corpora reported in \S\ref{subsec:vpl}. To do so, we create an augmented set from our supervised set by randomly selecting pairs of samples and constructing new samples by concatenating the speech signals as the input, and concatenating the corresponding transcripts and translations as the output. In our experiments, we build a set of 20k such augmented samples from the original \faSearch CoVoST data. After training the base model, and before generating pseudo-labels, we first further fine-tune the base model on the union of the original supervised set and the augmented set. We then proceed as in vanilla pseudo-labeling, with the union of the original data and the augmented set as our supervised training set. As shown in Table~\ref{tab:betterres}, row ``Augmentation'', although no generated labels are thrown away, the quality of the pseudo-labels is indeed increased in the subsequent rounds. This is especially pronounced in the case of translations. While retaining all pseudo-labels, bootstrapping the supervised set using concatenation not only expedites the gains from pseudo-labeling, but is also the most effective method in terms of the final performance before saturation, improving the score in three cases: it improves the performance of vanilla pseudo-labeling on \faSearch CoVoST by 0.4 and 0.2 BLEU points on En--De and En--Zh, respectively, and by 0.3\% absolute WER on En--Zh. It therefore further closes the gap between pseudo-labeling and the upper bounds. This ends our discussion of how domain mismatch can be addressed. We find the filtering methods, which discard labels, to be effective mainly when, due to resource limitations, only one or two rounds of pseudo-labeling can be run. This finding also echoes insights from \newcite{pmlr-v162-bansal22b}, which studies data scaling laws for MT and shows that while filtering may benefit computational efficiency, more unfiltered data can replace filtered data. As an alternative to filtering, we show that improving the quality of all the generated labels through augmentation, so that all of them can be kept, is the most effective approach, especially when as many rounds as needed can be run to reach saturation. \section{Related Work} The two paradigms often considered in low-resource data scenarios are self-training and pretraining. Self-training and pseudo-labeling have long been studied for a variety of seq2seq tasks \cite{DBLP:conf/iclr/HeGSR20,xu20b_interspeech,park20d_interspeech,9054295,DBLP:conf/interspeech/ChenWW20,likhomanenko21b_interspeech,pino20_interspeech}.
Regarding the relationship between pretraining and self-training, \newcite{9414641} and \newcite{wang21r_interspeech} show that self-training and unsupervised pretraining are complementary and can be combined to boost performance on speech recognition and speech translation, respectively. In the case of supervised pretraining, however, \newcite{NEURIPS2020_27e9661e} show in the vision domain that as the size of the available labeled data grows, self-training remains helpful, whereas the benefits of supervised pretraining start to diminish. In applying self-training to the hitherto unexplored setup of joint speech transcription and translation \cite{sperber-etal-2020-consistent}, we focus on domain mismatch, a matter which can be overlooked when gains from vanilla pseudo-labeling are observed. For our solutions, we study pseudo-label filtering and augmentation by concatenation. In contrast to conventional filtering, which relies on normalized model confidence scores \cite{park20d_interspeech,9054295}, we define and use data-centric factors that directly target the domain differences that we observe. Concatenation as an effective augmentation method has been studied in the context of machine translation \cite{Agrawal2018ContextualHI,kondo-etal-2021-sentence,nguyen-etal-2021-data,https://doi.org/10.48550/arxiv.2210.05096} and speech-to-text \cite{DBLP:journals/corr/abs-2210-15398}. In our case, we use it to expose our base model to sequences of greater length in order to improve the quality of the generated pseudo-labels. \section{Conclusion} We studied pseudo-labeling for joint speech transcription and translation. We show that while vanilla pseudo-labeling is helpful, there are additional improvements to be gained by addressing the low quality of the generated pseudo-labels caused by the domain mismatch between the supervised and unsupervised sets. We find that our proposed solutions help in two different ways, as they are distinct in nature: pseudo-label filtering, which discards low-quality labels, is mostly helpful in expediting gains in earlier rounds, especially for transcriptions. Augmentation by concatenation, on the other hand, does not discard any of the labels. As a result, it is able to maintain an edge over vanilla pseudo-labeling in the late rounds as well. \section*{Acknowledgements} We would like to thank Qin Gao, Amittai Axelrod, Boliang Zhang, Barry Theobald, David Grangier, Jiatao Gu and the rest of our machine translation and machine learning research teammates for fruitful discussions and constructive feedback on the manuscript.
\section{Introduction} An unbiased and detailed characterization of the luminosity function (LF) of field galaxies is a basic requirement in many extragalactic issues. \\ At present the local luminosity function is well constrained by the results obtained by the 2dF Galaxy Redshift Survey (2dFGRS, Norberg et al. \cite{norberg02}) and by the Sloan Digital Sky Survey (SDSS, Blanton et al. \cite{blanton03}). These surveys measure redshifts for $10^5 - 10^6$ galaxies over a large area, and therefore explore well the properties of the local ($z<0.3$) Universe. Such large numbers of objects also allow the study of the luminosity functions (as well as the correlation functions and other properties) for galaxies of different types, defined on the basis of colours and/or spectral properties. A critical analysis of the luminosity functions depending on galaxy type as measured from the various redshift surveys, as well as a comparison of the different results, can be found in de~Lapparent (\cite{delapparent03b}). \\ Madgwick et al. (\cite{madgwick02}), analyzing 2dFGRS data, find a systematic steepening of the faint end slope and a faintening of $M^*$ of the luminosity function as one moves from passive to active star forming galaxies. Similar results are found by Blanton et al. (\cite{blanton01}) for the SDSS sample, moving from the redder to the bluer galaxies. \\ Concerning the high redshift Universe, several studies in the past ten years have aimed to map the evolution of the luminosity function. However, because of the long exposure times required to obtain spectra of high redshift galaxies, spectroscopic surveys were limited to a few hundred to a few thousand objects. The Canadian Network for Observational Cosmology field galaxy redshift survey (CNOC-2, Lin et al. \cite{lin99}) and the ESO Sculptor Survey (ESS, de~Lapparent et al. \cite{delapparent03a}) derived the luminosity function up to $z\sim 0.5$ using $\sim 2000$ and $\sim 600$ redshifts, respectively. de~Lapparent et al. (\cite{delapparent03a}) find a behaviour of the LF by type similar to the local one derived from the 2dFGRS, and a strong evolution, by a factor of 2, in the volume density of the late type galaxies with respect to the early type galaxies. Lin et al. (\cite{lin99}) find for early type galaxies a positive luminosity evolution with increasing redshift, which is nearly compensated by a negative density evolution. On the contrary, for late type galaxies they find a strong positive density evolution, with nearly no luminosity evolution. At higher redshift, the Canada France Redshift Survey (CFRS, Lilly et al. \cite{lilly95}) made it possible to study the luminosity function up to $z\sim 1.1$ with a sample of $\sim 600$ redshifts. From this survey, the LF of the red population shows small changes with redshift, while the LF of the blue population brightens by about one magnitude from $z\sim 0.5$ to $z\sim 0.75$. Other results suggest a strong number density evolution of early type galaxies (Bell et al. \cite{bell04}, Faber et al. \cite{faber05}); conversely, in the K20 survey (Cimatti et al. \cite{k20}), Pozzetti et al. (\cite{pozzetti03}) found that red and early type galaxies dominate the bright end of the LF and that their number density shows at most a small decrease ($<30\%$) up to $z\sim 1$ (see also Saracco et al. \cite{saracco05} and Caputi et al. \cite{caputi06}). \\ Luminosity function estimates at higher redshift and/or with larger samples are up to now based only on photometric redshifts, like the COMBO-17 survey (Wolf et al.
\cite{wolf03}) and the analyses of the FORS Deep Field (FDF, Gabasch et al. \cite{gabasch04}) and the Hubble Deep Fields (HDF-N and HDF-S, see e.g. Sawicki et al. \cite{sawicki97}; Poli et al. \cite{poli01}, \cite{poli03}); most of these projects also derived the luminosity function for different galaxy types. Wolf et al. (\cite{wolf03}) find that early type galaxies show a decrease by a factor of $\sim 10$ in $\phi^*$ up to $z=1.2$. Latest type galaxies show a brightening of about one magnitude in $M^*$ and a $\phi^*$ increase by a factor of $\sim 1.6$ in their highest redshift bin ($z\sim 1.1$) in the blue band. Giallongo et al. (\cite{giallongo05}), using HDF data, find that the B band number densities of red and blue galaxies evolve differently, with a strong decrease of the red population at $z=2-3$ and a corresponding increase of the blue population. Dahlen et al. (\cite{dahlen05}), using GOODS data, claim that the starburst population fraction increases with redshift by a factor of 3 at $z=2$ in the U band. \\ Although photometric redshifts represent a powerful tool for deep surveys, their precision strongly relies on the number of photometric bands used, on the templates and on the adopted training procedure; moreover, they are affected by the problem of ``catastrophic errors'', i.e. objects with a large difference between the spectroscopic and the photometric redshift. \\ A major improvement in this field is obtained with the VIMOS VLT Deep Survey (VVDS, Le~F\`evre et al. \cite{lefevre03}) and the DEEP-2 Galaxy Redshift Survey (Davis et al. \cite{davis03}). The VVDS is an ongoing program to map the evolution of galaxies, large scale structures and AGNs from the redshift measurements of $\sim 10^5$ objects down to a magnitude I$_{AB}=24$, in combination with a multiwavelength dataset from radio to X-rays. \\ From the analysis of the evolution of the global luminosity function from the first epoch VVDS data (Ilbert et al. \cite{vvdsLF}), we found a significant brightening of the $M^*$ parameter in the U, B, V, R and I rest frame bands, going from $z=0.05$ to $z=2$. Moreover, we measured an increase in the comoving density of bright galaxies: this increase depends on the rest frame band, being higher in the bluest bands. \\ Among the other results of this survey, we recall the study of the radio selected objects (Bondi et al. \cite{radio1}) and of their optical counterparts (Ciliegi et al. \cite{radio2}), and the evolution of the clustering properties (Le~F\`evre et al. \cite{clus1}, Pollo et al. \cite{clus2}) and of the bias parameter (Marinoni et al. \cite{marinoni05}). Moreover, from the joint GALEX-VVDS sample, we derived the evolution of the far UV luminosity function (Arnouts et al. \cite{galexLF}) and luminosity density (Schiminovich et al. \cite{galexLD}). \\ In this paper we study the evolution of the luminosity functions of galaxies of different spectral types based on the VVDS data. This sample allows us to perform this analysis for the first time with excellent statistical accuracy over a large redshift range ($0.05<z<1.5$). \\ The plan of the paper is the following: in sect. 2 we briefly present the first epoch VVDS sample, in sect. 3 we describe the galaxy classification and in sect. 4 we illustrate the method we used to estimate the luminosity functions. In sect. 5 we compare the luminosity functions of the different galaxy types and in sect. 6 we show the evolution with redshift of the luminosity functions by type. Finally, in sect.
7 we compare our results with previous literature estimates and in sect. 8 we summarize our results. \\ Throughout the paper we adopt the cosmology $\Omega_m = 0.3$ and $\Omega_\Lambda = 0.7$, with $h = H_0 / 100$ km s$^{-1}$ Mpc$^{-1}$. Magnitudes are given in the AB system and are expressed in the five standard bands U (Bessell), B and V (Johnson), R and I (Cousins). \begin{figure} \centering \includegraphics[width=\hsize]{frac_20.ps} \caption{Observed fraction of bright galaxies ($M_{B_{AB}}-5log(h) < -20$) of different types as a function of redshift. Error bars are $1\sigma$ Poisson errors. } \label{frac_20} \end{figure} \section{The first epoch VVDS sample} The VVDS is described in detail in Le~F\`evre et al. (\cite{vvds1}): here we report only the main characteristics of the sample used for the analysis presented in this paper. \\ The entire VVDS is formed by a wide part on 4 fields (which is not used in this paper), and by a deep part, with spectroscopy in the range $17.5\le$ I$_{AB}\le 24$ on the field 0226-04. Multicolour photometry is available for each field (Le~F\`evre et al. \cite{photom1}): in particular, the B, V, R, I photometry for the 0226-04 deep field is described in detail in McCracken et al. (\cite{photom2}). Moreover, U band (Radovich et al. \cite{photomU}) and J and K band (Iovino et al. \cite{photomK}) data are available for smaller areas of these fields. \\ Starting from these photometric catalogues, spectroscopic observations were performed with the VIsible Multi--Object Spectrograph (VIMOS, Le~F\`evre et al. \cite{vimos}) mounted on the ESO Very Large Telescope (UT3). The selection of objects for spectroscopic observations was based only on magnitude, without any other colour or shape criteria. \\ Deep spectroscopic observations ($17.5\le$ I$_{AB}\le 24$) were also performed on the Chandra Deep Field South (VVDS-CDFS, Le~F\`evre et al. \cite{vvds-cdfs}), starting from the EIS I band photometry and astrometry (Arnouts et al. \cite{eis}). Multicolour U, B, V, R and I photometry for this sample is available from the COMBO-17 survey (Wolf et al. \cite{wolf03}). \\ Spectroscopic data were reduced with the VIMOS Interactive Pipeline Graphical Interface (VIPGI, Scodeggio et al. \cite{vipgi}, Zanichelli et al. \cite{ifu}) and redshift measurements were performed with the KBRED package (Scaramella et al. \cite{kbred}) and then visually checked. Each redshift measurement was assigned a quality flag, ranging from 0 (failed measurement) to 4 (100\% confidence level); flag 9 indicates spectra with a single emission line, for which multiple solutions are possible. Further details on the quality flags are given in Le~F\`evre et al. (\cite{vvds1}). \\ The analysis presented in this paper is based on the first epoch VVDS deep sample, which has been obtained from the first observations (fall 2002) on the fields VVDS-02h and VVDS-CDFS, which cover 1750 and 450 arcmin$^2$, respectively. We eliminated from the sample the spectroscopically confirmed stars and broad line AGNs, leaving 6477 + 1236 galaxy spectra with secure spectroscopic identification (flags 2, 3, 4, 9), corresponding to a confidence level higher than 75\%. Redshifts with flags 0 and 1 are taken into account statistically (see sect. 4). This spectroscopic sample, which is purely magnitude selected, has a median redshift of $\sim 0.76$.
\section{Galaxy classification} Galaxies have been classified using all the multicolour information available; in the VVDS-02h field B, V, R and I band magnitudes are available for all galaxies, while U band data are available for 83\% of the galaxies. For the VVDS-CDFS sample U, B, V, R and I photometry from the COMBO-17 survey is used. \\ Absolute magnitudes are computed following the method described in the Appendix of Ilbert et al. (\cite{vvdsLF}). The K-correction is computed using a set of templates and all the photometric information (UBVRI) available. However, in order to reduce the template dependency, the rest frame absolute magnitude in each band is derived using the apparent magnitude from the closest observed band, redshifted to the redshift of the galaxy. With this method, the applied K-correction is as small as possible. \\ For each galaxy the rest frame magnitudes were matched with the empirical set of SEDs described in Arnouts et al. (\cite{arnouts99}), composed of four observed spectra (CWW, Coleman et al. \cite{cww}) and two starburst SEDs computed with GISSEL (Bruzual \& Charlot \cite{bc93}). The match is performed minimizing a $\chi^2$ variable on these templates at the spectroscopic redshift of each galaxy. The same procedure has been applied by Lin et al. (\cite{lin99}) to the CNOC-2 survey up to $z\sim 0.55$. This approach is also similar to that adopted by Wolf et al. (\cite{wolf03}) for the COMBO-17 survey, but we have the advantage of using spectroscopic redshifts, while they had to rely on photometric redshifts. \\ Galaxies have been divided into four types, corresponding to the E/S0 template (type 1), early spiral template (type 2), late spiral template (type 3) and irregular template (type 4). These types are based on the four CWW templates: type 4 also includes the two starburst templates. The numbers of galaxies for each type are listed in Table \ref{numbers}. \\ In order to have an idea of the correspondence of these types with colours, we report here the rest frame colours for each template: type 1, 2, 3 and 4 have $B_{AB}-I_{AB}=$1.58, 1.11, 0.79 and 0.57, respectively. Given these colours, a rough colour subdivision for each class is $1.3<B_{AB}-I_{AB}$ for type 1, $0.95<B_{AB}-I_{AB}<1.3$ for type 2, $0.68<B_{AB}-I_{AB}<0.95$ for type 3 and $B_{AB}-I_{AB}<0.68$ for type 4. However, we stress that these colour ranges are only indicative and our classification scheme is based on the whole multicolour coverage. In Fig.\ref{colors} we show the $U-B$ and $B-I$ colour distributions for the galaxies of our sample divided according to type. From this figure it is clear that, although the different types have different colour distributions, they present significant overlaps. Such overlaps are a general feature of classification schemes based on template fitting of multicolour data. \\ Note that, in order to avoid being model dependent, we did not apply to the templates any correction aimed at taking into account colour evolution with redshift. It is well known that the colour of a simple stellar population subject to passive evolution was bluer in the past. In principle, this could imply that galaxies classified as type 1 at low redshift might be classified differently at higher redshift. Indeed, this effect has been invoked by the authors who found negative evolution in the luminosity function of ``red'' galaxies (see e.g. Wolf et al. \cite{wolf03}).
In order to verify this hypothesis, we applied our classification scheme to synthetic spectra (Bruzual \& Charlot \cite{bc93}) of ellipticals (i.e. simple stellar populations and exponentially declining star formation with time scales of 0.1 Gyr and 0.3 Gyr) with formation redshift between $z_{form}=$2 and 20. We find that all ellipticals with $z_{form}> 2$ would be classified as type 1 objects, even at $z\sim 1$. \\ In order to check, at least on a statistical basis, the consistency between this photometric classification and average spectral properties, we summed the normalized spectra of all galaxies in each of the four types. The resulting average spectra are shown in Fig.\ref{class_types} for each type in various redshift bins. This figure confirms the robustness of our classification scheme: moving from type 1 to type 4 objects, the composite spectra show an increasingly blue continuum, with emission lines of increasing strength. The four types thus show distinct spectral features and therefore represent different classes of objects. \\ For the VVDS-CDFS sample, HST-ACS images are available. Using these data, Lauger et al. (\cite{lauger06}) classified the galaxies in this sample using an asymmetry-concentration diagram. Plotting our type 1 galaxies in this diagram, we find that $\sim 91\%$ of them lie in the region of bulge dominated objects, showing excellent consistency also between our photometric classification and a morphological one. \\ In Fig.\ref{frac_20} we plot the observed fraction of bright galaxies of each type as a function of redshift. We selected objects with $M_{B_{AB}}-5log(h) < -20$ because these galaxies are visible in the whole redshift range. This figure clearly shows the growing importance of bright late type objects with increasing redshift, and the corresponding strong decrease of the fraction of bright early type galaxies. \begin{table} \caption[]{Numbers of galaxies of different types in the sample} \begin{flushleft} \begin{tabular}{rr} \hline\noalign{\smallskip} Type & N$_{gal}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} total & 7713 \\ type 1 & ~730 \\ type 2 & 1290 \\ type 3 & 2622 \\ type 4 & 3071 \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{numbers} \end{table} \begin{figure*} \centering \includegraphics[width=0.45\hsize]{LF_04-09_U.ps} \includegraphics[width=0.45\hsize]{LF_04-09_B.ps} \includegraphics[width=0.45\hsize]{LF_04-09_R.ps} \includegraphics[width=0.45\hsize]{LF_04-09_I.ps} \caption{ Luminosity functions by different types in the redshift range $[0.4-0.9]$ in various rest frame bands: upper panels U (left) and B (right); lower panels R (left) and I (right). Note that the ranges of magnitude are different in the various panels. The lines represent the $STY$ estimates for type 1 (solid), type 2 (dotted), type 3 (short dashed) and type 4 (long dashed) galaxies. The vertical dashed line represents the faint absolute magnitude limit considered in the $STY$ estimate (see text). The shaded regions represent the 68\% uncertainties on the parameters $\alpha$ and $M^*$, whose confidence ellipses are reported in the right panels. The ellipse contours are at 68\% and 90\% confidence level (solid and dotted line respectively). The points inside the ellipses represent the best fit values for type 1 (filled circle), type 2 (open circle), type 3 (filled triangle) and type 4 (open triangle) galaxies.
} \label{LF_04-09} \end{figure*} \begin{table*} \caption[]{$STY$ parameters (with $1\sigma$ errors) for different galaxy types in different bands in the redshift range $[0.4 - 0.9]$ } \begin{flushleft} \begin{tabular}{c c c c c c c} \hline \multicolumn{7}{c}{$\Omega_m$=0.3 \hspace{1cm} $\Omega_\Lambda$=0.7} \\ \hline Band & Type & Number$^{(a)}$ & Number$^{(b)}$ & $\alpha$ & $M^*_{AB}-5log(h)$ & $\phi^*$($10^{-3} h^3 Mpc^{-3}$) \vspace{0.2cm} \\ \hline \hline U & 1 & 411 & 357 & -0.16$^{{\rm + 0.15}}_{{\rm - 0.15}}$ & -19.03$^{{\rm + 0.14}}_{{\rm - 0.15}}$ & 3.35$^{{\rm + 0.20}}_{{\rm - 0.26}}$ \\ & 2 & 677 & 596 & -0.54$^{{\rm + 0.12}}_{{\rm - 0.11}}$ & -19.18$^{{\rm + 0.13}}_{{\rm - 0.14}}$ & 4.83$^{{\rm + 0.48}}_{{\rm - 0.53}}$ \\ & 3 & 1371 & 1192 & -0.90$^{{\rm + 0.08}}_{{\rm - 0.08}}$ & -19.31$^{{\rm + 0.11}}_{{\rm - 0.12}}$ & 7.32$^{{\rm + 0.86}}_{{\rm - 0.86}}$ \\ & 4 & 1442 & 1238 & -1.66$^{{\rm + 0.10}}_{{\rm - 0.10}}$ & -19.35$^{{\rm + 0.17}}_{{\rm - 0.18}}$ & 4.09$^{{\rm + 1.21}}_{{\rm - 1.04}}$ \\ B & 1 & 411 & 404 & -0.29$^{{\rm + 0.10}}_{{\rm - 0.10}}$ & -20.35$^{{\rm + 0.13}}_{{\rm - 0.13}}$ & 3.19$^{{\rm + 0.23}}_{{\rm - 0.26}}$ \\ & 2 & 677 & 669 & -0.61$^{{\rm + 0.08}}_{{\rm - 0.08}}$ & -20.25$^{{\rm + 0.12}}_{{\rm - 0.12}}$ & 4.48$^{{\rm + 0.43}}_{{\rm - 0.44}}$ \\ & 3 & 1371 & 1349 & -0.96$^{{\rm + 0.06}}_{{\rm - 0.06}}$ & -20.12$^{{\rm + 0.10}}_{{\rm - 0.11}}$ & 6.79$^{{\rm + 0.72}}_{{\rm - 0.71}}$ \\ & 4 & 1442 & 1403 & -1.62$^{{\rm + 0.08}}_{{\rm - 0.08}}$ & -19.83$^{{\rm + 0.15}}_{{\rm - 0.16}}$ & 4.46$^{{\rm + 1.08}}_{{\rm - 0.95}}$ \\ V & 1 & 411 & 411 & -0.31$^{{\rm + 0.09}}_{{\rm - 0.09}}$ & -21.13$^{{\rm + 0.12}}_{{\rm - 0.13}}$ & 3.16$^{{\rm + 0.23}}_{{\rm - 0.25}}$ \\ & 2 & 677 & 677 & -0.61$^{{\rm + 0.07}}_{{\rm - 0.07}}$ & -20.82$^{{\rm + 0.11}}_{{\rm - 0.12}}$ & 4.51$^{{\rm + 0.40}}_{{\rm - 0.41}}$ \\ & 3 & 1371 & 1371 & -1.00$^{{\rm + 0.06}}_{{\rm - 0.06}}$ & -20.57$^{{\rm + 0.10}}_{{\rm - 0.11}}$ & 6.21$^{{\rm + 0.66}}_{{\rm - 0.64}}$ \\ & 4 & 1442 & 1442 & -1.62$^{{\rm + 0.08}}_{{\rm - 0.08}}$ & -20.03$^{{\rm + 0.15}}_{{\rm - 0.16}}$ & 4.36$^{{\rm + 1.03}}_{{\rm - 0.92}}$ \\ R & 1 & 411 & 411 & -0.31$^{{\rm + 0.09}}_{{\rm - 0.09}}$ & -21.48$^{{\rm + 0.12}}_{{\rm - 0.13}}$ & 3.16$^{{\rm + 0.23}}_{{\rm - 0.25}}$ \\ & 2 & 677 & 677 & -0.63$^{{\rm + 0.07}}_{{\rm - 0.07}}$ & -21.15$^{{\rm + 0.11}}_{{\rm - 0.12}}$ & 4.37$^{{\rm + 0.40}}_{{\rm - 0.41}}$ \\ & 3 & 1371 & 1371 & -1.05$^{{\rm + 0.05}}_{{\rm - 0.05}}$ & -20.86$^{{\rm + 0.11}}_{{\rm - 0.11}}$ & 5.55$^{{\rm + 0.63}}_{{\rm - 0.60}}$ \\ & 4 & 1442 & 1436 & -1.65$^{{\rm + 0.08}}_{{\rm - 0.08}}$ & -20.18$^{{\rm + 0.16}}_{{\rm - 0.17}}$ & 3.87$^{{\rm + 0.99}}_{{\rm - 0.87}}$ \\ I & 1 & 411 & 411 & -0.31$^{{\rm + 0.09}}_{{\rm - 0.09}}$ & -21.78$^{{\rm + 0.12}}_{{\rm - 0.13}}$ & 3.17$^{{\rm + 0.23}}_{{\rm - 0.25}}$ \\ & 2 & 677 & 677 & -0.66$^{{\rm + 0.07}}_{{\rm - 0.07}}$ & -21.44$^{{\rm + 0.12}}_{{\rm - 0.12}}$ & 4.22$^{{\rm + 0.40}}_{{\rm - 0.40}}$ \\ & 3 & 1371 & 1370 & -1.10$^{{\rm + 0.05}}_{{\rm - 0.05}}$ & -21.13$^{{\rm + 0.11}}_{{\rm - 0.11}}$ & 5.05$^{{\rm + 0.60}}_{{\rm - 0.57}}$ \\ & 4 & 1442 & 1413 & -1.67$^{{\rm + 0.08}}_{{\rm - 0.08}}$ & -20.32$^{{\rm + 0.17}}_{{\rm - 0.18}}$ & 3.58$^{{\rm + 0.97}}_{{\rm - 0.86}}$ \\ \hline \multicolumn{7}{l}{(a) Number of galaxies in the redshift bin }\\ \multicolumn{7}{l}{(b) Number of galaxies brighter than the bias limit (sample used for $STY$ estimate; see the text for details)}\\ \end{tabular} \end{flushleft} \label{param04-09} \end{table*} 
\section{Luminosity function estimate} Luminosity functions were derived using the Algorithm for Luminosity Function (ALF), a dedicated tool which uses various estimators: the non-parametric $1/V_{max}$ (Schmidt \cite{schmidt68}), $C^+$ (Lynden Bell \cite{lyndenbell71}), $SWML$ (Efstathiou et al. \cite{swml}) and the parametric $STY$ (Sandage, Tammann \& Yahil \cite{sty}), for which we assumed a Schechter function (Schechter \cite{schechter76}). The tool and these estimators, as well as their specific use in the context of the VVDS, are described in detail in Ilbert et al. (\cite{vvdsLF}). \\ Ilbert et al. (\cite{ilbert04}) have shown that the estimate of the global luminosity function can be biased, mainly in its faint end, when the band in which it is measured is far from the rest frame band in which galaxies are selected. This is due to the fact that, because of the K-corrections, different galaxy types are visible in different absolute magnitude ranges at a given redshift. When computing the global luminosity functions (Ilbert et al. \cite{vvdsLF}), we avoided this bias by using for the $STY$ estimate, in each redshift range, only galaxies within the absolute magnitude range where all the SEDs are potentially observable. \\ Although this bias is much less important when estimating the luminosity function of galaxies divided by type, we have nevertheless taken it into account. The absolute magnitude limits for the $STY$ estimate are indicated with vertical dashed lines in the figures, and in the tables where the best fit parameters are reported (Table \ref{param04-09} and \ref{parameters}) we give both the total number of objects and the number of galaxies within this magnitude limit. \\ In order to take into account the unknown redshifts (objects not observed and failed spectra), a weight was applied to each galaxy, following the procedure described in detail in Ilbert et al. (\cite{vvdsLF}). \\ This weight is a combination of two different contributions: the target sampling rate and the spectroscopic success rate. The target sampling rate, i.e. the fraction of observed galaxies, corrects for the selection effects due to the procedure used for the mask preparation (Bottini et al. \cite{vmmps}): to maximize the number of slits, the procedure tends to select objects with smaller angular size on the x-axis of the image, corresponding to the direction in which the slits are placed. As a consequence, the final spectroscopic sample has a bias against large objects, which produces a mild dependence of the target sampling rate on the apparent magnitude. The target sampling rate is $\sim 25\%$ for most of the sample and is computed as a function of the object size (see Ilbert et al. \cite{vvdsLF} for further details). The spectroscopic success rate takes into account the fraction of objects without a good redshift determination (i.e. flags 0 and 1). As shown in Ilbert et al. (\cite{vvdsLF}), these objects are expected to have a different redshift distribution with respect to that of the sample with measured redshift, as confirmed by the use of their photometric redshifts. Since the spectroscopic success rate decreases at faint apparent magnitudes, we derived it in four magnitude bins as a function of redshift (using photometric redshifts, see Fig.3 in Ilbert et al. \cite{vvdsLF}). The shape of the spectroscopic success rate is similar in all magnitude bins, showing a maximum at $z\sim 0.7$ and two minima for $z<0.5$ and $z>1.5$.
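\\ Schematically, each galaxy $i$ therefore enters the luminosity function estimators with a statistical weight given by the inverse product of the two rates, \[ w_i = \frac{1}{\mathrm{TSR}_i \times \mathrm{SSR}_i}, \] where $\mathrm{TSR}_i$ and $\mathrm{SSR}_i$ denote the target sampling rate and the spectroscopic success rate evaluated for galaxy $i$; we quote this compact form only as a summary, the detailed size, magnitude and redshift dependence of the two rates being that of Ilbert et al. (\cite{vvdsLF}). \\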
Since the number of galaxies for each type is not large enough to reliably estimate the spectroscopic success rate as a function of galaxy type as well, we used the global spectroscopic success rate for all galaxy types. \\ In order to check the effect of the ``cosmic variance'', i.e. variations in the luminosity function due to fluctuations in the large scale structure, we applied the following test on the VVDS-02h deep area. For this field we derived photometric redshifts (Ilbert et al. \cite{zphot}) based on both VVDS photometry (BVRIJK) and on new CFHT Legacy Survey photometry (ugriz), which has now become available in a field covering 1 sq. deg. (http://www.cfht.hawaii.edu/Science/CFHLS/), which includes the 1700 arcmin$^2$ area covered by the VVDS spectroscopic survey. Then we divided the field into two non-overlapping regions (the sub-area where spectroscopic data are available and the remaining area) and we compared the luminosity distributions of galaxies in these two samples ($\sim$0.5 sq.deg. each), in the same redshift bins in which the luminosity functions were derived. In each redshift bin the two distributions show average differences of the order of 10\%, with some larger fluctuations due to Poisson statistics, without any systematic trend. Therefore the influence of the ``cosmic variance'' is expected to be limited. \section{Comparison of the luminosity functions of different types} As a first step, we compare the luminosity functions for galaxies of different types, in order to see what the relative behaviour of the various populations is. To perform this comparison, we selected galaxies in the redshift range $[0.4 - 0.9]$. About $50\%$ of the objects of our sample are included in this redshift interval, covering a wide range of luminosities (i.e. absolute magnitudes in the B band are in the range $[-23.7; -16.8] - 5$ log$(h)$). Moreover, the spectroscopic success rate of our survey reaches a maximum at $z\sim 0.7$ and therefore the possible dependency of the estimated LF on the weighting scheme described above is minimized in this redshift interval. \\ In Fig.\ref{LF_04-09} we report the luminosity functions estimated with the $STY$ method in the rest frame bands U, B, R and I, with the corresponding confidence ellipses for the $\alpha$ and $M^*$ parameters. The values of the parameters with their $1\sigma$ errors, as well as the number of galaxies for each type, are reported in Table \ref{param04-09}. The $\phi^*$ parameters listed in this table are derived adopting the density estimator of Efstathiou et al. (\cite{swml}), following the procedure described in the Appendix of Ilbert et al. (\cite{vvdsLF}). \\ Table \ref{param04-09} shows that our estimates are based on several hundred galaxies for each type, and are therefore statistically well constrained, as can be seen also from the sizes of the confidence ellipses in the figures. The first, very clear result which appears from Fig.\ref{LF_04-09} is the strong steepening of the luminosity functions going from early to late types. In all bands the power law slope steepens by $\Delta \alpha \sim 1.3 - 1.5$ going from type 1 to type 4 galaxies, and late type galaxies are the dominant population at faint magnitudes. \\ Systematic trends are also seen in the $M^*$ parameter. In the reddest bands (lower panels in Fig.\ref{LF_04-09}), $M^*$ is significantly fainter for late type galaxies and this effect is particularly apparent for type 4 objects.
The brighter $M^*$ for early type galaxies reflects the fact that most of the more massive objects belong to this population. \\ The difference between the $M^*$ values of the different types decreases in the B band and disappears or even changes sign in the U band. This behaviour is explained by the fact that the luminosity in the bluer bands is dominated by the light of young stars produced during star formation activity. Galaxies of later types, which are still actively forming stars, are therefore more luminous in the bluer bands. \\ These results are qualitatively in agreement with previous results from the literature, most of which refer to lower redshifts (see de~Lapparent \cite{delapparent03b} for a review of the results from a number of surveys in the redshift range $0\le z \le 0.6$). In particular, in almost all surveys the luminosity function of late type galaxies is steeper and has a fainter $M^*$ than that of early type galaxies. However, a quantitative comparison with previous results is difficult, because of the different classification schemes adopted in the various surveys, the different redshift ranges and selection criteria. \begin{figure*} \centering \includegraphics[width=\hsize]{LF_type1_new.ps} \caption{ Evolution of the luminosity function in the B-band for type 1 galaxies. Each panel refers to a different redshift bin, which is indicated in the label. The vertical dashed line represents the faint absolute limit considered in the $STY$ estimate. The luminosity functions are estimated with different methods (see text for details) but for clarity we plot only the results from $C^+$ (open squares) and $STY$ (solid line). The dashed line is the $STY$ estimate obtained by fixing $\alpha$ to the value determined in the redshift range $[0.4-0.9]$. The dotted line represents the luminosity function estimated in the redshift range $[0.4-0.9]$: this curve is reported in each panel as a reference. } \label{LFtype1} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{LF_type2_new.ps} \caption{ Evolution of the luminosity function in the B-band for type 2 galaxies. The meaning of the lines and the symbols is the same as in Fig.\ref{LFtype1}. } \label{LFtype2} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{LF_type3_new.ps} \caption{ Evolution of the luminosity function in the B-band for type 3 galaxies. The meaning of the lines and the symbols is the same as in Fig.\ref{LFtype1}. } \label{LFtype3} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{LF_type4_new.ps} \caption{ Evolution of the luminosity function in the B-band for type 4 galaxies. The meaning of the lines and the symbols is the same as in Fig.\ref{LFtype1}.
} \label{LFtype4} \end{figure*} \begin{table*} \caption[]{$STY$ parameters (with $1\sigma$ errors) for different galaxy types in the rest frame B band} \begin{flushleft} \begin{tabular}{r r r r c l l} \hline \multicolumn{7}{c}{$\Omega_m$=0.3 \hspace{1cm} $\Omega_\Lambda$=0.7} \\ \hline Type & z-bin & Number$^{(a)}$ & Number$^{(b)}$ & $\alpha$ & $M^*_{AB}-5log(h)$ & $\phi^*$ ($10^{-3} h^3 Mpc^{-3}$) \vspace{0.2cm} \\ \hline & & & & & $\alpha$ free\ \ \ \ \ /\ \ \ \ \ $\alpha$ fixed & $\alpha$ free\ \ \ /\ \ \ $\alpha$ fixed \vspace{0.2cm} \\ \hline \hline 1 & 0.20-0.40 & 70 & 65 & -0.04$^{{\rm + 0.28}}_{{\rm - 0.27}}$ & -19.81$^{{\rm + 0.39}}_{{\rm - 0.46}}$ -20.27$^{{\rm + 0.27}}_{{\rm - 0.31}}$ & 5.90$^{{\rm + 0.73}}_{{\rm - 0.73}}$ 5.15$^{{\rm + 0.64}}_{{\rm - 0.64}}$ \\ & 0.40-0.60 & 113 & 106 & -0.40$^{{\rm + 0.20}}_{{\rm - 0.20}}$ & -20.71$^{{\rm + 0.39}}_{{\rm - 0.46}}$ -20.49$^{{\rm + 0.17}}_{{\rm - 0.18}}$ & 2.81$^{{\rm + 0.50}}_{{\rm - 0.58}}$ 3.12$^{{\rm + 0.30}}_{{\rm - 0.30}}$ \\ & 0.60-0.80 & 204 & 197 & -0.22$^{{\rm + 0.17}}_{{\rm - 0.17}}$ & -20.14$^{{\rm + 0.19}}_{{\rm - 0.20}}$ -20.22$^{{\rm + 0.09}}_{{\rm - 0.10}}$ & 3.70$^{{\rm + 0.33}}_{{\rm - 0.43}}$ 3.53$^{{\rm + 0.25}}_{{\rm - 0.25}}$ \\ & 0.80-1.00 & 164 & 164 & -0.01$^{{\rm + 0.25}}_{{\rm - 0.24}}$ & -20.46$^{{\rm + 0.22}}_{{\rm - 0.24}}$ -20.73$^{{\rm + 0.11}}_{{\rm - 0.12}}$ & 2.68$^{{\rm + 0.21}}_{{\rm - 0.21}}$ 2.36$^{{\rm + 0.18}}_{{\rm - 0.18}}$ \\ & 1.00-1.20 & 114 & 114 & -1.23$^{{\rm + 0.34}}_{{\rm - 0.34}}$ & -21.49$^{{\rm + 0.48}}_{{\rm - 0.57}}$ -20.53$^{{\rm + 0.11}}_{{\rm - 0.12}}$ & 0.92$^{{\rm + 0.65}}_{{\rm - 0.56}}$ 2.39$^{{\rm + 0.22}}_{{\rm - 0.22}}$ \\ & 0.40-0.90 & 411 & 404 & -0.29$^{{\rm + 0.10}}_{{\rm - 0.10}}$ & -20.35$^{{\rm + 0.13}}_{{\rm - 0.13}}$ & 3.19$^{{\rm + 0.23}}_{{\rm - 0.26}}$ \\ \hline 2 & 0.20-0.40 & 136 & 132 & -0.67$^{{\rm + 0.13}}_{{\rm - 0.13}}$ & -20.29$^{{\rm + 0.37}}_{{\rm - 0.44}}$ -20.13$^{{\rm + 0.19}}_{{\rm - 0.21}}$ & 5.88$^{{\rm + 1.34}}_{{\rm - 1.33}}$ 6.50$^{{\rm + 0.56}}_{{\rm - 0.56}}$ \\ & 0.40-0.60 & 203 & 195 & -0.50$^{{\rm + 0.15}}_{{\rm - 0.14}}$ & -19.81$^{{\rm + 0.20}}_{{\rm - 0.21}}$ -19.97$^{{\rm + 0.12}}_{{\rm - 0.12}}$ & 4.99$^{{\rm + 0.74}}_{{\rm - 0.79}}$ 4.35$^{{\rm + 0.31}}_{{\rm - 0.31}}$ \\ & 0.60-0.80 & 322 & 310 & -0.57$^{{\rm + 0.13}}_{{\rm - 0.13}}$ & -20.33$^{{\rm + 0.19}}_{{\rm - 0.20}}$ -20.39$^{{\rm + 0.09}}_{{\rm - 0.10}}$ & 4.81$^{{\rm + 0.69}}_{{\rm - 0.74}}$ 4.58$^{{\rm + 0.26}}_{{\rm - 0.26}}$ \\ & 0.80-1.00 & 267 & 267 & -0.60$^{{\rm + 0.20}}_{{\rm - 0.20}}$ & -20.54$^{{\rm + 0.24}}_{{\rm - 0.26}}$ -20.55$^{{\rm + 0.10}}_{{\rm - 0.11}}$ & 3.58$^{{\rm + 0.64}}_{{\rm - 0.74}}$ 3.54$^{{\rm + 0.22}}_{{\rm - 0.22}}$ \\ & 1.00-1.20 & 178 & 175 & -0.76$^{{\rm + 0.34}}_{{\rm - 0.33}}$ & -20.92$^{{\rm + 0.35}}_{{\rm - 0.40}}$ -20.77$^{{\rm + 0.12}}_{{\rm - 0.13}}$ & 2.64$^{{\rm + 0.79}}_{{\rm - 0.98}}$ 3.01$^{{\rm + 0.23}}_{{\rm - 0.23}}$ \\ & 1.20-1.50 & 103 & 103 & -1.57$^{{\rm + 0.61}}_{{\rm - 0.62}}$ & -21.65$^{{\rm + 0.62}}_{{\rm - 0.84}}$ -20.82$^{{\rm + 0.13}}_{{\rm - 0.14}}$ & 0.81$^{{\rm + 1.04}}_{{\rm - 0.72}}$ 2.19$^{{\rm + 0.22}}_{{\rm - 0.22}}$ \\ & 0.40-0.90 & 677 & 669 & -0.61$^{{\rm + 0.08}}_{{\rm - 0.08}}$ & -20.25$^{{\rm + 0.12}}_{{\rm - 0.12}}$ & 4.48$^{{\rm + 0.43}}_{{\rm - 0.44}}$ \\ \hline 3 & 0.20-0.40 & 341 & 329 & -0.84$^{{\rm + 0.10}}_{{\rm - 0.10}}$ & -18.92$^{{\rm + 0.19}}_{{\rm - 0.21}}$ -19.14$^{{\rm + 0.12}}_{{\rm - 0.13}}$ & 12.37$^{{\rm + 2.30}}_{{\rm - 2.20}}$ 9.82$^{{\rm + 0.54}}_{{\rm - 0.54}}$ \\ & 
0.40-0.60 & 451 & 429 & -1.07$^{{\rm + 0.10}}_{{\rm - 0.10}}$ & -20.28$^{{\rm + 0.24}}_{{\rm - 0.27}}$ -20.04$^{{\rm + 0.11}}_{{\rm - 0.11}}$ & 4.93$^{{\rm + 1.26}}_{{\rm - 1.17}}$ 6.31$^{{\rm + 0.30}}_{{\rm - 0.30}}$ \\ & 0.60-0.80 & 626 & 610 & -0.79$^{{\rm + 0.13}}_{{\rm - 0.13}}$ & -19.86$^{{\rm + 0.17}}_{{\rm - 0.19}}$ -20.10$^{{\rm + 0.09}}_{{\rm - 0.09}}$ & 9.10$^{{\rm + 1.46}}_{{\rm - 1.51}}$ 7.11$^{{\rm + 0.29}}_{{\rm - 0.29}}$ \\ & 0.80-1.00 & 534 & 533 & -0.87$^{{\rm + 0.15}}_{{\rm - 0.15}}$ & -20.23$^{{\rm + 0.18}}_{{\rm - 0.19}}$ -20.33$^{{\rm + 0.08}}_{{\rm - 0.08}}$ & 7.01$^{{\rm + 1.29}}_{{\rm - 1.34}}$ 6.27$^{{\rm + 0.27}}_{{\rm - 0.27}}$ \\ & 1.00-1.20 & 292 & 288 & -1.39$^{{\rm + 0.26}}_{{\rm - 0.26}}$ & -20.82$^{{\rm + 0.31}}_{{\rm - 0.34}}$ -20.38$^{{\rm + 0.10}}_{{\rm - 0.10}}$ & 3.11$^{{\rm + 1.56}}_{{\rm - 1.35}}$ 5.57$^{{\rm + 0.33}}_{{\rm - 0.33}}$ \\ & 1.20-1.50 & 208 & 193 & -1.86$^{{\rm + 0.55}}_{{\rm - 0.59}}$ & -21.87$^{{\rm + 0.77}}_{{\rm - 1.23}}$ -20.81$^{{\rm + 0.12}}_{{\rm - 0.13}}$ & 0.80$^{{\rm + 1.82}}_{{\rm - 0.79}}$ 3.67$^{{\rm + 0.27}}_{{\rm - 0.27}}$ \\ & 0.40-0.90 & 1371 & 1349 & -0.96$^{{\rm + 0.06}}_{{\rm - 0.06}}$ & -20.12$^{{\rm + 0.10}}_{{\rm - 0.11}}$ & 6.79$^{{\rm + 0.72}}_{{\rm - 0.71}}$ \\ \hline 4 & 0.20-0.40 & 394 & 380 & -1.59$^{{\rm + 0.11}}_{{\rm - 0.12}}$ & -19.60$^{{\rm + 0.46}}_{{\rm - 0.58}}$ -19.73$^{{\rm + 0.29}}_{{\rm - 0.33}}$ & 3.05$^{{\rm + 2.09}}_{{\rm - 1.67}}$ 2.59$^{{\rm + 0.13}}_{{\rm - 0.13}}$ \\ & 0.40-0.60 & 487 & 449 & -1.53$^{{\rm + 0.18}}_{{\rm - 0.19}}$ & -19.17$^{{\rm + 0.33}}_{{\rm - 0.39}}$ -19.38$^{{\rm + 0.17}}_{{\rm - 0.18}}$ & 5.57$^{{\rm + 3.14}}_{{\rm - 2.56}}$ 4.10$^{{\rm + 0.19}}_{{\rm - 0.19}}$ \\ & 0.60-0.80 & 656 & 622 & -1.35$^{{\rm + 0.15}}_{{\rm - 0.15}}$ & -19.55$^{{\rm + 0.20}}_{{\rm - 0.21}}$ -19.95$^{{\rm + 0.12}}_{{\rm - 0.12}}$ & 7.72$^{{\rm + 2.33}}_{{\rm - 2.09}}$ 4.07$^{{\rm + 0.16}}_{{\rm - 0.16}}$ \\ & 0.80-1.00 & 552 & 552 & -1.68$^{{\rm + 0.20}}_{{\rm - 0.21}}$ & -20.19$^{{\rm + 0.31}}_{{\rm - 0.36}}$ -20.10$^{{\rm + 0.12}}_{{\rm - 0.12}}$ & 4.06$^{{\rm + 2.44}}_{{\rm - 1.93}}$ 4.72$^{{\rm + 0.20}}_{{\rm - 0.20}}$ \\ & 1.00-1.20 & 389 & 373 & -1.99$^{{\rm + 0.33}}_{{\rm - 0.34}}$ & -20.62$^{{\rm + 0.41}}_{{\rm - 0.52}}$ -20.19$^{{\rm + 0.12}}_{{\rm - 0.12}}$ & 3.19$^{{\rm + 3.49}}_{{\rm - 2.22}}$ 6.95$^{{\rm + 0.36}}_{{\rm - 0.36}}$ \\ & 1.20-1.50 & 239 & 188 & -2.50$^{{\rm + 0.52}}_{{\rm - 0.91}}$ & -21.58$^{{\rm + 0.76}}_{{\rm - 0.40}}$ -20.53$^{{\rm + 0.12}}_{{\rm - 0.12}}$ & 0.52$^{{\rm + 2.06}}_{{\rm - 0.29}}$ 4.34$^{{\rm + 0.32}}_{{\rm - 0.32}}$ \\ & 0.40-0.90 & 1442 & 1403 & -1.62$^{{\rm + 0.08}}_{{\rm - 0.08}}$ & -19.83$^{{\rm + 0.15}}_{{\rm - 0.16}}$ & 4.46$^{{\rm + 1.08}}_{{\rm - 0.95}}$ \\ \hline \multicolumn{7}{l}{(a) Number of galaxies in the redshift bin }\\ \multicolumn{7}{l}{(b) Number of galaxies brighter than the bias limit (sample used for $STY$ estimate; see the text for details)}\\ \end{tabular} \end{flushleft} \label{parameters} \end{table*} \begin{figure*} \centering \includegraphics[width=\hsize]{ellipses.ps} \caption{ Confidence ellipses at 90\% confidence level for the $\alpha$ and $M^*$ parameters of the luminosity functions reported in Fig.\ref{LFtype1}, \ref{LFtype2}, \ref{LFtype3} and \ref{LFtype4}. 
Different line types and weights refer to different redshift bins, described in the labels; the points indicate the best fit values in the redshift bins $[0.2-0.4]$ (filled squares), $[0.4-0.6]$ (open squares), $[0.6-0.8]$ (filled triangles), $[0.8-1.0]$ (open triangles), $[1.0-1.2]$ (filled circles), $[1.2-1.5]$ (open circles). The large open star indicates the reference value obtained in the redshift bin $[0.4-0.9]$. } \label{ellipses} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{evolution.ps} \caption{ Evolution of the parameters $\phi^*$ (upper panel) and $M^*$ (middle panel) as a function of redshift, for different galaxy types. In the lowest panel the density of bright ($M_{B_{AB}}-5log(h) < -20$) galaxies of different types is shown. The slope $\alpha$ is fixed to the value derived in the redshift range $[0.4-0.9]$; the suffix `ref' indicates the parameters estimated in the redshift range $[0.4-0.9]$. Error bars are at $1\sigma$. } \label{para_evol} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.45\hsize]{LF_type1_Bcombo.ps} \includegraphics[width=0.45\hsize]{LF_type2_Bcombo.ps} \includegraphics[width=0.45\hsize]{LF_type3_Bcombo.ps} \includegraphics[width=0.45\hsize]{LF_type4_Bcombo.ps} \caption{ Comparison between VVDS and COMBO-17 luminosity functions, in various redshift bins (indicated in the label in each panel) and for various types. Upper panels: type 1 (left) and type 2 (right) galaxies. Lower panels: type 3 (left) and type 4 (right) galaxies. Solid lines: VVDS estimate. Dotted lines: VVDS estimate in the redshift range $[0.4-0.9]$, plotted as a reference. Dashed line: COMBO-17 estimate from Wolf et al. (\cite{wolf03}). } \label{LFcombo} \end{figure*} \section{Evolution with redshift of the luminosity functions by type} We derived luminosity functions for each type in redshift bins in the U, B, V, R and I rest frame bands. Given our multicolour coverage and the explored redshift range, the estimates of the absolute magnitudes in the U and B rest frame bands are those requiring the least extrapolation (see Appendix A and Figure A.1 in Ilbert et al. \cite{vvdsLF}). Therefore, to limit the number of figures in the paper, we show the results in the B rest frame band. \\ Figures \ref{LFtype1}, \ref{LFtype2}, \ref{LFtype3} and \ref{LFtype4} show the luminosity function for type 1, 2, 3 and 4 galaxies in redshift bins, obtained with the $C^+$ and $STY$ methods. The luminosity functions derived with the other two methods ($1/V_{max}$ and $SWML$) are consistent with those shown in the figures, but are not drawn for clarity. The dotted line in each panel represents the fit derived in the redshift range $[0.4-0.9]$ (see previous section), while the dashed line is the estimate derived by fixing the slope $\alpha$ to the value obtained in the range $[0.4-0.9]$. \\ In Table \ref{parameters} we report the Schechter parameters, with their $1\sigma$ errors, estimated for the various redshift bins, from $z=0.2$ to $z=1.5$ for each type; as a reference, in the last line we give the parameters derived in the redshift bin $[0.4-0.9]$. \\ We do not show the results for bins where the number of objects is too small (less than $\sim 30$) to constrain the parameters of the luminosity function (these are the bin $[0.05-0.2]$ for types 1 and 2 and the bin $[1.2-1.5]$ for type 1).
Given the bright magnitude limit of the survey (I$_{AB} \ge 17.5$) and the small sampled volume, bright galaxies are not sampled in the redshift bin $[0.05-0.2]$ and therefore we cannot constrain the $M^*$ parameter even for type 3 and 4 galaxies, where the number of objects is relatively high ($\sim 80$). For this reason we show in the figures the luminosity function estimates in this bin, but we do not report the $STY$ parameters for this redshift range in Table \ref{parameters}. \\ In Fig.\ref{ellipses} we show the confidence ellipses of the parameters $\alpha$ and $M^*$ in different redshift bins for the different types. From this figure it is possible to see that, within each type, the estimated slopes $\alpha$ in the various redshift bins are always consistent (within 90\% confidence level) with each other and with the value derived in the redshift range $[0.4-0.9]$. Therefore, there is no evidence of a significant change with redshift of the luminosity function slope within each galaxy type. Note also that the uncertainties on the slope estimates become quite large for $z>1$: this is due to the fact that, even with the faint limit (I$_{AB} \le 24$) of this survey, the number of galaxies fainter than $M^*$ is too low to constrain the slope well. \\ In each panel of Figures \ref{LFtype1}, \ref{LFtype2}, \ref{LFtype3} and \ref{LFtype4} we draw, as a reference, the luminosity function derived in the redshift bin $[0.4-0.9]$ (dotted line). Comparing this curve with the estimates of the luminosity function in the different redshift bins, an evolution can be seen which depends strongly on the galaxy type. This evolution is particularly evident for type 4 galaxies: going from low to high redshift there is an almost continuous brightening of $M^*$ and at fixed luminosity the density of these galaxies was much higher in the past. \\ The observed evolution of the luminosity function could be due to an evolution in luminosity, in density, or both. One of the advantages of the $STY$ method is that it allows one to derive the $\alpha$ and $M^*$ parameters independently of $\phi^*$, which is not possible when directly fitting a Schechter function to the $1/V_{max}$ points. \\ Since we found that $\alpha$ is consistent with being constant for each type, we can fix it at the reference value derived in the redshift range $[0.4-0.9]$ and then study the variations of the parameters $M^*$ and $\phi^*$ as a function of redshift (see upper and middle panels of Fig.\ref{para_evol}). These estimates are reported in Table \ref{parameters}. \\ From Fig.\ref{para_evol} we can see a mild evolution of $M^*$ from the lowest to the highest redshift bin for each type. In particular, this brightening ranges from $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 0.5$ mag for early type galaxies to $\sim 1$ mag for the latest type galaxies. The only exception with respect to the general trend is that of type 3 objects in the bin $[0.2-0.4]$, for which the best fit value of the $M^*$ parameter is significantly fainter than expected. The reason for this discontinuity in $M^*$ for type 3 galaxies at low redshift is not clear. \\ On the contrary, the $\phi^*$ parameter shows a very different behaviour for type 1, 2 and 3 galaxies with respect to type 4 galaxies. The first three types show a rapid decrease of $\phi^*$ at low redshifts (between $z\sim 0.3$ and $z\sim 0.5$), then $\phi^*$ remains roughly constant up to $z\sim 0.9$ and finally slowly decreases up to $z=1.5$.
\\ Type 4 objects, on the contrary, show an increase in $\phi^*$ at low redshift, then $\phi^*$ is nearly constant up to $z\sim 0.8$ and shows a rapid increase by a factor $\sim 2$ at $z=1.1$. Then there seems to be a decrease from $z=1.1$ to $z=1.3$. However, this decrease is likely a spurious effect, due to the fact that in this bin the estimated $M^*$ is very close to the bias limit (see Sect. 4). If, for example, we fix $M^*$ to the value obtained in the previous redshift bin $[1.0-1.2]$, we derive a significantly higher value for $\phi^*$, i.e. $\phi^* = 5.83 \times 10^{-3} h^3 Mpc^{-3}$ (corresponding to $\phi^* / \phi^*_{ref} = 1.31$). Therefore the last $\phi^*$ value for type 4 galaxies is likely to be a lower limit of the true density value. \\ This analysis of the different trends with redshift of the $\phi^*$ parameter indicates that the importance of type 4 galaxies is increasing with redshift. However, since $M^*$ is changing with redshift (see above), the trends of $\phi^*$ cannot be immediately interpreted in terms of density at a given absolute magnitude. For this reason, we have computed the density of bright galaxies as a function of redshift. We integrated the best fit luminosity function down to $M_{B_{AB}}-5log(h) < -20$. This limit approximately corresponds to the faintest galaxies which are visible in the whole redshift range. In the lowest panel of Fig.\ref{para_evol} we plot the density of bright galaxies of each type as a function of redshift. \\ The main results shown in this plot can be summarized in the following way: \\ a. the density of bright early type galaxies (type 1) decreases with increasing redshift; this decrease is however rather modest, of the order of $\sim 40\%$ from $z\sim 0.3$ to $z\sim 1.1$; \\ b. the density of bright late type galaxies (type 4) is instead significantly increasing, by a factor $\sim 6.6$ from $z\sim 0.3$ to $z\sim 1.3$. \\ The behaviour of type 4 galaxies is also responsible for the evolution of the global luminosity function measured by Ilbert et al. (\cite{vvdsLF}). In fact, the increasing number of both faint and bright type 4 galaxies leads to the steepening of the global LF (due to the very steep slope of the type 4 LF) and to the brightening of $M^*$ (due to the increasing fraction of bright blue objects). This has been directly checked by summing the LFs of all types and comparing the result with the global LF estimate. \section{Comparison with previous literature results} Although various estimates of the luminosity function by galaxy type are available in the literature, a quantitative comparison of our results with previous analyses is not straightforward, because of the different classification schemes adopted in the various surveys, the different numbers of galaxy types, the different redshift ranges and selection criteria. \\ The strong density evolution of late type galaxies we find in the redshift range $[0.2-1.5]$ extends to significantly higher redshift the results found by de~Lapparent et al. (\cite{delapparent04}) for their latest type at $z\sim 0.5$. \\ Among the other surveys, CNOC-2 (Lin et al. \cite{lin99}) and COMBO-17 (Wolf et al. \cite{wolf03}) adopted a classification scheme somewhat similar to ours. Lin et al. (\cite{lin99}) divided their sample of galaxies with $z<0.55$ into three classes (early, intermediate and late type) using CWW templates. For early type galaxies they found a positive luminosity evolution, which is nearly compensated by a negative density evolution.
On the contrary, for late type galaxies they found a strong positive density evolution, with nearly no luminosity evolution. We see at higher redshift the same trend in density, but our evolution in $M^*$ is contained within one magnitude for each type. Wolf et al. (\cite{wolf03}) used a sample of $\sim 25000$ galaxies with photometric redshifts, applying a classification scheme with four classes similar to ours but using the Kinney et al. (\cite{kinney96}) templates instead of the CWW templates. \\ In Figure \ref{LFcombo} we compare our results (solid lines) with those from COMBO-17 (dashed lines). Note that, since our data always extend to fainter absolute magnitudes, in particular for type 1 galaxies, our estimates of the faint end slope are likely to be better determined: these estimates are derived in each redshift bin, while the COMBO-17 slopes are fixed to the value determined in the redshift range [0.2 - 0.4]. This figure shows that there are significant differences in both shapes and evolution of the LF estimates. The slope of the COMBO-17 LF is flatter than ours for type 1 galaxies, while it is steeper for types 2 and 3 (at least up to $z=1.0$). The most significant difference between the two surveys concerns the evolution of type 1 galaxies, for which we do not have any evidence of the very strong density decrease with increasing redshift present in COMBO-17 data. The reason for this difference is unclear: it could be due to the use of different templates in the definition of the galaxy types or to a degeneracy between photometric redshift and classification, which might affect the COMBO-17 data. Bell et al. (\cite{bell04}) explained the strong negative evolution of type 1 galaxies as a consequence of the blueing with increasing redshift of elliptical galaxies with respect to the template used for classification. In this way, at increasing redshift an increasing number of ``ellipticals'' would be assigned a later type, thus producing the density decrease observed in the COMBO-17 data. However, as already mentioned in sect. 3, we verified that this effect does not affect our classification scheme, at least up to $z\sim 1$ and for simple stellar populations with $z_{form}>2$. \\ As a test of this possible effect, we compared the LF obtained by adding together type 1 and 2 objects. In this way we can check whether the difference between the VVDS and COMBO-17 LFs of type 1 galaxies is due to the fact that a significant fraction of high redshift type 1 galaxies in COMBO-17 are classified as type 2 galaxies. This comparison is shown in Fig.\ref{LFcombo12}. The discrepancy between the VVDS and COMBO-17 LFs is now reduced, but there are still significant differences both in slope (the COMBO-17 LF being steeper than the VVDS one) and in normalization, especially in the highest redshift bin, where the VVDS LF is more than a factor 2 higher than the COMBO-17 LF. \section{Conclusions} In this paper we studied the evolution of the luminosity function of different galaxy types up to $z=1.5$, using 7713 spectra with $17.5 \le$ I$_{AB}\le 24$ from the first epoch VVDS deep sample. \\ The VVDS data allow us, for the first time, to study with excellent statistical accuracy the evolution of the luminosity functions by galaxy type from relatively low redshift up to $z=1.5$ from a single purely magnitude selected spectroscopic sample. The faint limiting magnitude of the VVDS sample allows us to measure the slope of the faint end of the luminosity function with unprecedented accuracy up to $z\sim 1.2$.
The use of spectroscopic redshifts implies a low ``catastrophic'' failure rate compared to photometric redshifts, and therefore rare populations, such as those in the bright end of the luminosity function, are sampled with better accuracy. Moreover, the use of spectroscopic redshifts allows us to classify galaxies while avoiding a possible degeneracy between photometric redshift and classification. \\ VVDS galaxies were classified into four spectral classes using their colours and redshift, from early type to irregular galaxies, and luminosity functions were derived for each type in redshift bins, from $z=0.05$ to $z=1.5$, in the U, B, V, R and I rest frame bands. \\ We find a strong steepening of the luminosity function going from early to late types: in all bands the power law slope steepens by $\Delta\alpha \sim 1.3-1.5$ going from type 1 to type 4. Moreover, the $M^*$ parameter of the Schechter function is significantly fainter for late type galaxies. As expected, this difference increases in the redder bands, reaching $\sim 1.4$ mag in the I band. \\ Studying the variations with redshift of the luminosity function for each type, we find that there is no evidence of a significant change of the slope, while we find a brightening of $M^*$ with increasing redshift, ranging from $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 0.5$ mag for early type galaxies to $\sim 1$ mag for the latest type galaxies. We also find a strong evolution in the normalization of the luminosity function of the latest type galaxies, with an increase by more than a factor $2$ in the $\phi^*$ parameter going from $z\sim 0.3$ to $z\sim 1.3$. The density of bright ($M_{B_{AB}}-5log(h) < -20$) galaxies shows a modest decrease ($\sim 40\%$) for early type objects from $z\sim 0.3$ to $z\sim 1.1$; on the contrary, the number of bright late type galaxies increases by a factor $\sim 6.6$ from $z\sim 0.3$ to $z\sim 1.3$. \\ Our results indicate that the importance of type 4 galaxies is increasing with redshift, with an important contribution of both bright and faint blue objects. This fact is also largely responsible for the evolution of the global luminosity function measured by Ilbert et al. (\cite{vvdsLF}), which shows a brightening of $M^*$ and a steepening of $\alpha$ with increasing redshift. Moreover, the increasing contribution of blue galaxies has been seen in the evolution of the GALEX-VVDS luminosity function at 1500~\AA\ (Arnouts et al. \cite{galexLF}). We therefore pinpoint the galaxies responsible for most of the evolution reported in the literature as belonging to the population of the latest spectral type. The transition between a Universe dominated by late type galaxies and one dominated by old massive objects occurs at a redshift of $z\sim 0.7-0.8$. \\ The fact that type 1 galaxies show only a mild evolution both in luminosity (positive) and in density (negative) is consistent with the fact that most of the objects in this class are old galaxies ($z_{form} > 2$, see sect. 3), experiencing only passive evolution in the explored redshift range. More intriguing is the density evolution of type 4 galaxies, which corresponds to an increasing number of bright star forming galaxies towards high redshift, which could be connected to various populations of high redshift objects seen in multiwavelength surveys.
\begin{figure} \centering \includegraphics[width=\hsize]{LF_type12_Bcombo.ps} \caption{ Comparison between VVDS and COMBO-17 luminosity functions, in various redshift bins (indicated in the label in each panel) for the sum of types 1 and 2. The meaning of the symbols is the same as in Fig.\ref{LFcombo}. } \label{LFcombo12} \end{figure} \begin{acknowledgements} This research has been developed within the framework of the VVDS consortium.\\ This work has been partially supported by the CNRS-INSU and its Programme National de Cosmologie (France), and by Italian Ministry (MIUR) grants COFIN2000 (MM02037133) and COFIN2003 (num.2003020150).\\ The VIMOS VLT observations have been carried out on guaranteed time (GTO) allocated by the European Southern Observatory (ESO) to the VIRMOS consortium, under a contractual agreement between the Centre National de la Recherche Scientifique of France, heading a consortium of French and Italian institutes, and ESO, to design, manufacture and test the VIMOS instrument. \end{acknowledgements}
\section{Introduction} This paper has a two-fold goal: to compute, and to study the asymptotic behavior of the generating function of rank $r$ motivic Donaldson--Thomas invariants of ${\mathbb{A}}^3$, namely the series \begin{equation*} \mathsf{DT}_r^{\points}({\mathbb{A}}^3,q) = \sum_{n\geq 0}\,\left[\Quot_{{\mathbb{A}}^3}(\mathscr O^{\oplus r},n) \right]_{\vir}\cdot q^n \,\in\,\mathcal M_{{\mathbb{C}}}\llbracket q \rrbracket. \end{equation*} Here $\mathcal M_{{\mathbb{C}}}$ is a suitable motivic ring and $[\,\cdot\,]_{\vir} \in \mathcal M_{{\mathbb{C}}}$ is the \emph{virtual motive} (cf.~\S\,\ref{subsec:motivic_quantum_torus}), induced by the critical locus structure on the Quot scheme $\Quot_{{\mathbb{A}}^3}(\mathscr O^{\oplus r},n)$ parametrising $0$-dimensional quotients of length $n$ of the free sheaf $\mathscr O^{\oplus r}$. The following is our first main result. \bthm\label{thm:main_motivic} There is an identity \begin{equation} \label{formu} \mathsf{DT}_r^{\points}({\mathbb{A}}^3,q) = \prod_{m\geq 1}\prod_{k=0}^{rm-1}\left(1-{\mathbb{L}}^{2+k-\frac{rm}{2}}q^{m}\right)^{-1}. \end{equation} Moreover, this series factors as $r$ copies of shifted rank $1$ contributions: there is an identity \begin{equation} \label{formula_product} \mathsf{DT}_r^{\points}({\mathbb{A}}^3,q)=\prod_{i=1}^r \mathsf{DT}_1^{\points}\left({\mathbb{A}}^3,q{\mathbb{L}}^{\frac{-r-1}{2}+i}\right). \end{equation} \ethm The result was first obtained in the case $r=1$ by Behrend, Bryan and Szendr\H{o}i \cite{BBS} via an explicit motivic vanishing cycle calculation. Formula \eqref{formula_product} follows by combining Formula \eqref{formu} and Lemma \ref{thm:quot_partition_function}. The approach of \S\,\ref{subsec:calculation_framed_3_loop_quiver}, where we prove Formula \eqref{formu}, is based on the techniques of \emph{motivic wall-crossing} for framed objects developed by Mozgovoy \cite{Mozgovoy_Framed_WC}, allowing us to express the invariants for $\Quot_{{\mathbb{A}}^3}(\mathscr O^{\oplus r},n)$, which we view as `$r$-framed' Donaldson--Thomas invariants, in terms of the universal series of the invariants of \emph{unframed} representations of the $3$-loop quiver in a critical chamber. These ideas can be employed to compute framed motivic Donaldson--Thomas invariants of small crepant resolutions of affine toric Calabi–Yau 3-folds \cite{Cazza_Ric}, which also exhibit similar factorisation properties. The fact that partition functions of rank $r$ invariants factor as $r$ copies of partition functions of rank $1$ invariants, shifted just as in Formula \eqref{formula_product}, has also been observed in the context of K-theoretic Donaldson--Thomas theory of ${\mathbb{A}}^3$ \cite{FMR_K-DT}, as well as in string theory \cite{Magnificent_colors}. The exponential form of Formula \eqref{formu} has been exploited in \cite{Quot19} to define higher rank motivic Donaldson--Thomas invariants for an arbitrary smooth quasi-projective $3$-fold. \smallbreak Formula \eqref{formu} allows us to interpret the refined Donaldson--Thomas invariants of $\Quot_{{\mathbb{A}}^3}(\mathscr O^{\oplus r},n)$ in terms of a weighted count of $r$-tuples of plane partitions $\overline{\pi}=(\pi_{1}, \ldots, \pi_{r})$ of total size $n$ (also known in the physics literature as $r$-\emph{colored plane partitions}). 
Setting $T={\mathbb{L}}^{1/2}$, the coefficient of $q^{n}$ in $\mathsf{DT}_r^{\points}({\mathbb{A}}^3,q)$ can be written as \begin{equation}\label{eq:coefM} M_{n,r}(T)=\sum_{\overline{\pi}} T^{\,S_{n,r}(\overline{\pi})}, \end{equation} where $S_{n,r}$ is a certain explicit random variable on the space of $r$-tuples of plane partitions. In \S\,\ref{sec:Asymptotics}, we describe the asymptotic behavior of (a renormalisation of) the refined DT generating series, generalising A.~Morrison's result for $r=1$ \cite{Morrison_asymptotics}. We discuss the relationship with Morrison's work in \S\,\ref{sec:random_variables_on_colored_partitions}. The following is our second main result. It will be proved in \S\,\ref{sub:proofB}. \bthm\label{main1} As $n\to \infty$, the normalised random variable $n^{-2/3}S_{n,r}$ converges in distribution to $\mathcal{N}(\mu, \sigma^2)$ with \[ \mu = \frac{r^{1/3}\pi^2}{2^{5/3}(\zeta(3))^{2/3}}\, \text{ and }\, \sigma^2=\frac{r^{5/3}}{(2\zeta(3))^{1/3}}, \] where $\zeta(s)$ is Riemann's zeta function. \ethm \subsection*{Acknowledgments} The very existence of this work owes a debt to Bal\'{a}zs Szendr\H{o}i, who brought the authors together. We thank him for his support and for generously sharing his ideas throughout the years. A.C.~thanks CNR-IOM for support and the excellent working conditions, and the African Institute for Mathematical Sciences (AIMS), South Africa, for support during the first part of this collaboration. D.R.~is supported by Division for Research Development (DRD) of Stellenbosch University and the National Research Foundation (NRF) of South Africa. A.R.~is supported by Dipartimenti di Eccellenza and thanks SISSA for the excellent working conditions. \section{Background material} \label{sec:background_material} \subsection{Rings of motives and the motivic quantum torus} \label{subsec:motivic_quantum_torus} Let $K_0(\St_{{\mathbb{C}}})$ be the Grothendieck ring of stacks. It can be defined as the localisation of the ordinary Grothendieck ring of varieties $K_0(\Var_{{\mathbb{C}}})$ at the classes $[\GL_k]$ of general linear groups \cite{Bri-Hall}. The invariants we want to study will live in the extended ring \[ {\mathcal M}_{{\mathbb{C}}} = K_0(\St_{{\mathbb{C}}})\bigl[{\mathbb{L}}^{-\frac{1}{2}}\bigr], \] where ${\mathbb{L}} = [{\mathbb{A}}^1] \in K_0(\Var_{{\mathbb{C}}}) \to K_0(\St_{{\mathbb{C}}})$ is the Lefschetz motive. \subsubsection{The virtual motive of a critical locus}\label{subsec:virtual_motive} Let $U$ be a smooth $d$-dimensional ${\mathbb{C}}$-scheme, $f\colon U \to {\mathbb{A}}^1$ a regular function. The \emph{virtual motive} of the critical locus $\operatorname{crit} f = Z( \mathrm{d} f) \subset U$, depending on the pair $(U,f)$, is defined in \cite{BBS} as the motivic class \[ \bigl[\operatorname{crit} f\bigr]_{\vir} = -{\mathbb{L}}^{-\frac{d}{2}}\cdot \left[\phi_f\right] \,\in \,{\mathcal M}_{{\mathbb{C}}}^{\hat\mu}, \] where $[\phi_f] \in K_0^{\hat\mu}(\Var_{{\mathbb{C}}})$ is the (absolute) motivic vanishing cycle class defined by Denef and Loeser \cite{DenefLoeser1}. The `$\hat\mu$' decoration means that we are considering $\hat\mu$-equivariant motives, where $\hat\mu$ is the group of all roots of unity. However, the motivic invariants studied here will live in the subring ${\mathcal M}_{{\mathbb{C}}}\subset {\mathcal M}_{{\mathbb{C}}}^{\hat\mu}$ of classes carrying the trivial action. \begin{example}\label{example:vir_motive_smooth_scheme} Set $f = 0$. 
Then $\operatorname{crit} f = U$ and $[U]_{\vir} = {\mathbb{L}}^{-(\dim U)/2}\cdot [U]$. For instance, $[\GL_k]_{\vir} = {\mathbb{L}}^{-k^2/2}\cdot [\GL_k]$. \end{example} \begin{remark} We use lambda-ring conventions on $\mathcal M_{{\mathbb{C}}}$ from \cite{BBS,DavisonR}. In particular we use the definition of $[\operatorname{crit} f]_{\vir}$ from \cite[\S\,2.8]{BBS}, which differs (slightly) from the one in \cite{RefConifold}. The difference amounts to the substitution ${\mathbb{L}}^{1/2}\to -{\mathbb{L}}^{1/2}$. The Euler number specialisation with our conventions is ${\mathbb{L}}^{1/2}\to -1$. \end{remark} \subsection{Quivers and motivic quantum torus} A quiver $Q$ is a finite directed graph, determined by its sets $Q_0$ and $Q_1$ of vertices and edges, respectively, along with the maps $h$, $t\colon Q_1 \to Q_0$ specifying where an edge starts or ends. We use the notation \[ \begin{tikzcd} t(a) \,\,\bullet \arrow{rr}{a} & & \bullet\,\, h(a) \end{tikzcd} \] to denote the \emph{tail} and the \emph{head} of an edge $a \in Q_1$. All quivers in this paper will be assumed connected. The \emph{path algebra} ${\mathbb{C}} Q$ of a quiver $Q$ is defined, as a ${\mathbb{C}}$-vector space, by using as a ${\mathbb{C}}$-basis the set of all paths in the quiver, including a trivial path $\epsilon_i$ for each $i \in Q_0$. The product is defined by concatenation of paths whenever the operation is possible, and is set to be $0$ otherwise. The identity element is $\sum_{i\in Q_0}\epsilon_i \in {\mathbb{C}} Q$. On a quiver $Q$ one can define the \emph{Euler--Ringel form} $\chi_Q(-,-)\colon {\mathbb{Z}}^{Q_0}\times {\mathbb{Z}}^{Q_0} \to {\mathbb{Z}}$ by \[ \chi_Q(\alpha,\beta) = \sum_{i \in Q_0}\alpha_i\beta_i - \sum_{a \in Q_1}\alpha_{t(a)}\beta_{h(a)}, \] as well as the skew-symmetric form \[ \braket{\alpha,\beta}_Q = \chi_Q(\alpha,\beta)-\chi_Q(\beta,\alpha). \] \begin{definition}[$r$-framing]\label{def:r-framing} Let $Q$ be a quiver with a distinguished vertex $0\in Q_0$, and let $r$ be a positive integer. We define the quiver $\widetilde Q$ by adding one vertex, labelled $\infty$, to the original vertices in $Q_0$, and $r$ edges $\infty\to 0$. We refer to $\widetilde Q$ as the $r$-\emph{framed} quiver obtained out of $(Q,0)$. \end{definition} Let $Q$ be a quiver. Define its \emph{motivic quantum torus} (or \emph{twisted motivic algebra}) as \[ \mathcal T_Q = \prod_{\alpha \in {\mathbb{N}}^{Q_0}} \mathcal{M}_{{\mathbb{C}}}\cdot y^\alpha \] with product \begin{equation}\label{eqn:product_in_TQ} y^\alpha\cdot y^\beta = {\mathbb{L}}^{\frac{1}{2}\braket{\alpha,\beta}_Q}y^{\alpha+\beta}. \end{equation} If $\widetilde{Q}$ is the $r$-framed quiver associated to $(Q,0)$, one has a decomposition \[ \mathcal T_{\widetilde{Q}} = \mathcal T_Q \oplus \prod_{d\geq 0} {\mathcal M}_{{\mathbb{C}}}\cdot y_\infty^d, \] where we have set $y_\infty = y^{(1,\mathbf 0)}$. A generator $y^\alpha \in \mathcal T_Q$ will be identified with its image $y^{(0,\alpha)} \in \mathcal T_{\widetilde{Q}}$. \subsection{Quiver representations and their stability} Let $Q$ be a quiver. A \emph{representation} $\rho$ of $Q$ is the datum of a finite dimensional ${\mathbb{C}}$-vector space $\rho_i$ for every vertex $i\in Q_0$, and a linear map $\rho(a)\colon \rho_i\to \rho_j$ for every edge $a\colon i\to j$ in $Q_1$. The \emph{dimension vector} of $\rho$ is $ \underline{\dim}\,\rho = (\dim_{{\mathbb{C}}} \rho_i)_i\in \mathbb N^{Q_0}$, where ${\mathbb{N}}={\mathbb{Z}}_{\geq 0}$. 
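\begin{example}\label{example:framed_torus} To make the twisted product \eqref{eqn:product_in_TQ} concrete in the case relevant for this paper, let $\widetilde{Q}$ be the $r$-framing (Definition \ref{def:r-framing}) of the $3$-loop quiver $L_3$, the quiver with one vertex $0$ and three loops (cf.~Example \ref{example:cut_3_loop_quiver}). Writing dimension vectors of $\widetilde{Q}$ as pairs $(d,\alpha)$, with $d$ the dimension at the framing vertex $\infty$ (this is Convention \ref{order_of_dimensions} below), a direct computation from the definitions gives \[ \chi_{\widetilde{Q}}\bigl((d,\alpha),(e,\beta)\bigr) = de - 2\alpha\beta - rd\beta, \qquad \braket{(d,\alpha),(e,\beta)}_{\widetilde{Q}} = r(e\alpha - d\beta), \] so that in $\mathcal T_{\widetilde{Q}}$ one has \[ y_\infty \cdot y^{\alpha} = {\mathbb{L}}^{-\frac{r\alpha}{2}}\,y^{(1,\alpha)}. \] The half-integer powers of ${\mathbb{L}}$ produced by this twist can be compared with the shift ${\mathbb{L}}^{-\frac{rm}{2}}$ appearing in Formula \eqref{formu}. \end{example}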
\begin{convention}\label{order_of_dimensions} Let $Q$ be a quiver, $\widetilde Q$ its $r$-framing. The dimension vector of a representation $\widetilde \rho$ of $\widetilde Q$ will be denoted by $(d,\alpha)$, where $d = \dim_{{\mathbb{C}}}\widetilde{\rho}_\infty \in {\mathbb{N}}$ and $\alpha \in {\mathbb{N}}^{Q_0}$. \end{convention} The space of all representations of $Q$ with a fixed dimension vector $\alpha\in \mathbb N^{Q_0}$ is the affine space \[ \Rep(Q,\alpha) = \prod_{a \in Q_1}\Hom_{{\mathbb{C}}}({\mathbb{C}}^{\alpha_{t(a)}},{\mathbb{C}}^{\alpha_{h(a)}}). \] The gauge group $\GL_\alpha = \prod_{i\in Q_0} \GL_{\alpha_i}$ acts on $\Rep(Q,\alpha)$ by $(g_i)_i \cdot (\rho(a))_{a\in Q_1} = (g_{h(a)}\circ\rho(a)\circ g_{t(a)}^{-1})_{a \in Q_1}$. Following \cite{RefConifold}, we recall the notion of (semi)stability of a representation. \begin{definition}\label{centralcharge} A \emph{central charge} is a group homomorphism $\mathrm{Z}\colon \mathbb Z^{Q_0}\to {\mathbb{C}}$ such that the image of $\mathbb N^{Q_0}\setminus 0$ lies inside $\mathbb H_+ = \set{re^{\sqrt{-1}\pi\varphi}|r>0,\,0<\varphi\leq 1}$. For every $\alpha\in \mathbb N^{Q_0}\setminus 0$, we denote by $\varphi(\alpha)$ the real number $\varphi$ such that $\mathrm{Z}(\alpha) = re^{\sqrt{-1}\pi\varphi}$. It is called the \emph{phase} of $\alpha$ with respect to $\mathrm{Z}$. \end{definition} Note that every vector $\zeta\in {\mathbb{R}}^{Q_0}$ induces a central charge $\mathrm{Z}_{\zeta}$ if we set $\mathrm{Z}_{\zeta}(\alpha) = -\zeta\cdot \alpha + \lvert\alpha\rvert\sqrt{-1}$, where $\lvert\alpha\rvert = \sum_{i \in Q_0}\alpha_i$. We denote by $\varphi_\zeta$ the induced phase function, and we set $ \varphi_\zeta(\rho) = \varphi_\zeta(\underline{\dim}\,\rho)$ for every representation $\rho$ of $Q$. \begin{definition}\label{stablereps} Fix $\zeta\in {\mathbb{R}}^{Q_0}$. A representation $\rho$ of $Q$ is called \emph{$\zeta$-semistable} if \[ \varphi_\zeta(\rho')\leq \varphi_\zeta(\rho) \] for every nonzero proper subrepresentation $0\neq \rho'\subsetneq \rho$. If `$\leq$' can be replaced by `$<$', we say that $\rho$ is \emph{$\zeta$-stable}. Vectors $\zeta\in {\mathbb{R}}^{Q_0}$ are referred to as \emph{stability parameters}. \end{definition} \begin{definition} Let $\alpha \in {\mathbb{N}}^{Q_0}$ be a dimension vector. A stability parameter $\zeta$ is called $\alpha$-\emph{generic} if for any $0<\beta<\alpha$ one has $\varphi_\zeta(\beta) \neq \varphi_\zeta(\alpha)$. This implies that every $\zeta$-semistable representation of $Q$, of dimension $\alpha$, is $\zeta$-stable. \end{definition} The sets of $\zeta$-stable and $\zeta$-semistable representations with given dimension vector $\alpha$ form a chain of open subsets \[ \Rep^{\zeta\textrm{-st}}(Q,\alpha)\subset \Rep^{\zeta\textrm{-ss}}(Q,\alpha)\subset \Rep(Q,\alpha). \] \subsection{Quivers with potential} \label{subsec:quiver_with_potential} Let $Q$ be a quiver. Consider the quotient ${\mathbb{C}} Q / [{\mathbb{C}} Q,{\mathbb{C}} Q]$ of the path algebra by the commutator ideal. A finite linear combination of cyclic paths $W \in {\mathbb{C}} Q / [{\mathbb{C}} Q,{\mathbb{C}} Q]$ is called a \emph{superpotential}. Given a cyclic path $w$ and an arrow $a \in Q_1$, one defines the noncommutative derivative \[ \frac{\partial w}{\partial a} = \sum_{\substack{w=cac' \\ c,c'\textrm{ paths in }Q}}c'c\,\in\,{\mathbb{C}} Q. \] This rule extends to an operator $\partial/\partial a$ acting on every superpotential. 
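For instance, for the $3$-loop quiver $L_3$ with loops $A_1,A_2,A_3$ and the superpotential $W = A_3A_1A_2 - A_3A_2A_1 = A_3[A_1,A_2]$ of Example \ref{example:cut_3_loop_quiver} below, the rule above yields \[ \frac{\partial W}{\partial A_1} = [A_2,A_3],\qquad \frac{\partial W}{\partial A_2} = [A_3,A_1],\qquad \frac{\partial W}{\partial A_3} = [A_1,A_2], \] so the two-sided ideal appearing in the definition of the Jacobi algebra below is generated by the three commutators; this is the computation behind the identification $J_{L_3,W}={\mathbb{C}}[x,y,z]$ recalled in Example \ref{example:cut_3_loop_quiver}.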
The \emph{Jacobi algebra} $J=J_{Q,W}$ of $(Q,W)$ is the quotient of ${\mathbb{C}} Q$ by the two-sided ideal generated by $\partial W/\partial a$ for all edges $a \in Q_1$. For every $\alpha \in {\mathbb{N}}^{Q_0}$, the superpotential $W = \sum_c a_c c$ determines a regular function \[ f_\alpha \colon \Rep(Q,\alpha) \to {\mathbb{A}}^1,\quad \rho \mapsto \sum_{c\textrm{ cycle in }Q}a_c \Tr (\rho(c)). \] The points in the critical locus $\operatorname{crit} f_\alpha \subset \Rep(Q,\alpha)$ correspond to $\alpha$-dimensional $J$-\emph{modules}. Fix an $\alpha$-generic stability parameter $\zeta \in {\mathbb{R}}^{Q_0}$. If $f_{\zeta,\alpha} \colon \Rep^{\zeta\textrm{-st}}(Q,\alpha)\to {\mathbb{A}}^1$ is the restriction of $f_\alpha$, then \[ \mathfrak M(J,\alpha) = [\operatorname{crit} f_\alpha/\GL_\alpha],\quad \mathfrak M_\zeta(J,\alpha) = [\operatorname{crit} f_{\zeta,\alpha} / \GL_\alpha] \] are, by definition, the stacks of $\alpha$-dimensional $J$-modules and $\zeta$-stable $J$-modules. \begin{definition}[\cite{RefConifold}] We define motivic Donaldson--Thomas invariants \begin{equation} \label{eqn:motivicDT} \bigl[\mathfrak M(J,\alpha)\bigr]_{\vir} \,=\, \frac{\left[\operatorname{crit} f_\alpha\right]_{\vir}}{\left[\GL_\alpha\right]_{\vir}},\qquad \left[\mathfrak M_\zeta(J,\alpha)\right]_{\vir} \,=\, \frac{\left[\operatorname{crit} f_{\zeta,\alpha}\right]_{\vir}}{\left[\GL_\alpha\right]_{\vir}} \end{equation} in ${\mathcal M}_{{\mathbb{C}}}$, where $[\GL_\alpha]_{\vir}$ is taken as in Example \ref{example:vir_motive_smooth_scheme}. The generating function \begin{equation}\label{definition_AU} A_U = \sum_{\alpha \in {\mathbb{N}}^{Q_0}}\,\bigl[\mathfrak M(J,\alpha)\bigr]_{\vir}\cdot y^\alpha \,\in\,{\mathcal T}_{Q} \end{equation} is called the \emph{universal series} of $(Q,W)$. \end{definition} \begin{definition} A stability parameter $\zeta \in {\mathbb{R}}^{Q_0}$ is called \emph{generic} if $\zeta\cdot \underline{\dim}\,\rho \neq 0$ for every nontrivial $\zeta$-stable $J$-module $\rho$. \end{definition} \subsection{Framed motivic DT invariants} \label{subsec:motivicDT_quiver_potential} Let $Q$ be a quiver, $r\geq 1$ be an integer, and consider its $r$-framing $\widetilde Q$ with respect to a vertex $0 \in Q_0$ (Definition \ref{def:r-framing}). A representation $\widetilde \rho$ of $\widetilde Q$ can be uniquely written as a pair $(u,\rho)$, where $\rho$ is a representation of $Q$ and $u = (u_1,\dots,u_r)$ is an $r$-tuple of linear maps $u_i\colon \widetilde{\rho}_\infty\to \rho_0$. From now on, we assume our framed representations to satisfy $\dim_{{\mathbb{C}}} \widetilde{\rho}_\infty = 1$, so that according to Convention \ref{order_of_dimensions} we can write $\underline{\dim}\,\widetilde \rho=(1,\underline{\dim}\,\rho)$. We also view $\rho$ as a subrepresentation of $\widetilde{\rho}$ of dimension $(0,\underline{\dim}\,\rho)$, based at the vertex $0 \in Q_0$. \begin{definition}\label{def:framedstability} Fix $\zeta\in \mathbb R^{Q_0}$. A representation $(u,\rho)$ of $\widetilde Q$ (resp.~a $\widetilde J$-module) with $\dim_{{\mathbb{C}}} \widetilde{\rho}_\infty = 1$ is said to be \emph{$\zeta$-(semi)stable} if it is $(\zeta_\infty,\zeta)$-(semi)stable in the sense of Definition \ref{stablereps}, where $\zeta_\infty = -\zeta\cdot \underline{\dim}\,\rho$. \end{definition} We now define motivic DT invariants for moduli stacks of $r$-framed representations of a given quiver $Q$. Fix a superpotential $W$ on $Q$.
Let $\widetilde{Q}$ be the $r$-framing of $Q$ at a given vertex $0 \in Q_0$, and let $\widetilde J$ be the Jacobi algebra $J_{\widetilde{Q},W}$, where $W$ is viewed as a superpotential on $\widetilde Q$ in the obvious way. For a generic stability parameter $\zeta \in {\mathbb{R}}^{Q_0}$, and an arbitrary dimension vector $\alpha \in {\mathbb{N}}^{Q_0}$, set \[ \zeta_\infty = -\zeta\cdot \alpha,\quad \widetilde \zeta = (\zeta_\infty,\zeta),\quad \widetilde \alpha = (1,\alpha). \] As in \S\,\ref{subsec:quiver_with_potential}, consider the trace map $f_{\widetilde{\alpha}}\colon\Rep(\widetilde{Q},\widetilde{\alpha})\to {\mathbb{A}}^1$, induced by $W$, and its restriction to the framed-stable locus $f_{\widetilde{\zeta},\widetilde{\alpha}}\colon\Rep^{\widetilde{\zeta}\textrm{-st}}(\widetilde{Q},\widetilde{\alpha}) \to {\mathbb{A}}^1$. Define the moduli stacks \[ \mathfrak M(\widetilde J,\alpha) = \left[\operatorname{crit} f_{\widetilde{\alpha}} \,\big/ \GL_\alpha\right], \quad \mathfrak M_\zeta (\widetilde J,\alpha) = \left[\operatorname{crit} f_{\widetilde{\zeta},\widetilde{\alpha}} \,\big/ \GL_\alpha\right]. \] Note that we are not quotienting by $\GL_{\widetilde{\alpha}} = \GL_\alpha \times {\mathbb{C}}^\times$, but only by $\GL_\alpha$. \begin{definition}\label{def:motivic_partition_functions} We define $r$-framed motivic Donaldson--Thomas invariants \[ \left[\mathfrak M(\widetilde J,\alpha) \right]_{\vir} \,=\,\frac{\left[\operatorname{crit} f_{\widetilde\alpha}\right]_{\vir}}{\left[\GL_{\alpha}\right]_{\vir}},\qquad \left[\mathfrak M_\zeta(\widetilde J,\alpha) \right]_{\vir} \,=\,\frac{\bigl[\operatorname{crit} f_{\widetilde \zeta,\widetilde\alpha}\bigr]_{\vir}}{\left[\GL_{\alpha}\right]_{\vir}} \] and the associated motivic generating functions \begin{align*} \widetilde A_U &= \sum_{\alpha \in {\mathbb{N}}^{Q_0}} \,\left[\mathfrak M(\widetilde J,\alpha) \right]_{\vir}\cdot y^{\widetilde\alpha} \in \mathcal T_{\widetilde Q} \\ \mathsf Z_\zeta &= \sum_{\alpha \in {\mathbb{N}}^{Q_0}} \,\left[\mathfrak M_\zeta(\widetilde J,\alpha) \right]_{\vir}\cdot y^{\widetilde\alpha} \in \mathcal T_{\widetilde Q}. \end{align*} \end{definition} \subsection{Dimensional reduction} We say that a quiver with potential $(Q,W)$ admits a \emph{cut} if there is a subset $I \subset Q_1$ such that every cyclic monomial appearing in $W$ contains exactly one edge in $I$. If $I$ is a cut for $(Q,W)$, one can define a new quiver $Q_I = (Q_0,Q_1\setminus I)$. Let $J_{W,I}$ be the quotient of ${\mathbb{C}} Q_I$ by the two-sided ideal generated by the noncommutative derivatives $\partial W/\partial a$ for $a \in I$. Let $\Rep(J_{W,I},\alpha) \subset \Rep(Q_I,\alpha)$ be the space of $J_{W,I}$-modules of dimension vector $\alpha$. Then one has the following dimensional reduction principle. \begin{prop}[{\cite[Prop.~1.15]{RefConifold}}] \label{prop:cut} Suppose $I$ is a cut for $(Q,W)$. Set $\mathrm{d}_I(\alpha) = \sum_{a \in I}\alpha_{t(a)}\alpha_{h(a)}$. Then \[ A_U = \sum_{\alpha \in {\mathbb{N}}^{Q_0}}{\mathbb{L}}^{\frac{1}{2}\chi_Q(\alpha,\alpha) + \mathrm{d}_I(\alpha)}\frac{[\Rep(J_{W,I},\alpha)]}{[\GL_\alpha]} \cdot y^\alpha. \] \end{prop} \begin{example}\label{example:cut_3_loop_quiver} Let $Q=L_3$ be the $3$-loop quiver (see Figure \ref{fig:3loopquiver_framed}, and remove the framing vertex to obtain a picture of this quiver) with the potential $W=A_3[A_1,A_2]$. Notice that $J=J_{L_3,W}={\mathbb{C}}[x,y,z]$, and $I = \set{A_3}$ is a cut for $(L_3,W)$. 
The quiver $Q_I$ is the $2$-loop quiver and $J_{W,I} = {\mathbb{C}}[x,y]$. We have $\mathrm{d}_I(n)=n^2$ and $\chi_Q(n,n)=-2n^2$. Therefore Proposition \ref{prop:cut} yields an identity \begin{equation}\label{eqn:cut_yields_FF} \sum_{n\geq 0}\,\bigl[\mathfrak M(J,n) \bigr]_{\vir}\cdot y^n = \sum_{n\geq 0}\frac{[C_n]}{[\GL_n]}\cdot y^n = \prod_{m \geq 1}\prod_{k \geq 1}\left(1-{\mathbb{L}}^{2-k}y^m\right)^{-1}, \end{equation} where $\Rep(J_{W,I},n)$ is identified with the \emph{commuting variety} \[ C_n = \Set{(A_1,A_2) \in \End_{{\mathbb{C}}}({\mathbb{C}}^n)^{\oplus 2} | [A_1,A_2] = 0} \subset \End_{{\mathbb{C}}}({\mathbb{C}}^n)^{\oplus 2}, \] and the second identity in \eqref{eqn:cut_yields_FF} is the Feit--Fine formula \cite{FF1,BBS,BM15}. \end{example} \begin{remark} The universal series $A_{U}$ has been computed for several homogeneous deformations of the potential $W$ of Example \ref{example:cut_3_loop_quiver} in \cite{DEF-MOT-DT}. \end{remark} \section{Motivic DT invariants of the Quot scheme of points} \label{sec:quot_scheme_of_points_via_WC} \subsection{Stability on the framed 3-loop quiver} \label{subsec:framed_3_loop_and_its_stab} The main character in this section is the framed quiver $\widetilde{L}_3$ of Figure \ref{fig:3loopquiver_framed}, which we equip with the superpotential $W=A_3[A_1,A_2]$. \begin{figure}[ht] \begin{tikzpicture}[>=stealth,->,shorten >=2pt,looseness=.5,auto] \matrix [matrix of math nodes, column sep={3cm,between origins}, row sep={3cm,between origins}, nodes={circle, draw, minimum size=7.5mm}] { |(A)| \infty & |(B)| 0 \\ }; \tikzstyle{every node}=[font=\small\itshape] \path[->] (B) edge [loop above] node {$A_1$} () edge [loop right] node {$A_2$} () edge [loop below] node {$A_3$} (); \node [anchor=west,right] at (-0.15,0.11) {$\vdots$}; \draw (A) to [bend left=25,looseness=1] (B) node [midway,above] {}; \draw (A) to [bend left=40,looseness=1] (B) node [midway] {}; \draw (A) to [bend right=35,looseness=1] (B) node [midway,below] {}; \end{tikzpicture} \caption{The $r$-framed $3$-loop quiver $\widetilde{L}_3$.}\label{fig:3loopquiver_framed} \end{figure} \begin{definition}\label{def:span} Let $\widetilde{\rho}=(u,\rho)$ be a representation of $\widetilde{L}_3$ of dimension $(1,n)$. We denote by $\braket{u,\rho} \subset \widetilde{\rho}$ the smallest subrepresentation of $\widetilde{\rho}$ containing $u(\rho_{\infty})$. More precisely, if $\rho = (A_1,A_2,A_3) \in \Rep(L_3,n)$, then $\braket{u,\rho}$ is the subrepresentation of $\widetilde\rho$ with $\braket{u,\rho}_\infty = \widetilde{\rho}_\infty = {\mathbb{C}}$ and \[ \braket{u,\rho}_0 = \textrm{span}_{{\mathbb{C}}}\Set{A_1^{a_1}A_2^{a_2}A_3^{a_3}\cdot u_\ell(1)|a_i\geq 0,1\leq \ell\leq r} \subset \widetilde{\rho}_0, \] and with linear maps induced naturally by those defined by $\widetilde{\rho}$. \end{definition} From now on we identify the space of stability parameters for $L_3$ with ${\mathbb{R}}$. \begin{lemma}\label{lemma:stability} Let $\zeta\in {\mathbb{R}}$ be a stability parameter, and let $\widetilde{\rho} = (u,\rho)$ be a representation of $ \ \widetilde{L}_3$ of dimension $(1,n)$. Set $\widetilde \zeta = (-n\zeta,\zeta)$. 
Then: \begin{enumerate} \item if $\zeta<0$, $\widetilde{\rho}$ is $\zeta$-semistable if and only if it is $\zeta$-stable if and only if $\widetilde{\rho}=\braket{u,\rho}$;\label{item11} \item if $\zeta=0$, $\widetilde{\rho}$ is $\zeta$-semistable;\label{item22} \item if $\zeta>0$, $\widetilde{\rho}$ is $\zeta$-semistable if and only if it is $\zeta$-stable if and only if $n=0$.\label{item33} \end{enumerate} \end{lemma} \begin{proof} For the case $\zeta<0$ we refer to \cite[Prop. 2.4]{BR18}. Consider the case $\zeta>0$. If we had $n = \dim_{{\mathbb{C}}}\widetilde{\rho}_0 > 0$, then $\rho \subset \widetilde{\rho}$ would be destabilising: indeed, $\mathrm{Z}_{\widetilde{\zeta}}(0,n) = -n\zeta + n\sqrt{-1}$ has negative real part, so that $\varphi_{\widetilde{\zeta}}(\rho) > 1/2 = \varphi_{\widetilde{\zeta}}(\widetilde{\rho})$, since $\mathrm{Z}_{\widetilde{\zeta}}(1,n)=(n+1)\sqrt{-1}$ has vanishing real part. On the other hand, if $n = 0$ then $\widetilde{\rho}$ is simple and hence $\zeta$-stable. In the case $\zeta=0$ there is nothing to prove, as all representations have phase $1/2$. \end{proof} Consider the following regions of the space of stability parameters ${\mathbb{R}}$: \begin{itemize} \item [$\circ$] $\Omega_{+} = \set{\zeta\in {\mathbb{R}} | \zeta < 0 }$, \item [$\circ$] $\Omega_{0} = \set{\zeta\in {\mathbb{R}} | \zeta=0 }$, \item [$\circ$] $\Omega_{-} = \set{\zeta\in {\mathbb{R}} | \zeta > 0}$. \end{itemize} By Lemma \ref{lemma:stability} the space of stability parameters on $\widetilde{{L}}_3$ admits a particularly simple wall-and-chamber decomposition ${\mathbb{R}} = \Omega_{+} \amalg \Omega_{0} \amalg \Omega_{-}$ with one wall (the origin) and two chambers. \subsection{The virtual motive of the Quot scheme of points} Let $\widetilde{L}_3$ be the $r$-framed $3$-loop quiver (Figure \ref{fig:3loopquiver_framed}), and fix the superpotential $W=A_3[A_1,A_2]$. Fix a stability parameter $\zeta^+ \in \Omega_+ = {\mathbb{R}}_{<0}$. Fixing $n\geq 0$ and setting $\widetilde{\zeta}^+ = (-n\zeta^+,\zeta^+)$, $\widetilde n = (1,n)$, the quotient stack \[ \mathfrak M_{\zeta^+}(\widetilde{L}_3,n) = \bigl[\Rep^{\widetilde{\zeta}^+\textrm{-st}}(\widetilde{L}_3,\widetilde n)\, \big/ \GL_n\bigr] \] is a smooth quasi-projective \emph{variety} of dimension $2n^2+rn$, called the \emph{noncommutative Quot scheme} in \cite{BR18}. The regular function $f_n\colon \Rep(\widetilde{L}_3,\widetilde n) \to {\mathbb{A}}^1$ given by taking the trace of $W$ descends to a regular function on $\mathfrak M_{\zeta^+}(\widetilde{L}_3,n)$, still denoted $f_n$. We have the following description of the Quot scheme of length $n$ quotients of $\mathscr O_{{\mathbb{A}}^3}^{\oplus r}$. \begin{prop}[{\cite[Thm.~2.6]{BR18}}]\label{prop:SPOT_A^3} There is an identity of closed subschemes \[ \Quot_{{\mathbb{A}}^3}(\mathscr O^{\oplus r},n) = \operatorname{crit}(f_n) \subset \mathfrak M_{\zeta^+}(\widetilde{L}_3,n). \] \end{prop} Thanks to Proposition \ref{prop:SPOT_A^3}, we can form the virtual motives of the Quot scheme, as in \S\,\ref{subsec:virtual_motive}, and define their generating function \begin{equation}\label{eqn:motivic_DT_rank_r} \mathsf{DT}_r^{\points}({\mathbb{A}}^3,q) = \sum_{n\geq 0}\,\left[\Quot_{{\mathbb{A}}^3}(\mathscr O^{\oplus r},n) \right]_{\vir}\cdot q^n\,\in\,{\mathcal M}_{{\mathbb{C}}}\llbracket q \rrbracket. \end{equation} \begin{remark} The main result of \cite{BBS} is the formula \[ \mathsf{DT}_1^{\points}({\mathbb{A}}^3,q) = \prod_{m\geq 1}\prod_{k=0}^{m-1}\left(1-{\mathbb{L}}^{2+k-\frac{m}{2}}q^m\right)^{-1}.
\] The series $\mathsf{DT}_1^{\points}(Y,q)$ studied in \cite{BBS} for an arbitrary smooth $3$-fold $Y$ also appeared in \cite{DavisonR} as the wall-crossing factor in the motivic DT/PT correspondence based at a fixed smooth curve $C \subset Y$. This correspondence refined its enumerative counterpart \cite{LocalDT,Ricolfi2018}. The same phenomenon occurred in \cite{RefConifold,MorrNagao} in the context of framed motivic DT invariants. See \cite[\S\,4]{Quot19} for a generalisation $\mathsf{DT}_r^{\points}(Y,q)$ of \eqref{eqn:motivic_DT_rank_r} to an arbitrary smooth $3$-fold $Y$. See \cite{ricolfi2019motive} for a plethystic formula expressing the \emph{naive} motives $[\Quot_Y(F,n)] \in K_0(\Var_{{\mathbb{C}}})$ in terms of the motives of the \emph{punctual} Quot schemes. \end{remark} The following consideration will constitute the final step in proving Theorem \ref{thm:main_motivic}. \begin{lemma}\label{thm:quot_partition_function} There is an identity \begin{equation}\label{eqn:Partition_Function_QUOT} \prod_{m\geq 1}\prod_{k=0}^{rm-1}\left(1-{\mathbb{L}}^{2+k-\frac{rm}{2}}q^m\right)^{-1} \,=\, \prod_{i=1}^r \mathsf{DT}_1^{\points}\left({\mathbb{A}}^3,q{\mathbb{L}}^{\frac{-r-1}{2}+i}\right). \end{equation} \end{lemma} \begin{proof} The claimed identity follows from a simple manipulation: \begin{align*} \prod_{i=1}^r\mathsf{DT}_1^{\points}({\mathbb{A}}^3,q{\mathbb{L}}^{\frac{-r-1}{2}+i}) &\,=\,\prod_{i=1}^r \prod_{m\geq 1}\prod_{k=0}^{m-1}\left(1-{\mathbb{L}}^{2+k-\frac{m}{2}}{\mathbb{L}}^{\frac{-r-1}{2}m+im}q^m\right)^{-1} \\ &\,=\,\prod_{m\geq 1}\prod_{k=0}^{m-1}\prod_{i=1}^r \left(1-{\mathbb{L}}^{2+k+(i-1)m-\frac{rm}{2}}q^m\right)^{-1} \\ &\,=\,\prod_{m\geq 1}\prod_{k=0}^{rm-1}\left(1-{\mathbb{L}}^{2+k-\frac{rm}{2}}q^m\right)^{-1}.\qedhere \end{align*} \end{proof} The identification of \eqref{eqn:Partition_Function_QUOT} with $\mathsf{DT}_r^{\points}({\mathbb{A}}^3,q)$ is proven in the PhD theses of the first and third authors \cite{Cazzaniga_Thesis,ThesisR}. Both proofs follow the technique introduced in the $r=1$ case by Behrend--Bryan--Szendr\H{o}i \cite{BBS}. In the next subsection, we provide a new proof of Theorem \ref{thm:main_motivic}. We exploit an $r$-framed version of motivic wall-crossing. This technique, inspired by \cite{Mozgovoy_Framed_WC,RefConifold,MorrNagao}, will be applied to small crepant resolutions of affine toric Calabi--Yau $3$-folds in \cite{Cazza_Ric}. \subsection{Calculation via wall-crossing} \label{subsec:calculation_framed_3_loop_quiver} In this subsection we prove Theorem \ref{thm:main_motivic}. Consider the universal generating function \[ \widetilde A_U = \sum_{n\geq 0}\,\left[\mathfrak M(\widetilde J,n) \right]_{\vir}\cdot y^{(1,n)}\,\in\,\mathcal T_{\widetilde L_3} \] as an element of the motivic quantum torus. To (generic) stability parameters $\zeta^{\pm} \in \Omega_{\pm}$ we associate elements (cf.~Definition \ref{def:motivic_partition_functions}) \[ \mathsf Z_{\zeta^\pm} = \sum_{n\geq 0}\,\left[\mathfrak M_{\zeta^{\pm}}(\widetilde J,n) \right]_{\vir}\cdot y^{(1,n)}\,\in\,\mathcal T_{\widetilde L_3}. \] By Lemma \ref{lemma:stability}\,\eqref{item33}, we have an identity \begin{equation}\label{A_minus} \mathsf Z_{\zeta^-} = y_{\infty} = y^{(1,0)}, \end{equation} whereas the series $\mathsf Z_{\zeta^+}$ is, essentially, a ``shift'' of the generating function $\mathsf{DT}_r^{\points}({\mathbb{A}}^3,y^{(0,1)})$.
More precisely, by Proposition \ref{prop:SPOT_A^3} we have an identification \[ \mathfrak M_{\zeta^{+}}(\widetilde J,n) = \Quot_{{\mathbb{A}}^3}(\mathscr O^{\oplus r},n) \subset \mathfrak M_{\zeta^+}(\widetilde{L}_3,n) \] of critical loci sitting inside the noncommutative Quot scheme; in particular, the associated virtual motives are the same. The shift is intended in the following sense: since $\braket{(0,n),(1,0)}_{\widetilde{L}_3} = rn$, the product rule \eqref{eqn:product_in_TQ} yields the identity \begin{equation}\label{eqn:y1n} y^{(1,n)} = {\mathbb{L}}^{-\frac{rn}{2}} y^{(0,n)}\cdot y_{\infty}\,\in\,\mathcal T_{\widetilde{L}_3}. \end{equation} Since we can express $y^{(0,n)}$ as the $n$-fold product of $y^{(0,1)}$ with itself, we obtain \begin{equation}\label{eqn:A+} \mathsf Z_{\zeta^+} = \mathsf{DT}_r^{\points}\left({\mathbb{A}}^3,{\mathbb{L}}^{-\frac{r}{2}} y^{(0,1)}\right)\cdot y_{\infty}. \end{equation} The last generating function we need to analyse is \[ A_U = \sum_{n\geq 0}\,\bigl[\mathfrak{M}(J,n) \bigr]_{\vir} \cdot y^{(0,n)}\,\in\,\mathcal T_{L_3}\,\subset\, \mathcal T_{\widetilde{L}_3}, \] whose $n$-th coefficient is the virtual motive of the stack of $0$-dimensional ${\mathbb{C}}[x,y,z]$-modules of length $n$. This was already computed in Example \ref{example:cut_3_loop_quiver}: \begin{equation}\label{eqn:BU} A_U(y) = \prod_{m \geq 1}\prod_{k \geq 1}\left(1-{\mathbb{L}}^{2-k}y^m\right)^{-1}. \end{equation} The next ingredient of the proof is a particular instance of Mozgovoy's motivic wall-crossing formula \cite{Mozgovoy_Framed_WC}. \begin{prop}\label{prop:motivic_WC_3loop_quiver} In $\mathcal T_{\widetilde{L}_3}$, there are identities \begin{equation}\label{eqn:WC_3loop_quiver} \mathsf Z_{\zeta^+}\cdot A_U = \widetilde A_U = A_U \cdot \mathsf Z_{\zeta^-}. \end{equation} \end{prop} \begin{proof} Let $\widetilde{\rho} = (u,\rho)$ be a $\widetilde{J}$-module. Consider $\zeta^+\in \Omega_{+}$, and let $\braket{u,\rho}\subset \widetilde{\rho}$ be the submodule introduced in Definition \ref{def:span}. We have that $\braket{u,\rho}$ is $\zeta^+$-stable by Lemma \ref{lemma:stability}\,\eqref{item11} and the quotient $\widetilde{\rho}/\braket{u,\rho}$ is supported at the vertex $0$. From this we deduce the decomposition $\widetilde{A}_U=\mathsf Z_{\zeta^+}\cdot A_{U}$. Consider now $\zeta^- \in \Omega_{-}$. The quotient of $\widetilde{\rho}$ by the submodule $\rho$ based at the vertex $0$ is the simple module supported at the framing vertex $\infty$. By Lemma \ref{lemma:stability}\,\eqref{item33} this is the unique $\zeta^-$-stable module for the current choice of $\zeta^-$, so we obtain the decomposition $\widetilde{A}_U=A_{U}\cdot \mathsf Z_{\zeta^-}$. 
\end{proof} \setlength{\abovecaptionskip}{-40pt} \begin{figure}[!ht] \begin{center} \begin{tikzpicture}[>=stealth',shorten >=1pt,auto,node distance=3cm, thick,main node/.style={circle,draw,minimum size=3mm}] \node at (-3,-1) {$\Omega_{+}$}; \node at (0,-1) {$\Omega_{0}$}; \node at (3,-1) {$\Omega_{-}$}; \node (a) at (0,3) {}; \node (b) at (0,-3) {}; \draw (-5,0)--(5,0); \draw[line width=1.8] (0.2,-0.2)--(-0.2,0.2); \draw[line width=1.8] (-0.2,-0.2)--(0.2,0.2); \begin{pgflowlevelscope}{\pgftransformscale{0.45}} \node[main node](0) at (8.5-16,6-2) {$v_{0}$}; \node[circle,draw,minimum size=3mm] (i) at (8.5-16,4-2) {$v_{\infty}$}; \node at (7.5-16,5.5-2) {$\subseteq$}; \node (dots) at (8.5-16,5-2) {$\ldots$}; \node[main node](0') at (6.5-16,6-2) {$v_{0}$}; \node[circle,draw,minimum size=3mm] (i') at (6.5-16,4-2) {$v_{\infty}$}; \node (dots) at (6.5-16,5.5-2) {$\ldots$}; \node (n) at (9.2-16,6-2) {$n$}; \node (n') at (5.7-16,6-2) {$n'$}; \node (1) at (9.2-16,4-2) {$1$}; \node (1') at (5.7-16,4-2) {$1$}; \path[every node/.style={font=\sffamily\small, fill=white,inner sep=1pt},every loop/.style={looseness=12}] (0) edge [->,in=110, out=70, loop] node {}(0) (0) edge [->,in=120, out=60, loop] node {}(0) (0) edge [->,in=130, out=50, loop] node {}(0) (i) edge [->,bend left=20] node[] {} (0) edge [->,bend left=-20] node[] {} (0); \path[every node/.style={font=\sffamily\small, fill=white,inner sep=1pt},every loop/.style={looseness=12}] (0') edge [->,in=110, out=70, loop] node {}(0') (0') edge [->,in=120, out=60, loop] node {}(0') (0') edge [->,in=130, out=50, loop] node {}(0') (i') edge [->,bend left=20] node[] {} (0') edge [->,bend left=-20] node[] {} (0'); \end{pgflowlevelscope} \begin{pgflowlevelscope}{\pgftransformscale{0.45}} \node[main node](0) at (-8.5+16,-4.5+8) {$v_{0}$}; \node at (-7.5+16,-4.5+8) {$\subseteq$}; \node (dots) at (-6.5+16,-5+8) {$\ldots$}; \node[main node](0') at (-6.5+16,-4+8) {$v_{0}$}; \node[circle,draw,minimum size=3mm] (i') at (-6.5+16,-6+8) {$v_{\infty}$}; \node (n) at (-9.2+16,-4.5+8) {$n$}; \node (n') at (-5.7+16,-4+8) {$n$}; \node (1') at (-5.7+16,-6+8) {$1$}; \path[every node/.style={font=\sffamily\small, fill=white,inner sep=1pt},every loop/.style={looseness=12}] (0) edge [->,in=110, out=70, loop] node {}(0) (0) edge [->,in=120, out=60, loop] node {}(0) (0) edge [->,in=130, out=50, loop] node {}(0); \path[every node/.style={font=\sffamily\small, fill=white,inner sep=1pt},every loop/.style={looseness=12}] (0') edge [->,in=110, out=70, loop] node {}(0') (0') edge [->,in=120, out=60, loop] node {}(0') (0') edge [->,in=130, out=50, loop] node {}(0') (i') edge [->,bend left=20] node[] {} (0') edge [->,bend left=-20] node[] {} (0'); \end{pgflowlevelscope} \begin{pgflowlevelscope}{\pgftransformscale{0.45}} \node[main node](0) at (.5,4) {$v_{0}$}; \node[circle,draw,minimum size=3mm] (i) at (.5,2) {$v_{\infty}$}; \node (dots) at (.5,3) {$\ldots$}; \node (dots) at (-.5,3.5) {$\subseteq$}; \node (dots) at (-1.3,3.5) {$0$}; \node (n) at (-.2,4) {$n$}; \node (1) at (-.2,2) {$1$}; \path[every node/.style={font=\sffamily\small, fill=white,inner sep=1pt},every loop/.style={looseness=12}] (0) edge [->,in=110, out=70, loop] node {}(0) (0) edge [->,in=120, out=60, loop] node {}(0) (0) edge [->,in=130, out=50, loop] node {}(0) (i) edge [->,bend left=20] node[] {} (0) edge [->,bend left=-20] node[] {} (0); \end{pgflowlevelscope} \end{tikzpicture} \caption{An illustration of Proposition \ref{prop:motivic_WC_3loop_quiver}.}\label{fig_wall} \end{center} \end{figure} Note that we have an 
identity \[ y_{\infty}\cdot y^{(0,n)} = {\mathbb{L}}^{-\frac{rn}{2}}\cdot y^{(1,n)} = {\mathbb{L}}^{-rn}\cdot y^{(0,n)}\cdot y_{\infty}, \] where we have used \eqref{eqn:y1n} for the second equality. By Formula \eqref{eqn:A+}, the left-hand term of Formula \eqref{eqn:WC_3loop_quiver} can then be rewritten as \begin{multline*} \mathsf{DT}_r^{\points}\left({\mathbb{A}}^3,{\mathbb{L}}^{-\frac{r}{2}} y^{(0,1)}\right)\cdot \sum_{n\geq 0}\,\bigl[\mathfrak{M}(J,n)\bigr]_{\vir}\cdot y_{\infty} \cdot y^{(0,n)} \\ =\mathsf{DT}_r^{\points}\left({\mathbb{A}}^3,{\mathbb{L}}^{-\frac{r}{2}} y^{(0,1)}\right)\cdot \sum_{n\geq 0}\,\bigl[\mathfrak{M}(J,n)\bigr]_{\vir} {\mathbb{L}}^{-rn}\cdot y^{(0,n)}\cdot y_{\infty}. \end{multline*} Therefore, by Equation \eqref{A_minus}, identifying the two expressions for $\widetilde{A}_U$ in Equation \eqref{eqn:WC_3loop_quiver} yields \[ \mathsf{DT}_r^{\points}\left({\mathbb{A}}^3,{\mathbb{L}}^{-\frac{r}{2}} y^{(0,1)}\right)\cdot A_U\left({\mathbb{L}}^{-r}y^{(0,1)}\right) \cdot y_{\infty} = A_U\left(y^{(0,1)}\right)\cdot y_{\infty}, \] which is equivalent to \[ \mathsf{DT}_r^{\points}\left({\mathbb{A}}^3,{\mathbb{L}}^{-\frac{r}{2}} y^{(0,1)}\right) = \frac{A_U\left(y^{(0,1)}\right)}{A_U\left({\mathbb{L}}^{-r}y^{(0,1)}\right)}. \] Setting $q = {\mathbb{L}}^{-\frac{r}{2}}y^{(0,1)}$, and using Equation \eqref{eqn:BU}, a simple substitution yields \begin{align*} \mathsf{DT}_r^{\points}({\mathbb{A}}^3,q) &= \frac{A_U\left({\mathbb{L}}^{\frac{r}{2}}q\right)}{A_U\left({\mathbb{L}}^{-\frac{r}{2}}q\right)} \\ &=\prod_{m\geq 1}\prod_{j\geq 0}\frac{\left(1-\mathbb L^{1-j+\frac{rm}{2}}q^m\right)^{-1}}{\left(1-\mathbb L^{1-j-\frac{rm}{2}}q^m\right)^{-1}}\\ &=\prod_{m\geq 1}\prod_{j=0}^{rm-1}\left(1-\mathbb L^{1-j+\frac{rm}{2}}q^m\right)^{-1}\\ &=\prod_{m\geq 1}\prod_{k=0}^{rm-1}\left(1-\mathbb L^{2+k-\frac{rm}{2}}q^m\right)^{-1}. \end{align*} Formula \eqref{formu} is proved. By Lemma \ref{thm:quot_partition_function}, Theorem \ref{thm:main_motivic} is proved. \begin{remark} The generating function $\mathsf{DT}_r^{\points}({\mathbb{A}}^3,(-1)^rq)$ admits the plethystic expression \[ \mathsf{DT}_r^{\points}({\mathbb{A}}^3,(-1)^rq) = \Exp\left(\frac{(-1)^rq\mathbb L^{\frac{3}{2}}}{\bigl(1-(-\mathbb L^{-\frac{1}{2}})^rq\bigr)\bigl(1-(-\mathbb L^{\frac{1}{2}})^rq\bigr)}\frac{\mathbb L^{-\frac{r}{2}}-\mathbb L^{\frac{r}{2}}}{\mathbb L^{-\frac{1}{2}}-\mathbb L^{\frac{1}{2}}}\right). \] This is exploited in \cite[\S\,4]{Quot19} to define $[\Quot_Y(F,n)]_{\vir}$ for all $3$-folds $Y$ and locally free sheaves $F$ over $Y$. \end{remark} \begin{remark} The Euler number specialisation ${\mathbb{L}}^{1/2} \to -1$ applied to Formula \eqref{formu} yields a formula for the generating function of \emph{virtual Euler characteristics} of Quot schemes, \[ \sum_{n \geq 0} \chi_{\vir}(\Quot_{{\mathbb{A}}^3}(\mathscr O^{\oplus r},n))\cdot q^n = \mathsf M((-1)^rq)^r, \] where $\mathsf M(q) = \prod_{m\geq 1}(1-q^m)^{-m}$ is the MacMahon function, the generating function of plane partitions, and where $\chi_{\vir}(-) = \chi(-,\nu)$ is the Euler characteristic weighted by Behrend's microlocal function \cite{Beh}. The above identity can be seen as the Calabi--Yau specialisation (i.e.~the specialisation $s_1+s_2+s_3 = 0$) of the generating function of \emph{cohomological Donaldson--Thomas invariants} of ${\mathbb{A}}^3$, \[ \sum_{n\geq 0}\mathsf{DT}_r^{\coh}({\mathbb{A}}^3,n)\cdot q^n = \mathsf M((-1)^rq)^{-r\frac{(s_1+s_2)(s_1+s_3)(s_2+s_3)}{s_1s_2s_3}}\,\,\in\,\, {\mathbb{Q}}(\!(s_1,s_2,s_3)\!) 
\llbracket q \rrbracket, \] obtained in \cite[Thm.~B]{FMR_K-DT} (as a higher rank version of \cite[Thm.~1]{MNOP2}), where $s_1$, $s_2$ and $s_3$ are the equivariant parameters of the torus ${\mathbb{T}} = {\mathbb{G}}_m^3$ acting on the Quot scheme. \end{remark} \section{The normal limit law and asymptotics}\label{sec:Asymptotics} In \S\,\ref{sec:random_variables_on_colored_partitions}, we introduce a family of random variables on the space of $r$-colored plane partitions, and we describe the asymptotics of the members of the family after suitable normalisation in Proposition \ref{propCLT}. Theorem \ref{main1}, the main theorem of the section, is deduced from Proposition \ref{propCLT} in subsection \ref{sub:proofB}. Finally, subsection \ref{sec:proofCLT} is entirely devoted to the proof of Proposition \ref{propCLT}. \subsection{Random variables on $r$-colored plane partitions}\label{sec:random_variables_on_colored_partitions} We introduce a multivariate function \begin{equation}\label{eq:F_prod} F(u,v,w,z)=\prod_{l=1}^{r}\prod_{m\geq 1}\prod_{k=1}^{m}\left(1-wu^{k}v^{ml}z^m\right)^{-1}. \end{equation} The coefficient of $z^n$ is a polynomial in the three variables $u$, $v$ and $w$, which we denote by $Q_{n}(u,v,w)$, whose coefficients are nonnegative integers. When $u=v=w=1$, we obtain the well-known MacMahon function raised to the power $r$, which is the generating function for the $r$-tuples of plane partitions. Hence, $Q_{n}(1,1,1)$ is the number of $r$-tuples of plane partitions of total size $n$, i.e.~the number of $r$-colored plane partitions $(\pi_1,\, \pi_2, \, \dots, \, \pi_r)$ such that $\sum_{j=1}^{r}\lvert\pi_j\rvert=n$, where $\lvert\pi_j\rvert$ denotes the sum of the entries of the plane partition $\pi_j$ (or the number of boxes, cf.~Figure \ref{partition}). The polynomial $Q_{n}(u,v,w)$, when divided by $Q_{n}(1,1,1)$, represents the joint probability generating function of certain random variables $X_n$, $Y_n$, and $Z_n$ on the space of $r$-colored plane partitions of size $n$, where each $r$-tuple is equally likely. More precisely, we have \begin{equation}\label{eq:prob_gen} \frac{Q_{n}(u,v,w)}{Q_{n}(1,1,1)}=\mathbb{E}\left(u^{X_n}v^{Y_n}w^{Z_n}\right). \end{equation} To describe these random variables, we need to define certain parameters of plane partitions. For a plane partition $\pi$, let $\Delta(\pi)$ denote the sum of the diagonal parts of $\pi$, $\Delta_{+}(\pi)$ denote the sum of the upper diagonal parts, and $\Delta_{-}(\pi)$ denote the sum of the lower diagonal parts. See Figure~\ref{partition} for an example of a plane partition showing the values of these parameters. \setlength{\abovecaptionskip}{10pt} \begin{figure}[!ht] \begin{tikzpicture}[scale=0.36] \centering \node[anchor=west] at (-15,1) {\begin{ytableau} 1 \\ 2 & 2 \\ 2 & 2 & 1 \\ 3 & 3 & 2 \\ 5 & 4 & 3 & 1 \\ \end{ytableau}}; \planepartition{{5,3,2,2,1},{4,3,2,2},{3,2,1},{1}} \end{tikzpicture} \caption{A plane partition $\pi$ of size $\lvert\pi\rvert=31$, $\Delta(\pi)=9$, $\Delta_{+}(\pi)=12$, and $\Delta_{-}(\pi)=10$.}\label{partition} \end{figure} The parameter $\Delta(\pi)$ is also known as the trace of $\pi$ and it has been studied in the literature; see for instance \cite{Kamenov2007} and the references therein. In particular, one has \[ \prod_{m\geq 1}(1-wz^m)^{-m}=\sum_{\pi}w^{\Delta(\pi)}z^{\lvert\pi\rvert}.
\] Similarly, we can find in \cite{Morrison_asymptotics} that \[ \prod_{m\geq 1}\prod_{k=1}^{m}(1-q^{2k-m}z^{m})^{-1}=\sum_{\pi}q^{\Delta(\pi)+\Delta_{+}(\pi)-\Delta_{-}(\pi)}z^{\lvert\pi\rvert}. \] We can easily deduce from these two identities that for an $r$-colored plane partition $\overline{\pi} = (\pi_1, \pi_2, \ldots, \pi_r)$ of total size $n$, we have \begin{align*} X_n\left(\overline{\pi}\right)& =\frac{n}{2}+\frac{1}{2}\left( \sum_{l=1}^{r}\left(\Delta(\pi_l)+\Delta_{+}(\pi_l)-\Delta_{-}(\pi_l)\right)\right)\\ &=\sum_{l=1}^{r}\left(\Delta(\pi_l)+\Delta_{+}(\pi_l)\right),\\ Y_n\left(\overline{\pi}\right)& = \sum_{l=1}^{r}l|\pi_l|, \ \ \text{and}\\ Z_n\left(\overline{\pi}\right)& = \sum_{l=1}^{r}\Delta(\pi_l). \end{align*} When $r=1$, Kamenov and Mutafchiev~\cite{Kamenov2007} proved that the distribution of the trace of a random plane partition of size $n$, when suitably normalised, is asymptotically normal. Morrison \cite{Morrison_asymptotics} also established asymptotic normality for any random variable of the form $\delta\Delta(\pi)+\Delta_{+}(\pi)-\Delta_{-}(\pi)$, where $\pi$ is a random plane partition of size $n$ and $\delta$ is a fixed real number. We show that for any fixed integer $r\geq 1$, any nontrivial linear combination of the variables $X_n$, $Y_n$ and $Z_n$, when suitably normalised, converges weakly to a normal distribution. It is worth noting that the random variable $Y_n$ is non-constant only when $r>1$. \bprop \label{propCLT} For any fixed real vector $(\alpha,\beta,\gamma)\neq (0,0,0)$, there exist sequences of real numbers $\mu_n$ and $\sigma_n\geq 0$ such that the normalised random variable $$\frac{\alpha X_n+\beta Y_n+ \gamma Z_n-\mu_n}{\sigma_n}$$ converges weakly to the standard normal distribution. Moreover, $\mu_n$ and $\sigma_n$ satisfy the following asymptotic formulas as $n\to \infty$: \begin{align*} \mu_n&\,\,=\,\,\left(\frac{1}{2}\alpha+\frac{r+1}{2} \beta\right)n+\frac{r^{1/3}\zeta(2)(\alpha+2\gamma)}{2^{5/3}(\zeta(3))^{2/3}}\, n^{2/3}+\mathcal{O}(n^{1/3}), \, \text{ and }\,\\ \sigma_n^2 &\,\,\sim\,\, \begin{cases} \displaystyle \frac{\alpha^2+(r^2-1)\beta^2}{2^{7/3}(r\zeta(3))^{1/3}}\, n^{4/3}\, & \text{ if } (\alpha,\beta)\neq (0,0),\\[2em] \displaystyle \hfil \frac{r^{1/3}\gamma^2}{3(2\zeta(3))^{2/3}} n^{2/3}\log n \, & \text{otherwise.} \end{cases} \end{align*} \eprop Looking at the asymptotic behaviour of the random variables $X_n$, $Y_n$ and $Z_n$, when divided by $n^{2/3}$, we observe from the above result that the random variable $n^{-2/3}Z_n$ degenerates as $n\to \infty$. Furthermore, by the Cram\'er--Wold device \cite{Cramer-Wold}, the random variables $n^{-2/3}X_n$ and $n^{-2/3}Y_n$ converge jointly to a bivariate normal distribution with a diagonal covariance matrix. The appropriate normalisation of $Z_n$ is $n^{-1/3}(\log n)^{-1/2}Z_n$ which is, when centred, asymptotically normal. This asymptotic normality and the asymptotic formulas for $\mu_n$ and $\sigma_n^2$ agree with the main result in \cite{Kamenov2007} when $r=1$ and $(\alpha,\beta,\gamma)=(0,0,1)$. \begin{convention} We shall use the Vinogradov notation $\ll$ interchangeably with the $\mathcal O$-notation. For instance, by $f(n)\ll g(n)$ (or $g(n)\gg f(n)$) as $n\to\infty$, we mean that there exists a positive constant $C$ such that $|f(n)|\leq Cg(n)$ for sufficiently large $n$. \end{convention} Theorem \ref{main1} now follows immediately from Proposition \ref{propCLT} as we will see next.
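Before doing so, the probabilistic set-up above lends itself to a direct sanity check. The following minimal Python sketch (using the sympy library; the truncation order $N=6$ and the choice $r=2$ are ad hoc) expands the product \eqref{eq:F_prod} modulo $z^{N+1}$ and verifies that $Q_n(1,1,1)$ equals the number of $2$-colored plane partitions of total size $n$, that is, the coefficients $1,2,7,18,47,110,258$ of the square of the MacMahon function; it then prints the joint probability generating function \eqref{eq:prob_gen} for $n=4$.
\begin{verbatim}
# Expand F(u,v,w,z) modulo z^{N+1} and test Q_n(1,1,1) against the
# coefficients of the square of the MacMahon function (r = 2 colours).
import sympy as sp

u, v, w, z = sp.symbols('u v w z')
N, r = 6, 2

F = sp.Integer(1)
for l in range(1, r + 1):
    for m in range(1, N + 1):
        for k in range(1, m + 1):
            # geometric expansion of (1 - w u^k v^{ml} z^m)^{-1}
            factor = sum((w * u**k * v**(m * l) * z**m)**j
                         for j in range(N // m + 1))
            F = sp.expand(F * factor)
            F = sum(F.coeff(z, i) * z**i for i in range(N + 1))  # truncate

Q = [F.coeff(z, i) for i in range(N + 1)]
assert [q.subs({u: 1, v: 1, w: 1}) for q in Q] == [1, 2, 7, 18, 47, 110, 258]

# joint probability generating function E(u^{X_4} v^{Y_4} w^{Z_4})
print(sp.expand(Q[4] / Q[4].subs({u: 1, v: 1, w: 1})))
\end{verbatim}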
\subsection{Deducing Theorem \ref{main1}}\label{sub:proofB} Granting Proposition \ref{propCLT}, we can now finish the proof of our second main result. \begin{proofof}{Theorem \ref{main1}} With the change of variable $T={\mathbb{L}}^{1/2}$, the combination of the equations \eqref{formu} and \eqref{formula_product} in Theorem~\ref{thm:main_motivic} yields \[ \mathsf{DT}_r^{\points}({\mathbb{A}}^3,q) =\prod_{l=1}^{r}\prod_{m\geq 1}\prod_{k=0}^{m-1}\left(1-T^{4+2k-m}\Big(qT^{-r-1+2l}\Big)^m\right)^{-1}. \] The product on the right-hand side can be expressed in terms of our auxiliary function $F(u,v,w,z)$ defined at the beginning of this section. First we write it as follows \[ \prod_{l=1}^{r}\prod_{m\geq 1}\prod_{k=0}^{m-1}\left(1-T^{4+m-2(m-k)}\Big(qT^{r+1-2(r-l+1)}\Big)^m\right)^{-1}. \] If $k$ goes from $0$ to $m-1$, then $m-k$ goes from $m$ to $1$. Similarly, if $l$ goes from $1$ to $r$, then $r-l+1$ goes from $r$ to $1$. Therefore, we have \begin{align*} \mathsf{DT}_r^{\points}({\mathbb{A}}^3,q) & =\prod_{l=1}^{r}\prod_{m\geq 1}\prod_{k=1}^{m}\left(1-T^{4+m-2k}(T^{r+1-2l}q)^m\right)^{-1}\\ & = \prod_{l=1}^{r}\prod_{m\geq 1}\prod_{k=1}^{m}\left(1-T^{4-2k-2ml}(T^{r+2}q)^m\right)^{-1}\\ & =F(T^{-2},T^{-2},T^4,T^{r+2}q). \end{align*} This implies that \begin{equation}\label{eq:ET} {\mathbb{E}}(T^{S_{n,r}})= \frac{M_{n,r}(T)}{M_{n,r}(1)}={\mathbb{E}}\left(T^{4Z_n-2X_n-2Y_n+(r+2)n}\right), \end{equation} where $M_{n,r}(T)$ is, as defined in \eqref{eq:coefM}, the coefficient of $q^n$ in $\mathsf{DT}_r^{\points}({\mathbb{A}}^3,q)$. The second equality in \eqref{eq:ET} makes use of Equation~\eqref{eq:prob_gen}. Thus, $S_{n,r}$ has the same distribution as $4Z_n-2X_n-2Y_n+(r+2)n$ --- a shifted linear combination of the variables $X_n$, $Y_n$, and $Z_n$. Now, applying Proposition \ref{propCLT} with $(\alpha, \beta, \gamma) = (-2, -2, 4)$, we deduce that the normalised random variable $n^{-2/3}(4Z_n-2X_n-2Y_n+(r+2)n)$ converges weakly to the normal distribution $\mathcal{N}(\mu,\sigma^2)$ with \[ \mu = \frac{r^{1/3}\pi^2}{2^{5/3}(\zeta(3))^{2/3}}\, \text{ and }\, \sigma^2=\frac{r^{5/3}}{(2\zeta(3))^{1/3}}, \] which proves Theorem \ref{main1}. \end{proofof} \subsection{Proof of Proposition \ref{propCLT}}\label{sec:proofCLT} Morrison used the method of moments to prove his result in \cite{Morrison_asymptotics}. However, due to the appearance of the second variable $Y_n$ and the complication that comes with it, we decided to use a different approach. We follow the method that Hwang used in \cite{Hwang} to prove limit theorems for the number of parts in the so-called restricted partitions (these are one dimensional partitions with some restrictions on the parts). The first part of the proof is based on the saddle-point method to get an asymptotic formula for $Q_{n}(u,v,w)$ as $n\to\infty$, and the second is a perturbation technique to deduce the central limit theorem. \subsubsection{Saddle-point method} The goal here is to obtain an asymptotic formula for $Q_{n}(u,v,w)$ as $n\to\infty$, where $(u,v,w)$ is allowed to vary in a fixed real neighborhood of $(1,1,1)$. To simplify our notation, define $ \Phi(t)=-\log(1-e^{-t}) $, and for real numbers $a$, $b$ and $c$, we let \[ f(x,y)=\sum_{l=1}^{r}\sum_{m=1}^{\infty} \sum_{k=1}^{m}\Phi(xm+y(c+ak+mbl)). \] The function $f$ depends on $(a,b,c)$ but we drop this dependence for now to ease notation. Also, for a positive number $\rho$, we make the substitution $z=e^{-\tau}$, where $\tau=\rho+it$. 
Hence, \[ f(\tau,\rho) = -\sum_{l=1}^{r}\sum_{m=1}^{\infty}\sum_{k=1}^{m}\log\left(1-e^{-\rho(c+ak+mbl)-\tau m}\right)=\log F(e^{-a\rho},e^{-b\rho},e^{-c\rho},e^{-\tau}). \] One can easily verify that if $(u,v,w)$ is bounded (which is the case throughout this section), then there exists a fixed positive real number $R$ such that the product in \eqref{eq:F_prod} converges absolutely whenever $|z|<R$. Hence, $F(u,v,w,z)$, as function of $z$, is analytic in a complex neighborhood of $0$. By Cauchy's integral formula, we have \begin{equation}\label{eq:Cauchy} Q_{n}(e^{-a\rho},e^{-b\rho},e^{-c\rho})=\frac{e^{n\rho}}{2\pi}\int_{-\pi}^{\pi}\exp\Big(f(\rho+it,\rho)+nt i\Big)dt. \end{equation} We now use the saddle-point method to estimate the above integral. We choose $\rho$ to be the positive solution of the equation \begin{equation}\label{eq:saddle} n=-f_{x}(\rho,\rho)=\sum_{l=1}^{r}\sum_{m=1}^{\infty}\sum_{k=1}^{m}\frac{me^{-\rho(m+(c+ak+mbl))}}{1-e^{-\rho(m+(c+ak+mbl))}}, \end{equation} where $f_x$ denotes the partial derivative of $f$ with respect to $x$. Similar notations will be used for other partial derivatives. Note that there is a unique positive solution $\rho=\rho(n,a,b,c)$ of Equation~\eqref{eq:saddle} since the function defined by the series is strictly decreasing as a function of $\rho$, provided that $a$, $b$ and $c$ are small enough (it suffices for instance to assume that $|c|+|a|+r|b|<1$). Furthermore, we observe that $\rho\to0$ as $n\to \infty$. The following lemma reveals the asymptotic dependence between $n$ and $\rho$. \begin{lemma}\label{lem:1} Let $\epsilon$ be a number in the interval $[0,1/2]$ and $\rho$ be the solution of Equation~\eqref{eq:saddle}. Then we have \begin{equation} \frac{2r\zeta(3)}{(1+\epsilon)^3} \, \rho^{-3}+\mathcal{O}(\rho^{-2})\leq n \leq \frac{2r\zeta(3)}{(1-\epsilon)^3}\, \rho^{-3}+\mathcal{O}(\rho^{-2}), \end{equation} as $n\to\infty$, uniformly for $|c|+|a|+r|b|\leq \epsilon$, where the implied constants in the $\mathcal{O}$-terms are independent of $\epsilon.$ \end{lemma} \begin{proof} Recall from \eqref{eq:saddle} that \[ n =-f_x(\rho,\rho)=-\sum_{l=1}^{r}\sum_{m=1}^{\infty}\sum_{k=1}^{m} m\Phi'\left(\rho m\Big(1+\frac{c+ak+mbl}{m}\Big)\right). \] Under the assumption that $|c|+|a|+r|b|\leq \epsilon$, for any $m\geq 1$, $1\leq k\leq m$ and $1\leq l\leq r$, we have \[ |c+ak+mbl|\leq \left(|c|+|a|+r|b|\right) m\leq \epsilon m. \] Moreover, the function $\Phi'(x)$ is an increasing function. Therefore, \[ \Phi'((1-\epsilon)\rho m)\leq \Phi'\left(\rho m\left(1+\frac{c+ak+mbl}{m}\right)\right)\leq \Phi'( (1+\epsilon)\rho m). \] Multiplying by $m$ and summing over $m\geq 1$, $1\leq k\leq m$ and $1\leq l\leq r$, we obtain \[ f_x((1-\epsilon)\rho,0)\leq f_x(\rho,\rho)\leq f_x((1+\epsilon)\rho,0). \] We can obtain asymptotic estimates of the lower and upper bounds as $\rho\to0^+$. This can be done via Mellin transform. The reader can consult \cite{Mellin} for a comprehensive survey on the Mellin transform method. The Mellin transform of $f_x((1-\epsilon)t,0)$ is \[ \int_{0}^{\infty} f_x((1-\epsilon)t,0) t^{s-1} dt = -r(1-\epsilon)^{-s}\zeta(s-2)\zeta(s)\Gamma(s), \] which has simple poles at $s=3$ and $s=1$. The other singularities are precisely at the negative odd integers. Thus, we have \begin{equation}\label{eq:mellin} f_x((1-\epsilon)\rho,0)=-\frac{2r\zeta(3)}{(1-\epsilon)^{3}}\, \rho^{-3}+\frac{r}{12(1-\epsilon)}\, \rho^{-1}-\frac{r}{2\pi i }\int_{-i\infty}^{i\infty}\zeta(s-2)\zeta(s)\Gamma(s)((1-\epsilon)\rho)^{-s}ds. 
\end{equation} Since $|\zeta(it-2)\zeta(it)\Gamma(it)|$ decays exponentially fast as $t\to \pm\infty$, the absolute value of the integral on the right-hand side is bounded by an absolute constant. The same argument works for the estimate of the upper bound $f_x((1+\epsilon)\rho,0)$. This completes the proof of the lemma. \end{proof} Next, we split the integral on the right-hand side of \eqref{eq:Cauchy} into two parts as follows: let \[ \mathcal{I}_1=\frac{e^{n\rho}}{2\pi}\int_{-\rho^{C}}^{\rho^{C}}\exp\Big(f(\rho+it,\rho)+nti\Big)\,dt, \] where $C$ is an absolute constant in the interval $(5/3, 2)$, and let $\mathcal{I}_2=Q_{n}(e^{-a\rho},e^{-b\rho},e^{-c\rho})-\mathcal{I}_1$. \subsection*{Estimate of $\mathcal{I}_1$} For the rest of this section, we work under the condition of Lemma \ref{lem:1}, that is $|c|+|a|+r|b|\leq \epsilon$ and $\epsilon\in [0,1/2]$. For $-\rho^{C}\leq t\leq \rho^{C}$, Equation~\eqref{eq:saddle} and a Taylor approximation of $f(\rho+it,\rho)$ give \[ f(\rho+it,\rho)+nit=f(\rho,\rho)-f_{xx}(\rho,\rho)\frac{t^2}{2}+\mathcal{O}\left(\rho^{3C}\max_{-\rho^{C}\leq \theta\leq \rho^{C}}\Big|f_{xxx}(\rho+i\theta,\rho)\Big|\right), \] where the implied constant in the error term is absolute. To estimate the error term, observe that \[ f_{xxx}(\tau,\rho)=-\sum_{l=1}^{r}\sum_{m=1}^{\infty}\sum_{k=1}^{m}\frac{m^3e^{-\tau m-\rho(c+ak+mbl)}(1+e^{-\tau m-\rho(c+ak+mbl)})}{(1-e^{-\tau m-\rho(c+ak+mbl)})^3}. \] For any real number $\theta$ and $\tau=\rho+i\theta$ we have \begin{align*} |1+e^{-\tau m-\rho(c+ak+mbl)}|& \leq 1+e^{-\rho m-\rho(c+ak+mbl)}, \\ |1-e^{-\tau m-\rho(c+ak+mbl)}| & \geq 1-e^{-\rho m-\rho(c+ak+mbl)}. \end{align*} Hence $ |f_{xxx}(\rho+i\theta,\rho)| $ is bounded above by $|f_{xxx}(\rho,\rho)|$. We can estimate $|f_{xxx}(\rho,\rho)|$ as we did for $f_{x}(\rho,\rho)$ in the proof of Lemma \ref{lem:1}. We obtain $$ |f_{xxx}(\rho+i\theta,\rho)|\leq |f_{xxx}(\rho,\rho)| =\mathcal{O}(\rho^{-5}), $$ where the implied constant depends only on $r$. Therefore, for $\lvert t \rvert\leq \rho^{C}$ we have \[ f(\rho+it,\rho)+nit=f(\rho,\rho)-f_{xx}(\rho,\rho)\frac{t^2}{2}+\mathcal{O}(\rho^{3C-5}). \] Since we chose $C>5/3$, we have $\rho^{3C-5}=o(1)$. Thus \[ \mathcal{I}_1=\frac{e^{f(\rho,\rho)+n\rho}}{2\pi}\int_{-\rho^C}^{\rho^C}e^{-f_{xx}(\rho,\rho)t^2/2}\, dt\, \Big(1+\mathcal{O}(\rho^{3C-5})\Big). \] It remains to estimate the integral on the right-hand side as $\rho\to 0^+$. Note that $f_{xx}(\rho,\rho)>0$ and $f_{xx}(\rho,\rho) \gg \rho^{-4} $ (again via Mellin transform as in Lemma~\ref{lem:1}), so we have \begin{align*} \int_{-\rho^{C}}^{\rho^{C}}e^{-f_{xx}(\rho,\rho)t^2/2}\, dt & = \int_{-\infty}^{\infty}e^{-f_{xx}(\rho,\rho)t^2/2}\, dt-2\int_{\rho^{C}}^{\infty}e^{-f_{xx}(\rho,\rho)t^2/2}\, dt\\ & =\sqrt{\frac{2\pi}{f_{xx}(\rho,\rho)}}+\mathcal{O}\left(\int_{\rho^{C}}^{\infty}e^{-\rho^{C}f_{xx}(\rho,\rho)t/2}\, dt\right)\\ & =\sqrt{\frac{2\pi}{f_{xx}(\rho,\rho)}}+\mathcal{O}\left(\rho^{4-C}e^{-A\rho^{2C-4}}\right), \end{align*} where $A>0$ and the hidden constants in the error terms above depend only on $r$. Thus, since we chose $C<2$, the term $\rho^{4-C}e^{-A\rho^{2C-4}}$ tends to zero faster than any power of $\rho$ as $\rho\to0^{+}$. Hence, we obtain an estimate for $\mathcal{I}_1$ \begin{equation}\label{eq:est_I1} \mathcal{I}_1=\frac{e^{f(\rho,\rho)+n\rho}}{\sqrt{2\pi f_{xx}(\rho,\rho)}}\Big(1+\mathcal{O}(\rho^{3C-5})\Big)\ \ \text{as}\ \ \rho\to 0^{+}. \end{equation} This estimate holds uniformly for $|c|+|a|+r|b|\leq \epsilon$ and $\epsilon\in [0,1/2]$.
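Before estimating $\mathcal{I}_2$, we remark that the saddle-point equation itself is easy to solve numerically. The following minimal Python sketch (with $a=b=c=0$, an ad hoc series cutoff $M$, and plain bisection, which is legitimate because $-f_x(\rho,\rho)$ is strictly decreasing in $\rho$) solves \eqref{eq:saddle} and compares the solution with the leading-order prediction $\rho\approx(2r\zeta(3)/n)^{1/3}$ of Lemma~\ref{lem:1}; the agreement improves as $n$ grows.
\begin{verbatim}
# Solve n = -f_x(rho, rho) at a = b = c = 0, where the series reduces
# to r * sum_m m^2 e^{-rho m} / (1 - e^{-rho m}).
import math

def minus_fx(rho, r, M=200000):
    total = 0.0
    for m in range(1, M + 1):
        e = math.exp(-rho * m)
        if e < 1e-300:
            break
        total += m * m * e / (1.0 - e)
    return r * total

def solve_rho(n, r):
    lo, hi = 1e-6, 10.0   # -f_x is strictly decreasing in rho
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if minus_fx(mid, r) > n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r = 2
zeta3 = 1.2020569031595943   # zeta(3)
for n in (10**4, 10**6, 10**8):
    rho = solve_rho(n, r)
    print(n, rho, (2 * r * zeta3 / n) ** (1 / 3))
\end{verbatim}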
\subsection*{Estimate of $\mathcal{I}_2$} We will prove that $|\mathcal I_2|$ is much smaller than $|\mathcal I_1|$. To this end, we assume that $\rho^{C}< t \leq \pi$, where $C$ is as before. We have \begin{align*} \mathrm{Re} (f(\rho+it,\rho))-f(\rho,\rho) & \,\,=\,\,-\sum_{l=1}^r\sum_{m=1}^{\infty}\sum_{k=1}^{m} \sum_{j=1}^{\infty}j^{-1}e^{-j\rho(m+(c+ak+mbl))}(1-\cos(mjt))\\ & \,\,\leq\,\, -\sum_{l=1}^r\sum_{m=1}^{\infty}\sum_{k=1}^{m} e^{-\rho(m+(c+ak+mbl))}\left(1-\cos(mt)\right)\\ & \,\,\leq\,\, -r\sum_{m=1}^{\infty}me^{-\rho(1+\epsilon)m}(1-\cos(mt)). \end{align*} Moreover, \[ \sum_{m=1}^{\infty}me^{-\rho(1+\epsilon)m}(1-\cos(mt)) =\frac{e^{\rho(1+\epsilon)}}{(e^{\rho(1+\epsilon)}-1)^2}-\mathrm{Re}\left( \frac{e^{\rho(1+\epsilon)+it}}{(e^{\rho(1+\epsilon)+it}-1)^2}\right). \] A lower estimate of the same term can be found in the proof of \cite[Lemma~5]{root}. By the same argument as the one given in loc.~cit., but with $\lvert t \rvert\geq \rho^{C}$, we get \[ \sum_{m=1}^{\infty}me^{-\rho(1+\epsilon)m}(1-\cos(mt)) \gg \left(\rho(1+\epsilon)\right)^{2C-4} \ \ \text{as}\ \ \rho\to0^+, \] where the implied constant is independent of $\epsilon.$ Noting that $2C-4<0$, we deduce that $\exp\left( \mathrm{Re} (f(\rho+it,\rho))-f(\rho,\rho)\right)$ tends to zero faster than any power of $\rho$ as $\rho\to0^+$. Thus, by \eqref{eq:est_I1}, we find \[ \frac{|\mathcal{I}_2|}{|\mathcal{I}_1|}\ll \sqrt{f_{xx}(\rho,\rho)} \int_{\rho^C}^{\pi}\exp\left( \mathrm{Re} (f(\rho+it,\rho))-f(\rho,\rho)\right)dt, \] which tends to zero faster than any power of $\rho$ as $\rho\to0^+$. Recalling that $Q_{n}(e^{-a\rho},e^{-b\rho},e^{-c\rho})=\mathcal{I}_1+\mathcal{I}_2$, we finally obtain \begin{equation}\label{eq:asym_Q_n} Q_{n}(e^{-a\rho},e^{-b\rho},e^{-c\rho})=\frac{e^{f(\rho,\rho)+n\rho}}{\sqrt{2\pi f_{xx}(\rho,\rho)}}\Big(1+\mathcal{O}(\rho^{3C-5})\Big)\ \ \text{as}\ \ n\to \infty, \end{equation} uniformly for $|c|+|a|+r|b|\leq \epsilon$ and $\epsilon\in [0,1/2]$, where $\rho$ is the solution of Equation~\eqref{eq:saddle}. \subsubsection{Perturbation} Here we set $u=e^{\eta\alpha}$, $v=e^{\eta\beta}$, and $w=e^{\eta\gamma}$, where $(\alpha,\beta,\gamma)$ is fixed and $\eta$ can vary in a small open interval containing zero. Hence, Equation~\eqref{eq:prob_gen} becomes \begin{equation}\label{eq:mgf} \frac{Q_n(u,v,w)}{Q_n(1,1,1)}=\mathbb{E}\left(e^{\eta(\alpha X_n+\beta Y_n+\gamma Z_n)}\right). \end{equation} The right-hand side is the moment generating function of the random variable $\alpha X_n+\beta Y_n+\gamma Z_n$. From now on, let $\rho_0$ be the unique positive number such that $n=-f_x(\rho_0,0).$ Then, by Lemma~\ref{lem:1} (with $\epsilon= 0$) we have $ n\sim 2r\zeta(3)\rho_0^{-3}. $ Moreover, if we write $u=e^{-a \rho}$, $v=e^{-b \rho}$, and $w=e^{-c\rho}$ for $\rho>0$ as before, then we have $a=-\alpha\eta \rho^{-1}$, $b=-\beta\eta \rho^{-1}$, and $c=-\gamma\eta \rho^{-1}$. This implies that \[ |c|+|a|+r|b|=(|\gamma|+|\alpha|+r|\beta|)|\eta|\rho^{-1}. \] Now, if we choose $\eta$ and $\rho$ in such a way that $\eta\rho^{-1}=o(1)$ and $\rho\to 0$ as $n\to \infty$, then by Lemma~\ref{lem:1} (with $\epsilon\to 0$), we get \[ -f_x(\rho,\rho)\sim 2r\zeta(3)\rho^{-3}. \] Observe that it is possible to choose $\eta$ and $\rho>0$ satisfying $\eta=o(\rho_0)$ and $\rho=o(\rho_0)$. In this case, the above asymptotic formula implies that $-f_x(\rho,\rho)>-f_x(\rho_0,0)=n$ for large enough $n$.
Similarly, we can also choose $\eta$ and $\rho>0$ such that $\eta=o(\rho_0)$, $\rho\to 0$, $\eta\rho^{-1}=o(1)$, and $\frac{\rho}{\rho_0}\to \infty$. This time, the asymptotic formula gives $-f_x(\rho,\rho)<n$. Hence, for $\eta=o(\rho_0)$ and $n$ large enough, the equation $n=-f_x(\rho,\rho)$ has a unique solution, which we denote by $\rho(\eta)$. Furthermore, it satisfies $\rho(\eta)\to 0$ and $\eta\rho(\eta)^{-1}=o(1)$ as $n\to \infty$. Therefore, we also have \[ n\sim 2r\zeta(3)\rho(\eta)^{-3}. \] The latter and the asymptotic estimate $n\sim 2r\zeta(3)\rho_0^{-3}$ yield $\rho(\eta)\sim \rho_0$ as $n\to \infty$ whenever $\eta=o(\rho_0)$. From this point onward, we assume that $\eta=o(\rho_0).$ Since $\rho(\eta)$ and $\rho_0$ are asymptotically equivalent as $n\to \infty$, so are $f_{xx}(\rho(\eta), \rho(\eta))$ and $f_{xx}(\rho_0, 0).$ Thus, by \eqref{eq:asym_Q_n}, we have \begin{equation}\label{eq:mgf_asymp} \frac{Q_n(u,v,w)}{Q_n(1,1,1)}\sim \exp\Big(f(\rho(\eta),\rho(\eta))-f(\rho_0,0)+n(\rho(\eta)-\rho_0)\Big). \end{equation} We want to obtain a precise asymptotic estimate of the exponent of the right-hand side that holds uniformly for $\eta=o(\rho_0)$. We will use a Taylor approximation of the function $f(\rho(\eta),\rho(\eta))$ when $\eta$ is near $0$ (noting that the parameters $a$, $b$, and $c$ are themselves functions of $\eta$). So to highlight the variable $\eta$, we define \begin{align*} g(x,y) & =\log F(e^{y\alpha},e^{y\beta},e^{y\gamma},e^{-x})\\ &=-\sum_{l=1}^{r}\sum_{m=1}^{\infty}\sum_{k=1}^{m}\log\left(1-e^{y(\gamma+\alpha k+m\beta l)-x m}\right)\\ &=\sum_{l=1}^{r}\sum_{m=1}^{\infty}\sum_{k=1}^{m}\Phi(xm-y(\gamma+\alpha k+m\beta l)). \end{align*} This is essentially the same as the function $f(x,y)$ (if $(a,b,c)$ is replaced by $(-\alpha, -\beta, -\gamma)$). However, in our case we have $a=-\alpha\eta \rho(\eta)^{-1}$, $b=-\beta\eta \rho(\eta)^{-1}$, and $c=-\gamma\eta \rho(\eta)^{-1}$. Hence, the meanings of the first and second variables will be different. For instance, we have the equation \begin{equation}\label{eq:f2g} g(\rho(\eta),\eta)=f(\rho(\eta),\rho(\eta))\ \ \text{and}\ \ g(\rho_0,0)=f(\rho_0,0). \end{equation} Similarly, the saddle-point equation $n=-f_x(\rho(\eta),\rho(\eta))$ becomes $n=-g_x(\rho(\eta),\eta).$ First, we apply implicit differentiation (with respect to $\eta$) to the latter equation, then by the mean value theorem, we get \[ \rho(\eta)-\rho_0= -\eta \, \frac{g_{xy}(\rho(\theta),\theta)}{g_{xx}(\rho(\theta),\theta)}, \] for some real number $\theta$ between $0$ and $\eta.$ We can estimate $g_{xy}(\rho(\theta),\theta)$ and $g_{xx}(\rho(\theta),\theta)$ via Mellin transform in the same way as in the proof of Lemma~\ref{lem:1}, but we need the Mellin transforms of the functions $g_{xx}(t,0)$ and $g_{xy}(t,0)$. The Mellin transform of $g_{xx}(t,0)$ is $r\zeta(s-3)\zeta(s-1)\Gamma(s)$. To determine the Mellin transform of $g_{xy}(t,0)$, first we write \begin{align*} g_{xy}(t,0) & \,\,=\,\,-\sum_{l=1}^{r}\sum_{m=1}^{\infty}\sum_{k=1}^{m}m(\gamma+\alpha k+m\beta l)\Phi''(tm)\\ & \,\,=\,\, -\sum_{m=1}^{\infty}m\left(rm\gamma+r\alpha\frac{m^2+m}{2}+m^2\beta\frac{r^2+r}{2}\right)\Phi''(tm)\\ & \,\,=\,\, -\frac{r}{2}\sum_{m=1}^{\infty}\left((\alpha+(r+1)\beta) m^3+(\alpha+2\gamma)m^2\right)\Phi''(tm). \end{align*} Since the Mellin transform of $\Phi''(t)$ is $\zeta(s-1)\Gamma(s)$, the Mellin transform of $g_{xy}(t,0)$ is \[ -\frac{r}{2}\left((\alpha+(r+1)\beta)\zeta(s-3) +(\alpha+2\gamma)\zeta(s-2)\right)\zeta(s-1)\Gamma(s).
\] We deduce the following asymptotic formulas as $\rho\to0^{+}$: \begin{align} g_{xx}(\rho,0) & =6r\zeta(3)\rho^{-4}+\mathcal{O}(\rho^{-2})\label{eq:asy_gxx},\\ g_{xy}(\rho,0) & =-3r\zeta(3)(\alpha+(r+1)\beta)\rho^{-4}-r\zeta(2)(\alpha+2\gamma)\rho^{-3}+\mathcal{O}(\rho^{-2})\label{eq:asy_gxy}. \end{align} The fact that $\rho(\theta)\sim \rho_0$ (since $|\theta|\leq |\eta|=o(\rho_0)$) and the argument in the proof of Lemma~\ref{lem:1} (with $\epsilon\to 0$) imply \begin{equation}\label{eq:dif_rho} \rho(\eta)-\rho_0\sim -\eta \, \frac{g_{xy}(\rho_0,0)}{g_{xx}(\rho_0,0)}=\mathcal{O}(|\eta|)=o(\rho_0). \end{equation} Similarly, we have the following estimate for the third partial derivatives \[ \frac{\partial^k}{\partial x^k}\frac{\partial^l}{\partial y^l} g (x,y)\Big|_{(x,y)=(\rho(\theta),\theta)}=\mathcal{O}\Big(\rho_0^{-5}\Big), \] uniformly for $\theta=o(\rho_0)$, and for any nonnegative integers $k$ and $l$ such that $k+l=3$. This shows that if $\eta=o(\rho_0)$, then we have the Taylor approximation \begin{multline*} g(\rho(\eta),\eta) = g(\rho_0,0)+g_{y}(\rho_0,0)\eta+g_x(\rho_0,0)(\rho(\eta)-\rho_0)\\ + \frac{1}{2}\Big(g_{yy}(\rho_0,0) \, \eta^2+2g_{xy}(\rho_0,0) \, \eta(\rho(\eta)-\rho_0)+g_{xx}(\rho_0,0) \, (\rho(\eta)-\rho_0)^2\Big)+\mathcal{O}(|\eta|^3\rho_0^{-5}), \end{multline*} where the implied constant in the error term is independent of $n$. Using \eqref{eq:dif_rho} and the saddle-point equation $n=-g_x(\rho_0,0)$, we deduce that \begin{multline*} g(\rho(\eta),\eta)-g(\rho_0,0)+n(\rho(\eta)-\rho_0) =\\ g_y(\rho_0,0)\eta + \left(\frac{g_{yy}(\rho_0,0)g_{xx}(\rho_0,0)-(g_{xy}(\rho_0,0))^2}{g_{xx}(\rho_0,0)}\right)\frac{\eta^2}{2}+\mathcal{O}(|\eta|^3\rho_0^{-5}). \end{multline*} Let us define \begin{equation}\label{eq:def_mu_sigma} \sigma_n^2=\frac{g_{yy}(\rho_0,0)g_{xx}(\rho_0,0)-(g_{xy}(\rho_0,0))^2}{g_{xx}(\rho_0,0)} \, \text{ and }\, \mu_n=g_y(\rho_0,0), \end{equation} and we choose $\eta=\frac{t}{\sigma_n}$ where $t$ is a fixed real number. Then our estimate \eqref{eq:mgf_asymp} becomes \begin{equation}\label{eq:quotient_Q} \frac{Q_n(u,v,w)}{Q_n(1,1,1)}\sim \exp\left(\frac{\mu_n}{\sigma_n}\ t+\frac{t^2}{2}+\mathcal{O}(\sigma_n^{-3}\rho_0^{-5})\right), \end{equation} as $n\to\infty$ (this is valid as long as $\eta=\frac{t}{\sigma_n}=o(\rho_0)$, but we do not even know at this stage whether $\sigma_n^2$ is positive). Hence, let us estimate $\sigma_n^2$. Assume first that $(\alpha,\beta)\neq (0,0)$. Once again, by the Mellin transform technique, we have \begin{equation}\label{eq:asy_gyy} g_{yy}(\rho_0,0) \sim r\left(2\alpha^2+3(r+1)\alpha\beta+(r+1)(2r+1)\beta^2\right)\zeta(3)\rho_0^{-4}. \end{equation} Putting the estimates \eqref{eq:asy_gxx}, \eqref{eq:asy_gxy} and \eqref{eq:asy_gyy} into the formula for $\sigma_n^2$ in \eqref{eq:def_mu_sigma}, we have \begin{equation}\label{eq:var_rho} \sigma_n^2\sim \left(\frac{r\zeta(3)}{2}\, \alpha^2+\frac{r(r^2-1)\zeta(3)}{2}\, \beta^2\right)\rho_0^{-4}. \end{equation} Hence, $\eta=\mathcal{O}(\rho_0^2)$ and the estimate \eqref{eq:quotient_Q} becomes \[ \frac{Q_n(u,v,w)}{Q_n(1,1,1)}\sim \exp\left(\frac{\mu_n}{\sigma_n}\ t+\frac{t^2}{2}+\mathcal{O}(\rho_0)\right). \] Therefore, if $(\alpha,\beta)\neq (0,0)$ and $t$ is any fixed real number, then the latter identity together with \eqref{eq:mgf} yields \[ e^{-\mu_nt\sigma_n^{-1}}\mathbb{E}\left(e^{t\, \sigma_n^{-1}(\alpha X_n+\beta Y_n+\gamma Z_n)}\right)\sim e^{t^2/2}, \] as $n\to \infty$.
This means, by Curtiss' theorem~\cite{Curtiss}, that \[ \frac{\alpha X_n+\beta Y_n+\gamma Z_n-\mu_n}{\sigma_n}\overset{\mathrm{d}}{\to} \mathcal{N}(0,1) \ \ \text{as}\ \ n\to\infty. \] To obtain the asymptotic formulas for $\sigma_n^2$ and $\mu_n$, recall from the saddle-point equation and \eqref{eq:mellin} (with $\epsilon=0$) that $n= 2r\zeta(3)\rho_0^{-3}-\frac{1}{12}r\rho_0^{-1}+\mathcal{O}(1)$. Inverting this yields \begin{equation}\label{eq:rhoton} \rho_0^{-1}=\frac{n^{1/3}}{(2r\zeta(3))^{1/3}}+\mathcal{O}(n^{-1/3}). \end{equation} So we first estimate $\sigma_n^2$ and $\mu_n$ in terms of $\rho_0$, then use the above to get the asymptotic formulas in terms of $n$. Such an estimate for $\sigma_n^2$ is already given in \eqref{eq:var_rho}. The estimate of $\mu_n$ can be obtained easily from the Mellin transform of $g_y(t,0)$, which is \[ \frac{r}{2}\left(\left(\alpha+(r+1)\beta\right)\zeta(s-2)+\left(\alpha+2\gamma\right)\zeta(s-1)\right)\zeta(s)\Gamma(s). \] By a straightforward calculation, we have \[ \sigma_n^2\sim \frac{\alpha^2+(r^2-1)\beta^2}{2^{7/3}(r\zeta(3))^{1/3}}\, n^{4/3}\, \text{ and }\, \mu_n= \left(\frac{1}{2}\alpha+\frac{r+1}{2} \beta\right)n+\frac{r^{1/3}\zeta(2)(\alpha+2\gamma)}{2^{5/3}(\zeta(3))^{2/3}}\, n^{2/3}+\mathcal{O}(n^{1/3}). \] If we now assume that $(\alpha,\beta)=(0,0)$ but $\gamma\neq 0$, then the Mellin transform of $g_{yy}(t,0)$ is $r\gamma^2\zeta(s-1)^2\Gamma(s)$ whose dominant singularity is a double pole at $s=2$. This leads to the asymptotic formula $ g_{yy}(\rho_0,0)= r\gamma^2\rho_0^{-2}\log\left(\rho_0^{-1}\right)+\mathcal{O}(\rho_0^{-2}). $ Hence, the formula in \eqref{eq:def_mu_sigma} gives \begin{equation}\label{eq:var_gamma} \sigma_n^2=r\gamma^2\rho_0^{-2}\log\left(\rho_0^{-1}\right)+\mathcal{O}(\rho_0^{-2}). \end{equation} Thus, for a fixed real number $t$, we have \[ \eta=\frac{t}{\sigma_n}=\mathcal{O}(\rho_0|\log \rho_0|^{-1/2}). \] So we still have our desired condition that $\eta=o(\rho_0)$. Moreover, applying \eqref{eq:asy_gxx} and \eqref{eq:asy_gxy} with $(\alpha,\beta)=(0,0)$, the estimate \eqref{eq:dif_rho} becomes \[ \rho(\eta)-\rho_0\sim -\eta \, \frac{g_{xy}(\rho_0,0)}{g_{xx}(\rho_0,0)}=\mathcal{O}(|\eta| \rho_0)=\mathcal{O}(\rho_0^2|\log \rho_0|^{-1/2}). \] On the other hand, for any $\theta=o(\rho_0)$, we have the following: \begin{align*} g_{xxx}(\rho(\theta),\theta)&\,\,=\,\,\mathcal{O}(\rho_0^{-5}), & g_{xxy}(\rho(\theta),\theta)\,\,=\,\,\mathcal{O}(\rho_0^{-4}),\\ g_{xyy}(\rho(\theta),\theta)&\,\,=\,\,\mathcal{O}(\rho_0^{-3}|\log \rho_0|), & g_{yyy}(\rho(\theta),\theta)\,\,=\,\,\mathcal{O}(\rho_0^{-3}). \end{align*} Therefore, \eqref{eq:quotient_Q} becomes \[ \frac{Q_n(u,v,w)}{Q_n(1,1,1)}\sim \exp\left(\frac{\mu_n}{\sigma_n}\ t+\frac{t^2}{2}+\mathcal{O}\left((\log n)^{-3/2}\right)\right). \] Just as in the previous case, this is enough to prove the central limit theorem. The asymptotic formula for the variance in terms of $n$ can be obtained from \eqref{eq:var_gamma} using \eqref{eq:rhoton}. The proof of Proposition~\ref{propCLT} is complete. \bibliographystyle{amsplain-nodash}
\section*{Acknowledgements} \noindent We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC (China); CNRS/IN2P3 and Region Auvergne (France); BMBF, DFG, HGF and MPG (Germany); SFI (Ireland); INFN (Italy); FOM and NWO (The Netherlands); SCSR (Poland); ANCS/IFA (Romania); MinES, Rosatom, RFBR and NRC ``Kurchatov Institute'' (Russia); MinECo, XuntaGal and GENCAT (Spain); SNSF and SER (Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We also acknowledge the support received from the ERC under FP7. The Tier1 computing centres are supported by IN2P3 (France), KIT and BMBF (Germany), INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United Kingdom). We are thankful for the computing resources put at our disposal by Yandex LLC (Russia), as well as to the communities behind the multiple open source software packages that we depend on. \section{Suppression of background from other {\boldmath \ensuremath{\Pb}\xspace}-hadron decays} \label{subsec::addsel} A small background from \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \phi}\xspace decays, where one of the kaons from the $\phi$ is misidentified as a pion, is found to contaminate the signal. Candidate \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decays are therefore required to be outside the window defined by ${1012.5 < M(\ensuremath{K^+K^-}\xspace) < 1026.5\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace}$ and ${5324 < M(\ensuremath{K^+K^-}\xspace\KK) < 5424 \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace}$ in the \ensuremath{K^+K^-}\xspace and $\ensuremath{K^+K^-}\xspace\KK$ invariant masses when the mass hypothesis of the only pion in the decay is changed to that of a kaon. In simulated events this selection removes $0.12 \%$ of the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace signal decays and does not affect the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decay mode. Other possible reflections, such as \ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace \ensuremath{\Kbar^{*0}}\xspace}\xspace decays, are found to be negligible. In order to remove background from $\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace D_s^{\mp}(\phi\pi^{\mp}) K^{\pm}$ decays when the $\pi^{\mp}$ and the $K^{\pm}$ mesons form a $\brabar{K}^{*0}$ candidate, events with the invariant mass of the $\ensuremath{K^+K^-}\xspace\pi^{\mp}$ system within ${1953.5 < M(\ensuremath{K^+K^-}\xspace\pi^{\mp}) < 1983.5 \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace}$, consistent with the known \ensuremath{\D^+_\squark}\xspace mass~\cite{Beringer:1900zz}, are excluded. Background from $b$-hadron decays containing a misidentified proton has also been considered. For candidate \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decays, the kaon with the largest \ensuremath{\mathrm{DLL}_{\proton\kaon}}\xspace is assigned the proton mass and the four-body invariant mass is recomputed.
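All of these vetoes amount to recomputing invariant masses under an alternative mass hypothesis. The following sketch illustrates the operation for the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \phi}\xspace veto; the track three-momenta and the array layout are invented for illustration, and this is not LHCb software.
\begin{verbatim}
# Sketch of a mass-hypothesis swap (illustrative; not LHCb software).
import numpy as np

M_K, M_PI = 493.677, 139.570   # charged kaon and pion masses in MeV/c^2

def inv_mass(p3s, masses):
    # invariant mass of a set of tracks from 3-momenta and mass hypotheses
    p3s = np.asarray(p3s, float)
    E = np.sqrt((p3s**2).sum(axis=1) + np.asarray(masses, float)**2)
    ptot = p3s.sum(axis=0)
    return np.sqrt(E.sum()**2 - (ptot**2).sum())

# K+ K- K- pi+ candidate; the three-momenta below are made-up numbers
trks = [[1200., 300., 9000.], [-800., 150., 7000.],
        [500., -400., 11000.], [300., 250., 4000.]]

m4_nominal = inv_mass(trks, [M_K, M_K, M_K, M_PI])
m2_swap = inv_mass(trks[2:], [M_K, M_K])        # (K-, pi+ -> K+) pair
m4_swap = inv_mass(trks, [M_K, M_K, M_K, M_K])  # pi+ given the kaon mass
veto = (1012.5 < m2_swap < 1026.5) and (5324. < m4_swap < 5424.)
\end{verbatim}
The proton-hypothesis test described above is the same operation, with the kaon mass replaced by the proton mass for the kaon with the largest \ensuremath{\mathrm{DLL}_{\proton\kaon}}\xspace.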
The largest potential background contribution arises from $\ensuremath{\Lbar^0_\bquark}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{K^+K^-}\xspace \ensuremath{\overline \proton}\xspace \pi^+$, where the antiproton is misidentified as the kaon originating from the \ensuremath{\Kbar^{*0}}\xspace meson, and $\ensuremath{\L^0_\bquark}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{K^+K^-}\xspace K^-\ensuremath{\Pp}\xspace$, where the proton is misidentified as the pion originating from the \ensuremath{\Kbar^{*0}}\xspace meson. Simulation shows that these decays produce wide four-body mass distributions which peak around $5450\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ and $5500\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$, respectively. This background contribution is considered in the fit model discussed below. Other \ensuremath{B^{0}_{(s)}}\xspace decay modes containing a $\ensuremath{\PLambda}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{\Pp}\xspace \pi^-$ decay or background from $\ensuremath{\L^+_\cquark}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{\Pp}\xspace\ensuremath{K^-\pi^+}\xspace$ decays are found to be negligible. \section{Determination of the {\boldmath\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace} branching fraction} \label{sec:Branching} The branching fraction is calculated with the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace channel as normalization. Both decays pass the same selection and share almost identical topologies. However, since the two decay channels can have different polarizations, their angular distributions may differ, which would cause a difference in their detection efficiencies. A factor \begin{equation} \lambda_{f_0}= \frac{\epsilon^{\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace}}{\epsilon^{\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace}} = \frac{1-0.29 f_0^{\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace}}{1-0.29 f_0^{\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace}} \nonumber \end{equation} \noindent is calculated, where $\epsilon^{\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace}$ and $\epsilon^{\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace}$ are the reconstruction efficiencies for the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace and \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decays, $f_0^{\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace}$ and $f_0^{\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace}$ their longitudinal polarization fractions, determined in Sect.~\ref{sec:Kpolarization} for the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace mode, and the factor 0.29 is obtained from simulation.
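For orientation, evaluating this factor with the longitudinal polarization fractions used in this paper ($f_0 = 0.494 \pm 0.036$ for \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace~\cite{BabarPhiKst2008} and $f_0 = 0.51 \pm 0.15 \pm 0.07$ for \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace, determined below) together with a toy Gaussian error propagation reproduces the value quoted in Table~\ref{table_BRinput}; the sketch is illustrative and is not the analysis code.
\begin{verbatim}
# Toy evaluation of lambda_f0 with Gaussian error propagation
# (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
f0_bd = rng.normal(0.494, 0.036, 1_000_000)
f0_bs = rng.normal(0.51, np.hypot(0.15, 0.07), 1_000_000)

lam = (1 - 0.29 * f0_bd) / (1 - 0.29 * f0_bs)
print(f"lambda_f0 = {lam.mean():.2f} +- {lam.std():.2f}")  # ~1.01 +- 0.06
\end{verbatim}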
The value of ${\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace)$ is computed from \begin{equation} {\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace) = \lambda_{f_0} \times \frac{f_d}{f_s} \times {\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace) \times \frac{N_{\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace}}{N_{\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace}}, \label{BRformulaBsBd} \end{equation} \noindent where $N_{\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace}$ and $N_{\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace}$ are the numbers of \ensuremath{\B^0_\squark}\xspace and \ensuremath{\B^0}\xspace decays, respectively, and $f_d/f_s = 3.75 \pm 0.29$~\cite{LHCbfsfd} is the ratio of hadronization fractions needed to account for the different production rates of \ensuremath{\B^0}\xspace and \ensuremath{\B^0_\squark}\xspace mesons. With the values given in Table~\ref{table_BRinput}, the result, \[ {\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace) = (1.10 \pm 0.24) \times 10^{-6}, \] \noindent is obtained, where only the statistical uncertainty is shown. \begin{table} \begin{center} \caption{\small Input values for the branching fraction computation.} \begin{tabular}{cc} \hline \hline Parameter & Value \\ \hline $\lambda_{f_0}$ & $ 1.01 \pm 0.06$ \\ $N_{\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace}$ & $1000 \pm 32\:\:\:\:$ \\ $N_{\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace}$ & $30 \pm 6\:\:$ \\ ${\cal B}(\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace)$ & $(9.8 \pm 0.6) \times 10^{-6}$~\cite{Beringer:1900zz} \\ \hline \hline \end{tabular} \label{table_BRinput} \end{center} \end{table} As a cross-check, a different decay mode, \decay{\Bd}{\jpsi\Kstarz}, with $J/\psi \ensuremath{\rightarrow}\xspace \mu^+ \mu^-$, has been used as a normalization channel. Special requirements were imposed to harmonize the selection of this reference channel with that of the signal. The obtained result is fully compatible with the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace-based value. \section{Summary and conclusions} \label{sec:Conclusions} A total of $30 \pm 6$ $\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)$ candidates have been observed within the mass windows $1012.5 < M(\ensuremath{K^+K^-}\xspace) < 1026.5\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ and $746 < M(\ensuremath{K^-\pi^+}\xspace)< 1046\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$. The result translates into a significance of $6.2\,\sigma$.
The analysis of the \ensuremath{K^+K^-}\xspace and the \ensuremath{K^-\pi^+}\xspace mass distributions is consistent with $(84 \pm 2) \%$ of the signal originating from resonant $\phi$ and \ensuremath{\Kbar^{*0}}\xspace mesons. The significance of the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace resonant contribution is calculated to be $6.1\,\sigma$. The branching fraction of the decay is measured to be \[ {\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace) = \left(1.10 \pm 0.24\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.14\,\ensuremath{\mathrm{(syst)}}\xspace \pm 0.08\left(\frac{f_d}{f_s}\right)\right) \times 10^{-6}, \] \noindent using the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decay as a normalization channel. This result is roughly three times the theoretical expectation in QCD factorization of $(0.4 \, {}^{+0.5}_{-0.3}) \times 10^{-6}$~\cite{Benm:2007rf} and larger than the perturbative QCD value of ${(0.65 \, {}^{+0.33}_{-0.23}) \times 10^{-6}}$~\cite{Ali:2007ff}, although the values are compatible within $1\,\sigma$. The result is also higher than the expectation of ${\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace) \times |V_{td}|^2/|V_{ts}|^2$. Better precision on both the theoretical and experimental values would allow this channel to serve as a probe for physics beyond the SM. An angular analysis of the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decay results in the polarization fractions and phase difference \begin{align} f_0 &= \phantom{-}0.51 \pm 0.15\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.07\,\ensuremath{\mathrm{(syst)}}\xspace, \nonumber \\ f_\parallel &= \phantom{-}0.21 \pm 0.11\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.02\,\ensuremath{\mathrm{(syst)}}\xspace, \nonumber \\ \cos\delta_{\parallel} &= -0.18 \pm 0.52\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.29\,\ensuremath{\mathrm{(syst)}}\xspace. \nonumber \end{align} \noindent The small value obtained for the longitudinal polarization fraction follows the trend of the $b \ensuremath{\rightarrow}\xspace s$ penguin decays \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace, \ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace \ensuremath{\Kbar^{*0}}\xspace}\xspace and \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \phi}\xspace. The comparison with the decay \ensuremath{B^0\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace \ensuremath{\Kbar^{*0}}\xspace}\xspace, where $f_0 = 0.80^{+0.12}_{-0.13}$ ~\cite{PhysRevLett.100.081801}, shows a $2\,\sigma$ discrepancy. This is very interesting since the loop-mediated amplitudes of each decay differ only in the flavour of the spectator quark. 
The result is also compatible with the longitudinal polarization fraction $f_0= 0.40\pm 0.14$ measured in $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \rho^0 \ensuremath{\kaon^{*0}}\xspace$ decays~\cite{Lees:2011dq}, the penguin amplitude of which is related to \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace by $d \leftrightarrow s$ exchange. Finally, the result is smaller than the prediction of perturbative QCD, $f_0 = 0.712 \, {}^{+0.042}_{-0.048}$, given in Ref.~\cite{Ali:2007ff}. \section{Detector and software} \label{sec:Detector} The \mbox{LHCb}\xspace detector~\cite{Alves:2008zz} is a single-arm forward spectrometer covering the \mbox{pseudorapidity} range $2<\eta <5$, designed for the study of particles containing \ensuremath{\Pb}\xspace or \ensuremath{\Pc}\xspace quarks. The detector includes a high precision tracking system consisting of a silicon-strip vertex detector surrounding the $pp$ interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about $4{\rm\,Tm}$, and three stations of silicon-strip detectors and straw drift tubes placed downstream. The combined tracking system provides a momentum measurement with relative uncertainty that varies from 0.4\% at 5\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace to 0.6\% at 100\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace, and impact parameter resolution of 20\ensuremath{\,\upmu\rm m}\xspace for tracks with high transverse momentum (\mbox{$p_{\rm T}$}\xspace). Charged hadrons are identified using two ring-imaging Cherenkov (RICH) detectors~\cite{Adinolfi:2012an}. Photon, electron and hadron candidates are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers~\cite{Alves:2012ey}. The trigger~\cite{Aaij:2012me} consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. The software trigger used in this analysis requires a two-, three- or four-track secondary vertex with a high sum of the \mbox{$p_{\rm T}$}\xspace of the tracks and significant displacement from the primary $pp$ interaction vertices~(PVs). At least one track should have $\mbox{$p_{\rm T}$}\xspace > 1.7\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and impact parameter \ensuremath{\chi^2}\xspace~(\ensuremath{\ensuremath{\chi^2}\xspace_{\IP}}\xspace) with respect to all primary interactions greater than 16. The \ensuremath{\ensuremath{\chi^2}\xspace_{\IP}}\xspace is defined as the difference between the \ensuremath{\chi^2}\xspace of a PV reconstructed with and without the considered track. A multivariate algorithm~\cite{Gligorov:arXiv1210.6861} is used for the identification of secondary vertices consistent with the decay of a \ensuremath{\Pb}\xspace hadron. In the simulation, $pp$ collisions are generated using \mbox{\textsc{Pythia}}\xspace~6.4~\cite{Sjostrand:2006za} with a specific \mbox{LHCb}\xspace configuration~\cite{Belyaev:1307917}. Decays of hadronic particles are described by \mbox{\textsc{EvtGen}}\xspace~\cite{Lange:2001uf}, in which final state radiation is generated using \mbox{\textsc{Photos}}\xspace~\cite{Golonka:2005pn}. 
The interaction of the generated particles with the detector and its response are implemented using the \mbox{\textsc{Geant4}}\xspace toolkit~\cite{Allison:2006ve, *Agostinelli:2002hh} as described in Ref.~\cite{LHCb-PROC-2011-006}. \section{Introduction} \label{sec:Introduction} The measurement of \ensuremath{C\!P}\xspace asymmetries in flavour-changing neutral-current processes provides a crucial test of the Standard Model (SM). In particular, loop-mediated (penguin) decays of $B$ mesons are sensitive probes for physics beyond the SM. Transitions between the quarks of the third and second generation ($b \ensuremath{\rightarrow}\xspace s$) or between the quarks of the third and first generation ($b \ensuremath{\rightarrow}\xspace d$) are complementary since SM \ensuremath{C\!P}\xspace violation is tiny in $b \ensuremath{\rightarrow}\xspace s$ transitions and an observation of \ensuremath{C\!P}\xspace violation would indicate physics beyond the SM. For $b \ensuremath{\rightarrow}\xspace d$ transitions the SM branching fraction is an order of magnitude smaller than $b \ensuremath{\rightarrow}\xspace s$ due to the relative suppression of $|V_{td}|^2/|V_{ts}|^2$. It is particularly useful to have experimental information on pairs of channels related by $d \leftrightarrow s$ exchange symmetry to test that the QCD contribution to the decay is independent of the initial \ensuremath{\B^0}\xspace or \ensuremath{\B^0_\squark}\xspace meson. The \mbox{BaBar}\xspace and \mbox{Belle}\xspace experiments have performed measurements of $b \ensuremath{\rightarrow}\xspace sq\overline{q}$ processes, such as $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \KS$, $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \eta^{\prime} \KS$ and $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace f_0 \KS$~\cite{Abe:2003yt,Aubert:2005iy,Aubert:2005ja}, and of $b \ensuremath{\rightarrow}\xspace dq\overline{q}$ penguin diagrams, such as $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \KS \KS$ and $B^+ \ensuremath{\rightarrow}\xspace K^+ \KS$~\cite{Aubert:2006gm,Nakahama:2007dg}. These modes contain pseudo-scalar or scalar mesons in their final state whereas $\ensuremath{B^{0}_{(s)}}\xspace \ensuremath{\rightarrow}\xspace VV^{\prime}$ decays, where $V$ and $V^{\prime}$ are light vector mesons, provide a valuable additional source of information because the angular distributions give insight into the physics of hadronic $B$ meson decays and the interplay between the strong and weak interactions they involve. From the V$-$A structure of the weak interaction and helicity conservation in the strong interaction, the final state of these decays is expected to be highly longitudinally polarized. This applies to both tree and penguin decays. The \mbox{BaBar}\xspace and \mbox{Belle}\xspace experiments have confirmed that longitudinal polarization dominates in $b \ensuremath{\rightarrow}\xspace u$ tree processes such as $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \rho^+ \rho^-$~\cite{Abe:2007ez,Aubert:2007nua}, $\ensuremath{\Bu}\xspace \ensuremath{\rightarrow}\xspace \rho^0 \rho^+$~\cite{Zhang:2003up,Aubert:2009it} and $\ensuremath{\Bu}\xspace \ensuremath{\rightarrow}\xspace \omega \rho^+$~\cite{Aubert:2009sx}. 
However, measurements of the polarization in decays with both tree and penguin contributions, such as $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \rho^0 \ensuremath{\kaon^{*0}}\xspace$ and $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \rho^- \ensuremath{\kaon^{*+}}\xspace$~\cite{Lees:2011dq} and in $b \ensuremath{\rightarrow}\xspace s$ penguin decays, \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace~\cite{Chen:2005zv,BabarPhiKst2008}, \ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace \ensuremath{\Kbar^{*0}}\xspace}\xspace~\cite{Aaij:2011rf} and \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \phi}\xspace~\cite{Aaltonen:2011rs,LHCb-PAPER-2012-004,*Aaij:2013qha}, indicate a low value of the longitudinal polarization fraction comparable with, or even smaller than, the transverse fraction. The $\ensuremath{B^{0}_{(s)}}\xspace \ensuremath{\rightarrow}\xspace VV^{\prime}$ decays can be described by models based on perturbative QCD, or QCD factorization and SU(3) flavour symmetries. Whilst some authors predict a longitudinal polarization fraction $f_0\mathord{\sim}0.9$ for tree-dominated and $\mathord{\sim}0.75$ for penguin decays~\cite{Ali:1979al,*Suzuki:2002yk,Chen:2002pz}, other studies have proposed different mechanisms such as penguin annihilation~\cite{Benm:2007rf,Cheng:2008gxa} and QCD rescattering~\cite{Cheng:2004ru} to accommodate smaller longitudinal polarization fractions $\mathord{\sim}0.5$, although the predictions have large uncertainties. A review on the topic of polarization in $B$ decays can be found in Ref.~\cite{Beringer:1900zz}. There are only two other $\ensuremath{B^{0}_{(s)}}\xspace \ensuremath{\rightarrow}\xspace VV^{\prime}$ penguin modes that correspond to $b \ensuremath{\rightarrow}\xspace d$ loops. The first is the \ensuremath{B^0\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace \ensuremath{\Kbar^{*0}}\xspace}\xspace decay. The \mbox{BaBar}\xspace collaboration reported the discovery of this channel with $6\,\sigma$ significance and a measurement of its branching fraction $\BF(\ensuremath{B^0\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace \ensuremath{\Kbar^{*0}}\xspace}\xspace)=(1.28\,{}^{+0.35}_{-0.30}\pm0.11) \times 10^{-6}$~\cite{PhysRevLett.100.081801}. This is in tension with the results of the \mbox{Belle}\xspace collaboration that published an upper limit of $\BF(\ensuremath{B^0\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace \ensuremath{\Kbar^{*0}}\xspace}\xspace)<0.8 \times 10^{-6}$ at the $90\%$ confidence level~\cite{PhysRevD.81.071101}. The \mbox{BaBar}\xspace publication also reported a measurement of the longitudinal polarization ${f_0 = 0.80^{+0.12}_{-0.13}}$~\cite{PhysRevLett.100.081801}, which is large compared to those from \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace ($f_0 = 0.494 \pm 0.036$~\cite{BabarPhiKst2008}), \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \phi}\xspace ($f_0 = 0.365 \pm 0.025$~\cite{LHCb-PAPER-2012-004}) and \ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace \ensuremath{\Kbar^{*0}}\xspace}\xspace ($f_0 = 0.31 \pm 0.13$~\cite{Aaij:2011rf}). 
The mode \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace is the other $b \ensuremath{\rightarrow}\xspace d$ penguin decay into vector mesons that has not previously been observed. This decay is closely linked to \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace, differing in the spectator quark and the final quark in the loop, as shown in Fig.~\ref{fig:penguinBphiKst}.\footnote{Both the decays \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace and \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace could also have contributions from QCD singlet-penguin amplitudes~\cite{Benm:2007rf}.} From the aforementioned relation between $b \ensuremath{\rightarrow}\xspace s$ and $b \ensuremath{\rightarrow}\xspace d$ transitions, their relative branching fractions should scale as $|V_{td}|^2/|V_{ts}|^2$ and their polarization fractions are expected to be very similar. Moreover, since both decays share the same final state, except for charge conjugation, \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace is the ideal normalization channel for the determination of the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace branching fraction. The \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decay is also related to \ensuremath{B^0\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace \ensuremath{\Kbar^{*0}}\xspace}\xspace, since their loop diagrams only differ in the spectator quark (${\ensuremath{\Ps}\xspace \text{ instead of } \ensuremath{\Pd}\xspace}$), although it has been suggested that {\rm S-wave}\xspace interference effects might break the SU(3) symmetry relating the two channels~\cite{Gronau:1995hn}. Finally, it is also interesting to explore the relation of the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decay with the $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \rho^0 \ensuremath{\kaon^{*0}}\xspace$ mode since the penguin loop diagrams of these modes are related by the $d \leftrightarrow s$ exchange. The $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \rho^0 \ensuremath{\kaon^{*0}}\xspace$ decay also has a $b \ensuremath{\rightarrow}\xspace u$ tree diagram, but it is expected that the penguin contribution is dominant, since the branching fraction is comparable to that of the pure penguin \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decay.
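As a rough orientation on the expected suppression, the sketch below scales ${\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace)$ by $|V_{td}|^2/|V_{ts}|^2$; the value $|V_{td}/V_{ts}| \approx 0.21$ is a typical global-fit number and is not taken from this paper.
\begin{verbatim}
# Naive |Vtd/Vts|^2 scaling of B(Bd -> phi K*0) (illustrative; the
# value |Vtd/Vts| ~ 0.21 is a typical global-fit number, not from
# this paper).
vtd_over_vts = 0.21
br_bd = 9.8e-6
print(f"expected B(Bs -> phi K*0) ~ {br_bd * vtd_over_vts**2:.1e}")  # ~4e-7
\end{verbatim}
This naive estimate is close to the QCD factorization prediction quoted below.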
The most stringent previous experimental limit on the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace branching fraction is ${{\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace)<1.0 \times 10^{-3}}$ at the $90\%$ confidence level~\cite{Beringer:1900zz}, whereas calculations based on the QCD factorization framework predict a value of ${(0.4\,{}^{+0.5}_{-0.3}) \times 10^{-6}}$~\cite{Benm:2007rf} while in perturbative QCD a value of ${(0.65 \, {}^{+0.33}_{-0.23}) \times 10^{-6}}$~\cite{Ali:2007ff} is obtained. The precise determination of the branching fraction tests these models and provides a probe for physics beyond the SM. The study of the angular distributions in the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace channel provides a measurement of its polarization. In Ref.~\cite{Ali:2007ff}, a prediction of $f_0 = 0.712 \, {}^{+0.042}_{-0.048}$ is made for the longitudinal polarization fraction, using the perturbative QCD approach, which can be compared to the experimental result. In this paper, the first observation of the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decay, with $\phi \ensuremath{\rightarrow}\xspace \ensuremath{K^+K^-}\xspace$ and $\ensuremath{\Kbar^{*0}}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{K^-\pi^+}\xspace$, is reported, and the determination of its branching fraction and polarization fractions is presented. The study is based on data collected by the \mbox{LHCb}\xspace experiment at CERN from the $\sqrt{s} = 7\,$\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace proton-proton collisions of \mbox{LHC}\xspace beams. The dataset corresponds to an integrated luminosity of 1.0\,fb$^{-1}$. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.5]{./figs/penguinBphiKst.pdf} \includegraphics[scale=0.5]{./figs/penguinBdPhiKst.pdf} \caption{\small Feynman diagrams for the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace and the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decays.} \label{fig:penguinBphiKst} \end{center} \end{figure} \section{Fit to the four-body mass spectrum} \label{sec:mass-spectrum} The sample of $1277$ candidates, selected as described in Sections~\ref{sec:selection} and~\ref{subsec::addsel}, contains many \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decays whereas only a small contribution from \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decays is anticipated. Both signals are parametrized with identical shapes, differing only in the mass shift of $87.13\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ between the \ensuremath{\B^0}\xspace and \ensuremath{\B^0_\squark}\xspace mesons~\cite{Beringer:1900zz}, which is fixed in the fit. The signal shapes are described by the sum of Crystal Ball (CB)~\cite{Skwarnicki:1986xj} and Gaussian\xspace functions that share a common mean.
The CB function, which contains most of the signal, is a combination of a Gaussian\xspace function with a power law tail, accounting for the intrinsic detector resolution and the radiative tail toward low masses, respectively. The Gaussian\xspace shape describes events reconstructed with worse mass resolution, which produce a contamination of \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decays in the region of the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace signal peak. The relation between the Gaussian\xspace and CB resolutions, $\sigma_{{\rm G}}$ and $\sigma_{{\rm CB}}$, respectively, is found to be \begin{equation} \label{eq:sigmas} \sigma_{{\rm G}} = \sqrt{\sigma_{{\rm CB}}^2 + (24.74\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)^2}, \end{equation} from a data sample of $25 \times 10^{3}$ \decay{\Bd}{\jpsi\Kstarz} decays. This channel is topologically very similar to the signal and is almost background free. The fit to this sample also provides the power law exponent of the CB function tail, which is subsequently fixed in the $\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)$ and $\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^+\pi^-}\xspace)$ mass models. The parameter that governs the transition from the Gaussian shape to the power law function in the CB function is unrestrained in the fit. The other unrestrained fit parameters are the central $B$ meson mass, the width of the CB function, the fractional yield contained in the Gaussian\xspace function and the total signal yield. In addition to the \ensuremath{\B^0}\xspace and \ensuremath{\B^0_\squark}\xspace signal shapes, three more components are included. The first accounts for partially reconstructed $B$ meson decays into $\phi$ and $K$ or $K^*$ excited states where a pion has been lost. This is described by a convolution of the ARGUS shape~\cite{Albrecht:1990cs} with a Gaussian\xspace distribution. The second contribution is due to $\ensuremath{\L^0_\bquark}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{K^+K^-}\xspace K^-\ensuremath{\Pp}\xspace$ and $\ensuremath{\Lbar^0_\bquark}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{K^+K^-}\xspace \ensuremath{\overline \proton}\xspace \pi^+$ decays and is modelled with a histogram obtained from simplified simulations. The third contribution is an exponential function to account for combinatorial background. The data passing the selection criteria are fitted using an extended unbinned maximum likelihood fit. The invariant mass distribution of the candidates, together with the fit contribution, is shown in Fig.~\ref{fig::fig_final}. The yields of ${\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)}$ and ${\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^+\pi^-}\xspace)}$ decays are $30 \pm 6$ and $1000 \pm 32$, respectively. The fit model is validated with $10,000$ pseudo-experiments, generated with simplified simulations, which show that the signal yields are unbiased. Table~\ref{tab::fit_results} summarizes the signal and background contributions resulting from the fit.
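A minimal sketch of this signal parametrization is given below: a CB function plus a Gaussian\xspace with a common mean, with the two widths tied by Eq.~\ref{eq:sigmas}. The tail parameters and the Gaussian\xspace fraction are placeholders rather than the fitted values; only $\sigma_{\rm CB}$, the $24.74\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ constant and the $87.13\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ mass shift are taken from the text.
\begin{verbatim}
# Sketch of the signal shape: Crystal Ball + Gaussian with a common
# mean (unnormalised); alpha, n_tail and f_gauss are placeholders.
import numpy as np

def crystal_ball(x, mu, sigma, alpha, n):
    # Gaussian core with a power-law tail towards low masses
    t = (x - mu) / sigma
    a = (n / abs(alpha))**n * np.exp(-0.5 * alpha**2)
    b = n / abs(alpha) - abs(alpha)
    bt = np.maximum(b - t, 1e-9)   # clip the unused branch of np.where
    return np.where(t > -alpha, np.exp(-0.5 * t**2), a * bt**(-n))

def signal_shape(x, mu, sigma_cb, alpha, n_tail, f_gauss):
    sigma_g = np.hypot(sigma_cb, 24.74)          # Eq. (eq:sigmas)
    gauss = np.exp(-0.5 * ((x - mu) / sigma_g)**2)
    return ((1 - f_gauss) * crystal_ball(x, mu, sigma_cb, alpha, n_tail)
            + f_gauss * gauss)

x = np.linspace(5150., 5550., 400)               # MeV/c^2
bd = signal_shape(x, 5279.6, 15.0, 1.5, 3.0, 0.1)
bs = signal_shape(x, 5279.6 + 87.13, 15.0, 1.5, 3.0, 0.1)  # fixed shift
\end{verbatim}
In the fit described above, the power-law exponent is fixed from the \decay{\Bd}{\jpsi\Kstarz} sample while the remaining shape parameters float; the sketch keeps all of them as explicit arguments.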
A likelihood ratio test is employed to assess the statistical significance of the ${\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)}$ signal yield. This is performed using $\sqrt{2\ln(\mathcal{L}_{\rm s+b}/\mathcal{L}_{\rm b})}$, where $\mathcal{L}_{\rm s+b}$ and $\mathcal{L}_{\rm b}$ are the maximum values of the likelihoods for the signal-plus-background and background-only hypotheses, respectively.\footnote{The applicability of this method has been verified from the parabolic behaviour of the ${\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)}$ signal yield profile of $-2\ln \mathcal{L}_{\rm s+b}$ about its minimum.} This calculation results in $6.3\,\sigma$ significance for the ${\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)}$ signal. The fit gives $\sigma_{\rm CB} = 15.0 \pm 1.1\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ for the invariant mass resolution. Integration in a $\pm 30\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ mass window yields $26.4 \pm 5.7$ signal candidates and $8.2 \pm 1.3$ background events, composed of $5.4 \pm 0.2$ from ${\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^+\pi^-}\xspace)}$, $2.1 \pm 1.3$ from \ensuremath{\L^0_\bquark}\xspace and $0.7 \pm 0.4$ from combinatorial contributions. In order to explore systematic effects in the signal yield originating in the fit model, two effects were considered. First, the amount of ${\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^+\pi^-}\xspace)}$ events under the ${\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)}$ signal is governed by the $24.74 \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ factor in Eq.~\ref{eq:sigmas}. Similarly, the contamination of misidentified \ensuremath{\L^0_\bquark}\xspace decays under the signal is controlled by a parametrized tail. An extended likelihood is built by multiplying the original likelihood function by Gaussian\xspace distributions of these two nuisance parameters with standard deviations of $20\%$ of their nominal values at which they are centred. The corresponding systematic uncertainty in the signal yield is obtained by performing a fit that maximizes this modified likelihood. The systematic contribution is calculated by subtracting the statistical uncertainty in quadrature and found to be $\pm 1.2$ events. Including this uncertainty results in a significance of $6.2\,\sigma$. Effects of other systematic uncertainties, discussed in Sect.~\ref{sec:Kpolarization}, have a negligible impact on the signal significance. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{figs/four-body-mass.pdf} \caption{\small Four-body $\ensuremath{K^+K^-}\xspace\ensuremath{K^-\pi^+}\xspace$ invariant mass distribution.
The points show the data, the blue solid line shows the overall fit, the solid dark red shaded region is the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace signal, the light blue shaded region corresponds to the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace signal, the grey dotted line is the combinatorial background and the green dashed and magenta dashed-dotted lines are the partially reconstructed and misidentified \ensuremath{\L^0_\bquark}\xspace backgrounds, respectively.} \label{fig::fig_final} \end{figure} \begin{table} \centering \caption{\small Results of the fit to the sample of selected candidates.} \begin{tabular}{cc} \hline \hline Contribution & Yield \\ \hline \vspace{-0.4cm} & \\ \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace & $\:\: 30 \pm 6$ \\ \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace & $1000 \pm 32$ \\ Partially reconstructed background & $\:\: 218 \pm 15$ \\ \ensuremath{\L^0_\bquark}\xspace background & $\:\: 13 \pm 8$ \\ Combinatorial background & $\:\: 10 \pm 6$ \\ \hline \hline \end{tabular} \label{tab::fit_results} \end{table} \section{Polarization analysis} \label{sec:Kpolarization} The $\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)$ decay proceeds via two intermediate spin-1 particles. The angular distribution of the decay is described by three transversity amplitudes $A_0$, $A_{\parallel}$ and $A_{\perp}$~\cite{Dighe:1995pd}. These can be obtained from the distribution of the decay products in three angles $\theta_1$, $\theta_2$ and $\varphi$, defined in the helicity frame. The convention for the angles is shown in Fig.~\ref{fig:anglesConv}. A flavour-averaged and time-integrated polarization analysis is performed assuming that the \ensuremath{C\!P}\xspace-violating phase is zero and that equal amounts of \ensuremath{\B^0_\squark}\xspace and \ensuremath{\Bbar^0_\squark}\xspace mesons are produced. Under these assumptions, the decay rate dependence on the polarization angles can be written as \begin{eqnarray} \label{formula:fullAngularEq} \frac{{\rm d}^3\Gamma}{{\rm d}\cos\theta_1 \, {\rm d}\cos\theta_2 \, {\rm d}\varphi} &\propto& |A_0|^2\cos^2\theta_1\cos^2\theta_2 + |A_{\parallel}|^2 \frac{1}{2}\sin^2\theta_1\sin^2\theta_2\cos^2 \varphi \\ &+& |A_{\perp}|^2\frac{1}{2}\sin^2\theta_1\sin^2\theta_2\sin^2\varphi + |A_0||A_{\parallel}|\cos\delta_{\parallel} \frac{1}{2\sqrt{2}}\sin 2\theta_1\sin 2\theta_2\cos\varphi. \nonumber \end{eqnarray} \noindent Additional terms accounting for the {\rm S-wave}\xspace and interference contributions, as in Ref.~\cite{BabarPhiKst2008}, are also considered. These terms are set to the values obtained for the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace sample. The polarization fractions are defined from the amplitudes as $f_j=|A_j|^2/(|A_0|^2+|A_{\parallel}|^2+|A_{\perp}|^2)$ (with $j=0,\parallel,\perp$). In addition to the polarization fractions, the cosine of the phase difference between $A_0$ and $A_{\parallel}$, $\cos\delta_\parallel$, is accessible in this study.
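The P-wave part of this decay rate translates directly into code. The sketch below is illustrative: it is written in terms of the polarization fractions, omits the {\rm S-wave}\xspace and interference terms as well as the acceptance correction, and its overall normalization is arbitrary.
\begin{verbatim}
# Direct transcription of the P-wave terms of Eq. (formula:fullAngularEq)
# in terms of polarization fractions; normalisation is arbitrary.
import numpy as np

def dGamma(cth1, cth2, phi, f0, fpar, cos_dpar):
    fperp = 1.0 - f0 - fpar
    s1sq, s2sq = 1.0 - cth1**2, 1.0 - cth2**2      # sin^2(theta_1,2)
    s2t1 = 2.0 * cth1 * np.sqrt(s1sq)              # sin(2 theta_1)
    s2t2 = 2.0 * cth2 * np.sqrt(s2sq)              # sin(2 theta_2)
    return (f0 * cth1**2 * cth2**2
            + 0.5 * fpar * s1sq * s2sq * np.cos(phi)**2
            + 0.5 * fperp * s1sq * s2sq * np.sin(phi)**2
            + np.sqrt(f0 * fpar) * cos_dpar
              * s2t1 * s2t2 * np.cos(phi) / (2.0 * np.sqrt(2.0)))

# evaluated at the central values measured below
w = dGamma(0.3, -0.5, 1.2, f0=0.51, fpar=0.21, cos_dpar=-0.18)
\end{verbatim}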
\begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{figs/angles_Bs.pdf} \caption{\small Definition of the angles in \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decays, where $\theta_1$ ($\theta_2$) is the $K^+$ ($K^-$) emission angle with respect to the direction opposite to the \ensuremath{\B^0_\squark}\xspace meson in the $\phi$ (\ensuremath{\Kbar^{*0}}\xspace) rest frame and $\varphi$ is the angle between the \ensuremath{\Kbar^{*0}}\xspace and $\phi$ decay planes in the \ensuremath{\B^0_\squark}\xspace rest frame.} \label{fig:anglesConv} \end{figure} The determination of the angular amplitudes depends on the spectrometer acceptance as a function of the polarization angles $\theta_1$ and $\theta_2$. The acceptance was found not to depend on $\varphi$. A parametrization of the acceptance as a function of $\theta_1$ and $\theta_2$ is calculated using simulated data and is used to correct the differential decay rate by scaling Eq.~\ref{formula:fullAngularEq}. Additionally, a small correction for discrepancies in the $p_{\rm T}$ spectrum and the trigger selection of the $B$ mesons between simulation and data is introduced. The data in a $\pm 30\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ window around the \ensuremath{\B^0_\squark}\xspace mass are fitted to the final angular distribution. The fit accounts for two additional ingredients: the tail of the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decays, which are polarized with a longitudinal polarization fraction of $f_0 = 0.494$~\cite{BabarPhiKst2008}, and the combinatorial background, parametrized from the distributions of events in the high-mass $B$ sideband $5450 < M(\ensuremath{K^+K^-}\xspace\ensuremath{K^-\pi^+}\xspace) < 5840\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ after relaxing the selection requirements. The latter accounts for both the combinatorial and misidentified \ensuremath{\L^0_\bquark}\xspace backgrounds. The systematic uncertainties in the determination of the angular parameters are calculated by modifying the analysis and computing the difference with respect to the nominal result. Three elements are considered. \begin{itemize} \item The uncertainty in the {\rm S-wave}\xspace fraction. This is computed by modifying the {\rm S-wave}\xspace contribution by $50\%$ of its value. This covers within $2\,\sigma$ an {\rm S-wave}\xspace fraction from 0 to $30\%$, consistent with that typically found in decays of $B$ mesons to final states containing a \ensuremath{\kaon^{*0}}\xspace meson. \item The spectrometer acceptance. This contribution is calculated by comparing the results obtained when considering or neglecting the above-mentioned $p_{\rm T}$ and trigger corrections to the acceptance. \item The combinatorial background. The background model derived from the $B$ mass sideband is replaced by a uniform angular distribution. \end{itemize} \begin{figure}[t!] \centering \includegraphics[scale=0.35]{figs/cth1.pdf} \includegraphics[scale=0.35]{figs/cth2.pdf} \caption{\small Result of the fit to the angular distribution of the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace candidates in (left) $\cos\theta_1$ and (right) $\cos\theta_2$.
The red dotted line corresponds to the combinatorial background under the \ensuremath{\B^0_\squark}\xspace signal, the green dashed line is the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace signal in the \ensuremath{\B^0_\squark}\xspace region and the grey dotted-dashed line corresponds to the sum of the {\rm S-wave}\xspace and the interference terms.} \label{fig:anglesBs} \end{figure} \noindent The different contributions to the systematic uncertainty are given in Table~\ref{tab:ang_syst} and the one-dimensional projections of the angular distributions are shown in Fig.~\ref{fig:anglesBs}. Other possible systematic sources, such as the uncertainty in the polarization parameters of the {\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace} decay, are found to be negligible. Considering all the above, the values obtained are \begin{align} f_0 &= \phantom{-}0.51 \pm 0.15\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.07\,\ensuremath{\mathrm{(syst)}}\xspace, \nonumber \\ f_\parallel &= \phantom{-}0.21 \pm 0.11\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.02\,\ensuremath{\mathrm{(syst)}}\xspace, \nonumber \\ \cos\delta_{\parallel} &= -0.18 \pm 0.52\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.29\,\ensuremath{\mathrm{(syst)}}\xspace. \nonumber \end{align} \noindent These results for the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decay are consistent with the values measured in \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decays of $f_0 = 0.494 \pm 0.036$, $f_\parallel = 0.212 \pm 0.035$ and $\cos\delta_{\parallel} = -0.74 \pm 0.10$~\cite{BabarPhiKst2008}. \begin{table}[t] \centering \caption{Systematic uncertainties of the angular parameters.} \begin{tabular}{cccc} \hline \hline Effect & $\Delta f_0$ & $\Delta f_\parallel$ & $\Delta \cos\delta_{\parallel}$ \\ \hline {\rm S-wave}\xspace & $0.07\phantom{0}$ & $0.02\phantom{0}$ & $0.29\phantom{0}$ \\ Acceptance & $0.007$ & $0.005$ & $0.002$ \\ Combinatorial background & $0.02\phantom{0}$ & $0.01\phantom{0}$ & $0.01\phantom{0}$ \\ \hline Total & $0.07\phantom{0}$ & $0.02\phantom{0}$ & $0.29\phantom{0}$ \\ \hline \hline \end{tabular} \label{tab:ang_syst} \end{table} \section{Determination of the S-wave contribution} \label{sec:Purity} The $\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)$ signal is expected to be mainly due to \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decays, although there are possible non-resonant contributions and \ensuremath{K^+K^-}\xspace and \ensuremath{K^-\pi^+}\xspace pairs from other resonances. To estimate the {\rm S-wave}\xspace contributions, it is assumed that the effect is the same for \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace and \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decays, therefore allowing the larger sample of \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decays to be used. The effect of this assumption is considered as a source of systematic uncertainty in Sect.~\ref{sec:syst}.
The \ensuremath{K^+K^-}\xspace invariant mass distribution for $\phi$ candidates within a $\pm 30\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ window of the known \ensuremath{\B^0}\xspace mass is described by a relativistic spin-1 Breit-Wigner distribution convolved with a Gaussian\xspace shape to account for the effect of resolution. A linear term is added to describe the {\rm S-wave}\xspace contribution. The purity resulting from this fit is $0.95 \pm 0.01$ in a $\pm7\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ window around the known $\phi$ mass. The \ensuremath{K^+\pi^-}\xspace pairs are parametrized by the incoherent sum of a relativistic spin-1 Breit-Wigner amplitude and a shape that describes non-resonant and $\ensuremath{\kaon^{*0}}\xspace(1430)$ {\rm S-wave}\xspace contributions introduced by the LASS experiment~\cite{BabarPhiKst2008,LASS:1988}. The fraction of events from \ensuremath{\kaon^{*0}}\xspace decays within a $\pm150\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ window around the \ensuremath{\kaon^{*0}}\xspace mass corresponds to a purity of $0.89 \pm 0.02$. When combining the \ensuremath{K^+K^-}\xspace and \ensuremath{K^+\pi^-}\xspace contributions, the total $\phi K^{*0}$ purity is found to be $0.84 \pm 0.02$. This purity can be translated into a p-value, quantifying the probability that the entire ${\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)}$ signal is due to decays other than $\phi \ensuremath{\Kbar^{*0}}\xspace$. After combining with the ${\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace (\ensuremath{K^+K^-}\xspace)(\ensuremath{K^-\pi^+}\xspace)}$ significance, the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decay is observed with $6.1\,\sigma$ significance. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{figs/masskkbs.pdf} \includegraphics[width=0.4\textwidth]{figs/masskpibs.pdf} \includegraphics[width=0.4\textwidth]{figs/masskkbd.pdf} \includegraphics[width=0.4\textwidth]{figs/masskpibd.pdf} \caption{\small Invariant mass distributions for (left) \ensuremath{K^+K^-}\xspace and (right) $K^{\mp}\pi^{\pm}$ pairs in a $\pm 30\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ window around the (top) \ensuremath{\B^0_\squark}\xspace and (bottom) \ensuremath{\B^0}\xspace mass. The solid blue line is the overall fit, the green dashed line corresponds to \ensuremath{\B^0}\xspace cross-feed into the \ensuremath{\B^0_\squark}\xspace mass window, the red dotted line is the {\rm S-wave}\xspace contribution and the light blue is the combinatorial background.} \label{fig:kkkpimasses} \end{figure} \section{Signal selection} \label{sec:selection} Signal \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace candidates are formed from $\phi \ensuremath{\rightarrow}\xspace \ensuremath{K^+K^-}\xspace$ and $\ensuremath{\Kbar^{*0}}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{K^-\pi^+}\xspace$ decays.\footnote{Inclusion of charge-conjugated processes is implied in this work, unless otherwise stated.} The pairs of charged particles in the $\phi \ensuremath{\rightarrow}\xspace \ensuremath{K^+K^-}\xspace$ and the $\ensuremath{\Kbar^{*0}}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{K^-\pi^+}\xspace$ candidates must combine to give invariant masses ${1012.5 < M(\ensuremath{K^+K^-}\xspace) < 1026.5\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace}$ and ${746 < M(\ensuremath{K^-\pi^+}\xspace)< 1046\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace}$, consistent with the known $\phi$ and \ensuremath{\Kbar^{*0}}\xspace masses~\cite{Beringer:1900zz}. Each of the four tracks is required to have $\mbox{$p_{\rm T}$}\xspace > 500\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace$ and \ensuremath{\ensuremath{\chi^2}\xspace_{\IP}}\xspace$>9$. Kaons and pions are distinguished by use of a log-likelihood algorithm that combines information from the RICH detectors and other properties of the event~\cite{Adinolfi:2012an}. The final state particles are identified by requiring that the difference in log-likelihoods of the kaon and pion mass hypotheses is \ensuremath{\mathrm{DLL}_{\kaon\pion}}\xspace$>2$ for each kaon candidate and $<0$ for the pion candidate. In addition, the difference in log-likelihoods of the proton and kaon hypotheses, \ensuremath{\mathrm{DLL}_{\proton\kaon}}\xspace, is required to be $<0$ for the kaon from the \ensuremath{\Kbar^{*0}}\xspace decay. This suppresses background from \ensuremath{\L^0_\bquark}\xspace decays. This requirement is not necessary for the kaons from the $\phi$ candidate owing to the narrow \ensuremath{K^+K^-}\xspace invariant mass window. The \ensuremath{K^-\pi^+}\xspace pair that forms the \ensuremath{\Kbar^{*0}}\xspace candidate is required to originate from a common vertex with a $\ensuremath{\chi^2}\xspace$ per number of degrees of freedom ($\ensuremath{\chi^2}\xspace/{\rm ndf}$) $<9$, and to have a positive cosine of the angle between its momentum and the reconstructed \ensuremath{B^{0}_{(s)}}\xspace candidate flight direction, calculated with the \ensuremath{B^{0}_{(s)}}\xspace decay vertex and the best matching primary vertex. The \ensuremath{K^-\pi^+}\xspace combination is also required to have $\mbox{$p_{\rm T}$}\xspace > 900\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$. The same conditions are imposed on the $\phi$ candidate.
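Expressed over a hypothetical candidates table, the track-level and pair-level requirements above take the following form; the column names are assumptions for illustration and do not correspond to actual LHCb variable names.
\begin{verbatim}
# Hypothetical transcription of the track- and pair-level cuts
# (column names and table layout are assumptions, not LHCb code).
import numpy as np

def preselection(c):
    # c: dict of equal-length numpy arrays, one entry per candidate
    ok = np.ones(len(c["m_kk"]), dtype=bool)
    for t in ("kp", "km_phi", "km_kst", "pi"):         # the four tracks
        ok &= (c[t + "_pt"] > 500.0) & (c[t + "_chi2ip"] > 9.0)
    for k in ("kp", "km_phi", "km_kst"):               # kaon PID
        ok &= c[k + "_dll_kpi"] > 2.0
    ok &= c["pi_dll_kpi"] < 0.0                        # pion PID
    ok &= c["km_kst_dll_pk"] < 0.0                     # proton veto (K*0 kaon)
    ok &= (1012.5 < c["m_kk"]) & (c["m_kk"] < 1026.5)  # phi mass window
    ok &= (746.0 < c["m_kpi"]) & (c["m_kpi"] < 1046.0) # K*0 mass window
    ok &= (c["phi_pt"] > 900.0) & (c["kst_pt"] > 900.0)
    ok &= (c["phi_vtx_chi2ndf"] < 9.0) & (c["kst_vtx_chi2ndf"] < 9.0)
    ok &= (c["phi_cosdir"] > 0.0) & (c["kst_cosdir"] > 0.0)  # pointing
    return ok
\end{verbatim}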
The \ensuremath{B^{0}_{(s)}}\xspace candidates are also required to fulfil some minimal selection criteria: the $\phi$ and \ensuremath{\Kbar^{*0}}\xspace candidates must form a vertex with $\ensuremath{\chi^2}\xspace/{\rm ndf}<15$; the distance of closest approach between their trajectories must be less than $0.3\,\ensuremath{\rm \,mm}\xspace$; and they must combine to give an invariant mass within ${4866 < M(\ensuremath{K^+K^-}\xspace\ensuremath{K^-\pi^+}\xspace) < 5866\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace}$. In addition, a geometrical-likelihood-based selection (GL)~\cite{Karlen:1998zz,MartinezSantos:1264603} is implemented using as input variables properties of the \ensuremath{B^{0}_{(s)}}\xspace meson candidate. These are \begin{itemize} \item the \ensuremath{B^{0}_{(s)}}\xspace candidate impact parameter (\ensuremath{{\rm IP}}\xspace) with respect to the closest primary vertex; \item the decay time of the \ensuremath{B^{0}_{(s)}}\xspace candidate; \item the \mbox{$p_{\rm T}$}\xspace of the \ensuremath{B^{0}_{(s)}}\xspace candidate; \item the minimum \ensuremath{\ensuremath{\chi^2}\xspace_{\IP}}\xspace of the four tracks with respect to all primary vertices in the event; and \item the distance of closest approach between the \ensuremath{\Kbar^{*0}}\xspace and $\phi$ candidates' trajectories reconstructed from their respective daughter tracks. \end{itemize} The GL is trained to optimize its discrimination power using representative signal and background samples. For the signal, a set of \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace simulated events is used. For the background, a sample of events is used in which, in addition to the signal selections other than those on the masses, the requirements $999.5<M(\ensuremath{K^+K^-}\xspace) < 1012.5 \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ or ${1026.5<M(\ensuremath{K^+K^-}\xspace) < 1039.5 \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace}$ on the $\phi$ candidate and ${M(\ensuremath{K^+K^-}\xspace\ensuremath{K^-\pi^+}\xspace)>5413\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace}$ on the four-body mass are applied. The selection of only the high-mass \ensuremath{B^{0}_{(s)}}\xspace sideband is motivated by the nature of the background in that region, which is purely combinatorial, whereas the low-mass sideband contains partially reconstructed $B$ meson decays that have topological similarities to the signal. \section{Systematic uncertainties on the branching fraction} \label{sec:syst} Four main sources of systematic effects in the determination of the branching fraction are identified: the fit model, the dependence of the acceptance on the longitudinal polarization, the purity of the signal and the uncertainty in the relative efficiency of \ensuremath{\B^0_\squark}\xspace and \ensuremath{\B^0}\xspace detection. Alternatives to the fit model discussed in Sect.~\ref{sec:mass-spectrum} give an uncertainty of $\pm 1.2$ events in the signal yield. This results in a relative systematic uncertainty of $\pm 0.04$ on the branching fraction.
The systematic uncertainty in the acceptance correction factor $\lambda_{f_0}$ originates from the uncertainties of the longitudinal polarization fractions, $f_0$, in the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace and \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace channels and is found to be $\pm 0.06$. As described in Sect.~\ref{sec:Purity}, an {\rm S-wave}\xspace contribution of $0.16 \pm 0.02$ was found in the \ensuremath{K^+K^-}\xspace and \ensuremath{K^-\pi^+}\xspace mass windows of the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace candidates. The uncertainty caused by the assumption that this fraction is the same in \ensuremath{\B^0}\xspace and \ensuremath{\B^0_\squark}\xspace decays is estimated to be $50\%$ of the {\rm S-wave}\xspace contribution. This results in a $\pm 0.08$ contribution to the systematic uncertainty. This uncertainty also accounts for uncancelled interference terms between the \ensuremath{\kaon^{*0}}\xspace, the $\phi$ and their corresponding {{\rm S-wave}\xspace}s. These contributions are linear in the sine or cosine of the polarization angles~\cite{BabarPhiKst2008} and cancel after integration. The dependence of the acceptance on the angles violates this cancellation, contributing $\pm 0.04$ to the total $\pm 0.08$ {\rm S-wave}\xspace uncertainty. The \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace and \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace final states are very similar and a detector acceptance efficiency ratio $\sim 1$ is expected. However, small effects, such as the mass shift $M(\ensuremath{\B^0_\squark}\xspace)-M(\ensuremath{\B^0}\xspace)$, translate into slightly different \mbox{$p_{\rm T}$}\xspace distributions for the daughter particles. This results in an efficiency ratio of $1.005$, as determined from simulation. The deviation of $\pm 0.005$ from unity is taken as a systematic uncertainty that is propagated to the branching fraction. Finally, the uncertainty in the knowledge of the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decay branching fraction of $\pm 0.6 \times 10^{-6}$ is also accounted for and results in a relative uncertainty of 0.06 in the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decay branching fraction. A summary of the systematic uncertainties is shown in Table~\ref{table_systsum}.
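The total quoted in Table~\ref{table_systsum} is the sum in quadrature of the individual entries, which is quickly verified:
\begin{verbatim}
# Quadrature sum of the relative systematic uncertainties in the table.
import math
parts = [0.04, 0.06, 0.08, 0.005, 0.06]  # fit, f0, purity, acc., B(Bd)
total = math.hypot(*parts)
print(round(total, 2))          # 0.12, the relative total
print(round(total * 1.10, 2))   # ~0.14, the absolute syst in units of 1e-6
\end{verbatim}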
The final result for the \ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace decay branching fraction is \[ {\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace) = \left(1.10 \pm 0.24\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.14\,\ensuremath{\mathrm{(syst)}}\xspace \pm 0.08\left(\frac{f_d}{f_s}\right)\right) \times 10^{-6}, \] \noindent which corresponds to a ratio with the \ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace decay branching fraction of: \[ \frac{{\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0_\squark}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\Kbar^{*0}}\xspace}\xspace)}{{\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace)} = 0.113 \pm 0.024\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.013\,\ensuremath{\mathrm{(syst)}}\xspace \pm 0.009\left(\frac{f_d}{f_s}\right). \] \begin{table}[t] \begin{center} \caption{\small Sources of systematic uncertainty in the branching fraction measurement. The total uncertainty is the sum in quadrature of the individual sources.} \begin{tabular}{cc} \hline \hline Source & Relative uncertainty in \BF \\ \hline Fit model & $0.04\phantom{0}$ \\ $f_0$ & $0.06\phantom{0}$ \\ Purity & $0.08\phantom{0}$ \\ Acceptance & $0.005$ \\ {\ensuremath{\cal B}\xspace}(\ensuremath{\ensuremath{\B^0}\xspace \ensuremath{\rightarrow}\xspace \phi \ensuremath{\kaon^{*0}}\xspace}\xspace) & $0.06\phantom{0}$ \\ \hline Total & $0.12\phantom{0}$ \\ \hline \hline \end{tabular} \label{table_systsum} \end{center} \end{table}
\section{Introduction} \label{intro} A number of X-ray binaries show flux periodicities at their respective orbital period, which may be caused by a number of effects. First, the source associated with the compact object in a binary may be eclipsed by the companion (usually of high mass) \citep[see e.g. a list][]{wen06}. Second, a flux modulation may be caused by an optically-thick disc rim (which is highest at the point of impact of the gas stream from the inner Lagrangian point in the case of a donor filling its Roche lobe), obscuring the disc and/or its corona (e.g., \citealt{ws82,hm89}). This obscuration may lead to strong partial eclipses in so-called X-ray dippers. More generally, the disc and any associated structures may depart from axial symmetry due to the influence of the companion, which may cause an orbital modulation. Third, wind from a high-mass companion may absorb/scatter the emission from the vicinity of the compact object, and the degree of absorption will depend on the orbital phase. In the case of Cyg X-1, both X-ray and radio emission are modulated by this effect; these modulations were modelled by, e.g., \citet{wen99} and \citet{sz07}, respectively. Fourth, phase-dependent absorption (via photon-photon pair production) of high-energy $\gamma$-rays may occur in a photon field axially asymmetric with respect to the compact object, especially that of the stellar photons (e.g., \citealt{bednarek06}). A fifth effect of the companion is reflection or reprocessing of the emission from around the compact object on the surface of the companion facing the compact object. This effect appears to be responsible for, e.g., the UV flux modulation from the X-ray binary 4U 1820--303 \citep{ak93,anderson97}. Finally, the optical/UV emission of the companion will be modulated if its shape departs from spherical symmetry by partially or fully filling its Roche lobe, an effect seen in Cyg X-1, e.g., \citet{brock2}. Furthermore, there will be an intrinsic dependence of the emitted flux on the orbital phase if the orbit is elliptical. This leads, e.g., to periodic outbursts at the periastron of Cir X-1 \citep{parkinson03} and Be/X-ray binaries (see, e.g., \citealt{coe00,negueruela04} for reviews) in X-rays, and sometimes, at other wavelengths. Also, some orbital flux modulation may be due to the Doppler effect, which is in principle observable \citep{ps87}, but has not yet been detected in a binary. (Obviously, the Doppler effect leads to widely observed shifts of spectral lines from binaries.) In addition, a number of X-ray binaries show modulation at periods much longer than their orbital periods, so-called superorbital periodicity; see, e.g., a partial list in \citet{wen06}. In particular, Cyg X-1 shows such periodicity with a period of $\sim$150 d (e.g., \citealt{brock,kar01,od01,l06}, hereafter L06; \citealt{i07}, hereafter Paper I). The observed superorbital variability appears in most cases compatible with being caused by accretion disc and/or jet precession, which either results in variable obscuration of emitted X-rays as in Her X-1 \citep{k73}, or changes the viewing angle of the presumed anisotropic emitter, as in SS 433 \citep{k80} or Cyg X-1 (e.g., L06, Paper I), or both. The only known exception, in which the superorbital periodicity is clearly caused by modulation of the accretion rate (and thus not by a changing viewing angle of the source), is 4U 1820--303 \citep*{z07a}. A number of binaries show both orbital and superorbital modulations.
Those currently known are LMC X-4, 2S 0114+650, SMC X-1, Her X-1, SS 433, 4U 1820--303 and Cyg X-1. An interesting issue then is whether there is any dependence of the parameters of the orbital modulation on the superorbital phase (or, similarly, on an average of the flux level). The shape of the profile of the orbital modulation in Her X-1 was found to depend on its superorbital phase \citep{sl99}, which appears to be due to the shadowing effect of the precessing accretion disc and scattering in its wind in that system. Recently, analogous dependencies of the shape of the orbital modulation on the average flux level have been found in LMC X-4, SMC X-1, Her X-1, as well as in Cen X-3 \citep{rp08}. Then, \citet{z07b} found such a dependence in 4U 1820--303 (of both the amplitude and the phase of the minimum flux) and interpreted it in terms of the size of the disc rim (partially obscuring the central source) changing with the variable accretion rate. In addition, there is the case of the peculiar Be/X-ray binary LS I +61$\degr$303, which shows orbital variability in the radio, X-ray and TeV emission, and a superorbital variability of the peak radio flux during an orbit \citep*{gregory99,gregory02}. \citet{gregory02} found a marked dependence of the phase of the peak of the orbital radio modulation on the superorbital phase in LS I +61$\degr$303. The presence of such a dependence may be due to interaction of the pulsar in that system with a variable circumstellar Be decretion disc \citep*{gregory02,z08}. It is of considerable interest to find out whether orbital modulation depends on the superorbital phase in Cyg X-1, the archetypical and very well studied black-hole system with a high-mass companion, the OB supergiant HDE 226868 \citep{walborn}. In this work, we study this issue and find that such a dependence exists and is very strong in soft X-rays. We then explain it theoretically in terms of orbital-phase-dependent absorption in the stellar wind interacting with the outer accretion disc. \begin{figure} \centerline{\epsfig{file= fig1_lognormal.eps,width=7.5cm}} \caption{ (a) The histogram of the count rates (dwell-by-dwell data) observed from Cyg X-1 for the ASM A, B, and C detector channels. (b) Distribution of the hardness ratios B/A, C/B, and C/A. (c) Distribution of the flux in the Ryle data. The flux units for the ASM and Ryle data are count s$^{-1}$ and Jy, respectively. The solid curves give the best-fitting lognormal distributions. } \label{fig:histogram} \end{figure} \section{The light curves and their analysis} \label{sec:method} \subsection{Data} We use the X-ray dwell data (MJD 50087--53789, i.e., 1996 January 5--2006 February 23; note a misprint in the start date in Paper I) obtained with three Scanning Shadow Cameras of the All-Sky Monitor (ASM) aboard {\it Rossi X-ray Timing Explorer\/} ({\textit{RXTE}}; \citealt*{brs93,lev96}), with the channels A, B, and C corresponding to the photon energy intervals of 1.5--3 keV, 3--5 keV, and 5--12 keV, respectively. We also use the corresponding 15-GHz radio data from the Ryle Telescope of the Mullard Radio Astronomy Observatory (see, e.g., \citealt*{poo99}; L06 for earlier analyses of the observations of Cyg X-1). Because Cyg X-1 is a highly variable source and the effects we search for are rather weak, we need to accurately select a homogeneous set of data. For most of the analysis in the paper, we use the data corresponding to the hard spectral state following the criteria defined in section 2 of Paper I.
We require the average photon spectral index derived from the {\textit{RXTE}}/ASM fluxes to be $< 2.1$ \citep{z02}, and additionally we exclude hard-state intervals with high X-ray variability; namely, we include only those 30-d intervals of the ASM data in which $< 40$ per cent of the points exceed the average flux in the reference interval MJD 50660--50990 by $4\sigma$. This has resulted in considering the following time intervals: MJD 50350--50590, 50660--50995, 51025--51400, 51640--51840, 51960--52100, 52565--52770, 52880--52975, 53115--53174, 53554--53690 (see fig. 1 in Paper I). \subsection{Mean fluxes and variance} We generally follow the method of analyzing light curves described in Paper I, but with some modifications necessitated by the scientific goal of the present work. We use the orbital ephemeris of \citet{brock2} and the superorbital ephemeris of L06, see equations (1) and (4), respectively, in Paper I. We use the values of the orbital and superorbital periods of $P=5.599829$ d and $P_{\rm sup}=151.43$ d, respectively. We first divide an analyzed light curve into bins of length $P/20$. Then we average all points falling into a given bin weighted by the inverse squares of their measurement errors, obtaining the binned light curve, $F_i$. In this way, we avoid any contribution to our folded/averaged light curves from the source variability on time scales shorter than that corresponding to the length of our chosen phase bin (see Paper I). Note that unlike the method in Paper I, we do not prewhiten the light curves, i.e., do not subtract variability at one period in order to detect more clearly variability at another period. \begin{figure*} \centerline{\epsfig{file=fig2_bulge_flux_hr.eps,width=15.0cm}} \caption{Profiles of the orbital modulation of the ASM A (1.5--3 keV, lower crosses) and C (5--12 keV, upper crosses) data at 8 superorbital phase bins. The bin number and the phase of the bin centre are given on each panel. Note that the minimum and maximum modulation are offset from the 0 and 0.5 phase, and appear instead close to $\Phi=0.125$ and 0.625, respectively. The unit of $F$ is count s$^{-1}$. The solid curves give the best-fitting theoretical outflow model (model 8 in Table \ref{tab:fits}) described in Section \ref{sec:interpretation}, which involves the absorption in the isotropic stellar wind as well as in the bulge situated at the disc edge. The dashed curves show the model component due to the wind only. } \label{fig:profiles} \end{figure*} We have then looked into the statistical properties of our distributions. We plot histograms of the fluxes for the ASM and Ryle data in Fig.\ \ref{fig:histogram}. We see that each of the histograms follows a lognormal distribution and is completely inconsistent with a normal one. Our finding of the lognormal form of the variability of Cyg X-1 in the hard state on long time scales (from $\sim$0.1 d to years) in both X-rays and radio is complementary to that of \citet*{uttley05}, who found the same type of distribution in X-rays on short time scales, $\sim$0.1--10 s, also in the hard state. This form of the flux distribution has important implications for calculating flux averages and the intrinsic dispersion, i.e., the standard deviation in the data.
Namely, the standard-deviation error estimate based on the rms, \begin{equation} \sigma^2= {\sum_{i=1}^N \left(x_i-\bar x \right)^2 \over N(N-1)}, \label{eq:sigma} \end{equation} provides an unbiased estimate of the true standard-deviation error of the average of $x_i$ only if the distribution of $x_i$ is normal \citep{bevington92}. Therefore, for the purpose of calculating the averages and the rms standard deviations for our light curves, we have converted the count rates or fluxes in our binned light curves, $F_i$, into their logarithms, $G_i=\ln F_i$, with $G_i$ now having distributions close to normal. We then separate the light curves, binned based on the orbital phase, into superorbital phase bins of length $P_{\rm sup}/8$, with the mid-point of the first and the fifth bin at $\Phi=0$ and 0.5, respectively. Here, either the orbital phase, $\phi$, or the superorbital phase, $\Phi$, is defined in the 0--1 interval, and 0 corresponds to the flux minimum as defined by the respective ephemeris. Then, we calculate folded and averaged profiles (of $G_i=\ln F_i$) of the orbital modulation within each superorbital phase bin, i.e., \begin{equation} G_{jk}={\sum_{i\in(j,k)} G_i\over I_{jk}}, \label{G_jk} \end{equation} where $i\in(j,k)$ counts over all points, $i$, falling into a given superorbital bin, $j$, {\it and\/} the orbital bin, $k$, and $I_{jk}$ is the number of such points. We estimate the error of this average using equation (\ref{eq:sigma}), i.e., \begin{equation} \sigma_{jk}^2= {\sum_{i\in(j,k)} \left(G_i-G_{jk}\right)^2 \over I_{jk}(I_{jk}-1)}. \label{sigma_jk} \end{equation} Note that this error estimate accounts for both the aperiodic variability of the source, i.e., the intrinsic dispersion of individual fluxes contributing to a given orbital/superorbital bin (usually dominating), and the dispersion due to measurement errors. Also, since we use logarithms, $\sigma_{jk}$ represents a fractional error (and should not be divided by $G_{jk}$). The average and the average square error in a given superorbital bin are \begin{equation} \bar G_j ={\sum_{k=1}^K G_{jk}\over K},\qquad \bar \sigma^2_j ={\sum_{k=1}^K \sigma^2_{jk}\over K}, \label{averages} \end{equation} respectively, where $K=20$ is the number of orbital bins. We need to characterize the strength of a given modulation. One way of doing this without making any assumptions about the modulation shape is to measure the fractional rms of a given orbital modulation profile. To do so, we calculate the unweighted rms variance and then subtract from it the rms variance due to the uncertainties of the individual points, which gives the so-called excess variance \citep[see e.g.][]{edelson02}, \begin{equation} S_j^2= {\sum_{k=1}^K \left(G_{jk}-\bar G_j \right)^2 \over K-1} -\bar \sigma^2_j . \label{S2} \end{equation} Note that the variance difference above can be negative if the intrinsic variability is comparable to or weaker than the measurement uncertainties. If this happens, we set the excess variance to zero. We again point out that $S_j$ already represents the fractional rms, i.e., it should not be further divided by $\bar G_j$ (which may be zero or negative). Then, we calculate the standard deviation of the above excess variance, $\Delta S^2_j$, following equation (11) of \citet{vaughan03}, hereafter V03, \begin{equation} \Delta S^2_j= \left(2\over K\right)^{1/2} \bar\sigma_j^2 \left(1+ {2 S_j^2\over \bar\sigma_j^2} \right)^{1/2}. \label{Delta_S2} \end{equation} We note that the transformation of $\Delta S^2_j$ into $\Delta S_j$ is not trivial.
V03 have done it using the standard differential propagation of errors, obtaining their equations (B2) and (B3), which, however, we find not to be generally correct. Namely, the assumption behind using derivatives in propagating errors is that the uncertainty is much lower than the estimated quantity. This is often not the case for the excess variance, which can be null for either weak intrinsic variability or measurement errors comparable with that variability, see equation (\ref{S2}), whereas its uncertainty is always $>0$. Then, the error-propagation formula used by V03, $\Delta S_j=\Delta S_j^2/({\rm d}S_j^2/{\rm d}S_j)$ (using our notation), obviously fails, leading to infinite uncertainties. The cause for that is the failure of the assumption of $\Delta S_j^2\ll S_j^2$. To account for that, we calculate the uncertainty on the rms without that assumption, i.e., directly from the definition of the 1-$\sigma$ uncertainty range as $S_j^2\pm \Delta S_j^2$, \begin{equation} \Delta S_j = (S_j^2+\Delta S_j^2)^{1/2} - S_j. \label{Delta_S} \end{equation} Here we have chosen the upper error, which is larger than the lower one, and which is the only one possible for $S_j^2<\Delta S_j^2$. For $\Delta S_j^2\ll S_j^2$, this becomes the usual $\Delta S_j\simeq \Delta S_j^2/(2S_j)$ (as in V03), which equals $\Delta S_j\simeq\bar\sigma_j/K^{1/2}$. On the other hand, for $\Delta S_j^2\gg S_j^2$, the result is $\Delta S_j\simeq (\Delta S_j^2)^{1/2}$, which should be used to correct the upper part of equation (B3) in V03, and which equals $\Delta S_j\simeq (2/K)^{1/4}\bar \sigma_j$ (in our notation). Hereafter, we use equation (\ref{Delta_S}) to estimate the rms uncertainty. Note that the above uncertainty estimates are due to the measurement errors only, and they do not account for the long-term, red-noise, variability of the source properties (V03). This is a correct procedure for our sample, which contains most of the currently available ASM data, since we are interested in the actual properties of these data and are not hypothesizing about their behaviour over time scales $\gg 10$ yr. \subsection{Hardness ratio} \label{sec:hr} We would also like to analyse the spectral variability of Cyg X-1 with orbital and superorbital phase. A useful measure of the spectral shape is the hardness ratio (HR) of the fluxes in various channels, which can be computed in a number of ways. The obvious one is to use the already available mean fluxes and construct their ratio. This procedure, however, does not account for short-timescale spectral variability. The HR can also be computed for each observation (dwell) and then the mean can be obtained. However, we have already seen that the fluxes follow the lognormal distribution, and therefore expect that their ratio could also be distributed in such a way. Indeed, Fig.\ \ref{fig:histogram}(b) demonstrates that the logarithms of the HR have distributions close to normal. Therefore, for the unbiased estimation of the mean HR and its error, we take the logarithm of the HR for each observation (using fluxes that are not pre-averaged within the $P/20$ bins) and average them within selected orbital and superorbital phase bins. We also note that the mean HR is computed without weighting the individual HRs according to their errors, because the error is systematically larger for harder spectra (as a result of a smaller flux in lower-energy channels), and therefore accounting for errors would result in a strongly biased estimate of the mean.
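The statistical procedure of this section is simple to implement. A minimal sketch in Python (with illustrative variable names) of the fractional rms of one folded profile and its uncertainty, following equations (\ref{S2})--(\ref{Delta_S}):
\begin{verbatim}
import numpy as np

def profile_rms(G, sigma):
    """Fractional rms of one folded orbital profile of log-fluxes G_jk
    (length K) with per-bin errors sigma_jk, and its uncertainty."""
    K = len(G)
    var_mean = np.mean(sigma**2)                  # mean square error
    S2 = np.sum((G - np.mean(G))**2) / (K - 1) - var_mean  # excess variance
    S2 = max(S2, 0.0)                             # clip negative estimates
    dS2 = np.sqrt(2.0 / K) * var_mean * np.sqrt(1.0 + 2.0 * S2 / var_mean)
    S = np.sqrt(S2)
    dS = np.sqrt(S2 + dS2) - S   # upper 1-sigma error; finite even if S2 < dS2
    return S, dS

# Toy usage with placeholder numbers (20 orbital bins)
rng = np.random.default_rng(1)
G = 0.1 * np.cos(2 * np.pi * np.arange(20) / 20) + rng.normal(0, 0.02, 20)
print(profile_rms(G, np.full(20, 0.02)))
\end{verbatim}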
\section{Strength of the orbital modulation vs.\ the superorbital phase} \label{sec:dependence} The folded and averaged profiles of the orbital modulation for the ASM A data are shown in Fig.\ \ref{fig:profiles}. We can see that the orbital modulation is variable, e.g., it appears to be the weakest at the superorbital phase $\Phi= 0.625$. However, there is also a fair amount of statistical noise, and the results of this figure need to be quantified. We can see here that the orbital modulation profiles are characterized by rather narrow minima, and thus would not be well fitted by a smooth function, e.g., a sinusoid. Thus, we first calculate the rms of each dependence to characterize its strength, following the method of Section \ref{sec:method}. Fig.\ \ref{fig:rms_asm}(a) shows the superorbital phase diagram for the ASM A channel. We can see the highly significant flux modulation with the superorbital period (cf.\ L06, Paper I). We also see that the minimum of the superorbital cycle is clearly offset from the ephemeris of L06 by $\Delta\Phi\simeq 0.1$ (which was based on $\sim$30 yr of data compared to the 10 yr analyzed by us). The crosses in Fig.\ \ref{fig:rms_asm}(b) show the corresponding rms dependence. We very clearly see a strong dependence of the rms on $\Phi$, with the rms being anticorrelated with the flux. It also appears that the maximum of the rms lags the minimum of the flux by some phase, $\la 0.1$. Fig.\ \ref{fig:rms_asm}(c) shows the results for all three ASM channels. The orbital modulation, due to bound-free absorption, is strongest in the 1.5--3 keV range and weakest in the 5--12 keV range (\citealt{wen99}; L06). Consequently, the statistical significance of the dependence on $\Phi$ decreases with the energy. \begin{figure} \centerline{\epsfig{file=fig3_profandrms_asm.eps,width=7.0cm}} \caption{ (a) The superorbital phase diagram for the ASM A channel. The unit of $F$ is count s$^{-1}$. (b) Comparison of the characterization of the ASM A rms dependence using different methods. The crosses are the intrinsic rms, $S$, of the orbital modulation as a function of the superorbital phase. The solid histogram gives the amplitude of the orbital variability as fitted by sum of three harmonics, see Section \ref{sec:dependence}. The dashed histogram gives the corresponding rms for the fitting functions. The solid curves in panels (a) and (b) show the dependencies for the theoretical outflow model (model 5 in Table \ref{tab:fits}). (c) The dependencies of the intrinsic rms of the orbital modulation on the superorbital phase for three ASM channels. The crosses with filled circles, open triangles and open squares correspond to the channels A, B, and C, respectively. } \label{fig:rms_asm} \end{figure} \begin{figure} \centerline{\epsfig{file=fig4_profandrms_ryle.eps,width=6.5cm}} \caption{ (a) The superorbital phase diagram for the Ryle 15 GHz data. The unit of $F$ is Jy. (b) The dependence of the intrinsic rms of the 15 GHz orbital modulation on the superorbital phase, consistent with being constant.} \label{fig:rms_ryle} \end{figure} In order to test the robustness of our finding of the dependence of the strength of the orbital modulation on $\Phi$, we have also calculated the rms for the ASM A data taking into account the weights due to uncertainties of the individual points in the orbital phase diagrams (see \citealt{zdz04}). This alternative method gives only negligible differences with respect to the original one, and thus we do not show its results.
Then, we have fitted the ASM A orbital modulation profiles with a sum of three sinusoidal harmonics, see equation (2) in Paper I, and calculated both the amplitude, $(F_\mathrm{max} -F_\mathrm{min})/(F_\mathrm{max}+ F_\mathrm{min})$, and the rms for it. In this way, we largely avoid contributions to the rms from residual aperiodic variability. The results are shown in Fig.\ \ref{fig:rms_asm}(b). We see that the values of the rms of the fitted functions are very similar to that calculated directly from the data in Fig.\ \ref{fig:profiles}. On the other hand, the amplitude (which is sensitive only to the extremes of the fitted function) is larger than the rms simply due to their different definitions. The amplitude also shows a strong dependence on the superorbital phase similar in shape to that of the rms; however, it appears consistent with no phase shift with respect to the flux profile (Fig.\ \ref{fig:rms_asm}a). We have then searched for a similar effect in the Ryle 15 GHz data. We have found, however, that no apparent dependence is seen, and the $\Phi$-dependent orbital modulation profiles all look similar, and consistent with the average orbital modulation (see Fig.\ 4 in L06). Thus, we show here, in Fig.\ \ref{fig:rms_ryle}, only the results of calculating the rms of the orbital modulation as a function of $\Phi$. In Fig.\ \ref{fig:rms_ryle}(b), we see that the strength of the orbital modulation is consistent with being constant, though we cannot rule out some dependence hidden in the statistical noise. We have also checked that the 2.25 and 8.30 GHz data from the Green Bank Interferometer (see L06; Paper I) also do not show any statistically significant dependencies. \section{Spectral variability} \label{sec:dips} \subsection{Hardness ratio} \begin{figure} \centerline{\epsfig{file=fig5ab_hr_1D_orb_sup.eps,width=6.7cm}} \vspace{0.5cm} \centerline{\epsfig{file= fig5c_hr_2D_orb_sup.eps,width=6.7cm}} \caption{The mean hardness ratio in the ASM channels C and A in the hard state as a function of the (a) orbital and (b) superorbital phase. (c) Contour plot of the smoothed distribution of the hardness ratio C/A over the orbital and superorbital phases. } \label{fig:hr} \end{figure} The X-ray modulations can also be tracked through the hardness ratio. The largest and easily detectable variability is shown by the ratio of count rates in ASM channels C and A (C/A). Fig.\ \ref{fig:hr} presents the dependence of the mean C/A (computed from the logarithm of the ratio, see Section \ref{sec:hr}) on orbital and superorbital phases. We see a strong peak at orbital phase $\phi\sim 0$, which can be explained by absorption in the nearly isotropic wind \citep[see ][]{wen99}. The dependence of C/A on the superorbital phase also shows a very significant hardening around $\Phi\sim0$. The two-dimensional dependencies on $\phi$ and $\Phi$ demonstrate a plateau with C/A$\approx$1.4, a significant increase in hardness around $\phi=0.0\pm0.2$, and two peaks at superorbital phase $\Phi\sim-0.1$ and 0.1, whose significance is not certain. Other hardness ratios, C/B and B/A, show similar behaviour, but with a smaller amplitude. \begin{figure} \centerline{\epsfig{file=fig6ab_dips.eps,width=7.2cm}} \vspace{0.5cm} \centerline{\epsfig{file= fig6c_dips_so_orb.eps,width=6.7cm}} \caption{The distribution of X-ray dips over (a) orbital and (b) superorbital phase corrected for the coverage. The solid histogram is for the hard state studied by us, while the dashed histogram is for the entire ASM data set.
The fraction scale corresponds to the solid histograms. (c) Contour plot of the smoothed distribution of all X-ray dips over the orbital and superorbital phases. } \label{fig:dips} \end{figure} \subsection{X-ray dips} X-ray dips, which are believed to result from absorption in blobs in the stellar wind, are characterised by a significant drop in the count rate (see e.g. \citealt{bal00}, hereafter BC00; \citealt{fc02}). However, most markedly they manifest themselves by spectral hardening (BC00). It is of interest to study their distribution over the orbital and superorbital phase and to compare these distributions with the corresponding dependencies of the HR. In order to define the dips, we use the ratio of the ASM count rates in channels B and A, B/A (HR1 in BC00), and the analogous C to B ratio, C/B (HR2 in BC00). We then use the criteria of B/A$>$2 or C/B$>$2.5, which are similar to those of BC00 except that they stated that they used {\it both\/} criteria simultaneously. With the present ASM calibration, we found only 56 dips satisfying their criterion in the ASM data used by us. The near absence of dips with both hardness ratios large appears to be caused by the dip absorption being partial, i.e., with some small fraction of the flux remaining unabsorbed. Then, at a relatively low absorbing column, the flux in the A channel is reduced but the B and C channels are only weakly affected, so this case yields a B/A ratio increase but not C/B. On the other hand, at a column yielding a substantial reduction of the B flux, the C/B ratio increases, but the absorbed flux in the A channel is so low that it is dominated by the constant unabsorbed component, which results in no substantial increase of the B/A hardness. In the hard-state data, we have found 1151 dips (814 with B/A$>$2 and 387 with C/B$>$2.5) among 31211 observations, while in the whole 10-year data set without any selection we have found 1336 X-ray dips (995 with B/A$>$2 and 437 with C/B$>$2.5) among 60127 independent observations. Thus most of the dips happen during the hard state. This is expected because the spectral softening and increase of the luminosity in the soft state strongly increases the ionization level of the wind, which results in a weaker photoelectric absorption (BC00; \citealt{wen99}). This also strongly confirms the accuracy of our criterion defining the hard state. Fig.\ \ref{fig:dips}(a) shows the distribution of the dips over the orbital phase renormalized to the number of ASM observations in each bin. The picture looks relatively similar to fig.\ 5 in BC00 (based on $\sim$2 yr of data, i.e., five times less than in our data set). The peak is at $\phi\simeq 0$, and it is relatively symmetric, especially for the hard-state data only. We have also checked that the distributions of the dips selected separately in the B/A and C/B look very similar. On the other hand, the additional peak at $\phi\simeq 0.6$ claimed by BC00 is not found by us, and appears to be due to a statistical fluctuation in the previous data set. Indeed, the total number of counts in the three bins forming that excess was 34, whereas the continuum level (i.e., without the excess) in those three bins corresponds to about 25. Thus, the excess corresponds to only $\sim\! 1.5\sigma$ in the Poisson statistics. The existence of the feature at $\phi\simeq 0.6$ is also not supported by the dependence of the HR, which shows no signs of spectral hardening at this phase (see Fig.\ \ref{fig:hr}a,c).
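The dip definition used above amounts to a simple cut on the dwell-by-dwell hardness ratios; a minimal sketch (Python; the count-rate arrays are placeholders for the actual ASM dwell data):
\begin{verbatim}
import numpy as np

# Placeholder dwell-by-dwell count rates in the ASM A, B and C channels;
# the real values come from the ASM dwell data
rng = np.random.default_rng(0)
a = rng.lognormal(1.0, 0.3, 60127)
b = rng.lognormal(1.1, 0.3, 60127)
c = rng.lognormal(1.2, 0.3, 60127)

dips = (b / a > 2.0) | (c / b > 2.5)   # the dip criterion used above
print(dips.sum(), "dips among", dips.size, "observations")
\end{verbatim}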
Then we have studied the distribution of the X-ray dips over the superorbital phase. The results are shown in Fig.\ \ref{fig:dips}(b). We see a maximum around $\Phi\simeq 0.05$--0.1, which is consistent with the position of the flux minimum (see Fig.\ \ref{fig:rms_asm}a). The distribution is clearly asymmetric relative to the peak, with a slower rise and faster decline, and it looks like the inverted flux (i.e., $-\ln F$) of Fig.\ \ref{fig:rms_asm}(a). Then, the two-dimensional distribution of the dips in $\phi$ and $\Phi$ is shown in Fig.\ \ref{fig:dips}(c). We see that most of the dips that give rise to the peak in the orbital phase distribution around $\phi\simeq 0.0\pm 0.2$ happen around the superorbital phase of $\Phi\simeq 0.1\pm0.2$. (The statistical significance of the presence of two, rather than one, separate peaks there is rather low, $\sim\! 2\sigma$.) The distributions of the dips strongly resemble that of the HR, which is natural because the dips just represent a tail of the HR distribution. \section{Theoretical interpretation} \label{sec:interpretation} \subsection{Wind geometry in Cyg X-1} \label{geometry} \begin{figure} \centerline{\epsfig{file= fig7a_disk.eps,width=7.5cm}} \vspace{0.2cm} \centerline{\epsfig{file= fig7b_disk.eps,width=7.5cm}} \vspace{0.2cm} \centerline{\epsfig{file= fig7c_disk.eps,width=7.5cm}} \caption{A drawing illustrating the effect of a bulge at the outer edge of a precessing inclined disc. The material in the bulge absorbs some of the X-ray emission originating close to the disc centre. The orbital modulation due to the bulge is seen to strongly depend on the superorbital phase. In addition, there will also be orbital modulation due to the direct wind from the supergiant, not shown here for clarity. The elongation of the supergiant, almost filling its Roche lobe, is not shown here. A view along the orbital plane: (a) the superorbital phase of 0, when the disc is seen closest to edge-on and the effect of the bulge is strongest; (b) the opposite case of the superorbital phase of 0.5. (c) A view from the top, with the arrow showing the direction of the observer. The angle $\phiorb_{\rm b}$ gives the azimuthal displacement of the bulge centre relative to the line connecting the stars, and it is $>0$ in the case shown here. The maximum of the absorption then corresponds to $\phi=-\phiorb_{\rm b}$. This view is for any value of $\Phi$ except for the shown orientation of the elliptical image of the disc, which corresponds to $\Phi=0$ or 0.5. } \label{fig:model} \end{figure} Let us first summarise our findings. In the radio, we see no modulation of the orbital variability with the superorbital phase, while in the X-rays such a modulation is visible. In addition to the previously known spectral hardening at orbital phase $\phi\sim0$ (visible in the HR and distribution of the X-ray dips), we find a significant increase in the HR around superorbital phase $\Phi=0$. This effect can be tracked in the dependence of the HR as well as in the distribution of the X-ray dips. Our interpretation of the observed dependencies is as follows. The absence of statistically significant superorbital dependence of the orbital modulation of the 15-GHz radio emission is consistent with the radio being emitted by a jet in the system, in which case the orbital modulation is caused by wind absorption far away from the disc \citep{sz07}. For the X-rays, the situation is more complicated.
The X-ray orbital modulation is due to variable absorption in the wind of the X-rays emitted close to the disc centre. The absorption can be separated into two components. One is independent of the superorbital modulation, and is due to absorption in the part of the wind steady in the comoving frame, as usually assumed. The other component is due to the part of the flow feeding the outer edge of the disc, and thus forming a bulge. In the Cyg X-1 system, though the OB star does not completely fill its Roche lobe, the wind density is enhanced inside the Roche lobe, which is an analogue of Roche-lobe overflow, but by the wind. Such a focused wind \citep{fc82,gies86b} in some way forms the accretion disc, known to exist in the system. The main argument for the existence of the disc is an overall similarity of the X-ray spectra and timing properties of Cyg X-1 to those of low-mass X-ray binaries, in which case accretion has to form a disc (see, e.g., \citealt{zg04}). The disc formation, most likely, leads to a condensation of the wind matter near the disc outer edge on the side of the companion in the form of a bulge, similar to the disc bulge inferred to be present in low-mass X-ray binaries, e.g., \citet{wh82,ws82,pw88,hm89}, as illustrated in Fig.\ \ref{fig:model}. On the other hand, the bulge can also be formed (see, e.g., \citealt{bor01}) by a shock wave in the wind when it encounters the gravity of the companion, the disc, or a wind from the disc, which is also likely to be present. In any case, when the fast, $>1000$ km s$^{-1}$, wind is stopped, the density increases dramatically. A fraction of the focused wind might pass the black hole and be visible as additional absorption at orbital phase $\phi\sim0.5$; however, we find no evidence for that in the X-ray data. An issue in the above scenario is the position of the bulge relative to the line connecting the stars. Consider the accretion process in the corotating frame of the binary. In the case of low-mass X-ray binaries, the accretion stream leaves the L1 point with a small velocity and, being deflected by the Coriolis force, hits the disc (with the outer edge defined by the stream orbital angular momentum) at an azimuthal angle $\phiorb_{\rm b}\sim 60\degr$, which is measured from the line connecting the stars with the origin at the compact object (see Fig. \ref{fig:model}c and the entry for $\phi_h - 180\degr$ in table 2 in \citealt{ls75}). For the mass-ratio in Cyg X-1, $q = M_{\rm BH}/M_{\rm C} = 0.36\pm0.05$ \citep{gies03}, the gas freely falling from the L1 point would hit the disc at $\phiorb_{\rm b}\sim 70\degr$. However, these considerations neglect the radiative acceleration of the stream as well as the diffusive spreading of the accretion disc and therefore its potentially much larger size, with both effects significantly reducing $\phiorb_{\rm b}$. An additional complexity is introduced by the possibility of non-synchronous rotation of the companion in high-mass systems. For example, a slower stellar rotation allows the wind to be launched with a non-zero angular momentum in the corotating frame and leads to an increase of $\phiorb_{\rm b}$, while the opposite is true for faster rotation. The rotation of the companion in Cyg X-1 is compatible with corotation \citep{gies86a}, and therefore probably does not affect the gas kinematics much.
Then, if we measure this angle, $\phiorb_{\rm b}$, in units of the 0--1 orbital phase, absorption of the X-ray emission in the bulge will peak at the orbital phase of $\phi\simeq 1-\phiorb_{\rm b}$. Indeed, the typical phase of major X-ray dips in low-mass X-ray binaries is $\simeq 0.8$--0.9 \citep{pw88}. Some other high-mass X-ray binaries show dips at $\phi \simeq 0.8$--0.9, also thought to be caused by the accretion stream passing through the line of sight (\citealt{bor01} and references therein). A crucial further complication in Cyg X-1 is that the disc is inclined with respect to the binary plane and thus precesses. The precession causes changes of the position of the bulge with respect to the line of sight. During a single binary revolution, the bulge moves up and down, while the inclination of the disc remains approximately constant (since $P_{\rm sup}\gg P$), see Figs.\ \ref{fig:model}(a, b). At $\Phi$ close to zero, we see the disc at the highest angle, i.e., most edge-on. The displacement of the bulge centre $\phiorb_{\rm b}$ relative to the line connecting the stars (see discussion above and Fig.\ \ref{fig:model}c) will also cause a small shift of the superorbital phase at which the bulge absorption is maximal. On the other hand, we see the disc close to face-on at $\Phi=0.5$, see Fig.\ \ref{fig:model}(b), and then the bulge is always outside the line of sight to the X-ray source. Thus, that additional absorption component is absent. The above considerations explain the dependence of hardness ratio on orbital and superorbital phases as well as the distribution of the X-ray dips. Based on the two-dimensional distribution of the dips (Fig.\ \ref{fig:dips}c), we have calculated that at least 1/3 of all the X-ray dips are caused by the bulge, and the rest are due to the isotropic part of the stellar wind. The picture in Fig.\ \ref{fig:model} can also be used to calculate the expected X-ray orbital profiles caused by the wind and bulge absorption. We can assume a specific density profile of the wind and the bulge, and calculate the optical depth during a revolution for a given superorbital phase. \subsection{Model} \label{sec:model} Let us consider first the isotropic component of the wind. The wind mass density as a function of distance from the center of the star, $r$, can be estimated from the mass conservation law \begin{equation} \rho_{\rm iso}(r) =\frac{\dot{M}}{4\pi r^2 v(r)}, \label{eq:rho_w} \end{equation} where $\dot M$ is the mass-loss rate. We assume $v(r)\propto (1-R_*/r)^{\zeta}$, where $R_*$ is the stellar radius, and consider the attenuation cross-section to be independent of the distance. We thus get the absorption coefficient in the form \begin{equation} \label{eq:abswind} \alpha_{\rm iso}(r) = \alpha_{0} \left( \frac{a}{r} \right)^2 \left( \frac{1-R_*/a}{1-R_*/r} \right)^{\zeta}, \label{eq:alpha_w} \end{equation} where $a$ is the separation between the black hole and the companion, and $\alpha_{0}$ is the absorption coefficient at $r=a$. We define here the characteristic optical depth, $\tau_{\rm iso,0} =a\alpha_{0}$. \begin{figure} \centerline{\epsfig{file=fig8a_geometry.eps,width=8cm}} \vspace{0.2cm} \centerline{\epsfig{file= fig8b_geometry.eps,width=6cm}} \caption{ (a) Geometry of the wind. (b) Geometry of the bulge. } \label{fig:geom} \end{figure} The focused wind can be described by a cone of half-opening angle ${\theta_{\max}}$ centred around the line connecting the stars (see Fig. \ref{fig:geom}a for geometry).
The additional opacity can be scaled to the opacity of the isotropic component and its angular dependence can be approximated by a parabola \citep{fc82,gies86b} \begin{equation} \label{eq:focwind} \alpha_{\rm fw}(r,\theta) = \alpha_{0} ( \eta_{\rm fw} -1 ) \left[ 1 - \left( \frac{\theta}{{\theta_{\max}}} \right)^2 \right], \quad \theta<{\theta_{\max}}, \label{eq:alpha_fw} \end{equation} where $\theta$ is the angle measured from the line connecting the stars and $\eta_{\rm fw}$ is the ratio of the wind density in the direction of the black hole to that of the isotropic component. The total wind absorption coefficient is defined by the sum $\alpha_{\rm w}(r, \theta) = \alpha_{\rm iso}(r) + \alpha_{\rm fw}(r,\theta)$. Let us now compute the optical depth through the wind along the line of sight. It depends on the position of the observer. We introduce the coordinate system centred at the black hole with the $z$-axis along the normal to the orbital plane, and the observer in the $x$--$z$ plane, so that the direction to the observer is $\bmath{n} = (\sin i, 0, \cos i)$. The position of the companion is then $\bmath{a}=a(\cos \phi, \sin\phi, 0)$, where $\phi$ is the orbital phase. The angle between the line of sight $\bmath{n}$ and $\bmath{a}$ varies with phase: \begin{equation} \cos \xi = \bmath{n}\cdot \frac{\bmath{a}}{a}= \sin i \cos \phi . \label{eq:cos_xi} \end{equation} The impact parameter is $a\sin\xi$ and the distance of some point in the wind to the supergiant centre is $r=\sqrt{s^2+a^2 \sin^2 \xi}$, where $s$ is its distance to the point of closest approach (which can be negative). The corresponding radius vector is $\bmath{r}= \bmath{n} (s + a \cos \xi) - \bmath{a}$ (see Fig. \ref{fig:geom}a). The angle between $\bmath{r}$ and $-\bmath{a}$ is then \begin{equation} \cos \theta = \frac{ \bmath{r}}{r} \cdot \frac{(-\bmath{a}) }{a} = \frac{1}{r} \left( a\sin^2\xi - s \cos\xi\right) . \label{eq:cos_th} \end{equation} The optical depth through the wind is computed as \begin{equation} \tau_{\rm w}(\phi) = \int_{-a\cos\xi}^{\infty} \alpha_{\rm w}(r, \theta) {\rm d}s . \label{eq:tau_w} \end{equation} Let us apply this formalism to Cyg X-1. We take the ratio of the separation to the supergiant radius $a/R_* \approx 2.3$ \citep{zi05}, the inclination $i=40\degr$ (see Paper I and references therein), the velocity profile exponent $\zeta=1.05$, and the focused wind parameters ${\theta_{\max}}=20\degr$ and $\eta_{\rm fw}=3$ \citep{fc82,gies86b}. In this case, the optical depth through the isotropic component of the wind from the black hole to infinity in the radial direction away from the companion is about $0.73\tau_{\rm iso,0}$, in the perpendicular direction it is $1.26\tau_{\rm iso,0}$ (i.e. at phases $\phi=0.25, 0.75$), and at zero orbital phase along the line of sight $\tau_{\rm w}(0)\approx 3 \tau_{\rm iso,0}$. The typical optical depth provided additionally by the focused wind across its cone is $\approx {\theta_{\max}} (\eta_{\rm fw} -1)\frac{2}{3} \tau_{\rm iso,0}\approx 0.45 \tau_{\rm iso,0}$. Thus, for Cyg X-1 at orbital phases $\phi\sim 0$, the focused wind adds only 15 per cent to the opacity produced by the isotropic wind, while at $\phi\sim 0.5$, its contribution can reach 60 per cent, but the absorption itself at this phase is low. Therefore, to a first approximation, we can use only the isotropic wind model and include the corrections introduced by the focused wind later.
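These line-of-sight integrals are easy to evaluate numerically. A minimal sketch (Python; the isotropic component only, with the parameter values quoted above and $a$ as the length unit) computes $\tau_{\rm w}(\phi)$ in units of $\tau_{\rm iso,0}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a, Rstar, inc, zeta = 1.0, 1.0 / 2.3, np.radians(40.0), 1.05

def tau_iso(phase):
    """Optical depth of the isotropic wind along the line of sight,
    in units of tau_iso,0; phase is the 0-1 orbital phase."""
    cosxi = np.sin(inc) * np.cos(2.0 * np.pi * phase)
    b2 = a**2 * (1.0 - cosxi**2)          # squared impact parameter
    def alpha(s):                          # absorption coefficient
        r = np.sqrt(s**2 + b2)
        return (a / r)**2 * ((1.0 - Rstar / a) / (1.0 - Rstar / r))**zeta
    val, _ = quad(alpha, -a * cosxi, np.inf)
    return val / a

for phase in (0.0, 0.25, 0.5):
    print(phase, round(tau_iso(phase), 2))
\end{verbatim}
The printed values can be compared with the estimates quoted above ($\approx 3$, $1.26$ and $0.73\,\tau_{\rm iso,0}$); small differences reflect the neglected focused-wind term.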
In the case of the bulge, we first need to compute the position of the bulge centre, $\bmath{b}$, relative to the black hole. For prograde precession (see L06), the unit vector along the normal to the precessing accretion disc is $\bmath{d}= (-\sin\delta \cos\Phi, -\sin \delta\sin\Phi, \cos\delta)$, where $\delta$ is the precession angle. Assume now that the bulge centre lies in the disc plane and the projection of $\bmath{b}$ on the orbital plane $x$--$y$ makes an angle $\phiorb_{\rm b}$ with the line connecting the black hole to the companion (i.e. the azimuth of $\bmath{b}$ is $\phi+ \phiorb_{\rm b}$, see Fig. \ref{fig:model}c). We then obtain the unit vector towards the bulge centre \begin{equation} \bmath{b} = \frac{[\cos(\phi+ \phiorb_{\rm b}), \sin (\phi+ \phiorb_{\rm b}), \tan\delta \cos(\phi+ \phiorb_{\rm b}-\Phi)]}{\sqrt{1+\tan^2\delta \cos^2 (\phi+ \phiorb_{\rm b}-\Phi)}} . \label{eq:bulge_position} \end{equation} The angle it makes with the line of sight is given by (see Fig. \ref{fig:geom}b) \begin{eqnarray} \cos\beta & =& \bmath{b}\cdot \bmath{n} \\ & = & \frac{\sin i \ \cos (\phi+ \phiorb_{\rm b}) + \cos i \tan\delta\cos(\phi+ \phiorb_{\rm b}-\Phi) } {\sqrt{1+\tan^2\delta \cos^2 (\phi+ \phiorb_{\rm b}-\Phi) }}. \nonumber \label{eq:beta} \end{eqnarray} Let us assume an exponential dependence of the absorption coefficient on the distance $p$ from the bulge centre, \begin{equation} \label{eq:absbulge} \alpha_{\rm b}(p) = \alpha_{\rm b,0} \exp(-p/r_{\rm b}) , \label{eq:alpha_b} \end{equation} with $r_{\rm b}$ being the bulge scale-height. This gives the optical depth from the bulge centre to infinity of $\tau_{\rm b,0}=r_{\rm b}\alpha_{\rm b,0}$. On the other hand, the optical depth from the black hole through the bulge along the line of sight, \begin{equation} \tau_{\rm b}(\phi,\Phi) = \int_{-R\cos\beta}^{\infty} \alpha_{\rm b}(p) {\rm d} s , \end{equation} depends on the orbital as well as the superorbital phase. Here $R$ is the distance to the bulge centre from the black hole (i.e., approximately the disc size) and $p =\sqrt{s^2+ R^2 \sin^2 \beta}$. For simplicity we assume that the wind and the bulge are independent and therefore the orbital modulation profile is given by \begin{equation} \label{eq:attenuation} F(\phi, \Phi) = F_0(\cos\psi) \exp[-\tau_{\rm w}(\phi)]\ \exp[-\tau_{\rm b}(\phi,\Phi)] , \end{equation} where $F_0$ is the intrinsic flux (which depends on $\Phi$) without absorption in the direction of the observer and $\psi$ is the angle between the disc normal and the line of sight: \begin{equation} \label{eq:cospsi} \cos\psi= \bmath{n} \bmath{\cdot} \bmath{d} = \cos i \cos \delta - \sin i \sin \delta \cos\Phi . \end{equation} The retrograde precession can be modelled by substituting $\Phi\rightarrow-\Phi$ in the above formulae.
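Continuing the numerical sketch above, the bulge term of equation (\ref{eq:attenuation}) can be evaluated as follows (again Python; the parameter values are illustrative, close to those of model 5 in Table~\ref{tab:fits}, with $R=1$ as the length unit):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

i, delta = np.radians(40.0), np.radians(10.0)  # inclination, precession angle
phi_b = 2.0 * np.pi * 0.08                     # bulge displacement
rb = 0.2                                       # bulge scale-height (R = 1)

def cos_beta(phi, Phi):
    """Cosine of the angle between the bulge direction and the line of
    sight; phi and Phi are the orbital/superorbital phases in radians."""
    c = np.cos(phi + phi_b - Phi)
    num = np.sin(i) * np.cos(phi + phi_b) + np.cos(i) * np.tan(delta) * c
    return num / np.sqrt(1.0 + (np.tan(delta) * c)**2)

def tau_b(phi, Phi, tau_b0=0.8):
    """Optical depth through the exponential bulge along the line of sight."""
    cb = cos_beta(phi, Phi)
    b2 = 1.0 - cb**2                           # squared impact parameter
    f = lambda s: (tau_b0 / rb) * np.exp(-np.sqrt(s**2 + b2) / rb)
    val, _ = quad(f, -cb, np.inf)
    return val

# Bulge attenuation factor exp(-tau_b) over one orbit, at Phi = 0 and 0.5;
# the full profile multiplies this by exp(-tau_w) and F_0(cos psi)
for Phi in (0.0, np.pi):
    prof = [np.exp(-tau_b(2.0 * np.pi * p, Phi)) for p in np.linspace(0, 1, 9)]
    print([round(f, 3) for f in prof])
\end{verbatim}
With these numbers, the bulge imprints a noticeably deeper dip near $\phi\approx 1-\phiorb_{\rm b}$ at $\Phi=0$ than at $\Phi=0.5$, reproducing the qualitative behaviour of Fig.~\ref{fig:profiles}.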
\begin{sidewaystable} \centering \rotcaption{Best-fitting model parameters.} \begin{tabular}{@{}crccccccrccl@{}} \hline \# & Model$^a$ & $\delta$$^b$ & $\tau_{\rm iso,0}$$^c$ & $\tau_{\rm b,0}$$^d$ & $\phiorb_{\rm b} $$^e$ & $\Delta\phisup$$^f$ & $A$$^g$ & $\eta$, $\beta_{\rm j}$, $\tau$$^h$ & $\tau_{\rm C}/\tau_{\rm A}$$^i$ & $C/A$$^j$ & $\chi^2/{\rm dof}$$^k$ \\ & & deg & & & & & & & \\ \hline 1 & W+B a & $7.5\pm0.5$ & $0.09 \pm 0.03$ & $1.05^{+0.55}_{-0.44}$ & $0.07 \pm 0.04$ & $0.03 \pm 0.02 $ & $9.2 \pm 0.2$ & -- & & & $151.9/154$ \\ 2 & W+B b & 10.0 (f) & $0.09 \pm 0.02$ & $0.8^{+0.4}_{-0.35}$ & $0.08 \pm 0.03$ & $0.04 \pm 0.01 $ & $11.6 \pm 0.8$ & $-0.27\pm0.06$ & & & $155.2/154$ \\ 3 & W+B c & 5.0 (f) & $0.07 \pm 0.03$ & $1.35^{+0.65}_{-0.6}$ & $0.07 \pm 0.04 $ & $0.03 \pm 0.01 $ & $2.90 \pm0.10 $ & $0.47\pm0.03$ & & & $148.8/154$ \\ 4 & W+B c & 7.5 (f) & $0.09 \pm0.03$ & $1.05^{+0.50}_{-0.45}$ & $0.07 \pm 0.04$ & $0.03 \pm 0.01 $ & $3.55 \pm0.13 $ & $0.36\pm0.03$ & & & $151.8/154$ \\ 5 & W+B c & 10.0 (f) & $0.09 \pm 0.02$ & $0.8^{+0.4}_{-0.35}$ & $0.08\pm 0.04$ & $0.03 \pm 0.02 $ & $4.04 \pm 0.14$ & $0.29\pm0.02$ & & & $154.7/154$ \\ 6 & W+B d & 10.0 (f) & $0.09 \pm 0.02$ & $0.8^{+0.4}_{-0.35}$ & $0.08 \pm 0.04$ & $0.04 \pm 0.01 $ & $14.8\pm1.2$ & $0.56\pm0.05$ & & & $155.9/154$ \\ 7& F+W+B c & 10.0 (f) & $0.09 \pm 0.02$ & $0.8^{+0.4}_{-0.35}$ & $0.08 \pm 0.04$ & $0.03 \pm 0.01 $ & $4.20\pm0.15$ & $0.29\pm0.02$ & & & $155.4/154$ \\ 8 & W+B c & 10.0 (f) & $0.09 \pm 0.02$ & $0.9^{+0.4}_{-0.35}$ & $0.08\pm 0.04$ & $0.01 \pm 0.02 $ & $4.15\pm0.14$ & $0.27\pm0.02$ & $0.3\pm0.1$ & $1.31\pm0.03$ & $359.2/318$ \\ \hline \end{tabular} \begin{flushleft} {$^{a}$The models described in Section \ref{sec:model}: W is the isotropic wind model, F is the focused wind and B stands for the bulge. Small letters give the models of the intrinsic emission from Section \ref{sec:fitting}. Model 8 is fitted to the ASM A and C channels simultaneously. $^{b}$The precession angle. $^{c}$Characteristic optical depth of the isotropic wind. $^{d}$Characteristic optical depth of the bulge. $^{e}$The shift of the bulge centre in orbital phase (fraction of the orbit). $^{f}$The shift in superorbital phase. $^{g}$The model normalization in the ASM A channel. $^{h}$The anisotropy parameter, the jet velocity, or the slab optical depth. $^{i}$The ratio of absorption coefficients in channels C and A. $^{j}$The ratio of model normalizations in channels C and A. $^{k}$$\chi^2$ and the number of degrees of freedom. The errors on the parameters are given at 90 per cent confidence level for one parameter, i.e. for $\Delta\chi^2=2.71$. The size scale of the bulge in units of the disc size, $r_{\rm b}/R$, is fixed at 0.2 and inclination $i$ is $40\degr$ in all of the models.} \end{flushleft} \label{tab:fits} \end{sidewaystable} \subsection{Modelling the data} \label{sec:fitting} In order to describe the profiles presented in Fig. \ref{fig:profiles} with the model of Section \ref{sec:model}, we need to specify the angular distribution of the intrinsic flux, $F_0(\cos\psi)$. In Paper I we have considered four simple analytical models: (a) the black body, with the flux proportional to the projected area, $F_0(\cos\psi)=A \cos\psi$. (b) an anisotropic model of $F_0(\cos\psi)=A \cos\psi (1+ \eta \cos\psi)$ with the parameter $\eta$ giving the degree of deviation from the black body.
Such anisotropy can be produced for example by thermal Comptonization (Paper I; \citealt{st85,vp04}), which is the dominant radiative process giving rise to X-rays in the hard state of Cyg X-1 \citep[e.g.,][]{gier97,pc98,p98}. (c) the steady jet model, $F_0(\cos\psi)=A [\gamma_{\rm j}(1- \beta_{\rm j} \cos\psi)]^{-(1+\Gamma)}$, where $\beta_{\rm j}=v/c$ is the jet velocity, $\gamma_{\rm j} =1/\sqrt{1-\beta_{\rm j}^2}$ is the jet Lorentz factor, and $\Gamma$ is the photon index of the X-ray radiation. By the 'jet', we mean here either the base of the jet in the direct vicinity of the black hole, or an outflowing corona \citep[see e.g.][]{b99,mbp01,mnw05}. (d) the slab absorption model, $F_0(\cos\psi)= A \exp(-\tau/\cos\psi)$, which can be associated, for example, with some kind of a disc outflow. All the models provide a good fit to the superorbital variability of Cyg X-1 (Paper I). Models (b) and (c) can be considered as more physically motivated, but we consider here all of them. In order to keep the number of parameters to a minimum, we fix the inclination of the system at $i=40\degr$. The precession angle $\delta$ is not well determined in models (b)--(d) as it is anticorrelated with other parameters ($\eta, \beta_{\rm j}, \tau$, see Paper I). Thus we fix it at three values between $5\degr$ and $10\degr$. The parameters describing the absorption of radiation are the characteristic optical depths $\tau_{\rm iso,0}$ and $\tau_{\rm b,0}$ for the wind and bulge, respectively. Additional parameters are the bulge density scale measured in units of the disc size, $r_{\rm b}/R$, and the phase shift, $\phiorb_{\rm b}$, of the position of the bulge centre. An arbitrary shift in the superorbital phase, $\Delta\phisup$ (due to the uncertainty of the superorbital ephemeris), is also introduced (i.e. we replace $\Phi$ by $\Phi-\Delta\phisup$ in all formulae of Section \ref{sec:model}). The parameters describing the radiation pattern are the normalization, $A$ (for the ASM A channel), and the anisotropy parameter, $\eta$, in model (b), $\beta_{\rm j}$ in model (c) (where we fix $\Gamma$ at a typical hard-state value of 1.7), and the slab optical depth, $\tau$, in model (d). We consider the prograde precession of the disc (L06). In order to understand the influence of the model complexity on the results, we consider first only the isotropic component of the wind (models W in Table \ref{tab:fits}) and fit the ASM A profiles only, which show the strongest variability. We find that the parameters $r_{\rm b}/R$ and $\tau_{\rm b,0}$ are anticorrelated, and cannot be determined separately. This happens because various combinations of the two parameters can give the same optical depth through the bulge at a given impact parameter. Therefore, we fix $r_{\rm b}/R=0.2$. The best-fitting model parameters are presented in Table \ref{tab:fits}. For model (a), the precession angle agrees within the errors with the results of Paper I. The jet model, (c), provides a slightly better fit for smaller precession angles. The models (b) and (d) also give statistically similar fits. The phase shifts $\Delta\phisup$ and $\phiorb_{\rm b}$ are well constrained by all the models. The fits require the shift of the bulge centre from the line connecting the stars by $\phiorb_{\rm b} \approx0.07$ (i.e., $25 \degr$, see Fig.\ \ref{fig:model}c). All the models give similar optical depths through the wind and the bulge. The wind optical depth $\tau_{\rm w}$ varies between 0.28 (i.e. $\approx3\tau_{\rm iso,0}$) and 0.08 (i.e.
$\approx\tau_{\rm iso,0}$) for $\phi$ varying between 0 and 0.5. For the bulge, $\tau_{\rm b}$ varies between 0.15 and 0.007 at $\Phi=0$ and between 0.05 and 0.008 at $\Phi=0.5$. We now add the focused wind component with the parameters specified in Section \ref{sec:model} and fit the data using jet model (c). The resulting best-fitting parameters are not very different from those obtained with the isotropic wind model (compare entries 5 and 7 in Table \ref{tab:fits}). This is expected, because the focused wind affects the total opacity on average at about the 30 per cent level. Finally, we fit the light curves in channels A and C simultaneously. Two additional parameters have to be introduced: the ratio of the absorption coefficients (and optical depths) in channels C and A, $\tau_{\rm C}/\tau_{\rm A}$, and the ratio of the normalizations (intrinsic hardness ratio), $C/A$. The best-fitting results for the main model parameters change only slightly (compare entries 5 and 8 in Table \ref{tab:fits}). Because the mean absorption coefficients in channels A and C differ only by a factor of 3, the absorbing gas has to be rather strongly ionized. For the retrograde precession, all these models give much worse fits to the data. \section{Discussion} \subsection{The origin of beat frequencies} \label{sec:beat} \begin{figure} \centerline{\epsfig{file=fig9_fourier.eps,width=8cm}} \caption{Power density spectra predicted by our models (arbitrary normalization). The solid curves show the power spectrum of the flux for the outflow model (model 5 in Table \ref{tab:fits}). The inset zooms on the frequency range near $1/P$. The dashed curves show the model with the absorption only in the wind, i.e., neglecting the presence of the bulge. } \label{fig:fourier} \end{figure} A collateral effect of the coupling between the orbital and superorbital modulations may be the appearance of additional frequencies in the power spectrum. If the two modulations were independent, there would be simply two peaks in the power spectrum at the corresponding frequencies. On the other hand, if one modulation depends on the other, beat frequencies, at $\nu=1/P\pm 1/P_{\rm sup}$, may appear. Indeed, L06 reported finding the lower of the beat frequencies (albeit at a relatively limited statistical significance), and also found that its origin from X-ray reflection from the surface of the companion is unlikely. Here, we have tested whether the discovered dependence of the orbital modulation on the superorbital phase may indeed cause beat frequencies to appear. Using our model (given by equation (\ref{eq:attenuation}) and other formulae of Sections \ref{sec:model}, \ref{sec:fitting} with parameters of model 5 in Table \ref{tab:fits}), we have generated a light curve and computed its Fourier power-density spectrum (PDS). We have found that our model gives rise to strong peaks at frequencies $1/P_{\rm sup}$ and $1/P$ with harmonics as well as to two peaks in the power spectrum at the beat frequencies. Interestingly, the lower beat-frequency peak is 3.7 to 5.4 times stronger than the higher one (depending on whether we compute Fourier transforms from the flux or from the logarithm of the flux). Fig.\ \ref{fig:fourier} shows the flux Fourier transform for this case for the outflow model. We then compare these predictions with a simpler model where absorption in the bulge is neglected.
In this case, there are two beat-frequency peaks of equal strength in the PDS of the flux (see the dashed curves in Fig.\ \ref{fig:fourier}), while they are missing in the PDS computed from the logarithm of the flux, because the coupling disappears. If, on the other hand, only the bulge produces absorption, the PDS shows both beat-frequency peaks with strength ratios of 10 and 5 for the flux and its logarithm, respectively. Finally, we experimented with the model where the intrinsic flux, $F_0$, was assumed to be constant as a function of the superorbital phase and both the bulge and the wind are responsible for absorption. Now, the strength of the peak at $1/P_{\rm sup}$ has diminished by three orders of magnitude, while the behaviour of the PDS at $1/P$ and the beat frequencies was almost identical to the full model with variations of $F_0$ (the ratio of peak strengths is 6.5 and 5.5 for the flux and its logarithm, respectively). We see that the lower beat-frequency peak is always stronger than the higher one when absorption is modulated by the bulge (for prograde precession). The coupling discovered in this work thus predicts the presence of beat frequencies with a stronger low-frequency peak, which is consistent with the discovery by L06 of only the low-frequency peak. \subsection{Superorbital variability and outbursts of Cyg X-1} \label{sec:outbursts} It is of interest to consider whether the superorbital variability of Cyg X-1 is related to other aspects of the source activity. Recently, the MAGIC collaboration \citep{albert07} reported detecting TeV emission from Cyg X-1. That detection, on MJD 54002, took place in the middle of a strong X-ray outburst of Cyg X-1 \citep{t06}. We have checked that this time corresponds to the peak of the superorbital cycle, $\Phi\simeq 0.5$, when the disc and jet of Cyg X-1 are most face-on. On the other hand, L06 found that the superorbital cycle was uncorrelated with the appearance of other strong X-ray outbursts of the source with durations of days \citep{sbp01,gol03}. Thus, the significance of the coincidence of the TeV burst with the peak of the superorbital cycle in the present case remains unknown. Interestingly, the orbital phase of the TeV outburst was at $\phi\simeq 0.9$ \citep{albert07}, at which absorption of TeV photons by pair production on the stellar photons is very strong. A possible way to obtain detectable TeV emission is then via pair cascades. We note that the statistical significance of the detection was relatively limited, $4.1\sigma$, and thus an independent confirmation of this detection is desirable.
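Before concluding, we note that the essence of the beat-frequency asymmetry of Section \ref{sec:beat} can be captured by a toy light curve in which, as suggested by equation (\ref{eq:beta}), the bulge opacity couples the two phases through a $\cos(\phi-\Phi)$ term. A minimal sketch (Python; the amplitudes are illustrative, not the fitted values):
\begin{verbatim}
import numpy as np

P, Psup = 5.599829, 151.43              # orbital and superorbital periods [d]
t = np.arange(0.0, 3650.0, 0.07)        # ~10 yr, evenly sampled
phi, Phi = 2 * np.pi * t / P, 2 * np.pi * t / Psup

# Superorbitally modulated intrinsic flux, a wind term (phi only) and a
# bulge-like term carrying the phi-Phi coupling
F = (1.0 + 0.3 * np.cos(Phi)) * np.exp(-0.15 * np.cos(phi)) \
    * np.exp(-0.05 * np.cos(phi) - 0.05 * np.cos(phi - Phi))

pds = np.abs(np.fft.rfft(F - F.mean()))**2
freq = np.fft.rfftfreq(t.size, d=t[1] - t[0])
for f0 in (1 / P - 1 / Psup, 1 / P, 1 / P + 1 / Psup):
    k = np.argmin(np.abs(freq - f0))
    print(f"{f0:.5f} 1/d: power {pds[k]:.3g}")
\end{verbatim}
In this toy model the $\cos(\phi-\Phi)$ term feeds power preferentially into the lower beat frequency, $1/P-1/P_{\rm sup}$; for retrograde precession ($\Phi\rightarrow-\Phi$) the asymmetry is reversed. The relative strength of the two peaks in the full model depends on the detailed geometry.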
Using a simple model of the bulge and the stellar wind incorporating the angular dependence of the intrinsic X-ray radiation from the black hole vicinity, we were able to reproduce the detailed shape of the superorbital variability as well as of the orbital modulation at various superorbital phases. We find that the bulge centre is displaced from the line connecting the stars by about $25\degr$. We have also studied the distribution of the X-ray dips over the superorbital phase, and we find that they concentrate towards the superorbital phase $0.1$, which coincides with the position of the flux minimum. We are thus in a position to claim that the X-ray dips observed in Cyg X-1 around zero orbital phase are directly related to the bulge, which, in turn, causes the variation of the orbital modulation with the superorbital phase. We Fourier-analyse our model and find that it explains the finding of only the lower beat frequency between the orbital and superorbital frequencies in the observed power spectrum (L06), provided the disc precession is prograde. We also find that both the X-ray and radio fluxes of Cyg X-1 in the hard state on time scales $\ga\! 10^4$-s have lognormal distributions, which complements the finding of a lognormal flux distribution in the hard state on $\sim$1-s time scales \citep{uttley05}. We stress that the lognormal character of the flux distribution requires that flux logarithms rather than fluxes themselves should be used for averaging and error analysis. We also correct a mistake in the treatment by V03 of the uncertainty of the intrinsic rms variability of light curves in the case when the uncertainty is higher than the intrinsic rms (which is often close to zero). The mistake stems from the failure of the assumption, used in the standard propagation of errors, that the uncertainty is much smaller than the estimated quantity. \section*{ACKNOWLEDGMENTS} JP has been supported by the Academy of Finland grant 110792. AAZ has been supported by the Academy of Finland exchange grant 112986, the Polish MNiSW grants 1P03D01128 and NN203065933 (2007--2010), and the Polish Astroparticle Network 621/E-78/SN-0068/2007. AI has been supported by the Graduate School in Astronomy and Space Physics, V\"ais\"al\"a foundation and by the Russian Presidential program for support of leading scientific schools (grant NSH-784.2006.2). We thank J. Miko{\l}ajewska for valuable discussion regarding the rotation speed of the companion of Cyg X-1. We are thankful to Guy Pooley for the data from the Ryle telescope. JP and AI acknowledge the support of the International Space Science Institute (Bern). JP thanks the Department of Astrophysical Sciences, Princeton University, for hospitality during his visit. We acknowledge the use of data obtained through the HEASARC online service provided by NASA/GSFC.
\section{Introduction} Synchronization, which describes the adjustment of the rhythms of self-sustained oscillators due to an interaction, is a fundamental phenomenon of nonlinear sciences. This phenomenon has been observed in physical, chemical, biological, and social systems \cite{1}. In recent years, many efforts have been devoted to extending the concept of synchronization to quantum systems such as Van der Pol oscillators \cite{2,3,4}, atomic ensembles \cite{5,6}, trapped ions \cite{7}, and cavity optomechanics \cite{8,9,10,11}. In general, a quantum system can be either continuous or discrete. In the previous studies \cite{2,3,4,5,6,7,8,9,10,11}, most authors have considered the quantum synchronization of continuous-variable systems with classical analogs, since such systems can be described by quasiprobability distributions in phase space such as the Wigner function. For example, in Refs. \cite{8,9,10,11}, the authors have investigated the quantum synchronization of optomechanical systems formed by optical and mechanical modes \cite{12,13,14,15}. Measures of complete and phase synchronization of continuous-variable quantum systems have been proposed \cite{16}. For discrete-variable systems without a classical analogue, the Pearson product-moment correlation coefficient can be used to measure the degree of synchronization of spin systems \cite{17}. The authors have investigated the synchronization of two qubits in a common environment using the Bloch-Redfield master equation and found that two qubits cannot be synchronized in the purely dephasing case \cite{17}. Recently, a measure of quantum synchronization using the Husimi Q representation and the concept of spin coherent states has been suggested by Roulet and Bruder \cite{18}. This measure can be used to study the synchronization of discrete-variable systems including qubits and qutrits. The authors have pointed out that qubits cannot be synchronized since they lack a valid limit cycle, while a spin 1 can be phase-locked to a weak external driving \cite{18}. Later, the authors investigated the quantum synchronization and entanglement generation of two qutrits using the Lindblad master equation \cite{19}. Very recently, the quantum synchronization of two quantum oscillators within one common dissipative environment at zero temperature was investigated with the help of a path integral formalism \cite{20}. In the previous works \cite{17,18,19}, the Markovian and Born approximations were employed and the temperature of the bath was assumed to be zero. Note that the rotating wave approximation was used in the previous work \cite{20}. Thus, the influence of the temperature of the bath and of non-Markovian effects was not taken into account in the above works. In the present paper, we study the quantum synchronization and correlations of two qutrits within one common bath using the hierarchy equation method \cite{21,22,23,24}. The two qutrits have no direct interaction. In particular, in the derivation of the hierarchy equations, the Markovian, Born, and rotating wave approximations are not used. The hierarchy equation method is a high-performance method and is suitable for strong- and ultrastrong-coupling systems like chemical and biophysical systems \cite{25,26,27,28}. Our results show that the measures of quantum synchronization and correlations can increase with the coupling strength between each qutrit and the common bath. The influence of the temperature of the bath depends heavily on the detuning of the two qutrits.
If the detuning is much smaller than the frequencies of the two qutrits, then the maximal value of the measure of quantum synchronization increases with the temperature of the bath. However, if the detuning is not much smaller than the frequencies of the two qutrits, the temperature of the bath can play a destructive role in the synchronization of the two qutrits. In addition, the correlation time of the bath plays an important role in the generation of quantum synchronization and correlations. Phase locking between two qutrits without direct interaction can be achieved if they are put into one common bath and dissipation is taken into account. In particular, two qutrits cannot be synchronized in the purely dephasing case. The Arnold tongue of synchronization and quantum correlations (measured by the quantum mutual information) can be obtained in the present model. The shape of the Arnold tongue can be adjusted by the temperature, coupling strength, and correlation time of the system. The organization of this paper is as follows. In Sec. II, we introduce the model and the hierarchy equation method. In Sec. III, we briefly review the measures of quantum synchronization and correlations. In Sec. IV, we investigate the influence of the temperature, coupling strength, and correlation time of the system on the quantum synchronization and correlations of the two qutrits. In Sec. V, we summarize our results. \section{Model and hierarchy equation method} In this section, we introduce the model and the hierarchy equation method used in the present work. We consider a system formed by two qutrits with no direct interaction, whose free Hamiltonian is (setting $\hbar = 1$) \begin{eqnarray} H_S = \omega_1 J_1^z + \omega_2 J_2^z, \end{eqnarray} where $\omega_1$ and $\omega_2$ are the frequencies of qutrit 1 and qutrit 2, respectively. The detuning between the two qutrits is $\Delta = \omega_2 - \omega_1$. We assume the two qutrits are placed in a common thermal bath. The free Hamiltonian of the thermal bath is \begin{eqnarray} H_B = \sum_k \omega_k b_k^{\dag} b_k, \end{eqnarray} where $\omega_k$ is the frequency of the $k$th mode of the thermal bath. The interaction Hamiltonian of the two qutrits and the bath is \begin{eqnarray} H_I = \sum_k g_k V(b_k^{\dag} + b_k), \end{eqnarray} where $g_k$ is the coupling strength between the qutrits and the $k$th mode of the bath. Here, $b_k^{\dag}$ and $b_k$ are the creation and annihilation operators of the thermal bath; $V$ is the system operator coupled to the bath. Without loss of generality, we suppose \begin{eqnarray} V = (1 + h)(J_1^z + J_2^z) + (1 - h)(J_1^x + J_2^x), \end{eqnarray} where $h$ is an anisotropy coefficient with $-1 \leq h \leq 1$. In the interaction picture, the dynamics of the present system is given by \cite{22} \begin{eqnarray} \rho^I_S(t) &=& U(t) \rho_S(0), \label{sol} \\ U(t) &=& \mathcal{T} \exp\{ -\int _0^t dt_2\int _0^{t_2} dt_1 V(t_2)^{\times}[\Re[C(t_2 - t_1)]V(t_1)^{\times} \\ \nonumber && + i \Im[C(t_2 - t_1)] V(t_1)^\diamond] \}, \end{eqnarray} where $\rho_S$ is the reduced density matrix of the system and $\mathcal{T}$ is the chronological time-ordering operator. Here, $O_1^{\times}O_2 \equiv [O_1,O_2] = O_1O_2 - O_2O_1$ and $O_1^{\diamond}O_2 \equiv \{O_1,O_2\} = O_1O_2 + O_2O_1$. Note that $\Re[C(t_2 - t_1)]$ and $\Im[C(t_2 - t_1)]$ are the real and imaginary parts of the bath time-correlation function $C(t_2 - t_1) = \langle B(t_2)B(t_1)\rangle$, respectively, and $B(t) = \sum_k (g_k b_k e^{-i\omega_k t} + g_k^* b_k^{\dag}e^{i\omega_k t})$.
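For concreteness, the system operators introduced above are easily assembled numerically; the following minimal sketch (in Python, with placeholder parameter values in units of $\omega_1$) builds the spin-1 matrices and the operators $H_S$ and $V$ defined above:
\begin{verbatim}
import numpy as np

# Spin-1 operators in the basis {|1,1>, |1,0>, |1,-1>}.
Jz = np.diag([1.0, 0.0, -1.0])
Jx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
I3 = np.eye(3)

def on_both(op):
    """Embed a single-qutrit operator as op_1 + op_2 on the 9-dim space."""
    return np.kron(op, I3) + np.kron(I3, op)

w1, w2, h = 1.0, 1.01, -1.0        # placeholder values (units of omega_1)

H_S = w1 * np.kron(Jz, I3) + w2 * np.kron(I3, Jz)   # free Hamiltonian
V = (1 + h) * on_both(Jz) + (1 - h) * on_both(Jx)   # system-bath operator
\end{verbatim}
Note that $h=-1$ gives a purely dissipative (transverse) coupling, while $h=1$ gives the purely dephasing case discussed below.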
In the present work, we choose the Drude-Lorentz spectrum \cite{21,22,23,24} \begin{eqnarray} J(\omega) = \omega \frac{2\lambda\gamma}{\pi(\gamma^2 + \omega^2)}, \end{eqnarray} where $\lambda$ is the coupling strength between the qutrits and the bath, and $\gamma$ represents the width of the spectral distribution of the bath mode. The quantity $1/\gamma$ represents the correlation time of the bath. In particular, if $\gamma$ is much larger than any other frequency scale, the Markovian approximation is valid. For a bath with the Drude-Lorentz spectrum, the bath correlation function is \cite{23} \begin{eqnarray} \langle B(t_2)B(t_1)\rangle &=& \sum_{k=0}^{\infty} c_k e^{-\nu_k |t_2 - t_1|}, \label{bath_corr}\\ \nu_k &=& \frac{2\pi k }{\beta}(1 - \delta_{0k}) + \gamma \delta_{0k},\\ c_k &=& \frac{4\gamma \lambda \nu_k}{\beta(\nu_k^2 - \gamma^2)}(1 - \delta_{0k}) + \gamma \lambda [\cot(\frac{\gamma\beta}{2}) - i]\delta_{0k}, \end{eqnarray} where $\beta = 1/(k_B T)$ is the inverse temperature of the thermal bath. Using Eqs.(\ref{sol}) and (\ref{bath_corr}), the dynamics of the model can be described by the hierarchy equation \cite{21} \begin{eqnarray} \dot{\rho}^n(t) &=& -(iH_s^{\times} + \sum_{\mu = 1,2}\sum_{k=0}^M n_{\mu k}\nu_k)\rho^n(t)\nonumber\\ && - \sum_{\mu = 1,2} (\frac{2\lambda}{\beta\gamma} - i\lambda - \sum_{k=0}^M\frac{c_k}{\nu_k}) V^{\times}_{\mu}V^{\times}_{\mu}\rho^n(t) \nonumber\\ && - i\sum_{\mu = 1,2}\sum_{k=0}^M n_{\mu k}[c_k V_{\mu}\rho^{n_{\mu k} ^-}(t) - c_k^* \rho^{n_{\mu k} ^-}(t)V_{\mu}]\nonumber\\ && - i\sum_{\mu = 1,2}\sum_{k=0}^M V^{\times}_{\mu} \rho^{n_{\mu k} ^+}(t). \end{eqnarray} Note that $\rho^{n_{\mu k} ^+} = \rho^{n_{\mu k} \rightarrow n_{\mu k} + 1}$ ($\rho^{n_{\mu k} ^-} = \rho^{n_{\mu k} \rightarrow n_{\mu k} - 1}$) denotes an increase (decrease) in the $\mu k$'th component of the multi-index. It is worth noting that in the derivation of the above equation, the Markovian, Born, and rotating wave approximations are not used. The hierarchy equation method is an exact method which is also suitable for strong- and ultrastrong-coupling systems. The density matrix of the two qutrits at an arbitrary time can be obtained from the initial state of the system and the above hierarchy equation of motion. In the present work, we assume the two qutrits are put into a common bath, i.e., $V_1 = V_2 = V$. \section{Measures of quantum synchronization and correlations} For a discrete-variable system, one can use the Husimi Q representation to describe the phase portrait of a spin coherent state. In general, a spin coherent state is defined as \cite{18,19} \begin{eqnarray} |\theta, \phi\rangle = e^{-i\phi J_z} e^{-i\theta J_y} |J, J\rangle, \end{eqnarray} with the completeness relation \begin{eqnarray} \int_0^\pi d\theta \sin{\theta} \int_0^{2\pi} d\phi |\theta, \phi\rangle \langle \theta, \phi| = \frac{4\pi}{2J + 1}\,\hat{1}. \end{eqnarray} For a spin 1 system, we have \begin{eqnarray} |\theta, \phi\rangle &=& \frac{e^{-i\phi}}{2}(1 + \cos\theta) |1,1\rangle + \frac{\sin{\theta}}{\sqrt{2}} |1,0\rangle \nonumber\\ && + \frac{e^{i\phi}}{2} (1 - \cos{\theta}) |1,-1\rangle.
\label{spin_coherent_states} \end{eqnarray} The measure of quantum synchronization proposed by Roulet and Bruder is defined as \cite{19} \begin{eqnarray} S_{r}(\phi) &=& \int_0^{2 \pi} d\phi_2\int_0^{\pi} d\theta_1 \int_0^{\pi} d\theta_2 \sin{\theta_1} \sin{\theta_2} \nonumber\\ && \times Q(\theta_1, \theta_2, \phi + \phi_2, \phi_2) - \frac{1}{2\pi}, \label{def_S} \end{eqnarray} where \begin{eqnarray} Q(\theta_1, \theta_2, \phi + \phi_2, \phi_2) &=& \frac{9}{16\pi^2} (\langle \theta_1, \phi + \phi_2|\otimes\langle \theta_2, \phi_2|) \nonumber\\ && \rho (| \theta_1, \phi + \phi_2 \rangle \otimes | \theta_2, \phi_2\rangle). \label{Q} \end{eqnarray} Here, $Q(\theta_1, \theta_2, \phi + \phi_2, \phi_2)$ is the Husimi Q function and $\phi = \phi_1 - \phi_2$ is the relative phase of the two spins. It can be viewed as a phase-space distribution of the density matrix $\rho$ based on spin coherent states. Note that $S_r(\phi)$ depends upon the relative phase $\phi$ explicitly. Physically, it can be used to estimate whether two spins have a tendency towards phase locking \cite{19}. If $S_r(\phi)$ is always zero, then there is no fixed phase relation between the two spins, i.e., no phase locking. Using Eqs. (\ref{spin_coherent_states})-(\ref{Q}), we obtain the measure of synchronization of two spins 1 as \begin{eqnarray} S_r(\phi) &=& \frac{(32 \xi + 9\pi^2 \eta)}{256\pi}, \\ \xi &=& e^{2i\phi} \rho_{37} + e^{-2i\phi} \rho_{73}, \\ \eta &=& e^{i\phi}(\rho_{24} + \rho_{35} + \rho_{57} + \rho_{68}) \nonumber\\ && + e^{-i\phi}(\rho_{42} + \rho_{53} + \rho_{75} + \rho_{86}), \end{eqnarray} where $\rho_{jk}$ is the element of the density matrix $\rho$. In order to measure the entanglement of the two spins, we employ the logarithmic negativity, which is defined by \cite{29,30} \begin{equation} E(\rho)\equiv \log_2{(1+2N)}= \log_2||\rho^{T}||, \end{equation} with $\rho^{T}$ being the partial transpose of the density matrix $\rho$. Here, $||\rho^{T}||$ is the trace norm of $\rho^{T}$ and $N$ is the negativity defined by \cite{29,30} \begin{equation} N\equiv\frac{||\rho^{T}||-1}{2}. \end{equation} $N$ is the absolute value of the sum of the negative eigenvalues of $\rho^{T}$. Now, we consider the quantum mutual information $I$ as a measure of the total correlations between the two subsystems \cite{19,31} \begin{eqnarray} I = S(\rho_1) + S(\rho_2) - S(\rho), \end{eqnarray} with $\rho_1 = Tr_2(\rho)$ and $\rho_2 = Tr_1(\rho)$. Note that $S(\rho) = - Tr[\rho \ln(\rho)]$ is the von Neumann entropy of the density matrix $\rho$. In Ref. \cite{31}, the authors have proposed mutual information as an order parameter for quantum synchronization of a quantum system. \section{Discussions} \subsection{Influence of coupling strength $\lambda$} \begin{figure}[tbp] \centering {\scalebox{0.3}[0.3]{\includegraphics{fig1.eps}}} \vspace*{8pt} \caption{$S_r(\phi)$ is plotted as a function of $\phi$ for $\lambda = 0$ (red line), $\lambda = 0.02 \omega_1$ (green line), and $\lambda = 0.05 \omega_1$ (blue line). The parameters are $\beta = 0.3/\omega_1, \gamma = 2\omega_1, \Delta = 0.01\omega_1$, and $h = -1$. } \label{fig1} \end{figure} In Fig. 1, we plot $S_r(\phi)$ of the steady state as a function of the relative phase $\phi$ of the two spins for different values of the coupling strength $\lambda$. From Fig. 1, one can find that if the coupling constant is zero, then $S_r(\phi)$ is always zero and there is no fixed phase relation between the two spins. This implies that the two spins cannot be synchronized in the case of $\lambda = 0$.
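For concreteness, the closed-form expression above is straightforward to evaluate numerically; a minimal sketch (the density matrix in the example is a random test state, for illustration only) reads:
\begin{verbatim}
import numpy as np

def S_r(rho, phi):
    """Synchronization measure S_r = (32*xi + 9*pi^2*eta)/(256*pi) for
    two spins 1; rho is the 9x9 density matrix, indices 1-based as in
    the text."""
    r = lambda j, k: rho[j - 1, k - 1]
    xi = np.exp(2j * phi) * r(3, 7) + np.exp(-2j * phi) * r(7, 3)
    eta = (np.exp(1j * phi) * (r(2, 4) + r(3, 5) + r(5, 7) + r(6, 8))
           + np.exp(-1j * phi) * (r(4, 2) + r(5, 3) + r(7, 5) + r(8, 6)))
    return np.real(32 * xi + 9 * np.pi**2 * eta) / (256 * np.pi)

# Example with a random test density matrix (illustration only).
A = np.random.randn(9, 9) + 1j * np.random.randn(9, 9)
rho = A @ A.conj().T
rho /= np.trace(rho)
phis = np.linspace(0, 2 * np.pi, 200)
print(max(S_r(rho, p) for p in phis))      # maximal value of S_r(phi)
\end{verbatim}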
Physically, this is because, in the case of $\lambda = 0$, there is no direct or indirect interaction between the two qutrits. It is obvious that the two qutrits cannot be synchronized without any interaction. On the other hand, the maximal value of $S_r(\phi)$ increases with the coupling strength $\lambda$. For example, the maximum of $S_r(\phi)$ can be about 0.037 if $\lambda = 0.05\omega_1$. Therefore, the two spins can be synchronized in the presence of the interaction between the spins and the common bath. \subsection{Influence of anisotropy coefficient $h$} \begin{figure}[tbp] \centering{\scalebox{0.3}[0.3]{\includegraphics{fig2a.eps}}} \centering{\scalebox{0.3}[0.3]{\includegraphics{fig2b.eps}}} \caption{$S_r(\phi)$ is plotted as a function of $\phi$ for $\gamma = 0.2\omega_1$ (upper panel) and $\gamma = 4\omega_1$ (lower panel). The parameters are $\beta = 0.3/\omega_1, \Delta = 0.01\omega_1$, and $\lambda = 0.03 \omega_1$. } \label{fig2} \end{figure} We now turn to the influence of the anisotropy coefficient $h$ on the synchronization of the two spins. The synchronization of two qubits within a common Markovian environment has been investigated by employing the Bloch-Redfield master equation \cite{17}. It was found that two qubits cannot be synchronized in the purely dephasing case. The Markovian and Born approximations were employed in this work \cite{17}. In the following, we show that the two spins cannot be synchronized in the purely dephasing case, without using the Markovian and Born approximations. In Fig. 2, we plot $S_r(\phi)$ as a function of $\phi$ for different values of $h$ with $\gamma = 0.2\omega_1$ (upper panel) and $\gamma = 4\omega_1$ (lower panel). One can clearly see that the maximal value of $S_r(\phi)$ decreases as the parameter $h$ increases. In particular, the values of $S_r(\phi)$ for $\gamma = 0.2 \omega_1$ (upper panel) and $\gamma = 4 \omega_1$ (lower panel) are always zero if $h = 1$, so the two spins cannot be synchronized in the purely dephasing case. Note that, in Ref. \cite{17}, the authors have assumed that $\gamma \gg \omega_1$ and $\gamma \gg \omega_2$ in order to ensure the validity of the Markovian approximation. However, in the present work, we use the hierarchy equation method to investigate the present system without the Markovian and Born approximations. More precisely, it is not necessary to assume $\gamma \gg \omega_1$ and $\gamma \gg \omega_2$ in our work. We extend the result of Ref. \cite{17} to the case of a non-Markovian bath: two spins without direct interaction cannot be synchronized in the purely dephasing case. We find that dissipation is indispensable for the synchronization of two spins in either a Markovian or a non-Markovian environment. \subsection{Influence of temperature} \begin{figure}[tbp] \centering{\scalebox{0.3}[0.3]{\includegraphics{fig3a.eps}}} \centering{\scalebox{0.3}[0.3]{\includegraphics{fig3b.eps}}} \caption{$S_r(\phi)$ is plotted as a function of $\phi$ with $\Delta = 0.001\omega_1$ (upper panel) and $\Delta = 0.1\omega_1$ (lower panel). The parameters are $\gamma = 0.2 \omega_1$, $\lambda = 0.05 \omega_1$, and $h = -1$. } \label{fig3} \end{figure} \begin{figure}[tbp] \centering{\scalebox{0.3}[0.3]{\includegraphics{fig4a.eps}}} \centering{\scalebox{0.3}[0.3]{\includegraphics{fig4b.eps}}} \caption{$S_r(\phi)$ is plotted as a function of $\phi$ with $\Delta = 0.001\omega_1$ (upper panel) and $\Delta = 0.1\omega_1$ (lower panel). The parameters are $\gamma = 20 \omega_1$, $\lambda = 0.05 \omega_1$, and $h = -1$.
} \label{fig4} \end{figure} The synchronization of two spins has been studied with the help of the Lindblad master equation, where the temperature of the bath was assumed to be zero \cite{18,19}. In this section, we investigate the influence of the temperature of the bath. Comparing the upper and lower panels of Fig. 3 ($\gamma = 0.2\omega_1$) and Fig. 4 ($\gamma = 20\omega_1$), we see that the effect of the temperature of the bath depends crucially on the detuning of the two spins. On the one hand, if the detuning is much smaller than the frequencies of the spins ($\Delta \ll \omega_i$), the maximal value of $S_r(\phi)$ increases with the increase of the temperature, as one can see from the upper panels of Figs. 3 and 4. On the other hand, the maximum of $S_r(\phi)$ decreases with the increase of the temperature if $\Delta = 0.1\omega_1$, as one can see from the lower panels of Figs. 3 and 4. One possible reason for the different influences of the temperature of the bath on $S_r(\phi)$ for different detunings $\Delta$ is as follows. The interactions between the qutrits and the common bath play an important role in the generation of $S_r(\phi)$. The two qutrits interact with each other indirectly via their direct interactions with the common bath. The temperature of the common bath plays a constructive role in this process. However, as the system evolves, the interactions between the common bath and the two qutrits can disturb the dynamics of the two qutrits. In this case, the temperature of the bath plays a destructive role. The steady state value of the quantum synchronization measure results from the competition between these two effects of the common bath. If the detuning $\Delta$ is very small, the two qutrits can be synchronized in a short time and the temperature of the bath plays a constructive role. However, if the detuning $\Delta$ is large enough, it takes a long time to synchronize the two qutrits and the temperature of the bath plays a destructive role. \subsection{Arnold tongue} \begin{figure}[tbp] \centering{\scalebox{0.4}[0.3]{\includegraphics{fig5.eps}}} \caption{The logarithmic negativity $E$, mutual information $I$, and $S_r(\phi=0)$ are plotted as functions of the dimensionless time $\omega_1 t$ for $\Delta = 0.001\omega_1$, $\lambda = 0.05\omega_1$, $\beta = 0.3/ \omega_1$, and $h = -1$. } \label{fig5} \end{figure} \begin{figure}[tbp] \centering{\scalebox{0.4}[0.3]{\includegraphics{fig6.eps}}} \caption{The Arnold tongue of the present system. The quantum mutual information $I$ (left panel) and the maximal value of $S_r(\phi)$ (right panel) are plotted as functions of the detuning $\Delta$ and coupling strength $\lambda$ with $\gamma = 0.2 \omega_1$, $\beta = 0.3/ \omega_1$, and $h = -1$. } \label{fig6} \end{figure} \begin{figure}[tbp] \centering{\scalebox{0.4}[0.3]{\includegraphics{fig7.eps}}} \caption{The Arnold tongue of the present system. The quantum mutual information $I$ (left panel) and the maximal value of $S_r(\phi)$ (right panel) are plotted as functions of the detuning $\Delta$ and coupling strength $\lambda$ with $\gamma = 4 \omega_1$, $\beta = 0.3/ \omega_1$, and $h = -1$. } \label{fig7} \end{figure} In Fig. 5, we plot the logarithmic negativity $E$, mutual information $I$, and $S_r(\phi=0)$ as functions of the dimensionless time $\omega_1 t$. The entanglement first increases and then decreases with time. Eventually, the entanglement becomes zero at $\omega_1 t \approx 1.5$, while $I$ and $S_r$ remain nonzero at this time.
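These quantities follow directly from the two-qutrit density matrix; for reference, a minimal sketch (illustrative code for an arbitrary $9\times 9$ state $\rho$, not our production implementation) of the logarithmic negativity and the mutual information is:
\begin{verbatim}
import numpy as np

def log_negativity(rho, d=3):
    """E = log2 ||rho^T||_1, partial transpose taken on qutrit 2."""
    r = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d*d, d*d)
    return np.log2(np.sum(np.abs(np.linalg.eigvalsh(r))))

def entropy(rho):
    """Von Neumann entropy S = -Tr[rho ln rho]."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def mutual_information(rho, d=3):
    """I = S(rho_1) + S(rho_2) - S(rho), with rho_i the partial traces."""
    r = rho.reshape(d, d, d, d)
    rho1 = np.trace(r, axis1=1, axis2=3)
    rho2 = np.trace(r, axis1=0, axis2=2)
    return entropy(rho1) + entropy(rho2) - entropy(rho)
\end{verbatim}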
After a certain time interval, the values of $I$ and $S_r$ no longer change with time and the two spins are synchronized. In order to see the steady state mutual information and synchronization measure more clearly, we plot the quantum mutual information $I$ (left panel) and the maximum of $S_r(\phi)$ (right panel) as functions of the detuning $\Delta$ and coupling strength $\lambda$ in Figs. 6 and 7. The Arnold tongue, which is a characteristic property of synchronization, can be observed in these figures. We calculate the logarithmic negativity of the two spins for many different parameters and find that there is no steady state entanglement even in the presence of synchronization. This result is similar to the previous works \cite{3,31}. Consequently, the mutual information has been proposed as an order parameter for quantum synchronization \cite{31}. In the present work, we assume there is no direct interaction between the two spins and find that they cannot be entangled in the steady state, i.e., $E(\rho_{steady}) = 0$. However, the mutual information of the two spins at steady state could be larger than zero. Therefore, we plot the mutual information of the two spins in Figs. 6 and 7. Comparing Fig. 6 and Fig. 7, we find that the Arnold tongue can be adjusted by the parameter $\gamma$. In particular, the Arnold tongue in Fig. 6 is very narrow, and it is usually very difficult to observe the synchronization of two spins experimentally in such a case \cite{1}. If we increase the parameter $\gamma$, then the Arnold tongue can be broadened significantly, as one can see from Fig. 7. Therefore, the synchronization of two spins could be observed in experiments more easily if we increase the parameter $\gamma$. \section{Conclusions} In the present work, we have studied the quantum synchronization and correlations of two qutrits in a common non-Markovian environment with the help of the hierarchy equation method. There is no direct interaction between the two qutrits. Each qutrit interacts with the common non-Markovian bath. In order to measure the quantum synchronization of discrete systems, we adopted the measure $S_r(\phi)$ proposed by Roulet and Bruder \cite{18,19}. This measure is based on the Husimi Q representation and spin coherent states. We have investigated the influence of the temperature, correlation time, and coupling strength between the qutrits and the bath on the quantum synchronization and correlations of the two qutrits, without using the Markovian, Born, and rotating wave approximations. The influence of dissipation and dephasing on the synchronization of the two qutrits was also discussed. We first discussed the influence of the coupling strength between the qutrits and the bath on the quantum synchronization of the two qutrits. If there is no interaction between each qutrit and the common bath, then they do not interact with each other at all. Obviously, they cannot be synchronized in this case. If we increase the coupling strength between the qutrits and the bath, they can be synchronized when dissipation is taken into account. In particular, we found that two spins without direct interaction in a non-Markovian bath cannot be synchronized in the purely dephasing case, which generalizes the Markovian result \cite{17}. In other words, dissipation is indispensable for the quantum synchronization of two spins in either a non-Markovian or a Markovian bath. Then, we studied the influence of the temperature of the common bath on the quantum synchronization of the two spins.
Our results show that the influence of the temperature of the common bath depends heavily on the detuning between the two spins. If the detuning is much smaller than the frequencies of the two spins, the maximal value of $S_r(\phi)$ increases with the increase of the temperature. However, when the detuning is not much smaller than the frequencies of the two spins, the maximal value of $S_r(\phi)$ decreases with the increase of the temperature. Finally, we plot the maximal value of $S_r(\phi)$ as a function of the detuning $\Delta$ and coupling strength $\lambda$. The Arnold tongue, which is a characteristic property of synchronization, can be observed in the present model. The logarithmic negativity of the two spins for many different parameters was also calculated. We find that there is no steady state entanglement even in the presence of synchronization \cite{3,31}. Therefore, we plot the mutual information of the two spins. The Arnold tongue can be adjusted significantly by the parameter $\gamma$. In particular, the Arnold tongue is very narrow in the non-Markovian case $\gamma < \omega_i$ ($i = 1, 2$). Thus, it is usually very difficult to observe the synchronization of two spins experimentally in the non-Markovian case \cite{1}. If we increase the parameter $\gamma$, then the Arnold tongue can be broadened significantly. Therefore, the synchronization of two spins could be observed in experiments more easily if they are put into a Markovian environment. \section*{Acknowledgments} This work is supported by the National Natural Science Foundation of China (Grant Nos. 11047115, 11365009 and 11065007), the Scientific Research Foundation of Jiangxi (Grant Nos. 20122BAB212008 and 20151BAB202020).
\section{Introduction} \subsection*{Inflation and High-Energy Physics} Inflation, an era of accelerated expansion of the early universe, currently provides us with the best understanding of the initial conditions for the subsequent cosmological eras. The simplest mechanism to explain this quasi-exponential, de Sitter-like expansion is to assume that the energy density of the universe was then dominated by that of a scalar field, the inflaton, endowed with a very flat potential in Planck units, so that it slowly rolls down its potential. This results in a homogeneous, isotropic and spatially flat universe on cosmological scales, as required by observations of the cosmic microwave background (CMB). Moreover, it naturally comes with a mechanism by which the quantum fluctuations of the inflaton are stretched to cosmological scales to give rise to primordial density fluctuations at the origin of the CMB anisotropies and of the large scale structure of the universe that we observe today, a scenario in perfect agreement with the latest CMB data from the Planck satellite~\cite{Akrami:2018odb,Akrami:2019izv}. Despite its success at explaining data in a simple manner, single-field slow-roll inflation is usually seen only as a phenomenological description that emerges from a more realistic physical framework to be determined (see, e.g., Ref.~\cite{Baumann:2014nda}). One of the main reasons behind this is the peculiar ultraviolet (UV) sensitivity of inflation: order-one changes in the strengths of the interactions of the field(s) responsible for inflation with Planck-scale degrees of freedom generically have significant effects on the inflationary dynamics, sometimes to the point of ruining inflation itself. Addressing this UV sensitivity implies justifying in a controllable setup that high-energy interactions are innocuous, which can be done either by specifying the physics at the Planck scale, typically in string theory constructions, or at least by taking it into account using the methods of effective field theory (EFT). Either way, this naturally leads one to consider the impact of the existence of several degrees of freedom during inflation, and indeed the UV sensitivity of inflation provides us with a formidable opportunity to use the early universe as a giant particle detector. In this respect, looking for new physics in cosmological data, for instance through non-Gaussianities and/or features of the primordial fluctuations, can be seen as looking for multifield effects (see, e.g., Refs.~\cite{Wands:2010af,Chen:2010xka,Wang:2013eqj,Renaux-Petel:2015bja,Meerburg:2019qqi} for reviews). Typical UV embeddings of inflation include several scalar fields interacting through their potential as well as through their kinetic terms, with a Lagrangian of the type \bae{\label{S-intro} {\cal L}=-\frac{1}{2}g^{\mu\nu}G_{IJ}(\phi)\partial_\mu\phi^I\partial_\nu\phi^J-V(\phi).
} This general class of so-called non-linear sigma models has been studied for a long time (see, e.g., the review~\cite{Lyth:1998xn}), but recent years have seen a flurry of activity concerning them (see, e.g., Refs.~\cite{Cremonini:2010ua, Turzynski:2014tza, Carrasco:2015uma, Renaux-Petel:2015mga, Hetz:2016ics, Achucarro:2016fby, Tada:2016pmk, Brown:2017osf, Renaux-Petel:2017dia, Mizuno:2017idt, Achucarro:2017ing, Krajewski:2018moi, Christodoulidis:2018qdw, Linde:2018hmx, Garcia-Saenz:2018ifx, Garcia-Saenz:2018vqf, Achucarro:2018vey, Achucarro:2018ngj, Achucarro:2019pux, Bjorkmo:2019aev, Grocholski:2019mot, Fumagalli:2019noh, Bjorkmo:2019fls, Christodoulidis:2019mkj, Christodoulidis:2019jsx, Aragam:2019khr, Mizuno:2019pcm, Bravo:2019xdo, Achucarro:2019mea, Garcia-Saenz:2019njm, Chakraborty:2019dfh, Bjorkmo:2019qno, Wang:2019gok, Ferreira:2020qkf, Braglia:2020fms, Palma:2020ejf, Fumagalli:2020adf, Braglia:2020ea}), in particular about geometrical aspects related to the curved field space described by the metric $G_{I J}$, the possibility to inflate along trajectories characterised by a strongly non-geodesic motion in field space, and the corresponding distinct observational signatures. \subsection*{Stochastic inflation} Standard Perturbation Theory (SPT) during inflation treats quantum fluctuations perturbatively around supposedly homogeneous classical background fields. This distinct treatment is not only conceptually unsatisfactory, but it is also expected to break down in the presence of very light scalar fields whose large-scale evolution is dominated not by their classical dynamics, but instead by quantum diffusion effects. The stochastic approach aims at dealing directly with the super-Hubble parts of the quantum fields driving inflation (see Refs.~\cite{STAROBINSKY1982175,Starobinsky:1986fx,NAMBU1988441,NAMBU1989240,Kandrup:1988sc,Nakao:1988yi,Nambu:1989uf,Mollerach:1990zf,Linde:1993xx,Starobinsky:1994bd} for the first papers on the subject). The corresponding theory, resulting from a \textit{coarse-graining} procedure, can be thought of as an EFT for long-wavelength modes during inflation. More precisely, and concentrating for definiteness on test scalar fields evolving in de Sitter space, the scalar fields are divided into infrared (IR) and UV parts delineated by a constant physical scale, the first one corresponding to the ``coarse grained'' super-Hubble parts of the quantum fields, with comoving momenta smaller than the time-dependent cutoff $k_\sigma(N)=\sigma a(N) H$, with a small positive parameter $\sigma \ll 1$ and where $N=\ln a$ is the number of $e$-folds\xspace. The IR sector of the theory can be understood as an open system receiving a continuous flow of UV modes as they cross the growing coarse-graining scale $k_\sigma$. Strikingly, the effect of this flow can be understood as classical random kicks added to the deterministic dynamics of the IR fields. More technically, the IR fields obey stochastic, so-called \textit{Langevin} equations, rather than the deterministic equations satisfied by the background fields in SPT. An excellent agreement between the stochastic formalism and usual quantum field theory techniques has been found in a number of studies, mostly in the paradigmatic setup of the $\lambda \phi^4$ theory in de Sitter space, but also including backreaction in the single-field slow-roll regime~\cite{Tsamis:2005hd,Prokopec:2007ak,Finelli:2008zg,Finelli:2010sh,Garbrecht:2013coa,Garbrecht:2014dca,Onemli:2015pma,Cho:2015pwa}.
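For instance, the late-time equilibrium reached by a test field with quartic self-interaction in de Sitter space follows in a few lines from the stochastic theory; the minimal sketch below (in Python, with an illustrative value of the coupling and in units $H=1$) evaluates $\langle\phi^2\rangle$ from the stationary distribution $P_{\rm eq}(\phi)\propto\exp\left[-8\pi^2V(\phi)/(3H^4)\right]$, the non-perturbative prediction against which the QFT computations can be checked:
\begin{verbatim}
import numpy as np
from math import gamma, pi, sqrt

# Stationary (de Sitter equilibrium) distribution of a test field with
# V = lam*phi^4/4, in units H = 1:  P_eq(phi) ~ exp(-8 pi^2 V / 3).
lam = 0.1                                  # illustrative coupling
phi = np.linspace(-6.0, 6.0, 120001)
w = np.exp(-8 * pi**2 * (lam * phi**4 / 4) / 3)

phi2_num = np.trapz(phi**2 * w, phi) / np.trapz(w, phi)
phi2_exact = gamma(0.75) / gamma(0.25) * sqrt(3 / (2 * pi**2 * lam))
print(phi2_num, phi2_exact)                # the two numbers agree
\end{verbatim}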
This agreement is noteworthy because the computations of correlation functions are almost immediate in the stochastic theory, at least in the simplest contexts: it enables one to determine without effort what would be the results of intricate loop calculations in renormalised perturbative quantum field theory. Moreover, and importantly, the stochastic formalism enables one to resum the IR divergences of perturbative QFT, and derive fully non-perturbative results (such as equilibrium distributions in de Sitter space), a subject that has attracted a lot of attention and has been investigated using a variety of methods (see, e.g., Refs.~\cite{Seery:2007we, Enqvist:2008kt, 2009JCAP...05..021S, Burgess:2009bs, Seery:2010kh, Gautier:2013aoa, Guilleux:2015pma, Gautier:2015pca, Markkanen:2017rvi, LopezNacir:2019ord, Gorbenko:2019rza, Mirbabayi:2019qtx, Adshead:2020ijf, Moreau:2020gib, Cohen:2020ph}). The stochastic formalism is not only useful for such formal investigations and for tackling the issues related to eternal inflation~\cite{Linde:1986fc,Linde:1986fd,Goncharov:1987ir}; it can also be used to compute observationally relevant quantities such as the power spectrum, higher $n$-point functions and other statistical properties of the adiabatic curvature perturbation $\zeta$ generated during inflation. This is achieved with the help of the separate universe approach, which states that each region of the universe slightly larger than the Hubble radius evolves like a separate FLRW universe that is locally homogeneous and evolves independently from its neighbours~\cite{Wands:2000dp}. Then, patching these regions enables one to deduce the curvature perturbation on even larger scales, identified as the fluctuation of the local number of $e$-folds of expansion $N(\mathbf{x})$, a method known as the $\delta N$ formalism~\cite{Salopek:1990jq,Sasaki:1995aw,Sasaki:1998ug,Lyth:2004gb}. Its generalisation to stochastic inflation was called the \emph{stochastic-$\delta N$ formalism}~\cite{Fujita:2013cna,Fujita:2014tja,Vennin:2015hra,Kawasaki:2015ppx,Assadullahi:2016gkk,Vennin:2016wnk,Pinol:2018euk}, and it enables one to compute the statistical properties of $\zeta$ in a non-perturbative manner (see also Refs.~\cite{Rigopoulos:2003ak,Rigopoulos:2004gr,Rigopoulos:2004ba,Rigopoulos:2005xx,Rigopoulos:2005ae} for a related approach), reducing to SPT in a suitable classical limit, while being able to treat the regime where quantum diffusion effects dominate. This has notably proved useful recently to compute the abundance of primordial black holes (PBH) resulting from the collapse of local overdensities generated during inflation~\cite{Kawasaki:2015ppx,Pattison:2017mbe,Ezquiaga:2018gbw,Biagetti:2018pjj,Ezquiaga:2019ftu,Panagopoulos:2019ail} (see, e.g., Refs.~\cite{Bullock:1996at,GarciaBellido:1996qt,Ivanov:1997ia,Yokoyama:1998pt} for early applications of the stochastic formalism in this context), a field that regained attention as PBHs are considered candidates for LIGO/Virgo gravitational wave sources~\cite{Bird:2016dcv,Clesse:2016vqa,Sasaki:2016jop}, a possibly important component of dark matter (see, e.g., Refs.~\cite{Carr:2016drx,Carr:2020gox}), as well as possible explanations of the microlensing events found by OGLE~\cite{Niikura:2019kqi} and even of the hypothetical Planet 9~\cite{Scholtz:2019csj,Witten:2020ifl}.
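To give a flavour of how such computations proceed in practice, the following minimal sketch implements the stochastic-$\delta N$ idea in the simplest possible setting: a single overdamped field in a quadratic potential (the potential and all parameter values are illustrative only):
\begin{verbatim}
import numpy as np

# Overdamped single-field Langevin equation in e-folds N (M_Pl = 1):
#   dphi/dN = -V'/(3 H^2) + (H / 2 pi) * xi,   with  3 H^2 = V,
# integrated with the Ito (prepoint) scheme.  The scatter of the
# first-passage time N through phi_end across realisations is the
# delta-N estimate of the statistics of zeta.
m = 1e-5                                   # illustrative mass scale
V = lambda phi: 0.5 * m**2 * phi**2
dV = lambda phi: m**2 * phi
rng = np.random.default_rng(0)

def efolds_to_end(phi0=15.0, phi_end=1.0, dN=0.02):
    phi, N = phi0, 0.0
    while phi > phi_end:
        H = np.sqrt(V(phi) / 3.0)
        phi += -dV(phi) / (3 * H**2) * dN \
               + (H / (2 * np.pi)) * np.sqrt(dN) * rng.standard_normal()
        N += dN
    return N

Ns = np.array([efolds_to_end() for _ in range(200)])
print("mean N:", Ns.mean(), " Var(N) = integrated zeta power:", Ns.var())
\end{verbatim}
The variance of $N$ across realisations gives the integrated power of $\zeta$ in this approach; realistic applications, including PBH abundances, refine this first-passage computation.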
Despite many achievements, and the fact that stochastic inflation with multiple fields has already been studied (see \cite{GarciaBellido:1993wn,GarciaBellido:1994vz,GarciaBellido:1995br} for first works at the early stage of stochastic inflation), we stress that it has never been formulated in a manner that is generally covariant under field redefinitions, nor derived from first principles in this context. This, together with the many recent developments concerning the geometrical aspects of nonlinear sigma models, constitutes the main motivation of this work. \subsection*{Path integrals and Hamiltonian action} In the present paper, we begin by showing a ``heuristic'' derivation of the phase-space Langevin equations of stochastic inflation in the general context of multifield inflation with curved field space, by working at the level of the classical equations of motion, but we also propose a rigorous path-integral derivation solving the ambiguities of this heuristic approach. Path integrals are ubiquitous in physics, from statistical physics and quantum mechanics to field theories. In the context of stochastic inflation, they appear in a manner quite similar to the path-integral representation of the Brownian motion of a system linearly coupled to a thermal bath that is integrated out~\cite{Feynman:1963fq}, the roles of the system and the bath being respectively played by the IR and UV sectors~\cite{Morikawa:1989xz,Calzetta:1999zr,Matarrese:2003ye,Liguori:2004fa,Levasseur:2013ffa,Levasseur:2013tja,Levasseur:2014ska,Moss:2016uix,Tokuda:2017fdh,Prokopec:2017vxx,Tokuda:2018eqs} (see also Refs.~\cite{Calzetta:1995ys,Calzetta:1996sy,Calzetta:1999xh,Parikh:2020nrd} for the use of similar tools in other gravitational contexts). Path integrals are first constructed on a discrete time (and space, for the field theories that we shall focus on from now on) grid, as the integral over all possible discrete \textit{jumps} from a field's value to any other one, with fixed initial and final values. In the continuous limit, this corresponds to an integral over all the possible \textit{paths} to go from a fixed initial point to a fixed final one, thus justifying its name as an ``integration over possible histories''. Microscopically, the law governing the probability of a given jump between times $N_{j-1}$ and $N_j$ is dictated by the unitary operator $\hat{U}_j=\mathrm{e}^{-i\hat{H}(\phi,\pi;N_j)(N_j-N_{j-1})}$, where $\hat{H}$ is the Hamiltonian operator of the system, and $\phi$ and $\pi$ denote the corresponding fields and momenta. In this fundamental phase-space approach, the action entering the final expression for the path integral over the values of the fields and momenta is called the \textit{Hamiltonian action} and reads $S=\int \mathrm{d}^4x \left[ \pi \dot{\phi} - \mathcal{H}(\phi,\pi) \right]$, where $\mathcal{H}$ is the Hamiltonian density associated with $H$. Note that when the Hamiltonian (density) is at most quadratic in momenta, it is possible to perform exactly the path integration over them, and express the theory as a path integral over fields only. However, one would recover the standard Lagrangian action only when the terms quadratic in momenta are field-independent~\cite{Weinberg:1995mt}, which is the case neither in general nor in our situation of interest. \subsection*{Partition function, ``in-in'' formalism and doubling of the degrees of freedom} In particle physics, transition amplitudes between asymptotic ``in'' and ``out'' states can be deduced from time-ordered correlation functions.
The latter can themselves be derived from the generating functional $Z[J]$, i.e. the partition function with sources, which has a convenient path-integral representation. In cosmology, one rather looks for the expectation values of operators in some ``in'' state defined in the far past (typically the Bunch-Davies vacuum), as well as the corresponding causal equations of motion that they verify. However, these can also be deduced from a generating functional expressed as a path integral, with the important peculiarity, for this ``in-in'' partition function, that the path integral turns out to be performed on a Closed-Time-Path (CTP) of integration in the time domain, as represented in Fig.~\ref{fig: CTP} in the main body of this paper. Working with this CTP amounts to considering a ``doubling of the histories'': one along the forward branch and one along the reverse one, with doubled degrees of freedom, one version for each of the two paths. Naturally, the genuine physical degrees of freedom of the theory are not doubled; the doubling concerns only dummy variables inside the path integral: the two copies of the degrees of freedom are treated independently at any time \textit{but} the final one, at which the two branches of the CTP close, and boundary conditions must be imposed. Of course, the ``in-in'' formalism was not intended for cosmology in the first place, but rather developed in the field of non-equilibrium statistical and quantum field theories, in which it is also known as the \textit{Schwinger-Keldysh formalism}~\cite{Schwinger:1960qe,Keldysh:1964ud}, proving extremely useful to describe quantum and thermal fluctuations, dissipation, decoherence and many other effects in various areas of physics (see, e.g., Refs.~\cite{Kamenev-book,Calzetta:2008iqa,altland_simons_2010}). \subsection*{Coarse-graining} Stochastic inflation corresponds to a low-energy effective version of the full theory that can be described by the ``in-in'' path integral as explained above. To derive it, one must thus identify the relevant degrees of freedom (the super-Hubble modes in our case), and integrate the other ones (the sub-Hubble modes) out of the theory. After splitting the full system into our subsystem of interest, composed of the IR fields, plus a bath of UV fluctuations, one can explicitly integrate out the UV modes perturbatively. However, remembering that ``integrating out is different from truncating'', the UV fluctuations will leave an imprint on the IR dynamics, and this will be the source of the explicit noise and randomness in the equations of motion for the long-wavelength fields. This concept of \textit{coarse-grained effective action} is widely used in physics, from the study of Brownian processes in statistical physics to the applications of renormalisation in field theories and decoherence in quantum mechanics, and it was also applied to the cosmological context~\cite{Calzetta:1995ys,Calzetta:1999zr,Levasseur:2013ffa,Tokuda:2017fdh,Tokuda:2018eqs}. The coarse-graining procedure can also be understood at the level of the density matrix, which for a bipartite system (IR and UV sectors) can give the EFT for an open system (the IR modes) by tracing out the environment (the UV modes) and obtaining the \textit{reduced density matrix}.
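As a toy illustration of this last step, the following minimal sketch (a two-qubit example, the two factors standing in for the IR and UV sectors) traces out the ``environment'' of an entangled pure state and shows that the resulting reduced density matrix is mixed:
\begin{verbatim}
import numpy as np

# Entangled pure state of system (2 levels) x environment (2 levels).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj())                    # full density matrix

# Reduced density matrix of the system: trace out the environment index.
rho_red = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_red)                                  # diag(1/2, 1/2): mixed
print("purity:", np.trace(rho_red @ rho_red))   # 0.5 < 1
\end{verbatim}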
Be it at the level of the partition function or the density matrix, the coarse-graining approach within the in-in formalism is powerful because it enables one to control the approximations that are made and possibly derive next-order corrections~\cite{Burgess:2014eoa,Burgess:2015ajz,Collins:2017haz,Hollowood:2017bil}. \subsection*{Langevin equations, multiplicative noise and ambiguity of the discretisation scheme} As we explain in the body of this paper, the effect of the UV modes on the IR dynamics is encapsulated in the \textit{influence action}. After careful investigation and the introduction of auxiliary variables, it can be shown that this results in an explicit noise term in the equations of motion for the IR fields, with a covariance dictated by the (real part of the) power spectrum of the UV modes. The long-wavelength fields thus obey Langevin equations, with a deterministic drift coming from the ordinary background dynamics, but supplemented by a diffusion term due to the random kicks. Crucially, the effect of the small-scale quantum fluctuations on the long-wavelength, classicalised IR fields can be interpreted as a classical noise. Hence, the resulting theory describes genuinely quantum effects, albeit in a classical-looking stochastic manner. Langevin equations have been studied for a long time in the context of Brownian processes, signal theory, etc. They constitute Stochastic Differential Equations (SDEs) rather than Ordinary Differential Equations (ODEs), and this difference is crucial. Indeed, consider the simplest example of the Brownian motion of a particle, due to collisions with its environment at a given temperature; its position is a random quantity whose statistical properties may be determined. However, for a given realisation, the position of the particle, although a continuous function of time, is not a differentiable function of time, due to the properties of the white noise that affects its dynamics. Thus, the mathematical understanding of trajectories, and in particular of time derivatives of the position of the particle, is intricate and leads to interesting subtleties. Of course, a discrete-time interpretation of the dynamics is always possible and may even be clearer, and complications arise when going to the continuous-time limit of the description. A famous example (for statistical physicists) of possible difficulties is met when the noise is \textit{multiplicative}, that is, when its amplitude (or covariance) is itself a function of the random variable that obeys the Langevin equations. Then, there is an ambiguity when going from the discrete-time representation to the continuous one: at which time exactly should the random variable that enters the noise amplitude be evaluated? When dealing with ODEs, we are used to forgetting about these subtleties because any choice of a discrete scheme leads to the same physical result. However, this is not the case any more for SDEs with multiplicative noise, for which different scheme choices, usually parameterised by a number $\alpha$ between $0$ and $1$, lead to different values for physical quantities like statistical averages, Probability Density Functions (PDFs), etc. Amongst the infinite number of possible choices for $\alpha$, two have been particularly investigated for their interesting properties: the prepoint ($\alpha=0$) It\^o discretisation~\cite{ito1944109}, and the midpoint ($\alpha=1/2$) Stratonovich~\cite{stratonovich1966new} one.
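The scheme dependence can be made explicit with a few lines of code; the sketch below (a one-dimensional example with multiplicative noise $\sigma(x)=x$ and illustrative parameters) integrates the same SDE with the prepoint (It\^o) and midpoint (Stratonovich) rules, and exhibits the mismatch in the mean:
\begin{verbatim}
import numpy as np

# Same SDE, dx = sigma(x) dW with sigma(x) = x, integrated with two
# discretisation rules.  Ito (prepoint) keeps <x> constant, while
# Stratonovich (midpoint) is equivalent to an extra Ito drift x/2,
# so that <x> = x0 * exp(t/2).
rng = np.random.default_rng(1)
n_paths, n_steps, dt = 20000, 1000, 1e-3
x_ito = np.ones(n_paths)
x_str = np.ones(n_paths)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x_ito += x_ito * dW                        # prepoint evaluation
    pred = x_str + x_str * dW                  # predictor
    x_str += 0.5 * (x_str + pred) * dW         # midpoint evaluation (Heun)

t = n_steps * dt
print("Ito mean:         ", x_ito.mean(), " (expected 1)")
print("Stratonovich mean:", x_str.mean(), f" (expected {np.exp(t/2):.4f})")
\end{verbatim}
The midpoint rule is implemented here with the standard predictor-corrector (Heun) form of the Stratonovich scheme.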
Indeed, while It\^o is widely used in applied and computational mathematics for its appealing mathematical properties (the covariance matrix can be arbitrarily reduced in any frame to identify independent noises, the noise at a given time step only depends on the values of the random variables at previous time steps, etc.), Stratonovich may be preferred in theoretical physics, where changes of variable are ubiquitous, because the standard chain rule for the derivative of composite functions only holds in that case. In particular, this last property simplifies discussions about the general covariance of the equations. In this respect, it is important to highlight that, while a given SDE, interpreted with different schemes, defines different physical theories, it is always possible to describe the same physics by using different discretisation schemes. Indeed, one knows how to go from one continuous form of an SDE, understood in a given discretisation scheme, to another form with a different scheme, while leaving the physics unaffected. Keeping this in mind, whether the conventional form of the Langevin equations of stochastic inflation should be interpreted according to the It\^o or Stratonovich scheme has already been discussed in the literature. On the one hand, the Stratonovich scheme has been advocated on the grounds that white noises should be treated as the limit of colored noises when the smooth decomposition between short and long-wavelength modes becomes sharp~\cite{Mezhlumian:1991hw}. On the other hand, it has been suggested that only the It\^o scheme could be invariant under reparameterisation of the time variable~\cite{Vilenkin:1999kd}, and consistently reproduce one-loop QFT computations in the $\lambda \varphi^4$ theory~\cite{Tokuda:2017fdh}. Finally, it has also been argued that the choice between the two prescriptions exceeds the accuracy of the stochastic approach~\cite{Vennin:2015hra}. In our previous paper~\cite{Pinol:2018euk}, we tackled for the first time the issue of the discretisation ambiguity of the Langevin equations of stochastic inflation in the multifield context, and we discovered various conceptual issues with the stochastic description of IR fields during inflation, which we called ``inflationary stochastic anomalies''. \subsection*{Inflationary stochastic anomalies} In stochastic inflation, the covariance matrix of the noises entering the Langevin equation is proportional to the (real part of the) power spectra of the UV modes. However, the UV modes themselves evolve according to linear equations of motion (at first order in perturbation theory for the UV modes) whose ``coefficients'' are set by the values of the IR fields that constitute the random variables of interest. Thus, the noise amplitude for the IR fields clearly depends on their own values, which corresponds to a multiplicative noise. Actually, the situation is even more intricate since, rigorously, the power spectra of the UV modes (and thus the noise amplitude) cannot be simply expressed as functions of the IR fields at the current time, but rather are solutions of differential equations that involve them. This situation is called \textit{non-Markovian}, in contrast to Markov processes where the noise amplitude only depends on the random variables at the time step of evaluation, and not at previous times.
However, even leaving aside the non-Markovian difficulty, the multiplicative noise results in the discretisation scheme ambiguity discussed above, and since the derivation of the Langevin equations does not \textit{a priori} come with any prescription regarding their discrete-time version, one should choose how to interpret them (i.e. prescribe a value for the parameter $\alpha$) based on physical criteria. However, in our previous paper~\cite{Pinol:2018euk}, we found that no choice was satisfactory, for the following reason. The standard chain rule for the derivative of composite functions is only verified in the Stratonovich $\alpha=1/2$ case. Thus, for any other choice, the Langevin equations as they are usually written do not respect general covariance under field redefinitions. However, at that time we thought the Stratonovich choice was not satisfactory either, albeit for a different reason: only in the It\^o case is the frame of reduction of the noise matrix (necessary to identify independent Gaussian white noises and solve the Langevin equations numerically or proceed further analytically) irrelevant to the final result, as already known in statistical physics contexts (see e.g. Refs.~\cite{ryter1980properties,deker1980properties, dilemma,vKampen-manifold,GRAHAM1985209}). So we were left with a dilemma: breaking of general covariance following the It\^o interpretation, or spurious frame-dependence in the Stratonovich one? It is important to note that, although more striking in the multifield context, this ambiguity is also present in single-field models of stochastic inflation. Although we showed that, for such a single scalar field in the overdamped limit, the difference between the two prescriptions is numerically small in the final correlation functions, the conceptual issue remained. By including a tadpole diagram cancelling the frame dependence in the Stratonovich scheme, a covariant and frame-independent formulation was proposed in Ref.~\cite{Kitamoto:2018dek}, considering the overdamped limit (i.e. in field space and not in phase space) of test scalar fields in de Sitter space and in a Markovian approximation. In this paper, we will show how inflationary stochastic anomalies are solved in full generality from first principles. \subsection*{Structure of the paper} The structure of the paper is as follows. We begin by introducing in Sec.~\ref{sec: heuristic} the definitions and the concepts behind stochastic inflation in phase space with several scalar fields and a general field-space metric, and by developing an intuitive approach to derive ``heuristically'' the Langevin equations for the coarse-grained fields and their momenta. We also highlight the conceptual issues behind these equations and their derivation using the classical equations of motion. Notably, we review in Sec.~\ref{sec: stochastic anomalies} why these equations suffer from ``inflationary stochastic anomalies'', an issue that we solve by using the Stratonovich discretisation satisfying general covariance, and by identifying that the quantum nature of the fluctuating fields entails the existence of a preferred noise frame. The corresponding covariant It\^o SDEs, which can readily be used in numerical and analytical computations, are also derived as one of our main results. In Sec.~\ref{sec: effective hamiltonian action}, we turn to the rigorous derivation of stochastic inflation using a path-integral approach.
This enables one to solve the other conceptual issues of the heuristic approach and to keep better control over the approximations made throughout, paying particular attention to the doubling of the degrees of freedom and the necessary boundary conditions imposed at the UV/IR transition by the Closed-Time-Path of integration. We also show how the identification of covariant Vilkovisky-DeWitt variables in phase space is crucial to maintain general covariance. We derive the influence action for the long-wavelength fields and momenta, resulting from integrating out the UV modes, and we show how the coarse-grained effective action can be interpreted to derive Langevin equations with manifestly real noises. We finish in Sec.~\ref{sec:Markovian} by showing, in the Markovian limit, the covariant phase-space Fokker-Planck equation corresponding to our multifield Langevin equations, as well as some analytical approximations for the noises' amplitudes. These results can be used in practical applications of our covariant multifield stochastic inflation framework. Sec.~\ref{sec: conclusions} is then devoted to conclusions and future prospects. Finally, we gather in appendices some technical details as well as a summary of our notations. We adopt natural units, $c=\hbar=1$, throughout this paper. \subsection*{Main results} We gather here in a few lines the main results of the paper: \begin{itemize} \item ``Inflationary stochastic anomalies'' are solved by the observation that the quantum nature of the fluctuating fields provides one with a natural frame for reducing the noise covariance matrix: the one of the independent creation and annihilation operators. This leads to a unique set of independent Gaussian white noises in the Langevin equations (up to a constant, irrelevant, orthogonal matrix), and highlights the genuine quantum origin of their stochasticity. \item The Langevin equations as they are usually derived must be interpreted with the Stratonovich discretisation scheme and the preferred frame mentioned above, but they are easier to interpret and use after transforming them to their It\^o version. The corresponding noise-induced terms can then be used to define covariant time-derivatives compatible with It\^o calculus, $\mathfrak{D}_N$, see Eqs.~\eqref{eq: ItoD for X}--\eqref{eq: ItoD for V}. The resulting, It\^o-covariant, phase-space Langevin equations for multifield inflation with curved field space and including back-reaction on the metric are eventually found to be: \bae{ \boxed{ \mathfrak{D}_N\varphi^I=\frac{\varpi^I}{H}+\xi^{QI}, \qquad \mathfrak{D}_N\varpi_I=-3\varpi_I-\frac{V_I}{H}+\xi^{\tilde{P}}_I\,. \label{Langevin-intro} } } Here, $V_I$ denotes the gradient of the potential, $H$ is the local Hubble scale, given in terms of the infrared fields $\varphi^I$ and momenta $\varpi_I$ by the Friedmann equation~\eqref{laspeIR}, and indices are raised with the inverse field-space metric.
We also find the auto-correlation of the Gaussian white noises to be given by, for $\tilde{X}=(Q,{\tilde{P}})$: \bae{ \boxed{ \braket{\xi^{\tilde{X}I}(N)\xi^{\tilde{Y}J}(N^\prime)}\equiv A^{\tilde{X}\tilde{Y}IJ}(N) \delta(N-N^\prime) =\Re \mathcal{P}^{\tilde{X}\tilde{Y}IJ}(N ;k_\sigma(N)) \delta(N-N^\prime)\,, \label{noise-properties-intro} } } with $\mathcal{P}^{\tilde{X}\tilde{Y}IJ}$ the dimensionless power spectra of the UV modes $(Q^I,{\tilde{P}}_I)$ that follow the EoMs~\eqref{eq: UV EoM} deduced from the action~\eqref{eq: cov S2}, and evaluated at the scale $k_\sigma(N)=\sigma a(N) H$ that joins the IR sector at the time $N$. \item When the dynamics can be approximated as Markovian, it is possible to derive the phase-space Fokker-Planck equation for the one-point scalar PDF $P(\varphi^I,\varpi_J;N)$ as \begin{empheq}[box=\fbox]{align} \partial_N P=&-D_{\varphi^I}\left[\frac{G^{IJ}}{H}\varpi_J P\right]+\partial_{\varpi_I}\left[\left(3\varpi_I+\frac{V_I}{H}\right)P\right] \\ &+\frac{1}{2}D_{\varphi^I}D_{\varphi^J}(A^{QQIJ}P)+D_{\varphi^I}\partial_{\varpi_J}(A^{Q{\tilde{P}}I}{}_J P) +\frac{1}{2}\partial_{\varpi_I}\partial_{\varpi_J}(A^{{\tilde{P}}{\tilde{P}}}{}_{IJ}P), \nonumber \end{empheq} with $D_{\varphi^I}$ a phase-space covariant derivative defined by its action on field-space vectors: $D_{\varphi^I}\calU^J=\nabla_I\calU^J+\Gamma_{IL}^K\varpi_K\partial_{\varpi_L}\calU^J$, where $\nabla_I$ is the usual field-space covariant derivative. Under a slow-varying approximation, we further provide some analytical estimates for the noise properties in Eqs.~\eqref{eq: power spectra massive case-QQ}--\eqref{eq: power spectra massive case-PP}. \end{itemize} \section{Stochastic formalism: heuristic approach} \label{sec: heuristic} In this section we introduce the concepts and definitions used throughout the paper, by showing a heuristic derivation, made at the level of the classical equations of motion, of the Langevin equations in the general class of multifield models described by the action \eqref{S-intro}. Our analysis is valid beyond the test approximation, i.e. it takes into account the backreaction of the scalar fields on the spacetime metric. Moreover, we do so using a phase-space Hamiltonian language, without assuming any slow-roll regime (see, e.g., Refs.~\cite{Habib:1992ci,Tolley:2008na,Enqvist:2011pt,Kawasaki:2012bk,Rigopoulos:2016oko,Moss:2016uix,Grain:2017dqa,Tokuda:2017fdh,Prokopec:2017vxx,Ezquiaga:2018gbw,Tokuda:2018eqs,Cruces:2018cvq,Firouzjahi:2018vet,Pattison:2019hef,Fumagalli:2019ohr,Prokopec:2019srf,Ballesteros:2020sre} for previous works on the subject, albeit not in this general multifield context, and sometimes with different results and approaches). Finally, we highlight the limitations of this heuristic approach, and stress the non-Markovian character of the IR dynamics. \subsection{Generalities and ADM formalism} The general action of several scalar fields minimally coupled to gravity that we consider is given by \bae{\label{eq: general S} S=\int\mathrm{d}^4x\sqrt{-g}\left[\frac{1}{2}M_\text{Pl}^2\calR-\frac{1}{2}g^{\mu\nu}G_{IJ}(\phi)\partial_\mu\phi^I\partial_\nu\phi^J-V(\phi)\right]. } Here $\calR$ is the Ricci scalar associated with the spacetime metric $g_{\mu\nu}$, $G_{IJ}$ denotes the metric of the field space, curved in general, spanned by the scalar fields $\phi^I$, and $V(\phi)$ denotes the scalar potential.
In the ADM formalism~\cite{Arnowitt:1962hi,Salopek:1990jq}, the spacetime metric is written in the form \bae{\label{eq: ADM form} \mathrm{d} s^2=-\calN^2\mathrm{d} t^2+\gamma_{ij}(\mathrm{d} x^i+\beta^i\mathrm{d} t)(\mathrm{d} x^j+\beta^j\mathrm{d} t)\,, } where $\calN$ is the lapse function, $\beta^i$ is the shift vector, and $\gamma_{ij}$ is the spatial metric. The action~\eqref{eq: general S} then reads $S=\int \mathrm{d} t \mathrm{d}^3 x \calL$ with the Lagrangian density \bae{\label{eq: ADM action} \calL=\calN\sqrt{\gamma}\left[\frac{M_\text{Pl}^2}{2}\left(\calR^{(3)}+K_{ij}K^{ij}-K^2\right) +\frac{1}{2\calN^2}G_{IJ}v^I v^J-\frac{1}{2}G_{IJ}\gamma^{ij}\partial_i\phi^I\partial_j\phi^J-V\right]\,, } where $\gamma={\rm det}(\gamma_{ij})$ and $\calR^{(3)}$ is the Ricci curvature of the spatial hypersurfaces. Here, spatial indices are lowered and raised with $\gamma_{ij}$ and its inverse $\gamma^{ij}$, \bae{ K_{ij}=\frac{1}{2\calN}\left(2\beta_{(i | j)}-\dot{\gamma}_{ij}\right), } is the extrinsic curvature of spatial slices (where dots denote time derivatives, the symbol $|$ denotes the spatial covariant derivative associated with the spatial metric $\gamma_{ij}$, and parentheses signal symmetrisation), and one has \bae{ v^I=\dot{\phi}^I-\beta^i\partial_i\phi^I. } The Lagrangian~\eqref{eq: ADM action} does not depend upon the time derivatives of $\calN$ and $\beta^i$. This shows that the lapse function and the shift vector are not dynamical variables, and that the only dynamical variables are $\phi^I$ and $\gamma_{ij}$, whose canonically conjugate momenta are given by \begin{eqnarray} \pi_I&=&\var{\calL}{\dot{\phi}^I}=\frac{\sqrt{\gamma}}{\calN}G_{IJ}v^J\,, \\ \pi^{ij}&=&\var{\calL}{\dot{\gamma}_{ij}}=\frac{M_\text{Pl}^2}{2}\sqrt{\gamma}(K\gamma^{ij}-K^{ij})\,. \end{eqnarray} The Hamiltonian density is given by the Legendre transform $\calH=\pi_I\dot{\phi}^I+\pi^{ij}\dot{\gamma}_{ij}-\calL$, or equivalently the action can be written in a Hamiltonian form as (see e.g. Ref.~\cite{Langlois:1994ec} for the single-field case) \bae{\label{eq: Hamiltonian action} S=\int\mathrm{d}^4x\left[\pi_I\dot{\phi}^I+\pi^{ij}\dot{\gamma}_{ij}-\calH\right], } where \bae{\label{eq: Hamiltonian} \calH= \sqrt{\gamma} \left( \calN C+\beta^i C_i\right), } and the so-called constraints read \bae{ C& \equiv \frac{2}{\gamma M_\text{Pl}^2}\left[\pi_{ij}\pi^{ij}-\frac{1}{2}\left(\pi^i_i\right)^2 \right]-\frac{M_\text{Pl}^2}{2}\calR^{(3)} + \frac{1}{2\gamma} G^{IJ}\pi_I\pi_J+G_{IJ} \frac{\gamma^{ij}}{2} \partial_i\phi^I\partial_j\phi^J+V, \\ C_i& \equiv -2 \left( \frac{\pi^j_i}{\sqrt{\gamma}} \right)_{| j}+\frac{1}{{\sqrt{\gamma}}}\pi_I\partial_i\phi^I=\frac{1}{\sqrt{\gamma}} \left(-2 \partial_k \left( \gamma_{ij} \pi^{jk} \right)+\pi^{jk} \partial_i \gamma_{jk} +\pi_I\partial_i\phi^I \right) \,. } The Hamilton equations $\dot \gamma_{ij}=\frac{\delta}{\delta \pi^{ij}} \left( \int \mathrm{d}^3 x \calH \right)$ and $\dot \pi^{ij}=-\frac{\delta}{\delta \gamma_{ij}} \left(\int \mathrm{d}^3 x \calH \right)$ give the dynamical parts of the Einstein equations, whose explicit form will not be needed in what follows, while the variation with respect to the lapse and shift enforce the energy and momentum constraints \begin{equation} \label{eq: constraints} C=C_i=0\,.
\end{equation} Finally, the Hamilton equations in the scalar sector can be written in the compact form \bae{ \dot{\phi}^I&= \frac{\calN}{\sqrt{\gamma}}G^{IJ}\pi_J+\beta^i\partial_i\phi^I, \label{eq: rescaled phi EoM} \\ {\cal D}_t\pi_I&=-\sqrt{\gamma}\calN V_I+{\cal D}_i\left(\sqrt{\gamma}\calN G_{IJ}\gamma^{ij}\partial_j\phi^J\right)+{\cal D}_i\left(\beta^i\pi_I\right)\,. \label{eq: rescaled pi EoM} } Here $V_I=\partial V/\partial\phi^I$, while ${\cal D}_t$ and ${\cal D}_i$ are field-space covariant spacetime derivatives, whose actions on field-space vectors $\calU^I$ and covectors $\calV_I$ read \bae{ {\cal D}_\mu\calU^I=\partial_\mu\calU^I+\Gamma^I_{JK}\left(\partial_\mu\phi^J\right)\calU^K\,, \quad {\cal D}_\mu\calV_I=\partial_\mu\calV_I-\Gamma^K_{IJ}\left(\partial_\mu\phi^J\right)\calV_K\,, \quad \text{with } \mu\in\{t,i\}, \label{def-full-covariant-derivative} } and where $\Gamma^I_{JK}$ are the Christoffel symbols associated with the field-space metric $G_{IJ}$. \subsection{Gauge choice and smoothing procedure} \label{gauge-smoothing} In the stochastic framework, all fields (actual scalar fields as well as the spacetime metric) are divided into a classical IR component and a quantum UV component, which are the counterparts of respectively the background and the fluctuations in standard perturbation theory (SPT). An important difference between the two setups is that the fields' IR components have large-scale fluctuations, which is nothing else than what the stochastic theory aims at describing. Hence, gauge issues, which usually only concern the equivalent of the UV part, also apply to the IR sector. As standard in stochastic inflation, we will deal with fluctuations of scalar type only, leaving aside vector and tensor perturbations for our purpose here, something that is not restrictive, as we elaborate on below. A convenient gauge to study multifield inflation in SPT is the spatially flat gauge, in which all genuine (scalar type) fluctuating degrees of freedom are in the scalar fields. The same holds true in the stochastic context, and we will use the scalar gauge freedom so that spatial slices are homogeneous, with no fluctuation, neither on small nor on large scales. In what we can call the \emph{stochastic spatially flat gauge}, we thus have \begin{equation} \gamma_{ij}(N,\mathbf{x})=a^2(N)\delta_{ij} \quad \textrm{and} \quad a(N) \propto \mathrm{e}^N\,, \end{equation} where $a$ is spatially constant, and we choose to work with the time variable $N$ such that $N= \ln (a)$ up to an arbitrary constant. This choice is convenient and conceptually relevant (see Refs.~\cite{Sasaki:1995aw,Lyth:2004gb,Pattison:2019hef}). In the same manner as in SPT, in this gauge, the local number of $e$-folds of expansion computed in each (super)-Hubble patch is then identical~\cite{Salopek:1990jq,Lyth:2004gb}, and simply coincides with $N$. Said otherwise, neglecting any shear components, which are suppressed on large scales in standard situations, the flat gauge coincides with the uniform-$N$ gauge. This way, the stochastic formalism enables one to determine how the inhomogeneities of the scalar fields evolve in different patches, with a local clock that is deterministic and shared by all patches.
Although we will not do so in this paper, this implies that we can, at least in principle, easily use our results in the framework of the stochastic-$\delta N$ formalism to deduce the properties of the large-scale curvature perturbation $\zeta$ (see, e.g., Refs.~\cite{Fujita:2013cna,Fujita:2014tja,Vennin:2015hra,Kawasaki:2015ppx,Assadullahi:2016gkk,Vennin:2016wnk,Pinol:2018euk}).\footnote{Anticipating somewhat on following elements of the paper, let us mention that this important fact holds even when taking into account the tensor and vector modes. First, at quadratic order in the action as considered in the paper, the UV parts of the tensor and vector modes are decoupled from the UV parts of the scalar ones. More importantly, the tensor degrees of freedom, properly defined non-perturbatively in a way that they do not affect the spatial volume element, are such that at leading-order in the gradient expansion, their IR parts are time-independent and locally homogeneous in each $\sigma$-Hubble patch (while the vector modes vanish). Hence, they can be transformed away by a choice of spatial coordinates. This affects neither the local Hubble parameter nor the proper time~\cite{Lyth:2004gb}, and thus our time variable $N$ is a local clock that is deterministic and shared by all patches, despite the existence of large-scale tensor fluctuations.} In the stochastic spatially flat gauge, covariant spatial derivatives reduce to usual ones, the curvature of spatial slices $\calR^{(3)}$ vanishes, and $\sqrt{\gamma}=a^3$. To simplify equations, we also \textit{rescale momenta}, $\pi_I \to a^3 \pi_I$, a rescaling that we adopt from now on and for the rest of this paper.\\ Let us now discuss the smoothing procedure splitting any quantity between its IR and UV components, first in the simpler context of quantum field theory in a fixed de Sitter background. For each quantity written in Fourier space as $X(N,\mathbf{x})=\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}} X(N,\mathbf{k})$, its IR component is defined by coarse-graining as \bae{ X_\text{IR}(N,\mathbf{x})=\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}}W\left(\frac{k}{k_\sigma(N)}\right)X(N,\mathbf{k}), \label{def-XIR} } with some window function $W$ such that $W \simeq 1$ when its argument is small, and $W \simeq 0$ when its argument is large, i.e., smearing out short-wavelength modes $k>k_\sigma(N) \equiv \sigma a(N) H$, corresponding to a constant smoothing physical scale $\lambda_{{\rm s}}=(\sigma H)^{-1}$. In this context, $\sigma \ll 1$ is a small parameter ensuring that the smoothing scale is somewhat larger than the Hubble scale --- allowing for a gradient-, i.e., a $\sigma$-expansion --- and therefore that the infrared component can be considered as classicalised. As usual in physics, the details of this coarse-graining procedure should not affect physical observables, i.e. in this context, the properties of fluctuations on physical scales $\lambda \gg \lambda_{{\rm s}}$. Like in the context of the renormalisation group, a smooth window function seems physically motivated and desirable. However, this in general comes at the expense of a description involving colored noises~\cite{Winitzki:1999ve,Matarrese:2003ye,Liguori:2004fa}, which are more difficult to handle analytically than white noises. In this paper, we will conservatively use the simpler choice of a sharp window function $W(x)=\theta(1-x)$, which is widely used in the literature.
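Since the split~\eqref{def-XIR} is central to everything that follows, let us illustrate it with a minimal numerical sketch (ours, and purely illustrative): a one-dimensional toy field, filtered with the sharp window at a fixed coarse-graining scale in Fourier space.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the split of Eq. (def-XIR) in one dimension:
# sharp-window coarse-graining of a toy random field at a fixed scale
# k_sigma.  (The paper's setting is 3d, with a time-dependent k_sigma(N).)
n, L = 1024, 100.0
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # comoving wave numbers
rng = np.random.default_rng(2)
X = rng.standard_normal(n)                    # toy field X(x) on the grid
k_sigma = 1.0                                 # smoothing scale, sigma*a*H
W = (np.abs(k) <= k_sigma).astype(float)      # sharp window theta(1 - k/k_sigma)
X_IR = np.fft.ifft(W * np.fft.fft(X)).real    # coarse-grained (IR) component
X_UV = X - X_IR                               # short-wavelength (UV) remainder
\end{verbatim}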
This sharp choice has the advantage of being intuitive, and it will enable us to use the well-developed machinery of stochastic differential equations with white noises. However, as we will see in section~\ref{sec: effective hamiltonian action}, in the path-integral approach in which short-wavelength fluctuations are integrated out, special attention has to be paid to the integration measure's split into IR and UV sectors~\cite{Rigopoulos:2016oko,Moss:2016uix,Tokuda:2017fdh,Tokuda:2018eqs}. For notational simplicity, we will also write $k_\sigma(N)= \sigma a(N) H$ in the case of scalar fields backreacting on spacetime, although the time-dependent Hubble scale $H$ is not defined \textit{a priori} in such a stochastic context, but should emerge as an IR quantity itself. One will indeed find that the quantity $1/\calN_\IR$ plays the role of a ``local Hubble parameter'' (see Eq.~\eqref{laspeIR}), in agreement with the literature in SPT~\cite{Lyth:2004gb}. One can thus imagine self-consistently defining the smoothing scale such that Eq.~\eqref{def-XIR} is verified for $X=\cal{N}$ with $k_\sigma=\sigma a /\calN_\IR$. We will not consider this slight ambiguity further (it is also present in single-field inflation, though to the best of our knowledge it has not been addressed in the literature), and simply assume that the smoothing scale can be defined at least implicitly through a procedure similar to the one suggested above. \subsection{Stochastic equations} \label{subsec: stochastic equations} We now decompose the scalar fields and the metric components into IR and UV parts as \bae{\label{eq: IRUV decomposition} \bce{ \displaystyle \phi^I=\varphi^I+Q^I, & \displaystyle \pi_I=\varpi_I+P_I=\varpi_I+{\tilde{P}}_I+\Gamma^K_{IJ}\varpi_K Q^J, \\ \displaystyle \calN=\calN_\IR+\calN_\UV, & \displaystyle \beta^i=a^{-2}\delta^{ij}\partial_j\psi, } } where $\varphi^I$, $\varpi_I$, and $\calN_\IR$ are IR quantities, and one can fix $\beta^i_\text{IR}=0$ as it is a pure gauge choice in the long-wavelength limit. The second term in the decomposition of $\pi_I$, where $\Gamma_{IJ}^K$ is evaluated at the infrared values of the fields $\varphi^I$, may seem arbitrary, but it ensures that the UV quantity ${\tilde{P}}_I$ transforms at linear order in a covariant manner under field redefinitions, as we prove in Sec.~\ref{subsec: covariant perturbations}. In the heuristic stochastic approach, one simply substitutes the decomposition~(\ref{eq: IRUV decomposition}) into the original EoMs~(\ref{eq: rescaled phi EoM}) and (\ref{eq: rescaled pi EoM}), keeping all nonlinearities in the IR sector --- albeit working at leading-order in the gradient expansion --- but keeping only linear terms in UV quantities. One thus obtains \bae{ \varphi^{I\prime}=\frac{1}{H} G^{IJ}\varpi_J+\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}}\left[W^\prime\left(\frac{k}{k_\sigma}\right)\phi^I(N,\mathbf{k}) +\left(1-W\left(\frac{k}{k_\sigma}\right)\right)E^{QI}(N,\mathbf{k})\right], \label{first} } and \bae{ {\cal D}_N\varpi_I&=-3\varpi_I- \frac{1}{H} V_I+\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}}\left[W^\prime\left(\frac{k}{k_\sigma}\right) \left( \pi_I(N,\mathbf{k})-\Gamma_{IJ}^K\varpi_K\phi^J(N,\mathbf{k}) \right)\right.
\nonumber \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \left.+\left(1-W\left(\frac{k}{k_\sigma}\right)\right)E^{\tilde{P}}_I(N,\mathbf{k})\right], \label{second} } where a prime $^\prime$ denotes a simple derivative with respect to $N$ and we denote the covariant time derivative ${\cal D}_N\varpi_I=\partial_N\varpi_I-\Gamma_{IJ}^K\varphi^{J\prime}\varpi_K$ --- covariant with respect to field redefinitions of the IR fields --- by the same symbol ${\cal D}_N$ as in the fully nonlinear Eq.~\eqref{def-full-covariant-derivative} for simplicity. Here, $E^{QI}$ and $E^{\tilde{P}}_I$, whose expressions are given below in Eqs.~\eqref{eq: EQ} and \eqref{eq: EP}, stand for the linearised EoM in Fourier space, and the expression of $H \equiv 1/\calN_\IR$ in terms of $\varphi^I$ and $\varpi_I$ will be given in Eq.~\eqref{laspeIR}. In SPT, which one formally recovers in the limit $k_\sigma \to 0$, one assumes that the dynamics of the fluctuations decouples from that of the background, in which case one has $E^{QI}=E^{\tilde{P}}_I=0$. The terms in $W^\prime$ vanish in this limit, and thus each of the equations~\eqref{first} and \eqref{second} splits into two parts, one for the background and one for the fluctuations. In the heuristic approach to stochastic inflation, one still assumes that UV fluctuations obey the same evolution equations $E^{QI}=E^{\tilde{P}}_I=0$ as in SPT. However, due to the time-dependence of the coarse-graining scale $k_\sigma(N)$, the IR dynamics is affected by the flow of UV modes joining the IR sector, an effect described by the terms involving the time derivative of the window function, $W^\prime$. Thus writing \bae{\label{eq: noises} \xi^{QI}(x)=\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}} W^\prime\left(\frac{k}{k_\sigma(N)}\right) Q^I(N,\mathbf{k}), \quad \xi^{{\tilde{P}}}_I(x)=\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}} W^\prime\left(\frac{k}{k_\sigma(N)}\right) {\tilde{P}}_I(N,\mathbf{k}), } one obtains the desired effective equations of motion for the infrared fields and momenta, \bae{\label{eq: Langevin in heuristic} \varphi^{I\prime}=\frac{1}{H} G^{IJ}\varpi_J+\xi^{QI}, \quad\quad {\cal D}_N\varpi_I=-3 \varpi_I-\frac{1}{H} V_I+\xi^{{\tilde{P}}}_I, } the so-called Langevin equations. In this description, one assumes that the UV quantities $Q^I(N,\mathbf{k})$ and ${\tilde{P}}_I(N,\mathbf{k})$, which in fact are quantum operators, can be described classically as they join the IR sector at the time $N_\sigma(k)$ such that $k=k_\sigma(N_\sigma(k))$. Hence, the $\xi$'s can be interpreted as classical random noises, and when computing their statistical properties, one identifies ensemble averages with expectation values of the corresponding operators in the quantum vacuum state of the theory. As we treat UV fluctuations at linear order, one can consider that the $\xi$'s obey Gaussian statistics, with zero mean, and are thus fully characterised by their two-point correlations.
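To make the practical content of the Langevin equations~\eqref{eq: Langevin in heuristic} concrete, the following minimal sketch (ours) integrates one realisation of them in the simplest conceivable setting: a single field, flat field space, a toy quadratic potential, and the noise amplitude approximated by the massless de Sitter result $\mathcal{P}^{QQ}\simeq(H/2\pi)^2$, acting on the field only while the momentum noise is neglected. None of these simplifications is made in the rest of the paper, where the noise amplitudes follow from the UV mode functions; the sketch merely shows how the Friedmann constraint~\eqref{laspeIR} and the Gaussian kicks enter a discretised version of the equations.
\begin{verbatim}
import numpy as np

# Toy realisation of the heuristic Langevin equations: single field, flat
# field space, massless-noise approximation P_QQ = (H/2pi)^2 (illustrative
# choices only; the momentum noise xi^P is neglected here).
Mpl, m = 1.0, 1e-5
V   = lambda phi: 0.5 * m**2 * phi**2         # toy quadratic potential
V_p = lambda phi: m**2 * phi

dN, rng = 1e-3, np.random.default_rng(0)
phi, varpi = 15.0, 0.0                        # initial IR field and momentum
for _ in range(int(60.0 / dN)):               # 60 e-folds of evolution
    H = np.sqrt((0.5 * varpi**2 + V(phi)) / (3.0 * Mpl**2))  # Friedmann
    xi = rng.standard_normal() * np.sqrt(dN)  # Gaussian kick, <xi^2> = dN
    phi   += (varpi / H) * dN + (H / (2.0 * np.pi)) * xi
    varpi += (-3.0 * varpi - V_p(phi) / H) * dN
\end{verbatim}
Note that, since the noise amplitude $H/2\pi$ depends on the stochastic variables through $H(\varphi,\varpi)$, even this toy example features multiplicative noise, so that the discretisation subtleties discussed in Sec.~\ref{sec: stochastic anomalies} already apply to it.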
In the case of the sharp window function $W(x)=\theta(1-x)$, these two-point correlations can be easily computed and are directly related to the power spectra of the UV fluctuations when they reach the coarse-graining scale: \bae{\label{eq: noise correlation} \braket{\xi^{\tilde{X}I}(x)\xi^{\tilde{Y}J}(x^\prime)}= \underbrace{\mathcal{P}^{\tilde{X}\tilde{Y}IJ}(N,k_\sigma(N))\frac{k_\sigma^\prime}{k_\sigma}}_{\textstyle A^{\tilde{X}\tilde{Y}IJ}(N)} \frac{\sin k_\sigma r}{k_\sigma r} \delta(N-N^\prime), } where $r=|\mathbf{x}-\mathbf{x}^\prime|$ is the comoving distance between the spacetime points $x$ and $x^\prime$, and with the dimensionless power spectra ${\cal P}$ such that \bae{\label{eq: power spectra} \braket{Q^{\tilde{X}I}(N,\mathbf{k})Q^{\tilde{Y}J}(N,\mathbf{k}^\prime)}= (2\pi)^3\delta^{(3)}(\mathbf{k}+\mathbf{k}^\prime)\frac{2\pi^2}{k^3}\mathcal{P}^{\tilde{X}\tilde{Y}IJ}(N,k)\,. } Here, we used a condensed notation adapted to our phase-space description, where $\tilde{X}=(Q,{\tilde{P}})$ refers both to UV fields and covariant momenta, i.e. $\xi^{\tilde{X}I}=(\xi^{QI},\,\xi^{{\tilde{P}}I}=G^{IJ}(\varphi) \xi^{{\tilde{P}}}_J)$ and $Q^{\tilde{X}I}=(Q^{I},\,{\tilde{P}}^{I} = G^{IJ}(\varphi) {\tilde{P}}_{J})$. The auto-correlation of the noises $A^{\tilde{X}\tilde{Y}IJ}$ is a contravariant rank-2 tensor, since it inherits the transformation properties of the UV modes $(Q^{I},\,{\tilde{P}}^{I})$. Because of the presence of the delta function $\delta(N-N^\prime)$, coming from the time derivative of the step window function, the $\xi$'s can be regarded as white noises. This property would not hold true had we chosen a smooth window function. Notice also that the noise correlations~\eqref{eq: noise correlation} are proportional to $k_\sigma^\prime/k_\sigma\times\sin k_\sigma r/k_\sigma r$. First, the ratio $k_\sigma^\prime/k_\sigma=1-\epsilon$ in Eq.~\eqref{eq: noise correlation}, with $\epsilon$ defined in Eq.~\eqref{epsilon}, may be approximated by unity. Indeed, this slow-roll correction is likely to be too precise for the accuracy of the coarse-graining procedure, and other authors also proposed considering a slightly time-dependent parameter $\sigma$ such that $\sigma H$ is exactly constant~\cite{Starobinsky:1994bd}. Second, the precise form $\sin{k_\sigma r}/k_\sigma r$ of the apparent spatial correlation in the noises' two-point function depends on the choice of the window function $W$. However, since we neglected any spatial dependence of the IR fields, this oscillating and decaying term should only be understood as a step theta function $\theta(1-k_\sigma r)$, taking values $1$ inside a ``$\sigma$-Hubble patch" and $0$ outside, in agreement with the separate-universe approach. The $\xi$'s can thus be understood as $k_\sigma$-patch-independent Brownian noises, the evolution of each $\sigma$-Hubble patch being determined only by the local physics. In this paper we will discuss only one-point statistics (one ``$\sigma$-Hubble patch" statistics, to be accurate), the idea being that, starting from one progenitor $\sigma$-Hubble patch, the observable universe at the end of inflation is made of many $\sigma$-Hubble patches that emerge from the same initial condition. Hence, by ergodicity, the ensemble average of the stochastic evolution of one $\sigma$-Hubble patch can also be seen as the spatial average among these $\sigma$-Hubble patches. Moreover, the study of the one-point statistics is not as restrictive as it may seem, and it is actually possible to extract detailed spatial information from it.
Indeed, any two $\sigma$-Hubble patches initially share the same dynamics, until they become separated by the physical distance $(\sigma H)^{-1}$ and subsequently evolve independently. Using this time-scale correspondence, Starobinsky and Yokoyama have shown in Ref.~\cite{Starobinsky:1994bd} how, once the Fokker-Planck operator for the one-point probability density function (PDF) is known, one can determine the evolution equation for the two-point PDF, or even any $n$-point PDF (at different spatial and temporal locations), and thus, at least in principle, retrieve all the statistical information (see, e.g., \cite{Markkanen:2019kpv,Markkanen:2020bfc,Moreau:2019jpn} for recent applications).\footnote{Naturally, given the hard cutoff in the spatial correlations of the stochastic noises, spatial correlations can be reliably computed only when the relevant length scales are well above $H^{-1}$, but that is overwhelmingly the case for observationally relevant scales.} This logic is also put to good use in the stochastic-$\delta N$ approach, with which one can compute Fourier-space correlation functions of the observable large-scale curvature perturbation $\zeta$ (and not only statistics of the inflationary fields) (see, e.g., Refs.~\cite{Fujita:2013cna,Fujita:2014tja,Vennin:2015hra,Kawasaki:2015ppx,Assadullahi:2016gkk,Vennin:2016wnk,Pinol:2018euk}). Finally, note that first-principles methods to compute the power spectra will be reviewed in Sec.~\ref{subsec: classicalisation}, and that analytical estimates will be discussed in Sec.~\ref{sec:Markovian}. Before explaining in Sec.~\ref{limitations} why this heuristic approach to stochastic inflation is not fully satisfactory, let us now fill in the gaps in the above description by characterising the dynamics of the UV fluctuations. \subsection{Dynamics of UV fluctuations} First, one needs to relate the perturbations of the non-dynamical parts of the metric, $\calN_\UV$ and $\psi$, to the genuine degrees of freedom: the UV parts of the scalar fields and momenta, $Q^I$ and ${\tilde{P}}_I$. For this, it is important to notice that the energy and momentum constraints~\eqref{eq: constraints} contain no time derivative. Hence, contrary to the Hamilton equations~\eqref{first} and \eqref{second}, no explicit noise enters their IR/UV decomposition, and they can be straightforwardly split into independent equations on large and small scales. The momentum constraint gives no information on large scales, in agreement with the fact that all choices of threadings are equivalent at leading order in the gradient expansion, while the small-scale component gives the expression of $\calN_\UV$: \bae{ \calN_\UV=\frac{\varpi_I Q^I}{2M_\text{Pl}^2H^2}. \label{lapseUV} } As for the energy constraint, its long-wavelength limit is non-trivial and is equivalent to the first Friedmann equation, while the small-scale part relates $\psi$ to $\calN_\UV$ and $Q^I,{\tilde{P}}_I$: \bae{ \frac{3M_\text{Pl}^2}{\calN_\IR^2} &\equiv 3M_\text{Pl}^2H^2=\frac{1}{2} G^{IJ}\varpi_I\varpi_J+V, \label{laspeIR} \\ 2M_\text{Pl}^2H^2 \frac{k^2}{a^2}\psi&=\varpi_I{\tilde{P}}^I+V_I Q^I+6 M_\text{Pl}^2 H^3 \calN_\UV\,. \label{eq: perturbation energy const} } The equation~\eqref{laspeIR} confirms that $1/\calN_\IR$ plays the role of a local Hubble parameter, with the usual Friedmann constraint holding in each $k_\sigma$-patch.
In this respect, note that if one converts $\varpi_I$ to $\varphi^{I\prime}$ with use of the IR EoM~(\ref{eq: Langevin in heuristic}), the Friedmann equation would include an explicit noise term. This demonstrates the conceptual advantage of the Hamiltonian language over the Lagrangian one in the stochastic formalism. Equipped with the constraints~\eqref{lapseUV} and \eqref{eq: perturbation energy const}, one can express $E^{QI}$ and $E^{\tilde{P}}_I$ in the condensed form: \bae{ E^{QI}(N,\mathbf{k})=&-{\cal D}_N Q^{I}(N,\mathbf{k})+\frac{{\tilde{P}}^I(N,\mathbf{k})}{H}+M^2_{{\tilde{P}} Q}{}^I{}_J Q^J(N,\mathbf{k}), \label{eq: EQ} \\ E^{\tilde{P}}_I(N,\mathbf{k})=&-{\cal D}_N {\tilde{P}}_I(N,\mathbf{k})- 3{\tilde{P}}_I(N,\mathbf{k})-\frac{k^2}{a^2 H} Q_I(N,\mathbf{k}) -\frac{1}{H} M^2_{QQIJ} Q^J(N,\mathbf{k})-M^2_{Q{\tilde{P}}I}{}^J{\tilde{P}}_J, \label{eq: EP} } where indices are lowered and raised with the IR metric in field space $G_{I J}(\varphi)$ and its inverse, and \bae{ M^2_{QQIJ}&=V_{;IJ}-R_I{}^{KL}{}_J\varpi_K\varpi_L +\frac{1}{2M_\text{Pl}^2H}(V_I\varpi_J+\varpi_I V_J)+\frac{3\varpi_I\varpi_J}{2M_\text{Pl}^2}, \label{M2QQ} \\ M^2_{Q{\tilde{P}}IJ}&=M^2_{{\tilde{P}} QIJ}=\frac{\varpi_I\varpi_J}{2M_\text{Pl}^2H^2}\,, \label{M2QP} } with $V_{;IJ} \equiv \nabla_J V_I=V_{,I J}-\Gamma_{I J}^K V_{K}$ the covariant Hessian of the potential, and $R^S{}_{IJK}\equiv\Gamma^S_{IK,J}-\Gamma^S_{IJ,K} +\Gamma^R_{IK}\Gamma^S_{JR}-\Gamma^R_{IJ}\Gamma^S_{KR}$ the Riemann tensor of the field space. To obtain the expressions~\eqref{eq: EQ}--\eqref{eq: EP}, and in accordance with treating the UV modes linearly, we have simplified the infrared ``coefficients'' by neglecting the noise terms in Eq.~\eqref{eq: Langevin in heuristic}, and similarly, we used $E^{QI}=0$ in Eq.~\eqref{eq: EP}. As expected, the equations $E^{QI}=E^{\tilde{P}}_I=0$ are equivalent to the EoM for linear perturbations in SPT~\cite{Sasaki:1995aw}, with background fields replaced by IR ones, i.e. their combination gives \bae{\label{eq: UV eom mode} {\cal D}_N^2 Q^I+\left(3-\epsilon\right){\cal D}_N Q^I+\left(\frac{k^2}{a^2H^2}\delta^{I}_{J} + \frac{M^2{}^I{}_J}{H^2}\right)Q^J=0, } where (consistently neglecting noise terms in the second equality) \bae{ \epsilon \equiv-\frac{H^\prime}{H} = \frac{\varpi_I\varpi^I}{2M_\text{Pl}^2H^2}, \label{epsilon} } and the mass matrix reads \bae{ M^2{}^I{}_J=V^I{}_{;J}-R^{IKL}{}_J\varpi_K \varpi_L-\frac{H}{a^3M_\text{Pl}^2}{\cal D}_N\left(\frac{a^3}{H}\varpi^I\varpi_J\right). } \subsection{Limitations of the heuristic approach} \label{limitations} Although qualitatively satisfying, the above heuristic approach to stochastic inflation suffers from a number of technical and conceptual issues. \begin{itemize} \item When going from Eqs.~\eqref{first}--\eqref{second} to \eqref{eq: noises}, we attributed $\phi^I(N,k_\sigma(N))$, the Fourier component of the full field at the transition time, to the UV part $Q^I$ (and similarly for momenta), in a rather arbitrary manner. \item We assumed that the UV modes obey $E^{QI}=E^{\tilde{P}}_I=0$, i.e. the same equations as in standard perturbation theory with background fields replaced by IR ones. \item Despite the fact that $\varphi^I$ and $\varpi_I$ are real, the noise correlation $\braket{\xi^{QI}\xi^{{\tilde{P}}J}}$ has an imaginary component, owing to the fact that the quantum operators $Q^I(N,k_\sigma(N))$ and ${\tilde{P}}^I(N,k_\sigma(N))$ do not commute. 
To interpret Eq.~(\ref{eq: Langevin in heuristic}) as proper real stochastic equations, one has to replace by hand $\braket{\xi^{QI}\xi^{{\tilde{P}}J}} \to \frac12 \left( \braket{\xi^{QI}\xi^{{\tilde{P}}J}}+\braket{\xi^{{\tilde{P}}J}\xi^{QI}}\right)=\Re\braket{\xi^{QI}\xi^{{\tilde{P}}J}}$, i.e. to take the (real) vev of hermitian operators only (see e.g. Ref.~\cite{Grain:2017dqa}).\footnote{This problem is not present for the $\xi^Q \xi^Q$ and $\xi^{\tilde{P}} \xi^{\tilde{P}}$ correlations, which are real, as the (real-space) $Q^I$ (and the ${\tilde{P}}^I$ separately) are hermitian operators that commute with one another at equal times, see also Eqs.~\eqref{Q-quantisation} and \eqref{two-point-vacuum}.} \item In addition to these difficulties, there remains an ambiguity in the treatment of stochastic differential equations of the type~\eqref{eq: Langevin in heuristic} as the continuous limit of discrete processes. In a previous letter~\cite{Pinol:2018euk}, we emphasised the role of such discretisations and unveiled the presence of \emph{inflationary stochastic anomalies}, potentially inducing spurious frame dependences or breaking the covariance of the theory. \end{itemize} All these difficulties are related and motivate a careful treatment of the quantum aspects of the problem. First, in Sec.~\ref{sec: stochastic anomalies}, we will discuss and solve the issue of the aforementioned stochastic anomalies. Critical to this resolution is the identification of \textit{independent} Gaussian white noises --- as required from a proper mathematical treatment of stochastic differential equations --- in one-to-one correspondence with the \textit{independent} quantum creation and annihilation operators necessary for the quantisation of the UV modes. Second, in Sec.~\ref{sec: effective hamiltonian action}, we solve the other difficulties related to IR/UV interactions by working at the level of the action and integrating out the quantum UV modes in the closed-time-path formalism. We do so paying particular attention to issues of covariance and, following Refs.~\cite{Tokuda:2017fdh,Tokuda:2018eqs}, to the integration measure's split into IR and UV sectors. Notably, the fact that UV modes become IR, but not the reverse, entails the existence of fluctuations without dissipation, in contrast to ordinary open systems. \subsection{To be or not to be Markovian} \label{subsec:Markovian?} Here we would like to stress an ever-present subtlety, be it in the heuristic approach or in a proper quantum field theory treatment. It lies in the fact that the effective dynamics of the coarse-grained scalar fields is strictly speaking non-Markovian (see, e.g., Refs.~\cite{Boyanovsky:1994me,Rau:1995ea,Boyanovsky:1998pg,Xu:1999aq,Berera:2007qm,Farias:2009wwx,Farias:2009eyj,Gautier:2012vh,Buldgen:2019dus} for related discussions in various areas of physics). For the equations~\eqref{eq: Langevin in heuristic} to describe a Markov process, characterised by the absence of memory, one would need the statistical properties of the noises to be a function of the infrared variables $(\varphi^I,\varpi_I)$ at the current time $N$. However, the power spectra of the UV modes~\eqref{eq: power spectra}, or in a related manner their mode functions, are not even functions of the IR variables. They are simply solutions of the differential equations~\eqref{eq: EQ} and \eqref{eq: EP}, whose ``coefficients" depend on the IR variables, and that are evaluated at time $N$ for the mode with wave number $k_\sigma(N)$.
Moreover, this effective ``background" for the UV dynamics is described by coarse-grained fields whose values were affected by previous realisations of the noises. The dynamics described by such equations is thus very rich and complex. In this respect, we would like to stress that the bulk of this paper, as well as its main results~\eqref{Langevin-intro}--\eqref{noise-properties-intro}, does not involve any Markovian approximation, as our emphasis is on the first-principles derivation of these manifestly covariant equations. This means that our Langevin-type equations can in principle be solved numerically together with the dynamics of the UV modes dictating the noise properties. They can also serve as a basis for future analytical works, and in this context, it can be convenient to resort to the Markovian approximation, at the price, for instance, of assuming some slow-varying regime. We discuss such analytical estimates in Sec.~\ref{sec:Markovian}. One of the advantages of the Markovian approximation is that one can then write a Fokker-Planck (FP) equation for the (one-point) probability density function of the IR fields and momenta, with the result~\eqref{eq: phase space fokker-planck}. Such an equation is easier to handle numerically or analytically than the Langevin-type equations, and covariance is even more manifest with such a formulation. However, we stress again that our main results hold more generally. \section{Stochastic anomalies and their solution} \label{sec: stochastic anomalies} A generic difficulty in the description of stochastic processes is that stochastic equations like Eqs.~\eqref{eq: Langevin in heuristic} are not mathematically defined unless their discretisation scheme is specified (see e.g. Refs.~\cite{risken1989fpe,vankampen2007spp}). In particular, in our context, different choices of discretisations (among which It\^o and Stratonovich are the most famous ones) can lead to a violation of the EoM's covariance, and/or to an unphysical noise-frame dependence, as we pointed out previously~\cite{Pinol:2018euk}. We first review such \emph{stochastic anomalies} in Sec.~\ref{discretisation}. To make the physics easier to grasp, we sometimes restrict ourselves to the particular case of a Markov process. Indeed, this enables one to write the so-called Fokker-Planck equation, corresponding to the Langevin equations with a no-memory noise, which dictates the deterministic evolution of the probability density function for the IR fields and their momenta. We then explain in Sec.~\ref{subsec: classicalisation} why the particular framework of stochastic inflation, where the classical stochastic noise emerges from a quantum field theory description, provides us with a preferred frame for the reduction of the noise auto-correlation matrix: the one of independent creation and annihilation quantum operators. Stochastic anomalies are thus solved when interpreting the Langevin equations~\eqref{eq: Langevin in heuristic} with this particular choice of frame and in the Stratonovich scheme. However, this resolution is rather formal and, in order to make it more explicit, we introduce \emph{stochastically-parallel-transported vielbeins} in Sec.~\ref{vielbeins-Ito}. Strikingly, with the use of such vielbeins, the Langevin equations~\eqref{eq: Langevin in heuristic} interpreted in the Stratonovich scheme can be recast in the form~\eqref{eq: covariant Langevin Ito} interpreted in the It\^o scheme, featuring covariant derivatives adapted to It\^o calculus.
These equations constitute one of the main results of this paper: they are manifestly covariant, readily adapted to numerical implementations, and, when supplemented with a Markovian approximation, they lead to the phase-space Fokker-Planck equation~\eqref{eq: phase space fokker-planck}. \subsection{Ambiguity of the discretisation scheme} \label{discretisation} A stochastic differential equation (SDE), or equivalently its solution as a stochastic integral, is mathematically defined as the infinitesimal-step, continuous limit of a finite-step, discrete summation, in the same way that the Riemann integral is defined. From step $i$ to step $i+1$, the integrand must be evaluated at some time between times $N_i$ and $N_{i+1}$, expressed as $(1-\alpha)N_i+\alpha N_{i+1}$ with the parameter $0\le\alpha\le1$. The Riemann integral of differentiable functions is independent of this discretisation choice of $\alpha$ in the continuous limit. However, due to the non-differentiability of the stochastic noise, the stochastic integral does depend on $\alpha$. Conveniently for our purpose, let us explain these subtleties in a situation where the stochastic variables $\calX^I$ are coordinates on a manifold, endowed with a metric whose components in these coordinates are $G_{IJ}$ (see Appendix~\ref{appendix: stochastic calculus} for generic mathematical properties independently of this specific context). Let us further assume that the stochastic process under study is described by a deterministic drift $h^I$ as well as noise amplitudes $g^{\calX I}_A$ that all transform as vectors under redefinitions of the coordinates $\calX^I$, and such that the corresponding set of Langevin equations reads \bae{\label{eq: dXdt} \dif{\calX^I}{N}=h^I+g^{\calX I}_A \circ_\alpha \xi^A, \quad \braket{\xi^A(N)\xi^B(N^\prime)}=\delta^{AB}\delta(N-N^\prime), \quad \delta^{AB} g_A^{\calX I} g_B^{\calX J}=A^{\calX\calX IJ}, } together with a specified discretisation scheme $\alpha$ represented by the symbol $\circ_\alpha$. Here $A^{\calX\calX IJ}$ stands for the auto-correlation of the effective noises $\xi^I=g^{\calX I}_A\xi^A$, i.e. $\langle \xi^I(N) \xi^J(N^\prime) \rangle=A^{\calX\calX IJ}\delta(N-N^\prime)$, and transforms as a rank-2 tensor under coordinate transformations.\footnote{Notice that in this more general mathematical context, we label the noise amplitudes $g^{\calX I}_A$ and their auto-correlation $A^{\calX\calX IJ}$ by the stochastic variables $\calX^I$ that receive the stochastic kicks via the Langevin equation. This notation is slightly different from the one used in the specific multifield stochastic inflation context, where the noise amplitudes $g^{\tilde{X}I}_A$ and their auto-correlation $A^{\tilde{X}\tilde{X}IJ}$ are labelled by the UV modes $(Q^I, {\tilde{P}}^I)$ that are responsible for the stochastic kicks received by the IR fields $(\varphi^I,\varpi^I)$.} It is important to understand that prescribing this auto-correlation is not sufficient to define the corresponding SDE: one also needs to specify the full set of $g^{\calX I}_A$'s, i.e. the decomposition of the $\xi^I$'s onto a set of independent noises $\xi^A$, which we will call in what follows an orthonormal frame for the noises. The setup~\eqref{eq: dXdt} encapsulates the specific case of stochastic inflation when momenta are neglected (although a decomposition into independent noises $\xi^A$ has not been identified yet), i.e., only the first of the Langevin equations~\eqref{eq: Langevin in heuristic} is considered here.
We do so to simplify the presentation, but the discussion will be extended next to the more general setup of Langevin equations in phase space. The simplest situation for such SDEs is when the noise amplitude $g^{\calX I}_A$ is only a (deterministic) function of time, in which case the noise is called \emph{additive}, and all discretisations have the same continuous limit. In the generic case however, the noise also depends on the stochastic variables $\calX^I$ at time $N$, in which case it is called \emph{multiplicative} and choices of discretisations matter. As we stressed in Sec.~\ref{subsec:Markovian?}, the stochastic equations for the coarse-grained scalar fields are even more complicated as, contrary to standard SDEs, the dependence of the noise on the stochastic variables is only indirect. This is the reason why we sometimes refer to them as Langevin-type equations. Despite this, let us begin by explaining the covariance issue in the simplest context of a Markovian description in which the $g^{\calX I}_A$'s explicitly depend on $\calX^I(N)$. In that case, there is a one-to-one correspondence between the Langevin equations~\eqref{eq: dXdt} with a given discretisation scheme $\alpha$ and a Fokker-Planck partial differential equation for the transition probability from the initial state $\calX^I_\mathrm{ini}(N_\mathrm{ini})$ to $\calX^I(N)$, sometimes simply referred to as the probability density function (PDF) of the stochastic variables, $P\left(N;\calX^I\right)$: \bae{\label{eq: covariant fokker-planck} \partial_N P_\mathrm{s}=-\nabla_I(h^I P_\mathrm{s})+\alpha\nabla_I\left[g^{\calX I}_A \nabla_J \left(g^{\calX J}_A P_\mathrm{s}\right)\right]-\left(\alpha-\frac{1}{2}\right)\frac{1}{\sqrt{G}}\partial_I\partial_J\left(\sqrt{G} A^{\calX\calX IJ}P_\mathrm{s}\right)\,. } Here, $\nabla_I$ is the usual field-space covariant derivative and we defined a rescaled PDF $P_\mathrm{s}=P/\sqrt{G}$, with $G=\textrm{det}(G_{IJ})$, where the subscript $s$ indicates that it is a scalar under redefinition of the coordinates. From this expression, it is possible to identify two particular values of $\alpha$. It is indeed possible to set to zero the second term in Eq.~\eqref{eq: covariant fokker-planck} with the choice of a \emph{prepoint}, $\alpha=0$ discretisation, called the It\^o scheme. Another interesting option is to keep this second term but to set to zero the third one by preferring a \emph{midpoint}, $\alpha=1/2$ discretisation, called the Stratonovich scheme. In the rest of this section, we will review the pros and cons of each of these two choices, keeping in mind that our derivation of the Langevin equations in the previous section did not come with any prescription for $\alpha$, so that at this stage one should discriminate between the possible discretisations based on physical arguments. \subsubsection{It\^o scheme} \label{Ito-scheme} The It\^o scheme, corresponding to $\alpha=0$, is widely used in applied and computational mathematics because it has the advantage of expressing explicitly the stochastic variables at time $N_{i+1}$ in terms of known values at $N_i$. Not only is it conceptually clear, but it is also easy to implement numerically, which explains its widespread use in various areas of science.
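Before discussing its conceptual status in our context, let us make the $\alpha$-dependence tangible with a self-contained toy demonstration (ours, unrelated to the inflationary system): the one-dimensional SDE $\mathrm{d}\calX=\calX\circ_\alpha\mathrm{d}\xi$ integrated with the prepoint rule and with a Heun-type midpoint rule, whose continuum limits are the It\^o and Stratonovich solutions respectively. The two ensembles converge to different statistics, $\braket{\calX}=\calX_0$ versus $\braket{\calX}=\calX_0\,\mathrm{e}^{N/2}$.
\begin{verbatim}
import numpy as np

# Toy demonstration: the continuum limit of dX = X o_alpha xi depends on
# alpha.  Prepoint (Ito) vs Heun-type midpoint (Stratonovich) updates of
# the same multiplicative-noise SDE, X(0) = 1, integrated up to N = 1.
rng = np.random.default_rng(1)
n_real, n_steps = 100_000, 400
dN = 1.0 / n_steps
X_ito, X_str = np.ones(n_real), np.ones(n_real)
for _ in range(n_steps):
    dW = rng.standard_normal(n_real) * np.sqrt(dN)
    X_ito = X_ito + X_ito * dW                    # amplitude at N_i (prepoint)
    X_pred = X_str + X_str * dW                   # predictor step ...
    X_str = X_str + 0.5 * (X_str + X_pred) * dW   # ... then midpoint average
print(X_ito.mean())   # -> ~1.00, the Ito mean is conserved
print(X_str.mean())   # -> ~1.65 = exp(1/2), the Stratonovich mean grows
\end{verbatim}
The same dichotomy, dressed with field-space indices, is what distinguishes the two interpretations of Eq.~\eqref{eq: dXdt}.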
However, this description seems to suffer from a fundamental issue in our context: in the Fokker-Planck (FP) equation~\eqref{eq: covariant fokker-planck}, where the third term survives for $\alpha=0$, only partial derivatives $\partial_I$ appear, rather than covariant derivatives $\nabla_I$. Thus, the FP equation as it is breaks covariance. More precisely, the problem is not that this equation is formulated in terms of non-covariant objects, i.e., that it is not manifestly covariant; it is that it is not consistent with $P_\mathrm{s}$ being a scalar quantity. Actually, this fundamental flaw can already be seen at the level of the Langevin-type equations, even when the process is not assumed to be Markovian. Indeed, the standard chain rule for the differentiation of composite functions of the stochastic variables $\calX^I$ does not hold in the It\^o prescription, but gets corrected by the auto-correlation of the noise. This so-called It\^o's lemma states that, under a change of variables $\calX^I\to\bar{\calX}^{\bar{I}}=\bar{\calX}^{\bar{I}}(\calX^I)$, the infinitesimal variations read~\cite{ito1944109}: \bae{ \mathrm{d}\bar{\calX}^{\bar{I}}=\pdif{\bar{\calX}^{\bar{I}}}{\calX^I}\mathrm{d}\calX^I+\frac{1}{2}\frac{\partial^2\bar{\calX}^{\bar{I}}}{\partial\calX^I\partial\calX^J}A^{\calX\calX IJ}\mathrm{d} N. \label{eq: Ito-lemna} } We prove such kinds of exotic properties of stochastic calculus in Appendix~\ref{appendix: stochastic calculus} and refer the interested reader to it. However, the form of this lemma can be easily understood: a white noise is not a differentiable function because its infinitesimal variation $\mathrm{d} \xi$ is proportional to $\sqrt{\mathrm{d} N}$ rather than $\mathrm{d} N$, thus $\mathrm{d} \xi/\mathrm{d} N$ diverges when $\mathrm{d} N \rightarrow 0$. Therefore, at order $\mathrm{d} N$, even the second-order terms in the Taylor expansion~\eqref{eq: Ito-lemna} matter. The conclusion is that the standard infinitesimal variation $\mathrm{d}\calX^I$ does not transform as a vector, contrary to the expectation for the infinitesimal variation of a coordinate on a manifold. Thus, although equations~\eqref{eq: dXdt} and \eqref{eq: Langevin in heuristic} are covariant under the standard chain rule, they are actually not if they are interpreted in the It\^o sense, precisely because the standard chain rule is not verified there. This fact forbids us from interpreting the Langevin equations derived in the heuristic approach with the It\^o scheme. However, covariance and It\^o together are not doomed to fail, and it is actually possible to define covariant derivatives compatible with It\^o calculus that compensate for the breaking of the standard chain rule.
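For concreteness, we add a simple one-dimensional illustration of this lemma. Consider a driftless process $\mathrm{d}\calX=g\,\mathrm{d}\xi$ with constant noise amplitude $g$, and the change of variables $\bar{\calX}=\calX^2$: It\^o's lemma~\eqref{eq: Ito-lemna} gives \bae{ \mathrm{d}\bar{\calX}=2\calX\,\mathrm{d}\calX+g^2\,\mathrm{d} N\,, } so that $\bar{\calX}$ acquires a deterministic drift even though $\calX$ has none, in agreement with the familiar linear growth $\braket{\calX^2}=g^2 N$ of the variance of a Brownian motion, which the naive chain rule $\mathrm{d}\bar{\calX}=2\calX\,\mathrm{d}\calX$ would miss entirely. With this intuition in mind, let us now construct such It\^o-compatible covariant derivatives.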
A possible such derivative for the coordinates $\calX^I$ is given by \bae{\label{eq: ItoD for X} \mathfrak{D}\calX^I=\mathrm{d}\calX^I+\frac{1}{2}\Gamma^I_{JK}A^{\calX\calX JK}\mathrm{d} N, } which we show to transform as a vector in It\^o calculus in Appendix~\ref{appendix: stochastic calculus}.\footnote{Related notions of It\^o covariant derivatives have been discussed in the literature independently of the context of inflation in Ref.~\cite{GRAHAM1985209}.} There we also derive It\^o-covariant derivatives for vectors $\calU^I$ and covectors $\calV_I$ when they are subject to Langevin equations with noises $g^{\calU I}$ and $g^{\calV}_{I}$: \bae{ &\quad \mathfrak{D}\calU^I={\cal D}\calU^I+\frac{1}{2}\left(\Gamma^I_{JS,K}-\Gamma^M_{JS}\Gamma^I_{MK}\right)\calU^S A^{\calX\calX JK}\mathrm{d} N+\Gamma^I_{JK}A^{\calX\tilde{\calU}JK}\mathrm{d} N, \\ &\quad \mathfrak{D}\calV_I={\cal D}\calV_I-\frac{1}{2}\left(\Gamma^S_{IJ,K}+\Gamma^M_{IJ}\Gamma^S_{KM}\right)\calV_S A^{\calX\calX JK}\mathrm{d} N -\Gamma^K_{IJ}A^{\calX\tilde{\calV}J}{}_K\mathrm{d} N, \label{eq: ItoD for V} } where the quantities $A^{\calX\tilde{\calU}IJ}=g^{\calX I}_A g^{\tilde{\calU}J}_A$ and $A^{\calX\tilde{\calV}I}{}_J=g^{\calX I}_A g^{\tilde{\calV}}_{JA}$ are the cross-correlations between the coordinate noise $g^{\calX I}_A$ and the covariant combinations of (co)vector noise: \bae{ g^{\tilde{\calU}I}_A=g^{\calU I}_A+\Gamma^I_{JK}\calU^J g^{\calX K}_A, \qquad g^{\tilde{\calV}}_{IA}=g^{\calV}_{IA}-\Gamma_{IJ}^K\calV_K g^{\calX J}_A. } Note also that the difference between these It\^o-covariant derivatives and the usual covariant derivatives for vectors and covectors, ${\cal D}\calU^I=\mathrm{d}\calU^I+\Gamma^I_{JK}\calU^J \mathrm{d} \calX^K$ and ${\cal D}\calV_I=\mathrm{d}\calV_I-\Gamma^J_{IK}\calV_J \mathrm{d} \calX^K$, only contains terms proportional to squared noise amplitudes. Had we obtained Langevin equations of the type~\eqref{eq: Langevin in heuristic} but with $\mathrm{d}$ and ${\cal D}$ replaced by $\mathfrak{D}$, then they would be covariant under field redefinitions (and induced redefinitions of momenta) if and only if interpreted in the It\^o scheme. Actually, we will see in section~\ref{vielbeins-Ito} that exactly these It\^o-covariant derivatives emerge when interpreting our equations in the Stratonovich scheme, and reformulating them in the It\^o language. However, for the moment, one should abandon the It\^o scheme together with equations~\eqref{eq: Langevin in heuristic}, as covariance would then be lost. Let us now discuss the second most popular discretisation. \subsubsection{Stratonovich scheme} \label{Strato} The midpoint, or Stratonovich, discretisation corresponds to $\alpha=1/2$. Physicists like it because it is intuitive to use in analytical calculations: as proved in Appendix~\ref{appendix: stochastic calculus}, the standard chain rule applies, hence it is easy to check the covariance of a given equation and straightforward to perform changes of variables. Said more simply: when physicists make ``naive'' computations by applying standard rules in a stochastic context, like what we did in Sec.~\ref{sec: heuristic}, they are implicitly using the Stratonovich scheme. As we can clearly see from the FP equation~\eqref{eq: covariant fokker-planck}, it is the only choice that respects covariance.
Again, this can be understood already at the level of the Langevin equations: since the standard chain rule applies, the infinitesimal variations $\mathrm{d} \varphi^I$ and ${\cal D} \varpi_I$ are indeed vectors and covectors of the field space. Nonetheless, although general covariance is respected, this description is not yet satisfactory. Indeed, when the noise is multiplicative (which it is in most interesting scenarios), the second term in Eq.~\eqref{eq: covariant fokker-planck} depends explicitly on the identification of an orthonormal frame for the noises, through the appearance of $g^{\calX I}_A$. However, the only outcome of our derivation for stochastic inflation so far has been the auto-correlation of the effective noises, for instance $\braket{\xi^{QI}(N)\xi^{QJ}(N^\prime)}=\frac{{k_\sigma{}^\prime}}{k_\sigma}\mathcal{P}^{QQIJ}(k_\sigma)\delta(N-N^\prime)$ in a given $\sigma$-Hubble patch. Of course, it is always possible to reduce the auto-correlation matrix in a frame where it is diagonal, i.e. to find a ``square-root matrix'' $g_A^{\calX I}$ verifying $\delta^{AB} g_A^{\calX I} g_B^{\calX J}=A^{\calX\calX IJ}$. However, such a frame is not unique, and an ambiguity remains: the physics described by the FP equation~\eqref{eq: covariant fokker-planck} depends on the choice of this frame. This is easily seen if, after a choice $g^{\calX I}_A$, one performs a rotation to another orthonormal frame in which the noise is diagonal again, with an orthogonal matrix $R^{\bar{B}}_{\phantom{\bar{B}}A}$ such that the ``square-root matrix'' of the noise correlations changes without affecting its auto-correlation: $g_A^{\calX I}=R^{\bar{B}}_{\phantom{\bar{B}}A}{\bar{g}}_{\bar{B}}^{\calX I}$. Then, the second term in the FP equation transforms as $\nabla_I\left[g_A^{\calX I} \nabla_J(g_A^{\calX J} P_\mathrm{s})\right]=\nabla_I\left[R^{\bar{B}}_{\phantom{\bar{B}}A} R^{\bar{C}}_{\phantom{\bar{C}}A}{\bar{g}}_{\bar{B}}^{\calX I} \nabla_J({\bar{g}}_{\bar{C}}^{\calX J} P_\mathrm{s})\right]+ \nabla_I\left[R^{\bar{B}}_{\phantom{\bar{B}}A}\left(\nabla_J R^{\bar{C}}_{\phantom{\bar{C}}A} \right){\bar{g}}_{\bar{B}}^{\calX I}{\bar{g}}_{\bar{C}}^{\calX J} P_\mathrm{s} \right]$. Because the matrix $R$ is orthogonal, $R^{\bar{B}}_{\phantom{\bar{B}}A} R^{\bar{C}}_{\phantom{\bar{C}}A}=\delta^{\bar{B}\bar{C}}$, the result would be frame-independent if there were only the first of these two terms. However, since $R$ can depend on the position in field space, its (covariant) derivative is not zero and there is no reason in general for $R^{\bar{B}}_{\phantom{\bar{B}}A}\left(\nabla_J R^{\bar{C}}_{\phantom{\bar{C}}A} \right)$ to vanish, hence the frame-dependence of the result. Actually, this difficulty holds for all discretisation schemes except when $\alpha=0$, the It\^o case in which this second term in Eq.~\eqref{eq: covariant fokker-planck} drops out. That also explains why the It\^o scheme is often preferred in numerical implementations: it is possible to use an algorithm to reduce the noise correlation matrix in an orthonormal frame, and the result does not depend on the choice of such a frame.
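The frame-dependence just described can be made tangible with a small numerical check (ours; a toy two-dimensional example, not the inflationary noise matrix). Two valid ``square roots" of the same auto-correlation $A=g g^T$, related by a field-dependent rotation, induce different noise-induced drifts $\frac{1}{2}g^{\calX J}_A\partial_J g^{\calX I}_A$, which is precisely the combination appearing in the second term of Eq.~\eqref{eq: covariant fokker-planck}.
\begin{verbatim}
import numpy as np

# Toy check: the "square root" g of a noise matrix A = g g^T is not unique.
# Two valid choices, related by an X-dependent rotation R(X), give the same
# auto-correlation A but different noise-induced (Stratonovich) drifts
# 1/2 g^J_A d_J g^I_A -- the frame dependence discussed in the text.
def g_diag(X):                       # frame 1: diagonal square root
    return np.sqrt(1.0 + X[0]**2) * np.eye(2)

def g_rot(X):                        # frame 2: rotated by theta(X) = X[1]
    c, s = np.cos(X[1]), np.sin(X[1])
    return g_diag(X) @ np.array([[c, -s], [s, c]])

def drift(g, X, eps=1e-6):           # 1/2 g^J_A d_J g^I_A, numerically
    d = np.zeros(2)
    for J in range(2):
        dX = np.zeros(2); dX[J] = eps
        dg = (g(X + dX) - g(X - dX)) / (2.0 * eps)   # d_J g^I_A
        d += 0.5 * g(X)[J, :] @ dg.T                 # sum over A, then J
    return d

X = np.array([0.3, 0.7])
print(np.allclose(g_diag(X) @ g_diag(X).T, g_rot(X) @ g_rot(X).T))  # True
print(drift(g_diag, X))   # ~ [ 0.15, 0.]
print(drift(g_rot, X))    # ~ [-0.39, 0.]: same A, different physics
\end{verbatim}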
However, this apparently unsolvable ambiguity, breaking of covariance in It\^o or spurious frame-dependence in Stratonovich, is solved by the understanding that in stochastic inflation there is actually a preferred frame in which the noise is diagonal, and that it is given by the basis of independent creation and annihilation operators of the quantum UV modes. This result has close links to the classicalisation of light scalar fields on super-Hubble scales, as we shall see now. \subsection{Classicalisation and frame of independent creation and annihilation operators} \label{subsec: classicalisation} As is well known in the context of multifield inflation (see e.g. Refs.~\cite{Salopek:1988qh,GrootNibbelink:2001qt,Tsujikawa:2002qx,Weinberg:2008zzc}), the quantum operators $\hat{Q}^I$ and $\hat{{\tilde{P}}}_I$ should be decomposed on a $N_\mathrm{fields}$-dimensional set of independent creation and annihilation operators (labelled by the index $A$) as: \bae{ \bce{ \displaystyle \hat{Q}^I(N,\mathbf{k})=Q^I_A(N,k)\hat{a}^A_\mathbf{k}+\left(Q^{I}_A(N,k)\right)^*\hat{a}^{A \dagger}_{-\mathbf{k}}, \\[10pt] \displaystyle \hat{{\tilde{P}}}_I(N,\mathbf{k})={\tilde{P}}_{IA}(N,k)\hat{a}^A_\mathbf{k}+\left({\tilde{P}}_{IA}(N,k)\right)^*\hat{a}^{A \dagger}_{-\mathbf{k}}, } \quad \text{with } \left[\hat{a}^A_\mathbf{k},\hat{a}^{B \dagger}_\mathbf{k^\prime}\right]=(2\pi)^3 \delta^{AB}\delta^{(3)}(\mathbf{k}-\mathbf{k^\prime}), \label{quantisation} } where we note that indices $A, B \ldots$ can be raised and lowered with the symbol $\delta_{AB}$, so that the up or down position has no particular meaning. One should therefore follow the evolution of the $2N_\mathrm{fields}^2$ complex mode functions $Q^I_A(N,k)$ and ${\tilde{P}}_{IA}(N,k)$ verifying the first-order differential equations $E^{QI}=0=E^{{\tilde{P}}}_{I}$~\eqref{eq: EQ}--\eqref{eq: EP}, or equivalently solve $N_\mathrm{fields}$ times (corresponding to the label $A$) the coupled $N_\mathrm{fields}$-dimensional system of second-order differential equations~\eqref{eq: UV eom mode} verified by each set of $Q^I_A(N,k)$, each time with different initial conditions. This stems from the fact that, in order to define a vacuum state $\ket{0}$ and to quantise the fluctuations when all relevant momenta are sub-Hubble, one should identify $N_\mathrm{fields}$ independent fields, each coming with its own creation and annihilation operators. Note that in a generic system of coordinates and/or in a curved field space, the field fluctuations $\hat{Q}^I$ do not verify the above property, as they are kinetically coupled. However, their projections on a set of vielbeins, or even better, on a set of parallel-transported vielbeins (see Sec.~\ref{vielbeins-Ito}), naturally provide independent degrees of freedom inside the Hubble radius. Relatedly, let us remark that even with a fixed vacuum state annihilated by the $\hat{a}^A_\mathbf{k}$'s, there is no unique choice of independent operators verifying the commutation relations in Eq.~\eqref{quantisation}. Indeed, once such a set is given, any other one related by a unitary transformation $U$ provides another suitable set, i.e.
the equations~\eqref{quantisation} take the same form in terms of the barred quantities such that \bae{ &\hat{\bar{a}}^{\bar{A}}_\mathbf{k}=U^{\bar{A}}{}_B\hat{a}^{B}_\mathbf{k} \qquad \text{and} \label{rotation-a-operators} \\ &\bar{Q}^I_{\bar{A}}=Q^I_B(U^\dagger)^B{}_{\bar{A}}, \quad \bar{{\tilde{P}}}_{I\bar{A}}={\tilde{P}}_{IB}(U^\dagger)^B{}_{\bar{A}}\,, \label{rotation-basis} } without changing either the operators $\hat{Q}^I$ and $\hat{{\tilde{P}}}_I$ or the vacuum state $\ket{0}$. This arbitrariness is of course equivalently visible at the level of the quantisation conditions. Indeed, the commutation relations \bae{\label{eq: commutation relations} &[\hat{Q}^I(N,\mathbf{x}),\hat{{\tilde{P}}}_J(N,\mathbf{x}^\prime)]=\frac{i \delta^I_J}{a^3(N)} \delta^{(3)}(\mathbf{x}-\mathbf{x}^\prime) \,, \\ &[\hat{Q}^I(N,\mathbf{x}),\hat{Q}^J(N,\mathbf{x}^\prime)]=[\hat{{\tilde{P}}}_I(N,\mathbf{x}),\hat{{\tilde{P}}}_J(N,\mathbf{x}^\prime)]=0, } impose the following relations on the mode functions: \bae{ &Q^I_A(N,k) {\tilde{P}}^*_{JA}(N,k)-\textrm{c.c.}=\frac{i \delta^I_J}{a^3(N)}, \label{i-part}\\ &Q^I_A(N,k)Q^{*J}_A(N,k)-\textrm{c.c.}={\tilde{P}}_{IA}(N,k) {\tilde{P}}^*_{JA}(N,k)-\textrm{c.c.}=0, \label{Q-quantisation} } where, as before, the sum over $A$ is implicit.\footnote{As usual, these relations, once verified at some initial time, hold at all times by virtue of the equations of motion verified by the mode functions.} It is then apparent that two sets of mode functions related by a time-independent unitary matrix as in Eq.~\eqref{rotation-basis} are equally valid and describe the same physics. Once this quantisation is in place, the two-point vacuum expectation values of the quantum UV operators $\hat{Q}^{\tilde{X}I}=\left(\hat{Q}^I,\hat{{\tilde{P}}}^I\right)$ at a given time $N$ are given by \bae{ \braket{ 0| \hat{Q}^{\tilde{X}I}(N,\mathbf{k})\hat{Q}^{\tilde{Y}J}(N,\mathbf{k^\prime}) | 0}=(2\pi)^3 \delta^{(3)}(\mathbf{k}+\mathbf{k^\prime}) \delta^{AB} Q^{\tilde{X}I}_A(N,k)\left(Q^{\tilde{Y}J}_B(N,k)\right)^*, \label{two-point-vacuum} } thus providing the power spectra entering the properties of the noises~\eqref{eq: noise correlation}. However, to describe their statistics at time $N$, only the mode $k_\sigma(N)$ matters. Crucially, this mode is far outside the Hubble radius for $\sigma \ll 1$, which we indeed considered from the start to ensure that the gradient expansion is valid and that the infrared fields can be considered as classical (see Sec.~\ref{gauge-smoothing}). Relatedly, in this regime, the complex mode functions $Q^I_A(N,k)$ and ${\tilde{P}}_{IA}(N,k)$ become real to a very good accuracy (up to an irrelevant constant unitary matrix), corresponding to fluctuations being in a highly squeezed state~\cite{PhysRevD.42.3413,Albrecht_1994,Polarski:1995jg,KIEFER_1998}. This property is well known for a single light scalar field, and we will see in Sec.~\ref{sec:Markovian} that it also holds for multiple scalar fields in the massless approximation.
More interestingly, we also show in Sec.~\ref{sec:generic} that it is actually valid in the slowly-varying approximation for light scalars of masses $m_i<3H/2$, the situation of interest for the stochastic formalism.\footnote{In this framework, the presence of heavy degrees of freedom with masses $m_i>3H/2$, for which Eq.~\eqref{eq: squeezing} is not applicable (because the mode functions of heavy fields have genuinely time-dependent phases on super-Hubble scales), is not problematic, as their mode functions are anyway $\sigma$-suppressed at coarse-graining scale crossing.} Using this, it is thus possible to forget about the complex conjugates ${}^*$ and to consider: \bae{\label{eq: squeezing} \hat{Q}^I(N,\mathbf{k})\underset{k\ll aH}{\simeq}Q^I_A(N,k)\left(\hat{a}^A_\mathbf{k}+\hat{a}^{A \dagger}_{-\mathbf{k}}\right), \qquad \hat{{\tilde{P}}}_I(N,\mathbf{k})\underset{k\ll aH}{\simeq}{\tilde{P}}_{IA}(N,k)\left(\hat{a}^A_\mathbf{k}+\hat{a}^{A \dagger}_{-\mathbf{k}}\right). } It is then natural to define the variables $b_\mathbf{k}^A=\hat{a}^A_\mathbf{k}+\hat{a}^{A \dagger}_{-\mathbf{k}}$, where we purposely dropped the hat. Indeed, these are the only ``quantum" operators that we are left with on super-Hubble scales and they all commute with one another, i.e. $ \left[b_\mathbf{k}^A,b^{B}_\mathbf{k^\prime}\right]=0$, hence the fluctuations can be understood as classical.\footnote{Of course, the canonical commutation relations for fields and momenta still hold, and whether cosmological perturbations have completely lost their quantum nature or not is a field of research that has its own dedicated literature, see e.g. Refs.~\cite{SUDARSKY_2011,Martin_2016,martin2019cosmic,ashtekar2020emergence} for recent references. In this paper, we shall be conservative and consider that super-Hubble fluctuations can well be treated classically.} It can easily be checked that this definition of the $b_\mathbf{k}^A$'s endows them with Gaussian statistics with $\braket{b^A_\mathbf{k}}=0$ and $\braket{b^A_\mathbf{k}b^B_\mathbf{k^\prime}}=(2\pi)^3\delta^{AB}\delta^{(3)}(\mathbf{k}+\mathbf{k^\prime})$, where the brackets of quantum vacuum expectation values can now be understood as statistical ensemble averages for the stochastic fields $b_\mathbf{k}^A$. The noises~\eqref{eq: noises} can now be expressed as \bae{ &\xi^{QI}(x)=f_\sigma Q^I_A(N,k_\sigma(N)) \xi^A(x), \quad \xi_{{\tilde{P}}}^{I}(x)= f_\sigma {\tilde{P}}_{IA}(N,k_\sigma(N)) \xi^A(x), \text{ with } f_\sigma=\sqrt{\frac{k_\sigma^3}{2\pi^2}\frac{{k_\sigma{}^\prime}}{k_\sigma}}, } where, again, the ratio $k_\sigma^\prime/k_\sigma$ may be approximated by unity. We also defined \bae{ \xi^A(x)=f_\sigma^{-1}\int \frac{\dd^3 \mathbf{k}}{(2\pi)^3} \mathrm{e}^{i \mathbf{k} \cdot \mathbf{x}}\dif{\theta(k-k_\sigma(N))}{N}b^A_\mathbf{k}, } that are independent Gaussian white noises normalised to almost unity in a given $\sigma$-Hubble patch: \bae{ \braket{\xi^A(x) \xi^B(x^\prime)} = \frac{\sin{k_\sigma r}}{k_\sigma r} \delta^{AB} \delta(N-N^\prime), \text{ with } r=|\mathbf{x}-\mathbf{x^\prime}|.
} Recalling that the spatial correlation $\sin{k_\sigma r}/k_\sigma r$ should be approximated by the theta function $\theta(1-k_\sigma r)$ taking values $1$ inside a $\sigma$-Hubble patch and $0$ outside, one eventually finds \bae{\label{eq: normalized-independent-noises} \braket{\xi^A(x) \xi^B(x^\prime)} = \begin{cases} \delta^{AB} \delta(N-N^\prime),& \text{if } r=|\mathbf{x}-\mathbf{x^\prime}| \leq \left(\sigma a H \right)^{-1}, \\ 0, & \text{otherwise.} \end{cases} } Strikingly, these are the same noises $\xi^A$ that appear both in $\xi^{QI}$ and $\xi^I_{{\tilde{P}}}$. This means that we were able to ``decorrelate" the $2N_\mathrm{fields}$ correlated noises by expressing them in terms of $N_\mathrm{fields}$ uncorrelated ones. In mathematical language, one would say that the noise amplitude $A^{\tilde{X}\covYIJ}$, with $(\tilde{X},\tilde{Y}) \in (Q,{\tilde{P}})$, can be understood as a bilinear form whose matrix is of dimension $2N_\mathrm{fields} \times 2N_\mathrm{fields}$, but of rank $N_\mathrm{fields}$ only, and can thus be reduced. The Langevin equations~\eqref{eq: Langevin in heuristic} are thus rewritten as \bae{\label{eq: Langevin independent noise} \varphi^{I\prime}=\frac{1}{H}G^{IJ}\varpi_J+ f_\sigma Q^I_A \circ \xi^A, \quad\quad {\cal D}_N\varpi_I=-3\varpi_I-\frac{V_I}{H}+f_\sigma {{\tilde{P}}}_{IA} \circ \xi^A, } where now the Stratonovich interpretation, indicated by the simple symbol $\circ \equiv\circ_{1/2}$, is non-ambiguous as independent noises $\xi^A$ have been identified, and where covariance is respected as the standard chain rule applies. To be precise, there is strictly speaking a family of possible independent noises $\xi^A$, but, taking into account that we used real $(Q^I_A,{\tilde{P}}_{I A})$ variables in Eq.~\eqref{eq: Langevin independent noise}, these noises are simply related by a constant rotation $U$ (when restricting Eq.~\eqref{rotation-a-operators} to real orthogonal matrices). Like in Eq.~\eqref{rotation-basis}, this induces a constant rotation $U^T$ of the noises' amplitudes $(Q^I_A,{\tilde{P}}_{IA})$, which, as we have seen in Sec.~\ref{Strato}, does not lead to any ambiguity. Modulo this irrelevant rotation, we can hence talk about \textit{the} frame of independent noises used in the Langevin-type equations~\eqref{eq: Langevin independent noise}. Finally, one has to be careful with the covariant time derivative ${\cal D}_N$, as it contains stochastic noises through the time derivative $\varphi^{I\prime}$. It should hence also be discretised in the Stratonovich scheme: \bae{\label{eq: Strato D_N} {\cal D}_N\calU^I=\calU^{I\prime}+\Gamma^I_{JK}\calU^J\circ\varphi^{K\prime}, \qquad {\cal D}_N\calV_I=\calV_I{}^\prime-\Gamma_{IJ}^K\calV_K\circ\varphi^{J\prime}\,, } where the symbol $\circ$ indicates that when discretised, the term on its left should be evaluated at the midpoint, i.e. one has for instance: \bae{\Gamma_{IJ}^K\calV_K\circ\varphi^{J\prime} \equiv \frac{G^{J L}}{H}\Gamma_{IJ}^K\calV_K \varpi_L + \left(f_\sigma \Gamma_{IJ}^K\calV_K Q^J_A \right) \circ \xi^A\,. } Now that we have understood that the frame of independent creation-annihilation operators provides the right frame in which to formulate the Langevin equations with a Stratonovich discretisation, all issues of covariance and frame-dependences are solved, but this resolution is still somewhat formal.
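To make the meaning of the symbol $\circ$ operational, the following minimal sketch (a one-dimensional toy Langevin equation in Python, with purely illustrative drift and noise-amplitude functions, not the full phase-space system above) implements a Stratonovich step with the standard Heun predictor-corrector scheme, in which the noise amplitude is effectively evaluated at the midpoint of each time step:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def drift(x):                 # illustrative deterministic drift
    return -x

def amp(x):                   # illustrative field-dependent noise amplitude
    return 0.1 * (1.0 + x**2)

def stratonovich_step(x, dN):
    """One Heun step for dx = drift(x) dN + amp(x) o dW."""
    dW = rng.normal(0.0, np.sqrt(dN))
    x_pred = x + drift(x) * dN + amp(x) * dW          # Euler predictor
    # averaging the two endpoints is equivalent to a midpoint (alpha = 1/2)
    # evaluation of the noise amplitude, i.e. the Stratonovich scheme
    return x + 0.5 * (drift(x) + drift(x_pred)) * dN \
             + 0.5 * (amp(x) + amp(x_pred)) * dW

x, dN = 1.0, 1e-3
for _ in range(5000):
    x = stratonovich_step(x, dN)
print("x(N = 5) =", x)
\end{verbatim}
Because the noise amplitude is evaluated at the midpoint of each step, the standard chain rule applies to changes of variables performed on such discretised trajectories.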
In order to make this resolution more readily apparent, we will go one step further and derive equivalent Langevin-type equations in the It\^o scheme, which are easier to deal with numerically. \subsection{It\^o-covariant Langevin equations} \label{vielbeins-Ito} In order to go from the Stratonovich Langevin equations~\eqref{eq: Langevin independent noise} to It\^o ones with the same physical content, we will introduce vielbeins defining a local orthonormal frame along the IR trajectory. This additional structure will eventually disappear from the final It\^o Langevin equations, while generating It\^o-covariant derivatives, but for this, one has to be careful about their definitions. Let us first consider a given point in field space, say the initial condition for $\varphi^I$ in a given realisation of the stochastic processes. At this point, it is possible to reduce the metric 2-form $G_{IJ}$ to the identity $\delta_{\alpha\beta}$ by using projectors $e^I_\alpha$ from one basis to the other. Then they verify the following relations \emph{at this point}: \bae{ G_{IJ}e^I_\alpha e^J_\beta=\delta_{\alpha \beta}, \quad \text{and} \quad \delta^{\alpha\beta}e^I_\alpha e^J_\beta=G^{IJ}. } For these variables to constitute a set of vielbeins, these relations should hold along the whole IR trajectory. We thus require this property to be conserved along the trajectory, ${\cal D}_N\left(G_{IJ}e^I_\alpha e^J_\beta\right)=0$. Because ${\cal D}_N G_{IJ}=0$ by definition, if we write ${\cal D}_N e^I_\alpha = \Omega_{\alpha}^{\phantom{\alpha}\beta} e^I_\beta$, we find that the matrix $\Omega$ must be anti-symmetric, parameterizing the local rotation of the orthonormal frame. Then, which anti-symmetric matrix to choose is a matter of convenience. For example, a popular choice is the decomposition in the so-called adiabatic/entropic basis~\cite{Gordon:2000hv,GrootNibbelink:2001qt}, defined by a Gram-Schmidt orthogonalisation process applied to the successive covariant derivatives of $\varphi^{I\prime}$, in which case the entries of the anti-symmetric matrix correspond to covariant turn rates of the background trajectory in field space. An even simpler choice in some sense is to use \emph{parallel-transported} vielbeins which verify ${\cal D}_N e^I_\alpha=0$, i.e. to choose $\Omega=0$. These or other choices of vielbeins may have their own advantages for the analytical understanding of the behaviour of UV fluctuations (see sections~\ref{sec:light} and \ref{sec:generic}). In the following, we make the choice $\Omega=0$ but we stress that this is merely for convenience, and that the resulting It\^o-covariant Langevin equations do not depend on this choice, as any set of vielbeins disappears altogether from the final result. More important is to note that, again, the covariant time derivative ${\cal D}_N$ is a stochastic derivative, with an underlying discretisation corresponding to the Stratonovich scheme, as defined in Eq.~\eqref{eq: Strato D_N}. Therefore the parallel transport of vielbeins must be realised in the following stochastic way: \bae{ e^{I}_\alpha{}^\prime=-\Gamma^I_{JK}e^J_\alpha\circ\varphi^{K\prime}. } We call these vielbeins \emph{stochastically-parallel-transported} vielbeins, as this equation defining them is nothing but a Langevin equation. The vielbeins thus really become new stochastic variables, i.e., the collection of stochastic processes reads $\calS^n=\left(\varphi^I,\varpi_I,e^I_\alpha\right)$: coordinates on the field space, covectors and vectors.
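To fix ideas, here is a minimal numerical sketch (in Python, for an illustrative two-field metric $G=\mathrm{diag}(1,e^{2\varphi^1})$; the trajectory, velocity and step sizes are arbitrary) of how such vielbeins can be constructed at a point from a Cholesky-type factorisation of $G^{IJ}$, and then parallel-transported step by step along a given path while preserving the orthonormality relations:
\begin{verbatim}
import numpy as np

def metric(phi):
    return np.diag([1.0, np.exp(2.0 * phi[0])])

def christoffel(phi):
    # nonzero symbols for ds^2 = dphi1^2 + exp(2 phi1) dphi2^2
    G = np.zeros((2, 2, 2))            # G[i, j, k] = Gamma^i_{jk}
    G[0, 1, 1] = -np.exp(2.0 * phi[0])
    G[1, 0, 1] = G[1, 1, 0] = 1.0
    return G

phi = np.array([0.3, -0.2])
e = np.linalg.cholesky(np.linalg.inv(metric(phi)))  # columns e[:, alpha]
print(np.allclose(e.T @ metric(phi) @ e, np.eye(2)))   # orthonormal: True

# discretised parallel transport, e' = -Gamma e o phi', along a toy path,
# with a Heun-type midpoint evaluation of the Christoffel symbols
v, dN = np.array([0.2, 0.3]), 1e-3
for _ in range(1000):
    step = v * dN
    e_pred = e - np.einsum('ijk,ja,k->ia', christoffel(phi), e, step)
    gamma_mid = christoffel(phi + 0.5 * step)
    e = e - np.einsum('ijk,ja,k->ia', gamma_mid, 0.5 * (e + e_pred), step)
    phi = phi + step
print(np.allclose(e.T @ metric(phi) @ e, np.eye(2), atol=1e-5))  # still True
\end{verbatim}
The midpoint evaluation of the Christoffel symbols mirrors the Stratonovich prescription used above for the stochastic case.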
Notice that with these definitions the indices $\alpha$ and $\beta$ can be raised and lowered with the metric $\delta_{\alpha\beta}$, i.e. the up or down position makes no difference. We define the projections of the UV modes along those vielbeins, $Q^\alpha_A$ and ${\tilde{P}}^\alpha_A$, as \bae{ Q^\alpha_A=G_{IJ}e^{I\alpha}Q^J_A, \qquad {\tilde{P}}^\alpha_A=e^{I\alpha}{\tilde{P}}_{IA}. \label{def-projected-perturbations} } These variables are scalars in field space and one deduces from Eq.~\eqref{eq: UV eom mode} and by virtue of our choice $\Omega=0$ that they verify the simple second-order differential equation: \bae{\label{eq: UV eom mode vielbein} Q^{\alpha}_A{}^{\prime\prime}+\left(3-\epsilon\right)Q^{\alpha}_A{}^\prime+\left(\frac{k^2}{a^2H^2}\delta^\alpha_\beta + \frac{M^2{}^\alpha{}_\beta}{H^2} \right) Q^\beta_A=0, \quad \text{with } M^2{}^\alpha{}_\beta =e_I^\alpha e^J_\beta M^2{}^I{}_J\,. } An advantage of the parallel-transported vielbeins is thus that the perturbations $Q^\alpha_A$ in such a basis are not kinetically coupled but only mix via the projection of the mass matrix, $M^2{}^\alpha{}_\beta$, which we will use for analytical estimates in Sec.~\ref{sec:Markovian}. This is of course equivalent to our statement in Sec.~\ref{subsec: classicalisation} that these projected fields are independent deep inside the Hubble radius, which makes the quantisation procedure easier. Independently of this, let us now reformulate our system of Langevin equations~\eqref{eq: Langevin independent noise} with the new set of stochastic variables augmented by the vielbeins: \bae{\label{eq: Langevin independent noise vielbein} \bce{ \displaystyle \varphi^{I\prime} & \displaystyle \hspace{-5pt} =\frac{1}{H}G^{IJ}\varpi_J+f_\sigma e^I_\alpha Q^\alpha_A\circ\xi^A, \\[5pt] \displaystyle \varpi_I{}^\prime & \displaystyle \hspace{-5pt} =-3\varpi_I-\frac{V_I}{H}+\Gamma_{IJ}^K\varpi_K\circ\varphi^{J\prime}+f_\sigma G_{IJ}e^J_\alpha{\tilde{P}}^\alpha_A\circ\xi^A, \\[5pt] e^{I}_\alpha{}^\prime & \displaystyle \hspace{-5pt} =-\Gamma^I_{JK}e^J_\alpha\circ\varphi^{K\prime}. } } Although it is not manifest, we know these equations respect general covariance, and as is proved in Appendix~\ref{appendix: stochastic calculus}, it is always possible to move from a given discretisation scheme to another one in the continuous description, by adding a noise-induced deterministic drift of the form $\frac{1}{2}\left(\partial g^{n}_A/\partial \calS^m\right)g^{m}_A$ (when going from Stratonovich to It\^o) to the equation of motion for the process $\calS^n$. Let us then find the equivalent It\^o description of equations~\eqref{eq: Langevin independent noise vielbein}, keeping in mind that the $Q^\alpha_A,{\tilde{P}}_{\alpha A}$ are not given functions of the IR stochastic variables, but rather solutions of differential equations that involve them.
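Before doing so, the scheme-conversion rule just quoted can be checked explicitly on a simple example. The following minimal sketch (a one-dimensional toy model in Python with noise amplitude $b(x)=x$; all numbers are illustrative) verifies that a midpoint-discretised Stratonovich equation and the corresponding It\^o equation with the noise-induced drift $\frac{1}{2}b^\prime b$ produce the same statistics:
\begin{verbatim}
# Check that dx = b(x) o dW (Stratonovich, midpoint/Heun scheme) is
# statistically equivalent to dx = (1/2) b'(x) b(x) dN + b(x) dW (Ito,
# Euler-Maruyama scheme), here with b(x) = x so (1/2) b' b = x / 2.
import numpy as np

rng = np.random.default_rng(2)
b = lambda x: x
n_paths, n_steps, dN = 200_000, 200, 5e-3

x_strat = np.ones(n_paths)
x_ito = np.ones(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dN), n_paths)
    x_pred = x_strat + b(x_strat) * dW                       # predictor
    x_strat = x_strat + 0.5 * (b(x_strat) + b(x_pred)) * dW  # midpoint noise
    x_ito = x_ito + 0.5 * x_ito * dN + b(x_ito) * dW         # drift-corrected

# both means should approach exp(N/2) ~ 1.6487 for N = 1
print(x_strat.mean(), x_ito.mean(), np.exp(0.5))
\end{verbatim}
We now apply this conversion to the full system of equations~\eqref{eq: Langevin independent noise vielbein}.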
Using Eq.~\eqref{eq: scheme conversion} with $\alpha=1/2$, we thus find: \bae{ \varphi^{I\prime}&=\frac{1}{H}G^{IJ}\varpi_J+f_\sigma e^I_\alpha Q^\alpha_A\xi^A \nonumber \\ &\qquad +\frac{f_\sigma^2}{2}\left(\pdif{e^I_\alpha}{e^J_\beta}Q^\alpha_A\right)\times\left(-\Gamma^J_{KL}e^K_\beta e^L_\gamma Q^\gamma_A\right), &&\hspace{-10pt}\text{[noise of $e^J_\beta$]} \label{Ito-first} \displaybreak[0] \\ \varpi_I{}^\prime&=-3\varpi_I-\frac{V_I}{H}+\frac{G^{J L}}{H}\Gamma_{IJ}^K\varpi_K \varpi_L +f_\sigma\left(G_{IJ}e^J_\alpha{\tilde{P}}^\alpha_A+\Gamma_{IJ}^K\varpi_K e^J_\alpha Q^\alpha_A \right)\xi^A \nonumber \\ &\qquad +\frac{f_\sigma^2}{2}\left(\pdif{\Gamma_{IJ}^K}{\varphi^L}\varpi_K e^J_\alpha Q^\alpha_A+\pdif{G_{IJ}}{\varphi^L}e^J_\alpha{\tilde{P}}^\alpha_A\right)\times\left(e^L_\beta Q^\beta_A\right) &&\hspace{-10pt}\text{[noise of $\varphi^L$]} \nonumber \\ &\qquad +\frac{f_\sigma^2}{2}\left(\Gamma_{IJ}^K\pdif{\varpi_K}{\varpi_L}e^J_\alpha Q^\alpha_A\right)\times\left(G_{LS}e^S_\beta{\tilde{P}}^\beta_A+\Gamma_{LS}^R\varpi_R e^S_\beta Q^\beta_A\right) &&\hspace{-10pt}\text{[noise of $\varpi_L$]} \nonumber \\ &\qquad +\frac{f_\sigma^2}{2} \pdif{e^J_\alpha}{e^K_\beta}\left(G_{IJ}{\tilde{P}}^\alpha_A+\Gamma_{IJ}^K\varpi_K Q^\alpha_A \right)\times\left(-\Gamma^K_{LM}e^L_\beta e^M_\gamma Q^\gamma_A\right), &&\hspace{-10pt}\text{[noise of $e^K_\beta$]} \displaybreak[0] \\ e^{I}_\alpha{}^\prime&=-\frac{G^{K L}}{H}\Gamma^I_{JK}e^J_\alpha \varpi_L - f_\sigma \Gamma^I_{JK} e^J_\alpha e^K_\beta Q^\beta_A \xi^A \nonumber \\ &\qquad +\frac{f_\sigma^2}{2}\left(-\pdif{\Gamma^I_{JK}}{\varphi^L}e^J_\alpha e^K_\beta Q^\beta_A\right)\times\left(e^L_\gamma Q^\gamma_A\right) &&\hspace{-10pt}\text{[noise of $\varphi^L$]} \nonumber \\ &\qquad +\frac{f_\sigma^2}{2}\left(-\Gamma^I_{JK}\pdif{(e^J_\alpha e^K_\beta)}{e^L_\gamma} Q^\beta_A\right)\times\left(-\Gamma^L_{RS}e^R_\gamma e^S_\delta Q^\delta_A\right) &&\hspace{-10pt}\text{[noise of $e^L_\gamma$]} } where the absence of a symbol $\circ_\alpha$ means that the underlying discretisation is the It\^o scheme. We recognise the appearance of the corrective terms $\propto A^{QQ IJ}$ and $A^{Q{\tilde{P}} I}{}_J$ that are needed to make It\^o-Langevin equations covariant. For instance, the last term in Eq.~\eqref{Ito-first}, coming from the Stratonovich to It\^o conversion, reads $-\frac12f_\sigma^2 \Gamma^{I}_{K L} e^K_\alpha Q^\alpha_A e^L_\gamma Q^\gamma_A=-\frac12 \Gamma^{I}_{K L} A^{QQ K L}$ where \bae{ A^{QQ K L}=f_\sigma^2 Q^K_A(N,k_\sigma(N)) Q^L_A(N,k_\sigma(N))=\frac{k_\sigma^\prime}{k_\sigma} {\cal P}^{QQKL}(N,k_\sigma(N)), } is intrinsically defined independently of the vielbeins (see Eqs.~\eqref{eq: noise correlation} and \eqref{eq: power spectra}).\footnote{Since we have taken into account the classicalisation of the perturbations on super-Hubble scales and considered the mode functions to be real, notice that here the cross-correlation $A^{Q{\tilde{P}} I}{}_{J}=f_\sigma^2 Q^I_A {\tilde{P}}_{JA}$ is automatically real.
In any case, the reality of the auto-correlation of the noises will be proven rigorously within the path-integral approach in Sec.~\ref{sec: effective hamiltonian action}.} With similar manipulations, one can rewrite the equations in It\^o with use of the covariant derivatives previously defined in Eqs.~(\ref{eq: ItoD for X})--(\ref{eq: ItoD for V}), as \bae{\label{eq: covariant Langevin Ito} \bce{ \displaystyle \mathfrak{D}_N\varphi^I & \displaystyle \hspace{-5pt} =\frac{\varpi^I}{H}+f_\sigma e^I_\alpha Q^\alpha_A \xi^{A}, \\ \displaystyle \mathfrak{D}_N\varpi_I & \displaystyle \hspace{-5pt} =-3\varpi_I-\frac{V_I}{H}+ f_\sigma e_I^\alpha{\tilde{P}}_{\alpha A}\xi^{A}, \\ \displaystyle \mathfrak{D}_N e^I_\alpha & \displaystyle \hspace{-5pt} =0. } } This self-consistency of the Langevin equations is quite remarkable given the degree of complexity of these stochastic differential equations. In particular, as announced, the vielbeins disappear completely from any physical quantity computed from the first two equations in Eq.~\eqref{eq: covariant Langevin Ito}, as they do not appear in the It\^o-covariant derivatives, and as it is only the auto-correlation of the effective noises $(\xi^{QI}=f_\sigma e^I_\alpha Q^\alpha_A \xi^{A}= f_\sigma Q^I_A \xi^{A},\,\xi^{{\tilde{P}}}_{I}= f_\sigma e_I^\alpha{\tilde{P}}_{\alpha A}\xi^{A}=f_\sigma {\tilde{P}}_{IA}\xi^{A})$ that matters in It\^o. Moreover, the covariance of Eqs.~\eqref{eq: covariant Langevin Ito} is manifest. Now that the question of ``stochastic anomalies" is solved, let us present a more rigorous derivation of the Langevin equations in phase space, with use of the coarse-grained effective Hamiltonian action in a path-integral approach. As already mentioned, this will enable one to correctly treat the IR-UV interactions at the time of crossing the coarse-graining scale, so that all noises are manifestly real and the UV modes dictating their properties obey the same EoM as in SPT. \section{Coarse-grained effective Hamiltonian action}\label{sec: effective hamiltonian action} In this section, we derive the covariant Langevin-type equations of multifield stochastic inflation in a midpoint discretisation scheme, based on functional methods borrowed from non-equilibrium quantum field theory. We will begin by reviewing some of these notions and explaining the roadmap and principles of the computation, before turning to the computation itself. In particular, we will have to deal with several difficulties. First, as usual in cosmology, we want to describe the fields' dynamics, or their equal-time ``in-in" correlation functions. This is different from QFT in Minkowski spacetime, where one is interested in ``in-out" scattering amplitudes. Thus, the time integration contour should follow a closed-time-path (CTP). Second, we want to compute an effective action for the long wavelength IR modes coupled to the bath of short wavelength UV modes, with interactions that should be specified on physical grounds. In particular, we will solely consider the IR/UV couplings coming from the flow of UV modes joining the IR sector, i.e. the couplings specific to the time-dependent split between the two sectors. Last but not least, we want to pay particular attention to the covariance of the theory. On the UV side, it is known that the perturbations $\delta\phi^I$ do not transform beyond linear order as genuine vectors under field redefinitions. In SPT, this subtlety is only relevant when computing the action at cubic order or higher in perturbations.
However, in a stochastic context, we have to take this into account even when considering the action at quadratic order in fluctuations, as the part of the action that is linear in $\delta\phi^I$ does not vanish, but rather plays a crucial role in determining the UV-IR transition. This also applies to momentum perturbations $\delta\pi_I$ that do not even transform as covectors at linear order, and for which quadratic corrections are also needed. On the IR side, based on the arguments developed in the previous section, we will also interpret our path integral as the continuous limit of discrete integrations with an explicit scheme corresponding to the midpoint (Stratonovich-like) discretisation, to ensure that the standard chain rule for changes of variables is applicable. \subsection{Roadmap and principles of the computation} In particle physics, one wishes to compute transition amplitudes between asymptotic ``in" states (in the far past) and ``out" states (in the far future), which are defined long before interactions are switched on, and long after they are switched off. The situation is crucially different in cosmology, where one is rather interested in vacuum expectation values of quantum operators. Actually, the notion of a future, asymptotic, ``out" state is more intricate in a cosmological context as particles keep interacting at least gravitationally in curved spacetimes; boundary conditions can only be imposed in the far past, when the wavelengths of relevant fields are much smaller than the Hubble radius. Expectation values in such time-dependent contexts can be deduced from the ``in-in" partition function (rather than the ``in-out" one for particle physics) in the presence of external currents, $Z\left[J_{\indXI}\right]$, which is the generating functional of all correlation functions defined as expectation values in the initial (vacuum) state. This ``in-in" generating functional, which can be thought of as summing over all possible ``out" states, can be computed using a closed-time-path contour of integration~\cite{Schwinger:1960qe,Keldysh:1964ud,Jordan:1986ug,Calzetta:1986ey,2009AdPhy..58..197K, Weinberg:2005vy,Calzetta:2008iqa} shown in Fig.~\ref{fig: CTP}, according to: \bae{ Z\left[J_{\indXI}\right]= \int_C\scrD \phi^{\indXI}\exp\left(iS\left[\phi^{\indXI}\right]+i\int\mathrm{d}^4xJ_{\indXI}\phi^{\indXI}\right)\,, \label{ZJ} } where the subscript $C$ specifies the contour of integration. Here, in accordance with first principles in the path-integral approach, $S$ denotes the classical action in the Hamiltonian form~\eqref{eq: Hamiltonian action}, and notations are similar to before: the index $X$ denotes position or momentum in phase space, i.e. the Hamiltonian action depends on the scalar fields $\phi^{QI}=\phi^I$ and on their (contravariant) momenta $\phi^{PI}=G^{IJ}(\phi^K)\pi_J$. Formally, it will prove useful for computations to lower and raise the index $X$ with the appropriate metric $\frac{1}{i}\sigma_{2XY}=\big(\begin{smallmatrix}0&-1\\1&0 \end{smallmatrix}\big)$ and its inverse. Note that contrary to ``in-out" partition functions in which $Z[0]$ must be computed as the sum over all vacuum bubbles to enforce a correct normalisation, for ``in-in" partition functions, it is trivial to see that $Z[0]=1$, as the norm of the initial vacuum state should be. Finally, we note that the condensed notation $\scrD\phi^{XI}$ that we will use throughout should really be understood as the canonical phase-space measure $\prod_{I,J} \scrD\phi^I\scrD\pi_J$.
\begin{figure} \centering \includegraphics[width=0.9\hsize]{CTP_N.pdf} \caption{ Closed-time-path $C$ of integration used in the ``in-in" formalism.} \label{fig: CTP} \end{figure} To avoid the formal path integral along the closed contour $C$, one can divide it into two path integrals over the forward ($+$) and backward ($-$) parts of the time contour $C=C^+\cup C^-$. This boils down to doubling the number of degrees of freedom in the path integral, with $\phi^{\indXI\pm}$ living respectively on $C^+$ and $C^-$. The $\pm$ fields and momenta should be considered independent, except at future infinity where the time path closes (in fact, at any time later than those of interest), and where we use the usual boundary conditions that $\pm$ fields coincide, i.e. $\phi^{I+}(+\infty)=\phi^{I-}(+\infty)$, but that momenta $\pi_I^\pm$ are left unconstrained. The time flow being reversed on the $C^-$ branch, the path integral to perform can be rewritten along a forward contour only as \bae{ Z\left[J_{\indXI}^\pm\right]=\int_{C^+}\scrD\phi^{\indXI\pm}\exp\left(iS\left[\phi^{\indXI+}\right]-iS\left[\phi^{\indXI-}\right]+i\int\mathrm{d}^4xJ_{\indXI}^+\phi^{\indXI+}-i\int\mathrm{d}^4xJ_{\indXI}^-\phi^{\indXI-}\right). } In practice, we will make use of the so-called Keldysh basis (leaving aside the various indices here): \bae{ \bpme{ \phi^\mathrm{cl} \\ \phi^\mathrm{q} }=\bpme{ 1/2 & 1/2 \\ 1 & -1 }\bpme{ \phi^+ \\ \phi^- } \quad \Leftrightarrow \quad \bpme{ \phi^+ \\ \phi^- }= \underbrace{ \bpme{ 1 & 1/2 \\ 1 & -1/2 }}_{\textstyle K}\bpme{ \phi^\mathrm{cl} \\ \phi^\mathrm{q} }, \label{classical-quantum-def} } where $\phi^\mathrm{cl}$ and $\phi^\mathrm{q}$ are respectively referred to as the classical and quantum components of the fields, and $K$ is the matrix of change of basis. The rationale for this denomination is that among the solutions of the saddle point equations for the Keldysh action $S\left[\phi^{\indXI+}\right]-S\left[\phi^{\indXI-}\right]$, there is always one with vanishing quantum component and classical component obeying the classical equation of motion $\delta S/\delta \phi^{\indXI}=0$. Although we are using natural units with $\hbar=1$, one can intuitively think of the quantum component as $\hbar$-suppressed, and indeed the stochastic equations we will derive, with classical equations of motion corrected by noises of quantum-mechanical origin, can be seen as semi-classical equations of motion. Introducing covariant notations, Latin indices $a$, $b$, $\cdots$ for the $\pm$ fields, and Fraktur indices $\mathfrak{a}$, $\mathfrak{b}$, $\cdots$ for the Keldysh label $\mathrm{cl}/\mathrm{q}$, the corresponding change of basis can be summarised as $\phi^a=K^a{}_\mathfrak{a} \phi^\mathfrak{a}$. To keep compact expressions, we use the convention of summation over repeated indices, and we will use well-chosen metrics to raise and lower them: the metric $\sigma_{3}{}_{ab}=\mathrm{diag}(1,-1)_{ab}$ in the $\pm$ basis; and the corresponding one $\sigma_1{}_{\mathfrak{a}\mathfrak{b}}=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)_{\mathfrak{a}\mathfrak{b}}$ in the Keldysh basis. Note that in the latter, the matching condition of the CTP branches at future infinity reads $\phi^{I\mathrm{q}}(+\infty)=0$, again with no constraint on momenta.
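As a simple consistency check of this structure, the following symbolic sketch (in Python with sympy, for an illustrative polynomial potential term standing in for the action density) verifies that the Keldysh combination $S[\phi^\mathrm{cl}+\phi^\mathrm{q}/2]-S[\phi^\mathrm{cl}-\phi^\mathrm{q}/2]$ contains only odd powers of the quantum component, so that $\phi^\mathrm{q}=0$ always solves its saddle-point equation:
\begin{verbatim}
import sympy as sp

phi_cl, phi_q, m, lam = sp.symbols('phi_cl phi_q m lambda')

def S(phi):                       # illustrative polynomial action density
    return -m**2 * phi**2 / 2 - lam * phi**4 / 24

S_K = sp.expand(S(phi_cl + phi_q / 2) - S(phi_cl - phi_q / 2))
poly = sp.Poly(S_K, phi_q)
print(S_K)
print([mono[0] for mono in poly.monoms()])   # -> [3, 1]: odd powers only
\end{verbatim}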
The generating functional in this basis reads (note that the Jacobian $|K|$ is unity) \bae{ Z\left[J_{\indXI}^\mathfrak{a}\right]&=\int\scrD\phi^{\indXI\mathfrak{a}}\exp\left(iS\left[\phi^{\indXI\mathfrak{a}}\right]+i\int\mathrm{d}^4xJ_{\indXI}^\mathrm{q}\phi^{\indXI\mathrm{cl}}+i\int\mathrm{d}^4xJ_{\indXI}^\mathrm{cl}\phi^{\indXI\mathrm{q}}\right), } where $S\left[\phi^{\indXI\mathfrak{a}}\right]=S\left[\phi^{\indXI\mathrm{cl}}+\phi^{\indXI\mathrm{q}}/2\right]-S\left[\phi^{\indXI\mathrm{cl}}-\phi^{\indXI\mathrm{q}}/2\right]$. \bfe{width=0.9\hsize}{CTP_UVIR.pdf}{ In the stochastic approach, the original path integral along the closed contour $C$ shown in Fig.~\ref{fig: CTP} is divided for each wavenumber $\mathbf{k}$ into a path integral over UV fields $\delta\phi^{\indXI}(N,\mathbf{k})$, and into one over IR fields $\varphi^{\indXI}(N,\mathbf{k})$. Hence, the corresponding UV path is closed at the transition time $N_\sigma(k)$ with the boundary condition $\delta\phi^{I\mathrm{q}}(N_\sigma(k),\mathbf{k})=0$~\eqref{eq: boundary conditions}, while the IR path starts there with the other boundary condition $\varphi^{I\mathrm{cl}}(N_\sigma(k),\mathbf{k})=0$, and is closed at future infinity with the boundary condition $\varphi^{I\mathrm{q}}(+\infty)=0$.}{fig: CTP_UVIR} Formally, the ``in-in" vacuum expectation value of any operator at time $t_\star$ can be computed by introducing this operator at this particular time on the closed-time-path of integration and with vanishing currents. Equivalently, $n$-point functions can be derived by calculating the $n^{\mathrm{th}}$ functional derivatives of the above generating functional $Z\left[J_{\indXI}^\mathfrak{a}\right]$ with respect to the external currents $J_{\indXI}^\mathfrak{a}$ and evaluating them at $J_{\indXI}^\mathfrak{a}=0$. As for the equations of motion verified by the expectation values of $\phi^{\indXI}$, they can be determined by extremising the quantum effective action, defined as the Legendre transformation of $W\left[J_{\indXI}^\mathfrak{a}\right]=-i \ln Z\left[J_{\indXI}^\mathfrak{a}\right]$. However, in the following we take another route: based on the physical distinction between quantum short-wavelength modes and classical long-wavelength ones, we want to derive a classical (albeit stochastic) effective theory for the latter only, and then compute expectation values within this new theory. In our setup, this amounts to deriving what can be called the ``coarse-grained effective Hamiltonian action" that governs the dynamics of the coarse-grained scalar fields in a Hamiltonian language (see e.g. Refs.~\cite{Calzetta:1999zr,Calzetta:2008iqa,Vacca:2012vt} for related concepts). Indeed, based on the scale separation provided by the physical Hubble radius $H^{-1}$, the original fields can be written in real space as IR+UV: $\phi^{\indXI\mathfrak{a}}(x)=\varphi^{\indXI\mathfrak{a}}(x)+\delta\phi^{\indXI\mathfrak{a}}(x)$, where we have in mind the Fourier cutoff $k_\sigma(N)$ discussed in Sec.~\ref{sec: heuristic}.\footnote{Note that the cutoff $k_\sigma(N)=\sigma a(N)H$ is not deterministic because the Hubble parameter depends on the stochastic realisations of the fields. Hence, the cutoff scale has the same status in the path-integral and in the heuristic approaches, i.e. it is understood to be defined self-consistently as mentioned at the end of Sec.~\ref{gauge-smoothing}.
The following discussion is independent of this subtlety.} The Fourier components of the fields and momenta thus verify: \bae{\label{eq: decompositon Fourier} \phi^{\indXI\mathfrak{a}}\left(N,\mathbf{k}\right)=\left\{ \begin{array}{ll} \delta \phi^{\indXI\mathfrak{a}}\left(N,\mathbf{k}\right), & \text{ if } N<N_\sigma(k), \\ \varphi^{\indXI\mathfrak{a}}\left(N,\mathbf{k}\right), & \text{ if } N>N_\sigma(k), \end{array} \right. } where $N_\sigma(k)$ represents the time at which the modes of modulus $k$ cross the UV/IR cutoff, and hence at which boundary conditions need to be specified for the fields. Because the UV modes $\delta \phi^\mathfrak{a}(N,\mathbf{k})$ stop being defined at the time $N_\sigma(k)$, their time path actually closes at this particular time, which enforces the boundary condition $\delta \phi^{I\mathrm{q}}\left(N_\sigma(k),\mathbf{k}\right)=0$, like for the full fields whose time path closes at future infinity. Conversely, the time path of IR modes $\varphi^\mathfrak{a}(N,\mathbf{k})$ begins at that time, with vanishing initial conditions for the classical component of the fields, $\varphi^{I\mathrm{cl}}\left(N_\sigma(k),\mathbf{k}\right)=0$ (see Fig.~\ref{fig: CTP_UVIR}).\footnote{As we will see, the stochastic equations derived heuristically in section~\ref{sec: heuristic} actually concern the classical component of the fields, so that the boundary condition $\varphi^{I\mathrm{cl}}\left(N_\sigma(k),\mathbf{k}\right)=0$ agrees with the fact that in the stochastic approach, IR fields with wavevectors $\mathbf{k}$ do not exist before the time $N_\sigma(k)$.} Note that again, neither IR nor UV momenta are constrained at the time $N_\sigma(k)$. Because these conditions will be crucial to specify interactions between IR and UV modes, we rewrite them together: \bae{\label{eq: boundary conditions} \phi^{I\mathfrak{a}}\left(N_\sigma(k),\mathbf{k}\right)=\left\{ \begin{array}{ll} \delta \phi^{I\mathrm{cl}}\left(N_\sigma(k),\mathbf{k}\right), & \text{ if } \mathfrak{a}=\mathrm{cl}, \\ \varphi^{I\mathrm{q}}\left(N_\sigma(k),\mathbf{k}\right), & \text{ if } \mathfrak{a}=\mathrm{q}, \end{array} \right. } assigning the Fourier component at the transition time, either fully to the UV part for the classical component, or fully to the IR part for the quantum component. It will become clear when discussing IR-UV interactions in the discretised version of the path integral that, indeed, no boundary condition is required for the momenta, because in the path integral they can be evaluated at intermediate time steps and we can avoid specifying their values at the exact time $N_\sigma(k)$. It was actually shown in the context of a single test scalar field in de Sitter that these boundary conditions at $N_\sigma(k)$ enable the to-be-found stochastic description to correctly reproduce the propagators of the corresponding free QFT~\cite{Tokuda:2017fdh,Tokuda:2018eqs}. Now that this decomposition into IR and UV fields is fully specified, one may rewrite the generating functional (at vanishing currents for simplicity) as \bae{ Z&=\int\scrD\varphi^{\indXI\mathfrak{a}}\exp\left(i\SHam_\mathrm{eff}\left[\varphi^{\indXI\mathfrak{a}}\right]\right), \qquad \text{with} \label{Z} \\ \exp\left(i\SHam_\mathrm{eff}\left[\varphi^{\indXI\mathfrak{a}}\right]\right)&=\int\scrD\delta\phi^{\indXI\mathfrak{a}}\exp\left(iS\left[\varphi^{\indXI\mathfrak{a}}+\delta\phi^{\indXI\mathfrak{a}}\right]\right), \label{Seff} } where the path integral over the UV modes has to be performed explicitly.
Then, we will see that upon the introduction of auxiliary stochastic variables describing possible deviations from the classical EoM, one need not perform the path integral over the IR fields, but can simply observe which IR trajectories have non-zero weights in the remaining path integral, and hence obtain the desired Langevin equations. Before that, however, let us note that Eq.~\eqref{Seff} provides only a ``naive" expression, and that one has to be careful about the fact that field perturbations themselves do not transform covariantly under field redefinitions beyond linear order~\cite{Vilkovisky:1984st,Gong:2011uw}. To ensure that the resulting effective theory respects general covariance, the path integral should be expressed in terms of covariant objects, and we now turn to the identification of suitable ones in our Hamiltonian formulation. \subsection{Covariant perturbations in the Hamiltonian language} \label{subsec: covariant perturbations} It is well known that field perturbations are not covariant objects beyond linear order. This subtlety is usually irrelevant if one is only interested in the Gaussian properties of the inflationary fluctuations, because SPT is defined around homogeneous fields ${\phi_\bg}^{X I}(N)$ that solve the classical equations of motion, $\frac{\delta S}{\delta \phi^{X I}}\big\rvert_{{\phi_\bg}^{\indXI}}=0$, and any non-covariant contribution coming from the linear action in terms of $\delta \phi^{\indXI}$ thus vanishes. However, the aim of stochastic inflation is precisely to take into account the difference between the effective equations of motion verified by the coarse-grained scalar fields and the classical equations of motion verified by ${\phi_\bg}^{X I}(N)$ in SPT. In this context, the part of the action that is linear in $\delta \phi^{\indXI}$ not only does not vanish but actually plays a crucial role. Thus, in stochastic inflation, perturbations should be covariant objects at least up to quadratic order, even to describe only Gaussian statistics and contrary to SPT. In anticipation of our later setup, we define the perturbations at some spacetime point $x$ by the displacements of the full inflaton fields $\phi^I(x)$ and their conjugate momenta $\pi_I(x)$ from their coarse-grained values $\varphi^I(x)$ and $\varpi_I(x)$ (the homogeneous background ${\phi_\bg}$ and ${\pi_\bg}$ would instead be used as reference fields in SPT): \bae{\label{eq: def of dphi} \delta\phi^I(x)=\phi^I(x)-\varphi^I(x), \qquad \delta\pi_I(x)=\pi_I(x)-\varpi_I(x). } These finite displacements~(\ref{eq: def of dphi}) do not transform covariantly under field redefinitions beyond the linear approximation, and therefore one needs to relate them to contravariant/covariant infinitesimal perturbations. The expansion of the $\delta\phi$'s has already been discussed in Ref.~\cite{Gong:2011uw}. The two neighbouring points in field space $\phi(x)$ and $\varphi(x)$ can be connected by a unique field-space geodesic, which we parameterise by the affine parameter $\lambda$ such that $\phi(\lambda=0)=\varphi(x)$ and $\phi(\lambda=1)=\phi(x)$ (see Fig.~\ref{fig: transport method}). We then define the Vilkovisky-DeWitt-type variable $Q^I$ by the ``initial velocity" \bae{ \left.\dif{\phi^I}{\lambda}\right|_{\lambda=0}=Q^I. } This geometrical definition ensures that $Q^I$ lies in the tangent space of the point $\varphi(x)$, i.e. that it behaves as desired as a vector under field redefinitions.
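As an illustration of this geodesic (exponential-map) construction, the following minimal numerical sketch (in Python, on the illustrative hyperbolic-like field space $\mathrm{d}s^2=\mathrm{d}\phi_1^2+e^{2\phi_1}\mathrm{d}\phi_2^2$, with arbitrary small displacements) shoots a geodesic with initial velocity $Q^I$ and compares the resulting finite displacement $\delta\phi^I$ with the second-order expansion derived just below:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(lam, y):
    p1, p2, v1, v2 = y
    # Gamma^1_{22} = -exp(2 p1), Gamma^2_{12} = Gamma^2_{21} = 1
    return [v1, v2, np.exp(2.0 * p1) * v2**2, -2.0 * v1 * v2]

phi = np.array([0.3, -0.2])
Q = np.array([0.05, 0.02])                  # "initial velocity" at lambda = 0

sol = solve_ivp(geodesic_rhs, (0.0, 1.0), [*phi, *Q], rtol=1e-10, atol=1e-12)
dphi_exact = sol.y[:2, -1] - phi            # finite displacement delta phi^I

# expansion: delta phi^I = Q^I - (1/2) Gamma^I_{JK} Q^J Q^K + ...
dphi_2nd = Q.copy()
dphi_2nd[0] -= 0.5 * (-np.exp(2.0 * phi[0])) * Q[1]**2
dphi_2nd[1] -= 0.5 * 2.0 * Q[0] * Q[1]
print(dphi_exact, dphi_2nd)                 # agree up to O(Q^3)
\end{verbatim}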
Using the fact that, by definition, $\phi^I(\lambda)$ verifies the geodesic equation \bae{ {\cal D}_\lambda^2\phi^I(\lambda)=\dif{^2\phi^I}{\lambda^2}+\Gamma^I_{JK}\dif{\phi^J}{\lambda}\dif{\phi^K}{\lambda}=0\,, } where ${\cal D}_\lambda$ represents the covariant derivative with respect to the affine parameter $\lambda$, one can express $\phi^I(\lambda)$ in terms of $Q^I$ by using the following expansion around $\lambda=0$: \bae{ \phi^I(\lambda)&=\phi^I(\lambda=0)+\left.\dif{\phi^I}{\lambda}\right|_{\lambda=0}\lambda+\frac{1}{2}\left.\dif{^2\phi^I}{\lambda^2}\right|_{\lambda=0}\lambda^2 +\cdots \nonumber \\ &=\varphi^I+Q^I\lambda-\frac{1}{2}\Gamma^I_{JK}Q^J Q^K\lambda^2+\cdots, } thus obtaining the field perturbations \bae{\label{eq: dphi expansion} \delta\phi^I=\phi^I(\lambda=1)-\phi^I(\lambda=0)=Q^I-\frac{1}{2}\Gamma^I_{JK}Q^J Q^K+\cdots. } The non-tensorial nature of the Christoffel symbols explicitly shows the non-covariance of the finite perturbations $\delta\phi^I$ beyond the linear approximation. The displacement $\delta\pi_I$ can also be expressed in terms of a truly covariant tensor in a similar way. For that, let us consider a family $\pi_I(\lambda)$ of covectors at each point along the geodesic $\phi^I(\lambda)$, and such that $\pi_I(\lambda=0)=\varpi_I(x)$ and $\pi_I(\lambda=1)=\pi_I(x)$. It is then natural to define a second Vilkovisky-DeWitt-type variable ${\tilde{P}}_I$ by the ``initial momentum velocity" along the geodesic as \bae{\label{eq: covP def} {\tilde{P}}_I={\cal D}_\lambda\pi_I|_{\lambda=0}=\left( \dif{\pi_I}{\lambda}-\Gamma_{IJ}^K\pi_K\dif{\phi^J}{\lambda} \right) |_{\lambda=0} =P_I-\Gamma^K_{IJ}\varpi_K Q^J, } where, on the right-hand side, the naive $P_I$ defined by $\pi_I(\lambda)-\pi_I(\lambda=0)=P_I \lambda+{\cal O}(\lambda^2)$ does not even transform covariantly at linear order, contrary to ${\tilde{P}}_I$, whose intrinsic geometrical definition ensures that it transforms as a covector at all orders in perturbation theory. If one now imposes that the covectors ${\cal D}_\lambda\pi_I$ are parallel-transported along the geodesic: \bae{ 0={\cal D}_\lambda^2\pi_I=\dif{^2\pi_I}{\lambda^2}-2\Gamma^K_{IJ}\dif{\phi^J}{\lambda}\dif{\pi_K}{\lambda}-(\Gamma^S_{IJ,K}-\Gamma^S_{IR}\Gamma^R_{JK}-\Gamma^R_{IJ}\Gamma^S_{\indRK})\pi_S\dif{\phi^J}{\lambda}\dif{\phi^K}{\lambda}, } it is possible to express $\delta\pi_I$ in terms of ${\tilde{P}}_I$. However, we note that imposing ${\cal D}_\lambda^2\pi_I=0$ is \emph{one} simple possible choice, but that others are possible, corresponding to a freedom in the identification of a suitable covariant momentum perturbation. We refer the interested reader to Appendix~\ref{appendix: covP} for more details on this point, to which we will come back, and here just quote the relation between $\delta\pi_I$ and ${\tilde{P}}_I$ for this particular choice: \bae{\label{eq: covariant momentum UV perturbation} \delta\pi_I={\tilde{P}}_I+\Gamma^K_{IJ}\varpi_K Q^J +\Gamma^K_{IJ}Q^J {\tilde{P}}_K+\frac{1}{2}(\Gamma^S_{IJ,K}-\Gamma^S_{IR}\Gamma^R_{JK} +\Gamma^R_{IJ}\Gamma^S_{\indRK})\varpi_S Q^J Q^K+\cdots.
} Equipped with these geometrically defined objects, we are now ready to compute the covariant effective action up to second order in the UV fields and momenta as \bae{\label{eq: covariant effective action} \exp\left(i\SHam_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}}]\right)=\int\scrD Q^{\covXI\mathfrak{a} }\exp\left(i\SHam^{(0)}[\varphi^{\indXI\mathfrak{a}}]+i\SHam^{(1)}[\varphi^{\indXI\mathfrak{a}},Q^{\covXI\mathfrak{a}}]+i\SHam^{(2)}[\varphi^{\indXI\mathfrak{a}},Q^{\covXI\mathfrak{a}}] \right), } with $Q^{\covXI\mathfrak{a}}=(Q^{I\mathfrak{a}}, {\tilde{P}}^{I\mathfrak{a}})$ used as a short-hand notation, and like for Eq.~\eqref{ZJ}, $\scrD Q^{\covXI\mathfrak{a}}$ truly refers to the canonical phase-space measure $\prod_{I,\mathfrak{a},J,\mathfrak{b}}\scrD Q^{I\mathfrak{a}} \scrD {\tilde{P}}_{J}^{\mathfrak{b}}$. \subsection{Covariant CTP action and IR-UV interactions} To investigate the effect of linear UV perturbations on the IR dynamics, we must first covariantly expand the action up to second order in the perturbations. Starting from the Hamiltonian action~\eqref{eq: Hamiltonian action}, and expanding it up to second order in the fields' covariant UV perturbations $Q^{\covXI}$ as well as metric UV perturbations $\calN_\UV$ and $\psi$,\footnote{Let us stress again that the stochastic approach is perturbative in the UV parts of the fields, and we only treat them up to quadratic order in this work (i.e. at the level of linearised perturbation theory). However no expansion is used at this stage for the IR parts of the fields, for which all nonlinearities are kept, at leading order in the gradient expansion.} one finds \bae{ \SHam^{(0)}&=\int\mathrm{d}^4x\,a^3\left[\varpi_I \varphi^{I\prime}-\frac{1}{H}\left(\frac{1}{2}G^{IJ}\varpi_I\varpi_J+V+3M_\text{Pl}^2H^2\right)\right], \label{eq: S0} \\ \SHam^{(1)}&=\int\mathrm{d}^4x\,a^3\left[{\tilde{P}}_I \left(\varphi^{I\prime}-\frac{\varpi^I}{H} \right) -Q^I\left({\cal D}_N\varpi_I+3\varpi_I+\frac{V_I}{H}\right) -\calN_\UV\left(\frac{1}{2}\varpi_I\varpi^I+V-3M_\text{Pl}^2H^2\right)\right], \label{eq: cov S1} \\ \SHam^{(2)}&=\int\mathrm{d}^4x\,a^3\left[-3M_\text{Pl}^2H^3\calN_\UV^2 -\calN_\UV\left(\varpi_I{\tilde{P}}^I+V_I Q^I+2M_\text{Pl}^2H^2\frac{\partial_i^2}{a^2}\psi\right) +\varpi_I Q^I\frac{\partial_i^2}{a^2}\psi \right.\nonumber \\ &\qquad\left.-\frac{1}{H}\left(\frac{1}{2}{\tilde{P}}_I{\tilde{P}}^I-\frac{1}{2}Q_I\frac{\partial_i^2}{a^2}Q^I+\frac{1}{2}V_{;IJ}Q^I Q^J -\frac{1}{2}R_I{}^{JK}{}_L\varpi_J\varpi_K Q^I Q^L\right)+{\tilde{P}}_I{\cal D}_N Q^I\right. \nonumber \\ &\qquad\left. + \left(\varphi^{I\prime}-\frac{\varpi^I}{H} \right) \frac{1}{2}R_{IJK}{}^L \varpi_L Q^J Q^K \right] \,.\label{eq: cov S2} } In usual perturbation theory, where the coarse-grained fields and momenta are replaced by their homogeneous values that are solutions of the classical equations of motion dictated by $\SHam^{(0)}$, the linear action $\SHam^{(1)}$ vanishes. It is thus sufficient to use covariant variables at linear order only, and one need not bother about quadratic terms in $(Q^I, {\tilde{P}}_I)$ in Eqs.~\eqref{eq: dphi expansion} and \eqref{eq: covariant momentum UV perturbation}. Here, on the contrary, $\SHam^{(1)}$ does not vanish because the time derivatives of the coarse-grained fields and momenta differ slightly from their classical values (by a quantity one can interpret as a classical random noise, as we shall soon find).
Relatedly, one can check that the manifest covariance of the result~\eqref{eq: S0}--\eqref{eq: cov S2} would not have held if one had not expanded $\delta\phi^{\indXI}$ to quadratic order in covariant perturbations. Let us now examine its three contributions. \paragraph*{Pure IR sector} $\SHam^{(0)}$ governs the propagation and self-interactions of the IR fields, without consideration of the UV modes at all. More generally, it should be interpreted as dictating the deterministic drift for the IR fields. Notice that in general, for generic potential $V$ and field-space metric $G_{IJ}$, this classical drift action can be non-linear in the IR modes. Thus, we will not write out explicitly the rather complex expression for $\SHam^{(0)}\left[\varphi^{\indXI\mathfrak{a}}\right]=\SHam^{(0)}[\varphi^{\indXI\mathrm{cl}}+\varphi^{\indXI\mathrm{q}}/2]-\SHam^{(0)}[\varphi^{\indXI\mathrm{cl}}-\varphi^{\indXI\mathrm{q}}/2]$, given that we will only be interested in its variation evaluated at classical IR solutions, which simply reads \bae{\label{eq: var-S0} \left.\var{\SHam^{(0)}\left[\varphi^{\indXI\mathfrak{a}}\right]}{\varphi^{\indYJ\mathrm{q}}}\right|_{\varphi^{\indYJ\mathrm{q}}=0}= \left.\var{\SHam^{(0)}\left[\varphi^{\indXI}\right]}{\varphi^{\indYJ}}\right|_{\varphi^{\indXI}=\varphi^{\indXI\mathrm{cl}}}. } In a related manner, one can see that because of its structure, $\SHam^{(0)}\left[\varphi^{\indXI\mathfrak{a}}\right]$ only contains odd powers of $\varphi^{X I \mathrm{q}}$, the quantum component of the fields. Thus, its expansion in quantum fields is trivial up to quadratic order, and one can actually write: \bae{ \label{eq: S0 linear in varphi-q} \SHam^{(0)}\left[\varphi^{\indXI\mathfrak{a}}\right]= \left. \int\mathrm{d}^4x \var{\SHam^{(0)}\left[\varphi^{\indXI}\right]}{\varphi^{\indYJ}(x)}\right|_{\varphi^{\indXI}=\varphi^{\indXI\mathrm{cl}}} \varphi^{\indYJ\mathrm{q}}(x) +\mathcal{O}\left(\varphi^\mathrm{q}\right)^3\,. } \paragraph*{IR-UV interactions} As already stated, we focus in this derivation on the IR-UV interactions stemming from the continuous flow of quantum UV modes to the open system of classical IR ones, which results from the time-dependent Fourier cutoff $k_\sigma(N)$. Interestingly, these interactions are encoded in $\SHam^{(1)}$ in the time derivatives of the IR fields, as seen in the heuristic approach (Sec.~\ref{sec: heuristic}). We thus focus on those terms and neglect the others in $\SHam^{(1)}$, which amounts to considering the time derivatives as acting like $\delta(N-N_\sigma(k))$ in Fourier space, as we shall now see. First, however, because we will write the discretised version of the linear action $\SHam^{(1)}$, it will prove useful to take a step back. The form of $\SHam^{(1)}$ that we displayed in Eq.~\eqref{eq: cov S1} is physically appealing because it makes the background EoM for the IR fields appear explicitly, multiplied by the UV perturbations, in a manifestly covariant form. However, to arrive at it, we first had to integrate by parts the term $\int \mathrm{d}^4 x a^3 Q^{I \prime} \varpi_I=-\int \mathrm{d}^4 x a^3 Q^I (\varpi_I^\prime + 3 \varpi_I)$ and then to combine it with the change from $P_I$ to ${\tilde{P}}_I=P_I-\Gamma^K_{IJ}\varpi_K Q^J$ to form the covariant time derivative ${\cal D}_N \varpi_I$. Thus, going back to this previous version, the relevant terms in $\SHam^{(1)}$ can be more simply expressed (i.e.
with fewer time derivatives) as $\SHam^{(1)} \supset \int \mathrm{d}^4x\,a^3 \left({\tilde{P}}_I \varphi^{I\prime}-Q^I({\cal D}_N\varpi_I + 3 \varpi_I)\right)=\int \mathrm{d}^4x \,a^3\left(P_I\varphi^{I\prime}+Q^{I\prime}\varpi_I\right)$, which we will now take as our starting point to compute IR-UV interactions in the Keldysh basis. Note that each of these two terms is not covariant when taken separately, but that their sum is indeed covariant. Understanding why these terms with time derivatives are peculiar is easier in Fourier space, and requires the investigation of the action in terms of Keldysh fields: \bae{\label{eq: S1 Keldysh} &\SHam^{(1)}\left[\varphi^{\indXI\mathfrak{a}},Q^{\indXI\mathfrak{a}} \right] \nonumber \\ &\quad\supset\int \mathrm{d} N \frac{\dd^3 \mathbf{k}}{(2\pi)^3}\,a^3(N) \left[P_I^\mathrm{cl}\left(N,\mathbf{k}\right)\varphi^{I\mathrm{q}\prime}\left(N,\mathbf{k}\right)+Q^{I\mathrm{cl}\prime}\left(N,\mathbf{k}\right)\varpi_I^{\mathrm{q}}\left(N,\mathbf{k}\right)\right] + \left( \mathrm{cl} \leftrightarrow \mathrm{q} \right). } Now, remember that the same Fourier component of UV and IR modes can never be defined at the same time, except at the transition time $N_\sigma(k)$. Of course it means that the support of these terms is of measure zero, and this is actually why terms without derivatives were neglected in $\SHam^{(1)}$.\footnote{Strictly speaking this discussion applies only to terms in $\SHam^{(1)}$ that are bilinear in IR and UV fields. Terms that are higher-order in IR quantities contain nonlinear IR-UV mode mixings, but the stochastic formalism does not aim at taking into account these couplings that are also present in Minkowski spacetime.} However the terms with derivatives that we kept play a special role. Let us look, for example, at the first term, going back to the discrete description of the phase-space path integral for the mode $\mathbf{k}$ around the time $N_\sigma(k)$: \bae{ &\int_{N_\sigma(k)-\Delta N}^{N_\sigma(k)+\Delta N} \mathrm{d} N\,a^3(N) P_I^\mathrm{cl}\left(N,\mathbf{k}\right)\varphi^{I\mathrm{q}\prime}\left(N,\mathbf{k}\right) \nonumber \\ &\qquad=\Delta N\left[a^3(N_\sigma(k)+\widetilde{\Delta N})P_I^\mathrm{cl}\left(N_\sigma(k)+\widetilde{\Delta N},\mathbf{k}\right)\frac{\varphi^{I\mathrm{q}}\left(N_\sigma(k)+\Delta N,\mathbf{k}\right)-\varphi^{I\mathrm{q}}\left(N_\sigma(k),\mathbf{k}\right)}{\Delta N} \right. \nonumber \\ &\qquad\qquad\left. + a^3(N_\sigma(k)-\Delta N+\widetilde{\Delta N})P_I^\mathrm{cl}\left(N_\sigma(k)-\Delta N+\widetilde{\Delta N},\mathbf{k}\right)\frac{\varphi^{I\mathrm{q}}\left(N_\sigma(k),\mathbf{k}\right)-\varphi^{I\mathrm{q}}\left(N_\sigma(k)-\Delta N,\mathbf{k}\right)}{\Delta N} \right] \nonumber \\ &\qquad =a^3(N_\sigma(k)-\Delta N+\widetilde{\Delta N})\, P_I^\mathrm{cl}\left(N_\sigma(k)-\Delta N+\widetilde{\Delta N},\mathbf{k}\right)\varphi^{I\mathrm{q}}\left(N_\sigma(k),\mathbf{k}\right), } with $0<\widetilde{\Delta N}<\Delta N$ and where we used the conditions~\eqref{eq: decompositon Fourier} to get the second equality. Note that while the fields' values $\varphi^I$ and $Q^I$ are evaluated on the discrete time grid at $N_j=j \Delta N$, the momenta $\varpi_I$ and $P_I$ can be evaluated at intermediate time steps $\widetilde{N_j}=j \Delta N + \widetilde{\Delta N}$, which enables us to compute IR-UV interactions without specifying boundary conditions at the exact time $N_\sigma(k)$ for them.
In the same way, we find that the second term contributes $-a^3(N_\sigma(k)+\widetilde{\Delta N})\,Q^{I\mathrm{cl}} \left(N_\sigma(k),\mathbf{k}\right)\varpi_{I}^{\mathrm{q}}\left(N_\sigma(k)+\widetilde{\Delta N},\mathbf{k}\right)$. As for the third and fourth terms from the $(\mathrm{cl} \leftrightarrow \mathrm{q})$ permutation in Eq.~\eqref{eq: S1 Keldysh}, they vanish by virtue of the boundary conditions~\eqref{eq: boundary conditions}. Thus, the interaction action that we consider can be rewritten in the continuous limit as \bae{\label{eq: Sint} \SHam^{(\mathrm{int})}\left[ \varphi^{\indXI\mathfrak{a}},Q^{\indXI\mathfrak{a}}\right]&=\int\mathrm{d}^4x\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\delta(N-N_\sigma(k))\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}}a^3\left[P_I^\mathrm{cl}(x){\varphi}^{I\mathrm{q}}(N,\mathbf{k})-Q^{I\mathrm{cl}}(x) \varpi_I^\mathrm{q}(N,\mathbf{k})\right] \nonumber \\ &=\int\mathrm{d}^4x\,a^3Q^{\indXI\mathfrak{a}}(x)\varphi^\prime_{\indXI\mathfrak{a}}(x), } where we introduced the pseudo ``time derivatives" $\varphi^\prime_{\indXI\mathfrak{a}}(x)$ with lower indices that are defined as \bae{\label{eq: time-derivatives} \bce{ \displaystyle \varphi^\prime_{QI\mathfrak{a}}(x):=-\delta^\mathrm{cl}_\mathfrak{a}\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\delta(N-N_\sigma(N))\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}}\varpi_I{}^\mathrm{q}(N,\mathbf{k}), \\[10pt] \displaystyle \varphi^\prime_{P I \mathfrak{a}}(x):=\delta^\mathrm{cl}_\mathfrak{a}\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\delta(N-N_\sigma(k))\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}}\varphi_I{}^{\mathrm{q}}(N,\mathbf{k}), } } thus restricting the couplings to be of the form $\mathrm{UV}^\mathrm{cl}\times\mathrm{IR}^\mathrm{q}$. Those interaction terms will be the ones responsible for the modification of the classical equations of motion for the IR fields, once the UV perturbations are integrated out. Let us now investigate the dynamics of the latter. \paragraph*{UV dynamics} As a first comment, note that the last line of $\SHam^{(2)}$ seemingly goes beyond the approximation of treating UV modes up to quadratic order in the action, as it consists of a quadratic term in the UV perturbations, multiplied by the ``background-like'' equation of motion for the IR fields, i.e. by a quantity of the order of the to-be-found noise. The careful reader will also have noticed that such a term is exactly of the kind that can ambiguously appear depending on the exact definition of a covariant momentum UV perturbation ${\tilde{P}}_I$, as we explain in Appendix~\ref{appendix: covP}. Because this arbitrariness cannot affect the physics, we are free to make the choice ($\kappa= 1/2$) such that this term proportional to the Riemann tensor of the field space vanishes. This procedure also fixes the form of $\SHam^{(3)}$ as is shown in Appendix~\ref{appendix: covP}, thus we conclude that this term does not affect the Gaussian properties of the theory and we will discard it in what follows, leaving for future work the investigation of this subtlety and of potentially interesting non-Gaussian features related to the geometry of the field space. Extremising the action~\eqref{eq: S0}--\eqref{eq: cov S2} with respect to the non-dynamical fields $\calN_\UV$ and $\psi$ that appear without any time-derivative, one recovers the local Friedmann equation~\eqref{laspeIR} as well as the expressions~\eqref{lapseUV} and \eqref{eq: perturbation energy const} for $\calN_\UV$ and $\psi$ in terms of $Q^I$ and ${\tilde{P}}_I$.
Plugging them back into the second-order action, one can write the latter in the condensed form \bae{\label{eq: S2 with indices} \SHam^{(2)}=\frac{1}{2}\int\mathrm{d}^4x\mathrm{d}^4x^\prime Q^{\indXI}(x)\Lambda_{X\indYIJ}(x,x^\prime)Q^{\indYJ}(x^\prime), } where we used the non-covariant variables $Q^{\indXI}$ that naturally appear in $\SHam^{(\mathrm{int})}$ instead of the covariant perturbations $Q^{\covXI}$. As a result, some of the following intermediate steps will not be manifestly covariant. However, one is perfectly allowed to use such non-covariant objects to perform calculations, and then to switch back to covariant ones using the relation~(\ref{eq: covP def}) ${\tilde{P}}_I=P_I-\Gamma_{IJ}^K\varpi_K Q^J$. Instead of quoting the kernel $\Lambda_{X\indYIJ}$ corresponding to the non-covariant UV modes that only appear in intermediate steps, we rather show its covariant counterpart $\Lambda_{\tilde{X}\covYIJ}$, which is given by \bae{ \bpme{ \Lambda_{QQIJ} & \Lambda_{Q{\tilde{P}}IJ} \\ \Lambda_{{\tilde{P}} QIJ} & \Lambda_{{\tilde{P}}\covPIJ} }=\delta^{(4)}(x-x^\prime)a^3\bpme{ \frac{1}{H}\left(G_{IJ}\frac{\partial_i^2}{a^2}-M^2_{QQIJ}\right) & -G_{IJ}({\cal D}_N+3 )-M^2_{Q{\tilde{P}}IJ} \\ G_{IJ}{\cal D}_N-M^2_{{\tilde{P}} QIJ} & -G_{IJ}/H }, } where the differential operators act on the $x^\prime$ coordinates, and with $M^2_{QQIJ}$ and $M^2_{Q{\tilde{P}}IJ}=M^2_{{\tilde{P}} QIJ}$ already given in Eqs.~\eqref{M2QQ}--\eqref{M2QP}. Extremising $\SHam^{(2)}$ with respect to the covariant UV perturbations yields the following classical EoM for the UV fields: \bae{\label{eq: UV EoM} \int\mathrm{d}^4x^\prime\Lambda_{\tilde{X}\covYIJ}(x,x^\prime)Q^{\covYJ}(x^\prime)=0, \quad \Leftrightarrow \quad \bce{ \displaystyle {\cal D}_N Q^I=\frac{{\tilde{P}}^I}{H}+M^2_{{\tilde{P}} Q}{}^I{}_J Q^J, \\ \displaystyle {\cal D}_N{\tilde{P}}_I=-3{\tilde{P}}_I+\frac{1}{H}\left(G_{IJ}\frac{\partial_i^2}{a^2}-M^2_{QQIJ}\right)Q^J-M^2_{Q{\tilde{P}}I}{}^J{\tilde{P}}_J\,, } } which are nothing other than equations~(\ref{eq: EQ}) and (\ref{eq: EP}) $E^{QI}=0=E^{{\tilde{P}}}_I$ found in the heuristic approach (Sec.~\ref{sec: heuristic}). Strikingly, as we shall see in the next subsection, it is sufficient to know the (inverse of the) kernel operator $\Lambda$ to compute the corrections to the IR dynamics, due to their interactions with UV fluctuations as dictated by $\SHam^{(\mathrm{int})}$. Thus, the UV modes dictating these corrections can be understood as evolving according to $\SHam^{(2)}$ only, hence they verify the EoM~\eqref{eq: UV EoM}, similar to that of SPT but with background fields replaced by their infrared counterparts. This is an interesting improvement of the path-integral approach compared to the heuristic one, where we had to \emph{assume} that the dynamics of the UV modes was decoupled from that of the IR ones, see the paragraph before Eq.~\eqref{eq: noises} and the one after Eq.~\eqref{M2QP}.
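As a simple numerical illustration, the following sketch (in Python, for a single massless test field in de Sitter with $H=1$, mass and metric-perturbation terms switched off, so that the system reduces to $Q^\prime={\tilde{P}}$ and ${\tilde{P}}^\prime=-3{\tilde{P}}-(k^2/a^2)Q$) integrates these first-order UV equations from Bunch-Davies initial conditions and checks that the quantisation condition~\eqref{i-part}, $Q{\tilde{P}}^*-\mathrm{c.c.}=i/a^3$, is preserved by the evolution:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0

def uv_rhs(N, y):
    Q, P = y
    return [P, -3.0 * P - k**2 * np.exp(-2.0 * N) * Q]   # a = e^N, H = 1

# Bunch-Davies initial conditions deep inside the Hubble radius
N0 = -6.0                          # k/(aH) = e^6 >> 1 initially
eta0 = -np.exp(-N0)
pref = 1.0 / np.sqrt(2.0 * k**3)
Q0 = pref * (1.0 + 1j * k * eta0) * np.exp(-1j * k * eta0)
P0 = -pref * k**2 * eta0**2 * np.exp(-1j * k * eta0)     # P = dQ/dN

sol = solve_ivp(uv_rhs, (N0, 5.0), np.array([Q0, P0], dtype=complex),
                rtol=1e-10, atol=1e-12, dense_output=True)

for N in (-6.0, 0.0, 5.0):
    Q, P = sol.sol(N)
    wronskian = Q * np.conj(P) - np.conj(Q) * P          # should be i / a^3
    print(N, wronskian, 1j * np.exp(-3.0 * N))
\end{verbatim}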
\\ Finally, in the path integral~\eqref{eq: covariant effective action} with doubled degrees of freedom, the quadratic action written in terms of the fields in the Keldysh basis reads: \bae{ \SHam^{(2)}\left[ \varphi^{\indXI\mathfrak{a}},Q^{\indXI\mathfrak{a}}\right]=\frac{1}{2}\int\mathrm{d}^4x\mathrm{d}^4x^\prime Q^{\indXI\mathfrak{a}}(x)\Lambda_{X\indYIJ\mathfrak{a}\mathfrak{b}}(x,x^\prime)Q^{\indYJ\mathfrak{b}}(x^\prime), } where $\Lambda_{X\indYIJ\mathfrak{a}\mathfrak{b}}$ is given by the basis transformation $\Lambda_{X\indYIJ\mathfrak{a}\mathfrak{b}}=(K^T)_\mathfrak{a}{}^a\Lambda_{X\indYIJab}K^b{}_\mathfrak{b}$, with $K$ given in Eq.~\eqref{classical-quantum-def}, and with the $\pm$ basis operator \bae{\label{eq: Lambda_ab} \Lambda_{X\indYIJab}=\mathrm{diag}\left(\Lambda_{X\indYIJ}(\varphi^+),-\Lambda_{X\indYIJ}(\varphi^-)\right). } Note that, as the differential operator $\Lambda_{X\indYIJ}$ depends on the IR fields $\varphi^{\indXI}$, one has in principle to distinguish between its evaluations on $+$ IR fields and on $-$ IR fields: $\Lambda_{X\indYIJ}(\varphi^+)$ and $\Lambda_{X\indYIJ}(\varphi^-)$. However, as we already noticed, one can think of the expansion in the quantum components as an expansion in $\hbar$. In this respect, in order to derive the leading-order quantum effects, it is sufficient to use the expression of $\Lambda$ at zeroth order: \bae{\label{eq: Lambda_ab classical} \Lambda_{X\indYIJab}=\Lambda_{X\indYIJ}(\varphi^\mathrm{cl})\sigma_{3ab}+\mathcal{O}\left(\varphi^\mathrm{q}\right)\delta_{ab}\,. } Although it may seem a crude approximation, we will check the consistency of this expansion in the next subsection, and explain why higher-order corrections are indeed not needed for our computation. \subsection{Covariant coarse-grained effective Hamiltonian action and Langevin equations} \label{subsec:effective Hamiltonian action-Langevin equations} Now we have to gather the three contributions to the covariant coarse-grained Hamiltonian effective action and perform the following path integral: \bae{ \exp\left(i\SHam_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}}]\right)=&\exp\left(i\SHam^{(0)}[\varphi^{\indXI\mathfrak{a}}]\right) \nonumber \\ & \times \int \scrD Q^{\indXI\mathfrak{a}}\exp\left[i\int\mathrm{d}^4x\,a^3\varphi^\prime_{\indXI\mathfrak{a}}Q^{\indXI\mathfrak{a}} +\frac{i}{2}\int\mathrm{d}^4x\mathrm{d}^4x^\prime Q^{\indXI\mathfrak{a}}\Lambda_{X\indYIJ\mathfrak{a}\mathfrak{b}}Q^{\indYJ\mathfrak{b}}\right]. } Note that we safely replaced the measure $\scrD Q^{\covXI\mathfrak{a}}$ by $\scrD Q^{\indXI\mathfrak{a}}$ in the path integral~\eqref{eq: covariant effective action} as the Jacobian for the transformation $Q^{\covXI\mathfrak{a}}\to Q^{\indXI\mathfrak{a}}$ is exactly one. The Gaussian integral over the UV modes can be performed exactly to give: \bae{ \SHam_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}}]=&\, \SHam^{(0)}[\varphi^{\indXI\mathfrak{a}}]\underbrace{+\frac{i}{2}\ln\left[\mathrm{Det}\left(\Lambda\right)\right]}_{\textstyle \begin{array}{c} \SHam_\mathrm{ren}[\varphi^{\indXI\mathrm{cl}}]\end{array}} \underbrace{-\frac{1}{2}\int\mathrm{d}^4x\mathrm{d}^4x^\prime \left(a^3\varphi_{\indXI\mathfrak{a}}^\prime\right)_x(\Lambda^{-1})^{X\indYIJ\mathfrak{a}\mathfrak{b}}{}_{xx^\prime}\left(a^3\varphi^\prime_{\indYJ\mathfrak{b}}\right)_{x^\prime}}_{\textstyle \begin{array}{c} \SHam_\mathrm{IA}[\varphi^{\indXI\mathfrak{a}}]\end{array}}.
\label{def-influence} } The first term dictates the classical, background dynamics of the IR fields, and contains no new information compared to SPT. The second term is nothing but the usual QFT one-loop correction which can be computed in principle, and then reabsorbed by renormalisations of the bare parameters in the classical action $S$. We will thus omit this contribution in the following, although the renormalisation procedure is of course highly non-trivial to perform explicitly. More important for us is the third contribution $\SHam_\mathrm{IA}$ called the \emph{influence action}, describing the influence on the coarse-grained fields of the small-scale UV fluctuations that were integrated out (or more generally, the influence of an environment on the system of interest~\cite{Feynman:1963fq}). In the rest of this subsection, we compute this influence action, discuss its physical interpretation and derive the resulting stochastic equations for the coarse-grained fields.\\ As $\Lambda^{-1}$ is nothing but the closed-time-path-ordered two point correlation function of UV modes, it is easier to express it first in the $\pm$ basis with latin indices $a$, $b$, $\cdots$, and then translate it into the Keldysh basis with use of the matrix of change of basis $K^a{}_\mathfrak{a}$. So we first focus on \bae{\label{eq: two point function} i(\Lambda^{-1}&)^{X\indYIJab}(x,x^\prime) \nonumber \\ & =\int \scrD Q^{\indXIa} \exp{\left[\frac{i}{2}\int \mathrm{d}^4 x \mathrm{d}^4 x^\prime Q^{\indXIa} \Lambda_{X\indYIJab} Q^{\indYJb}\right]} Q^{\indXIa}(x) Q^{\indYJb}(x^\prime) \\ &= \bce{ \displaystyle \theta(N-N^\prime)\braket{\hat{Q}^{\indXI}(x)\hat{Q}^{\indYJ}(x^\prime)}+\theta(N^\prime-N)\braket{\hat{Q}^{\indYJ}(x^\prime)\hat{Q}^{\indXI}(x)}, & (a=+,\,b=+), \\ \displaystyle \braket{\hat{Q}^{\indYJ}(x^\prime)\hat{Q}^{\indXI}(x)}, & (a=+,\,b=-), \\ \displaystyle \braket{\hat{Q}^{\indXI}(x)\hat{Q}^{\indYJ}(x^\prime)}, & (a=-,\,b=+), \\ \displaystyle \theta(N^\prime-N)\braket{\hat{Q}^{\indXI}(x)\hat{Q}^{\indYJ}(x^\prime)}+\theta(N-N^\prime)\braket{\hat{Q}^{\indYJ}(x^\prime) \hat{Q}^{\indXI}(x)}, & (a=-,\,b=-), \nonumber } } where the ordering of the quantum operators is determined by the chronological order along the closed-time path $C$, and the brackets $\braket{\cdots}$ denote usual vacuum expectation values of UV operators under the $\varphi^{\indXI}$-dependent measure $\scrD Q^{\indXI} \exp{\left( i \SHam^{(2)}\left[\varphi^{\indXI}, Q^{\indXI} \right] \right)}$. Note that these expectation values are unambiguously defined in the same way on both branches of the CTP, as we recall that for our computation, it is sufficient to evaluate the kernel $\Lambda$ with vanishing quantum components of the IR fields, i.e. $\Lambda(\varphi^\mathrm{cl})$, see Eq.~\eqref{eq: Lambda_ab classical}. Thus, for each component of the $\Lambda^{-1}$ matrix we are able to forget the $\pm$ indices and we can compute them as in usual perturbation theory, but with background fields replaced by their IR counterparts. 
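The structure of the leading-order kernel~\eqref{eq: Lambda_ab classical} in the Keldysh basis is worth making explicit. As a sketch (of ours), with a commuting scalar stand-in for $\Lambda(\varphi^\mathrm{cl})$ and assuming the standard convention $\varphi^\pm=\varphi^\mathrm{cl}\pm\varphi^\mathrm{q}/2$ for the change of basis (the normalisation of $K$ in Eq.~\eqref{classical-quantum-def} may differ by factors of $\sqrt{2}$, which would only rescale the blocks):
\begin{verbatim}
import sympy as sp

Lam = sp.symbols('Lambda')        # scalar stand-in for Lambda(phi_cl)
Lam_pm = sp.diag(Lam, -Lam)       # +/- basis kernel, Eq. (eq: Lambda_ab classical)
# assumed convention: (phi+, phi-)^T = K (phi_cl, phi_q)^T
K = sp.Matrix([[1, sp.Rational(1, 2)],
               [1, -sp.Rational(1, 2)]])
print(K.T * Lam_pm * K)           # Matrix([[0, Lambda], [Lambda, 0]])
\end{verbatim}
The vanishing ``cl-cl" block of the rotated kernel foreshadows the causality structure discussed below, while the vanishing ``q-q" block shows that the ``$\mathrm{q}$-$\mathrm{q}$" component of $\Lambda^{-1}$ computed in the following is not a naive matrix inverse: it is supplied by the vacuum state through the operator definition~\eqref{eq: two point function}.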
With use of the dimensionless unequal time two-point functions \bae{ (2\pi)^3\delta^{(3)}(\mathbf{k}+\mathbf{k}^\prime)\frac{2\pi^2}{k^3}\mathcal{P}^{X\indYIJ}(N,N^\prime; k)=\Braket{\hat{Q}^{\indXI}(N,\mathbf{k})\hat{Q}^{\indYJ}(N^\prime ,\mathbf{k}^\prime)}, } $\Lambda^{-1}$ can be expressed more explicitly as \bae{ &(\Lambda^{-1})^{X\indYIJab}(x,x^\prime) \nonumber \\ &=-i\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{x}^\prime)}\frac{2\pi^2}{k^3} \bpme{ \begin{array}{c} \theta(N-N^\prime)\mathcal{P}^{X\indYIJ}(N,N^\prime;k) \\ +\theta(N^\prime-N){\mathcal{P}^{X\indYIJ}}^*(N,N^\prime;k) \end{array} & {\mathcal{P}^{X\indYIJ}}^*(N,N^\prime;k) \\ \mathcal{P}^{X\indYIJ}(N,N^\prime;k) & \begin{array}{c} \theta(N^\prime-N)\mathcal{P}^{X\indYIJ}(N,N^\prime;k) \\ +\theta(N-N^\prime){\mathcal{P}^{X\indYIJ}}^*(N,N^\prime;k) \end{array} }^{ab}, } where we used that $\mathcal{P}^{\indYX\indJI}(N^\prime,N;k)={\mathcal{P}^{X\indYIJ}}^*(N,N^\prime;k)$ as a consequence of $\hat{Q}^{X I}(x)$ being Hermitian operators, and hence $\hat{Q}^{X I \dagger}(N,\mathbf{k}) =\hat{Q}^{X I}(N,-\mathbf{k})$. We now express $\Lambda^{-1}$ in the Keldysh basis as \bae{\label{eq: Lambda in Keldysh} &(\Lambda^{-1})^{X\indYIJ}{}_{\mathfrak{a}\mathfrak{b}}(x,x^\prime)= (K^T)_\mathfrak{a}{}^a \sigma_{3ab}\, (\Lambda^{-1})^{X\indYIJbc}(x,x^\prime)\, \sigma_{3cd} K^d{}_\mathfrak{b} \nonumber \\ &=-i\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{x}^\prime)} \frac{2\pi^2}{k^3} \bpme{ 0 & -2i\theta(N^\prime-N)\Im\mathcal{P}^{X\indYIJ}(N,N^\prime;k) \\ 2i\theta(N-N^\prime)\Im\mathcal{P}^{X\indYIJ}(N,N^\prime;k) & \Re\mathcal{P}^{X\indYIJ}(N,N^\prime;k) }_{\mathfrak{a}\mathfrak{b}}\,, } where we used $\theta(N^\prime-N)+\theta(N-N^\prime)=1$. The influence action $\SHam_\mathrm{IA}$ in Eq.~\eqref{def-influence} can then be explicitly obtained after contracting twice with $\varphi^\prime_{\indXI}{}^\mathfrak{a}(x) \propto \delta^\mathfrak{a}_\mathrm{q}\delta(N-N_\sigma(k))$ (note that the position of the $\mathfrak{a}$ index has been flipped compared to Eq.~\eqref{eq: time-derivatives} with use of the $\sigma_1^{\mathfrak{a}\mathfrak{b}}$ metric), retaining only the $\mathrm{q}$-$\mathrm{q}$ component of Eq.~\eqref{eq: Lambda in Keldysh} evaluated at equal times $N=N^\prime=N_\sigma(k)$: \bae{\label{eq:IA} \SHam_\mathrm{IA}=&\,\frac{i}{2}\int\mathrm{d}^4x\mathrm{d}^4x^\prime \left(a^3\varphi_{\indXI}{}^{\mathrm{q}}\right)_x(\Re\Pi^{X\indYIJ})_{xx^\prime}\left(a^3\varphi_{\indYJ}{}^{\mathrm{q}}\right)_{x^\prime}, } where \bae{\label{eq: Pi tensor} \Pi^{X\indYIJ}(x,x^\prime)&=\int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{x}^\prime)} \frac{2\pi^2}{k^3} \delta(N-N_\sigma)\delta(N^\prime-N_\sigma) \mathcal{P}^{X\indYIJ}(N,N^\prime;k) \nonumber \\ &=\frac{k_\sigma{}^\prime}{k_\sigma}\frac{\sin\left(k_\sigma|\mathbf{x}-\mathbf{x}^\prime|\right)}{k_\sigma|\mathbf{x}-\mathbf{x}^\prime|}\mathcal{P}^{X\indYIJ}(N,k_\sigma)\delta(N-N^\prime), } with $\mathcal{P}$ on the second line being simply the usual equal-time dimensionless two-point correlation function. Note that any higher-order correction in the quantum components of the IR fields, coming from evaluating $\Lambda$ beyond the leading order result~\eqref{eq: Lambda_ab classical}, would generate terms of order $\mathcal{O}(\varphi^\mathrm{q})^3$ in the influence action. In other words, our computation is exact up to quadratic order in the quantum components.
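The second line of Eq.~\eqref{eq: Pi tensor} follows from a short computation that is worth making explicit. Writing $\delta(N-N_\sigma(k))=k_\sigma^\prime\,\delta(k-k_\sigma(N))$ and performing the angular integral, $\int\mathrm{d}\Omega_{\mathbf{k}}\,\mathrm{e}^{i\mathbf{k}\cdot\mathbf{r}}=4\pi\sin(kr)/(kr)$, one finds \bae{ \int\frac{\dd^3 \mathbf{k}}{(2\pi)^3}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{r}}\frac{2\pi^2}{k^3}\,k_\sigma^\prime\,\delta(k-k_\sigma)\,\mathcal{P}(k)=k_\sigma^\prime\int\frac{\mathrm{d} k}{k}\frac{\sin(kr)}{kr}\delta(k-k_\sigma)\mathcal{P}(k)=\frac{k_\sigma^\prime}{k_\sigma}\frac{\sin(k_\sigma r)}{k_\sigma r}\mathcal{P}(k_\sigma), } with $r=|\mathbf{x}-\mathbf{x}^\prime|$, while the second delta function in Eq.~\eqref{eq: Pi tensor} then enforces $N^\prime=N$.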
Although we have just seen that the contractions with the ``time derivatives" $\varphi^\prime_{\indXI}{}^\mathfrak{a}$ only kept the information about the $\mathrm{q}$-$\mathrm{q}$ component of $\Lambda^{-1}{}_{\mathfrak{a}\mathfrak{b}}$, it is still interesting to notice the ``causality structure"~\cite{2009AdPhy..58..197K,Calzetta:2008iqa} of this operator. \paragraph*{Classical-Classical component.} First, the ``cl-cl" component in Eq.~\eqref{eq:IA} is zero. It is also easy to check that there is no term independent of $\varphi^\mathrm{q}$ in the fully nonlinear, purely IR Keldysh action $\SHam^{(0)}[\varphi^{\indXI\mathrm{cl}},\varphi^{\indXI\mathrm{q}}]$, as can be seen for example from its expansion~\eqref{eq: S0 linear in varphi-q} in the quantum components of the IR fields. This means that for vanishing quantum components $\varphi^{\indXI\mathrm{q}}=0$, the effective action $S_\mathrm{eff}=\SHam^{(0)}+S_\mathrm{IA}$ is zero: $\SHam_\mathrm{eff}[\varphi^{\indXI\mathrm{cl}},0]=0$. This was expected because for $\varphi^\mathrm{q}=0$, the fields coincide on the forward and backward parts of the closed-time contour and thus the two contributions cancel each other. A last interpretation is that the quantum components do not propagate alone and must mix with classical ones. In this respect, note that although our derivation was done at lowest non-trivial order in the quantum components of the fields and momenta, with $\Lambda \to \Lambda(\varphi^{\mathrm{cl}})$ in the path integral over UV modes, this property actually holds non-perturbatively. Indeed, any correction to the current computation would be proportional to powers of $\varphi^\mathrm{q}$, and thus would still be vanishing when evaluated on configurations with purely classical components. \paragraph*{Classical-Quantum component.} This component is interesting because, although non-zero in Eq.~\eqref{eq: Lambda in Keldysh}, it results in a vanishing contribution to the influence action after contracting with the ``time-derivatives" $\varphi^\prime_{\indXI}{}^{\mathfrak{a}} \propto \delta^\mathfrak{a}_\mathrm{q}$, a property inherited from the boundary conditions~\eqref{eq: boundary conditions}. If $\varphi^{I\mathrm{cl}}$ and $Q^{I\mathrm{q}}$ were not vanishing at $N_\sigma(k)$, $\SHam_\mathrm{IA}$ would be augmented by a cross-term of the form $\left[\varphi^\mathrm{cl} \left(\Lambda^{-1}\right)_{\mathrm{cl},\mathrm{q}} \varphi^\mathrm{q} +\left( \mathrm{cl} \leftrightarrow \mathrm{q} \right) \right]$ and proportional to the imaginary part of the power spectrum. The mixed ``q-cl"/``cl-q" components in the influence action are more generally known to describe the dissipation of the system (the IR modes) through its backreaction on the environment (the UV modes), and to be responsible for the famous fluctuation-dissipation theorem. Indeed, if no boundary condition was imposed at the time $N_\sigma(k)$, the classical field configurations $\varphi_{\indXI}{}^\mathrm{cl}(x)$ would get an extra friction term in their equations of motion of the form $-2\int\mathrm{d}^4x^\prime\,\Im\Pi^{X\indYIJ}(x,x^\prime)a^3(N^\prime)\varphi_{\indYJ}{}^\mathrm{cl}(x^\prime)$.
However, in our setup of stochastic inflation, the continuous flow from UV to IR modes via the time-dependent cutoff $k_\sigma(N)$ is unidirectional and we expect no such backreaction, and thus no dissipation.\footnote{The corresponding mass and friction terms entailed by this classical-quantum component were neglected by hand in Refs.~\cite{Morikawa:1989xz,Matarrese:2003ye,Levasseur:2013ffa}. Moss and Rigopoulos cast doubt on the naive way to perform the IR-UV decomposition by a time-dependent window function~\cite{Rigopoulos:2016oko,Moss:2016uix}, and indeed, in Refs.~\cite{Tokuda:2017fdh,Tokuda:2018eqs}, Tokuda and Tanaka carefully showed that the stochastic theory enables one to recover the free propagators only when choosing the appropriate boundary conditions in the Keldysh basis, with the consequence of prohibiting the classical-quantum component.} \paragraph*{Quantum-Quantum component.} The ``q-q" component is the only one that survives in the influence action after contracting with the pseudo ``time-derivatives", and because it is quadratic in the quantum parts of the fields and momenta, it constitutes a non-trivial quantum correction to the classical dynamics, again describing the effects of the integrated-out short-scale fluctuations on the IR sector. Let us now discuss its physical implications.\\ Strikingly, the influence action~\eqref{eq:IA} is purely imaginary. This implies that in the path integral~\eqref{Z} over the IR components, the weights of configurations with non-zero quantum components are exponentially suppressed. This important fact guarantees that our expansion in the quantum components of the fields (and momenta) is well justified. In a related manner, even though we do not use the formalism of density matrices in our paper, it can be shown quite generally that such an imaginary ``$\mathrm{q}$-$\mathrm{q}$" component in the influence action acts to suppress the off-diagonal terms of the reduced density matrix $\rho_\mathrm{r}(\varphi^+,\varphi^-)$ obtained after tracing out the environment (the UV modes here), a process that can be understood as decoherence (see, e.g. Refs.~\cite{Feynman:1963fq,Calzetta:2008iqa}). The exponential suppression of the quantum components of the fields in the weight $\mathrm{e}^{iS_\mathrm{IA}}$ of the path integral is of course reminiscent of statistical field theory. Following the seminal paper of Feynman and Vernon~\cite{Feynman:1963fq}, this insight is put to good use by performing what is sometimes called a Hubbard-Stratonovich transformation~\cite{1957SPhD....2..416S,1959PhRvL...3...77H}: introducing auxiliary fields $\xi^{\indXI}$, the exponential of the influence action can be rewritten as \bae{ \mathrm{e}^{iS_\mathrm{IA}}=\int\scrD\xi^{\indXI}P\left[\xi^{\indXI}; \varphi^{\indXI\mathrm{cl}}\right]\mathrm{e}^{i\int\mathrm{d}^4x\,a^3\xi^{\indXI}\varphi_{\indXI}{}^\mathrm{q}}, \label{HS} } where $P\left[\xi^{\indXI} ; \varphi^{\indXI\mathrm{cl}}\right]$ denotes the Gaussian weight \bae{ P[\xi^{\indXI}; \varphi^{\indXI\mathrm{cl}}]=\sqrt{\mathrm{Det}( 2 \pi \Re\Pi)}^{-1}\exp\left[-\frac{1}{2}\int\mathrm{d}^4x\mathrm{d}^4x^\prime\xi^{\indXI}\left(\Re\Pi^{-1}_{X\indYIJ}\right)_{\varphi^{\indXI\mathrm{cl}}}\xi^{\indYJ}\right]\,, \label{Gaussian-weight} } and where the subscript $\varphi^{\indXI\mathrm{cl}}$ recalls that $\Pi$, as essentially the Green's function of $\Lambda(\varphi^{\indXI\mathrm{cl}})$, can thus be seen as a (complicated) functional of the IR classical components $\varphi^{\indXI\mathrm{cl}}$.
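In finite dimensions, the identity underlying Eqs.~\eqref{HS}--\eqref{Gaussian-weight} simply reads \bae{ \mathrm{e}^{-\frac{1}{2}J^T\Sigma J}=\int\mathrm{d}^n\xi\,\frac{1}{\sqrt{\mathrm{det}\left(2\pi\Sigma\right)}}\exp\left(-\frac{1}{2}\xi^T\Sigma^{-1}\xi\right)\mathrm{e}^{i\xi^T J}, } i.e. the characteristic function of a Gaussian distribution; here, with the identifications $J_x=\left(a^3\varphi_{\indXI}{}^{\mathrm{q}}\right)_x$ and $\Sigma=\Re\Pi$, the left-hand side reproduces $\mathrm{e}^{iS_\mathrm{IA}}$ with the influence action~\eqref{eq:IA}.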
The manipulation~\eqref{HS}--\eqref{Gaussian-weight} is a simple mathematical identity, in essence the inverse of a Gaussian integration. Yet, it offers a very useful physical insight. Indeed, the partition function~\eqref{Z} can now be rewritten as \bae{ Z&=\int \scrD \varphi^{\indXI\mathfrak{a}} \exp\left(i\SHam_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}}] \right)\, \label{Z-new} \\ &=\int \scrD \varphi^{\indXI\mathrm{cl} } \int\scrD\xi^{\indXI} P\left[\xi^{\indXI};\varphi^{\indXI\mathrm{cl}}\right] \int \scrD \varphi^{\indXI\mathrm{q}} \,\mathrm{exp}( \underbrace{ i \SHam^{(0)}\left[\varphi^{\indXI\mathfrak{a}}\right]+i\int\mathrm{d}^4x\,a^3\xi^{\indXI}\varphi_{\indXI}{}^\mathrm{q}}_{\textstyle \begin{array}{c} i \tilde{\SHam}_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}},\xi^{\indXI}]\end{array}} )\,, \nonumber } with $\int\scrD\xi^{\indXI}P[\xi^{\indXI};\varphi^{\indXI\mathrm{cl}}]=1$ for any realisation of $\varphi^{\indXI\mathrm{cl}}$. Upon the introduction of the Hubbard-Stratonovich fields $\xi^{\indXI}$, the imaginary quadratic interactions of the quantum components in $\SHam_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}}]$ have been turned into a real linear coupling between the quantum components and the auxiliary fields in the new real effective action $\tilde{\SHam}_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}},\xi^{\indXI}]$. Of course, the physical interpretation behind Eq.~\eqref{Z-new} is that $P\left[\xi^{\indXI};\varphi^{\indXI\mathrm{cl}}\right]$ endows the $\xi$'s with Gaussian statistics with \bae{\label{eq: noise stat.} \bce{ \displaystyle \braket{\xi^{\indXI}(x)} \equiv \int\scrD\xi^{\indXI}\xi^{\indXI}(x)P\left[\xi^{\indXI};\varphi^{\indXI\mathrm{cl}}\right]=0, \\ \displaystyle \braket{\xi^{\indXI}(x)\xi^{\indYJ}(x^\prime)}\equiv \int\scrD\xi^{\indXI}\xi^{\indXI}(x)\xi^{\indYJ}(x^\prime)P\left[\xi^{\indXI};\varphi^{\indXI\mathrm{cl}}\right]= \left.\Re\,\Pi^{X\indYIJ}(x,x^\prime)\right|_{\varphi^{\indXI\mathrm{cl}}}. } } The delta function $\delta(N-N^\prime)$ in $\Re\Pi$, see Eq.~\eqref{eq: Pi tensor}, indicates that the $\xi$'s can be interpreted as Gaussian white noises, as in the heuristic approach, with amplitudes determined by the power spectra of the UV modes on the ``background" of the IR classical components. Additionally, it is interesting to notice that the reality of the noise is guaranteed in this first-principle derivation, contrary to the heuristic approach where this feature has to be added by hand (see Sec.~\ref{limitations}). The partition function~\eqref{Z-new} together with equations~\eqref{eq: Pi tensor} and \eqref{eq: noise stat.} represents one of the main results of this paper.\\ It is now relatively straightforward to take into account the effect of the quantum components on the classical ones. Indeed, recall that our computation of $\SHam_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}}]$ was made up to quadratic order in the quantum components.
Consistently neglecting cubic terms in the expression~\eqref{eq: S0 linear in varphi-q} for $\SHam^{(0)}$, the quantum components therefore enter only linearly in $\tilde{\SHam}_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}},\xi^{\indXI}]$, and the path integral over them can hence be performed explicitly, yielding the delta functional $\delta\left( \left.\var{\SHam^{(0)}\left[\varphi^{\indXI}\right]}{\varphi_{\indYJ}}\right|_{\varphi^{\indXI}=\varphi^{\indXI\mathrm{cl}}}+a^3 \xi^{\indYJ}\right)$ in the remaining path integral over the classical components of the IR fields, $\varphi^\mathrm{cl}$, and the auxiliary variables $\xi$. Thus, the only trajectories with non-zero weights in the path integral are the ones that verify the following equations of motion: \bae{\label{eq: raw Langevin} \varphi^{I\mathrm{cl}}{}^\prime=\frac{\varpi^{I\mathrm{cl}}}{H}+\xi^{QI}, \qquad \varpi_I^{\mathrm{cl}}{}^\prime=-3\varpi_I^\mathrm{cl}-\frac{V_I\left(\varphi^{I\mathrm{cl}}\right)}{H}+\frac{1}{H}\Gamma^{K}_{IJ}\left(\varphi^{I\mathrm{cl}}\right)\varpi_K^\mathrm{cl}\varpi^{J\mathrm{cl}} +\xi^{P}_I, } where, for simplicity, we will omit the ``cl" superscript in what follows. While the first equation is already in a manifestly covariant form, the second one is not. However, this is not surprising, as neither $\varpi_I^\prime$ nor $P_I$ (which was integrated out) is a covariant quantity itself. Nevertheless, this equation does respect general covariance, as is seen by using $\xi^{P}_I=\xi^{{\tilde{P}}}_I+\Gamma^K_{IJ}\varpi_K\xi^{QJ}$, as well as $\varpi_I^\prime={\cal D}_N\varpi_I+\Gamma^K_{IJ}\varpi_K\varphi^{J\prime}$ and replacing $\varphi^{J\prime}$ by its value according to the first Langevin equation. Finally, the stochastic EoM~(\ref{eq: raw Langevin}) can be summarised in an explicitly covariant way as (again omitting the ``cl" superscript for conciseness) \bae{\label{eq: covariant Langevin} \varphi^{I\prime}=\frac{\varpi^I}{H}+\xi^{QI}, \qquad {\cal D}_N\varpi_I=-3\varpi_I-\frac{V_I}{H}+\xi^{{\tilde{P}}}_I. } As we discussed at length in Sec.~\ref{sec: stochastic anomalies}, these Langevin equations should be understood as the continuous limit of a discrete process with a Stratonovich scheme. Moreover, the identification of independent quantum fields in the Bunch-Davies regime provides one, upon classicalisation, with an essentially unique set of independent white noises with which to formulate these Stratonovich Langevin-type equations. As we also explained there, It\^o's discretisation has a number of advantages, and one can convert the latter equations into the corresponding It\^o's ones as: \bae{\label{eq: Ito-Langevin-final} \boxed{ \mathfrak{D}_N\varphi^I=\frac{\varpi^I}{H}+\xi^{QI}, \qquad \mathfrak{D}_N\varpi_I=-3\varpi_I-\frac{V_I}{H}+\xi^{\tilde{P}}_I, } } with use of the It\^o-covariant derivatives~(\ref{eq: ItoD for X})--(\ref{eq: ItoD for V}). Let us also remind the reader that the local Hubble parameter $H$ is explicitly given in terms of the IR fields and momenta through the Friedmann constraint~\eqref{laspeIR} \bae{ 3M_\text{Pl}^2H^2=\frac{1}{2}G^{IJ}(\varphi)\varpi_I\varpi_J+V(\varphi)\,, } without modification compared to the heuristic approach.
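As a concrete illustration of Eq.~\eqref{eq: Ito-Langevin-final}, the following minimal Euler--Maruyama sketch (a toy example of ours) integrates it for a single field with flat field space, in the Markovian approximation with the massless noise amplitude $(H/2\pi)^2$ and, consistently with the massless limit discussed below, noise in the field direction only; in this case the It\^o-covariant derivatives reduce to ordinary ones:
\begin{verbatim}
import numpy as np

Mpl = 1.0
V  = lambda phi: 0.5e-10 * phi**2      # toy quadratic potential (assumption)
dV = lambda phi: 1e-10 * phi

def hubble(phi, varpi):
    # local Friedmann constraint: 3 Mpl^2 H^2 = varpi^2 / 2 + V(phi)
    return np.sqrt((0.5 * varpi**2 + V(phi)) / (3.0 * Mpl**2))

def simulate(phi, varpi, Ntot=60.0, dN=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(round(Ntot / dN)):
        H = hubble(phi, varpi)          # Ito scheme: evaluated at the pre-point
        xi = (H / (2.0 * np.pi)) * rng.standard_normal() / np.sqrt(dN)
        phi, varpi = (phi + (varpi / H + xi) * dN,
                      varpi + (-3.0 * varpi - dV(phi) / H) * dN)
    return phi, varpi

print(simulate(15.0, -1e-5))            # one realisation of the IR patch
\end{verbatim}
The discretised white noise has variance $(H/2\pi)^2/\mathrm{d}N$ per step, so that its auto-correlation reproduces $\braket{\xi(N)\xi(N^\prime)}=(H/2\pi)^2\delta(N-N^\prime)$ in the continuum limit. Finally, let us comment on the status of these equations.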
As the derivation above shows, these are the semi-classical equations governing the trajectories that have a non-zero weight in the closed-time path integral, but they do not yet correspond to physical quantities: the expectation values of the quantum theory are only recovered once the ensemble averages over the noises are taken. More precisely, this statistical average exactly reproduces the quantum average only when the initial effective action $\SHam_\mathrm{eff}[\varphi^{\indXI\mathfrak{a}}]$ is at most quadratic in the quantum components, resulting in the above delta functional (leaving aside here the fact that we only integrated out the UV fluctuations at quadratic order in the action). It is in that sense that the stochastic equations~\eqref{eq: Ito-Langevin-final}, derived at lowest non-trivial order in the quantum components, can be qualified as ``semi-classical''. As described in Sec.~\ref{sec: stochastic anomalies}, physical quantities derived from It\^o's SDE~\eqref{eq: Ito-Langevin-final} only depend on the auto-correlation of the noises, which is physically specified by the UV two-point correlations. The presence of It\^o-covariant derivatives also manifestly guarantees general covariance. These equations are thus free from any stochastic anomaly. Furthermore, as we stressed above, the reality of the noises is also ensured, as their auto-correlations~\eqref{eq: noise stat.} derived from the CTP formalism are automatically given by the real part of the UV two-point functions. \section{Markovian analytical approximations and phase-space Fokker-Planck equation} \label{sec:Markovian} As we explained in Sec.~\ref{subsec:Markovian?}, stochastic inflation is strictly speaking not described by a Markov process. Indeed, the noise amplitude is the solution of the differential equation satisfied by the UV modes, which evolve on the stochastic IR background, rather than an explicit function of the IR fields themselves. In particular, the noise amplitude \emph{a priori} depends on the whole past history of the stochastic process. However, in some situations, the noise amplitude can be approximately expressed in terms of the instantaneous IR fields, in which case the dynamics can be thought of as Markovian and a powerful tool becomes accessible: the Fokker-Planck equation. In this section, we deal with these Markovian cases. We begin by presenting the covariant Fokker-Planck equation that dictates the evolution of the one-point probability density function (PDF) for the IR fields and momenta, and that can be inferred from the Langevin equations~\eqref{eq: Ito-Langevin-final} when assuming a Markovian dynamics. Then we show how to approximate the noise amplitude, first in the simpler situation in which the scalar fields are strictly massless, and then in a generic case under the assumption of slow-varying masses. \subsection{Covariant Fokker-Planck equation in phase space} Let us first reemphasise that throughout this work, we treat the fields as locally homogeneous, i.e. at leading order in a gradient expansion. Although this might seem very crude, following the separate universe approach it nonetheless enables one to capture the full nonlinear dynamics on super-Hubble scales. Hence, as described in Sec.~\ref{subsec: stochastic equations}, the Langevin equations~\eqref{eq: Ito-Langevin-final} govern the stochastic dynamics of a representative $\sigma$-Hubble patch.
In the Markovian limit, with the assumption that the noise amplitudes are well approximated as functions of the current IR field values (and momenta), these Langevin equations give rise to the corresponding Fokker-Planck (FP) equation, with use of the rule presented in Appendix~\ref{subsec:FP}, as \bae{\label{eq: phase space fokker-planck} \partial_N P=&-D_{\varphi^I}\left[\frac{G^{IJ}}{H}\varpi_J P\right]+\partial_{\varpi_I}\left[\left(3\varpi_I+\frac{V_I}{H}\right)P\right] \nonumber \\ &+\frac{1}{2}D_{\varphi^I}D_{\varphi^J}(A^{QQIJ}P)+D_{\varphi^I}\partial_{\varpi_J}(A^{Q{\tilde{P}}I}{}_J P) +\frac{1}{2}\partial_{\varpi_I}\partial_{\varpi_J}(A^{{\tilde{P}}\covP}{}_{IJ}P)\,. } Here we defined one last covariant derivative $D_{\varphi^I}$ with respect to the IR fields, the phase-space one, whose action on a rank-1 tensor is $D_{\varphi^I}\calU^J=\nabla_I\calU^J+\Gamma_{IL}^K\varpi_K\partial_{\varpi_L}\calU^J$ and whose generalisation to rank-$n$ tensors is straightforward. As for the $A^{\tilde{X}\covYIJ}$'s, these are the noises' auto-correlations at coincident points: \bae{ A^{\tilde{X}\covYIJ}(N)\delta(N-N^\prime)=\braket{\xi^{\covXI}(N)\xi^{\covYJ}(N^\prime)}=\frac{k_\sigma^\prime}{k_\sigma} \Re[\mathcal{P}^{\tilde{X}\covYIJ}(N,k_\sigma(N))]\delta(N-N^\prime), \label{noises-FP-coincident} } which are here assumed to be functions of $\varphi^I(N)$ and $\varpi_I(N)$, and up to the factor $k_\sigma^\prime/k_\sigma$ that may be approximated by unity, are nothing but the real parts of the UV power spectra. One should remember that in the FP equation in field space~\eqref{eq: covariant fokker-planck}, which we previously wrote for pedagogical reasons, the scalar PDF $P_\mathrm{s}$ is rescaled compared to the PDF $P$ that directly results from the Langevin equations. Here, on the contrary, it is easy to check that the phase-space PDF $P(\varphi^I,\varpi_I,N)$ (strictly speaking, the transition probability given some initial state) is already a scalar quantity, without the need of any rescaling. In this respect, although we skipped the intermediate steps of the computation, we stress that Eq.~\eqref{eq: phase space fokker-planck} is not postulated, but simply derived from the Langevin equations and Eq.~\eqref{FP-from-Langevin}, with covariant phase-space derivatives naturally emerging from the computation. Given the considerable complexity of the phase-space It\^o-Langevin equations~\eqref{eq: Ito-Langevin-final}, the manifestly covariant form of the FP equation~\eqref{eq: phase space fokker-planck} is rather remarkable and provides a non-trivial consistency check of the former. This equation generalises the FP equation that we proposed in our previous paper in a simpler setup~\cite{Pinol:2018euk}: in field space and for test scalar fields in de Sitter spacetime only. Also, the ``stochastic anomalies" were not solved there, and the form of the FP equation was simply assumed based on the requirement of general covariance. Not only do we present here the derivation of this phase-space FP equation, but we are also confident that it can now be used to compute correlation functions of multifield inflation with curved field space in realistic setups where the fields backreact on the geometry of spacetime. However, it would be restrictive to consider that the virtue of this equation only concerns these situations: the It\^o-Stratonovich ambiguity also plagued single-field inflation, and our first-principle derivation, with emphasis on manifest covariance, enabled us to solve it in this simpler context as well.
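As an illustration of Eq.~\eqref{eq: phase space fokker-planck}, consider a single field with flat field space and keep only the field-field noise with the massless amplitude $A^{QQ}=(H/2\pi)^2$ (an approximation justified in the next subsection): the covariant derivatives reduce to ordinary ones and one recovers the familiar phase-space FP equation of stochastic inflation, \bae{ \partial_N P=-\partial_\varphi\left(\frac{\varpi}{H}P\right)+\partial_\varpi\left[\left(3\varpi+\frac{V_{,\varphi}}{H}\right)P\right]+\frac{1}{2}\partial_\varphi^2\left[\left(\frac{H}{2\pi}\right)^2P\right]. }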
The remaining nontrivial difficulty is now to prescribe values for the auto-correlation of the noises $A^{\tilde{X}\covYIJ}$, and in the next two sections, we turn to interesting particular cases where we can give analytical estimates. Note that this will be possible because we assume from now on a slow-varying regime, which was not the case until now. Also, because the dynamics of the UV modes in SPT is conveniently solved in terms of the conformal time $\tau$ such that $\mathrm{d} N= a H \mathrm{d} \tau$, we will make use of this time variable in what follows. In our context in which $H$ is a stochastic quantity, conformal time is strictly speaking not a deterministic variable like the number of $e$-folds\xspace, but we will nonetheless make this approximation, justified as follows. The noise auto-correlation at time $N$ only depends on the UV fluctuations with wavenumber $k_\sigma(N)$, which exited the Hubble radius $\simeq -\mathrm{ln}(\sigma)$ $e$-folds\xspace before $N$. As the UV fluctuations follow the Bunch-Davies behaviour until only a few $e$-folds\xspace before Hubble crossing (all the more so for light fields of particular relevance in the stochastic formalism), in practice it is necessary to follow the evolution of a given mode $k_\sigma(N)$ for only a few $e$-folds\xspace (typically 5), a duration that is not large enough for stochastic effects to accumulate and significantly affect the local Hubble scale. \subsection{Massless limit} \label{sec:light} To understand the UV fluctuations analytically, it is particularly useful to use the projections of the mode functions on a set of parallel-transported vielbeins, the $Q^\alpha_A$ introduced in Eq.~\eqref{def-projected-perturbations}. They provide independent degrees of freedom deep inside the Hubble radius, only mixing via the projected mass matrix $M^2{}^\alpha{}_\beta$, as can be seen from their EoM~\eqref{eq: UV eom mode vielbein}. In this section, we consider that this projected mass matrix is completely negligible. By consistency of the slow-varying approximation, we also use the zeroth-order, locally de Sitter expression of the scale factor $a(\tau)\simeq -1/(H_\star \tau)$, where $H_\star$ denotes the Hubble scale, considered constant around Hubble crossing, i.e. in the period interpolating between the Bunch-Davies regime and the crossing of the coarse-graining scale, such that $k_\sigma(\tau) \tau \simeq -\sigma (1+{\cal O}(\epsilon))$. Under these conditions, the mode functions $Q^\alpha_A$ simply provide $N_\mathrm{fields}$ independent copies $\left(Q^\alpha_A \propto \delta^\alpha_A\right)$ of the standard single-field massless mode function in quasi de Sitter spacetime, which read, with Bunch-Davies initial conditions: \bae{ Q^\alpha_A(\tau,k)=-i\delta^\alpha_A\frac{\mathrm{e}^{-ik\tau}}{a\sqrt{2k}}\left(1-\frac{i}{k\tau}\right) \,. \label{simple-mode-functions} } Note that we used the freedom of redefining mode functions with an arbitrary unitary matrix as explained in Sec.~\ref{subsec: classicalisation}, in order to choose a phase that leads to explicitly real values of $Q^\alpha_A$ (and ${\tilde{P}}^\alpha_A$) on super-Hubble scales.
From these mode functions, and using Eq.~\eqref{two-point-vacuum}, one deduces the multifield power spectrum of the UV modes at coarse-graining scale crossing: \bae{ \mathcal{P}^{QQIJ} \left(\tau,k_\sigma(\tau)\right)&=\frac{k_\sigma(\tau)^3}{2\pi^2}e^I_\alpha e^J_\beta Q^\alpha_A\left(\tau,k_\sigma(\tau)\right) Q^{\beta*}_A\left(\tau,k_\sigma(\tau)\right) \nonumber\\ & = \left(\frac{H_\star}{2\pi}\right)^2G^{IJ}\left(1+\sigma^2 \right), \label{PQQ-simple} } where we recall that $\sigma \ll 1$, so that the last term should be neglected. Notice that although mass effects are not taken into account in this section, the introduction of the parallel-transported vielbeins enables one to capture the geometrical effects of the curved field space at the level of UV fields $\left(\mathcal{P}^{QQIJ} \propto G^{IJ}\right)$. Then, neglecting slow-roll suppressed metric perturbations $\propto M^{2}{}_{{\tilde{P}} Q}$ in ${\tilde{P}}$ for consistency, the momentum UV modes read \bae{ {\tilde{P}}^\alpha_A (\tau,k)&\simeq \frac{\mathrm{d}}{a\mathrm{d} \tau} Q^\alpha_A(\tau,k)=-\delta^\alpha_A\frac{H^2 \mathrm{e}^{-ik\tau}}{ \sqrt{2k^3}}k^2 \tau^2\,, \label{P-mode-function-simple} } so that using again Eq.~\eqref{two-point-vacuum}, one obtains, for the power spectra involving momenta: \bae{ \mathcal{P}^{Q{\tilde{P}}I}{}_J \left(\tau,k_\sigma(\tau)\right)&= -\sigma^2 H_\star \left(\frac{H_\star}{2\pi}\right)^2\delta^I_J\left(1-i\sigma\right), \\ \mathcal{P}^{{\tilde{P}} {\tilde{P}}}{}_{IJ}\left(\tau,k_\sigma(\tau)\right)&=\sigma^4 H_\star^2 \left(\frac{H_\star}{2\pi}\right)^2G_{IJ}\,. } Note that the cross power-spectrum has a non-zero imaginary part $\Im\mathcal{P}^{Q{\tilde{P}}I}{}_J=\sigma^3H_\star\left(\frac{H_\star}{2\pi}\right)^2\delta^I_J$, a remnant of the quantum nature of the scalar fields, and completely fixed by the non-vanishing commutation relation between $Q$ and ${\tilde{P}}$ in Eq.~\eqref{eq: commutation relations}. Naturally, the fact that it is suppressed by the small parameter $\sigma$ is related to the highly squeezed state of the fluctuations and to the fact that they ``classicalise'' on super-Hubble scales. Notice, however, that the Schwinger-Keldysh derivation shows that only the real parts of the power spectra appear in the statistics of the stochastic noises. In the strict massless and ``slow-roll'' regime of this section, the mode functions of the momenta~\eqref{P-mode-function-simple} at coarse-graining scale crossing are suppressed by $\sigma^2$ compared to the ones of the fields~\eqref{simple-mode-functions}, hence the power spectra involving the former should be self-consistently set to zero in practical computations. However, this property only holds within this framework, and in general, the power spectra involving momenta, while ``slow-roll'' suppressed, are not $\sigma$-suppressed and should be considered, as we will show in the next section. \subsection{Slow-varying masses} \label{sec:generic} Let us now go one step further and consider the effects of a non-zero mass matrix $M^2{}^\alpha{}_\beta$. First, we notice that at early times the mass term is negligible compared to the gradient term, i.e., $\forall\alpha,\beta, \quad M^2{}^\alpha{}_\beta\ll k^2/a^2$. Thus, the initial conditions and the first stage of evolution of the perturbations are equivalent to the massless case. However, the behaviour is different around Hubble crossing.
To identify these non-trivial mass effects, we make the assumption that the projected mass matrix is approximately constant in the period interpolating between the Bunch-Davies regime and the crossing of the coarse-graining scale, a feature observed in many concrete models of inflation. It is then possible to diagonalise the mass matrix locally, \emph{around the time of Hubble crossing}, making use of the set of mass eigenvalues $m_i^2$ and corresponding eigenvectors $D^\alpha{}_i$ such that $M^{2\alpha}{}_\beta D^\beta{}_i=m_i^2 D^\alpha{}_i$\, (no sum on $i$). According to our assumption, these quantities can be considered constant in the interpolating period, so that the vielbein-basis EoM~\eqref{eq: UV eom mode vielbein} then result in the simple set of diagonal equations in the mass eigenbasis: \bae{\label{eq: UV eom mode eigenbasis} \partial_N^2 Q^i_A +\left(3-\epsilon\right)\partial_NQ^i_A +\left(\frac{k^2}{a^2H^2}+\frac{m^2_i}{H^2}\right)Q^i_A=0, \qquad \text{(no sum on $i$)}, } where $Q^i_A=(D^{-1})^i{}_\alpha Q^\alpha_A$ denotes the projected mode functions on the mass eigenbasis. We note here that the mass matrix $M^{2\alpha}{}_\beta$ is real and symmetric, hence the mass eigenvalues $m_i^2$ are real, and one can take the diagonalising matrix $D^\alpha{}_i$ to be a real orthonormal matrix (with $(D^{-1})^i{}_\alpha=(D^T)^i{}_\alpha=D^\alpha{}_i$). It is important to notice that the mass eigenvalues $m_i^2$ are scalars in field space, and that they also correspond to eigenvalues of the original mass matrix $M^2{}^I{}_J$, with eigenvectors given by $e^I_i=e^I_\alpha D^\alpha{}_i$, i.e. \bae{ M^{2I}{}_J e^J_i=m_i^2e^I_i, \qquad \text{(no sum on $i$)}. \label{originalM2-mi} } Moreover, taking into account the orthonormality of $D^\alpha{}_i$, the set of vectors $e^I_i$, rotated from the vielbeins $e^I_\alpha$, constitute another set of vielbeins, hence satisfying $G_{IJ}e^I_i e^J_j=\delta_{ij}$ and $\delta^{ij}e^I_i e^J_j=G^{IJ}$. The initial conditions for the $Q^i_A$ are simply given by $Q^i_A(\tau,k) \underset{-k\tau\gg 1}{\to} \frac{-i}{a\sqrt{2k}}(D^{-1})^i{}_A \mathrm{e}^{-ik\tau}$ with $(D^{-1})^i{}_A=(D^{-1})^i{}_\alpha\delta^\alpha_A$, so that the corresponding solution of Eq.~\eqref{eq: UV eom mode eigenbasis} reads, at leading order in the slow-varying approximation: \bae{\label{eq: mode Q} Q^i_A(\tau,k)= (D^{-1})^i{}_A Q^i(\tau,k), \qquad \text{(no sum on $i$)}, } with $Q^i$ the familiar single-field mode function \bae{ Q^i(\tau,k)=\frac{\mathrm{e}^{i(\nu_i-1/2)\pi/2}}{2a}\sqrt{-\pi\tau}H_{\nu_i}^{(1)}(-k\tau), \quad \text{with } \nu_i= \bce{ \displaystyle \sqrt{\frac{9}{4}-\frac{m_i^2}{H^2}}, & \text{if } \displaystyle \frac{m_i^2}{H^2}<\frac{9}{4}, \\[10pt] \displaystyle i\sqrt{\frac{m_i^2}{H^2}-\frac{9}{4}}, & \text{if } \displaystyle \frac{m_i^2}{H^2}\geq\frac{9}{4}, } \label{def-Qi} } expressed in terms of $H_{\nu_i}^{(1)}$, the Hankel function of the first kind and of order $\nu_i$. Hence, one obtains the dimensionless power spectrum of UV modes at the time when $k=\sigma aH$ as \bae{\label{eq: UV PS massive case} \mathcal{P}^{QQIJ}(\tau,k_\sigma(\tau))=\frac{k_\sigma^3(\tau)}{2\pi^2}\sum_i e^I_i e^J_i|Q^i(\tau,k_\sigma(\tau))|^2, } where we used the orthonormality of the matrix $D^\alpha{}_i$.
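The light-field part of the single-field factor in Eq.~\eqref{eq: UV PS massive case} is also easy to evaluate numerically. As a sketch (of ours), using that $a=k/(\sigma H)$ and $-k\tau=\sigma$ at coarse-graining scale crossing, so that $\frac{k^3}{2\pi^2}|Q^i|^2=\frac{H^2}{8\pi}\sigma^3|H^{(1)}_{\nu_i}(\sigma)|^2$ for constant $H$:
\begin{verbatim}
import numpy as np
from scipy.special import hankel1, gamma

def PQQ_exact(m2_over_H2, sigma=0.01, H=1.0):
    # k^3/(2 pi^2) |Q^i|^2 at k = sigma a H, light field (nu real)
    nu = np.sqrt(9.0 / 4.0 - m2_over_H2)
    return H**2 / (8.0 * np.pi) * sigma**3 * np.abs(hankel1(nu, sigma))**2

def PQQ_smallsigma(m2_over_H2, sigma=0.01, H=1.0):
    # small-argument limit, cf. Eq. (mod-square-light) below
    nu = np.sqrt(9.0 / 4.0 - m2_over_H2)
    return (H / (2 * np.pi))**2 * (gamma(nu) / gamma(1.5))**2 * (sigma / 2)**(3 - 2 * nu)

for m2 in [0.0, 0.01, 0.1]:
    print(m2, PQQ_exact(m2), PQQ_smallsigma(m2))   # massless: (H/2 pi)^2
\end{verbatim}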
The result~\eqref{eq: UV PS massive case} is interesting because intermediate steps like the parallel-transported vielbeins or the diagonalising matrix $D^\alpha{}_i$ disappear altogether: to compute the right-hand side, the only requirement is to know the mass eigenvalues $m^2_i$ and the corresponding eigenvectors $e^I_i$ forming a set of vielbeins, Eq.~\eqref{originalM2-mi}, which is easy to obtain numerically from $M^{2I}{}_J$ once a position in phase space $\left(\varphi^I,\varpi_I\right)$ is specified. Moreover, the sum in Eq.~\eqref{eq: UV PS massive case} is nicely understood as a mass-weighted metric in field space, and indeed, the massless limit $\propto G^{IJ}$ is easily recovered by setting $\nu_i=3/2,\, \forall i$. Notice also that $\mathcal{P}^{QQIJ}$ is automatically real and symmetric, as it should be from first principles (see Sec.~\ref{subsec: classicalisation}). Moving to momenta, and according to the UV EoM~\eqref{eq: UV EoM}, one obtains \bae{\label{eq: covPi} {\tilde{P}}^i_A=H \partial_NQ^i_A -\frac{\varpi^i\varpi_j}{2M_\text{Pl}^2H}Q^j_A, } where $\varpi_i=e^I_i \varpi_I$. Using the properties of Hankel functions, we note that the time derivative of $Q^i$ can be simply expressed, at leading order in the slow-varying approximation, as \footnote{For completeness, at next order one has instead $q_{\nu_i}(\sigma)=\left(\nu_i-\frac{3}{2}\right)+\epsilon\left(\frac{1}{2}-\nu_i \right)-\sigma\frac{H_{\nu_i-1}^{(1)}(\sigma)}{H_{\nu_i}^{(1)}(\sigma)}$, $\nu_i=\sqrt{9/4+3\epsilon -(1+2\epsilon)m_i^2/H^2}$, and assuming both $\epsilon$ and $m_i^2/H^2$ small, one further obtains $q_{\nu_i}(\sigma) = -m_i^2/(3H^2)+ O(\epsilon^2) + O(\epsilon \times m_i^2/H^2) + O(\sigma^2)$. However, evaluating $q_{\nu_i}$ beyond leading order is too precise compared to the rest of our computation, as we have anyway considered the projected mass matrix to be constant.} \bae{ \partial_NQ^i|_{k_\sigma} =q_{\nu_i}(\sigma)Q^i|_{k_\sigma}, \quad \text{with} \quad q_{\nu_i}(\sigma)= \nu_i-\frac{3}{2} -\sigma\frac{H_{\nu_i-1}^{(1)}(\sigma)}{H_{\nu_i}^{(1)}(\sigma)}\,, \label{def-qnu} } where we evaluated it at the time of crossing of the coarse-graining scale. One therefore obtains \bae{ {\tilde{P}}^i_A|_{k_\sigma}=\sum_{j}\left(Hq_{\nu_i}(\sigma)\delta^i_j-\frac{\varpi^i\varpi_j}{2M_\text{Pl}^2H}\right) (D^{-1})^j{}_A Q^j|_{k_\sigma} \equiv \sum_{j} \mathscr{Q}^i{}_j (D^{-1})^j{}_A Q^j|_{k_\sigma} , } so that all power spectra at coarse-graining scale crossing can be summarised as \bae{ \mathcal{P}^{QQIJ}&=\frac{k_\sigma^3}{2\pi^2}\sum_{i}e^I_i e^J_i|Q^i|^2_{k_\sigma}, \label{eq: power spectra massive case-QQ}\\ \mathcal{P}^{Q{\tilde{P}}I}{}_J&=\frac{k_\sigma^3}{2\pi^2}\sum_{i,j}e^I_i e_{Jj}\mathscr{Q}^{*j}{}_i|Q^i|^2_{k_\sigma}, \label{eq: power spectra massive case-QP}\\ \mathcal{P}^{{\tilde{P}}\covP}{}_{IJ}&=\frac{k_\sigma^3}{2\pi^2}\sum_{i,j,k}e_{Ii}e_{Jj}\mathscr{Q}^i{}_{k}\mathscr{Q}^{*j}{}_k|Q^k|^2_{k_\sigma}. \label{eq: power spectra massive case-PP} } One knows from first principles that $\mathcal{P}^{{\tilde{P}}\covP}{}_{IJ}$ should be real and symmetric, similarly to $\mathcal{P}^{QQIJ}$, while $\Im\mathcal{P}^{Q{\tilde{P}}I}{}_J=\sigma^3H_\star\left(\frac{H_\star}{2\pi}\right)^2\delta^I_J$.
Because our analytical expressions are based on several approximations, these properties are not necessarily precisely verified by Eqs.~\eqref{eq: power spectra massive case-QP}--\eqref{eq: power spectra massive case-PP}, a discrepancy that can be used as a quantitative diagnostic of the quality of the approximations in practical numerical computations. However, we note once again that only the real parts of the power spectra anyway enter into the properties of the stochastic noises (see Eq.~\eqref{noises-FP-coincident}). Although obtained for analytically estimating the noise amplitudes in stochastic multifield inflation, Eqs.~\eqref{eq: power spectra massive case-QQ}--\eqref{eq: power spectra massive case-PP} are of more general interest in the context of multifield inflation with slow-varying quantities, replacing $\sigma$ by $k/aH$ when necessary, and they constitute new results to the best of our knowledge.\footnote{A related formula for the trace $G_{IJ}\mathcal{P}^{QQIJ}$ has already been used without proof in Ref.~\cite{McAllister:2012am}, with excellent agreement with exact numerical computations.} Given the number of approximations performed, it is difficult to control the degree of accuracy of the above formulae, but they constitute a proof of principle that it is possible to obtain Markovian analytical approximations, and they provide a basis for future improvements. The discussion has been kept quite general until now, but as is well known, the behaviour of super-Hubble fluctuations strongly depends on the mass parameter. Hence, from these generically applicable formulae, two physically different regimes should be distinguished in the stochastic context, depending on the values of $m_i^2$, leading either to real positive $\nu_i$ for ``light'' fields (the first line in Eq.~\eqref{def-Qi}), or imaginary $\nu_i \equiv i \mu_i$ for heavy fields (the second line there). For heavy fields, one can write \bae{ \frac{k_\sigma^3(\tau)}{2\pi^2}|Q^i(\tau,k_\sigma(\tau))|^2 \underset{\frac{m_i^2}{H^2}\geq\frac{9}{4}}{=} 4 \pi \mathrm{e}^{-\mu_i \pi}\left(\frac{H}{2\pi}\right)^2 \left(\frac{\sigma}{2}\right)^{3} |H_{i\mu_i}^{(1)}(\sigma)|^2\,, \label{mod-square-massive} } with the small argument expansion \bae{ H_{i\mu}^{(1)}(\sigma) \underset{\sigma \ll 1}{\simeq} -i \frac{\Gamma(i \mu)}{\pi}\left(\frac{\sigma}{2} \right)^{-i \mu}+\frac{1+\coth(\mu \pi)}{\Gamma(1+i \mu)}\left(\frac{\sigma}{2} \right)^{i \mu}\,. } The factor $|H_{i\mu_i}^{(1)}(\sigma)|^2$ in Eq.~\eqref{mod-square-massive} hence describes the characteristic super-Hubble oscillations of heavy fields, but more importantly here, the power spectrum~\eqref{mod-square-massive} is suppressed by $\sigma^3$. This explicit dependence on the \emph{a priori} arbitrary coarse-graining parameter $\sigma$ is not really worrisome: it simply comes from the fact that fluctuations of heavy fields are strongly suppressed on super-Hubble scales, and should simply be discarded from the stochastic description (and in the sums~\eqref{eq: power spectra massive case-QP}--\eqref{eq: power spectra massive case-PP}), whose aim is to describe the long-term dynamics generated by light scalars.
Turning to them, and using $H_\nu^{(1)}(\sigma) \underset{\sigma \ll 1}{\simeq} -(i/\pi) \Gamma(\nu)\left(\sigma/2\right)^{-\nu}$, the last term of $q_{\nu_i}(\sigma)$ in Eq.~\eqref{def-qnu} should be neglected, being of order $\sigma^2$, and one obtains \bae{ \frac{k_\sigma^3(\tau)}{2\pi^2}|Q^i(\tau,k_\sigma(\tau))|^2 \underset{\frac{m_i^2}{H^2}<\frac{9}{4}}{=} \left(\frac{H}{2\pi}\right)^2\left( \frac{\Gamma(\nu_i)}{\Gamma(3/2)} \right)^2 \left(\frac{\sigma}{2}\right)^{3-2\nu_i}\,, \label{mod-square-light} } here with only a power-law dependence on $\sigma$. This dependence can be neglected, and $\left(\frac{\sigma}{2}\right)^{3-2\nu_i}$ can be approximated by unity, under the condition that $\sigma$ is taken to verify \bae{ \frac{\sigma}{2} \gg \mathrm{e}^{-\left(3-2\nu_i \right)^{-1}}, } which is easily compatible with $\sigma \ll 1$ for a light enough mass (see Refs.~\cite{Starobinsky:1994bd,Grain:2017dqa} for discussions in a single-field context). For intermediate masses $0.1 \lesssim m_i^2/H^2 \lesssim 1$, stochastic effects are less important but may not be completely negligible (see, e.g., Ref.~\cite{Fumagalli:2019ohr}), and the resulting $\sigma$-dependence indicates that the coarse-graining procedure, made at leading order in the gradient expansion, should be refined in order to properly treat these situations. More precisely, let us add that in theories that are not completely scale invariant, it is expected that the Langevin equations, which describe the distribution of field values in $\sigma$-Hubble patches, do depend on $\sigma$. Yet another question is to see how $\sigma$ disappears when computing physical observables on scales much larger than the cutoff scale. It is likely that the stochastic-$\delta N$ approach needs to be modified to deal with these situations of intermediate masses, but this is largely outside the scope of this paper. Finally, as discussed in Sec.~\ref{subsec: classicalisation}, one can check explicitly that for light scalars with $m_i^2<9H^2/4$, the complex mode functions $Q^I_A(N,k)$ and ${\tilde{P}}_{IA}(N,k)$ (or equivalently, $Q^i_A$ and ${\tilde{P}}^i_A$) become approximately real up to an irrelevant constant unitary matrix. This stems from the fact that, the $Q^\alpha$ being independent fields inside the Hubble radius, the variables $(D^{-1})^i{}_\alpha Q^\alpha$, obtained by rotation of the former, equally provide a set of independent variables (and indeed, we have seen that the orthonormal matrix $D^\alpha{}_i$ drops out of all correlators). Hence, one could also have rotated the annihilation (and creation) operators $\hat{a}^{A}$ and absorbed in their definitions the individual phase factors $\mathrm{e}^{i \nu_i \pi/2}$ of the mode functions~\eqref{def-Qi}. The corresponding transformation can be described by the relations \eqref{rotation-a-operators}--\eqref{rotation-basis} with the unitary matrix \bae{ U^{{\bar{\indA}}}{}_B=D^{{\bar{\indA}}}{}_i\,\mathrm{diag}\left(\mathrm{e}^{i(\nu_i-3/2)\pi/2}\right){\!}^i{}_j(D^{-1})^j{}_B\,, } with which one obtains the expressions \bae{ \bar{Q}^i_{\bar{\indA}}=(D^{-1})^i{}_{\bar{\indA}} \mathrm{e}^{- i (\nu_i-3/2) \pi/2} Q^i \quad \text{(no sum on $i$)}\,, \qquad \bar{{\tilde{P}}}^i_{{\bar{\indA}}}=\mathscr{Q}^i{}_j \bar{Q}^j_{\bar{\indA}} } that become manifestly real on super-Hubble scales. \section{Conclusions}\label{sec: conclusions} In this paper, we derive an effective stochastic theory for the super-Hubble, coarse-grained, scalar fields during inflation.
We do so in a phase-space approach and for the general class of nonlinear sigma models~\eqref{S-intro}, characterised by their potentials and curved field spaces. We first give in section~\ref{sec: heuristic} a ``heuristic'' derivation of the corresponding Langevin equations in phase space, in order to introduce concepts and notations used throughout. We point out the limitations of the heuristic approach that uses the classical equations of motion, as well as the non-Markovian nature of the dynamics. Section~\ref{sec: stochastic anomalies} is devoted to the resolution of the ``inflationary stochastic anomalies'' that we pointed out in our previous paper~\cite{Pinol:2018euk}: because of the very quantum nature of the scalar fields, the theory contains a preferred frame that corresponds to the basis of independent creation and annihilation operators. This frame must be used to define independent noises in the Langevin equations, removing the possibility of any ambiguity in the choice of such a frame, and the corresponding Langevin equations should be interpreted according to a Stratonovich (midpoint) discretisation scheme. In the course of this discussion, we show how the classicalisation of quantum fluctuations on super-Hubble scales enables one to interpret the noises as classical random variables rather than quantum operators. We also show explicitly the transformation of the Stratonovich-Langevin equations to their It\^o version by the addition of noise-induced drifts, and explain how these terms can be combined with the usual time-derivatives to define new time-derivatives that are covariant in It\^o calculus. With the final form~\eqref{Langevin-intro}, the Langevin equations can be readily used in numerical and analytical computations. Section~\ref{sec: effective hamiltonian action} is devoted to the rigorous derivation of the Langevin equations using a path-integral approach, which solves the remaining conceptual issues of the heuristic one. We begin by recalling that for the intrinsically time-dependent problems of interest in cosmology, as in other nonequilibrium situations, the relevant partition function is the one that dictates ``in-in'' correlation functions and causal equations of motion, and that it is defined by a closed-time path of integration. Equivalently, in this formalism, also known as the Schwinger-Keldysh formalism, the degrees of freedom are doubled along the conventional path, and we pay particular attention to the boundary conditions that connect them. In accordance with first principles, we also use the Hamiltonian action rather than the Lagrangian one, which is conceptually clearer for our phase-space study and in a stochastic context in which fields and momenta are not time-differentiable in the ordinary sense. Finally, to deal with the UV parts of the fields and momenta, we identify phase-space covariant Vilkovisky-DeWitt variables, a crucial step to maintain the general covariance of the stochastic theory under redefinitions of the scalar fields. Because we are only interested in the super-Hubble dynamics, we integrate out explicitly the UV fields from the path integral, and find the influence action that describes the deviation of the IR dynamics from the background one of Standard Perturbation Theory.
The final result is the Hamiltonian, coarse-grained effective action for the IR fields at first order in quantum corrections, which, after a final manipulation consisting in the introduction of auxiliary classical variables $\xi$, can be shown to give rise to the noises in the Stratonovich-Langevin equations. The statistics of the noises at a given time are given by the real parts of the UV power spectra at the coarse-graining scale at that time. The fact that the noises are explicitly real is one of the improvements of the path-integral approach over the heuristic one. In section~\ref{sec:Markovian} we consider cases where the Markovian approximation is valid, and derive the covariant, phase-space Fokker-Planck (FP) equation corresponding to our Langevin equations. Thanks to the resolution of the anomalies, this equation is free from the ambiguities previously present in the literature, even in the single-field case. We also provide explicit analytical formulae for the noise correlations in multifield contexts, for massless scalar fields, as well as in generic situations under a slow-varying approximation. We are confident that the formalism presented in this paper can be used in many interesting applications, both theoretical and phenomenological. First, the It\^o-Langevin equations coupled to the UV EoM could be fully solved numerically without resorting to the Markovian hypothesis. That would however require following the evolution of as many modes as there are time steps in the computation, in order to predict the correct noise amplitude at any time, which depends on the previous realisations of the noises and of the IR dynamics. These simulations should be done a large number of times, in order to compute statistical averages. Although this is the most rigorous approach, it may be simpler to first consider the Markovian approximation, replacing the common $(H/2\pi)^2$ approximation of the noise amplitude by the formulae that we give in Eqs.~\eqref{eq: power spectra massive case-QQ}--\eqref{eq: power spectra massive case-PP}, and only then determine the IR dynamics, either numerically or analytically. Observationally relevant quantities, such as the power spectrum and the full PDF of the curvature perturbation, as well as the mass distribution of PBHs in relevant models, can then be computed by use of the stochastic-$\delta N$ formalism, either applied to the result of many stochastic simulations in separate universes, or readily working at the level of the FP equation~\eqref{eq: phase space fokker-planck}. We stress that due to the generality of our formalism, such computations can be made not only in single-field contexts, but also in the very large class of multifield models with curved field space, where qualitatively new phenomena can be expected. It would also be interesting to compare the computations of correlation functions made with the stochastic formalism to pure QFT calculations, notably in de Sitter, or to determine equilibrium PDFs in phase space, as well as to study the eigenvalues and eigenvectors of the FP operator in simple multifield contexts. Finally, this paper not only provides a useful formalism that can be used from now on, but also paves the way for going further. First, thanks to the rigorous path-integral derivation, corrections to the present stochastic formalism can be in principle computed. Technically, that would require going to next order in the expansion of the Hamiltonian coarse-grained effective action in the quantum components of the fields and momenta.
Another interesting avenue is to unveil the effect of non-Gaussianities on the stochastic formalism, by expanding the Hamiltonian action up to cubic order as we do in Eqs.~\eqref{eq: cubic order hamiltonian action}--\eqref{eq: cubic order hamiltonian action-end}, and considering the effect of non-linear mode couplings. We leave these interesting possibilities for future work. \acknowledgments We are grateful to Thibaut Arnoulx de Pirey Saint Alby, Camille Aron, Cliff Burgess, Guillaume Faye, Tomohiro Fujita, Jacopo Fumagalli, Vivien Lecomte, Jer\^ome Martin, Cyril Pitrou, Gerasimos Rigopoulos, Julien Serreau, Takahiro Tanaka, Junsei Tokuda, Andrew J. Tolley, Vincent Vennin, and Lukas Witkowski for helpful discussions. We also thank the anonymous referee for insightful comments. L.P. and S.RP. are supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement No 758792, project GEODESI). Y.T. is supported by JSPS KAKENHI Grants No. JP18J01992 and No. JP19K14707, and was supported by a grant from R\'egion \^Ile-de-France at the initial stage of this work.
\section{Introduction} In recent years, many semi-supervised regression and classification methods have been proposed, see the surveys by \cite{Chapelle2006, zhu2009introduction, SubramanyaTalukdar2014}. These methods demonstrated empirical success on some data sets, whereas on others the unlabeled data did not appear to help. This raised two key questions of continued interest: (i) under which conditions can the potentially huge amount of unlabeled data help the learning process? and (ii) can we design statistically sound and computationally efficient methods that benefit from the unlabeled data? The {\em cluster} assumption and the {\em manifold} assumption are two common models for studying the above questions regarding semi-supervised learning. Under the cluster assumption, instances with the same label concentrate in well-defined clusters separated by low density regions \citep{Chapelle2005,Rigollet2007,Singh2009}. Under the manifold assumption the data points reside in one or several low-dimensional manifolds, with nearby instances on the manifold having similar response values. \cite{BickelLi2007} as well as \cite{LaffertyWasserman2007} studied semi-supervised learning under the manifold assumption. They showed that without knowing the manifold, standard multivariate polynomial regression in the ambient space, using only the labeled data, achieves the asymptotic minimax rate for Sobolev functions. According to these results, it seems there is little benefit to the availability of additional unlabeled data. However, these results require that the number of labeled samples tends to infinity. Intuitively, in this limit, the geometry of the data manifold and the sampling density can be accurately estimated from the labeled data alone. Thus the benefits of a potentially huge number of unlabeled points when there is little labeled data remained unclear. One of the goals of this work is to clarify this benefit of unlabeled data for rather general manifolds, via a finite sample analysis, whereby the number of labeled samples is fixed. In this context, \cite{Niyogi2013} showed that unlabeled data can indeed help, by presenting a specially constructed manifold, for which supervised learning is provably more difficult than semi-supervised learning. \cite{GoldbergZhuSinghXuNowak2009} considered this question under both the manifold and multi-manifold cases. In particular, in their Section 2.1, they conjectured that semi-supervised learning of a H\"older function on an unknown manifold with intrinsic dimension $d$ can achieve the finite-sample minimax bound for nonparametric regression in $\mathbb{R}^d$. In this paper we prove that when the regressed function is Lipschitz, a simple semi-supervised regression method based on geodesic nearest neighbor averaging achieves the finite-sample minimax bound when the amount of unlabeled points is sufficiently large. This settles the conjecture of \cite{GoldbergZhuSinghXuNowak2009} for the Lipschitz case. The regression method we consider, denoted \emph{geodesic kNN regression}, consists of two steps: (i) estimate the manifold geodesic distances by shortest-path distances in a graph constructed from both the labeled and unlabeled points; and (ii) estimate the response at any point by averaging its $k$ geodesic nearest labeled neighbors. Section \ref{sec:framework} describes the graph construction and the corresponding nonparametric statistical estimation method. 
Our main result, detailed in Section \ref{sec:stat_analysis}, is a proof that for a Lipschitz function on a manifold, if enough unlabeled samples are available, then with high probability this method achieves the finite-sample minimax bound. In Section \ref{sec:fastgknn} we discuss the computational aspects of this approach, which is very fast compared to spectral-based semi-supervised methods. Finally, in Section \ref{sec:application} we apply our method to two problems with a low dimensional manifold structure, indoor localization using WiFi fingerprints and facial pose estimation. On both problems geodesic kNN exhibits a marked improvement compared to classical kNN, which does not utilize the unlabeled data, and also compared to the popular semi-supervised regression method of \cite{BelkinNiyogi2004}.

\section{Semi-supervised learning with geodesic distances} \label{sec:framework}

We consider the following framework for semi-supervised learning. Given $n$ labeled instances $\mathcal{L} = \{({\bf x}_i, y_i)\}_{i=1}^n $ and $m$ unlabeled instances $\mathcal U = \{{\bf x}_j\}_{j=1}^m$ from an instance space $\mathcal{X}$ equipped with a distance function $d({\bf x},{\bf x}')$: \begin{enumerate} \item Construct an undirected (sparse) graph $G$ whose vertices are all the labeled and unlabeled points. Pairs of close points ${\bf x}, {\bf x}'$ are then connected by an edge with weight $w({\bf x}, {\bf x}') = d({\bf x}, {\bf x}')$. \item Compute the shortest-path graph distance $d_G({\bf x}_i, {\bf x}_j)$ for all ${\bf x}_i \in \mathcal L$ and ${\bf x}_j \in \mathcal L \cup\mathcal U$. \item Apply standard metric-based supervised learning methods, such as kNN or Nadaraya-Watson, using the computed graph distances $d_G$. \end{enumerate} This framework generalizes the work of \citet{BijralRatliffSrebro2011}, which assumed that the samples are vectors in $\mathbb{R}^D$ and the distance function is \( \| {\bf x}_i - {\bf x}_j\|_p^q. \) The use of geodesic nearest neighbors for classification was also considered by \cite{BelkinNiyogi2004}. Specific edge selection rules include the distance-cutoff rule, whereby two points are connected by an edge if their distance is below a threshold, and the symmetric kNN rule, where every point is connected by an edge to its $k$ nearest neighbors and vice versa \citep{AlamgirVonluxburg2012, TingHuangJordan2010}.

The elegance of this framework is that it \textit{decouples} the unsupervised and supervised parts of the learning process. It represents the geometry of the samples by a single metric $d_G$, thus enabling the application of any supervised learning algorithm based on a metric. For classification, a natural choice is the $k$ nearest neighbors algorithm. For regression, one may similarly employ a $k$ nearest neighbor regressor. For any ${\bf x}_i\in\mathcal L\cup\mathcal U$, let kNN$({\bf x}_i) \subseteq \mathcal{L}$ denote the set of $k$ (or fewer) nearest \emph{labeled} neighbors to ${\bf x}_i$, as determined by the graph distance $d_G$. The \emph{geodesic kNN regressor} at \({\bf x}_i\) is \begin{equation} \label{eq:gknn_regressor} \hat{f}({\bf x}_{i}) := \frac{1}{|\text{kNN}({\bf x}_i)|}\sum_{({\bf x}_j, y_j) \in \text{kNN}({\bf x}_i)} y_j. \end{equation}
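To make the framework concrete, the following is a minimal Python sketch of steps 1--3 together with the estimator of Eq.~\eqref{eq:gknn_regressor}, using a symmetric kNN graph and the naive shortest-path computation (one Dijkstra run per labeled point; Section~\ref{sec:fastgknn} presents a much faster algorithm). The function name, the library calls and the default parameter values are illustrative assumptions, and the sketch further assumes a connected graph and $n \ge k$.

\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import kneighbors_graph

def geodesic_knn_regress(X_lab, y_lab, X_unlab, graph_nn=4, k=7):
    # Step 1: symmetric kNN graph over all points, with edge weights
    # equal to Euclidean distances (an edge is kept whenever either
    # endpoint lists the other among its graph_nn nearest neighbors).
    X = np.vstack([X_lab, X_unlab])
    n = len(X_lab)
    W = kneighbors_graph(X, n_neighbors=graph_nn, mode='distance')
    W = W.maximum(W.T)
    # Step 2: graph distances d_G from every labeled point to all
    # points, one Dijkstra run per labeled point.
    d_G = dijkstra(W, directed=False, indices=np.arange(n))
    # Step 3: geodesic kNN regression: average the responses of the
    # k labeled points that are closest in graph distance.
    nearest = np.argsort(d_G, axis=0)[:k]   # shape (k, n + m)
    return np.asarray(y_lab)[nearest].mean(axis=0)
\end{verbatim}

The decoupling is visible here: once the matrix $d_G$ is available, the last step is ordinary kNN regression with $d_G$ in place of the ambient metric.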
We now extend the definition of $\hat{f}({\bf x})$ to the inductive setting. Assume we have already computed the regression estimates $\hat{f}({\bf x}_i)$ of Eq. \eqref{eq:gknn_regressor} for all points in $\mathcal{L} \cup \mathcal{U}$. For a new instance ${\bf x} \notin \mathcal{L} \cup \mathcal{U}$, we first find its \emph{Euclidean} nearest neighbor ${\bf x}^*$ from $\mathcal{L} \cup\mathcal{U}$. This can be done in sublinear time either using data structures for spatial queries \citep{Omohundro1989,Bentley1975} or by employing approximate nearest neighbor methods \citep{AndoniIndyk2006}. Then the geodesic kNN regression estimate at ${\bf x}$ is \begin{align} \hat{f}({\bf x}) := \hat{f}({\bf x}^*) = \hat{f} \left( \argmin_{{\bf x}' \in \mathcal{L} \cup \mathcal{U}} \| {\bf x} - {\bf x}'\| \right). \label{eq:inductive} \end{align}

\section{Statistical analysis under the manifold assumption} \label{sec:stat_analysis}

We now analyze the statistical properties of the geodesic kNN\ regressor \(\hat f\) of Eq. (\ref{eq:inductive}), under the manifold assumption. We consider a standard nonparametric regression model, $Y = f(X) + \mathcal{N}(0,\sigma^2)$ where $X \in \mathbb{R}^D$ is drawn according to a measure $\mu$ on $\mathcal M$. We prove that if $f$ is Lipschitz with respect to the manifold distance and if enough unlabeled points are available then $\hat{f}$ attains the minimax bound on the mean squared error.

To this end, we first review some classical results in nonparametric estimation. Let $\hat f:\mathbb{R}^D \to \mathbb{R}$ be an estimator of a function $f$, based on $n$ noisy samples. Let \( \text{MSE}(\hat{f}, {\bf x}) := \expect{ ( \hat{f}({\bf x}) - f({\bf x}) )^2} \) be its mean squared error at a point ${\bf x} \in \mathbb{R}^D$, where the expectation is over the random draw of data points. It can be shown that for \emph{any} estimator $\hat{f}$ and any point ${\bf x} \in \mathbb{R}^D$, there is some Lipschitz function $f$ such that \( \text{MSE}(\hat{f}, {\bf x}) \ge c n^{-\frac{2}{2+D}} \) for some constant $c > 0$ that depends only on the Lipschitz constant and the noise level. The term $n^{-\frac{2}{2+D}}$ is thus termed a \emph{finite-sample minimax lower bound} on the MSE at a point. Several results of this type were derived under various measures of risk and classes of functions \citep{Tsybakov2009,Gyorfi2002}. Standard nonparametric methods such as Nadaraya-Watson or kNN regression have an upper bound on their MSE that is also of the form $c' n^{-\frac{2}{2+D}}$. Hence these methods are termed \emph{minimax optimal} for estimating a Lipschitz function. In Theorem \ref{thm:minimax_optimality} below we prove that given a sufficient number of \emph{unlabeled} points, the MSE of the geodesic kNN regressor is upper-bounded by $cn^{-\frac{2}{2+d}}$ where $c$ is some constant and $d$ is the \textit{intrinsic} dimension of the manifold. Hence the geodesic kNN regressor is minimax-optimal and adaptive to the geometry of the unknown manifold.

\subsection{Notation and prerequisites}

Our main result relies on the analysis of \cite{TenenbaumDesilvaLangford2000} regarding the approximation of manifold distances by graph distances. Before stating our result, we thus first introduce some notation, our assumptions and a description of the key results of \cite{TenenbaumDesilvaLangford2000} that we shall use. For a general background on smooth manifolds, see for example the book by \cite{Lee2012}. We assume that the data manifold ${\mathcal M} \subseteq \mathbb{R}^D$ is a compact smooth manifold of known intrinsic dimension $d$, possibly with boundaries and corners. We further assume that ${\mathcal M}$ is geodesically convex, i.e., that every two points in ${\mathcal M}$ are connected by a geodesic curve.
We denote by $d_{\mathcal M}({\bf x}, {\bf x}')$ the length of the shortest path between two points in $\mathcal M$, the diameter of ${\mathcal M}$ by \( \text{diam}({\mathcal M}) := \sup_{{\bf x}, {\bf x}'} d_{\mathcal M}({\bf x}, {\bf x}') \) and the manifold-ball of points around ${\bf x}$ by $B_{\bf x}(r) := \{{\bf x}' \in {\mathcal M} : d_{\mathcal M}({\bf x}, {\bf x}') < r \}$. We denote the volume of ${\mathcal M}$ by $V$ and the minimum volume of a manifold ball of radius $r$ by $V_{\min}(r) := \min_{{\bf x} \in {\mathcal M}} \text{Vol}(B_{\bf x}(r))$. We denote by $r_0$ the minimal radius of curvature of ${\mathcal M}$ and by $s_0$ its minimal branch separation (see the supplementary of \cite{TenenbaumDesilvaLangford2000} for precise definitions). We assume that the data points are sampled i.i.d. from some measure $\mu$ on ${\mathcal M}$ with associated density function $\mu({\bf x})$. For every point ${\bf x} \in {\mathcal M}$ and radius $r \le R$ we assume that $\mu(B_{{\bf x}}(r)) \ge Q r^d$ where $R,Q > 0$. This condition means that the measure of small balls grows with the radius as is typical for dimension $d$. In particular, it guarantees that the minimum density $\mu_{\min}:=\min_{{\bf x}\in\mathcal M}\mu({\bf x})> 0$. Finally, we assume that $f:{\mathcal M} \to \mathbb{R}$ is a bounded $L$-Lipschitz function on ${\mathcal M}$, \begin{align} \label{eq:lipschitz} \forall {\bf x}, {\bf x}' \in \mathcal{M}:|f({\bf x})-f({\bf x}')| \le L d_{\mathcal{M}}({\bf x}, {\bf x}'). \end{align} We now reproduce the statement of Theorem B. \paragraph{Theorem B. \citep{TenenbaumDesilvaLangford2000}} \textit{Let ${\mathcal M} \subseteq \mathbb{R}^D$ be a compact smooth and geodesically convex manifold of intrinsic dimension $d$. Let $\delta, \epsilon, r > 0$ be constants. Let $X_1, \ldots, X_N \overset{i.i.d.}{\sim} \mu$ be a sample of points on ${\mathcal M}$ and suppose we use these points to construct a graph $G$ using the distance-cutoff rule with threshold $r$ where $r < \min\{s_0, (2/\pi) r_0 \sqrt{24 \delta}\}$. } \textit{\noindent Denote by $A$ the event that the inequalities \begin{align} \label{eq:isomap_distance_bounds} 1-\delta \le d_G(X_i, X_j) / d_{\mathcal M}(X_i, X_j) \le 1+\delta \end{align} hold for all pairs $X_i, X_j$, where $1 \le i,j \le N$. Then} \begin{align} \label{eq:pr_A} \pr{ A \Big| N > \frac{ \log \left( V / \epsilon V_{\min} \left( \tfrac{\delta r}{16} \right) \right) }{ \mu_{\min} V_{\min} \left( \tfrac{\delta r}{8} \right) } } \ge 1 - \epsilon. \end{align} \begin{remark} By Theorem C in \citep{TenenbaumDesilvaLangford2000}, a similar result holds for the symmetric kNN rule. \end{remark} \begin{remark} In the typical case where $V_{\min}(r) \sim r^{d}$, if we fix $\epsilon, \delta$ we must have $N \gtrsim \frac{1}{\mu_{\min}} (8 / \delta r)^d$. In other words, the required number of samples for Eq. \eqref{eq:isomap_distance_bounds} to hold is exponential in the intrinsic dimension $d$. \end{remark} \begin{remark} If we fix $N, \delta, r$ and invert Eq. \eqref{eq:pr_A}, we conclude that $\pr{A^c}$ decays exponentially with $N$, \begin{align} \label{eq:epsilon_bound} \pr{A^c} < \epsilon = c_a e^{-c_b N} \end{align} where $c_a = V / V_{\min}(\frac{\delta r}{16})$ and $c_b = V_{\min}(\frac{\delta r}{8}) \cdot \mu_{\min}$. A similar bound holds for the symmetric kNN graph. \end{remark} \begin{remark} Theorems B and C consider points drawn from a Poisson point process. However, they hold also in the case of an i.i.d. draw of $N$ points.
See page 11 of the supplement of \cite{TenenbaumDesilvaLangford2000}. \end{remark} \subsection{Main result} We are now ready to state our main theorem. It bounds the expected MSE of the geodesic kNN regressor $\hat{f}({\bf x})$ at a fixed point ${\bf x} \in {\mathcal M}$, where the expectation is over the draw of $n$ labeled and $m$ unlabeled points. \begin{theorem} \label{thm:minimax_optimality} Consider a fixed point ${\bf x} \in {\mathcal M}$. Suppose the manifold ${\mathcal M}$, the measure $\mu$ and the regression function $f$ satisfy all the assumptions stated above. Then, the geodesic kNN regressor of Eq. \eqref{eq:inductive} computed using the distance-cutoff rule with $r$ as in Theorem B, or a symmetric kNN rule with a suitable $k$, satisfies \begin{align} \label{eq:main_theorem} &\expect{( \hat{f} \left( {\bf x} \right) - f({\bf x}) )^2 } \le c n^{-\frac{2}{2+d}} + c' e^{-c'' \cdot(n+m)} f_D^2, \end{align} where $f_D := f_{\max} - f_{\min}$. The coefficients $c,c',c''$ are independent of the sample size. They depend only on the Lipschitz constant of $f$, the noise level $\sigma$, properties of ${\mathcal M}$ and $\mu$ and on the parameters $\epsilon, \delta$ in Theorem B. \end{theorem} \begin{proof} By Eq. (\ref{eq:inductive}), \(\hat f({\bf x})=\hat f({\bf x}^*)\), where ${\bf x}^*$ is the nearest point to ${\bf x}$ from $\mathcal L\cup\mathcal U$. Since \( (a+b)^2 \le 2 a^2 + 2 b^2 \), \begin{align} &\expect{(\hat{f}({\bf x}) - f({\bf x}))^2} = \expect{(\hat{f}({\bf x}^*) - f({\bf x}))^2} \nonumber \\ &= \expect{\left( ( \hat{f} \left( {\bf x}^* \right) - f({\bf x}^*) ) + ( f({\bf x}^*) - f({\bf x}) ) \right)^2 } \nonumber \\ &\le 2\expect{( \hat{f}\left({\bf x}^*\right) - f({\bf x}^*))^2} + 2\expect{\left( f\left({\bf x}^*\right) - f({\bf x})\right)^2}. \nonumber \end{align} Bounds on these two terms are given by Lemmas \ref{lemma:fxstar_minus_fx_squared} and \ref{lemma:fhat_xstar_minus_f_xstar} below. In each of these lemmas the bound is composed of a term \( c_1 n^{-\frac{2}{2+d}} \) and an exponential term of the form $c_2 e^{-c_3(n+m)} f_D^2$. Hence, Eq. \eqref{eq:main_theorem} follows. \end{proof} \begin{remark} While the exponential term in Eq. \eqref{eq:main_theorem} may be huge for small sample sizes, if the number of \emph{unlabeled} samples is large enough then it is guaranteed to be small with respect to the first term for \emph{any} number of labeled samples $n$. It thus can be absorbed into the coefficient $c$ with negligible effect. \end{remark} \begin{remark} \citet[Theorem 1]{Kpotufe2011} proved that for data sampled from an unknown manifold, even classical (supervised) kNN based on Euclidean distances achieves the minimax bound up to log factors. However, his result requires $O(\log n)$ labeled points in a small Euclidean ball around ${\bf x}$. This is different from our result, which holds for any number of labeled points $n$ and does not include log factors. \end{remark} We now state and prove the two lemmas used in the proof of Theorem \ref{thm:minimax_optimality}. To this end, let $X_1, \ldots, X_{n+m} \overset{i.i.d.}{\sim} \mu$, and let $Y_i=f(X_i)+\eta_i$ be the observed responses at the first $n$ (labeled) points, where $\eta_1,\ldots,\eta_n\overset{i.i.d.}{\sim} \mathcal N(0,\sigma^2)$. \begin{lemma} \label{lemma:fxstar_minus_fx_squared} Let ${\bf x} \in {\mathcal M}$ and let ${\bf x}^*$ be its Euclidean nearest point from $\{X_1, \ldots, X_{n+m} \}$.
For any $L$-Lipschitz function $f$ and measure $\mu$ that satisfies $\mu(B_{\bf z}(r)) \ge Q r^d$ for all $ r \le R$ and ${\bf z}\in\mathcal M,$ \begin{eqnarray} \expect{\left( f \left( {\bf x}^* \right) - f({\bf x})\right)^2} & \le & \frac{2 L^2}{(1-e^{-Q})^2} n^{-\frac{2}{2+d}} + \nonumber\\ &&e^{-Q R^d (n+m)} \cdot f_D^2. \nonumber \end{eqnarray} \end{lemma} \begin{proof} Let $E_R$ denote the event that $d_{\mathcal M}({\bf x}, {\bf x}^*) \le R$. \begin{align} \label{eq:E_f_xstar_minus_fx_squared} &\expect{\left( f({\bf x}^*) - f({\bf x})\right)^2} \\ &\le \pr{E_R} \cdot \expect{\left( f({\bf x}^*) - f({\bf x})\right)^2|E_R} + \pr{E_R^c} f_D^2. \nonumber \end{align} Since $\mu(B_{{\bf x}}(r)) \ge Q r^d$ for any $r \le R$, \begin{align*} \pr{E_R^c} &= \pr{d_{{\mathcal M}}({\bf x}, {\bf x}^*) > R} \\ &\le (1-QR^d)^{n+m} \le e^{-Q R^d (n+m)}. \end{align*} Next, we bound the first term of \eqref{eq:E_f_xstar_minus_fx_squared}. Since \(f\) is $L$-Lipschitz w.r.t. the manifold, \begin{align*} \expect{\left( f({\bf x}^*) - f({\bf x})\right)^2|E_R} \le L^2 \expect{d^2_{\mathcal M}({\bf x}^*, {\bf x}) | E_R}. \end{align*} Recall that for a non-negative random variable $Z$, $\expect{Z} = \int_0^\infty \pr{Z > t}dt$. Applying this to $d_{\mathcal M}^2({\bf x}^*, {\bf x})$, \begin{align*} &\expect{d^2_{\mathcal M}({\bf x}^*, {\bf x}) |E_R} \!= \int_0^{\text{diam}^2({\mathcal M})} \! \pr{d^2_{{\mathcal M}}({\bf x}^*, {\bf x})>t | E_R} dt \\ &= \int_0^{\text{diam}^2({\mathcal M})} \frac{\pr{d^2_{{\mathcal M}}({\bf x}^*, {\bf x})>t \text{ and } E_R}}{\pr{E_R}} dt \\ & = \frac{1}{\pr{E_R}} \int_0^{R^2} \pr{d_{\mathcal M}^2({\bf x}^*, {\bf x}) \in(t,R^2)} dt \\ & \leq \frac{1}{\pr{E_R}} \int_0^{R^2} \pr{ d_{\mathcal M}({\bf x}^*, {\bf x}) > \sqrt{t} } dt. \end{align*} Lemma \ref{lemma:probability_integral} in the supplementary gives the following bound, which is independent of $R$: \begin{align*} &\int_0^{R^2} \pr{d_{\mathcal M}({\bf x}^*, {\bf x}) > \sqrt{t} } dt \\ &\le 2 (1-e^{-Q})^{-2}(n+m)^{-\frac{2}{d}} \le 2 (1-e^{-Q})^{-2} n^{-\frac{2}{2+d}}. \end{align*} Combining all of the above concludes the proof.\end{proof} \begin{lemma} \label{lemma:fhat_xstar_minus_f_xstar} Under the same conditions as Lemma \ref{lemma:fxstar_minus_fx_squared}, \begin{align*} &\expect{(\hat{f}({\bf x}^*) - f({\bf x}^*))^2} \\ &\le \left( 2 L^2 \big( \tfrac{1+\delta}{1-\delta} \big)^2 c_1({\mathcal M},\mu,\delta) + \sigma^2 \right) n^{-\frac{2}{2+d}} \\ &+ 4 c_a e^{-c_b (n+m)} f_D^2, \end{align*} where $\delta$ is the approximation ratio of Eq. (\ref{eq:isomap_distance_bounds}). The coefficients \(c_{a}\) and \(c_{b}\) depend on $\delta$ and on the manifold ${\mathcal M}$ and graph construction parameters (see Eq. \eqref{eq:epsilon_bound}). \end{lemma} \begin{proof} By the bias-variance decomposition and the law of total variance, \begin{align} \label{eq:bias_variance_decomp} &\expect{(\hat{f}({\bf x}^*) - f({\bf x}^*))^2} = \text{bias}^2 \left( \hat{f} ( {\bf x}^*) \right) + \var{\hat{f}({\bf x}^*)} \nonumber \\ &= \text{bias}^2 \left( \hat{f} ( {\bf x}^*) \right) + \mathbb{E} \left[ \var{ \hat{f}({\bf x}^*) | X_1, \ldots, X_{n+m} } \right] \nonumber \\ &+ \var{\mathbb{E}\left[ \hat{f}({\bf x}^*) | X_1, \ldots, X_{n+m} \right]} . \end{align} We now bound these three terms separately.
We start with the bias term, which we split into two parts, depending on the event $A$: \begin{eqnarray*} \text{bias} ( \hat{f} ( {\bf x}^*) ) &=& \pr{A} \cdot \text{bias} ( \hat{f} ( {\bf x}^*) | A ) +\\ && \pr{A^c} \cdot \text{bias} ( \hat{f} ( {\bf x}^*) | A^c ) \\ &\le& (1-\epsilon) \cdot \text{bias} ( \hat{f} ( {\bf x}^*) | A ) + \epsilon \cdot f_{D}. \end{eqnarray*} Therefore, \begin{align} &\text{bias}^2 ( \hat{f} ( {\bf x}^*) ) \le (1-\epsilon)^2 \text{bias}^2 ( \hat{f} ( {\bf x}^*) | A ) \nonumber \\ &+ 2 \epsilon(1-\epsilon) \text{bias} ( \hat{f} ( {\bf x}^*) | A ) f_D + \epsilon^2 f_D^2 \nonumber\\ &\le \text{bias}^2 ( \hat{f} ( {\bf x}^*) | A ) \label{ineq:squared_bias} + 3 \epsilon f_D^2. \end{align} Denote by $X^{(i,n)}_G({\bf x}^*)$ the $i$-th closest labeled point to ${\bf x}^*$ according to the graph distance. Let $Y^{(i,n)}_G({\bf x}^*)$ be its response and $\eta^{(i,n)}_G({\bf x}^*) = Y^{(i,n)}_G({\bf x}^*) - f( X^{(i,n)}_G({\bf x}^*))$ the noise. Using this notation, the geodesic kNN regression estimate of Eq. \eqref{eq:gknn_regressor} is \begin{align} \label{eq:gknn_explicit} \hat{f}({\bf x}^*) = \sum_{i=1}^k \frac{Y_G^{(i,n)}({\bf x}^*)}{k} = \sum_{i=1}^k \frac{f ( X_G^{(i,n)}({\bf x}^*) ) + \eta_G^{(i,n)}({\bf x}^*)}{k}. \end{align} For any random variable $Z$, we have $\mathbb{E}^2 \left[ Z \right] \le \expect{Z^2}$. Applying this, we get a bound on $\text{bias}^2 ( \hat{f} ( {\bf x}^*) | A )$: \begin{align} &\text{bias}^2 ( \hat{f} ( {\bf x}^*) | A ) \nonumber = \mathbb{E}^2 \Big[ \tfrac{1}{k} \sum_{i=1}^{k} f(X_G^{(i,n)}({\bf x}^*)) - f({\bf x}^*) \Big| A \Big] \nonumber \\ &\le \mathbb{E} \Big[ \big( \tfrac{1}{k} \sum_{i=1}^{k} ( f(X_G^{(i,n)}({\bf x}^*)) - f({\bf x}^*) ) \big)^2 \Big| A \Big] \label{eq:squared_bias_first_bound} \\ &\le \mathbb{E} \Big[ \big( \tfrac{1}{k} \sum_{i=1}^{k} L \cdot d_{\mathcal{M}} ( X_G^{(i,n)}({\bf x}^*), {\bf x}^* ) \big)^2 \Big| A \Big]. \label{eq:squared_bias_second_bound} \end{align} Conditioned on $A$, Eq. \eqref{eq:dMXM_less_dMXG} in the supplementary gives the bound \[ d_{\mathcal{M}} \left( X_G^{(i,n)}({\bf x}^*), {\bf x}^* \right) \le \tfrac{1+\delta}{1-\delta} d_{\mathcal{M}} \left( X_{\mathcal M}^{(i,n)}({\bf x}^*), {\bf x}^* \right), \] where $X_{\mathcal M}^{(i,n)}({\bf x}^*)$ denotes the $i$-th closest labeled point to ${\bf x}^*$ according to the manifold distance $d_{\mathcal M}$. Randomly split the labeled samples $X_1, \ldots, X_n$ into disjoint subsets $S_1, \ldots, S_{k+1}$, such that $|S_1|=\ldots=|S_k| = \lfloor \tfrac{n}{k} \rfloor$ and $S_{k+1}$ contains the remaining elements. Let $S_i({\bf x}^*) := \argmin_{{\bf x}' \in S_i} d_{\mathcal M}({\bf x}^*, {\bf x}')$ be the closest element to ${\bf x}^*$ in $S_i$. Clearly, \begin{align} \label{eq:split_S_i} \sum_{i=1}^{k} d_{\mathcal M}\left(X_{\mathcal M}^{(i,n)}({\bf x}^*), {\bf x}^* \right) \le \sum_{i=1}^{k} d_{\mathcal M}\left(S_i({\bf x}^*), {\bf x}^* \right). \end{align} Inserting this into Eq. \eqref{eq:squared_bias_second_bound} and applying Jensen's inequality, \begin{align} &\text{bias}^2 ( \hat{f} ( {\bf x}^*) | A ) \le\nonumber L^2 \mathbb{E} \Big[ \big( \frac{1}{k} \sum_{i=1}^{k} \tfrac{1+\delta}{1-\delta} d_{\mathcal M} ( S_i({\bf x}^*), {\bf x}^* ) \big)^2 \big| A \Big] \\ &\le L^2 \big( \tfrac{1+\delta}{1-\delta} \big)^2 \mathbb{E} \Big[ \frac{1}{k} \sum_{i=1}^{k} d_{\mathcal{M}}^{2} ( S_i({\bf x}^*), {\bf x}^* ) \big| A \Big] \nonumber \\ &= L^2 \big( \tfrac{1+\delta}{1-\delta} \big)^2 \mathbb{E} \big[ d_{\mathcal{M}}^{2} \left( S_1({\bf x}^*), {\bf x}^* \right) \big| A \big]. \nonumber \end{align} The set $S_1$ is simply a random draw of $ \lfloor \tfrac{n}{k} \rfloor $ points.
By Lemma \ref{lemma:manifold_dist_to_gnn} in the supplementary, \( \expect{d_{\mathcal{M}}^{2} \left( S_1({\bf x}^*), {\bf x}^* \right) \Big| A } \le c_1({\mathcal M},\mu,\delta) \lfloor \tfrac{n}{k} \rfloor^{-\frac{2}{d}}. \) Plugging this back into Eq. \eqref{ineq:squared_bias}, we obtain a bound on the squared bias. \begin{align} \label{ineq:squared_bias_bound} \text{bias}^2 \le L^2 \left( \tfrac{1+\delta}{1-\delta} \right)^2 c_1({\mathcal M},\mu,\delta) \lfloor \tfrac{n}{k} \rfloor^{-\frac{2}{d}} + 3\epsilon f_D^2. \end{align} We now bound the second term in Eq. \eqref{eq:bias_variance_decomp}. Consider the definition of $\hat{f}({\bf x}^*)$ in Eq. \eqref{eq:gknn_explicit}. Conditioned on $X_1, \ldots, X_{n+m}$, the terms $f(X_G^{(i,n)}({\bf x}^*))$ are constants. The noise $\eta$ has zero mean and is independent of the draw of $X_1, \ldots, X_{n+m}$. Therefore \begin{align} \label{eq:variance_bound} \var{\hat{f}({\bf x}^*) \big| X_1, \ldots, X_{n+m}} = \sigma^2 / k. \end{align} To bound the third term in \eqref{eq:bias_variance_decomp}, we note that for any real random variable $Z$ and any $c \in \mathbb{R}$, we have \( \var{Z} = \expect{(Z - \expect{Z})^2} \le \expect{(Z - c)^2} \). Hence, \begin{align*} &\var{\expect{\hat{f}({\bf x}^*) \big| X_{1 \ldots n+m}}} = \text{Var} \Big( \tfrac{1}{k}\sum_{i=1}^{k} f(X_G^{(i,n)}({\bf x}^*)) \Big) \\ &\le \mathbb{E} \Big[ \big( \tfrac{1}{k}\sum_{i=1}^{k} f(X_G^{(i,n)}({\bf x}^*)) - f({\bf x}^*) \big)^2 \Big]. \end{align*} We split this expectation with respect to the event $A$ and apply the bound we computed for Eq. \eqref{eq:squared_bias_first_bound}. \begin{align} \label{ineq:f_X_G_minus_f} &\mathbb{E} \Big[ \big( \tfrac{1}{k}\sum_{i=1}^{k} f(X_G^{(i,n)}({\bf x}^*)) - f({\bf x}^*) \big)^2 \Big] \\ &= \pr{A} \cdot \mathbb{E} \Big[ \big( \tfrac{1}{k}\sum_{i=1}^{k} f(X_G^{(i,n)}({\bf x}^*)) - f({\bf x}^*) \big)^2 \big| A \Big] \nonumber \\ &+ \pr{A^c} \cdot f_D^2 \le L^2 \left( \tfrac{1+\delta}{1-\delta} \right)^2 c_1({\mathcal M},\mu,\delta) \lfloor \tfrac{n}{k} \rfloor^{-\frac{2}{d}} + \epsilon f_D^2. \nonumber \end{align} To conclude, by inserting equations \eqref{ineq:squared_bias_bound}, \eqref{eq:variance_bound} and \eqref{ineq:f_X_G_minus_f} into Eq. \eqref{eq:bias_variance_decomp}, and applying the bound on $\epsilon$ in Eq. \eqref{eq:epsilon_bound}, we obtain \begin{align} &\expect{(\hat{f}({\bf x}^*) - f({\bf x}^*))^2} \nonumber\\ &\le 2 L^2 \left( \tfrac{1+\delta}{1-\delta} \right)^2 c_1({\mathcal M},\mu,\delta) \lfloor \tfrac{n}{k} \rfloor^{-\frac{2}{d}} \nonumber\\ &+ 4 c_a e^{-c_b (n+m)} \cdot f_D^2 + \frac{\sigma^2}{k}. \nonumber \end{align} The lemma follows by setting $k = \lceil n^{\frac{2}{2+d}} \rceil $. \end{proof} \section{Computation of geodesic kNN} \label{sec:fastgknn} Theorem \ref{thm:minimax_optimality} shows that the geodesic kNN regressor outlined in Section \ref{sec:framework} is mini\-max optimal. In this section we describe how it can also be computed efficiently, assuming that the graph has already been constructed (more on this in Section \ref{sec:graph_construction} below). Computing $\hat f({\bf x})$ for all points in a dataset reduces to the following algorithmic problem: Let $G=(V,E)$ be a weighted undirected graph and let $\mathcal L \subseteq V$ be a subset of labeled vertices. How can we efficiently find the $k$ nearest labeled neighbors of every vertex in the graph? Denote $n = |\mathcal L|,\ N = |V|$. 
A simple approach to this problem is to first apply Dijkstra's algorithm from each of the labeled points, forming an \(n\times N\) matrix of all pairwise shortest graph distances \(d_G(s,v)\), where $s\in\mathcal L$ and $v\in V$. The $k$ nearest labeled vertices of the $j$\textsuperscript{th} vertex correspond to the $k$ smallest entries in the $j$\textsuperscript{th} column. The runtime of this method is $O \left( n N\log N+n|E| \right)$ \citep{dasgupta2006algorithms}. For $k=1$, where one computes the single nearest labeled vertex to every vertex in a graph, the result is known as the graph Voronoi diagram, with the labeled vertices acting as the centers of the Voronoi cells. A fast algorithm for this problem was developed by \citet{Erwig2000}. Algorithm \ref{alg:graph_k_nn} that we present here is a generalization of his approach for any $k \ge 1$. Before describing it, we briefly recall Dijkstra's shortest path algorithm: Given a seed vertex $s \in V$, Dijkstra's algorithm keeps, for every vertex $v \in V$, an upper bound on $d_G(s, v)$, denoted $u[v]$, initialized to $0$ if $v = s$ and to $+\infty$ otherwise. At every iteration, the vertex $v_0$ with the lowest upper bound is \emph{visited}: For every neighbor $v$ of $v_0$, if $u[v_0] + w(v_0, v) < u[v]$, then the current upper bound $u[v]$ is lowered. $v_0$ is never visited again. The basic idea behind Algorithm \ref{alg:graph_k_nn} can be described as running $n$ instances of Dijkstra's algorithm ``simultaneously'' from all labeled vertices. This is combined with an early stopping rule that stops expanding a vertex once $k$ paths from different labeled vertices have been found. As in Dijkstra's algorithm, Algorithm \ref{alg:graph_k_nn} uses a priority queue based on a Fibonacci heap with the three standard operations: insert, pop-minimum and decrease-key. We use decrease-or-insert as a shorthand for decreasing the key of an element if it is stored in the queue, and otherwise inserting it. Instead of storing vertices in the priority queue, as in Dijkstra's algorithm, Algorithm \ref{alg:graph_k_nn} stores pairs $(seed, v)$ keyed by $dist$, where $dist$ is the current upper bound on $d_G(seed, v)$. In the supplementary we prove that whenever $(dist, seed, v)$ is popped from the queue, we have $dist = d_G(seed,v)$. At every iteration, the pair $(seed,v_0)$ with the lowest upper bound is \emph{visited}: we examine every neighbor $v$ of $v_0$ and possibly update the current upper bound of $d_G(seed, v)$ using a decrease-or-insert operation. We keep a set $S_v$ for every vertex $v \in V$ to prevent multiple visits from the same seed. \begin{algorithm} \caption{Geodesic k nearest labeled neighbors} \label{alg:graph_k_nn} \paragraph{Input:} An undirected weighted graph $G = (V,E,w)$ and a set of labeled vertices $\mathcal L \subseteq V$.\\ {\bf Output: } For every $v \in V$ a list \(kNN[v]\) with the $k$ nearest labeled vertices to $v$ and their distances.
\algsetup{indent=2em} \begin{algorithmic} \STATE $Q \gets$ PriorityQueue() \FOR{$v \in V$} \STATE kNN[$v$] $\gets$ Empty-List() \STATE $S_v \gets \phi$ \IF{$v \in \mathcal L$} \STATE insert($Q$, ($v$, $v$),\ priority = 0) \ENDIF \ENDFOR \WHILE{$Q \neq \phi$} \STATE (seed, $v_0$, dist) $\gets$ pop-minimum($Q$) \STATE $S_{v_0} \gets S_{v_0} \ \cup\ \{$seed$\}$ \IF{length(kNN[$v_0$]) < $k$} \STATE \text{\bf append}\ (dist, seed) \text{\bf to} kNN[$v_0$] \FORALL{$v \in $ neighbors$(v_0)$} \IF{length(kNN[$v$]) < $k$ and seed $\notin S_v$} \STATE decrease-or-insert($Q$, (seed, $v$),\\\qquad priority = dist $+ w(v_0, v)$) \ENDIF \ENDFOR \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} Independently of our work, this algorithm was recently described by \cite{Harpeled2016} for the $\mathcal L = V $ case. Furthermore, an optimization of the priority queues was proposed that bounds the runtime by \begin{align} \label{eq:runningtime} O(k|V|\log|V| + k|E|). \end{align} In the supplementary, we give a detailed description of this method, which we dub Algorithm \ref{alg:graph_k_nn_faster}. We formally prove the correctness of both algorithms, present asymptotic bounds on their running time and perform an empirical runtime comparison using the indoor localization data set. In our experiments, both Algorithm \ref{alg:graph_k_nn} and Algorithm \ref{alg:graph_k_nn_faster} have a similar runtime, which is orders of magnitude faster than the na\"ive method of computing geodesic nearest neighbors using Dijkstra's algorithm. It is also significantly faster than standard methods to compute eigenvectors, as required by Laplacian eigenvector regression. Both Algorithm \ref{alg:graph_k_nn} and Algorithm \ref{alg:graph_k_nn_faster} use memory bounded by the minimum of $O(n|V|)$ and $O(k|E|)$. The first bound follows from the fact that $Q$ cannot have more than $|\mathcal L \times V|$ elements. The second holds since every vertex $v$ is visited at most $k$ times and may insert up to $\deg(v)$ neighbors into $Q$. \begin{remark} \cite{BijralRatliffSrebro2011} also proposed a variant of Dijkstra's algorithm. However, their method is an improvement of \emph{single-source} Dijkstra in the setting of a dense graph constructed from points in $\mathbb{R}^D$, whereas the methods we discuss here compute paths from \emph{multiple sources} and are applicable to any graph. \end{remark} \subsection{Notes on the graph construction time} \label{sec:graph_construction} For the construction of $G$, the straightforward approach is to compute the distances between all pairs of points in time $O(D|V|^2)$. In light of Eq. \eqref{eq:runningtime} this may take much longer than actually computing geodesic kNN on the graph $G$. One way to reduce the running time is to store all of the data points in a k-d tree \citep{Bentley1975}, a ball tree \citep{Omohundro1989}, or some other data structure for spatial queries and then find nearby neighbors for every point. These data structures are suitable for constructing both distance-cutoff graphs and symmetric-kNN graphs from low-dimensional data. For high-dimensional data, several works have appeared in recent years which propose fast methods of constructing approximate kNN graphs \citep{ZhangHuangGengLiu2013, WangShiCao2013}. The running time of these methods is $O(D|V|\log|V|)$ multiplied by some constant which is empirically small. Combining these constructions with fast algorithms for computing geodesic kNN yields a runtime of $O((k+D)|V|\log|V|)$.
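For concreteness, the following is a minimal Python sketch of Algorithm~\ref{alg:graph_k_nn} that can be run on any of the graph constructions above. It substitutes Python's binary heap (\texttt{heapq}) for the Fibonacci heap: since \texttt{heapq} has no decrease-key operation, stale queue entries are left in place and skipped upon extraction, a standard replacement that preserves correctness at the cost of extra logarithmic factors. The adjacency-list format and the identifiers are our own illustration, and vertex ids are assumed comparable (e.g., integers) so that ties in the priority break cleanly.

\begin{verbatim}
import heapq
from collections import defaultdict

def geodesic_knn(adj, labeled, k):
    # adj: {v: [(u, w_uv), ...]} for an undirected weighted graph.
    # Returns {v: [(d_G(s, v), s), ...]} with at most k pairs per v.
    knn = defaultdict(list)    # kNN[v] of Algorithm 1
    seen = defaultdict(set)    # S_v: seeds that already visited v
    heap = [(0.0, s, s) for s in labeled]
    heapq.heapify(heap)
    while heap:
        dist, seed, v = heapq.heappop(heap)  # dist = d_G(seed, v)
        if seed in seen[v] or len(knn[v]) >= k:
            continue                         # stale entry, or v is done
        seen[v].add(seed)
        knn[v].append((dist, seed))
        for u, w in adj[v]:                  # relax the edges out of v
            if len(knn[u]) < k and seed not in seen[u]:
                heapq.heappush(heap, (dist + w, seed, u))
    return knn
\end{verbatim}

As in Algorithm~\ref{alg:graph_k_nn}, a pair $(seed, v)$ is expanded only while fewer than $k$ labeled neighbors of $v$ are known, which is what keeps the total work close to $k$ visits per vertex.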
The resulting runtime is much lower than that of many other semi-supervised methods, which typically involve expensive calculations such as matrix inversion \citep{ZhuGhahramaniLafferty2003} or eigenvector computation \citep{BelkinNiyogi2004}.

\section{Applications} \label{sec:application} \subsection{Indoor localization using Wi-Fi signals}

One motivation for our work is the problem of estimating the location of a mobile device in a closed environment using its Wi-Fi signature as received by a wireless router. This problem has gained considerable interest in recent years due to its many potential applications, such as indoor navigation inside large commercial spaces \citep{Liu2007}. In indoor settings, the signal received by the router is a superposition of multiple reflections of the same source, which differ in their arrival time, direction and intensity. This limits the use of classic outdoor positioning methods such as triangulation, which require a direct line-of-sight between the transmitting device and the receiver. A common approach for tackling this problem, known as \textit{fingerprinting} in the signal processing community, is based on nearest-neighbor search. First, a labeled set $\{({\bf x}_i,y_i)\}_{i=1}^n$ is collected, where $y_i \in \mathbb R^2$ is the location of the transmitter and ${\bf x}_i$ is a feature vector extracted from the received signal. The location of new instances is then estimated via non-parametric regression methods such as k nearest neighbors. For applications requiring high accuracy, recording and maintaining a suitable labeled data set may be prohibitively expensive. On the other hand, collecting vast amounts of unlabeled data may be done simply by recording the Wi-Fi signals of various devices moving through the venue. Indoor localization is thus a natural application for semi-supervised methods. Moreover, the space of feature vectors is parameterized by a 2- or 3-dimensional position. Thus, we expect manifold-based methods to perform well in this task. To test this empirically, we used two data sets of indoor localization: a simulated and a real data set. A brief description follows. See the supplementary for details. \begin{figure} \includegraphics[width=0.45\textwidth]{SimEnvironment.png} \caption{3D model of an $80m \times 80m \times 5m$ floor.} \label{fig:mall} \end{figure} \textbf{Simulated data:} This data set consists of 802.11 Wi-Fi signals in an artificial $80m \times 80m$ indoor office environment generated by \cite{KupershteinWaxCohen2013} using 3D radio wave propagation software; see Figure \ref{fig:mall}. \textbf{Real data:} These are actual 802.11 signals, recorded by a Wi-Fi router placed roughly in the middle of a $27m\! \times\! 33m$ office; see Figure \ref{fig:real_data_floor} of the supplementary. The Signal Subspace Projection (SSP) of \cite{KupershteinWaxCohen2013} and \cite{JaffeWax2014} is used as the fingerprint for localization. It is based on the assumption that signals received from close locations have similar properties of differential delays and directions of arrival. In our experiments, the SSP features are $48 \times 48$ projection matrices, where the projected subspace is 10-dimensional. We use the Frobenius norm as the distance metric and construct a symmetric-4NN graph as described in Section \ref{sec:framework}. For more details on the datasets and SSP features, see the supplementary. We compare our semi-supervised geodesic kNN regressor to Laplacian eigenvector regression \citep{BelkinNiyogi2004}.
As a baseline we also applied classic kNN regression, using only the labeled samples and optimizing over $k$. Figure \ref{fig:localization_error} shows the median localization error on the simulated data set as a function of the number of unlabeled locations, where the labeled points are placed on a fixed grid. Specifically, for the geodesic kNN regressor we used $k=7$ and exponentially decaying weights such that the weight of the $i$-th neighbor is proportional to $1/2^i$. Since the weights decay exponentially, the specific choice of $k$ is not important, with larger values of $k$ giving nearly identical results. For Laplacian eigenvector regression, we optimized over the number of eigenvectors by repeating the experiment with the number of eigenvectors set to $10\%, 20\%, 30\%, 40\%$ and $50\%$ of the number of labeled points and taking the best outcome. Table \ref{table:real_dataset_accuracy} shows the mean localization error on the real data set for different densities of labeled points. The results on both the simulated and real datasets show a clear advantage for the geodesic kNN regressor. As expected, the improvement shown by the semi-supervised methods increases with the number of unlabeled locations. Moreover, geodesic kNN regression is much faster to compute than the Laplacian eigenvector regressor; see Table \ref{table:runtime} in the supplementary. \begin{table} \centering \caption{Mean localization error of kNN, geodesic kNN and Laplacian eigenbasis regression on the real data set} \label{table:real_dataset_accuracy} \begin{tabular}{lllll} \toprule Labeled grid & $n$ & kNN & Geodesic kNN & Laplacian\\ \midrule 1.5m & 73 & 1.49m & {\bf 1.11}m & 1.36m \\ 2.0m & 48 & 2.27m & {\bf 1.49}m & 1.65m \\ 3.0m & 23 & 3.41m & {\bf 2.41}m & 2.79m \\ \bottomrule \end{tabular} \end{table} \begin{figure} \includegraphics[width=\linewidth]{median_error_comparison_ss10_step20.pdf} \includegraphics[width=\linewidth]{median_error_comparison_ss10_step40.pdf} \caption{Median localization error vs. number of unlabeled points. Top: $1600$ labeled points placed on a regular grid with a side length of $2m$. Bottom: $400$ labeled points on a $4m$ grid.} \label{fig:localization_error} \end{figure} \subsection{Facial pose estimation} We illustrate the performance of geodesic kNN on another regression problem, using the {\bf faces} data set, where the predicted value is the left-right angle of a face image.\footnote{\url{http://isomap.stanford.edu/datasets.html}} This data set contains $698$ greyscale images of a single face rendered at different angles and under different lighting. The instance space is the set of all $64 \times 64$ images whereas the intrinsic manifold dimension is $3$. For our benchmark, we computed the $\ell_1$ distance between all pairs of images and constructed a symmetric $4$-NN graph. For the geodesic kNN algorithm, the edge weights were set to the $\ell_1$ distances and $k$ was set to 1. For Laplacian eigenvector regression we used binary weights and set the number of eigenvectors to 20\% of the number of labeled points. This is a common rule-of-thumb, and gave good results over the whole range. \begin{figure} \centering \includegraphics[width=\linewidth]{faces_regression_performance.pdf} \includegraphics[width=0.9\linewidth]{faces_montage_short.png} \caption{Top: mean prediction error for the left-right angle of the face. Bottom: sample images from the {\bf faces} data set, showing different poses and lighting.
} \label{fig:faces} \end{figure} Figure \ref{fig:faces} shows that geodesic kNN performs uniformly better than the nearest neighbor regressor and also outperforms the semi-supervised Laplacian regressor. \subsubsection*{Acknowledgments} We would like to thank Mati Wax, Jonathan Rosenblatt, Amit Gruber, Roee David and Jonathan Bauch for interesting discussions about this work, and Evgeny Kupershtein for providing the data sets. \pagebreak \begin{center} \textbf{\Large Supplementary Material} \end{center}
\section{Introduction} While the Berry phase has been shown to be important for spin-dynamics~\cite{spin_wave_dynamics_real_crystals,adiabatic_dynamics_local_spin_moments,spin_dynamics_tddft}, less attention has been paid to geometrical aspects in the exchange constants. Recently, it has been shown that the Dzyaloshinskii-Moriya interaction (DMI), i.e., the asymmetric exchange, can be computed from a Berry phase approach, in which the geometrical properties of the electronic structure in mixed phase space play a key role~\cite{mothedmisot,phase_space_berry,itsot,spicudmi}. DMI describes the linear change of the free energy with gradients in the magnetization direction. The effect of such noncollinear magnetic textures on conduction electrons can be accounted for by effective magnetic potentials~\cite{bruno_the_2004,gauge_fields_spintronics}. Since orbital magnetism leads to a linear change of the free energy when an external magnetic field is applied, several formal analogies exist between the modern theory of orbital magnetization~\cite{resta_review_om} and the Berry-phase approach to DMI~\cite{mothedmisot,itsot}, because the latter captures the free energy change linear in an effective magnetic potential generated by the noncollinear magnetic texture. Similarly, the (symmetric) exchange constants describe the quadratic change of the free energy with gradients in the magnetization direction while the orbital magnetic susceptibility (OMS) captures the quadratic change of the free energy with an applied magnetic field~\cite{oms_fukuyama}. Therefore, it is natural to suspect formal analogies between the theories of OMS on the one hand and exchange constants on the other hand, which we will investigate in detail in this paper. For this purpose, we use thermal quantum field theory in order to express the exchange constants in terms of torque operators, velocity operators and the Green's functions of a collinear ferromagnet and obtain a formula that resembles Fukuyama's result for OMS~\cite{oms_fukuyama,oms_ogata_fukuyama}. Recently, geometrical contributions to OMS have been identified and shown to be generally significant and sometimes even dominant~\cite{geometrical_effects_oms,quantum_metric_without_berry}. These contributions arise from the reciprocal-space Berry curvature and quantum metric, which describe geometrical properties of the electronic structure. We will show that, as a consequence of the formal analogies between OMS and exchange, similar geometrical contributions to the exchange constants can be identified, which arise from the Berry curvature and the quantum metric in mixed phase space as well as from the quantum metric in real space. In order to achieve this, we rewrite our Fukuyama-type formula for the exchange constant in terms of these geometrical properties. Both the Fukuyama-type formula as well as the geometrical expression allow us to obtain the exchange constants directly from the electronic structure. Compared to the frozen spin-spiral approach~\cite{adiabatic_spin_dynamics_dft_fe_co_ni,ab_initio_noco_flapw} such a formulation has the advantage that it becomes easier to investigate the relationship to spintronic and spincaloritronic effects. For example, the Berry phase theory of DMI allows us to relate DMI to the spin-orbit torque~\cite{mothedmisot}, to ground-state spin-currents~\cite{spicudmi}, and to ground-state energy currents which need to be subtracted in order to extract the inverse thermal spin-orbit torque~\cite{itsot}. 
Similarly, torques due to the exchange interaction need to be considered in the theory of thermally induced spin-transfer torques~\cite{thermal_stt}, and a Green's function expression of exchange is well suited for this purpose. For the calculation of exchange constants in realistic materials, powerful techniques already exist. Besides the frozen spin-spiral approach~\cite{adiabatic_spin_dynamics_dft_fe_co_ni,ab_initio_noco_flapw, exchange_heuslers, exchange_interactions_noco_dft} the method of infinitesimal rotations of magnetic moments and the Lichtenstein formula are popular~\cite{force_theorem, exchange_dms,anisotropic_exchange_coupling_dms_dft}. In this work we focus on free electrons. However, the extension of the Fukuyama-type approach to calculations of exchange constants in realistic materials within the framework of first-principles density-functional theory has promising practical and technical prospects. For example, a Fukuyama-type formula for the exchange constants might be an attractive alternative when spin-orbit interaction (SOI) is present, because in this case the frozen spin-spiral approach cannot be used and one needs to resort to supercell methods or use multiple scattering theory~\cite{anisotropic_exchange_coupling_dms_dft}, which cannot be combined easily with all available density-functional theory codes. Similarly, for the first-principles simulation of the current-induced motion of domain walls and skyrmions, which involves complicated effects such as chiral damping~\cite{chiral_damping_magnetic_domain_walls,phenomenology_chiral_damping} and the nonadiabatic torque~\cite{first_principles_nonadiabatic_stt}, and for the calculation of electronic transport properties -- such as the topological Hall effect~\cite{the_mnsi} -- in these noncollinear magnetic textures, an approach that specifies the response to applied electric currents in terms of a coefficient matrix that is expanded in orders of the magnetization gradients is desirable. Since exchange constants are well-known for many materials, their calculation from a Fukuyama-type expression can be used for code-testing with the goal of extending the method to the mentioned spintronics effects. This paper is structured as follows: In section~\ref{sec_oms_fukuyama} we briefly review the derivation of Fukuyama's formula for OMS, which serves as a basis to derive a Fukuyama-type expression for the exchange constants in section~\ref{sec_exchange_fukuyama}. In section~\ref{sec_oms_semicla} we discuss how to express OMS in terms of reciprocal-space curvatures and quantum metrics, which sets the stage to express the exchange constants in terms of mixed phase space curvatures and quantum metrics in section~\ref{sec_exchange_semicla}. In section~\ref{sec_gauge_field} we show that -- despite the spin-orbit interaction -- the exchange constants can be obtained easily from a gauge-field approach in the case of the one-dimensional Rashba model. In section~\ref{sec_one_dim_rashba} we discuss the exchange constants of the one-dimensional Rashba model. We show that the results obtained from the Fukuyama-type approach agree with those of the gauge-field approach, thereby demonstrating the validity of the Fukuyama-type expression even in the presence of SOI. Additionally, we discuss the geometrical contributions. In section~\ref{sec_two_dim_rashba} we investigate the exchange constants in the two-dimensional Rashba model. This paper ends with a summary in section~\ref{sec_summary}.
\section{Fukuyama method} \subsection{Orbital magnetic susceptibility} \label{sec_oms_fukuyama} The orbital magnetic susceptibility tensor $\vn{\chi}$ is defined by \begin{equation}\label{eq_def_oms} \delta \vn{M}_{\rm orb}=\frac{1}{\mu_{0}}\vn{\chi} \vn{B}, \end{equation} where $\vn{B}$ is an applied external magnetic field and $\delta \vn{M}_{\rm orb}$ is the change of the orbital magnetization due to the application of $\vn{B}$. $\mu_{0}$ is the vacuum permeability. The $zz$ element of the orbital magnetic susceptibility tensor is given by the Fukuyama formula~\cite{oms_fukuyama} \begin{equation}\label{eq_oms_fukuyama} \begin{aligned} &\chi^{zz}=\frac{\mu_{0}e^2}{2\beta\hbar^2} \int\!\!\frac{\rmd^d k}{(2\pi)^d} \sum_{p} \text{Tr} \Bigl[\\ &G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) v^{x}_{\vn{k}} G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) v^{y}_{\vn{k}} G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) v^{x}_{\vn{k}} G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) v^{y}_{\vn{k}} \Bigr], \end{aligned} \end{equation} where $d$ is the dimension ($d=2$ or $d=3$). In the case of two-dimensional systems, such as a graphene sheet or a thin film, the $z$ direction is oriented perpendicular to the sheet or thin film. $v^{x}_{\vn{k}}$ and $v^{y}_{\vn{k}}$ are the $x$ and $y$ components of the velocity operator $\vn{v}_{\vn{k}}^{\phantom{k}}=e^{-i\vn{k}\cdot\vn{r}}\vn{v}e^{i\vn{k}\cdot\vn{r}}$ in crystal momentum representation, respectively. $\beta=(k_{\rm B}T)^{-1}$ is the inverse temperature, $k_{\rm B}$ is the Boltzmann constant, and $\mathcal{E}_{p}=\beta^{-1}(2p+1)\pi$ are the Matsubara points. \begin{equation} G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p})= \hbar [ i\mathcal{E}_{p}-H_{\vn{k}} ]^{-1} \end{equation} is the Matsubara Green's function, where $H_{\vn{k}}$ is the Hamiltonian in crystal momentum representation. Using the residue theorem, the summation over Matsubara points can be replaced by an energy integration along the real energy axis as follows: \begin{equation}\label{eq_oms_fukuyama_real_axis} \begin{aligned} &\chi^{zz} =-\frac{\mu_{0}e^2}{2\pi \hbar^2} \int\!\!\frac{\rmd^d k}{(2\pi)^d} {\rm Im}\int d\,\mathcal{E}\,f(\mathcal{E}) \text{Tr} \Bigl[\\ &G^{\rm R}_{\vn{k}}(\mathcal{E}) v^{x}_{\vn{k}} G^{\rm R}_{\vn{k}}(\mathcal{E}) v^{y}_{\vn{k}} G^{\rm R}_{\vn{k}}(\mathcal{E}) v^{x}_{\vn{k}} G^{\rm R}_{\vn{k}}(\mathcal{E}) v^{y}_{\vn{k}} \Bigr], \end{aligned} \end{equation} where $f(\mathcal{E})$ is the Fermi function and \begin{equation} G^{\rm R}_{\vn{k}}(\mathcal{E})=\hbar [\mathcal{E}-H_{\vn{k}}+i0^{+}]^{-1} \end{equation} is the retarded Green's function. In the following we briefly sketch Fukuyama's derivation~\cite{oms_fukuyama} of Eq.~\eqref{eq_oms_fukuyama}, which serves as a preparation for obtaining an expression for the exchange constants in section~\ref{sec_exchange_fukuyama}. Since the vector potential of a homogeneous magnetic field is not compatible with Bloch boundary conditions, we consider the spatially oscillating vector potential \begin{equation} \vn{A}(x)=\frac{B_{0}}{q}\sin(q x)\hat{\vn{e}}_{y} \end{equation} with corresponding magnetic field \begin{equation} \vn{B}(x)=\nabla\times\vn{A}(x)=B_{0}\cos(qx)\hat{\vn{e}}_{z}, \end{equation} where $\hat{\vn{e}}_{y}$ and $\hat{\vn{e}}_{z}$ are unit vectors in the $y$ and $z$ directions, respectively. At the final stage of the calculation the limit $q\rightarrow 0$ will be taken. According to Eq.~\eqref{eq_def_oms} this spatially oscillating magnetic field induces a spatially oscillating orbital magnetization.
The interaction between this induced orbital magnetization and the magnetic field modifies the free energy density by the amount \begin{equation}\label{eq_free_energy_oms} \begin{aligned} \delta F&=-\frac{1}{2} \langle \delta M_{\rm orb}^{z} B^{z}_{\phantom{z}} \rangle= -\frac{1}{2\mu_{0}} \chi^{zz} \langle B_{\phantom{z}}^{z} B^{z}_{\phantom{z}} \rangle=\\ &=-\frac{1}{4\mu_{0}} \chi^{zz} [B_{0}]^2, \end{aligned} \end{equation} where $\langle\dots\rangle$ denotes spatial averaging. The expression for $\chi^{zz}$ can be found by determining $\delta F$ from thermal quantum field theory and equating the result with Eq.~\eqref{eq_free_energy_oms}. The free energy is obtained from the partition function $\Xi$ as \begin{equation} F=-\frac{1}{\beta}\ln\Xi. \end{equation} The modification of $\Xi$ due to the applied magnetic field $\vn{B}(x)$ is determined from perturbation theory. For example the contribution from second order perturbation theory is given by \begin{equation} \begin{aligned} \Xi^{(2)}=&\frac{1}{2\hbar^2} \int_{0}^{\hbar\beta}\!\!\!\! d\tau_{1} \int_{0}^{\hbar\beta}\!\!\!\! d\tau_{2} \,\text{Tr} \left[ e^{-\beta H} T_{\tau} \delta H_{\rm I}(\tau_{1}) \delta H_{\rm I}(\tau_{2}) \right]\\ =&\frac{\Xi^{(0)}}{2\hbar^2} \int_{0}^{\hbar\beta}\!\!\!\! d\tau_{1} \int_{0}^{\hbar\beta}\!\!\!\! d\tau_{2} \, \langle T_{\tau} \delta H_{\rm I}(\tau_{1}) \delta H_{\rm I}(\tau_{2}) \rangle, \end{aligned} \end{equation} where $\Xi^{(0)}$ is the partition function of the unperturbed system, $T_{\tau}$ is the time-ordering operator, $H$ is the unperturbed Hamiltonian, and $\delta H_{\rm I}(\tau)=e^{\tau H/\hbar}\delta H e^{-\tau H/\hbar}$ denotes the perturbation in the interaction picture. Minimal coupling leads to two perturbation terms, \begin{equation}\label{eq_minicoup1} \delta H^{(1)}=\frac{e}{2} \left[ \vn{v}\cdot\vn{A}(x) + \vn{A}(x) \cdot \vn{v} \right] \end{equation} and \begin{equation}\label{eq_minicoup2} \begin{aligned} \delta H^{(2)}&=\frac{e^2}{2m_e}\vn{A}^2(x)= \frac{e^2B_{0}^2}{2m_e q^2}\sin^2(qx) =\\ &=\frac{e^2B_{0}^2}{4m_e q^2} [1-\cos(2qx)], \end{aligned} \end{equation} where $e>0$ is the elementary positive charge and $m_e$ is the electron mass. In order to determine $\chi^{zz}$ from Eq.~\eqref{eq_free_energy_oms} we need to find the modification of the free energy density $\delta F$ that arises from the perturbations $\delta H^{(1)}$ and $\delta H^{(2)}$ and that is second order in $B_{0}$. Thus, we need to perform second order perturbation theory with $\delta H^{(1)}$ and first order perturbation theory with $\delta H^{(2)}$. In second quantization the perturbation $\delta H^{(1)}$ is given by \begin{equation} \begin{aligned} \delta H^{(1)} &= \frac{e B_{0} } {4iq} \sum_{\vn{k}nm} \Bigl\{\\ & \bigl[ \langle u_{\vn{k}_{+}n}^{\phantom{y}}| v_{\vn{k}_{+}}^{y}| u_{\vn{k}_{-}m}^{\phantom{y}} \rangle + \langle u_{\vn{k}_{+}n}^{\phantom{y}}| v_{\vn{k}_{-}}^{y}| u_{\vn{k}_{-}m}^{\phantom{y}} \rangle \bigr] c^{\dagger}_{\vn{k}_{+}n}c^{\phantom{\dagger}}_{\vn{k}_{-}m}\\ -& \bigl[ \langle u_{\vn{k}_{-}n}^{\phantom{y}}| v_{\vn{k}_{-}}^{y}| u_{\vn{k}_{+}m}^{\phantom{y}} \rangle + \langle u_{\vn{k}_{-}n}^{\phantom{y}}| v_{\vn{k}_{+}}^{y}| u_{\vn{k}_{+}m}^{\phantom{y}} \rangle \bigr] c^{\dagger}_{\vn{k}_{-}n}c^{\phantom{\dagger}}_{\vn{k}_{+}m} \Bigr\}, \end{aligned} \end{equation} where $\vn{k}_{+}=\vn{k}+\vn{q}/2$ and $\vn{k}_{-}=\vn{k}-\vn{q}/2$ and $\vn{q}=q\hat{\vn{e}}_{x}$. 
$|u_{\vn{k}n}\rangle$ denotes the eigenfunctions of the unperturbed Hamiltonian $H_{\vn{k}}$, such that $H_{\vn{k}}|u_{\vn{k}n}\rangle=\mathcal{E}_{\vn{k}n}|u_{\vn{k}n}\rangle$, where $\mathcal{E}_{\vn{k}n}$ is the band energy. $c^{\dagger}_{\vn{k}n}$ and $c^{\phantom{\dagger}}_{\vn{k}n}$ are creation and annihilation operators of an electron in band $n$ at $k$-point $\vn{k}$, respectively. Second order perturbation theory with respect to $\delta H^{(1)}$ modifies the free energy density by the amount \begin{equation}\label{eq_delta_frenergy} \delta F\!=\!\frac{e^2 B^2_{0}}{4 q^2\beta\hbar^2} \!\!\!\int\!\!\frac{\rmd^d k}{(2\pi)^d}\! \sum_{p} \!\text{Tr}\! \left[ G^{\rm M}_{\vn{k}_{+}} (i\mathcal{E}_{p}) v^{y}_{\vn{k}} G^{\rm M}_{\vn{k}_{-}}(i\mathcal{E}_{p}) v^{y}_{\vn{k}} \right]. \end{equation} When the trace in Eq.~\eqref{eq_delta_frenergy} is Taylor-expanded in $q$ the zeroth-order term leads to a contribution to $\delta F$ that diverges like $q^{-2}$ in the limit $q\rightarrow 0$. This divergent term cancels out with the contribution from the piece $e^2B_{0}^2/(4m_e q^2)$ in $\delta H^{(2)}$. The oscillating piece $-e^2B_{0}^2\cos(2qx)/(4m_e q^2)$ in $\delta H^{(2)}$ averages out in first order perturbation theory. The $q$-quadratic term from the Taylor-expansion of the trace in Eq.~\eqref{eq_delta_frenergy} yields the free-energy change \begin{equation} \delta F\!=\!-\frac{e^2 B_{0}^2}{8\beta\hbar^2} \!\!\!\int\!\!\frac{\rmd^d k}{(2\pi)^d}\! \sum_{p} \text{Tr} \left[ \frac{\partial G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) } { \partial k^{x}_{\phantom{x}} } v^{y}_{\vn{k}} \frac{ \partial G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) } { \partial k^{x}_{\phantom{x}} } v^{y}_{\vn{k}} \right]. \end{equation} With the help of Eq.~\eqref{eq_free_energy_oms} we obtain the susceptibility \begin{equation} \chi^{zz}\!=\!\frac{e^2 \mu_{0}}{2\beta\hbar^2} \!\!\!\int\!\!\frac{\rmd^d k}{(2\pi)^d}\! \sum_{p} \text{Tr} \left[ \frac{\partial G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) } { \partial k^{x}_{\phantom{x}} } v^{y}_{\vn{k}} \frac{ \partial G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) } { \partial k^{x}_{\phantom{x}} } v^{y}_{\vn{k}} \right]. \end{equation} Employing the relation \begin{equation}\label{eq_k_deriv_green} \frac{\partial G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p})}{\partial k^{x}}= G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) v^{x}_{\vn{k}} G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) \end{equation} one finally obtains Eq.~\eqref{eq_oms_fukuyama}. For completeness, we mention that it has been shown that Eq.~\eqref{eq_oms_fukuyama} needs to be modified for the calculation of OMS from tight-binding models~\cite{orbital_magnetism_coupled_bands,lattice_effects_response_graphene}. We do not discuss these modifications here. \subsection{Exchange constants} \label{sec_exchange_fukuyama} In order to derive an expression for the exchange constant we consider the case where the magnetization performs small sinusoidal oscillations around the $z$ direction as a function of the $x$ coordinate: \begin{equation}\label{eq_oszi_xz} \hat{\vn{n}}(x) =\begin{pmatrix} \eta\sin(qx)\\ 0\\ 1 \end{pmatrix} \frac{1}{\sqrt{1+\eta^2\sin^2(qx)}}, \end{equation} where $\hat{\vn{n}}(x)$ is a normalized vector that describes the magnetization direction and $\eta$ controls the amplitude of the oscillations. 
As a result of these oscillations the free energy density changes by the amount \begin{equation}\label{eq_df_axx} \delta F=\mathscr{A}^{xx} \left\langle \left[ \frac{ \partial \hat{n}^{x} } { \partial x } \right]^2 \right\rangle = \frac{1}{2}\eta^2q^2\mathscr{A}^{xx} , \end{equation} where $\mathscr{A}^{xx}$ is an exchange constant and where we neglected higher orders in $\eta$. In the presence of SOI the free energy change may depend on whether the magnetization oscillates in the $xz$ plane or in the $yz$ plane. When the magnetization oscillates in the $yz$ plane, i.e., when \begin{equation} \hat{\vn{n}}(x) =\begin{pmatrix} 0\\ \eta\sin(qx)\\ 1 \end{pmatrix} \frac{1}{\sqrt{1+\eta^2\sin^2(qx)}}, \end{equation} the corresponding free energy change is described by \begin{equation} \delta F=\mathscr{A}^{xy} \left\langle \left[ \frac{ \partial \hat{n}^{y} } { \partial x } \right]^2 \right\rangle =\frac{1}{2}\eta^2 q^2 \mathscr{A}^{xy}, \end{equation} with the exchange constant $\mathscr{A}^{xy}$. $\mathscr{A}^{xy}$ may differ from $\mathscr{A}^{xx}$ in the presence of SOI. In the following we use thermal quantum field theory in order to obtain expressions for the free energy change $\delta F$ that arises from spatial oscillations of the magnetization direction as given by Eq.~\eqref{eq_oszi_xz}. We will then use Eq.~\eqref{eq_df_axx} to obtain $\mathscr{A}^{xx}$. To simplify the notation we will focus on the component $\mathscr{A}^{xx}$. The generalization to the other exchange constants, such as $\mathscr{A}^{xy}$, is obvious. We consider the Hamiltonian of a collinear ferromagnet with magnetization pointing in $z$ direction, given by \begin{equation}\label{eq_hamil_coll_ferro} \begin{aligned} H(\vn{r})=&-\frac{\hbar^2}{2m_e}\Delta+V(\vn{r})+ \mu_{\rm B}^{\phantom{B}}\sigma^{z}\Omega^{\rm xc}(\vn{r})+\\ &+ \frac{1}{2 e c^2}\mu_{\rm B}^{\phantom{B}} \vn{\sigma}\cdot \left[ \vn{\nabla}V(\vn{r})\times\vn{v} \right]. \end{aligned} \end{equation} The kinetic energy is described by the first term. The second term is a scalar potential. The third term describes the exchange interaction, where $\mu_{\rm B}^{\phantom{B}}$ is the Bohr magneton, $\vn{\sigma}=(\sigma^{x},\sigma^{y},\sigma^{z})^{\rm T}$ is the vector of Pauli spin matrices, and $\Omega^{\rm xc}(\vn{r})$ is the exchange field. The last term is the spin-orbit interaction. When the magnetization direction is not collinear but spatially oscillating according to Eq.~\eqref{eq_oszi_xz} the corresponding Hamiltonian is $H'=H+\delta H^{(1)}+\delta H^{(2)}$ with \begin{equation}\label{eq_delta_h1_mag} \delta H^{(1)}= \mu_{\rm B}^{\phantom{B}} \sigma^{x} \Omega^{\rm xc}(\vn{r}) \eta \sin(qx)=\mathcal{T}^{y}\eta\sin(qx) \end{equation} and \begin{equation}\label{eq_delta_h2_mag} \begin{aligned} \delta H^{(2)}&= -\frac{1}{2}\mu_{\rm B}^{\phantom{B}} \sigma^{z} \Omega^{\rm xc}(\vn{r}) \eta^2 \sin^2(qx)\\ &= -\frac{1}{4}\mu_{\rm B}^{\phantom{B}} \sigma^{z} \Omega^{\rm xc}(\vn{r}) \eta^2 \left[1- \cos(2qx) \right],\\ \end{aligned} \end{equation} where $\vn{\mathcal{T}}=-\mu_{\rm B}^{\phantom{B}}\vn{\sigma}\times\hat{\vn{e}}^{z}\Omega^{\rm xc}$ is the torque operator and $\mathcal{T}^{y}$ is its $y$ component. According to Eq.~\eqref{eq_df_axx} we need to find the modification of the free energy that is quadratic in $\eta$. Therefore, we need to perform second order perturbation theory with $\delta H^{(1)}$ and first order perturbation theory with $\delta H^{(2)}$. 
The perturbation $\delta H^{(1)}$ can be written in second quantization in the form \begin{equation} \begin{aligned} \delta H^{(1)}=\frac{\eta}{2i} \sum_{\vn{k}nm}\Bigl\{ &\langle u_{\vn{k}_{+}n}^{\phantom{k}}| \mathcal{T}^{y}| u_{\vn{k}_{-}m}^{\phantom{k}} \rangle c^{\dagger}_{\vn{k}_{+}n}c^{\phantom{\dagger}}_{\vn{k}_{-}m}\\ - &\langle u_{\vn{k}_{-}n}^{\phantom{k}}| \mathcal{T}^{y}| u_{\vn{k}_{+}m}^{\phantom{k}} \rangle c^{\dagger}_{\vn{k}_{-}n}c^{\phantom{\dagger}}_{\vn{k}_{+}m} \Bigr\}. \end{aligned} \end{equation} In second order perturbation theory with respect to $\delta H^{(1)}$ the free energy is modified by the amount \begin{equation}\label{eq_delta_frenergy_exi} \delta F\!=\!\frac{\eta^2}{4 \beta\hbar^2} \!\!\!\int\!\!\frac{\rmd^d k}{(2\pi)^d}\! \sum_{p} \!\text{Tr}\! \left[ G^{\rm M}_{\vn{k}_{+}} (i\mathcal{E}_{p}) \mathcal{T}^{y} G^{\rm M}_{\vn{k}_{-}}(i\mathcal{E}_{p}) \mathcal{T}^{y} \right]. \end{equation} The zeroth-order term in the Taylor expansion of $\delta F$ with respect to $q$ cancels out with the contribution from the piece $-\frac{1}{4}\mu_{\rm B}^{\phantom{B}}\sigma^{z}\Omega^{\rm xc}(\vn{r})\eta^2$ from $\delta H^{(2)}$ only when SOI is not included. This is an interesting difference from the case of the orbital magnetic susceptibility discussed below Eq.~\eqref{eq_delta_frenergy}, where the corresponding cancellation always occurs. This difference is due to the fact that the magnetic anisotropy energy gives rise to a contribution to $\delta F$ that is proportional to $\eta^2$ already at zeroth order in $q$. The oscillating piece $\frac{1}{4}\mu_{\rm B}^{\phantom{B}} \sigma^{z} \Omega^{\rm xc}(\vn{r}) \eta^2 \cos(2qx)$ from $\delta H^{(2)}$ averages out in first order perturbation theory. In order to obtain the exchange constant $\mathscr{A}^{xx}$ we need the $q$-quadratic term from the Taylor-expansion of $\delta F$, which is given by \begin{equation}\label{eq_delta_frenergy_exi2} \delta F\!=\!-\frac{\eta^2q^2}{8 \beta\hbar^2} \!\!\!\int\!\!\frac{\rmd^d k}{(2\pi)^d}\! \sum_{p} \!\text{Tr}\! \left[ \frac{ \partial G^{\rm M}_{\vn{k}_{+}} (i\mathcal{E}_{p})} {\partial k^x} \mathcal{T}^{y} \frac{ \partial G^{\rm M}_{\vn{k}_{-}} (i\mathcal{E}_{p})} {\partial k^x} \mathcal{T}^{y} \right]. \end{equation} Using Eq.~\eqref{eq_k_deriv_green} and Eq.~\eqref{eq_df_axx} we find the following expression for the exchange constant: \begin{equation}\label{eq_exchange_params_fukuyama} \begin{aligned} &\mathscr{A}^{xx}=\frac{-1}{4\beta\hbar^2} \sum_{p}\int\!\!\frac{\rmd^d k}{(2\pi)^d} \text{Tr} \Bigl[\\ &G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) \mathcal{T}^{y} G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) v^{x}_{\vn{k}} G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) \mathcal{T}^{y} G^{\rm M}_{\vn{k}}(i\mathcal{E}_{p}) v^{x}_{\vn{k}} \Bigr], \end{aligned} \end{equation} which strongly resembles the Fukuyama formula for OMS, Eq.~\eqref{eq_oms_fukuyama}. Apart from the prefactor, Eq.~\eqref{eq_exchange_params_fukuyama} differs from Eq.~\eqref{eq_oms_fukuyama} by the replacement of the velocity operator $v^{y}_{\vn{k}}$ by the torque operator $\mathcal{T}^{y}$.
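As a side illustration (not part of the original derivation), Eq.~\eqref{eq_exchange_params_fukuyama} can be evaluated by brute force for a simple model. The following Python sketch performs the truncated Matsubara sum and the $k$ integration for a one-dimensional two-band toy ferromagnet; the model, all parameter values, the cutoffs, and the convention $G^{\rm M}_{\vn{k}}(i\mathcal{E}_p)=[i\mathcal{E}_p+\mu-H_{\vn{k}}]^{-1}$ are our assumptions for illustration:
\begin{verbatim}
import numpy as np

# Minimal sketch: brute-force evaluation of the Fukuyama-type
# exchange formula for a 1D Rashba-like toy ferromagnet
# (hbar = m_e = 1). Parameters and cutoffs are assumed values.
alpha, dV, mu, beta = 0.5, 1.0, 0.2, 50.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
id2 = np.eye(2, dtype=complex)

def Hk(k):        # collinear Hamiltonian, magnetization along z
    return 0.5*k**2*id2 + alpha*k*sy + 0.5*dV*sz

def vxk(k):       # velocity operator v^x = dH/(hbar dk)
    return k*id2 + alpha*sy

Ty = 0.5*dV*sx    # torque operator T^y = mu_B sigma^x Omega^xc

ks, dk = np.linspace(-6.0, 6.0, 601, retstep=True)
eps = (2*np.arange(-200, 200) + 1)*np.pi/beta  # Matsubara energies

Axx = 0.0
for k in ks:
    H, v = Hk(k), vxk(k)
    for e in eps:
        G = np.linalg.inv((1j*e + mu)*id2 - H)  # assumed G^M convention
        Axx += np.trace(G @ Ty @ G @ v @ G @ Ty @ G @ v).real
Axx *= -dk/(2*np.pi)/(4.0*beta)  # prefactor -1/(4 beta hbar^2)
print("A^xx (toy model) =", Axx)
\end{verbatim}
Brute-force truncation of the frequency sum is adequate here because the trace containing four Green's functions decays like $\mathcal{E}_p^{-4}$ at large Matsubara energies.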
The summation over Matsubara points can be expressed in terms of an energy integration along the real energy axis, yielding \begin{equation}\label{eq_exchange_params_real_axis} \begin{aligned} &\mathscr{A}^{xx} =\frac{1}{4\pi \hbar^2}{\rm Im}\int d\,\mathcal{E}\,f(\mathcal{E}) \int\!\!\frac{\rmd^d k}{(2\pi)^d} \text{Tr} \Bigl[\\ &G^{\rm R}_{\vn{k}}(\mathcal{E}) \mathcal{T}^{y} G^{\rm R}_{\vn{k}}(\mathcal{E}) v^{x}_{\vn{k}} G^{\rm R}_{\vn{k}}(\mathcal{E}) \mathcal{T}^{y} G^{\rm R}_{\vn{k}}(\mathcal{E}) v^{x}_{\vn{k}} \Bigr]. \end{aligned} \end{equation} The unit of the exchange constant as given by Eq.~\eqref{eq_exchange_params_fukuyama} or Eq.~\eqref{eq_exchange_params_real_axis} is energy times length for $d=1$, energy for $d=2$, and energy per length for $d=3$. Consequently, the unit of the free energy density as given by Eq.~\eqref{eq_df_axx} is energy per length for $d=1$, energy per area for $d=2$, and energy per volume for $d=3$. We have mentioned in the previous section that the Fukuyama formula for OMS needs to be modified for tight-binding models~\cite{orbital_magnetism_coupled_bands,lattice_effects_response_graphene}. We expect similar modifications to be necessary when exchange constants are computed from tight-binding models, but we leave the discussion of these modifications for future work. \section{Curvatures, quantum metrics, moments and polarizations} \subsection{Orbital magnetic susceptibility} \label{sec_oms_semicla} As discussed by Ogata et al.\ in~\cite{oms_ogata_fukuyama}, one can express the velocity operators and Green's functions in Eq.~\eqref{eq_oms_fukuyama} in the representation of Bloch eigenfunctions such that \begin{equation}\label{eq_oms_fukuyama_eigenspace} \begin{aligned} &\chi^{zz}=\frac{\mu_{0} e^2\hbar^2}{2\beta} \sum_{\substack{nn'\\l \,l'}} \int\!\!\frac{\rmd^d k}{(2\pi)^d} \Bigl[ v^{x}_{\vn{k}nn'} v^{y}_{\vn{k}n'l} v^{x}_{\vn{k}ll'} v^{y}_{\vn{k}l'n} \Bigr]\\ &\times \sum_{p} \frac{1}{i\mathcal{E}_{p}-\mathcal{E}_{\vn{k}n}} \frac{1}{i\mathcal{E}_{p}-\mathcal{E}_{\vn{k}n'}} \frac{1}{i\mathcal{E}_{p}-\mathcal{E}_{\vn{k}l}} \frac{1}{i\mathcal{E}_{p}-\mathcal{E}_{\vn{k}l'}} , \end{aligned} \end{equation} where $\vn{v}^{\phantom{x}}_{\vn{k}nn'}=\langle u_{\vn{k}n}|\vn{v}_{\vn{k}}^{\phantom{x}}| u_{\vn{k}n'}\rangle$ denotes the matrix elements of the velocity operator, $\mathcal{E}_{\vn{k}n}$ is the energy of band $n$ at $k$-point $\vn{k}$ and $| u_{\vn{k}n}\rangle$ is the corresponding eigenstate of $H_{\vn{k}}$, i.e., $H_{\vn{k}}| u_{\vn{k}n}\rangle=\mathcal{E}_{\vn{k}n}| u_{\vn{k}n}\rangle$. The summations over Matsubara points can be carried out with the help of partial fraction decomposition and with the identity \begin{equation}\label{eq_matsubara_sum} \frac{1}{\beta}\sum_{p} \frac{1}{ [ i\mathcal{E}_{p}-\mathcal{E}_{\vn{k}n} ]^{m} }=\frac{1}{(m-1)!}f_{\vn{k}n}^{(m-1)}, \end{equation} where $f_{\vn{k}n}^{(m-1)}$ is the $(m-1)$th derivative of the Fermi function. For example, when $n=n'=l=l'$ in Eq.~\eqref{eq_oms_fukuyama_eigenspace} one uses Eq.~\eqref{eq_matsubara_sum} with $m=4$, which leads to a contribution with the third derivative of the Fermi function.
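The identity Eq.~\eqref{eq_matsubara_sum} is easy to verify numerically for $m\geq 2$, where the Matsubara sum converges absolutely (for $m=1$ the usual convergence factor is required). A minimal sketch, with arbitrary assumed values of $\beta$ and $\mathcal{E}$:
\begin{verbatim}
import numpy as np
from math import factorial

# Numerical check of the Matsubara identity for m = 2, 3, 4.
# beta and E are arbitrary test values; m = 1 is skipped because
# it requires a convergence factor.
beta, E = 10.0, 0.3
p = np.arange(-100000, 100000)
eps = (2*p + 1)*np.pi/beta   # fermionic Matsubara energies

def fermi_deriv(x, n, h=1e-3):
    # n-th derivative of f(x) = 1/(exp(beta x) + 1),
    # obtained by nested central finite differences (sketch only)
    if n == 0:
        return 1.0/(np.exp(beta*x) + 1.0)
    return (fermi_deriv(x + h, n - 1)
            - fermi_deriv(x - h, n - 1))/(2*h)

for m in (2, 3, 4):
    lhs = np.sum(1.0/(1j*eps - E)**m).real/beta
    rhs = fermi_deriv(E, m - 1)/factorial(m - 1)
    print(m, lhs, rhs)
\end{verbatim}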
In order to rewrite high derivatives of the Fermi function in terms of lower derivatives one employs integration by parts and the relation \begin{equation}\label{eq_kderiv_fermifunc} \vn{v}_{\vn{k}n}^{\phantom{x}} f^{(m+1)}_{\vn{k}n}= \frac{1}{\hbar}\frac{\partial f^{(m)}_{\vn{k}n}}{\partial \vn{k}}, \end{equation} where we defined $\vn{v}_{\vn{k}n}^{\phantom{x}}=\vn{v}_{\vn{k}nn}^{\phantom{x}}$. Thereby one can achieve that only the first derivative of the Fermi function occurs. The resulting expression for the orbital magnetic susceptibility can be written as \begin{equation}\label{eq_oms_semicla} \begin{aligned} \chi^{zz}=& \mu_{0} \frac{e^2}{\hbar^2}\int\!\!\frac{\rmd^d k}{(2\pi)^d} \sum_{n} \Biggl [ \frac{1}{12}f'_{\vn{k}n} ( \alpha^{xx}_{\vn{k}n} \alpha^{yy}_{\vn{k}n} - \alpha^{xy}_{\vn{k}n} \alpha^{yx}_{\vn{k}n} ) \\ -&f'_{\vn{k}n} m^{z}_{\vn{k}n} m^{z}_{\vn{k}n} -\frac{\hbar^2}{4m_e} f_{\vn{k}n} ( g^{xx}_{\vn{k}n} + g^{yy}_{\vn{k}n} )\\ +&\frac{3}{2} f_{\vn{k}n} \Omega^{z}_{\vn{k}n} m^{z}_{\vn{k}n}\\ +&\frac{1}{4} f_{\vn{k}n} ( g^{xx}_{\vn{k}n}\alpha^{yy}_{\vn{k}n} + g^{yy}_{\vn{k}n}\alpha^{xx}_{\vn{k}n} -2 g^{xy}_{\vn{k}n}\alpha^{yx}_{\vn{k}n} )\\ +& \frac{\hbar^2}{2} f'_{\vn{k}n} v^{x}_{\vn{k}n} \frac{ \partial \langle u_{\vn{k}n}| }{\partial k^y} [ v^{x}_{\vn{k}} +v^{x}_{\vn{k}n} ] \frac{\partial |u_{\vn{k}n}\rangle }{\partial k^y}\\ +& \frac{\hbar^2}{2} f'_{\vn{k}n} v^{y}_{\vn{k}n} \frac{ \partial \langle u_{\vn{k}n}| }{\partial k^x} [ v^{y}_{\vn{k}} +v^{y}_{\vn{k}n} ] \frac{\partial |u_{\vn{k}n}\rangle }{\partial k^x}\\ -& \frac{\hbar^2}{2} f'_{\vn{k}n} v^{x}_{\vn{k}n} \frac{ \partial \langle u_{\vn{k}n}| }{\partial k^y} [ v^{y}_{\vn{k}} +v^{y}_{\vn{k}n} ] \frac{\partial |u_{\vn{k}n}\rangle }{\partial k^x}\\ -& \frac{\hbar^2}{2} f'_{\vn{k}n} v^{y}_{\vn{k}n} \frac{ \partial \langle u_{\vn{k}n}| }{\partial k^x} [ v^{x}_{\vn{k}} +v^{x}_{\vn{k}n} ] \frac{\partial |u_{\vn{k}n}\rangle }{\partial k^y} \\ -& 2\hbar^2 f_{\vn{k}n}\sum_{m\ne n} \frac{ \mathcal{M}^{z}_{\vn{k}mn} \left[ \mathcal{M}^{z}_{\vn{k}mn} \right]^{*} } { \mathcal{E}_{\vn{k}n}- \mathcal{E}_{\vn{k}m} } \Biggr ] , \end{aligned} \end{equation} where \begin{equation} \alpha^{ij}_{\vn{k}n}= \frac{\partial^2 \mathcal{E}_{\vn{k}n}}{\partial k^i \partial k^j} \end{equation} is the $ij$ element of the inverse effective mass tensor, \begin{equation} m^{z}_{\vn{k}n}=- {\rm Im} \left[ \frac{ \partial \langle u_{\vn{k}n}| }{\partial k^x} [ \mathcal{E}_{\vn{k}n}-H ] \frac{\partial |u_{\vn{k}n}\rangle }{\partial k^y} \right] \end{equation} is the $z$ component of the orbital moment of the wavepacket associated with band $n$ at $k$-point $\vn{k}$~\cite{wave_packets_sundaram,berry_phase_correction_dos}, \begin{equation} g^{ij}_{\vn{k}n}={\rm Re} \left[ \frac{ \partial \langle u_{\vn{k}n}| }{\partial k^i} \Bigl[ 1- |u_{\vn{k}n}\rangle \langle u_{\vn{k}n}| \Bigr] \frac{\partial |u_{\vn{k}n}\rangle }{\partial k^j} \right] \end{equation} is the $ij$ element of the $\vn{k}$-space quantum metrical tensor~\cite{geometry_quantum_evolution,quantum_geometry_bloch_bands,quantum_metric_without_berry}, $m_e$ is the electron mass, \begin{equation} \Omega^{z}_{\vn{k}n}=-2{\rm Im} \left[ \frac{ \partial \langle u_{\vn{k}n}| }{\partial k^x} \frac{\partial |u_{\vn{k}n}\rangle }{\partial k^y} \right] \end{equation} is the $\vn{k}$-space Berry curvature, and \begin{equation} \vn{\mathcal{M}}_{\vn{k}mn}= \frac{1}{2} \left[ \sum_{n'\ne n} \vn{v}_{\vn{k}mn'}\times\vn{A}_{\vn{k}n'n} + \vn{v}_{\vn{k}n}\times\vn{A}_{\vn{k}mn} \right] \end{equation} are 
interband matrix elements of the magnetic dipole moment and of the position operator~\cite{geometrical_effects_oms}, where \begin{equation}\label{eq_k_space_connection} \vn{A}_{\vn{k}mn}=i \langle u_{\vn{k}m}| \frac{\partial |u_{\vn{k}n}\rangle }{\partial \vn{k}} = i\hbar \frac{ \langle u_{\vn{k}m}| \vn{v} |u_{\vn{k}n}\rangle } { \mathcal{E}_{\vn{k}n} - \mathcal{E}_{\vn{k}m} } \end{equation} is the interband Berry connection. A detailed discussion of all terms in Eq.~\eqref{eq_oms_semicla} has been given by Gao et al.\ in Ref.~\cite{geometrical_effects_oms}. In the semiclassical derivation of Gao et al.\ the terms in lines 5, 6, 7, and 8 in Eq.~\eqref{eq_oms_semicla} are explained by the $k$-space polarization energy and are related to the quadrupole moment of the velocity operator with respect to wave packets~\cite{geometrical_effects_oms}. However, the semiclassical derivation yields a different prefactor for these polarization terms. Ogata et al.\ already pointed out in Ref.~\cite{oms_ogata_fukuyama} that the expression given by Gao et al.\ in Ref.~\cite{geometrical_effects_oms} differs from Eq.~\eqref{eq_oms_fukuyama}. However, Ogata et al.\ compared the semiclassical expression to the Fukuyama formula only in the special case of space-inversion symmetric systems with unbroken time-reversal symmetry. We find that Eq.~\eqref{eq_oms_fukuyama} can generally be written in the form of Eq.~\eqref{eq_oms_semicla}, i.e., Eq.~\eqref{eq_oms_semicla} yields the correct orbital magnetic susceptibility even when time-reversal symmetry is broken and in systems lacking space-inversion symmetry. Only the last line in Eq.~\eqref{eq_oms_semicla} involves interband couplings explicitly, while the first 8 lines in Eq.~\eqref{eq_oms_semicla} are formulated in terms of single-band properties. The Berry curvature and the quantum metric describe the geometrical properties of a single band. In this sense, lines 3 and 4 in Eq.~\eqref{eq_oms_semicla} constitute the geometrical contribution to the orbital magnetic susceptibility~\cite{geometrical_effects_oms}. In section~\ref{sec_exchange_semicla} we will identify analogous geometrical contributions to the exchange constants. \subsection{Exchange constants} \label{sec_exchange_semicla} As discussed in section~\ref{sec_oms_semicla}, the Fukuyama formula for the orbital magnetic susceptibility, Eq.~\eqref{eq_oms_fukuyama}, can be expressed in terms of geometrical properties such as the $k$-space Berry curvature and the quantum metric, and several other single-band properties, such as the orbital magnetic moment and the $k$-space polarization. The expression for the exchange constants, Eq.~\eqref{eq_exchange_params_fukuyama}, has the same structure as Eq.~\eqref{eq_oms_fukuyama} and can be obtained by replacing two velocity operators in Eq.~\eqref{eq_oms_fukuyama} by torque operators. This formal similarity suggests that Eq.~\eqref{eq_exchange_params_fukuyama} can be expressed in terms of Berry curvatures and quantum metrics in mixed phase space. For this purpose we define the mixed Berry curvature~\cite{phase_space_berry} \begin{equation}\label{eq_mixed_curvature} \mathcal{B}^{ij}_{\vn{k}n}= -2\, {\rm Im} \left\langle \frac{\partial u_{\vn{k}n}}{ \partial\hat{n}^{i} } \left| \frac{\partial u_{\vn{k}n}}{\partial k^{j}}\right. \right\rangle, \end{equation} where $\vn{k}$-derivatives are mixed with $\hat{\vn{n}}$-derivatives.
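In practice, derivatives of $|u_{\vn{k}n}\rangle$ with respect to $\vn{k}$ or $\hat{\vn{n}}$ can be evaluated without numerical gauge fixing by means of the perturbation-theory identity $\langle u_{\vn{k}m}|\partial u_{\vn{k}n}/\partial\lambda\rangle=\langle u_{\vn{k}m}|\partial H_{\vn{k}}/\partial\lambda|u_{\vn{k}n}\rangle/(\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m})$ for $m\neq n$. The following Python sketch uses this to compute $\mathcal{B}^{xx}_{\vn{k}n}$ for a two-band Rashba-like ferromagnet with $\hat{\vn{n}}$ along $z$ ($\hbar=m_e=1$; the parameter values are assumed for illustration):
\begin{verbatim}
import numpy as np

# Sketch: gauge-invariant evaluation of the mixed Berry curvature
# B^{xx} via <u_m|d u_n> = <u_m| dH |u_n>/(E_n - E_m), m != n,
# for a two-band Rashba-like ferromagnet (hbar = m_e = 1).
alpha, dV = 0.5, 1.0   # assumed toy parameters
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def B_xx(kx, ky):
    H = 0.5*(kx**2 + ky**2)*np.eye(2) \
        + alpha*(ky*sx - kx*sy) + 0.5*dV*sz
    E, U = np.linalg.eigh(H)
    dH_kx = kx*np.eye(2) - alpha*sy   # dH/dk_x
    dH_nx = 0.5*dV*sx                 # dH/d n^x (tilt of n toward x)
    out = []
    for n in (0, 1):
        m = 1 - n
        num = (U[:, n].conj() @ dH_nx @ U[:, m]) * \
              (U[:, m].conj() @ dH_kx @ U[:, n])
        out.append(-2.0*np.imag(num)/(E[n] - E[m])**2)
    return out   # B^{xx} for the two bands

print(B_xx(0.3, 0.1))
\end{verbatim}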
Similarly, we define the mixed quantum metric \begin{equation} \mathcal{G}^{ij}_{\vn{k}n}={\rm Re} \left[ \frac{ \partial \langle u_{\vn{k}n}| } {\partial \hat{n}^i} \Bigl[ 1- |u_{\vn{k}n}\rangle \langle u_{\vn{k}n}| \Bigr] \frac{\partial |u_{\vn{k}n}\rangle } {\partial k^j} \right]. \end{equation} Additionally, we define the quantum metric in magnetization space \begin{equation} \tilde{g}^{ij}_{\vn{k}n}={\rm Re} \left[ \frac{ \partial \langle u_{\vn{k}n}| }{\partial \hat{n}^i} \Bigl[ 1- |u_{\vn{k}n}\rangle \langle u_{\vn{k}n}| \Bigr] \frac{\partial |u_{\vn{k}n}\rangle }{\partial \hat{n}^j} \right]. \end{equation} The twist-torque moment of wavepackets is described by~\cite{mothedmisot} \begin{equation} \mathcal{A}^{ij}_{\vn{k}n}= - {\rm Im} \left\langle \frac{\partial u_{\vn{k}n}}{ \partial\hat{n}^{i} } \right| \!\Bigl[ \mathcal{E}_{\vn{k}n}-H_{\vn{k}} \Bigr] \!\left| \frac{\partial u_{\vn{k}n}}{\partial k^{j}} \right\rangle , \end{equation} and \begin{equation} \bar{A}^{j}_{\vn{k}mn}=i \langle u_{\vn{k}m}| \frac{\partial |u_{\vn{k}n}\rangle }{\partial \hat{n}^{j}} \end{equation} is the interband Berry connection in magnetization space. The mixed phase-space analogue of the inverse effective mass tensor is given by \begin{equation}\label{eq_mass_mixed} \bar{\alpha}^{ij}_{\vn{k}n}= \frac{\partial^2 \mathcal{E}_{\vn{k}n}}{\partial k^i \partial \hat{n}^j}. \end{equation} In Appendix~\ref{sec_appendix_torque} we explain how the derivatives with respect to magnetization direction are related to matrix elements of the torque operator. In terms of the mixed phase-space quantities Eq.~\eqref{eq_mixed_curvature} through Eq.~\eqref{eq_mass_mixed} the exchange constant can be written as \begin{equation}\label{eq_exchange_params_semicla} \begin{aligned} \mathscr{A}^{xx}&= \int\!\!\frac{\rmd^d k}{(2\pi)^d} \sum_{n} \Biggl [ \frac{1}{24} ( f''_{\vn{k}n} \mathcal{T}^{y}_{\vn{k}n} \mathcal{T}^{y}_{\vn{k}n} \alpha^{xx}_{\vn{k}n} + f'_{\vn{k}n} \bar{\alpha}^{xx}_{\vn{k}n} \bar{\alpha}^{xx}_{\vn{k}n} ) \\ &+\frac{1}{2}f'_{\vn{k}n} \mathcal{A}^{xx}_{\vn{k}n} \mathcal{A}^{xx}_{\vn{k}n} +\frac{1}{3}f_{\vn{k}n} \tilde{g}^{xx}_{\vn{k}n} \frac{\hbar^2}{m_e} \\ &-\frac{5}{6}f_{\vn{k}n} \mathcal{A}^{xx}_{\vn{k}n} \mathcal{B}^{xx}_{\vn{k}n} \\ &-\frac{1}{6}f_{\vn{k}n} \alpha^{xx}_{\vn{k}n} \tilde{g}^{xx}_{\vn{k}n} +\frac{1}{6}f_{\vn{k}n} \bar{\alpha}^{xx}_{\vn{k}n} \mathcal{G}^{xx}_{\vn{k}n} \\ &+\mathscr{A}^{xx}_{\rm pol} + \mathscr{A}^{xx}_{\rm inter} \Biggr], \end{aligned} \end{equation} with \begin{equation}\label{eq_axx_pol} \begin{aligned} \mathscr{A}^{xx}_{\rm pol}&= \int\!\!\frac{\rmd^d k}{(2\pi)^d}\sum_{n} \Biggl [\\ &-\frac{1}{6} f'_{\vn{k}n} \mathcal{T}^{y}_{\vn{k}n} \left\langle \frac{\partial u_{\vn{k}n}}{ \partial k^{x} } \right| [ \mathcal{T}^{y}_{\phantom{k}} +2 \mathcal{T}^{y}_{\vn{k}n} ] \left| \frac{\partial u_{\vn{k}n}}{\partial k^{x}} \right\rangle\\ & -\frac{1}{6}\hbar^2 f'_{\vn{k}n} v^{x}_{\vn{k}n} \left\langle \frac{\partial u_{\vn{k}n}}{ \partial \hat{n}^{x} } \right| [ v^{x}_{\vn{k}} + v^{x}_{\vn{k}n} ] \left| \frac{\partial u_{\vn{k}n}}{\partial \hat{n}^{x}} \right\rangle\\ & +\frac{1}{3}\hbar f'_{\vn{k}n} \mathcal{T}_{\vn{k}n}^{y} \left\langle \frac{\partial u_{\vn{k}n}}{ \partial\hat{n}^{x} } \right| [ 2v^{x}_{\vn{k}} + v^{x}_{\vn{k}n} ] \left| \frac{\partial u_{\vn{k}n}}{\partial k^{x}} \right\rangle\Biggr], \end{aligned} \end{equation} and \begin{equation}\label{eq_axx_inter} \begin{aligned} \mathscr{A}^{xx}_{\rm inter}&= \int\!\!\frac{\rmd^d k}{(2\pi)^d}\sum_{n} \Biggl [ 
\frac{\hbar^2}{3} f_{\vn{k}n} v^{x}_{\vn{k}n} v^{x}_{\vn{k}n} \sum_{m\ne n} \frac{ \bar{A}^{x}_{\vn{k}mn} [\bar{A}^{x}_{\vn{k}mn}]^{*} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}} \\ &- \frac{2}{3} \hbar f_{\vn{k}n} v^{x}_{\vn{k}n} \mathcal{T}^{y}_{\vn{k}n} \sum_{m\ne n} \frac{ \bar{A}^{x}_{\vn{k}mn} [A^{x}_{\vn{k}mn}]^{*} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}} \\ & -\frac{\hbar}{3} f_{\vn{k}n} \sum_{m\ne n} \frac{ \sum\limits_{q\ne n} [ v^{x}_{\vn{k}mq} \bar{A}^{x}_{\vn{k}qn} ]^{*} \sum\limits_{r\ne n} \mathcal{T}^{y}_{\vn{k}mr} A^{x}_{\vn{k}rn} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}}\\ & +\frac{2}{3}\hbar^2 f_{\vn{k}n} \sum_{m\ne n} \frac{ \sum\limits_{q\ne n} [ v^{x}_{\vn{k}mq} \bar{A}^{x}_{\vn{k}qn} ]^{*} \sum\limits_{r\ne n} v^{x}_{\vn{k}mr} \bar{A}^{x}_{\vn{k}rn} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}}\\ & -\frac{1}{3}\hbar f_{\vn{k}n} v^{x}_{\vn{k}n} \sum_{m\ne n} \frac{ [ \bar{A}^{x}_{\vn{k}mn} ]^{*} \sum\limits_{r\ne n} \mathcal{T}^{y}_{\vn{k}mr} A^{x}_{\vn{k}rn} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}}\\ & -\frac{2}{3}\hbar f_{\vn{k}n} \mathcal{T}^{y}_{\vn{k}n} \sum_{m\ne n} \frac{ [ A^{x}_{\vn{k}mn} ]^{*} \sum\limits_{r\ne n} v^{x}_{\vn{k}mr} \bar{A}^{x}_{\vn{k}rn} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}}\\ & + f_{\vn{k}n} \mathcal{T}^{y}_{\vn{k}n} \sum_{m\ne n} \frac{ [ A^{x}_{\vn{k}mn} ]^{*} \sum\limits_{r\ne n} \mathcal{T}^{y}_{\vn{k}mr} A^{x}_{\vn{k}rn} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}} \Biggr], \end{aligned} \end{equation} where we defined $\mathcal{T}^{y}_{\vn{k}nn'}=\langle u_{\vn{k}n}|\mathcal{T}^{y}_{\phantom{k}}| u_{\vn{k}n'}\rangle$ and $\mathcal{T}^{y}_{\vn{k}n}=\mathcal{T}^{y}_{\vn{k}nn}$. Eq.~\eqref{eq_exchange_params_semicla} differs substantially in structure from Eq.~\eqref{eq_oms_semicla}, while the corresponding Fukuyama-type expressions, Eq.~\eqref{eq_oms_fukuyama} and Eq.~\eqref{eq_exchange_params_fukuyama}, are very similar structurally. The structural differences between Eq.~\eqref{eq_exchange_params_semicla} and Eq.~\eqref{eq_oms_semicla} arise, because there is no integration over the magnetization direction, only a Brillouin zone integration, and therefore the identity \begin{equation}\label{eq_magderiv_fermifunc} \vn{\mathcal{T}}_{\vn{k}n}^{\phantom{y}} f^{(m+1)}_{\vn{k}n}= \hat{\vn{n}}\times \frac{\partial f^{(m)}_{\vn{k}n}}{\partial \hat{\vn{n}}} \end{equation} cannot be combined with integration by parts in order to rewrite high derivatives of the Fermi function in terms of lower derivatives of the Fermi function while Eq.~\eqref{eq_kderiv_fermifunc} can be used for this purpose. 
For example, the first line in Eq.~\eqref{eq_exchange_params_semicla} is related formally to the Landau-Peierls susceptibility in the first line of Eq.~\eqref{eq_oms_semicla}: In the case of the orbital magnetic susceptibility the torque operators in the first line of Eq.~\eqref{eq_exchange_params_semicla} turn into velocity operators and one can use integration by parts such that \begin{equation} \begin{aligned} &\int\!\!\frac{\rmd^d k}{(2\pi)^d} f''_{\vn{k}n} v^{y}_{\vn{k}n} v^{y}_{\vn{k}n} \alpha^{xx}_{\vn{k}n}= \int\!\!\frac{\rmd^d k}{(2\pi)^d} \frac{1}{\hbar} \frac{ \partial f'_{\vn{k}n} } { \partial k^y } v^{y}_{\vn{k}n} \alpha^{xx}_{\vn{k}n}=\\ -&\int\!\!\frac{\rmd^d k}{(2\pi)^d} \frac{1}{\hbar} f'_{\vn{k}n} \frac{ \partial} {\partial k^y} [ v^{y}_{\vn{k}n} \alpha^{xx}_{\vn{k}n}]=\\ -&\int\!\!\frac{\rmd^d k}{(2\pi)^d} \frac{1}{\hbar} f'_{\vn{k}n} [ \alpha^{yy}_{\vn{k}n} \alpha^{xx}_{\vn{k}n} + v^{y}_{\vn{k}n} \frac{ \partial} {\partial k^y} \alpha^{xx}_{\vn{k}n}],\\ \end{aligned} \end{equation} which contains the term $f'_{\vn{k}n}\alpha^{yy}_{\vn{k}n}\alpha^{xx}_{\vn{k}n}$ found also in the first line of Eq.~\eqref{eq_oms_semicla}. The lines 2, 3 and 4 in Eq.~\eqref{eq_exchange_params_semicla} correspond to the lines 2, 3 and 4 in Eq.~\eqref{eq_oms_semicla}, where the twist torque moment $\mathcal{A}_{\vn{k}n}^{xx}$ replaces the orbital moment $m^{z}_{\vn{k}n}$, the $k$-space quantum metric $g_{\vn{k}n}^{yy}$ is replaced by the magnetization-space quantum metric $\tilde{g}_{\vn{k}n}^{xx}$, the mixed Berry curvature replaces the $k$-space Berry curvature, and the off-diagonal elements of the inverse effective mass, $\alpha_{\vn{k}n}^{yx}$, and of the $k$-space quantum metric, $g_{\vn{k}n}^{xy}$, are replaced by their mixed phase-space counterparts. The contribution $\mathscr{A}_{\rm pol}^{xx}$ defined in Eq.~\eqref{eq_axx_pol} corresponds to the lines 5, 6, 7 and 8 in Eq.~\eqref{eq_oms_semicla}, which describe the $k$-space polarization energy. The contribution $\mathscr{A}_{\rm inter}^{xx}$ defined in Eq.~\eqref{eq_axx_inter} corresponds to the last line in Eq.~\eqref{eq_oms_semicla} and is the only term that contains interband couplings explicitly. Several terms in Eq.~\eqref{eq_exchange_params_semicla} are zero when SOI is not included in the Hamiltonian: The mixed phase-space analogue of the inverse effective mass, $\bar{\alpha}_{\vn{k}n}^{ij}$, is zero without SOI, because the band energy does not depend on the magnetization direction when SOI is absent. Additionally, $\mathcal{T}_{\vn{k}n}^{y}=0$, $\mathcal{A}^{ij}_{\vn{k}n}=0$, $\mathcal{B}^{ij}_{\vn{k}n}=0$ and $\mathcal{G}^{ij}_{\vn{k}n}=0$ in the absence of SOI. 
Thus, when SOI is absent the exchange constants are given by the considerably simpler expression \begin{equation}\label{eq_exchange_params_semicla_nosoi} \begin{aligned} \mathscr{A}^{xx}&= \int\!\!\frac{\rmd^d k}{(2\pi)^d} \sum_{n} \Biggl [ \frac{1}{3}f_{\vn{k}n} \tilde{g}^{xx}_{\vn{k}n} \frac{\hbar^2}{m_e} \\ &-\frac{1}{6}f_{\vn{k}n} \alpha^{xx}_{\vn{k}n} \tilde{g}^{xx}_{\vn{k}n} \\ & -\frac{\hbar^2}{6} f'_{\vn{k}n} v_{\vn{k}n}^{x} \left\langle \frac{\partial u_{\vn{k}n}}{ \partial \hat{n}^{x} } \right| [ v^{x}_{\vn{k}} + v^{x}_{\vn{k}n} ] \left| \frac{\partial u_{\vn{k}n}}{\partial \hat{n}^{x}} \right\rangle\\ &+ \frac{\hbar^2}{3} f_{\vn{k}n} v^{x}_{\vn{k}n} v^{x}_{\vn{k}n} \sum_{m\ne n} \frac{ \bar{A}^{x}_{\vn{k}mn} [\bar{A}^{x}_{\vn{k}mn}]^{*} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}} \\ & -\frac{\hbar}{3} f_{\vn{k}n} \sum_{m\ne n} \frac{ \sum\limits_{q\ne n} [ v^{x}_{\vn{k}mq} \bar{A}^{x}_{\vn{k}qn} ]^{*} \sum\limits_{r\ne n} \mathcal{T}^{y}_{\vn{k}mr} A^{x}_{\vn{k}rn} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}}\\ & +\frac{2}{3} \hbar^2 f_{\vn{k}n} \sum_{m\ne n} \frac{ \sum\limits_{q\ne n} [ v^{x}_{\vn{k}mq} \bar{A}^{x}_{\vn{k}qn} ]^{*} \sum\limits_{r\ne n} v^{x}_{\vn{k}mr} \bar{A}^{x}_{\vn{k}rn} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}}\\ & -\frac{1}{3}\hbar f_{\vn{k}n} v^{x}_{\vn{k}n} \sum_{m\ne n} \frac{ [ \bar{A}^{x}_{\vn{k}mn} ]^{*} \sum\limits_{r\ne n} \mathcal{T}^{y}_{\vn{k}mr} A^{x}_{\vn{k}rn} } {\mathcal{E}_{\vn{k}n}-\mathcal{E}_{\vn{k}m}} \Biggr]. \end{aligned} \end{equation} In Appendix~\ref{sec_analytical} we discuss how to evaluate Eq.~\eqref{eq_exchange_params_semicla_nosoi} analytically for simple model systems. The lines 3 and 4 in Eq.~\eqref{eq_exchange_params_semicla} are the geometrical contribution to the exchange constants. It consists of three terms: \begin{equation}\label{eq_geo1} \mathscr{A}_{\rm geo1}^{xx}= -\frac{5}{6} \int\!\!\frac{\rmd^d k}{(2\pi)^d} \sum_{n} f_{\vn{k}n} \mathcal{A}^{xx}_{\vn{k}n} \mathcal{B}^{xx}_{\vn{k}n} \end{equation} and \begin{equation}\label{eq_geo2} \mathscr{A}_{\rm geo2}^{xx}= -\frac{1}{6} \int\!\!\frac{\rmd^d k}{(2\pi)^d} \sum_{n} f_{\vn{k}n} \alpha^{xx}_{\vn{k}n} \tilde{g}^{xx}_{\vn{k}n} \end{equation} and \begin{equation}\label{eq_geo3} \mathscr{A}_{\rm geo3}^{xx}= \frac{1}{6} \int\!\!\frac{\rmd^d k}{(2\pi)^d} \sum_{n} f_{\vn{k}n} \bar{\alpha}^{xx}_{\vn{k}n} \mathcal{G}^{xx}_{\vn{k}n}. \end{equation} $\mathcal{B}^{xx}_{\vn{k}n}$ and $\mathcal{G}^{xx}_{\vn{k}n}$ describe geometrical properties of the bands in mixed phase space. When SOI is not included in the Hamiltonian $\mathscr{A}_{\rm geo1}^{xx}$ and $\mathscr{A}_{\rm geo3}^{xx}$ are zero. $\mathscr{A}_{\rm geo2}^{xx}$ is nonzero even in the absence of SOI. It involves $\tilde{g}^{xx}_{\vn{k}n}$, which describes the geometrical properties of the bands in real space. According to Eq.~\eqref{eq_axx_pol} $\mathscr{A}^{xx}_{\rm pol}$ contains only terms with $f'_{\vn{k}n}$. The derivative of the Fermi function becomes large close to the Fermi energy. In particular at zero temperature we have $f'_{\vn{k}n}=-\delta(\mathcal{E}_{\rm F}-\mathcal{E}_{\vn{k}n})$. Therefore only states close to the Fermi level contribute to $\mathscr{A}^{xx}_{\rm pol}$, i.e., $\mathscr{A}^{xx}_{\rm pol}$ is a Fermi surface term. In contrast, $\mathscr{A}^{xx}_{\rm inter}$ (Eq.~\eqref{eq_axx_inter}) contains only terms with $f_{\vn{k}n}$, i.e., all states below the Fermi energy contribute to $\mathscr{A}^{xx}_{\rm inter}$. Hence, $\mathscr{A}^{xx}_{\rm inter}$ is a Fermi sea term. 
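To make the structure of these terms concrete, the following sketch evaluates $\mathscr{A}^{xx}_{\rm geo1}$ of Eq.~\eqref{eq_geo1} at zero temperature for the same two-band Rashba-like toy ferromagnet as in the sketch above, using the gauge-invariant perturbation-theory forms of $\mathcal{A}^{xx}_{\vn{k}n}$ and $\mathcal{B}^{xx}_{\vn{k}n}$ (the Fermi energy, the model parameters, and the $k$-space cutoff are assumed values; this is an illustration, not a full implementation):
\begin{verbatim}
import numpy as np

# Sketch: zero-temperature evaluation of A_geo1, Eq. (eq_geo1),
# for a two-band Rashba-like toy ferromagnet (hbar = m_e = 1).
alpha, dV, EF = 0.5, 1.0, 0.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def integrand(kx, ky):
    H = 0.5*(kx**2 + ky**2)*np.eye(2) \
        + alpha*(ky*sx - kx*sy) + 0.5*dV*sz
    E, U = np.linalg.eigh(H)
    dH_kx = kx*np.eye(2) - alpha*sy
    dH_nx = 0.5*dV*sx
    total = 0.0
    for n in (0, 1):
        if E[n] > EF:                # f_kn at T = 0
            continue
        m = 1 - n
        num = (U[:, n].conj() @ dH_nx @ U[:, m]) * \
              (U[:, m].conj() @ dH_kx @ U[:, n])
        Axx = -np.imag(num)/(E[n] - E[m])         # twist-torque moment
        Bxx = -2.0*np.imag(num)/(E[n] - E[m])**2  # mixed Berry curvature
        total += Axx*Bxx
    return total

ks, dk = np.linspace(-4.0, 4.0, 201, retstep=True)
A_geo1 = -5.0/6.0*sum(integrand(kx, ky)
                      for kx in ks for ky in ks)*dk**2/(2*np.pi)**2
print("A_geo1 =", A_geo1)
\end{verbatim}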
Eq.~\eqref{eq_exchange_params_semicla} contains additional Fermi surface and Fermi sea terms. The exchange constant in magnetic band insulators arises from the Fermi sea terms, since the Fermi surface terms are zero in insulators. \section{Gauge-field approach} \label{sec_gauge_field} The appearance of gauge fields and their application in spintronics have been discussed in detail in the review article Ref.~\cite{gauge_fields_spintronics}. They can occur in real space, in momentum space, and in time. Here, we are interested in the Berry gauge field associated with electron spins that adiabatically follow noncollinear magnetic textures. This gauge field mimics the magnetic vector potential known from electrodynamics. The curl of this effective magnetic vector potential has consequences similar to those of a real magnetic field. In particular, it deflects electrons by an effective Lorentz force, which leads to the topological Hall effect~\cite{bruno_the_2004}. The curl of the effective magnetic vector potential is nonzero when the scalar spin chirality of the magnetic texture is nonzero, for example in skyrmions. For the discussion of the exchange constants it is not necessary to consider systems with nonzero scalar spin chirality. However, even when the curl of the effective magnetic vector potential is zero, it has consequences; in particular, it affects the energy of the eigenstates, as we will see below. In the case of the topological Hall effect the gauge-field approach has been developed for systems without SOI~\cite{bruno_the_2004}. In the general case it is difficult to apply the gauge-field approach to magnetic systems with SOI. However, under certain conditions the exchange constants can be obtained from a gauge-field approach even in the presence of SOI. We demonstrate this in the following. We will show that the exchange constants calculated with the gauge-field approach agree with those given by Eq.~\eqref{eq_exchange_params_fukuyama}. This will confirm the validity of Eq.~\eqref{eq_exchange_params_fukuyama}. We consider the Rashba model with an additional exchange splitting (see Ref.~\cite{rashba_review} for a recent review on the Rashba model) \begin{equation}\label{eq_rashba_model} H=\frac{-\hbar^2}{2m_e} \Delta-i \alpha (\vn{\nabla}\times\hat{\vn{e}}_{z})\cdot\vn{\sigma}+ \frac{\Delta V}{2} \vn{\sigma} \cdot \hat{\vn{n}}_{\rm c}(\vn{r}) , \end{equation} where the first, second and third terms on the right-hand side describe the kinetic energy, the Rashba spin-orbit coupling and the exchange interaction, respectively. We focus on the case of a flat cycloidal spin-spiral, where the magnetization direction $\hat{\vn{n}}_{\rm c}(\vn{r})$ is given by \begin{equation}\label{eq_spin_spiral_cycloid} \hat{\vn{n}}_{\rm c}(\vn{r})= \begin{pmatrix} \sin(qx)\\ 0\\ \cos(qx) \end{pmatrix}. \end{equation} The exchange interaction describing the noncollinear spin-spiral in Eq.~\eqref{eq_rashba_model} can be transformed into an effective exchange interaction of a collinear magnet with the help of the unitary transformation \begin{equation}\label{eq_gauge_trafo_matrix} U(x)= \left( \begin{array}{cc} \cos(\frac{qx}{2}) &-\sin(\frac{qx}{2})\\[6pt] \sin(\frac{qx}{2}) &\cos(\frac{qx}{2}) \end{array} \right) \end{equation} such that~\cite{bruno_the_2004} \begin{equation} U^{\dagger}(x) \frac{\Delta V}{2} \vn{\sigma} \cdot \hat{\vn{n}}_{\rm c}(\vn{r})U(x)= \frac{\Delta V}{2} \sigma_{z}.
\end{equation} The kinetic energy in Eq.~\eqref{eq_rashba_model} transforms under this unitary transformation as follows~\cite{bruno_the_2004}: \begin{equation} \begin{aligned} &-\frac{\hbar^2}{2m_e} U^{\dagger} \Delta U =\\ &=-\frac{\hbar^2}{2m_e} U^{\dagger} \frac{\partial}{\partial \vn{r}}\cdot \left( U\frac{\partial}{\partial \vn{r}} +\frac{\partial U}{\partial \vn{r}} \right)\\ &=-\frac{\hbar^2}{2m_e} \left( \Delta +2 U^{\dagger} \frac{\partial U}{\partial x} \frac{\partial}{\partial x} + U^{\dagger} \frac{\partial^2 U}{\partial x^2} \right). \end{aligned} \end{equation} The derivatives of $U$ with respect to the $x$ coordinate are \begin{equation}\label{eq_gauge_trafo_matrix_der1} \frac{\partial U(x)}{\partial x}=\frac{q}{2} \left( \begin{array}{cc} -\sin(\frac{qx}{2}) &-\cos(\frac{qx}{2})\\[6pt] \cos(\frac{qx}{2}) &-\sin(\frac{qx}{2}) \end{array} \right) \end{equation} and \begin{equation}\label{eq_gauge_trafo_matrix_der2} \frac{\partial^2 U(x)}{\partial x^2}= -\frac{q^2}{4}U(x) \end{equation} and we have \begin{equation} [U(x)]^{\dagger} \frac{\partial U(x)}{\partial x}=\frac{q}{2}\left( \begin{array}{cc} 0 &-1\\ 1 &0 \end{array} \right)=\frac{q}{2i}\sigma_{y} \end{equation} such that the kinetic energy transforms as \begin{equation} -\frac{\hbar^2}{2m_e} U^{\dagger} \Delta U =-\frac{\hbar^2}{2m_e} \left( \Delta -iq\sigma_y\frac{\partial}{\partial x} -\frac{q^2}{4} \right).\\ \end{equation} Next, we need to find out how the Rashba SOI \begin{equation} \frac{1}{i}\alpha\vn{\sigma}\cdot(\vn{\nabla}\times\hat{\vn{e}}_z) =\frac{1}{i}\alpha \left[ \sigma_{x} \frac{\partial}{\partial y} -\sigma_{y} \frac{\partial}{\partial x} \right] \end{equation} transforms under $U$. We have \begin{equation} \begin{aligned} [U(x)]^{\dagger} \sigma_{y} \frac{\partial U(x)}{\partial x}=-i\frac{q}{2} \end{aligned} \end{equation} and \begin{equation} \begin{aligned} [U(x)]^{\dagger} \sigma_{y} U(x)=\sigma_{y} \end{aligned} \end{equation} and thus \begin{equation} \begin{aligned} &-[U(x)]^{\dagger} \left[ \frac{\alpha}{i} \sigma_{y} \frac{\partial}{\partial x} \right] U(x)=\\ =&-[U(x)]^{\dagger} \left[ \alpha \sigma_{y} \right] U(x) \frac{1}{i} \frac{\partial }{\partial x}\\ &-[U(x)]^{\dagger} \left[ \frac{\alpha}{i} \sigma_{y} \right] \frac{\partial U(x) }{\partial x} =\\ =&- \alpha \sigma_{y} \frac{1}{i} \frac{\partial }{\partial x} +\frac{\alpha q}{2}.\\ \end{aligned} \end{equation} However, \begin{equation}\label{eq_gauge_trafo_sigmax} \begin{aligned} &[U(x)]^{\dagger} \sigma_{x} U(x) =\\ &= \left( \begin{matrix} 2\cos(\frac{qx}{2})\sin(\frac{qx}{2}) &2\cos^2(\frac{qx}{2})-1\\[6pt] 2\cos^2(\frac{qx}{2})-1 & -2\cos(\frac{qx}{2})\sin(\frac{qx}{2}) \end{matrix} \right) \end{aligned} \end{equation} depends on the $x$ coordinate. Consequently, the application of the $U$ transformation to Eq.~\eqref{eq_rashba_model} merely transforms the $x$-dependence of the exchange interaction into an $x$-dependence of the SOI, and no simplification is achieved by this transformation. Therefore, we now consider the one-dimensional version of the Rashba model with an additional exchange splitting \begin{equation}\label{eq_rashba_model_onedim} H=\frac{-\hbar^2}{2m_e} \frac{\partial^2}{\partial x^2} +i \alpha \sigma_{y}\frac{\partial}{\partial x} + \frac{\Delta V}{2} \vn{\sigma} \cdot \hat{\vn{n}}_{\rm c}(\vn{r}). \end{equation} The one-dimensional Rashba model can be used to describe spin-split bands in one-dimensional atomic chains on surfaces~\cite{spin_split_bands_onedim_chain}.
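For the collinear limit ($q=0$, $\hat{\vn{n}}_{\rm c}=\hat{\vn{e}}_z$) the spectrum of Eq.~\eqref{eq_rashba_model_onedim} is simply $\mathcal{E}_{\pm}(k)=\hbar^2k^2/(2m_e)\pm\sqrt{(\alpha k)^2+(\Delta V/2)^2}$. A short sketch, assuming $\hbar^2/(2m_e)\approx 3.81$\,eV\,\AA$^2$ and the model parameters used later in section~\ref{sec_one_dim_rashba}, scans this dispersion for the band minima:
\begin{verbatim}
import numpy as np

# Sketch: dispersion of the collinear (q = 0) 1D Rashba model,
# E_pm(k) = hb2_2m*k^2 +/- sqrt((alpha*k)^2 + (dV/2)^2),
# with hb2_2m = hbar^2/(2 m_e) ~ 3.81 eV A^2 (assumed parameters).
hb2_2m, alpha, dV = 3.81, 2.0, 1.0    # eV A^2, eV A, eV
k = np.linspace(-1.5, 1.5, 6001)      # 1/A
rad = np.sqrt((alpha*k)**2 + (dV/2)**2)
Em, Ep = hb2_2m*k**2 - rad, hb2_2m*k**2 + rad
print("lower-band minimum: %.3f eV at |k| = %.3f 1/A"
      % (Em.min(), abs(k[np.argmin(Em)])))
print("upper-band minimum: %.3f eV at k = 0" % Ep.min())
\end{verbatim}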
Application of the $U$ transformation to Eq.~\eqref{eq_rashba_model_onedim} yields \begin{equation}\label{eq_onedim_rashba_model_gauge} \begin{aligned} \tilde{H}=[U(x)]^{\dagger} H U(x) &= -\frac{\hbar^2}{2m_e} \left[ \frac{\partial^2}{\partial x^2} -iq\sigma_y\frac{\partial}{\partial x} -\frac{q^2}{4} \right]+\\ &+ \frac{\Delta V}{2} \sigma_{z} - \alpha \sigma_{y} \frac{1}{i} \frac{\partial }{\partial x} +\frac{\alpha q}{2} \end{aligned} \end{equation} and the corresponding crystal momentum representation of the Hamiltonian is given by \begin{equation}\label{eq_onedim_rashba_gauge_crystal_field} \begin{aligned} \tilde{H}_{k_x} &= \frac{\hbar^2}{2m_e} \left[ k_x^2 -q k_{x} \sigma_y +\frac{q^2}{4} \right]+\\ &+ \frac{\Delta V}{2} \sigma_{z} - \alpha k_{x} \sigma_{y} +\frac{\alpha q}{2}. \end{aligned} \end{equation} Since the $U$ transformation preserves the eigenvalues, the Hamiltonian $\tilde{H}_{k_x}$ has the same spectrum as Eq.~\eqref{eq_rashba_model_onedim}. However, $\tilde{H}_{k_x}$ is position-independent and thus it is straightforward to determine its eigenvalues, while the original Hamiltonian in Eq.~\eqref{eq_rashba_model_onedim} is more difficult to deal with due to the position-dependence of the exchange term for the cycloidal spin-spiral. The reason why the $U$ transformation can be used to simplify Eq.~\eqref{eq_rashba_model_onedim} into $\tilde{H}_{k_x}$ lies in the spin-rotation symmetry of Eq.~\eqref{eq_rashba_model_onedim}: The Hamiltonian is invariant under the simultaneous rotation of the spin operator and the magnetization direction $\hat{\vn{n}}_{c}$ around the $y$ axis. In contrast, the Hamiltonian of the two-dimensional Rashba model, Eq.~\eqref{eq_rashba_model}, does not exhibit this symmetry when $\alpha\neq 0$. The Hamiltonian Eq.~\eqref{eq_onedim_rashba_model_gauge} can be rewritten in the form \begin{equation} \tilde{H}=\frac{1}{2m_e} (p_x+eA^{\rm eff})^2+\frac{\Delta V}{2}\sigma_{z}-\frac{m_e \alpha^2}{2 \hbar^2} \end{equation} where $p_x=-i\hbar \partial/\partial x$ is the $x$ component of the momentum operator and \begin{equation} A^{\rm eff}=-\frac{m_e}{e \hbar} \left( \alpha+\frac{\hbar^2}{2m_e}q \right) \sigma_{y} \end{equation} can be considered as an effective magnetic vector potential, which is why we refer to this method as the gauge-field approach. The free energy density $F_{q}$ of the one-dimensional Rashba model with exchange splitting, Eq.~\eqref{eq_rashba_model_onedim}, can be obtained from \begin{equation} F_{q}= -\frac{1}{\beta} \int\frac{d\,k_{x}}{2\pi}\sum_{n} \ln \left[ 1+e^{-\beta(\mathcal{E}_{k_x,q,n}-\mu)} \right] , \end{equation} where $\mathcal{E}_{k_x,q,n}$ denotes the $n$th eigenvalue of $\tilde{H}_{k_x}$ at $k$-point $k_{x}$ and spin-spiral wavenumber $q$. Equating $F_{q}$ and the phenomenological expression for the free energy \begin{equation} F_{q}=F_{0}+D^{yx}q+\mathscr{A}^{xx}q^2 \end{equation} allows us to determine the DMI-coefficient and the exchange parameter as follows: \begin{equation}\label{eq_gauge_approach_dmi} D^{yx}=\frac{F_{q}-F_{-q}}{2q} \end{equation} and \begin{equation}\label{eq_gauge_approach_exchange} \mathscr{A}^{xx}=\frac{ F_{q}+F_{-q} -2F_{0} }{2 q^2}. \end{equation} In section~\ref{sec_one_dim_rashba} we will compare the exchange constant $\mathscr{A}^{xx}$ obtained from Eq.~\eqref{eq_gauge_approach_exchange} to the one given by Eq.~\eqref{eq_exchange_params_fukuyama} and find perfect agreement between these two rather different approaches.
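A minimal numerical sketch of this gauge-field recipe: diagonalize $\tilde{H}_{k_x}$, accumulate $F_q$, and extract $D^{yx}$ and $\mathscr{A}^{xx}$ by finite differences in $q$. We assume $\hbar^2/(2m_e)\approx 3.81$\,eV\,\AA$^2$; the temperature and the $k$ cutoff are our choices for illustration:
\begin{verbatim}
import numpy as np

# Sketch of the gauge-field approach for the 1D Rashba model:
# F_q from the spectrum of H~_{k_x}, then D^yx and A^xx by
# finite differences in q. Parameters/cutoffs are assumed values.
hb2_2m = 3.81                          # hbar^2/(2 m_e) in eV A^2
alpha, dV, mu, kBT = 2.0, 1.0, 0.0, 0.025
beta = 1.0/kBT
ks, dk = np.linspace(-8.0, 8.0, 20001, retstep=True)

def F(q):
    a = hb2_2m*(ks**2 + q**2/4) + 0.5*alpha*q   # identity part
    b = -(hb2_2m*q + alpha)*ks                  # sigma_y part
    c = 0.5*dV                                  # sigma_z part
    rad = np.sqrt(b**2 + c**2)
    x = beta*(np.concatenate([a + rad, a - rad]) - mu)
    # log(1 + exp(-x)) written stably for large |x|
    ln = np.where(x > 0, np.log1p(np.exp(-np.abs(x))),
                  -x + np.log1p(np.exp(-np.abs(x))))
    return -np.sum(ln)*dk/(2*np.pi)/beta        # eV per Angstrom

q = 0.006   # 1/A, small spiral wavenumber
print("D^yx =", (F(q) - F(-q))/(2*q), "eV")
print("A^xx =", (F(q) + F(-q) - 2*F(0))/(2*q**2), "eV A")
\end{verbatim}
Note that $\mathscr{A}^{xx}$ comes out in units of energy times length, consistent with the dimensional analysis for $d=1$ given below Eq.~\eqref{eq_exchange_params_real_axis}.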
Additionally, we will show in section~\ref{sec_one_dim_rashba} that the DMI-coefficient $D^{yx}$ obtained from Eq.~\eqref{eq_gauge_approach_dmi} is in perfect agreement with the one given by the Berry-phase theory of DMI~\cite{mothedmisot,phase_space_berry,itsot,spicudmi}, which in the one-dimensional case reads \begin{equation}\label{eq_dmi_berry} D^{yx}\!\!=\!\!\int\! \frac{d\, k_x}{2\pi} \!\sum_{n} \Bigl[ f_{\vn{k}n}\mathcal{A}_{\vn{k}n}^{xx} \!+\!\frac{1}{\beta} \ln \left[ 1+ e^{-\beta( \mathcal{E}_{\vn{k}n} -\mu )} \right] \mathcal{B}_{\vn{k}n}^{xx} \Bigr]. \end{equation} \subsection{Two-dimensional electron gas without SOI} \label{sec_gauge_field_rashba2d} Due to Eq.~\eqref{eq_gauge_trafo_sigmax}, the $U$ transformation does not lead to simplifications in the case of the two-dimensional Rashba model Eq.~\eqref{eq_rashba_model} when $\alpha\ne 0$. However, in the case of $\alpha=0$ the $U$ transformation leads to a simplification of Eq.~\eqref{eq_rashba_model}: \begin{equation} \begin{aligned} \tilde{H}=&[U(x)]^{\dagger} H U(x) =\\ =&-\frac{\hbar^2}{2m_e} \left[ \Delta -iq\sigma_y\frac{\partial}{\partial x} -\frac{q^2}{4} \right]+ \frac{\Delta V}{2} \sigma_{z}, \end{aligned} \end{equation} with corresponding crystal momentum representation \begin{equation}\label{eq_rash_2d_tilde_k} \begin{aligned} &\tilde{H}_{\vn{k}} = \frac{\hbar^2}{2m_e} \left[ \vn{k}^2 -q k_{x} \sigma_y +\frac{q^2}{4} \right]+ \frac{\Delta V}{2} \sigma_{z}. \end{aligned} \end{equation} Since $\tilde{H}_{\vn{k}}$ is position-independent, its eigenvalues $\mathcal{E}_{\vn{k},q,n}$ can be determined easily. The free energy density is then obtained from \begin{equation} F_{q}= -\frac{1}{\beta} \int\frac{d^2 k}{(2\pi)^2}\sum_{n} \ln \left[ 1+e^{-\beta(\mathcal{E}_{\vn{k},q,n}-\mu)} \right] \end{equation} and Eq.~\eqref{eq_gauge_approach_exchange} can be used to determine the exchange constant $\mathscr{A}^{xx}$. \section{Exchange constants in model systems} \label{sec_num_results} \subsection{One-dimensional Rashba model} \label{sec_one_dim_rashba} \begin{figure} \includegraphics[width=\linewidth]{compare_fuku_spirals_exchange.eps} \includegraphics[width=\linewidth]{compare_mothedmi_spirals.eps} \includegraphics[width=\linewidth]{rashba_bandstructure.eps} \caption{\label{fig_gauge_vs_fukuyama} (a) Exchange constant $\mathscr{A}^{xx}$ and (b) DMI constant $D^{yx}$ in the one-dimensional Rashba model Eq.~\eqref{eq_rashba_model_onedim} as a function of Fermi energy for the model parameters $\Delta V=1$eV and $\alpha=2$eV\AA{}. Results obtained from the gauge-field approach (dashed lines) agree with the exchange constant from the Fukuyama-type approach and with the DMI constant from the Berry-phase approach (solid lines), respectively. (c) Band structure of the one-dimensional Rashba model. } \end{figure} \begin{figure} \includegraphics[width=\linewidth]{spix_vs_alpha.eps} \includegraphics[width=\linewidth]{spiy_vs_alpha.eps} \caption{\label{fig_axx_vs_axy} (a) Exchange constant $\mathscr{A}^{xx}$ and (b) $\mathscr{A}^{xy}$ in the one-dimensional Rashba model Eq.~\eqref{eq_rashba_model_onedim} as a function of the Rashba parameter $\alpha$. The Fermi energy is set to zero and $\Delta V=1$eV. Solid lines: Complete exchange constants. The geometrical contributions $\mathscr{A}_{\rm geo1}$ (dashed), $\mathscr{A}_{\rm geo2}$ (dotted), and $\mathscr{A}_{\rm geo3}$ (dashed-dotted) as defined in Eqs.~\eqref{eq_geo1} through \eqref{eq_geo3} are shown as well.
} \end{figure} In the presence of SOI both the exchange constant $\mathscr{A}^{ij}$ as obtained from Eq.~\eqref{eq_exchange_params_fukuyama} and the DMI constant $D^{ij}$ as obtained from Eq.~\eqref{eq_dmi_berry} may depend on the magnetization direction $\hat{\vn{n}}$. However, as we explained in the discussion below Eq.~\eqref{eq_rashba_model_onedim}, rotations in spin-space around the $y$ axis are a symmetry operation of the one-dimensional Rashba model. Since we consider the special case of a cycloidal spin-spiral, Eq.~\eqref{eq_spin_spiral_cycloid}, which describes a magnetization that rotates around the $y$ axis as one moves along the spin-spiral, $\mathscr{A}^{ij}$ and $D^{ij}$ are constant along this spin-spiral. This allows us to compare the values of $\mathscr{A}^{ij}$ and $D^{ij}$ obtained for $\hat{\vn{n}}$ in the $z$ direction to the values obtained from the gauge-field approach in this particular case, while in a general case a spin-spiral calculation will correspond to an $\hat{\vn{n}}$-integration of $\hat{\vn{n}}$-dependent $\mathscr{A}^{ij}$ and $D^{ij}$. In Fig.~\ref{fig_gauge_vs_fukuyama} we show the exchange constant $\mathscr{A}^{xx}$ as well as the DMI coefficient for the one-dimensional Rashba model, Eq.~\eqref{eq_rashba_model_onedim}, as a function of the Fermi energy. The parameters used in the model are $\Delta V=1$eV and $\alpha=2$eV\AA{}, and we set the temperature in the Fermi functions to $k_{\rm B}T=25$~meV. Two approaches are compared: The dashed lines show the results obtained from Eq.~\eqref{eq_gauge_approach_exchange} and Eq.~\eqref{eq_gauge_approach_dmi} within the gauge-field approach, where we used a small spin-spiral vector of $q=0.006$~\AA$^{-1}$ (we checked that making $q$ smaller does not affect the results). The solid line in Fig.~\ref{fig_gauge_vs_fukuyama}(a) is obtained from the Fukuyama-type expression Eq.~\eqref{eq_exchange_params_fukuyama} for the exchange constant. The solid line in Fig.~\ref{fig_gauge_vs_fukuyama}(b) is obtained from the Berry-phase theory of DMI, Eq.~\eqref{eq_dmi_berry}. The results from the different methods are in perfect agreement, which shows in particular that Eq.~\eqref{eq_exchange_params_fukuyama} can be used for calculating exchange constants even in the presence of SOI. The exchange constant becomes negative when the Fermi energy is close to $\pm 0.5$~eV, i.e., close to the band minima (see Fig.~\ref{fig_gauge_vs_fukuyama}(c)). Negative exchange constants imply that the ferromagnetic state is unstable and that a spin-spiral state will form. With increasing Fermi energy, the effect of Rashba SOI on the Fermi surface becomes smaller and smaller. At very high Fermi energy the Fermi surfaces with and without SOI differ very little. As a consequence, the DMI is suppressed at high Fermi energy. We have verified that the gauge-field approach, Eq.~\eqref{eq_gauge_approach_dmi}, and the Berry-phase theory, Eq.~\eqref{eq_dmi_berry}, agree at all orders in SOI. Previously, we have shown~\cite{spicudmi} that the Berry-phase theory reduces to the ground-state spin current~\cite{dmi_doppler_shift} at first order in SOI. In Fig.~\ref{fig_axx_vs_axy} we show the exchange constants $\mathscr{A}^{xx}$ and $\mathscr{A}^{xy}$ as a function of the Rashba parameter $\alpha$ when the Fermi energy is set to zero and $\Delta V=1$eV. $\mathscr{A}^{xx}$ is the exchange constant of a cycloidal spin-spiral and $\mathscr{A}^{xy}$ is that of a helical spin-spiral.
In the absence of SOI, rotations in spin-space leave the spectrum of the Hamiltonian invariant, and therefore $\mathscr{A}^{xx}=\mathscr{A}^{xy}$. For $\alpha\ne0$, $\mathscr{A}^{xx}$ and $\mathscr{A}^{xy}$ differ from each other, and the difference grows with increasing $\alpha$. The three geometrical contributions as defined in Eqs.~\eqref{eq_geo1} through \eqref{eq_geo3} are shown in Fig.~\ref{fig_axx_vs_axy} as well. The mixed Berry curvature and the mixed quantum metric are zero without SOI and therefore we expect that $\mathscr{A}_{\rm geo1}$ and $\mathscr{A}_{\rm geo3}$, which depend on the mixed Berry curvature and the mixed quantum metric, differ between cycloidal and helical spin-spirals, which is indeed the case: While $\mathscr{A}^{xx}_{\rm geo1}$ increases strongly with $\alpha$, $\mathscr{A}^{xy}_{\rm geo1}$ is zero; and while $\mathscr{A}^{xx}_{\rm geo3}$ is zero, $\mathscr{A}^{xy}_{\rm geo3}$ becomes negative with increasing $\alpha$. In contrast, $\mathscr{A}^{xx}_{\rm geo2}$ and $\mathscr{A}^{xy}_{\rm geo2}$ are very similar, because they only involve the quantum metric in real space as well as the inverse effective mass in $k$-space. Generally, the geometrical contribution cannot be neglected and is of the same order of magnitude as the total exchange constant. The expressions Eq.~\eqref{eq_exchange_params_semicla}, Eq.~\eqref{eq_axx_pol}, and Eq.~\eqref{eq_axx_inter} contain both Fermi surface and Fermi sea terms. The exchange constant does not vanish in band insulators due to the Fermi sea terms and exhibits a plateau in the gap. To illustrate this, we show in Fig.~\ref{fig_1d_rashba_insulator} the exchange constant of the one-dimensional Rashba model with model parameters $\alpha=20$eV\AA{} and $\Delta V=1$eV. In the $k_x$ integration we use a cutoff of 2.63\AA{}$^{-1}$. This cutoff is necessary in order to obtain an insulating system because there is no global gap in the band structure of the one-dimensional Rashba model. However, when we restrict the range of $k$ points to the region $|k_x|<2.63$\,\AA$^{-1}$, the band structure appears gapped, as shown in Fig.~\ref{fig_1d_rashba_insulator}(a). As shown in Fig.~\ref{fig_1d_rashba_insulator}(b) the corresponding exchange constant exhibits a plateau in the gap. \begin{figure} \includegraphics[width=\linewidth]{bandstructure_insulator.eps} \includegraphics[width=\linewidth]{exchange_insulator.eps} \caption{\label{fig_1d_rashba_insulator} (a) Band energy of the one-dimensional Rashba model Eq.~\eqref{eq_rashba_model_onedim} with model parameters $\alpha=20$eV\AA{} and $\Delta V=1$eV. (b) The corresponding exchange constant exhibits a plateau between $-0.3$\,eV and $0.3$\,eV due to the gap of the band structure in (a). } \end{figure} \subsection{Rashba model in two dimensions} \label{sec_two_dim_rashba} \begin{figure} \includegraphics[width=\linewidth]{rashba_twodim_axx_vs_fermi.eps} \includegraphics[width=\linewidth]{rashba_twodim_axy_vs_fermi.eps} \caption{\label{fig_2d_rashba} (a) Exchange constant $\mathscr{A}^{xx}$ and (b) $\mathscr{A}^{xy}$ in the two-dimensional Rashba model Eq.~\eqref{eq_rashba_model} as a function of the Fermi energy. The model parameters are $\alpha=2$eV\AA{} and $\Delta V=1$eV. Solid lines: Complete exchange constants. The geometrical contributions $\mathscr{A}_{\rm geo1}$ (dashed), $\mathscr{A}_{\rm geo2}$ (dotted), and $\mathscr{A}_{\rm geo3}$ (dashed-dotted) as defined in Eqs.~\eqref{eq_geo1} through \eqref{eq_geo3} are shown as well.
} \end{figure} When the Rashba parameter $\alpha$ is zero, the exchange constant of the two-dimensional Rashba model can be obtained from the gauge-field approach as discussed in section~\ref{sec_gauge_field_rashba2d}. We checked that the gauge-field approach and Eq.~\eqref{eq_exchange_params_fukuyama} yield identical results in this case. We now turn to the case with $\alpha>0$, where we use the model parameters $\alpha=2$eV\AA{} and $\Delta V=1$eV. In Fig.~\ref{fig_2d_rashba} we show the exchange constants $\mathscr{A}^{xx}$ and $\mathscr{A}^{xy}$ as obtained from Eq.~\eqref{eq_exchange_params_fukuyama} as a function of the Fermi energy, as well as the geometrical contributions $\mathscr{A}_{\rm geo1}$, $\mathscr{A}_{\rm geo2}$, and $\mathscr{A}_{\rm geo3}$ as defined in Eqs.~\eqref{eq_geo1} through \eqref{eq_geo3}. We rediscover several properties that we already discussed for the one-dimensional Rashba model: The exchange constant of the cycloidal spin-spiral ($\mathscr{A}^{xx}$) differs considerably from the exchange constant of the helical spin-spiral ($\mathscr{A}^{xy}$) when SOI is large. The contribution $\mathscr{A}_{\rm geo2}$ does not differ much between helical spin-spiral and cycloidal spin-spiral, while $\mathscr{A}_{\rm geo1}$ and $\mathscr{A}_{\rm geo3}$ are very different between these two cases. \section{Summary} \label{sec_summary} We derive a formula that expresses the exchange constants in terms of Green's functions, velocity operators, and torque operators of a collinear ferromagnet. Thus, it allows us to access the exchange constants directly from the electronic structure information without the need for spin-spiral calculations. We compare this formula to Fukuyama's result for the orbital magnetic susceptibility and find strong formal similarities between these two theories. We rewrite the Green's function expression for the exchange constant in terms of Berry curvatures and quantum metrics in mixed phase-space. Thereby we identify several geometrical contributions to the exchange constants that we find to be generally important in free-electron model systems. Our formalism can be used even in the presence of spin-orbit interaction, where we find sizable differences between the exchange constants of helical and cycloidal spin spirals in the Rashba model.
\section{Introduction} \label{sec:introduction} Biomembranes consisting of lipid bilayers can be regarded as thin two-dimensional (2D) fluids, and membrane protein molecules as well as lipid molecules are allowed to move laterally~\cite{Sin72,AlbertsBook}. These membrane inclusions are subject to the thermal motion of lipid molecules, leading to random positional fluctuations. Such Brownian motion plays important roles in various life processes, such as the transport of materials and reactions between chemical species~\cite{LipowskyBook}. In order to describe lateral diffusion of membrane proteins, the drag coefficient of a cylindrical disc moving in a 2D fluid sheet has been theoretically studied in various membrane environments~\cite{Saf75,Saf76,Hug81,Evans88,Ramachandran10,Seki11,Seki14}. The obtained drag coefficient was used to estimate the diffusion coefficients of membrane proteins through Einstein's relation under the assumption that the system is in thermal equilibrium~\cite{KomuraBook}. In recent experiments, however, it has been shown that motions of particles inside cells are dominantly driven by random non-thermal forces rather than thermal fluctuations~\cite{Guo14,Par14}. In these experimental works, it was found that non-thermal forces in biological cells are generated by active proteins undergoing conformational changes with a supply of adenosine triphosphate (ATP). These active fluctuations lead to enhanced diffusion of molecules in the cytoplasm~\cite{Yasuda17EPL,Yasuda17PRE}. Biomembranes also contain various active proteins which, for example, act as ion pumps by changing their shapes to exert forces on the adjacent membrane and solvent~\cite{AlbertsBook}. Lipid bilayers containing such active proteins have been called ``active membranes'', and their out-of-plane fluctuations (deformations) have already been investigated both experimentally and theoretically~\cite{Man99,Ramaswamy00,Man01}. However, lateral motions of inclusions in membranes that are induced by active proteins have not yet been considered. Since such active forces give rise to enhanced diffusion, one needs to take into account both active non-thermal fluctuations and passive thermal ones to calculate diffusion in membranes. Recently, Mikhailov and Kapral discussed enhanced diffusion due to non-thermal fluctuating hydrodynamic flows which are induced by oscillating active force dipoles [see Fig.~\ref{fig:membrane}(a)]~\cite{Mik15,Kap16}. They calculated the active diffusion coefficient of a passive particle immersed either in a three-dimensional (3D) cytoplasm or in a 2D membrane, and showed that it exhibits a logarithmic size dependence for the 2D case. Moreover, a chemotaxis-like drift of a passive particle was predicted when gradients of active proteins or ATP are present~\cite{Mik15}. Later, Koyano \textit{et al.}\ showed that lipid membrane rafts, in which active proteins are concentrated, can induce a directed drift velocity near the interface of a domain~\cite{Koy16}. In these works, the authors considered membranes that are smaller in size than the hydrodynamic screening length. Huang \textit{et al.}\ performed coarse-grained simulations of active protein inclusions in lipid bilayers~\cite{Huang12,Huang13}. In Ref.~\cite{Huang13}, they showed that active proteins undergoing conformational motions not only affect the membrane shape but also laterally stir the lipid bilayer so that lipid flows are induced.
Importantly, the flow pattern induced by an immobilized protein resembles the 2D fluid velocity fields that are created by a force dipole. \begin{figure}[tbh] \centering\includegraphics[scale=0.18]{fig1.eps}\\ \caption{ (a) The conformational change of an oscillating force dipole representing an active protein. Within a turnover cycle, the force dipole, whose two centers are separated by a distance $x(t)$, exerts two oppositely directed forces $\pm \mathbf{F}(t)$ at time $t$. The integral intensity of a force dipole is $S$ (see the text). (b) Schematic picture showing a flat and infinitely large membrane of 2D viscosity $\eta_{\rm m}$ that is located at $z=0$. The membrane is surrounded by a bulk solvent of 3D viscosity $\eta_{\rm s}$, and the two flat walls are located at $z=\pm h$. The solvent velocity is assumed to vanish at the surfaces of these walls. The ``free membrane'' and the ``confined membrane'' cases correspond to the limits of $h \rightarrow \infty$ and $h \rightarrow 0$, respectively. The yellow passive particle undergoes Brownian motion due to thermal and non-thermal fluctuations. The latter contribution is induced by active force dipoles which are homogeneously distributed in the membrane with a 2D concentration $c_0$. } \label{fig:membrane} \end{figure} Following Refs.~\cite{Mik15,Kap16}, we assume that an active protein behaves as an oscillating force dipole which acts on the surroundings to generate hydrodynamic flows that can induce motions of passive particles in the fluid. In this paper, we investigate active diffusion and drift velocity of a particle in ``free'' and ``confined'' membranes that are completely flat and infinitely large. In the free membrane case, a thin 2D fluid sheet is embedded in a 3D solvent whose viscosity is typically lower than that of the membrane. In the confined case, by contrast, which mimics a supported membrane~\cite{Tan05}, the membrane is sandwiched between two rigid walls, each separated from it by a finite but small distance. For both the free and confined membrane cases, we employ general mobility tensors that take into account the hydrodynamic effects mediated by the surrounding 3D solvent~\cite{Inaura08,Ram11,Ram11b,Komura12}. Using the general mobility tensors, we numerically calculate the active diffusion coefficient and the drift velocity as a function of the diffusing particle size over the entire range of length scales. Furthermore, several asymptotic expressions are also derived in order to compare with numerical estimates and thermal contributions. Importantly, our result leads to characteristic length scales describing a crossover from non-thermal to thermal diffusive behavior at large scales. In the next section, we present the expressions for the active diffusion coefficient and the drift velocity in 2D membranes~\cite{Mik15}. We also review the general mobility tensors for the free and confined membrane cases~\cite{Inaura08,Ram11,Ram11b,Komura12}. Using these expressions, we calculate in Sec.~\ref{sec:diffusion} the active diffusion coefficient for the two geometries. In Sec.~\ref{sec:total_diffusion}, we compare the thermal diffusion coefficient with the obtained non-thermal diffusion coefficient, and discuss the characteristic crossover lengths. In Sec.~\ref{sec:drift}, we obtain the drift velocities as a function of the particle size. The summary of our work and some numerical estimates for the obtained quantities are given in Sec.~\ref{sec:discussion}.
\section{Active transport and mobility tensors in membranes} \label{sec:model} \subsection{Active diffusion coefficient} Active proteins in a 2D biological membrane, modeled as oscillating force dipoles, produce non-equilibrium fluctuations and cause an enhancement of the lateral diffusion of a passive particle. We assume that the spatially fixed force dipoles are homogeneously and isotropically distributed in the membrane, and they exert only in-plane lateral forces. The total diffusion coefficient is given by $D=D_{\rm T}+D_{\rm A}$, where $D_{\rm T}$ is the thermal contribution determined by Einstein's relation (which will be discussed in Sec.~\ref{sec:total_diffusion}), and $D_{\rm A}$ is the active non-thermal contribution given by~\cite{Mik15} \begin{align} D_{\rm A}&=\frac{Sc_0}{2} \int {\rm d}^2r\, \Omega_{\beta\beta^\prime\gamma\gamma^\prime} \frac{\partial G_{\alpha\beta}(\mathbf{r})}{\partial r_\gamma} \frac{\partial G_{\alpha\beta^\prime}(\mathbf{r})}{\partial r_{\gamma^\prime}}, \label{eq:coefficient} \end{align} where $\mathbf{r}=(x, y)$ denotes a 2D vector and we have introduced the notation \begin{align} \Omega_{\beta\beta^\prime\gamma\gamma^\prime}= \frac{1}{8} (\delta_{\beta\beta^\prime}\delta_{\gamma\gamma^\prime}+\delta_{\beta\gamma}\delta_{\beta^\prime\gamma^\prime} +\delta_{\beta\gamma^\prime}\delta_{\beta^\prime\gamma}). \end{align} Throughout this paper, the summation over repeated Greek indices is assumed. In Eq.~(\ref{eq:coefficient}), $S$ is the integral intensity of a force dipole, $c_0$ is the constant 2D concentration of active proteins, and $G_{\alpha\beta}(\mathbf{r})$ is the membrane mobility tensor, which will be discussed separately below. Within a fluctuating ``dimer model'' as presented in Fig.~\ref{fig:membrane}(a), the magnitude of a force dipole is given by $m(t)=x(t)F(t)$, where $x(t)$ is the distance between the two spheres and $F(t)$ is the magnitude of the oppositely directed forces. The statistical average of the dipole magnitude vanishes, i.e., $\langle m(t)\rangle=0$, whereas the integral intensity $S$ of a force dipole is given by $S=\int_0^\infty {\rm d}t\, \langle m(t)m(0)\rangle$~\cite{Mik15}. Since we assume that active proteins are homogeneously distributed in the membrane as shown in Fig.~\ref{fig:membrane}(b), it is sufficient to consider only the isotropic diffusion as given by Eq.~(\ref{eq:coefficient}). In deriving Eq.~(\ref{eq:coefficient}), the size of a dipole is assumed to be much smaller than the distance between the passive particle and active force dipoles~\cite{Mik15}. At large distances, almost any object that changes its shape would create a flow field that corresponds to some force dipole. It should be noted, however, that the above expression is not accurate when this distance becomes comparable to the dipole size. As for the mobility tensor in 3D fluids, it is known that the Rotne-Prager mobility tensor takes into account higher-order corrections to the Oseen mobility tensor and gives a more accurate approximation at short distances~\cite{Kap16}. Such an improved approximation has not been worked out so far for 2D fluid membranes, and we shall only consider the lowest order contribution (see the calculations below). In the above, we have also assumed that force dipoles are spatially fixed in the membrane. Since no forces are applied to fix the dipoles, such an approximation is justified when the dynamics of force dipoles is much slower than that of the passive particle.
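For illustration only, the integral intensity $S$ can be computed for a toy stochastic model of $m(t)$. In the Python sketch below we assume, purely for the sake of the example, that $m(t)$ follows an Ornstein-Uhlenbeck process with correlation time $\tau$ and variance $m_0^2$, for which $S=m_0^2\tau$ exactly; this specific dynamics is our assumption and is not implied by the general formalism:
\begin{verbatim}
import numpy as np

# Toy sketch: S = int_0^inf dt <m(t) m(0)> for an assumed
# Ornstein-Uhlenbeck dipole magnitude m(t) (exact answer m0^2*tau).
rng = np.random.default_rng(0)
tau, m0, dt, nsteps = 1.0, 1.0, 1e-2, 400_000
m = np.zeros(nsteps)
for i in range(1, nsteps):   # Euler-Maruyama update of the OU process
    m[i] = m[i-1]*(1 - dt/tau) \
           + m0*np.sqrt(2*dt/tau)*rng.standard_normal()
nlag = int(5*tau/dt)         # truncate the correlation integral at 5 tau
acf = np.array([np.mean(m[:nsteps-L]*m[L:]) for L in range(nlag)])
print("S ~", acf.sum()*dt, " (exact:", m0**2*tau, ")")
\end{verbatim}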
\subsection{Drift velocity} Although we have assumed above that $c_0$ is constant, active proteins are often distributed inhomogeneously in the membrane due to heterogeneous structures such as sphingolipid-enriched domains~\cite{Sim97,Kom14}. According to the ``lipid raft'' hypothesis, these domains act as platforms for membrane signaling and trafficking~\cite{Lin10}. Hence it is also important to consider the effects of a nonuniform spatial distribution of active proteins and to see how it affects the lateral dynamics in membranes. When a spatial concentration gradient $\nabla c$ of active proteins is present, it gives rise to unbalanced induced forces between two points in the membrane. Hence passive particles are subjected to a drift toward either lower or higher concentrations of active proteins, i.e., a chemotaxis-like motion can occur. When the absolute value of the concentration gradient $\vert \nabla c \vert$ is assumed to be constant, the induced drift velocity of a passive particle in the direction $\nabla c$ is given by~\cite{Mik15} \begin{align} V&=-S \vert \nabla c \vert \int {\rm d}^2r\, \Omega_{\beta\beta^\prime\gamma\gamma^\prime} \hat{n}_\alpha\frac{\partial^2 G_{\alpha\beta}(\mathbf{r})}{\partial r_\gamma\partial r_\delta}\frac{\partial G_{\delta\beta^\prime}(\mathbf{r})}{\partial r_{\gamma^\prime}}(\mathbf{r}\cdot\hat{\mathbf{n}}). \label{eq:drift} \end{align} Here, the unit vector $\hat{\mathbf{n}}=\nabla c/\vert \nabla c \vert$ denotes the direction of the concentration gradient of active proteins. We shall employ the above expression to obtain the lateral drift velocity in a membrane by using the membrane mobility tensor as discussed below. \subsection{Membrane mobility tensors} Since we discuss active diffusion in an infinitely large flat membrane, we use the 2D membrane mobility tensor which also takes into account the hydrodynamic effects of the surrounding 3D solvent. We consider a general situation as depicted in Fig.~\ref{fig:membrane}(b), where a fluid membrane of 2D shear viscosity $\eta_{\rm m}$ is surrounded by a solvent of 3D shear viscosity $\eta_{\rm s}$. Furthermore, we consider the case in which there are two walls located symmetrically at an arbitrary distance $h$ from the flat membrane~\cite{Inaura08,Ram11,Ram11b,Komura12}. We denote the in-plane velocity vector of the fluid membrane by ${\mathbf v}({\mathbf r})$ and the lateral pressure by $p({\mathbf r})$. Assuming that the incompressibility condition holds for the fluid membrane, we write its hydrodynamic equations as \begin{align} &\nabla \cdot {\mathbf v} = 0, \\ & \eta_{\rm m} \nabla^2 {\mathbf v} - \nabla p + {\mathbf f}_{\rm s} + {\mathbf F}=0. \end{align} The second equation is the 2D Stokes equation, where ${\mathbf f}_{\rm s}$ is the force exerted on the membrane by the surrounding solvent, and ${\mathbf F}$ is any external force acting on the membrane. If we denote the upper and lower solvents with the superscripts $\pm$, the two solvent velocities ${\mathbf v}^{\pm}({\mathbf r},z)$ and pressures $p^{\pm}({\mathbf r},z)$ obey the following hydrodynamic equations, respectively \begin{align} & \widehat{\nabla} \cdot {\mathbf v}^{\pm} = 0, \\ & \eta_{\rm s} \widehat{\nabla}^2 {\mathbf v}^{\pm} - \widehat{\nabla} p^{\pm}= 0, \end{align} where $\widehat{\nabla}$ stands for the 3D differential operator.
We assume that the surrounding solvent cannot permeate the membrane, and impose the no-slip boundary condition between the membrane and the surrounding solvent at $z=0$~\cite{Saf75,Saf76,Inaura08,Ram11,Ram11b,Komura12}. Hence we require the conditions \begin{equation} v_z^{\pm}({\mathbf r},0)=0,~~~~~ v_{\alpha}({\mathbf r})=v_{\alpha}^{\pm}({\mathbf r},0), \end{equation} where $\alpha=x, y$. Furthermore, the solvent velocity vanishes at the walls located at $z=\pm h$, i.e., $v_{\alpha}^{\pm}({\mathbf r},\pm h)=0$. By solving the above coupled hydrodynamic equations in Fourier space with $\mathbf{k}=(k_x,k_y)$ being the 2D wavevector, the 2D mobility tensor ${G}_{\alpha\beta}({\mathbf k})$ defined through ${ v_\alpha}({\mathbf k})= G_{\alpha\beta}({\mathbf k}) { F}_{\beta}({\mathbf k})$ can be obtained as~\cite{Inaura08,Ram11,Ram11b,Komura12} \begin{align} G_{\alpha\beta}(\mathbf{k})=\frac{\delta_{\alpha\beta}-\hat{k}_\alpha \hat{k}_\beta} {\eta_{\rm m}\left[k^2+\nu k\coth(kh)\right]}, \label{eq:general_mobility} \end{align} where $k= |\mathbf{k}|$ and $\hat{k}_\alpha=k_\alpha/k$, and the ratio of the two viscosities $\nu^{-1}=\eta_{\rm m}/(2\eta_{\rm s})$ defines the Saffman-Delbr\"{u}ck (SD) hydrodynamic screening length~\cite{Saf75,Saf76}. Notice that $\eta_{\rm m}$ and $\eta_{\rm s}$ have different dimensions, and $\nu^{-1}$ has a dimension of length. In order to perform analytical calculations, the two limiting cases of Eq.~(\ref{eq:general_mobility}) are considered, i.e., the ``free membrane'' case and the ``confined membrane'' case corresponding to the limits of $h \rightarrow \infty$ and $h \rightarrow 0$, respectively~\cite{Ram11,Ram11b,Komura12}. For the free membrane case, we take the limit $kh\gg1$ in Eq.~(\ref{eq:general_mobility}) and obtain the following asymptotic expression \begin{align} G^{\rm F}_{\alpha\beta}(\mathbf{k})&=\frac{\delta_{\alpha\beta}-\hat{k}_\alpha \hat{k}_\beta} {\eta_{\rm m}(k^2+\nu k)}. \label{eq:ft_sd} \end{align} Hereafter, we shall denote the quantities for the free membrane case with the superscript ``F''. For the confined membrane case, on the other hand, we take the opposite limit $kh \ll 1$ and obtain \begin{align} G^{\rm C}_{\alpha\beta}(\mathbf{k})&=\frac{\delta_{\alpha\beta}-\hat{k}_\alpha \hat{k}_\beta} {\eta_{\rm m}(k^2+\kappa^2)}, \label{eq:ft_es} \end{align} where $\kappa^{-1}=(h/\nu)^{1/2}$ is the Evans-Sackmann (ES) screening length~\cite{Evans88}, and we use the superscript ``C'' for the quantities related to the confined membrane case. We note that the ES screening length $\kappa^{-1}$ is the geometric mean of $\nu^{-1}$ and $h$ so that we typically have $\kappa^{-1} < \nu^{-1}$.
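The two limits of Eq.~(\ref{eq:general_mobility}) are easily checked numerically. The following Python sketch (with illustrative viscosity values only, chosen to give the typical SD length quoted in Sec.~\ref{sec:discussion}) verifies that the scalar part of the general mobility reduces to Eqs.~(\ref{eq:ft_sd}) and (\ref{eq:ft_es}) for $kh\gg1$ and $kh\ll1$, respectively:
\begin{verbatim}
import numpy as np

def G_scalar(k, eta_m, nu, h):
    """Scalar part of the general mobility: 1/(eta_m*(k^2 + nu*k*coth(kh)))."""
    return 1.0 / (eta_m * (k**2 + nu * k / np.tanh(k * h)))

def G_free(k, eta_m, nu):         # kh >> 1 limit (free membrane)
    return 1.0 / (eta_m * (k**2 + nu * k))

def G_conf(k, eta_m, nu, h):      # kh << 1 limit, kappa^2 = nu/h (confined)
    return 1.0 / (eta_m * (k**2 + nu / h))

eta_m = 1e-9                      # illustrative 2D viscosity [Pa s m]
eta_s = 1e-3                      # illustrative 3D viscosity [Pa s]
nu = 2.0 * eta_s / eta_m          # inverse SD length, nu^-1 = 5e-7 m here
h = 1e-9                          # wall separation [m]

print(G_scalar(1e3 / h, eta_m, nu, h) / G_free(1e3 / h, eta_m, nu))    # -> 1
print(G_scalar(1e-3 / h, eta_m, nu, h) / G_conf(1e-3 / h, eta_m, nu, h))  # -> 1
\end{verbatim}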
Taking the inverse Fourier transform of Eqs.~(\ref{eq:ft_sd}) and (\ref{eq:ft_es}), we obtain the mobility tensors in the real space for the two limiting cases as~\cite{Ram11,Ram11b,Komura12} \begin{align} G_{\alpha\beta}^{{\rm F}}(\mathbf{r})=&\frac{1}{4\eta_{\rm m}} \left[\mathbf{H}_0(\nu r)-Y_0(\nu r)+\frac{2}{\pi\nu^2 r^2}\right.\nonumber \\ &\left.-\frac{\mathbf{H}_1(\nu r)}{\nu r} +\frac{Y_1(\nu r)}{\nu r} \right]\delta_{\alpha\beta}\notag\\ &+\frac{1}{4\eta_{\rm m}}\left[ -\frac{4}{\pi\nu^2 r^2}+\frac{2\mathbf{H}_1(\nu r)}{\nu r}\notag\right.\\ &\left.-\frac{2Y_1(\nu r)}{\nu r}-\mathbf{H}_0(\nu r)+Y_0(\nu r) \right]\hat{r}_\alpha\hat{r}_\beta, \label{eq:mobility_sd} \end{align} and \begin{align} G_{\alpha\beta}^{{\rm C}}(\mathbf{r})=&\frac{1}{2\pi\eta_{\rm m}}\left[ K_0(\kappa r)+\frac{K_1(\kappa r)}{\kappa r}-\frac{1}{\kappa^2r^2} \right]\delta_{\alpha\beta}\notag\\ &+\frac{1}{2\pi\eta_{\rm m}}\left[ -K_0(\kappa r)-\frac{2K_1(\kappa r)}{\kappa r}+\frac{2}{\kappa^2r^2} \right]\hat{r}_\alpha\hat{r}_\beta, \label{eq:mobility_es} \end{align} respectively, where we have used the notations $r= |\mathbf{r}|$ and $\hat{r}_\alpha=r_\alpha/r$. In the above, $\mathbf{H}_n(z)$ are the Struve functions, $Y_n(z)$ the Bessel functions of the second kind, and $K_n(z)$ the modified Bessel functions of the second kind. The physical meaning of the above expressions was also discussed in Refs.~\cite{Dia09,Opp09,Opp10}. We note that if there is only one wall instead of two walls, the definition of the ES length needs to be modified as $\kappa^{-1} \rightarrow (2h/\nu)^{1/2}$~\cite{Opp10}. In the next sections, we shall use Eqs.~(\ref{eq:mobility_sd}) and (\ref{eq:mobility_es}) to calculate the active diffusion coefficients and the drift velocity. \section{Active diffusion coefficient} \label{sec:diffusion} \subsection{Free membranes} We first calculate the active diffusion coefficient for the free membrane case by substituting Eq.~(\ref{eq:mobility_sd}) into Eq.~(\ref{eq:coefficient}). Since the integrand in Eq.~(\ref{eq:coefficient}) diverges logarithmically at short distances, we need to introduce a small cutoff length $\ell_{\rm c}$. Physically, $\ell_{\rm c}$ is given by the sum of the size of a passive particle (undergoing lateral Brownian motion) and that of a force dipole~\cite{Kap16}. In the following, we generally assume that force dipoles are smaller than the diffusing object whose size is represented by $\ell_{\rm c}$. This is further justified when we consider lateral diffusion of a passive object that is larger than the SD or ES screening lengths. \begin{figure}[tbh] \begin{center} \includegraphics[scale=0.35]{fig2.eps} \end{center} \caption{ The plot of the scaled active diffusion coefficient $D_{\rm A}$ as a function of the scaled cutoff length $\delta=\nu\ell_{\rm c}$ and $\epsilon = \kappa\ell_{\rm c}$ for the free membrane case [solid line, see Eq.~(\ref{eq:diffusion_sd})] and the confined membrane case [dashed line, see Eq.~(\ref{eq:diffusion_es})], respectively. Here $D_{\rm A}$ is scaled by $Sc_0/(256\pi\eta_{\rm m}^2)$. The numbers in this plot indicate the slope of the curves and represent the powers of the algebraic dependencies. 
} \label{fig:sd_es} \end{figure} Introducing a dimensionless vector $\mathbf{z}=\nu\mathbf{r}$ scaled by the SD length, we can write the active diffusion coefficient for the free membrane case as \begin{align} D^{{\rm F}}_{\rm A}=&\frac{Sc_0}{32\pi^2\eta_{\rm m}^2} \int_{\delta}^\infty {\rm d}^2z\,\Omega_{\beta\beta^\prime\gamma\gamma^\prime} \frac{\partial g^{{\rm F}}_{\alpha\beta}(\mathbf{z})}{\partial z_\gamma}\frac{\partial g^{{\rm F}}_{\alpha\beta^\prime}(\mathbf{z})}{\partial z_{\gamma^\prime}}, \label{eq:diffusion_sd} \end{align} where $\delta =\nu \ell_{\rm c}$ is the dimensionless cutoff, and $g^{{\rm F}}_{\alpha\beta}(\mathbf{z})=4\pi\eta_{\rm m}G^{\rm F}_{\alpha\beta}$ is the corresponding dimensionless mobility tensor [see Eq.~(\ref{eq:mobility_sd})]. We have first evaluated the above integral numerically. In Fig.~\ref{fig:sd_es}, we plot the obtained $D^{{\rm F}}_{\rm A}$ as a function of $\delta=\nu \ell_{\rm c}$ by the solid line. We see that the active diffusion coefficient depends only weakly on the particle size at small scales, whereas it shows a stronger size dependence described by a power-law behavior at large scales. The crossover between these two behaviors is set by the condition $\delta\approx1$. In order to understand the above behaviors, we next discuss the asymptotic behaviors of $D^{{\rm F}}_{\rm A}$ for both small and large $\delta$ values. Expanding the mobility tensor in Eq.~(\ref{eq:mobility_sd}) for $\nu r \ll 1$ and $\nu r \gg 1$, we have~\cite{Opp10} \begin{align} g_{\alpha\beta}^{{\rm F}}(\mathbf{z})&\approx\left(\ln\frac{2}{z}-\gamma-\frac{1}{2}\right)\delta_{\alpha\beta} +\hat{z}_\alpha\hat{z}_\beta, \label{eq:mobility_sd_small} \end{align} and \begin{align} g_{\alpha\beta}^{{\rm F}}(\mathbf{z})&\approx\frac{2}{z}\hat{z}_\alpha\hat{z}_\beta, \label{eq:mobility_sd_large} \end{align} respectively, where $\gamma=0.5772\cdots$ is Euler's constant. By substituting Eqs.~(\ref{eq:mobility_sd_small}) and (\ref{eq:mobility_sd_large}) into Eq.~(\ref{eq:diffusion_sd}), we can analytically obtain the asymptotic forms of the active diffusion coefficient as a function of $\delta =\nu \ell_{\rm c}$. As obtained in Ref.~\cite{Mik15}, we find for $\delta\ll1$ \begin{align} D^{{\rm F}}_{\rm A}&\approx\frac{Sc_0}{32\pi\eta_{\rm m}^2} \ln \frac{L}{\ell_{\rm c}}, \label{eq:diffusion_sd_small} \end{align} where a large cutoff length $L$ is introduced because the integral in Eq.~(\ref{eq:diffusion_sd}) also diverges logarithmically at large distances. By matching with the numerical result, we obtain $L\approx0.682 \nu^{-1}$. The above logarithmic dependence on $\ell_{\rm c}$ means that $D^{{\rm F}}_{\rm A}$ depends only weakly on the particle size. We also note that the above expression contains only the membrane viscosity $\eta_{\rm m}$, and does not depend on the solvent viscosity $\eta_{\rm s}$. This is because the hydrodynamics at small scales is primarily dominated by the 2D membrane property. In the opposite limit of $\delta \gg1$, on the other hand, we show in Appendix A that the active diffusion coefficient becomes \begin{align} D^{{\rm F}}_{\rm A}&\approx\frac{5Sc_0}{256\pi\eta_{\rm s}^2} \frac{1}{\ell_{\rm c}^2}, \label{eq:diffusion_sd_large} \end{align} which is an important result of this paper. This asymptotic expression decays as $1/\ell_{\rm c}^2$ and depends now only on $\eta_{\rm s}$, indicating that the membrane lateral dynamics is governed by the surrounding 3D fluid at large scales.
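As an illustration of how such a numerical evaluation of Eq.~(\ref{eq:diffusion_sd}) can be set up, the sketch below (our own reconstruction, which uses central finite differences for the derivatives and exploits the fact that the $\Omega$-contracted integrand is isotropic; the actual scheme behind Fig.~\ref{fig:sd_es} is not specified here) evaluates $D^{\rm F}_{\rm A}$ in the scaled units of Fig.~\ref{fig:sd_es}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import struve, yn

def g_free(z):
    """Dimensionless free-membrane mobility g^F = 4*pi*eta_m*G^F at z=(z1,z2)."""
    r = np.hypot(z[0], z[1])
    rh = z / r
    a = (struve(0, r) - yn(0, r) + 2/(np.pi*r**2)
         - struve(1, r)/r + yn(1, r)/r)
    b = (-4/(np.pi*r**2) + 2*struve(1, r)/r - 2*yn(1, r)/r
         - struve(0, r) + yn(0, r))
    return np.pi * (a*np.eye(2) + b*np.outer(rh, rh))

def grad_g(z, step=1e-6):
    """d[a,b,c] = partial g_{ab} / partial z_c by central differences."""
    d = np.empty((2, 2, 2))
    for c in range(2):
        e = np.zeros(2); e[c] = step
        d[:, :, c] = (g_free(z + e) - g_free(z - e)) / (2*step)
    return d

def integrand(z):
    """Omega-contracted scalar; isotropic, so it is evaluated at (z, 0)."""
    d = grad_g(np.array([z, 0.0]))
    div = np.einsum('abb->a', d)                  # sum_b dg_{ab}/dz_b
    return (np.einsum('abc,abc->', d, d)          # delta_bb' delta_gg' term
            + div @ div                           # delta_bg delta_b'g' term
            + np.einsum('abc,acb->', d, d)) / 8.0 # delta_bg' delta_b'g term

def D_scaled(delta):
    """D_A^F in units of S*c0/(256*pi*eta_m^2), as plotted in Fig. 2."""
    val, _ = quad(lambda z: z * integrand(z), delta, 200.0, limit=400)
    return 16.0 * val

for delta in (0.01, 0.1, 1.0, 10.0):
    print(delta, D_scaled(delta))   # ~logarithmic for small delta,
                                    # ~1/delta^2 decay for large delta
\end{verbatim}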
From the obtained asymptotic expressions in Eqs.~(\ref{eq:diffusion_sd_small}) and (\ref{eq:diffusion_sd_large}), the behavior of $D^{{\rm F}}_{\rm A}$ in Fig.~\ref{fig:sd_es} is explained as a crossover from a logarithmic dependence to an algebraic dependence with a power of $-2$. \subsection{Confined membranes} Next, we consider the confined membrane case. With the use of Eq.~(\ref{eq:mobility_es}), the active diffusion coefficient can be written as \begin{align} D^{\rm C}_{\rm A}&=\frac{Sc_0}{32\pi^2\eta_{\rm m}^2} \int_{\epsilon}^\infty {\rm d}^2w\,\Omega_{\beta\beta^\prime\gamma\gamma^\prime} \frac{\partial g^{{\rm C}}_{\alpha\beta}(\mathbf{w})}{\partial w_\gamma}\frac{\partial g^{{\rm C}}_{\alpha\beta^\prime}(\mathbf{w})}{\partial w_{\gamma^\prime}}, \label{eq:diffusion_es} \end{align} where $\mathbf{w}=\kappa\mathbf{r}$ is a different dimensionless variable, $\epsilon =\kappa \ell_{\rm c}$ is a differently scaled cutoff, and $g^{{\rm C}}_{\alpha\beta}(\mathbf{w})=4\pi\eta_{\rm m}G^{\rm C}_{\alpha\beta}$ is the corresponding dimensionless mobility tensor [see Eq.~(\ref{eq:mobility_es})]. Performing the numerical integration of Eq.~(\ref{eq:diffusion_es}), we plot in Fig.~\ref{fig:sd_es} the active diffusion coefficient $D^{\rm C}_{\rm A}$ as a function of $\epsilon=\kappa \ell_{\rm c}$ by the dashed line. For small $\epsilon$ values, the behavior of $D^{\rm C}_{\rm A}$ is similar to that of $D^{\rm F}_{\rm A}$, while $D^{\rm C}_{\rm A}$ decays much faster than $D^{\rm F}_{\rm A}$ for large $\epsilon$ values. To discuss these size dependencies, we use the asymptotic expressions of Eq.~(\ref{eq:mobility_es}) for $\kappa r \ll 1$ and $\kappa r \gg 1$ given by~\cite{Opp10} \begin{align} g_{\alpha\beta}^{{\rm C}}(\mathbf{w})&\approx\left(\ln\frac{2}{w}-\gamma-\frac{1}{2}\right)\delta_{\alpha\beta} +\hat{w}_\alpha\hat{w}_\beta, \label{eq:mobility_es_small} \end{align} and \begin{align} g_{\alpha\beta}^{{\rm C}}(\mathbf{w})&\approx-\frac{2}{w^2} (\delta_{\alpha\beta}-2\hat{w}_\alpha\hat{w}_\beta), \label{eq:mobility_es_large} \end{align} respectively. Note that Eq.~(\ref{eq:mobility_es_small}) is identical to Eq.~(\ref{eq:mobility_sd_small}) when $w$ is replaced by $z$. Hence, in the limit of $\epsilon\ll1$, the active diffusion coefficient for the confined membrane case should be identical to Eq.~(\ref{eq:diffusion_sd_small}) and is given by~\cite{Mik15} \begin{align} D^{\rm C}_{\rm A}&\approx\frac{Sc_0}{32\pi\eta_{\rm m}^2} \ln\frac{L}{\ell_{\rm c}}. \label{eq:diffusion_es_small} \end{align} The large cutoff length should be taken here as $L \approx 1.12 \kappa^{-1}$. As mentioned before, the 2D hydrodynamic effect is more important at small scales, and $D^{\rm C}_{\rm A}$ depends logarithmically on the particle size. In the large size limit of $\epsilon\gg1$, on the other hand, we also show in Appendix A that $D^{\rm C}_{\rm A}$ asymptotically behaves as \begin{align} D^{\rm C}_{\rm A}&\approx\frac{Sc_0}{16\pi\eta_{\rm s}^2} \frac{h^2}{\ell_{\rm c}^4}, \label{eq:diffusion_es_large} \end{align} which is another important result. The obtained expression decays as $1/\ell_{\rm c}^4$, which is a much faster decay than that of Eq.~(\ref{eq:diffusion_sd_large}) for the free membrane case. According to Eqs.~(\ref{eq:diffusion_es_small}) and (\ref{eq:diffusion_es_large}), the behavior of $D^{{\rm C}}_{\rm A}$ in Fig.~\ref{fig:sd_es} can be understood as a crossover from a logarithmic dependence to an algebraic dependence with a power of $-4$.
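For reference, all four asymptotic laws can be evaluated in the scaled units of Fig.~\ref{fig:sd_es}. The prefactors below follow from Eqs.~(\ref{eq:diffusion_sd_small}), (\ref{eq:diffusion_sd_large}), (\ref{eq:diffusion_es_small}) and (\ref{eq:diffusion_es_large}) after eliminating $\eta_{\rm s}$ and $h$ with $\nu^{-1}=\eta_{\rm m}/(2\eta_{\rm s})$ and $\kappa^{2}=\nu/h$; this rearrangement is ours and is spelled out here only as a numerical convenience:
\begin{verbatim}
import numpy as np

# D_A in units of S*c0/(256*pi*eta_m^2), as plotted in Fig. 2.
def DA_free_small(delta):  # free membrane, delta << 1, L = 0.682/nu
    return 8.0 * np.log(0.682 / delta)

def DA_free_large(delta):  # free membrane, delta >> 1
    return 20.0 / delta**2

def DA_conf_small(eps):    # confined membrane, eps << 1, L = 1.12/kappa
    return 8.0 * np.log(1.12 / eps)

def DA_conf_large(eps):    # confined membrane, eps >> 1
    return 64.0 / eps**4

for x in (0.01, 0.1):
    print(x, DA_free_small(x), DA_conf_small(x))
for x in (10.0, 100.0):
    print(x, DA_free_large(x), DA_conf_large(x))
\end{verbatim}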
\section{Total diffusion coefficient} \label{sec:total_diffusion} Having obtained the active diffusion coefficients for the free and the confined membrane cases, we now discuss the total lateral diffusion coefficients in membranes by considering both thermal and non-thermal contributions. Concerning the thermal diffusion coefficient $D^{\rm F}_{\rm T}$ for the free membrane case, we use an empirical expression obtained by Petrov and Schwille~\cite{Pet08,Petrov12} \begin{align} D^{\rm F}_{\rm T}(\delta)=&\frac{k_{\rm B}T}{4\pi\eta_{\rm m}}\left[\ln\frac{2}{\delta}-\gamma+\frac{4\delta}{\pi}-\frac{\delta^2}{2}\ln\frac{2}{\delta}\right]\nonumber \\ & \times \left[1-\frac{\delta^3}{\pi}\ln\frac{2}{\delta}+\frac{c_1\delta^{b_1}}{1+c_2\delta^{b_2}}\right]^{-1}, \label{DFT} \end{align} where $k_{\rm B}$ is the Boltzmann constant, $T$ is the temperature, and the four numerical constants are chosen as $c_1=0.73761$, $b_1=2.74819$, $c_2=0.52119$, and $b_2=0.51465$~\cite{Petrov12}. For the free membrane case, there is no exact analytical expression for the thermal diffusion coefficient that covers the entire size range, except for the case where a 2D polymer chain is confined in a fluid membrane~\cite{Ram11}. Equation~(\ref{DFT}) is known to recover the correct asymptotic limits of the thermal diffusion coefficients both for $\delta \ll 1$~\cite{Saf75,Saf76} and $\delta \gg 1$~\cite{Hug81}. \begin{figure}[tbh] \begin{center} \includegraphics[scale=0.35]{fig3.eps} \end{center} \caption{ The plot of the scaled thermal diffusion coefficient $D_{\rm T}$ as a function of the scaled cutoff length $\delta=\nu\ell_{\rm c}$ and $\epsilon = \kappa\ell_{\rm c}$ for the free membrane case [solid line, see Eq.~(\ref{DFT})] and the confined membrane case [dashed line, see Eq.~(\ref{DCT})], respectively. Here $D_{\rm T}$ is scaled by $k_{\rm B}T/(4\pi\eta_{\rm m})$. The numbers in this plot indicate the slope of the curves and represent the powers of the algebraic dependencies. } \label{fig:thermal_sd_es} \end{figure} On the other hand, the thermal diffusion coefficient $D^{\rm C}_{\rm T}$ for the confined membrane case was explicitly calculated by Evans \textit{et al.}~\cite{Evans88} and also by Ramachandran \textit{et al.}~\cite{Ramachandran10,Seki11,Seki14,KomuraBook}. In this case, the resulting expression is given by \begin{align} D^{\rm C}_{\rm T}(\epsilon)=&\frac{k_{\rm B}T}{4\pi\eta_{\rm m}}\left[\frac{\epsilon^2}{4} +\frac{\epsilon K_1(\epsilon)}{K_0(\epsilon)} \right]^{-1}. \label{DCT} \end{align} In Fig.~\ref{fig:thermal_sd_es}, we plot $D^{\rm F}_{\rm T}$ as a function of the particle size $\delta$ by the solid line, and $D^{\rm C}_{\rm T}$ as a function of $\epsilon$ by the dashed line for the whole size range. Their asymptotic behaviors are separately discussed below. When we consider the total diffusion coefficient $D=D_{\rm T}+D_{\rm A}$, we shall neglect the contribution from thermal fluctuations of force dipoles. These fluctuations can arise when force dipoles contain structural internal degrees of freedom. However, such a contribution to the diffusion coefficient is small compared to $D_{\rm T}$ because it should be proportional to the product of $k_{\rm B}T$ and the concentration of force dipoles $c_0$. \subsection{Free membranes} For the free membrane case, the total diffusion coefficient is given by $D^{\rm F}=D_{\rm T}^{\rm F}+D_{\rm A}^{\rm F}$, where the active non-thermal contribution $D_{\rm A}^{\rm F}$ was discussed in the previous section.
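Before combining the two contributions, we note that both thermal coefficients are straightforward to evaluate numerically; a minimal Python sketch, in the units of Fig.~\ref{fig:thermal_sd_es}, transcribes Eqs.~(\ref{DFT}) and (\ref{DCT}) as follows:
\begin{verbatim}
import numpy as np
from scipy.special import kn

GAMMA = 0.5772156649   # Euler's constant
c1, b1, c2, b2 = 0.73761, 2.74819, 0.52119, 0.51465

def DT_free(delta):
    """Petrov-Schwille expression, in units of k_B*T/(4*pi*eta_m)."""
    num = np.log(2/delta) - GAMMA + 4*delta/np.pi \
          - (delta**2/2)*np.log(2/delta)
    den = 1 - (delta**3/np.pi)*np.log(2/delta) \
          + c1*delta**b1/(1 + c2*delta**b2)
    return num / den

def DT_conf(eps):
    """Evans-Sackmann result for the confined membrane, same units."""
    return 1.0 / (eps**2/4 + eps*kn(1, eps)/kn(0, eps))

for x in (0.01, 0.1, 1.0, 10.0):
    print(x, DT_free(x), DT_conf(x))
\end{verbatim}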
Using Eqs.~(\ref{DFT}) and (\ref{eq:diffusion_sd_small}) in the limit of $\delta\ll1$, we asymptotically have~\cite{Saf75,Saf76} \begin{align} D^{\rm F}&\approx \frac{k_{\rm B}T}{4\pi\eta_{\rm m}}\left(\ln\frac{2}{\nu\ell_{\rm c}} -\gamma\right) +\frac{Sc_0}{32\pi\eta_{\rm m}^2}\ln\frac{L}{\ell_{\rm c}}, \label{eq:total_sd_small} \end{align} where both contributions are proportional to $\ln (1/\ell_{\rm c})$. For $\delta \gg 1$, on the other hand, we obtain from Eqs.~(\ref{DFT}) and (\ref{eq:diffusion_sd_large})~\cite{Hug81} \begin{align} D^{\rm F}&\approx \frac{k_{\rm B}T}{16\eta_{\rm s}}\frac{1}{\ell_{\rm c}} +\frac{5Sc_0}{256\pi\eta_{\rm s}^2}\frac{1}{\ell_{\rm c}^2}. \label{eq:total_sd_large} \end{align} Since the $\ell_{\rm c}$-dependencies in Eq.~(\ref{eq:total_sd_large}) are different between the thermal and non-thermal contributions, we can introduce a new crossover length defined by \begin{align} \ell^{*}=\frac{5Sc_0}{16\pi k_{\rm B}T\eta_{\rm s}}. \label{eq:crossover_sd} \end{align} This length scale characterizes a crossover from the $1/\ell_{\rm c}^2$-dependence to the $1/\ell_{\rm c}$-dependence. When $\ell_{\rm c} \ll \ell^{*}$ (but still $\nu^{-1} \ll \ell_{\rm c}$), the non-thermal contribution dominates over the thermal one, while in the opposite limit of $\ell_{\rm c} \gg \ell^{*}$, the thermal contribution is of primary importance. \subsection{Confined membranes} In the case of confined membranes, the total diffusion coefficient now becomes $D^{\rm C}=D_{\rm T}^{\rm C} + D_{\rm A}^{\rm C}$. In the limit of $\epsilon \ll 1$, we have from Eqs.~(\ref{DCT}) and (\ref{eq:diffusion_es_small})~\cite{Evans88,Ramachandran10} \begin{align} D^{\rm C}&\approx \frac{k_{\rm B}T}{4\pi\eta_{\rm m}}\left(\ln\frac{2}{\kappa\ell_{\rm c}} -\gamma\right) +\frac{Sc_0}{32\pi\eta_{\rm m}^2}\ln\frac{L}{\ell_{\rm c}}, \label{eq:total_es_small} \end{align} where both contributions exhibit a logarithmic dependence on $\ell_{\rm c}$, as in the free membrane case. In the opposite limit of $\epsilon\gg1$, we find from Eqs.~(\ref{DCT}) and (\ref{eq:diffusion_es_large})~\cite{Evans88,Ramachandran10} \begin{align} D^{\rm C}&\approx \frac{k_{\rm B}T}{2\pi\eta_{\rm s}}\frac{h}{\ell_{\rm c}^2} +\frac{Sc_0}{16\pi\eta_{\rm s}^2}\frac{h^2}{\ell_{\rm c}^4}. \label{eq:total_es_large} \end{align} Similar to the free membrane case, we can consider another characteristic length defined by \begin{align} \ell^{**}=\left(\frac{Sc_0h}{8k_{\rm B}T\eta_{\rm s}}\right)^{1/2}. \label{eq:crossover_es} \end{align} This length scale characterizes a crossover from the $1/\ell_{\rm c}^4$-dependence to the $1/\ell_{\rm c}^2$-dependence. We note that $\ell^{**}$ is essentially the geometric mean of $\ell^*$ and $h$. Numerical estimates of these two characteristic length scales will be discussed in Sec.~\ref{sec:discussion}. \section{Drift velocity} \label{sec:drift} \subsection{Free membranes} \begin{figure}[tbh] \begin{center} \includegraphics[scale=0.35]{fig4.eps} \end{center} \caption{ The plot of the scaled drift velocity $V$ as a function of the scaled cutoff length $\delta=\nu\ell_{\rm c}$ and $\epsilon =\kappa\ell_{\rm c}$ for the free membrane case [solid line, see Eq.~(\ref{eq:drift_sd})] and the confined membrane case [dashed line, see Eq.~(\ref{eq:drift_es})], respectively. Here $V$ is scaled by $S \vert \nabla c \vert/(128\pi\eta_{\rm m}^2)$. The numbers in this plot indicate the slope of the curves and represent the powers of the algebraic dependencies.
} \label{fig:drift_sd_es} \end{figure} In this section, we calculate the drift velocity $V$ of a passive particle due to a concentration gradient of active force dipoles. For the free membrane case, we substitute Eq.~(\ref{eq:mobility_sd}) into Eq.~(\ref{eq:drift}) and obtain \begin{align} V^{\rm F}= & -\frac{S \vert \nabla c \vert}{16\pi^2\eta_{\rm m}^2} \int_{\delta}^\infty {\rm d}^2z\, \Omega_{\beta\beta^\prime\gamma\gamma^\prime} \nonumber \\ & \times \hat{n}_\alpha \frac{\partial^2 g^{\rm F}_{\alpha\beta}(\mathbf{z})}{\partial z_\gamma\partial z_\delta}\frac{\partial g^{\rm F}_{\delta\beta^\prime}(\mathbf{z})}{\partial z_{\gamma^\prime}} (\mathbf{z}\cdot\hat{\mathbf{n}}), \label{eq:drift_sd} \end{align} where $\delta =\nu \ell_{\rm c}$ and $g^{{\rm F}}_{\alpha\beta}(\mathbf{z})=4\pi\eta_{\rm m}G^{\rm F}_{\alpha\beta}$ as before. Performing the numerical integration of Eq.~(\ref{eq:drift_sd}), we plot in Fig.~\ref{fig:drift_sd_es} the drift velocity $V^{\rm F}$ as a function of $\delta$ by the solid line. Similar to the active diffusion coefficient $D^{{\rm F}}_{\rm A}$, the drift velocity $V^{\rm F}$ depends weakly on the particle size at small scales, while it exhibits a stronger size dependence at large scales. Such a crossover also occurs around $\delta\approx1$. We next discuss the asymptotic behaviors of $V^{\rm F}$ for small and large $\delta$ values. With the use of Eqs.~(\ref{eq:mobility_sd_small}) and (\ref{eq:mobility_sd_large}), we show in Appendix B that the asymptotic behaviors of $V$ for $\delta \ll 1$ and $\delta \gg 1$ are \begin{align} V^{\rm F}&\approx\frac{S \vert \nabla c \vert}{32\pi\eta_{\rm m}^2} \ln\frac{L}{\ell_{\rm c}}, \label{eq:drift_sd_small} \end{align} and \begin{align} V^{\rm F}&\approx\frac{13S \vert \nabla c \vert}{256\pi\eta_{\rm s}^2}\frac{1}{\ell_{\rm c}^2}, \label{eq:drift_sd_large} \end{align} respectively, where we choose $L \approx 1.85 \nu^{-1}$. Note that Eq.~(\ref{eq:drift_sd_small}) was previously derived in Ref.~\cite{Mik15} for a 2D membrane, while Eq.~(\ref{eq:drift_sd_large}) is a new result. As we see in Eqs.~(\ref{eq:drift_sd_small}) and (\ref{eq:drift_sd_large}), there is a crossover from a logarithmic to an algebraic dependence with a power of $-2$ when $\delta$ is increased. These behaviors are consistent with the numerical plot in Fig.~\ref{fig:drift_sd_es} for the free membrane case. \subsection{Confined membranes} Finally, we calculate the drift velocity for the confined membrane case. Substituting Eq.~(\ref{eq:mobility_es}) into Eq.~(\ref{eq:drift}), we now obtain \begin{align} V^{\rm C}=&-\frac{S \vert \nabla c \vert}{16\pi^2\eta_{\rm m}^2} \int_\epsilon^\infty {\rm d}^2w\, \Omega_{\beta\beta^\prime\gamma\gamma^\prime} \nonumber \\ & \times \hat{n}_\alpha \frac{\partial^2 g^{\rm C}_{\alpha\beta}(\mathbf{w})}{\partial w_\gamma\partial w_\delta}\frac{\partial g^{\rm C}_{\delta\beta^\prime}(\mathbf{w})}{\partial w_{\gamma^\prime}} (\mathbf{w}\cdot\hat{\mathbf{n}}), \label{eq:drift_es} \end{align} where $\epsilon=\kappa\ell_{\rm c}$ and $g^{{\rm C}}_{\alpha\beta}(\mathbf{w})=4\pi\eta_{\rm m}G^{\rm C}_{\alpha\beta}$ as before. In Fig.~\ref{fig:drift_sd_es}, we present the numerically calculated $V^{\rm C}$ as a function of $\epsilon$ by the dashed line. As $\epsilon$ is increased, we see a crossover from a logarithmic to an algebraic dependence, although $V^{\rm C}$ decays faster than $V^{\rm F}$ at large scales. The asymptotic behaviors of $V^{\rm C}$ for small and large $\epsilon$ values can be discussed similarly.
Using Eqs.~(\ref{eq:mobility_es_small}) and (\ref{eq:mobility_es_large}), we obtain in Appendix B the asymptotic expressions of $V^{\rm C}$ for $\epsilon \ll 1$ and $\epsilon \gg 1$ as \begin{align} V^{\rm C}&\approx\frac{S \vert \nabla c \vert}{32\pi\eta_{\rm m}^2} \ln\frac{L}{\ell_{\rm c}}, \label{eq:drift_es_small} \end{align} and \begin{align} V^{\rm C}&\approx\frac{3S \vert \nabla c \vert}{16\pi\eta_{\rm s}^2} \frac{h^2}{\ell_{\rm c}^4}, \label{eq:drift_es_large} \end{align} respectively, and we choose $L \approx 3.05 \kappa^{-1}$ to match the numerical integration. We note that Eqs.~(\ref{eq:drift_sd_small}) and (\ref{eq:drift_es_small}) are identical and depend only on $\eta_{\rm m}$ for small sizes~\cite{Mik15}. From Fig.~\ref{fig:drift_sd_es} and Eqs.~(\ref{eq:drift_sd_small}), (\ref{eq:drift_sd_large}), (\ref{eq:drift_es_small}) and (\ref{eq:drift_es_large}), we see that the drift velocity $V$ is always positive. This means that passive particles move toward higher concentrations of active proteins, and a chemotaxis-like drift takes place in the presence of protein concentration gradients~\cite{Mik15,Kap16,Koy16}. The dominant viscosity dependence of $V$ switches from $\eta_{\rm m}$ to $\eta_{\rm s}$ as the particle size exceeds the corresponding hydrodynamic screening length, namely, $\nu^{-1}$ or $\kappa^{-1}$. \section{Summary and Discussion} \label{sec:discussion} \begin{table*} \caption{\label{tab:table3} Summary of the asymptotic dependencies of the thermal diffusion coefficient $D_{\rm T}$, the active diffusion coefficient $D_{\rm A}$, and the drift velocity $V$ on the passive particle size $\ell_{\rm c}$. The numbers after the asymptotic expressions correspond to the equation numbers in this paper.} \begin{ruledtabular} \begin{tabular}{ccccc} cases &limits&$D_{\rm T}$&$D_{\rm A}$&$V$\\ \hline free membrane&$\nu\ell_{\rm c}\ll1$&$\ln(1/\ell_{\rm c})$~~~(\ref{eq:total_sd_small})&$\ln(1/\ell_{\rm c})$~~~(\ref{eq:diffusion_sd_small})&$\ln(1/\ell_{\rm c})$~~~(\ref{eq:drift_sd_small})\\ ($kh\gg1$)&$\nu\ell_{\rm c}\gg1$&$1/\ell_{\rm c}$~~~(\ref{eq:total_sd_large})&$1/\ell_{\rm c}^2$~~~(\ref{eq:diffusion_sd_large})&$1/\ell_{\rm c}^2$~~~(\ref{eq:drift_sd_large})\\ \hline confined membrane &$\kappa\ell_{\rm c}\ll1$&$\ln(1/\ell_{\rm c})$~~~(\ref{eq:total_es_small})&$\ln(1/\ell_{\rm c})$~~~(\ref{eq:diffusion_es_small})&$\ln(1/\ell_{\rm c})$~~~(\ref{eq:drift_es_small})\\ ($kh\ll1$)&$\kappa\ell_{\rm c}\gg1$&$1/\ell_{\rm c}^2$~~~(\ref{eq:total_es_large})&$1/\ell_{\rm c}^4$~~~(\ref{eq:diffusion_es_large})&$1/\ell_{\rm c}^4$~~~(\ref{eq:drift_es_large}) \end{tabular} \end{ruledtabular} \label{limits} \end{table*} In this paper, we have investigated lateral diffusion induced by active force dipoles embedded in a biomembrane. In particular, we have calculated the active diffusion coefficient and the drift velocity for the free and the confined membrane cases by taking into account the hydrodynamic coupling between the membrane and the surrounding bulk solvent. The force dipole model in Refs.~\cite{Mik15,Kap16} and the general membrane mobility tensors obtained in Refs.~\cite{Inaura08,Ram11,Ram11b,Komura12} have been employed in our work. When the size of a passive diffusing particle is small, the active diffusion coefficients for the free and the confined membranes exhibit the same logarithmic size dependence, as shown in Eqs.~(\ref{eq:diffusion_sd_small}) and (\ref{eq:diffusion_es_small}), respectively~\cite{Mik15}.
In the opposite large size limit, we find algebraic dependencies with powers $-2$ and $-4$ for the two cases, as given by Eqs.~(\ref{eq:diffusion_sd_large}) and (\ref{eq:diffusion_es_large}), respectively. These are the important outcomes of this paper and are also summarized in Table~\ref{limits} together with other asymptotic expressions. In our work, we have assumed that the total diffusion coefficient is given by the sum of thermal and non-thermal contributions. For small particle sizes, we have shown that both the total $D^{\rm F}$ and $D^{\rm C}$ exhibit a logarithmic size dependence~\cite{Mik15}, whereas the different contributions have different size dependencies for large particle sizes. From this result, we have obtained two characteristic length scales that describe the crossover from non-thermal to thermal behaviors when the particle size is larger than the hydrodynamic screening length. The drift velocity in the presence of a concentration gradient of active proteins exhibits the same size dependencies as the active diffusion coefficient for the two membrane geometries. Here we give some numerical estimates of the obtained crossover length scales. Using typical values such as $k_{\rm B}T\approx 4\times10^{-21}$\,J, $\eta_{\rm s}\approx10^{-3}$\,Pa$\cdot$s, $h\approx10^{-9}$\,m, $S\approx 10^{-42}$\,J$^2\cdot$s, and $c_0 \approx10^{14}$\,m$^{-2}$~\cite{Mik15}, we obtain $\ell^*\approx 2\times10^{-6}$\,m [see Eq.~(\ref{eq:crossover_sd})] and $\ell^{**} \approx 6\times10^{-8}$\,m [see Eq.~(\ref{eq:crossover_es})]. On the other hand, the SD and the ES screening lengths are typically $\nu^{-1}\approx5 \times 10^{-7}$\,m and $\kappa^{-1}\approx2\times10^{-8}$\,m, respectively~\cite{Saf75,Saf76,Evans88,Ramachandran10}. Hence $\ell^*$ and $\ell^{**}$ are typically larger than $\nu^{-1}$ and $\kappa^{-1}$, respectively. Moreover, the values of $S$ and $c_0$ can vary significantly from one membrane to another, as pointed out in Ref.~\cite{Mik15}. For example, when active proteins are confined in raft domains~\cite{Sim97,Kom14,Lin10}, the 2D concentration $c_0$ can be much larger. When, for example, $c_0 \approx10^{15}$\,m$^{-2}$ (while $S$ is the same as above)~\cite{Koy16}, the crossover lengths can be estimated as $\ell^*\approx 2\times10^{-5}$\,m and $\ell^{**} \approx 2\times10^{-7}$\,m. If $\ell^{*}$ and $\ell^{**}$ are much larger than the screening lengths $\nu^{-1}$ and $\kappa^{-1}$, respectively, as in this case, three different scaling regimes of the total diffusion coefficient are expected as the particle size is increased, i.e., $\ln(1/\ell_{\rm c}) \to 1/\ell_{\rm c}^2 \to1/\ell_{\rm c}$ for the free membrane case, and $\ln(1/\ell_{\rm c}) \to 1/\ell_{\rm c}^4 \to 1/\ell_{\rm c}^2$ for the confined membrane case. Momentum in a membrane is conserved over distances smaller than the hydrodynamic screening length (either $\nu^{-1}$ or $\kappa^{-1}$), whereas it leaks to the surrounding fluid beyond that length scale~\cite{Dia09,Opp09,Opp10}. Within a membrane, the velocity decays as $\ln(1/r)$ at short distances, as shown in Eqs.~(\ref{eq:mobility_sd_small}) and (\ref{eq:mobility_es_small}), due to the momentum conservation in 2D. These 2D behaviors also lead to the logarithmic dependence of the active diffusion coefficients in Eqs.~(\ref{eq:diffusion_sd_small}) and (\ref{eq:diffusion_es_small}). For the free membrane case, the velocity decays as $1/r$ at large scales as shown in Eq.~(\ref{eq:mobility_sd_large}) due to the momentum conservation in the 3D bulk.
This behavior is reflected in the first term of Eq.~(\ref{eq:total_sd_large}) for the thermal diffusion coefficient~\cite{Hug81}. As shown in Eq.~(\ref{eq:mobility_es_large}), however, the velocity decays as $1/r^2$ at large scales for the confined membrane case. This behavior essentially arises from the mass conservation in 2D, while the total momentum is not conserved due to the presence of the walls, which break the translational symmetry of the system~\cite{Dia09,Opp09,Opp10}. The corresponding contribution is the first term of Eq.~(\ref{eq:total_es_large}) for the thermal diffusion coefficient~\cite{Evans88,Ramachandran10}. The active diffusion coefficient $D^{{\rm F}}_{\rm A}$ obtained in Eq.~(\ref{eq:diffusion_sd_large}) for the free membrane case essentially reflects the hydrodynamics of the surrounding bulk 3D solvent. Hence our result can be compared with that in Ref.~\cite{Mik15} obtained for a purely 3D fluid system: \begin{align} D^{{\rm 3D}}_{\rm A}&\approx\frac{S c_0^{\rm 3D}}{60\pi\eta_{\rm s}^2} \frac{1}{\ell_{\rm c}}, \label{eq:diffusion_sd_bulk} \end{align} which decays as $1/\ell_{\rm c}$ and is different from Eq.~(\ref{eq:diffusion_sd_large}). In fact, such a difference arises from the different dimensions of the dipole concentrations, i.e., $c_0$ is the 2D concentration in our case, while $c_0^{\rm 3D}$ is the 3D concentration in Ref.~\cite{Mik15}. A similar comparison can also be made for the drift velocity of free membranes in Eq.~(\ref{eq:drift_sd_large}) and that in Ref.~\cite{Mik15} for a 3D fluid system: \begin{align} V^{\rm 3D}&\approx\frac{S \vert \nabla c^{\rm 3D} \vert}{30\pi\eta_{\rm s}^2} \frac{1}{\ell_{\rm c}}. \end{align} The same reasoning accounts for the different $\ell_{\rm c}$-dependence. At this stage, we also comment that both the active diffusion coefficient $D_{\rm A}$ and the drift velocity $V$ exhibit the same $\ell_{\rm c}$-dependence. Although the integrands in Eqs.~(\ref{eq:coefficient}) and (\ref{eq:drift}) appear different, their physical dimensions are identical because the first derivative of the mobility tensor in Eq.~(\ref{eq:coefficient}) corresponds to the product of the second derivative and $(\mathbf{r}\cdot\hat{\mathbf{n}})$ in Eq.~(\ref{eq:drift}). This is the simple reason why they exhibit the same $\ell_{\rm c}$-dependence. One can also easily confirm that $V$ is positive when we make use of the membrane mobility tensor, because the integrand in Eq.~(\ref{eq:drift}) is the product of the first and the second derivatives of the mobility tensor, which have opposite signs. This leads to $V>0$, indicating a chemotaxis-like drift as mentioned before. In this work, we have assumed that active proteins generate forces only in the lateral directions. On the other hand, actual active motors such as bacteriorhodopsin can also exert forces on the surrounding solvent~\cite{Man99,Ramaswamy00,Man01}. Although we did not take into account such normal forces, which induce membrane undulation, consideration of normal forces as well as lateral ones will provide us with a general understanding of active diffusion in biomembranes~\cite{Komura15}. We have also assumed that the force dipoles are fixed in a membrane and are distributed homogeneously. It would be interesting to consider the case in which active proteins can also move laterally in the membrane and even interact with each other through a nematic-like interaction~\cite{Lau09}.
The full equation of motion now involves potential-of-mean-force interactions in the multi-particle diffusion equations that describe the combined motions of the passive particle and active proteins in the membrane. Although the dynamics of the active protein concentration is essentially determined by a diffusion equation, it is a complicated problem because not only thermal diffusion but also active non-thermal diffusion should be taken into account. Our work is the first step toward such a full description of very rich biomembrane dynamics. \begin{acknowledgments} We thank A.\ S.\ Mikhailov and T.\ Kato for useful discussions. S.K.\ and R.O.\ acknowledge support from the Grant-in-Aid for Scientific Research on Innovative Areas ``\textit{Fluctuation and Structure}" (Grant No.\ 25103010) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan, and the Grant-in-Aid for Scientific Research (C) (Grant No.\ 15K05250) from the Japan Society for the Promotion of Science (JSPS). \end{acknowledgments}
\section{Introduction} \title{On the stability and magnetic properties of surface nanobubbles in water} \author{\textbf{Siddhartha Sen}\footnote{[email protected]} } \affiliation{{CRANN}, \\ \emph{Trinity College, Dublin 2, Ireland}} \author{\textbf{Kumar S. Gupta}\footnote{[email protected]} } \affiliation{{Saha Institute of Nuclear Physics, Theory Division}\\ \emph{1/AF Bidhannagar, Kolkata 700 064, India}} \begin{abstract} A model for gas nanobubbles is proposed in which their remarkable stability is explained as being due to the presence of a qualitatively different form of water covering the nanobubble surface, which leads to a reduction of the diffusion coefficient by a factor of $10^{9}$. It is shown that this new form of water is created by the interaction of the electrons of water molecules with the zero point vacuum electromagnetic field. The model gives an estimate for the lifetime of surface nanobubbles, explains why they are not influenced by surfactants, and predicts that they should exhibit nonlinear paramagnetism. \end{abstract} \pacs{47.55.D-, 31.30.J-, 75.75.-c} \maketitle Nanobubbles with radii in the range 25 to 1000 nanometres and contact angles in the range $135^{\circ}$ to $175^{\circ}$ have been observed to form on hydrophobic surfaces, where they are remarkably stable \cite{ducker,lohse1,seddon,jin,lohse2}. For bubbles of this size, lifetimes of the order of microseconds are expected due to the high Laplace pressure $\frac{2\gamma}{\rho}$ inside them, where $\gamma$ is the surface tension of the liquid and $\rho$ is the radius of curvature of the bubble. This high pressure should drive the gas into the liquid by diffusion and make the nanobubbles unstable. However, the nanobubbles observed are very stable, with lifetimes of hours or even days. These bubbles have unusual properties. For example, it is observed that the addition of surfactants does not influence their long lifetime and stability. Normally one would expect surfactants, which lower the surface tension, to decrease the bubble lifetimes. Any theoretical model for nanobubbles should explain these basic observational features. A number of ideas have been proposed \cite{ducker,lohse1,seddon,jin,lohse2}. Here we would like to propose a very different model \cite{sentalk} which can qualitatively account for the surface nanobubble properties listed. Our model gives an estimate of their lifetimes, explains why they expel surfactants, and it also predicts that surface nanobubbles should have a nonlinear response to external magnetic fields, with a saturation value of the magnetic moment per water molecule of $g\mu_{B}$, where $\mu_{B}$ is the Bohr magneton and $g \approx 10^{-3}$. The magnetic moment of the nanoshell is estimated to be $\approx 10^{6} \mu_{B}$. The basic premise of the model is that interfacial water has two phases. The first phase is normal water. The second phase is water in the form of coherent nanoscale domains of volume $V_c$ containing a suitable number $N$ of electrons associated with the water molecules. We suggest that it is this coherent phase that is responsible for the stability and existence of surface nanobubbles. We show that the zero-point fluctuating vacuum electromagnetic (EM) field, required to exist by quantum theory, can spontaneously generate nanoscale structures. It can produce a time-independent shift in the position of orbital electrons \cite{itz}.
For the case of the hydrogen atom, this positional shift has been used to provide a good estimate of the observed Lamb shift of its $^2S$ energy level \cite{itz,lamb,welt}. The zero-point vacuum EM field also gives rise to the Casimir effect \cite{cas}. For two parallel plates close to each other, the energy of the vacuum EM field between the plates changes if the distance between the plates is changed. This gives rise to the Casimir force, which has been measured \cite{most}. In this work, we combine the idea of a universal shift in the position of an electron due to the fluctuating vacuum EM field with the idea that a change in the volume of a region containing zero point EM fields gives rise to forces. We apply these ideas to a well defined nanoscale volume containing a number of electrons and show how an induced EM force is generated. This interaction lowers the energy of the ground state of the cluster and thus leads to the formation of a stable, spontaneously created, coherent many-electron nanoscale structure. Here we suggest that such a coherent structure is formed on the surface of nanobubbles in water and is responsible for their remarkable stability. Let us sketch how this comes about. Our system is the surface of a nanobubble, which is assumed to have a well defined surface layer volume $V_c$. This volume contains water molecules with $N$ orbiting electrons on which the zero point EM field acts. Furthermore, each water molecule is assumed to be in one of two electronic states, a ground state of energy $\hbar \omega_1$ and an excited state of energy $\hbar \omega_2$, which is assumed to be stable. This assumption is made only for the sake of simplicity. It is not an essential requirement of the model. Recent theoretical work on water suggests that the excited states of bulk water are unstable, leading to the dissociation of electrons from the water molecules on a timescale of femtoseconds \cite{shan}. Thus nanoscale structures in the bulk, even if formed, are unstable. However, this might not be the case at the surface or for water in the presence of surfaces or other substrates, such as biomolecules. Here surface effects can stabilize excited states. We assume this to be the case. For instance, biomolecules provide a convenient scaffold for nanostructures and there is indeed evidence of a different form of water adjacent to biomolecules \cite{pollack,senbio}. Our analysis will show that coherent nanoscale structures do form as a volume layer at the surface of nanobubbles. The electrons of the water molecules are coherent, as they all oscillate with a collective common frequency $\Omega$, different from the transition frequency $\omega=\omega_2-\omega_1$ of the water molecule. This collective behaviour arises because, as we show, the coherent structure has a lower energy. Furthermore, a result of Frohlich \cite{froh} implies that two oscillating dipoles interacting through a compatible oscillating EM field attract one another, while molecules or atoms which do not satisfy such a resonance condition repel \cite{froh2}. This explains why surfactants do not influence stability: the surfactant molecules, not being in resonance with the molecules of the coherent surface volume, are expelled by the Frohlich force. Thus gases in the coherent water layer, not in resonance, are expelled from the water to the hydrophobic surface, leading to the formation of surface nanobubbles. The formation of a coherent water layer also prevents the diffusion of gases through it.
This is because the scattering cross section of gas molecules with a coherent structure is greatly increased, leading to a decrease in the diffusion coefficient $D$. Recall that $D \approx\frac{<v>}{n\sigma}$, where $<v>$ is the thermal speed of the diffusing molecule, $n$ is the number density of water molecules from which it scatters, and $\sigma$ is the scattering cross section of water for the gas molecule concerned. If the surface water contains $N$ water molecules scattering incoherently, the scattering is proportional to $N$, while if the scattering is coherent, it is proportional to $N^2$ \cite{coey}. The scattering is expected to be coherent for the coherent nanoscale domains, so that the effective cross section for scattering is increased by a factor of $N$ and the diffusion constant is decreased by the same factor. We will show that for a coherent domain to form, $N\approx 10^{9}$. This leads to an increase in the lifetime for gas diffusion by a factor of $10^9$, which explains why surface nanobubbles have long lifetimes. The lifetime is expected to be further enhanced by the Frohlich repulsive force between non-resonating molecules. As emphasized already, an essential assumption of our work is that a nanoscale collection of electrons exists within a physical volume $V_c$ which can be clearly identified. For the present work, the volume $V_c$ is identified with the surface volume of a nanobubble in water which contains a collection of $N$ electrons. The vacuum electromagnetic field couples to all these electrons, each of which has mass $m$ and charge $e$. Let $\vec{E}(t,\vec{x})$ denote the time dependent electromagnetic field due to the vacuum fluctuation and $\vec{\delta_i}$ denote the fluctuation in the position of the $i^{\mathrm {th}}$ electron due to the effect of $\vec{E}(t,\vec{x})$. Then we have \begin{eqnarray} m \sum_i\ddot{\vec{\delta_i}}(t, \vec{x}) &=& N e \vec{E} (t, \vec{x}),~~~~i = 1,2,\ldots,N. \end{eqnarray} Taking the Fourier transform, we get \begin{equation} \label{ft} -m \sum_i\omega^2 \vec{\delta_i}(\omega, \vec{x}) = N e \vec{E} (\omega, \vec{x}), \end{equation} where we have used the same symbols $\vec{\delta}$ and $\vec{E}$ for the Fourier transforms of these quantities. Thus we get \begin{equation} \left | \sum_i{\vec{\delta}_i} (\omega, \vec{x}) \right |^2 = N^2 \frac{e^2 E^2 (\omega, \vec{x})}{m^2 \omega^4}, \end{equation} where $E^2 \equiv |{\vec{E}}|^2$. We now calculate the time average of the fluctuations. We assume that the fluctuations are independent, so that the cross terms vanish. We also assume that the time averaged fluctuations $<|\vec{\delta_i}|^2> \equiv <\delta^2>$ are the same for all the electrons. With these assumptions, and using the result for the value of $\delta^2$ given in \cite{itz}, we get our final expression for the average squared fluctuation of the position of an electron, when it is part of an assembly of $N$ electrons, as \begin{equation} \label{delta} < \delta^2 >_{N} = 2 N \alpha \lambda_c^2 \ln{\frac{1}{\alpha^2}} \equiv \delta_N^2, \end{equation} where $\alpha = \frac{e^2}{\hbar c}$ is the fine structure constant and $\lambda_c = \frac{\hbar}{m c}$ is the Compton wavelength of the electron. The factors in the log term come from the limits of the $\omega$ integration, which are taken to be $\frac{mc^2}{\hbar}$ and $\frac{me^4}{\hbar^3}$, representing a relativistic cutoff set by the electron mass and a lower frequency cutoff set by the atomic scale Bohr frequency \cite{itz}. We will set $\ln\frac{1}{\alpha^2} \approx 8$.
Thus $<\delta^2>_{N} \approx 16N \alpha \lambda^2_{c}$. This is a constant universal expression. We now use this result to determine the coupling of a charge to the zero point field when it is part of an assembly of charges close together. The idea we use is that the fluctuation-induced position change $\delta_{N}$ produces a change of the well defined nanovolume $V_c \approx l^3$, and this leads to a force. The volume of the nanoshell is set by the wavelength $l$ corresponding to a transition of energy $\approx 10$ eV. This induced force tells us how the vacuum EM field interacts with a charge belonging to an assembly. To determine this force, we start with our well-defined nanocluster of charged particles in volume $V_c$. The energy density of the fluctuating field in this volume is $|\vec{E}|^2= \frac{1}{2V_c}\hbar\omega$ for frequency $\omega$, where $\vec{E}$ is the vacuum EM field, given by $\vec{E} = \vec{u}\frac{\sqrt{\hbar\omega}}{\sqrt{2V_c}} e^{-i\omega t}$. Consider the effect of the volume change due to the position change $\sqrt{<\delta^2(t,\vec{x})>_{N}} \approx 4\sqrt{\alpha \lambda_c^2 N}$ on $\vec{E}$. The volume change produces, as we now show, an induced EM force, which we write as $e\vec{E}_{f}(t)$. We determine $\vec{E}_{f}(t)$ in two steps. In the first step, we define $\vec{E}_{f}$ by the equation \begin{eqnarray} \sqrt{\alpha} \vec{E}_{f}(t)&=&\vec{u}\sqrt{\hbar\omega}[\frac{1}{\sqrt{2(V_c+\delta V)}}-\frac{1}{\sqrt{2V_c}}]e^{-i\omega t} \end{eqnarray} where $\frac{\delta V_c}{V_c}=\frac{3\sqrt{<\delta^2>_N}}{l}$. A constraint to remember is that we must have $\frac{\delta V}{V}\ll1$. Thus we get \begin{equation} e\vec{E}_{f}(t)=\vec{u}~\frac{3}{2}\sqrt{16c\frac{N\alpha \lambda_c^2}{2V_c\omega}}(\frac{1}{l})\hbar\omega~e^{-i\omega t}. \end{equation} The fluctuating EM force $e\vec{E}_{f}$ can now be used to produce an interaction energy term, $e\vec{E}_{f}(t).\vec{x}$, with an electron located at $\vec{x}$, which is simply the usual $\frac{e}{c}~\vec{j}.\vec{A}$ term, where the vector potential $\vec{A}$ is defined by $\vec{E}_{f}=\frac{1}{c}~\frac{\partial \vec{A}}{\partial t}$. Thus the induced EM term constructed is a standard field-current interaction and can cause a transition between states $|i>, |f>$. Its transition matrix element is given by the expression \begin{eqnarray} <i|e\vec{E}_{f}.\vec{x}|f>&=& \frac{<i|(\vec{x}.\vec{u})|f>}{\sqrt{2 V_c \omega/c}}\frac{3}{2}\sqrt{\alpha} \sqrt{\frac{16N\lambda_c^2}{l^2}}~\hbar\omega \end{eqnarray} where $\vec{x}.\vec{u}=rl \cos\theta$, and $\theta$ is the angle between the vectors $\vec{x}$ and $\vec{u}$. The expression for the vacuum field induced transition amplitude represents a coupling between two electronic states and the zero point photon induced EM field. It has the structure of the usual dipole transition, but with a multiplicative factor $\sqrt{\frac{16N\lambda_{c}^2}{l^2}}$ due to the fact that the interaction is induced from zero point fluctuations. When $N$ is $\approx 10^{9}$, this factor is of order unity and the mixing of states takes place with high probability. This is what we think happens in the nanobubble shell. Another important result that can be extracted from the transition matrix element is a frequency relation between a collective frequency $\Omega$ and the transition frequency $\omega$, which is \begin{eqnarray} \Omega &=&G\omega,\\ G&=&\sqrt{\alpha}~r~\frac{3}{2}~\sqrt{\frac{Nc}{2V_c\omega}}~\left ( \sqrt{\frac{16\lambda_c^2}{l^2}} \right ).
\end{eqnarray} We note that $\frac{<i|e\vec{E}_{f}.\vec{x}|f>}{<i|\vec{x}.\vec{u}|f>}$ is a characteristic energy associated with the transition, which we have written as $\hbar \Omega$. We will show shortly that this energy term, representing the mixing between the ground state and the excited state, lowers the ground state energy. Hence we expect that all the atoms in our nano assembly will have this frequency $\Omega$. The value of $G$ is $\approx 10^{-3}$ for $N \approx 10^{9}$, $r\approx 3\times 10^{-8}$ cm, $\omega \approx 10^{16}~s^{-1}$, and $V \approx 10^{-15}$ cc. A result of this form was first derived by Preparata et al. in \cite{prep1} for water molecules using a path integral quantum field theory approach. They pioneered the idea that nanoscale structures would form driven by time-dependent zero-point EM fields \cite{prep3}. However, our approach is different from that of Preparata et al., and one of our basic results is crucially different from those derived in \cite{prep3}, namely, our linking equation between the collective oscillation frequency $\Omega$ and the transition frequency $\omega$ has an additional numerical factor of $\sqrt{\frac{16N\lambda_{c}^2}{l^2}}$ present. This factor appears for the physical reason that we have explained. We now show that the mixing of states can lower the ground state energy by just considering frequencies. Note that the vacuum induced electric field is independent of $\vec{x}$ and small. Hence a simple perturbation treatment of its effect is allowed. We describe energies in terms of frequencies $\Omega, \omega_1, \omega_2$, neglecting numerical factors. Consider the Hamiltonian \begin{eqnarray} \label{int} \frac{H}{\hbar}& =& \left ( \begin{array}{cc} \omega_1 & \Omega \\ \Omega & \omega_2 \end{array} \right ) \end{eqnarray} which acts on the states characterized by eigenvalues $\omega_1$ and $\omega_2$, while the interaction term mixes the two states. The eigenvalues of the interaction Hamiltonian $H$ are given by \begin{equation} \lambda_{\pm} = \frac{(\omega_1 + \omega_2)}{2} \left [ 1 \pm \sqrt{1 + \frac{4 ( \Omega^2 - \omega_1 \omega_2 )}{(\omega_1 + \omega_2)^2}} \right ]. \end{equation} For $\Omega^2 >\omega_1 \omega_2$ we see that one of the eigenvalues is \begin{equation} \lambda_- \approx - \frac{\Omega^2 - \omega_1 \omega_2}{(\omega_1 + \omega_2)} < 0. \end{equation} The condition on $\Omega$ tells us that $\lambda_{-}$ is most negative when $\omega_2$ is small. This implies that the energy is lowered if the transition is between levels one of which is close to the ionization threshold. Thus the mixed state is expected to have a loosely bound electron. In our application, the ground state energy is $\omega_1$ and we assume that the excited state is close to the ionization threshold, i.e., $\omega_2 \approx 0$. Then it is clear that the ground state energy is lowered due to the mixing, and this lowering of the ground state energy is the physical reason for the formation of coherent nanoclusters. We stress that although the fluctuating EM field mixes the ground state with an excited state, both states involved are bound states. No photon leaves the system. Let us work out the magnetic properties expected for the surface nanobubbles, assuming they have a coherent water layer of volume $100^3$ cubic nm. This size is fixed by the wavelength of the $\approx 10$ eV near-ionization excitation level of water.
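Before evaluating the magnetic moment, the consistency of the numbers quoted above can be checked in a few lines of Python (SI values; $\lambda_c=\hbar/mc$ is the Compton wavelength used in the text, and $N$ and $l$ are the illustrative values given above):
\begin{verbatim}
import numpy as np

alpha    = 1/137.036    # fine structure constant
lambda_c = 3.8616e-13   # Compton wavelength hbar/(m*c) [m]
N        = 1e9          # electrons in the coherent domain
l        = 1e-7         # domain size, V_c = l^3 = (100 nm)^3

# Position fluctuation with ln(1/alpha^2) ~ 8, i.e. delta_N^2 ~ 16*N*alpha*lambda_c^2
delta_N = np.sqrt(16 * N * alpha * lambda_c**2)
print("delta_N ~ %.1e m" % delta_N)                 # a few nanometres
print("dV/V    ~ %.2f" % (3 * delta_N / l))         # stays well below 1
print("factor  ~ %.2f" % np.sqrt(16 * N * lambda_c**2 / l**2))
# The last line is the enhancement factor sqrt(16*N*lambda_c^2/l^2),
# which indeed comes out of order unity for N ~ 1e9.
\end{verbatim}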
The current in the model, $j_{e}=e\frac{\Omega}{2\pi}$, is interpreted as being due to the collective orbiting motion of the outer electrons of the water molecules. This gives an average magnetic moment $\mu$ per water molecule in the coherent domain given by \begin{equation} \mu=\frac{\pi r^2 e\Omega}{2\pi c}, \end{equation} where $r$ is the size of the water molecule and $\Omega$ is the collective frequency of the cluster. Putting in numbers, $r \approx 3 \times 10^{-8}$ cm and $\Omega \approx 10^{13}~s^{-1}$, we find that $\mu \approx 10^{-3} \mu_{B}$, where $\mu_{B}$ is the Bohr magneton. We suppose that when a static external magnetic field is introduced, it interacts with the electron currents present in the coherent domain. Thus for a coherent domain interacting with an external magnetic field $\mathbf{B}$, the static interaction is $V=N\mathbf{\mu}.\mathbf{B}$. We now apply our general results to surface nanobubble water layers, continuing to use the simple model in which the coherent mixing leads to a lower energy ground state. We described this ground state in terms of two time-dependent coherent oscillatory basis states. These we now take to be labelled by the angular momentum spherical harmonic labels $l=0,m=0$ and $l=1,m=0$ that describe the electrons. The ground state wavefunction is written as \begin{equation} \zeta(\mathbf{\Omega},t)=\gamma_0(t) Y_{0,0}(\mathbf{\Omega})+\gamma_1(t) Y_{1,0}(\mathbf{\Omega}), \end{equation} where $\mathbf{\Omega}$ is the direction of the orbiting electron angular velocity. The magnetic field mixes the basis vectors of the coherent state, which we describe by a mixing angle $\theta$. Once this step has been taken, we can calculate the magnetic moment of the system, assuming $V$ is a perturbation; as it is static, it does not modify the time dependence of $\gamma_i(t), i=0,1$. The eigenvalues of the system with the magnetic field are \begin{eqnarray} \lambda_1&=&\frac{\omega -\sqrt{\omega^2+4V^2}}{2},\\ \lambda_2 &=& \frac{\omega+\sqrt{\omega^2+4V^2}}{2}. \end{eqnarray} From $\lambda_1, \lambda_2$ the corresponding eigenfunctions are constructed. In terms of them, the ground state wave function becomes \begin{eqnarray} \zeta &=& A Y_{0,0}+B Y_{1,0},\\ A&=&(\gamma_0(t)\cos\theta -\gamma_1(t)\sin\theta),\\ B&=&(\gamma_1(t)\cos\theta+\gamma_0(t)\sin\theta), \end{eqnarray} where $\tan\theta=\frac{1-\sqrt{1+x^2}}{x}$ and $x=\frac{2V}{\omega}$. Using this expression, we now calculate the time-averaged value of the magnetic moment $P_{Av}$ in the presence of an external magnetic field. This is simply the time average of $\epsilon_{\mu}.\epsilon_{\mathbf{B}}$ evaluated on the ground state written in terms of the new basis wavefunctions. Here $\epsilon_{\mathbf{B}}$ and $\epsilon_{\mu}$ are the unit vectors in the direction of the field and the orbiting electron angular velocity, respectively. Thus \begin{equation} P_{Av}=\int d\Omega \zeta^{*}(\mathbf{u},t) \cos\theta \zeta(\mathbf{u},t), \end{equation} which gives \begin{equation} P_{Av}=-\kappa \sin 2\theta=-\kappa\frac{x}{\sqrt{1+x^2}}, \end{equation} where $\kappa=\frac{1}{\sqrt{3}}~\frac{\Omega^2-2\lambda^2_0}{\Omega^2+2\lambda^2_0} \approx 0.58$ and $\lambda_{0}=\frac{1}{2}[\omega-\sqrt{\omega^2+4\Omega^2}]$. This is the expression for the expected nonlinear response of an outer orbiting electron of a water molecule that belongs to the surface coherent layer to an external magnetic field.
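The saturation behavior of $P_{Av}$ is easily tabulated. The Python sketch below, with the illustrative values $\omega\approx10^{16}~s^{-1}$ and $G\approx10^{-3}$ used above, also reproduces $\kappa\approx0.58$:
\begin{verbatim}
import numpy as np

omega = 1e16           # transition frequency [1/s]
Omega = 1e-3 * omega   # collective frequency, Omega = G*omega with G ~ 1e-3

lam0 = 0.5 * (omega - np.sqrt(omega**2 + 4*Omega**2))
kappa = (Omega**2 - 2*lam0**2) / (Omega**2 + 2*lam0**2) / np.sqrt(3)
print("kappa ~ %.2f" % kappa)    # ~0.58, since lam0 << Omega here

def P_av(x, kappa=kappa):
    """Nonlinear response; x = 2V/omega with V = N*mu.B (sign as in the text)."""
    return -kappa * x / np.sqrt(1 + x**2)

for x in (0.01, 0.1, 1.0, 10.0):
    print(x, P_av(x))  # linear for x << 1, saturating at -kappa for x >> 1
\end{verbatim}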
The value of the total magnetic moment of the surface nanobubble is $\frac{N\pi er^2\Omega}{2\pi c} \approx 10^{6} \mu_{B}$, where $\mu_{B}$ is the Bohr magneton. Since this result is for a stable ground state, it should be temperature independent. In conclusion, we have suggested that surface nanobubbles are stable because of a new structured phase of water, first suggested by Preparata and coworkers using the methods of quantum field theory. Our treatment is based on simpler ideas of fluctuations and gives different results. The main difference is that our model suggests that nanoscale structures need to have a well defined starting volume. We do not expect them in the bulk, but we do for surface nanobubbles, where the volume at the surface is well defined. The second difference is that our induced em field has an additional factor representing the vacuum fluctuating origin of the force. Surface nanobubbles can only have sizes that are compatible with a fixed coherent volume of $\approx 100^3~\mathrm{nm}^3$. They are expected to be charged, due to the mixing of the ground state with a state close to the ionization threshold. They should expel surfactants, due to quantum forces that repel molecules that are not in resonance, which explains why surface nanobubbles form; the increase in the scattering cross section by a factor of $N \approx 10^{9}$ due to the formation of coherent domains explains their long life. Finally, we showed that they are expected to exhibit nonlinear orbital paramagnetism. The model described has been used to explain the observed relatively long lifetime of microbubbles \cite{coey} and the observed temperature independent nonlinear magnetic properties of doped cerium oxide \cite{coey2}. \section*{Acknowledgement} KSG and SS would like to thank Prof.\ Alvaro Ferraz (IIP-UFRN-Brazil) for the hospitality at IIP-Natal-Brazil, where this work was carried out. SS would like to thank Prof.\ Michael Coey of CRANN for getting him interested in nanobubbles, for many helpful discussions of experimental results, and for his useful comments about the paper, especially regarding the significance of Eq.~(21).
\begin{document} \thispagestyle{empty} \begin{titlepage} \begin{flushright} FERMILAB-PUB-11-614-T \end{flushright} \vskip 2cm \begin{center} \Large {\sc The Forward-Backward Top Asymmetry in a Singlet Extension of the MSSM} \vspace*{1.5cm} \normalsize {\bf Alejandro de la Puente~\footnote{[email protected]}} \vskip0.8cm {\em Department of Physics, University of Notre Dame, Notre Dame, IN 46556, USA}\\[6pt] {\em and}\\[6pt] {\em Fermi National Accelerator Laboratory, P.O.
Box 500, Batavia, IL 60510, USA}\\[6pt] \setcounter{footnote}{0} \vskip0.6in \end{center} \centerline{\large\bf Abstract} \vspace{.5cm} \noindent The CDF and D$\O$ collaborations have recently reported a large forward-backward asymmetry in the $t\bar{t}$ system which deviates from the next-to-leading order QCD standard model prediction. We study the asymmetry in the $t\bar{t}$ system within the framework of singlet extensions of the Minimal Supersymmetric Standard Model. For this purpose, we introduce non-renormalizable couplings of the first and third generations of quarks to scalars. We analyze two limiting cases of the model, characterized by the size of the supersymmetric mass for the singlet superfield. We study both the small and large limits of this mass parameter. We find that in the region of small singlet supersymmetric mass we can obtain a large asymmetry while being consistent with limits on the $t\bar{t}$ production cross section. These results are also consistent with constraints arising from flavor physics, quark masses and top quark decays. \vspace{1.5cm} \begin{center} {\it Dedicated to my dear friend, Leven. If only I can have your spirit.} \end{center} \vspace{2mm} \end{titlepage} \section{Introduction} The CDF and D$\O$ collaborations have recently reported a new measurement of the inclusive forward-backward top asymmetry. In particular, after unfolding they have found~\cite{Aaltonen:2011kc,Abazov:2011rq} \begin{eqnarray} A^{t\bar{t}}_{FB}&=&0.158\pm0.072\pm0.017~(\text{CDF with} ~5.3 ~\text{fb}^{-1}), \\ A^{t\bar{t}}_{FB}&=&0.196\pm0.060^{+0.018}_{-0.026}~(\text{D$\O$ with}~5.4 ~\text{fb}^{-1}), \end{eqnarray} which is to be compared to the Standard Model (SM) prediction of $0.058\pm0.009$. Furthermore, the CDF collaboration has measured this asymmetry for different regions of $|\Delta y|$, the difference in the rapidities of the top and anti-top quarks, \begin{eqnarray} A^{t\bar{t}}_{FB}(|\Delta y|<1)&=&0.026\pm0.118, \\ A^{t\bar{t}}_{FB}(|\Delta y|\ge1)&=&0.611\pm0.256. \end{eqnarray} In addition, the CDF collaboration provides a measurement of the asymmetry for two different regions of the $t\bar{t}$ invariant mass distribution: \begin{eqnarray} A^{t\bar{t}}_{FB}(M_{t\bar{t}}<450~\text{GeV/c}^{2})&=&-0.116\pm0.153, \\ A^{t\bar{t}}_{FB}(M_{t\bar{t}}\ge450~\text{GeV/c}^{2})&=&~~0.475\pm0.114. \end{eqnarray} Equation (1.6) has a significance of 3.1 standard deviations from the SM prediction of $0.088\pm0.013$. The D$\O$ collaboration, however, does not find a significant dependence of $A^{t\bar{t}}$ on either $|\Delta y|$ or $M_{t\bar{t}}$. The close agreement between the CDF and D$\O$ results on the inclusive asymmetry serves as motivation for building models beyond the SM that may shed light on possible explanations for the large asymmetry. In this work, we introduce a supersymmetric model to explain the large asymmetry. We use an existing variation of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), known as the singlet extended Minimal Supersymmetric Standard Model (MSSM) or S-MSSM~\cite{CDOP1,CDP2}. The S-MSSM was introduced to provide a more natural solution to the so-called little hierarchy problem. In this work, the S-MSSM is extended with dimension-five operators in the superpotential, in order to study their contributions to the forward-backward asymmetry, A$^{t\bar{t}}_{FB}$, and the total cross section, $\sigma^{t\bar{t}}$.
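As a rough cross-check of the quoted tension (our own back-of-the-envelope estimate; it naively combines the experimental and theoretical errors in quadrature and ignores correlations, so it only approximates the 3.1 standard deviations quoted by the experimental analysis):
\begin{verbatim}
# Sketch: naive significance of the high-mass CDF asymmetry measurement.
import math

a_meas, da_meas = 0.475, 0.114   # CDF, M_ttbar >= 450 GeV/c^2
a_sm,   da_sm   = 0.088, 0.013   # SM prediction quoted above

sigma = (a_meas - a_sm) / math.hypot(da_meas, da_sm)
print(f"naive significance = {sigma:.1f} standard deviations")  # ~3.4
\end{verbatim}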
Within supersymmetric extensions of the SM, models with R-parity violation can contribute to the asymmetry through a t-channel sparticle exchange in the process $d\bar{d}\to t\bar{t}$~\cite{Cao:2009uz}. In a recent work by Isidori et al.~\cite{Isidori:2011dp}, a simple extension of the SM was introduced which incorporates a light fermion and a scalar with mass above the top mass. The authors found viable regions of parameter space where an asymmetry could be generated from the decays of the new scalars. This extension can be accommodated within the MSSM, where the scalar top can be identified with a right-handed stop and the light neutral fermion with the bino. However, in the MSSM, the bino couples with electroweak strength and a large asymmetry cannot be generated. It is argued in~\cite{Isidori:2011dp} that a similar analysis could be carried out within singlet extensions of the MSSM. Models that incorporate scalars within a variety of representations of the SM gauge group have been extensively studied~\cite{Grinstein:2011yv,Patel:2011eh,Ligeti:2011vt,AguilarSaavedra:2011vw,Gresham:2011pa,Shu:2011au,AguilarSaavedra:2011zy,Nelson:2011us,Babu:2011yw,Cui:2011xy,AguilarSaavedra:2011ug,Vecchi:2011ab,Dorsner:2009mq,Cao:2010zb,Shu:2009xf}. Most recently, Blum et al.~\cite{Blum:2011fa} have argued that only a color-singlet weak doublet with an electroweak-scale mass and a very non-generic flavor structure of Yukawa couplings can enhance the top forward-backward asymmetry while being consistent with the $t\bar{t}$ production cross section and invariant mass distribution. The model presented in this work incorporates a gauge singlet. The asymmetry is mediated by the Higgs mass eigenstates, which can be an admixture of the singlet and the up-type Higgs of the MSSM. We will show that relevant effective couplings in the Lagrangian of $O\left(1\right)$ after electroweak symmetry breaking can enhance the top asymmetry. Models that incorporate scalars are not the only route to generating a large asymmetry. Models with exotic gluons~\cite{Krnjaic:2011ub,Gresham:2011pa,AguilarSaavedra:2011ug,Cao:2010zb}, Kaluza-Klein modes within extra dimensional models~\cite{Westhoff:2011ir,Djouadi:2011aj,Bauer:2010iq,Djouadi:2009nb}, and models with new vector bosons~\cite{Haisch:2011up,AguilarSaavedra:2011vw,Gresham:2011pa,Shu:2011au,AguilarSaavedra:2011zy,AguilarSaavedra:2011ug,Cao:2010zb} have also been studied in great detail. This work is organized as follows: In section two the model is introduced, the Higgs spectrum is reviewed, and we show how the observables in the top sector are in large part fixed by electroweak symmetry breaking as well as the Higgs spectrum. In section three, the $p\bar{p}\to t\bar{t}$ differential cross section and forward-backward asymmetry are studied, in particular the interference between the SM and new physics contributions. Section four outlines the experimental constraints on our model. In section five, results are shown as a function of a few free parameters corresponding to the additional operators contributing to the top observables. Results are summarized in section six and the future outlook is presented. \section{Model} In recent work by this author and collaborators~\cite{CDOP1,CDP2}, a generalization of the NMSSM was studied which was designed to make the solution to the little hierarchy problem more natural within a low-energy framework.
The model differed from the original NMSSM in that supersymmetric mass terms for both the MSSM Higgs fields and the gauge singlet were introduced. In the following, general aspects of this class of models are reviewed. The superpotential governing these models is given by~\footnote{There is no symmetry that forbids the tadpole term, but the non-renormalization theorem will prevent its generation until SUSY is broken, thus it is assumed to be absent. The $S^{3}$ term is no longer required to stabilize the potential and its coefficient is taken to zero~\cite{CDOP1}.} \begin{equation} W_{\mathrm{S-MSSM}}=W_{\mathrm{Yukawa}}+(\mu+\lambda\hat{S})\hat{H_{u}}\hat{H_{d}}+\frac{\mu_{s}}{2}\hat{S}^{2}. \end{equation} The scalar potential, including all the allowed soft SUSY-breaking terms, is given by \begin{eqnarray} V&=&(m^{2}_{H_{u}}+|\mu+\lambda S|^{2})|H_{u}|^{2}+(m^{2}_{H_{d}}+|\mu+\lambda S|^{2})|H_{d}|^{2}+(m_{s}^{2}+\mu_{s}^{2})|S|^{2} \nonumber \\ &+&[B_{s}S^{2}+(\lambda\mu_{s}S^{\dagger}+B_{\mu}+\lambda A_{\lambda}S)H_{u}H_{d}+h.c.] +\lambda^{2}|H_{u}H_{d}|^{2} \nonumber \\ &+&\frac{1}{8}(g^{2}+g'^{2})(|H_{u}|^{2}-|H_{d}|^{2})^{2}+\frac{1}{2}g^{2}|H_{u}^{\dagger}H_{d}|^{2}, \end{eqnarray} where $m^{2}_{s}$, $B_{s}$ and $A_{\lambda}$ are the soft breaking contributions associated with the singlet. Minimization of the tree-level scalar potential in the absence of CP-violating phases leads to the following three conditions: \begin{equation} \frac{1}{2}m_{Z}^{2}=\frac{m_{H_{d}}^{2}-m_{H_{u}}^{2}\tan^{2}\beta}{\tan^{2}\beta-1}-\mu_{eff}^{2}, \end{equation} \begin{equation} \sin2\beta=\frac{2B_{\mu,eff}}{m^{2}_{H_{u}}+m^{2}_{H_{d}}+2\mu^{2}_{eff}+\lambda^{2}v^{2}}, \end{equation} \begin{equation} v_{s}=\frac{\lambda v^{2}}{2}\frac{(\mu_{s}+A_{\lambda})\sin2\beta-2\mu}{\lambda^{2}v^{2}+\mu^{2}_{s}+m^{2}_{s}+2B_{s}}, \end{equation} where $v_{s}=\left<S\right>$ is the vacuum expectation value (vev) of the singlet field, and $v^{2}=v^{2}_{u}+v^{2}_{d}=(174$ GeV)$^{2}$. Furthermore, the following parameters are defined: \begin{eqnarray} \mu_{eff}&=&\mu+\lambda v_{s}, \\ B_{\mu,eff}&=&B_{\mu}+\lambda v_{s}(\mu_{s}+A_{\lambda}). \end{eqnarray} As in the NMSSM, this class of models leads to a scalar spectrum consisting of three scalars, two pseudoscalars, and one charged Higgs boson. In~\cite{CDOP1}, the model was analyzed in the limit where $\mu_{s}$ was the largest scale in the Higgs sector. It was found that, in this region of parameter space, the vacuum structure of the model was very similar to that of the MSSM. Furthermore, in the limit of $\mu_{s}\to\infty$, the singlet vev $v_{s}\to0$, and the singlet could be integrated out supersymmetrically. In the Higgs decoupling limit, only one light scalar, identified with the SM-like Higgs boson, remained, with a mass given by \begin{equation} m^{2}_{h^{0}}\approx m^{2}_{Z}\cos^{2}2\beta+\frac{2\lambda v^{2}}{\mu_{s}}\left(2\mu\sin2\beta-A_{\lambda}\sin^{2}2\beta\right). \end{equation} In the opposite limit, where $\mu_{s}$ is small, studied in~\cite{CDP2}, the vacuum structure of the model can be substantially different from that of the MSSM. In the Higgs decoupling limit, the spectrum was found to include one scalar identified with the SM-like Higgs boson, with mass given by \begin{equation} m^{2}_{h^{0}}\approx m^{2}_{Z}\cos^{2}2\beta+\lambda^{2}v^{2}\sin^{2}2\beta-\frac{(m^{2}_{Z}-\lambda^{2}v^{2})^{2}}{m^{2}_{A}}\sin^{2}2\beta\cos^{2}2\beta, \end{equation} where $m^{2}_{A}\approx 2B_{\mu,eff}/\sin2\beta$.
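For orientation, here is a small numerical evaluation of this tree-level formula (our own sketch; $\tan\beta=2$ and $\lambda=0.63$ are the benchmark values used in Section 5, while $m_{A}=800$ GeV is an illustrative choice):
\begin{verbatim}
# Sketch: tree-level SM-like Higgs mass in the small mu_s limit (Eq. 2.9).
import math

mZ, v = 91.19, 174.0      # GeV
tanb, lam = 2.0, 0.63     # benchmark values of Section 5
mA = 800.0                # illustrative heavy (pseudo)scalar mass, GeV

c2b = (1.0 - tanb**2) / (1.0 + tanb**2)   # cos(2 beta)
s2b = 2.0 * tanb / (1.0 + tanb**2)        # sin(2 beta)

mh2 = (mZ**2 * c2b**2 + lam**2 * v**2 * s2b**2
       - (mZ**2 - lam**2 * v**2)**2 / mA**2 * s2b**2 * c2b**2)
print(f"m_h(tree) ~ {math.sqrt(mh2):.0f} GeV")   # ~103 GeV
\end{verbatim}
The one-loop radiative corrections included in the scenarios of Section 5 raise this tree-level value of $\sim103$ GeV to the $\sim124$ GeV quoted there.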
On the other hand, there are two lighter mostly-singlet states with masses given by \begin{eqnarray} m^{2}_{A_{s}}&\approx&\mu_{s}^{2}+\lambda^{2}v^{2}-\frac{\lambda^{2}v^{2}A^{2}_{\lambda}}{m^{2}_{A}}, \nonumber \\ m^{2}_{h_{s}}&\approx&\mu_{s}^{2}+\lambda^{2}v^{2}-\frac{\lambda^{2}v^{2}A^{2}_{\lambda}}{m^{2}_{A}} \cos^{2}2\beta. \end{eqnarray} Within the minimal incarnation of this class of models, there is no significant contribution from the Higgs spectrum to $q\bar{q}$ scattering. Therefore, in this work we consider a simple extension of this scenario by introducing the following dimension-five operators in the superpotential: \begin{equation} W=W_{\mathrm{S-MSSM}}+\frac{\Lambda_{ij}}{M}\hat{S}\hat{H_{u}}\hat{u^{c}_{i}} \hat{Q}_{j}-\frac{\Sigma_{ij}}{M}\hat{S}\hat{H_{d}}\hat{d^{c}_{i}}\hat{Q}_{j}. \end{equation} These interactions allow for t-channel contributions to $q\bar{q}$ scattering mediated by Higgs particles. In particular, off-diagonal elements coupling the first and third families will be relevant in generating the forward-backward asymmetry of the $t\bar{t}$ pair. The scale $M$ dictates where these operators arise, and in this work we assume it is not far from the TeV scale. In light of the results from both the CDF and D$\O$ collaborations, only couplings between the first and third generations of quarks will be considered. We will assume a fermion basis where all the SM up-type Yukawa couplings are diagonal before electroweak symmetry breaking. In such a basis we consider the following structure for the $\Lambda$ matrix: \begin{equation} \Lambda = \begin{pmatrix} 0 & 0 & \Lambda_{13} \\ 0 & 0 & 0 \\ \Lambda_{31} & 0 & 0 \end{pmatrix} ~. \end{equation} Furthermore, we assume that $\Sigma_{ij}\approx0$, effectively yielding no new physics contributions to the top forward-backward asymmetry from the down-quark sector. At any rate, compared to the $\Lambda$ effects, the corrections from the $\Sigma$ couplings would be suppressed, since they enter the asymmetry and cross section through $d\bar{d}$ scattering. The operators in the Lagrangian, derived from (2.12), that couple first generation up quarks to their third generation counterparts through the exchange of neutral scalar or pseudoscalar Higgs bosons are given by \begin{equation} {\cal L}_{u,t}\supset \sum_{i}\left( F^{i}_{R,H}H_{i}-iF^{i}_{R,A}A_{i}\right)\bar{u}_{L}t_{R}+\left( F^{i}_{L,H}H_{i}+iF^{i}_{L,A}A_{i}\right)\bar{u}_{R}t_{L}+h.c.~, \end{equation} where \begin{eqnarray} F^{i}_{R,(H,A)}=\frac{\Lambda_{31}}{\sqrt{2}M}(v\sin\beta~O^{(H,A)}_{i,S}+v_{s}~O^{(H,A)}_{i,H_{u}}), \nonumber \\ F^{i}_{L,(H,A)}=\frac{\Lambda_{13}}{\sqrt{2}M}(v\sin\beta~O^{(H,A)}_{i,S}+v_{s}~O^{(H,A)}_{i,H_{u}}). \end{eqnarray} The matrices $O^{(H,A)}$ diagonalize the scalar weak eigenstates $(H_{d},H_{u},S)$ into the corresponding mass eigenstates. These are labeled as $(H_{1},H_{2},H_{3})$ for scalars, and $(A_{1},A_{2})$ for pseudoscalars, in order of increasing mass. The operators coupling down quarks to top quarks through the exchange of a charged Higgs boson are given by \begin{equation} {\cal L}_{d,t}\supset -\frac{v_{s}}{M}\Lambda_{31}\cos\beta~\bar{d}_{L} t_{R}H^{-}+h.c.~. \end{equation} The Feynman diagrams corresponding to the new physics in (2.13) and (2.15) are shown in Figure~\ref{fig:feynmandiag}.
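To get a feel for the size of these effective couplings, here is an illustrative evaluation of Eq.~(2.14) (our own sketch; the mixing entries and $v_{s}$ anticipate scenario A of Table~\ref{tab:scenarios_input}, with $M=1$ TeV and $\tan\beta=2$ as in Section 5):
\begin{verbatim}
# Sketch: size of F_R from Eq. (2.14),
#   F_R = Lambda31/(sqrt(2) M) * (v*sin(beta)*O_S + v_s*O_Hu).
import math

M, v, tanb = 1000.0, 174.0, 2.0         # GeV; Section 5 benchmark
sinb = tanb / math.sqrt(1.0 + tanb**2)
vs = 130.0                              # singlet vev, scenario A
O_S, O_Hu = -0.079, 0.90                # mixing entries, scenario A
Lambda31 = 9.0                          # illustrative value, below 4*pi

F_R = Lambda31 / (math.sqrt(2.0) * M) * (v * sinb * O_S + vs * O_Hu)
print(f"F_R ~ {F_R:.2f}")               # ~0.7, i.e. O(1)
\end{verbatim}
This makes explicit the statement above that effective Lagrangian couplings of $O(1)$ are attainable for $\Lambda_{31}$ below $4\pi$.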
\begin{figure}[ht] \begin{center} \begin{picture}(180,120)(-80,-80) \Text(-43,35)[!]{$u$} \ArrowLine(-35,35)(0,0) \ArrowLine(0,0)(35,35) \Text(43,35)[!]{$t$} \Vertex(0,0){2} \Text(20,-25)[!]{$H_{i}, A_{i}$} \DashLine(0,-50)(0,0){5} \Vertex(0,-50){2} \Text(-43,-85)[!]{$\overline{u}$} \ArrowLine(0,-50)(-35,-85) \ArrowLine(35,-85)(0,-50) \Text(43,-85)[!]{$\overline{t}$} \end{picture} \begin{picture}(180,120)(-80,-80) \Text(-43,35)[!]{$d$} \ArrowLine(-35,35)(0,0) \ArrowLine(0,0)(35,35) \Text(43,35)[!]{$t$} \Vertex(0,0){2} \Text(20,-25)[!]{$H^\pm$} \DashLine(0,-50)(0,0){5} \Vertex(0,-50){2} \Text(-43,-85)[!]{$\overline{d}$} \ArrowLine(0,-50)(-35,-85) \ArrowLine(35,-85)(0,-50) \Text(43,-85)[!]{$\overline{t}$} \end{picture} \end{center} \caption{\small New diagrams contributing to $t\bar{t}$ production. \label{fig:feynmandiag}} \end{figure} We will probe the parameter space of the model mainly as a function of the singlet's vev, $v_{s}$, and the new couplings $\Lambda_{13}$ and $\Lambda_{31}$, always requiring that their values remain below $4\pi$. Notice, however, that for such large values of the couplings, extra contributions coming from higher dimensional operators could be of similar size to those given in Equation (2.11). For simplicity, in this work we restrict ourselves to a dimension-five analysis. \section{Differential Cross Section and Asymmetry} Following the analysis carried out by the authors in~\cite{Cao:2010zb}, the differential cross section at the parton level can be written as \begin{equation} \frac{d\hat{\sigma}}{d\cos\theta}=M^{SM}+M^{INT}+M^{NP}, \end{equation} where $M^{INT}$ denotes the interference between the SM and contributions arising from the operators given in (2.13) and (2.15), while $M^{SM}$ and $M^{NP}$ denote the contributions solely from the SM and new physics, respectively. In what follows, only the interference between new physics and the leading-order standard model diagrams will be considered; we will not incorporate the interference with the dominant NLO QCD corrections. $M^{SM}$ does include next-to-leading order contributions, and so we define the total new physics contributions by \begin{equation} M^{NP}_{total}=M^{NP}+M^{SM~LO,~NP}_{INT}. \end{equation} Integrating (3.2) in both the forward and backward regions, one can express the asymmetry simply as: \begin{equation} A^{total}_{FB}=A^{NP}_{FB}\cdot R+A^{SM}_{FB}\cdot(1-R), \end{equation} where we have made use of the following definitions: \begin{eqnarray} A^{NP}_{FB}&=&\frac{\sigma^{NP}_{F}-\sigma^{NP}_{B}}{\sigma^{NP}_{F}+\sigma^{NP}_{B}}, \nonumber \\ A^{SM}_{FB}&=&\frac{\sigma^{SM}_{F}-\sigma^{SM}_{B}}{\sigma^{SM}_{F}+\sigma^{SM}_{B}}, \\ R&=&\frac{\sigma^{NP}_{total}}{\sigma^{SM}_{total}+\sigma^{NP}_{total}}. \nonumber \end{eqnarray} The new physics contributions to the differential cross section in (3.1) can be calculated from equations (2.13) and (2.15). The new physics t-channel contributions to the $t\bar{t}$ production cross section, originating from a $u\bar{u}$ initial state and mediated by scalar and pseudoscalar particles, are given by \begin{eqnarray} M^{NP}(u\bar{u}\to t\bar{t})&=&\frac{\pi\beta_{t}\left(\hat{t}-m^{2}_{t}\right)^{2}}{2(16\pi)^{2}\hat{s}}\sum_{ij}\left[\frac{A^{ij}}{(\hat{t}-m^{2}_{H_{i}}+im_{H_{i}}\Gamma(m_{H_{i}}))(\hat{t}-m^{2}_{H_{j}}-im_{H_{j}}\Gamma(m_{H_{j}}))}\right.
\nonumber \\ &+&\frac{B^{ij}}{(\hat{t}-m^{2}_{A_{i}}+im_{A_{i}}\Gamma(m_{A_{i}}))(\hat{t}-m^{2}_{A_{j}}-im_{A_{j}}\Gamma(m_{A_{j}}))} \nonumber \\ &+&\left.\left(\frac{C^{ij}}{(\hat{t}-m^{2}_{H_{i}}+im_{H_{i}}\Gamma(m_{H_{i}}))(\hat{t}-m^{2}_{A_{j}}-im_{A_{j}}\Gamma(m_{A_{j}}))}+h.c.\right)\right], \end{eqnarray} where $\beta_{t}=\sqrt{1-\frac{4m^{2}_{t}}{\hat{s}}}$ and the expressions for the coefficients $A^{ij},~B^{ij}$ and $C^{ij}$ are given by \begin{eqnarray} A^{ij}&=&\left((F^{i}_{R,H}+F^{i}_{L,H})^{2}(F^{j}_{R,H}+F^{j}_{L,H})^{2}+(F^{i}_{R,H}-F^{i}_{L,H})^{2}(F^{j}_{R,H}-F^{j}_{L,H})^{2}\right. \nonumber \\ &+&\left.2(F^{i2}_{R,H}-F^{i2}_{L,H})(F^{j2}_{R,H}-F^{j2}_{L,H})\right), \nonumber \\ B^{ij}&=&\left((F^{i}_{R,A}+F^{i}_{L,A})^{2}(F^{j}_{R,A}+F^{j}_{L,A})^{2}+(F^{i}_{R,A}-F^{i}_{L,A})^{2}(F^{j}_{R,A}-F^{j}_{L,A})^{2}\right. \nonumber \\ &+&\left.2(F^{i2}_{R,A}-F^{i2}_{L,A})(F^{j2}_{R,A}-F^{j2}_{L,A})\right), \nonumber \\ C^{ij}&=&\left((F^{i}_{R,H}+F^{i}_{L,H})^{2}(F^{j}_{R,A}-F^{j}_{L,A})^{2}+(F^{i}_{R,H}-F^{i}_{L,H})^{2}(F^{j}_{R,A}+F^{j}_{L,A})^{2}\right. \nonumber \\ &+&\left.2(F^{i2}_{R,H}-F^{i2}_{L,H})(F^{j2}_{R,A}-F^{j2}_{L,A})\right). \end{eqnarray} The contribution arising from a $d\bar{d}$ initial state is mediated by the charged Higgs scalar and is given by \begin{equation} M^{NP}(d\bar{d}\to t\bar{t})=\frac{\pi\beta_{t}\left(\hat{t}-m^{2}_{t}\right)^{2}}{2(8\pi)^{2}\hat{s}}\frac{F^{4}_{H^{\pm}}}{(\hat{t}-m^{2}_{H^{\pm}})^{2}+m^{2}_{H^{\pm}}\Gamma^{2}(m_{H^{\pm}})}, \end{equation} where $F_{H^{\pm}}=\frac{v_{s}}{M}\Lambda_{31}$. Finally, the interference of the new physics diagrams with those arising from the leading-order QCD contribution is given by \begin{eqnarray} M^{INT}(u\bar{u}\to t\bar{t})&=&\frac{\alpha_{s}\beta_{t}}{36\hat{s}^{2}}\sum_{i}\left(\frac{(F^{i2}_{R,(H,A)}+F^{i2}_{L,(H,A)})(\hat{s}m^{2}_{t}+(\hat{t}-m^{2}_{t})^{2})}{\hat{t}-m^{2}_{(H,A)_{i}}+im_{(H,A)_{i}}\Gamma(m_{(H,A)_{i}})}\right. \nonumber \\ &+&\left.\frac{(F^{i2}_{R,(H,A)}+F^{i2}_{L,(H,A)})(\hat{s}m^{2}_{t}+(\hat{t}-m^{2}_{t})^{2})}{\hat{t}-m^{2}_{(H,A)_{i}}-im_{(H,A)_{i}}\Gamma(m_{(H,A)_{i}})}\right), \end{eqnarray} for scalar/pseudoscalar mediation and \begin{eqnarray} M^{INT}(d\bar{d}\to t\bar{t})&=&\frac{\alpha_{s}\beta_{t}}{36\hat{s}^{2}}\frac{F^{2}_{H^{\pm}}(\hat{s}m^{2}_{t}+(\hat{t}-m^{2}_{t})^{2})}{\hat{t}-m^{2}_{H^{\pm}}+im_{H^{\pm}}\Gamma(m_{H^{\pm}})} \nonumber \\ &+&\frac{F^{2}_{H^{\pm}}(\hat{s}m^{2}_{t}+(\hat{t}-m^{2}_{t})^{2})}{\hat{t}-m^{2}_{H^{\pm}}-im_{H^{\pm}}\Gamma(m_{H^{\pm}})}, \end{eqnarray} for charged scalar mediation. \section{Constraints} \subsection{$u-t$ Mass Mixing} Assuming the fermion basis and the structure of $\Lambda$ introduced in Section 2, the operators in the Lagrangian coupling first generation to third generation up quarks are given by \begin{equation} {\cal L}_{ut}\supset \frac{\Lambda_{31}}{M}S H^{0}_{u}\bar{t}_{R}u_{L}+\frac{\Lambda_{13}}{M}SH^{0}_{u}\bar{u}_{R}t_{L}+h.c.~. \end{equation} Expanding around fluctuations from the minima of both the singlet and the up-type neutral Higgs, contributions to the masses of the up and the top quarks arise. In particular, these lead to mixing terms parametrized by the following mass matrix: \begin{equation} M^{2}_{U} = \begin{pmatrix} \left(\Lambda_{13}~\frac{v_{s}v_{u}}{M}\right)^{2} & \left(\Lambda_{13}~\frac{v_{s}v_{u}}{M}\right)m_{t,0} \\ \left(\Lambda_{13}~\frac{v_{s}v_{u}}{M}\right)m_{t,0} & \left(\Lambda_{31}~\frac{v_{s}v_{u}}{M}\right)^{2} +m^{2}_{t,0} \end{pmatrix} ~.
\end{equation} In the above expression, the contribution to the up-quark mass from the Yukawa sector has been taken to zero. Furthermore, we use $m_{t,0}$ to denote the contribution from the Yukawa sector to the top quark mass. For $m_{t,0}\gg\Lambda\frac{v_{s}v\sin\beta}{M}$ the following are good approximations to the masses of the quark mass eigenstates: \begin{eqnarray} m^{2}_{u}&\approx&\frac{\left(\Lambda_{31}\Lambda_{13}\frac{v^{2}_{s}v^{2}\sin^{2}\beta}{M^{2}}\right)^{2}}{\left( m^{2}_{t,0}+\left(\Lambda_{13}~\frac{v_{s}v_{u}}{M}\right)^{2}+\left(\Lambda_{31}~\frac{v_{s}v_{u}}{M}\right)^{2}\right)}, \nonumber \\ m^{2}_{t}&\approx&\left( m^{2}_{t,0}+\left(\Lambda_{13}~\frac{v_{s}v_{u}}{M}\right)^{2}+\left(\Lambda_{31}~\frac{v_{s}v_{u}}{M}\right)^{2}\right). \end{eqnarray} Within this limit one can see that $m_{t,0}\approx m_{t}$. The value of $m_{t,0}$ is then found by imposing that $m_{t}\equiv 172.5$ GeV. Experimental constraints on the mass of the up quark~\cite{PDG} give a range of allowed values for $m_{u}$: \begin{equation} 1.3~\text{MeV}\le m_{u}\le3.1~\text{MeV}. \end{equation} Imposing $m_{u}\le3.1$ MeV constrains the product $\Lambda_{13}\Lambda_{31}$. One can impose that both couplings be small, but that will generate no new physics contributions to the forward-backward top asymmetry. One can instead impose that one of the couplings be small enough to satisfy the constraint in (4.4) while letting the other provide the new physics needed to generate a large asymmetry. In what follows, we will see that flavor constraints single out $\Lambda_{13}$, rather than $\Lambda_{31}$, as the coupling to be suppressed. \subsection{Meson mixing} Due to the flavor mixing structure of the matrix $\Lambda$ introduced in (2.13), contributions to meson mixing will arise. The operators in the Lagrangian contributing to $K^{0}-\bar{K}^{0}$ mixing are given by \begin{equation} {\cal L}_{mixing}\supset-\frac{v_{s}}{M}O^{H^{\pm}}_{22}\bar{d}_{Li}(V^{\dagger}\Lambda)_{ij}u_{Rj}H^{-}+h.c.~, \end{equation} where $V$ is the CKM matrix. The above contribution to meson mixing has the same structure as that recently studied in~\cite{Blum:2011fa}. In the model considered here the flavor-changing matrix has an additional suppression given by $\frac{v_{s}}{2M}O^{H^{\pm}}_{22}$, and thus it is constrained such that \begin{equation} \frac{1}{32\pi^{2}}\left(\frac{\text{TeV}}{m_{H^{\pm}}}\right)^{2}\sum_{i}F\left(x_{i}\right)\left(V^{\dagger}\Lambda'\right)^{2}_{1i}\left(V^{\dagger}\Lambda'\right)^{*2}_{2i}<10^{-6}, \end{equation} where $x_{i}=\frac{m^{2}_{u_{i}}}{m^{2}_{H^{\pm}}}$ and $\Lambda'=\frac{v_{s}}{2M}O^{H^{\pm}}_{22}\Lambda$. The loop function $F$ is given by \begin{equation} F(x)=\frac{1-x^{2}+2x\log(x)}{(1-x)^{3}}. \end{equation} Suppressing contributions to $K^{0}-\bar{K}^{0}$ mixing can be achieved with large charged Higgs masses or in the limit where $\Lambda_{13}\ll1$. \subsection{New Top decay channels} In Section 2 we introduced the Higgs spectrum of the S-MSSM. In particular, in the Higgs decoupling limit of the model where $\mu_{s}$ corresponds to the largest scale in the Higgs sector, one light scalar exists and can be identified with the SM-like Higgs. In the small $\mu_{s}$ limit, two additional singlet-like scalars with masses below $100$ GeV are present. Due to the new flavor-changing neutral current operators present in our model, the light scalars contribute to the decay width of the top quark.
In particular, we have for $m_{\phi_{i}}\le m_{t}$: \begin{equation} \Gamma \left(t\to\phi_{i} u\right)=\frac{m_{t}}{32\pi}\left(1-\frac{m^{2}_{\phi_{i}}}{m^{2}_{t}}\right)^{2}\left(F^{i2}_{L}+F^{i2}_{R}\right), \end{equation} where $\phi_{i}$ denotes any scalar or pseudoscalar that can be produced by a decaying top. A direct measurement of the top decay width has been carried out recently and yields an upper bound on the total decay width of the top quark of $7.6$ GeV at the $95\%$ confidence level, for a top mass of $172.5$ GeV~\cite{Aaltonen:2010ea}. We incorporate this constraint to place bounds on the allowed size of the couplings $F_{L,R}$. \subsection{Constraints from single and same-sign top production} It is also worth mentioning the collider constraints from single top production and same-sign top production that may restrict our parameter space. In particular, from~\cite{AguilarSaavedra:2011zy} we see that the coupling which enters into the cross section for same-sign top production is given by \begin{equation} g_{tt}\propto\left(\Lambda_{13}\Lambda_{31}\right)^{2}. \end{equation} Therefore, one may suppress any additional contributions to the same-sign top production cross section by suppressing one of the couplings, as in the $u-t$ mass mixing constraint. \begin{figure}[t] \begin{center} \begin{picture}(150,80)(-50,-100) \Text(-43,35)[!]{$g$} \Gluon(-35,35)(5,0){5}{4} \ArrowLine(5,0)(35,35) \Text(43,35)[!]{$t$} \Vertex(5,0){2} \Text(20,-25)[!]{$t$} \ArrowLine(5,-50)(5,0) \Vertex(5,-50){2} \Text(-43,-85)[!]{$u$} \ArrowLine(-35,-85)(5,-50) \DashLine(35,-85)(5,-50){5} \Text(53,-85)[!]{$H_{i},~A_{i}$} \Text(5,-100)[!]{$(a)$} \end{picture} \begin{picture}(150,80)(-100,-100) \Text(-43,35)[!]{$g$} \Gluon(-35,35)(5,0){5}{4} \ArrowLine(5,0)(35,35) \Text(43,35)[!]{$t$} \Vertex(5,0){2} \Text(20,-25)[!]{$t$} \ArrowLine(5,-50)(5,0) \Vertex(5,-50){2} \Text(-43,-85)[!]{$d$} \ArrowLine(-35,-85)(5,-50) \DashLine(35,-85)(5,-50){5} \Text(43,-85)[!]{$H^\pm$} \Text(5,-100)[!]{$(b)$} \end{picture} \end{center} \caption{\small New physics diagrams contributing to single top production together with a neutral Higgs in (a) and a charged Higgs in (b). \label{fig:singletop}} \end{figure} The diagrams contributing to single top production are shown in Figure~\ref{fig:singletop}. D$\O$ has a recent model-independent measurement of the single top production cross section at a center-of-mass energy of $\sqrt{s}=1.96$ TeV with $5.4$ fb$^{-1}$~\cite{Abazov:2011rz}. They find: \begin{equation} \sigma\left(p\bar{p}\to tqb+X\right)=2.90\pm0.59~\text{pb}, \end{equation} for a top mass of $172.5$ GeV. The diagram in Figure~\ref{fig:singletop}b will contribute to the single top production cross section for $H^{\pm}\to\bar{b}u$. This contribution can be suppressed either by suppressing $\Lambda_{13}$, which enters the $dtH^{\pm}$ vertex, or by a suppression of $\Lambda_{31}$, which has the effect of making the decay $H^{\pm}\to\bar{b}u$ negligible. For a very heavy scalar or pseudoscalar the diagram of Figure~\ref{fig:singletop}a will be naturally suppressed at the Tevatron. For light scalars/pseudoscalars, the only decay channel open is into $b\bar{b}$, in which case the signal will be $t+2b~\text{jets}$. In some cases there are cascade decays between Higgses, which may suppress the branching ratio into $b\bar{b}$ significantly, and the main signal will be $t+4b~\text{jets}$.
The coupling at the $tu(H_{i}A_{i})$ vertex is proportional to $F_{L,R}$ in Equation (2.14), and thus one may need to suppress both couplings in order not to enhance the single top production cross section. Given the complexity of the final states, a direct comparison with the D$\O$ measurement is difficult to make. \section{Results} In this section we present results on the forward-backward asymmetry in $t\bar{t}$ production arising from the extension of the S-MSSM introduced in Section 2, as well as the contributions to the total $t\bar{t}$ cross section. Due to the large number of parameters present in our model, and due to the fact that there exists a vast region of parameter space that can provide a solution to the little hierarchy problem, we present our results for various values of $v_{s}$. Furthermore, we illustrate our results for the two limiting cases of $\mu_{s}$ which were explained in Section 2. We use $\tan\beta=2$ with a corresponding value of $\lambda=0.63$~\cite{CDOP1,CDP2} and work in the Higgs decoupling limit in order to maximize the tree-level contribution to the mass of the SM-like Higgs boson. We fix the scale of new physics, $M$, where the operators in (2.13) arise, to $1$ TeV. In our calculations we make use of the CTEQ6L PDF set~\cite{Pumplin:2002vw}, using a factorization and renormalization scale of $m_{t}/2$. For the strong coupling constant we take $\alpha_{s}(161.9~\text{GeV})\sim 0.1$, which is used to calculate the one-loop radiative correction to the Higgs masses in~\cite{CDOP1,CDP2}. We carry out our calculations using a top mass of $172.5$ GeV and use the CDF analysis of the $t\bar{t}$ production cross section, which incorporates a combination of leptonic and hadronic channels using data with an integrated luminosity of $4.6$ fb$^{-1}$~\cite{ttbarCSCDF}. They find: \begin{equation} \sigma_{t\bar{t}}=7.50\pm0.48~\text{pb}, \end{equation} for $m_{t}=172.5$ GeV. In addition, we apply all of the constraints introduced in Section 4 to search for viable scenarios consistent with experimental observations. \begin{figure}[t] \centering \includegraphics[width=4in]{CSvsAFB.eps} ~~~ \caption{\small The $t\bar{t}$ production cross section as a function of the parton level forward-backward top asymmetry for various values of the singlet vev $v_{s}$, Scenarios A through D. The green band indicates the combined uncertainty from the asymmetry measurements of CDF and D$\O$~\cite{Aaltonen:2011kc,Abazov:2011rq}, and the cyan band the combined theoretical and experimental uncertainty on the value of the $t\bar{t}$ production cross section given in Equation (5.1)~\cite{ttbarCSCDF}. The value of $\Lambda_{31}$ increases along the curves, from $0$ (left) to $9.5$ (right), for $\Lambda_{13}$ close to zero. \label{fig:smallmus1} } \end{figure} In the small $\mu_{s}$ limit the vacuum structure of the theory is significantly different from that of the MSSM. In particular, the appearance of light, mostly singlet scalars can significantly enhance the $t\bar{t}$ production cross section. In Figure~\ref{fig:smallmus1} we illustrate our results for the new physics contributions to the $t\bar{t}$ cross section as a function of the forward-backward asymmetry. The output parameters that arise from electroweak symmetry breaking are shown in Table~\ref{tab:scenarios_input}. In the figure we show the experimental value for the cross section with a green one sigma band and the experimental value for the asymmetry with a cyan one sigma band.
As can be seen from the figure, for all of our curves there is a region that falls within one standard deviation of both the cross section and the asymmetry. The black dotted line corresponds to a value of $v_{s}=120$ GeV, $\mu_{s}=20$ GeV and $A_{\lambda}=190$ GeV as well as vanishing values of $\mu$ and $B_{\mu}$, labeled scenario A in Table~\ref{tab:scenarios_input}. Scenario A is characterized by a heavy scalar and pseudoscalar with masses around 200 GeV, a SM-like Higgs with mass 124 GeV, one singlet-like scalar with mass 85 GeV and one singlet-like pseudoscalar with a mass of 60 GeV. The mass splitting between the two singlet-like states is evident from Equation (2.10), and it is due to the fact that the ratio $A^{2}_{\lambda}/m^{2}_{A}$ approaches unity. The blue dotted line corresponds to a value of $v_{s}=20$ GeV, $\mu_{s}=20$ GeV and $A_{\lambda}=470$ GeV as well as values for $\mu$ and $\sqrt{B_{\mu}}$ of 180 and 500 GeV respectively, labeled scenario B in Table~\ref{tab:scenarios_input}. Scenario B is characterized by a heavy scalar and pseudoscalar with masses around 800 GeV, a SM-like Higgs with mass 124 GeV, and one singlet-like scalar and pseudoscalar with masses close to $100$ GeV. The near mass degeneracy of the singlet-like states is apparent from Equation (2.10), given that the $A^{2}_{\lambda}/m^{2}_{A}$ ratio has a more negligible contribution to the masses. In Figure~\ref{fig:smallmus2} we plot the asymmetry as a function of $\Lambda_{31}$ on the left, and the total cross section as a function of $\Lambda_{31}$ on the right, for scenarios A and B. The value of $\Lambda_{13}$ is fixed close to zero in order to remain consistent mainly with the constraint arising from the up quark mass. In this figure, the impact that the lighter spectrum has on the cross section becomes more evident, and its contributions become dominant in scenario A for smaller values of $\Lambda_{31}$. From Figures~\ref{fig:smallmus1} and~\ref{fig:smallmus2} one can also note the inflection point where the pure new physics contributions to the cross section dominate over the interference terms in (3.10) and (3.11). This transition from negative to positive contributions to the cross section is more rapid for smaller values of $\Lambda_{31}$ and larger values of $v_{s}$, and it is also a consequence of the relatively light spectrum. In scenarios C and D (red and orange in Figure~\ref{fig:smallmus1}, respectively) the value of $v_{s}$ is increased by increasing $A_{\lambda}$ to $310$ and $470$ GeV, respectively. The values of $\mu$ and $B_{\mu}$ are fixed to zero. The light Higgs spectrum for these two scenarios remains identical to that of scenario A, since the ratio $A^{2}_{\lambda}/m^{2}_{A}$ remains close to unity. A large value of $v_{s}$ thus requires a smaller value of $\Lambda_{31}$ to generate a significant contribution to the cross section. \begin{figure}[t] \begin{center} \includegraphics[width=2.8in]{AFBvsNP.eps}~~ \includegraphics[width=2.8in]{CSvsNP.eps} \caption{\small On the left, the forward-backward top asymmetry at the parton level as a function of $\Lambda_{31}$ for scenarios A and B. The green bands indicate the combined uncertainty from the asymmetry measurements of CDF and D$\O$~\cite{Aaltonen:2011kc,Abazov:2011rq}. On the right, the $t\bar{t}$ production cross section as a function of $\Lambda_{31}$ for scenarios A and B.
The green bands indicate the combined theoretical and experimental uncertainty on the cross section~\cite{ttbarCSCDF}.\label{fig:smallmus2} } \end{center} \end{figure} \begin{table}[hb] \addtolength{\arraycolsep}{10pt} \renewcommand{\arraystretch}{1.3} \centering \begin{tabular}{|c|c|c|c|c|} \hline\hline & Sc. A & Sc. B & Sc. C & Sc. D\\ \hline\hline $v_{s}$ [GeV] & $130$ & $20$ & $200$ & $300$ \\ $O^{H}_{2,S}, O^{H}_{2,H_{u}}$ & $-0.079$,~$0.90$ & $0.024$,~$-0.89$ & $-0.12$,~$0.90$ & $-0.18$,~$0.89$\\ $O^{H}_{1,S}, O^{H}_{1,H_{u}}$ & $-0.091$,~$0.93$ & $-0.0007$,~$0.99$ & $0.01$,~$0.97$ & $0.10$,~$0.97$ \\ $O^{A}_{1,S}, O^{A}_{1,H_{u}}$ & $-0.19$,~$0.90$ & $-0.03$,~$0.99$ & $-0.13$,~$0.95$ & $-0.095$,~$0.98$\\ \hline\hline \end{tabular} \caption{\small Scalar mixing angles and vev in the singlet field direction.\label{tab:scenarios_input} } \end{table} In the large $\mu_{s}$ limit, the singlet decouples from the theory, and in the Higgs decoupling limit the only light scalar is the SM-like Higgs. Furthermore, within this class of models $v_{s}\to0$, and the most dominant contribution to the cross section and asymmetry arises from the coupling of the SM-like Higgs to the up and top quarks, which is proportional to \begin{equation} \frac{\left(\Lambda_{13,31}\right)v\sin\beta}{M}O^{H}_{1,S}. \end{equation} The value of $O^{H}_{1,S}$ is very small, since the SM-like Higgs has very little singlet component, hence the additional suppression. In the analysis, we fix the $\mu$ parameter to be consistent with searches for supersymmetric particles carried out at LEP~\cite{PDG}. Our main results are shown in Figure~\ref{fig:largemus1}. On the left we have plotted the total $t\bar{t}$ cross section and on the right the top forward-backward asymmetry as a function of $\Lambda_{31}$, while fixing the value of $\Lambda_{13}=12.5$. For this figure we have chosen $\mu_{s}=1.5$ TeV, $\mu=500$ GeV, $A_{\lambda}=-1$ TeV and $B_{\mu}=(500~\text{GeV})^{2}$, which yield a value of $v_{s}=0.5$ GeV. We can see from the figure that even for rather large values of both $\Lambda_{13}$ and $\Lambda_{31}$, the interference contribution to the cross section always dominates. This is due to the additional suppression in the coupling of the SM-like Higgs to the up and top quarks, see Equation (5.2). Furthermore, an asymmetry above $13\%$, which is within one sigma of the experimental result, can only be obtained when maximizing both $\Lambda_{13}$ and $\Lambda_{31}$. However, the corresponding cross section is close to being outside the three sigma region. Models with large $\mu_{s}$, with only a relatively light scalar with SM-like couplings, present a large amount of tension, in the sense that in order to minimize the negative interference contributions to the cross section one must sacrifice obtaining a large asymmetry. \begin{figure}[t] \centering \includegraphics[width=2.8in]{AFBvsNP_largemus.eps}~~~ \includegraphics[width=2.8in]{CSvsNP_largemus.eps} \caption{\small On the left, the forward-backward top asymmetry at the parton level as a function of $\Lambda_{31}$ for the large $\mu_{s}$ scenario. The orange line indicates a one $\sigma$ deviation from a combination of the independent CDF and D$\O$ asymmetry measurements~\cite{Aaltonen:2011kc,Abazov:2011rq}. On the right, the $t\bar{t}$ production cross section as a function of $\Lambda_{31}$. The green line corresponds to three $\sigma$ deviations away from the experimental cross section~\cite{ttbarCSCDF}.
} \label{fig:largemus1} \end{figure} \section{Conclusions} We have incorporated dimension-five operators into the S-MSSM that couple first and third generation quarks to scalars. We have studied their contributions to the $t\bar{t}$ production cross section and the forward-backward asymmetry. We have studied the two limiting cases of the S-MSSM that provide a natural solution to the little hierarchy problem and analyzed the effects that the distinct spectra have on mediating the new contributions to the $t\bar{t}$ cross section and asymmetry. We found that in the small $\mu_{s}$ limit we are able to generate an inclusive asymmetry consistent with the combined CDF and D$\O$ result~\cite{Aaltonen:2011kc,Abazov:2011rq} for values of $\Lambda_{31}<4\pi$ while being consistent with the experimental $t\bar{t}$ production cross section~\cite{ttbarCSCDF}. The relevant couplings in the Lagrangian are $O\left(1\right)$, and there exist regions where our effective theory approach still holds. Of course, a more careful analysis incorporating higher dimensional operators would be interesting, and it is left for future work. In essence, this limiting case of the S-MSSM is consistent with what was found in an extension of the SM using light weak doublet scalars~\cite{Blum:2011fa}. The main difference is that our model is supersymmetric, with a spectrum fixed by the Higgs sector of the S-MSSM. In addition, flavor constraints mediated by charged scalars are less stringent, given that charged Higgses are decoupled in this kind of model~\cite{CDOP1,CDP2}. In the large $\mu_{s}$ limit we found that in order to minimize the interference contributions to the cross section, we had to sacrifice the production of a large asymmetry. The best case scenario was when both $\Lambda_{13}$ and $\Lambda_{31}$ were rather large. This region of our model generates a value of the cross section close to lying outside three sigma from the experimental value. Furthermore, in this region of parameter space our couplings are so large that one is now within a non-perturbative regime. Because of this, and the tension that arises in generating a large enough asymmetry while being consistent with the experimental $t\bar{t}$ production cross section, the case with large $\mu_{s}$ is not as promising as a new physics scenario. To conclude, we have shown that the S-MSSM with additional dimension-five operators coupling first and third generation quarks to scalars provides an explanation for the anomaly in the Tevatron inclusive forward-backward top asymmetry within a supersymmetric scenario. We found that the small $\mu_{s}$ limit of the S-MSSM appears to be the most promising. \section*{Acknowledgements} A very special thanks to J. de Blas for useful discussions. I would also like to thank W. Altmannshofer, A. Delgado, S. Gori and C. Kolda for providing very essential and important feedback regarding all aspects of this work. This work was supported in part by the Fermilab Fellowship in Theoretical Physics. Fermilab is operated by Fermi Research Alliance, LLC, under Contract DE-AC02-07-CH11359 with the US Department of Energy.
\section{Introduction} It is well known that in NIS (Normal metal - Insulator - Superconductor) tunnel junctions the flow of electric current carried by quasiparticles is accompanied by a heat transfer from the normal metal into the superconductor \cite{Nahum, Giazotto}. This happens due to the presence of the superconducting energy gap $\Delta$, which induces selective tunneling of high-energy quasiparticles out of the normal metal. In a tunneling event only quasiparticles with energy $E > \Delta$ (compared to the Fermi level) can tunnel out of the normal metal. They generate the single-particle current and the associated heat current. At lower energies $E < \Delta$ charge transfer occurs via the mechanism of Andreev reflection \cite{Andreev, S-J}. The Andreev current $I_A$ generates a Joule heating $I_A V$ that is deposited in the normal metal electrode, but this effect dominates over single-particle cooling only at very low temperatures \cite{Sukumar1}. For the temperatures considered in this paper Andreev Joule heating is negligible. It has been shown that for voltage-biased NIS tunnel junctions the heat current out of the normal metal (also referred to as the ``cooling power'') is positive when $eV \lesssim \Delta$, i.e., it cools the normal metal \cite{LPA}. For $eV \gtrsim \Delta$ the current through the junction increases strongly, resulting in Joule heating $IV$ and making the heat current negative. The cooling power is maximal near $eV \approx \Delta$. This effect enables the refrigeration of electrons in the normal metal. A microrefrigerator based on an NIS tunnel junction was first fabricated by Na\-hum {\it et al.} \cite{Nahum}. They used a single NIS tunnel junction in order to cool a small normal metal strip. Later Leivo {\it et al.} \cite{LPA} noticed that the cooling power of an NIS junction is an even function of the applied voltage, and fabricated a refrigerator with two NIS tunnel junctions arranged in a symmetric configuration (SINIS), which gives a reduction of the electronic temperature from 300 mK to about 100 mK. This significant temperature reduction opens the prospect of using NIS junctions for on-chip cooling of nano-sized systems like high-sensitivity detectors and quantum devices \cite{on-chip}. To enhance the performance of the NIS refrigerator it is important to understand the role of possible factors that may facilitate or degrade the cooling effect. One such factor is the inelastic relaxation of injected quasiparticles in the superconducting lead. In non-reservoir geometries the quasiparticles injected into the superconducting lead generate a nonequilibrium distribution. In a diffusive superconductor, backscattering on impurities and subsequent backtunneling into the normal metal may considerably reduce the net heat current out of the normal metal electrode. The purpose of this work is to investigate the importance of the nonequilibrium quasiparticle distribution and consider the effect of inelastic relaxation in the superconductor. Possible mechanisms of inelastic relaxation could be the usual processes, such as electron-electron or electron-phonon interactions, but also the presence of so-called ``quasiparticle traps'', i.e., additional normal metal electrodes connected to the superconductor, which remove excited nonequilibrium quasiparticles from the superconducting lead \cite{traps, GV}. In this paper we do not specify the mechanisms of inelastic relaxation and adopt the relaxation-time approximation.
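For orientation, the scale of the effect can be estimated from the standard low-temperature result for the optimal cooling power of a single ideal NIS junction, $P_{max} \approx 0.59\,(\Delta^2/e^2R)(k_B T_e/\Delta)^{3/2}$ (see, e.g., \cite{Giazotto}); here is a small numerical sketch with illustrative aluminum-like parameters (our own choice of numbers):
\begin{verbatim}
# Sketch: optimal cooling power of a single NIS junction,
# P_max ~ 0.59 (Delta^2/(e^2 R)) (kT/Delta)^{3/2}, Al-like parameters.
e = 1.602e-19               # C
Delta = 200e-6 * e          # Al gap ~ 200 ueV, in J
R = 1.0e3                   # junction resistance, Ohm (illustrative)
kT = 0.3 * Delta / 1.76     # T = 0.3 T_c, using Delta = 1.76 k_B T_c

P_max = 0.59 * Delta**2 / (e**2 * R) * (kT / Delta)**1.5
print(f"P_max ~ {P_max * 1e12:.1f} pW")   # ~1.7 pW
\end{verbatim}
Nonequilibrium effects in the superconducting lead, analyzed below, reduce this ideal figure.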
The effect of nonequilibrium quasiparticle injection in NIS junctions was also discussed in Ref.~\cite{Sukumar2}, where the authors proposed a phenomenological model of quasiparticle diffusion in the superconducting lead. The paper is organized as follows. In the next section, we formulate the theoretical model and basic equations. In sections~\ref{Current} and \ref{Heat} we solve the kinetic equations and apply the solutions to the calculation of the electric and heat currents, respectively. In Sec.~\ref{Distrib} we calculate the quasiparticle distribution functions in the superconducting lead. Finally, we summarize the results in Sec.~\ref{Conclusion}. \section{Model and basic equations}\label{Model} The model of the N-I-S$^\prime$-S junction under consideration is depicted in Fig.~\ref{model} and consists of a voltage-biased normal metal reservoir (N), an insulator layer (I), a superconducting layer (S$^\prime$) of thickness $L$ and a superconducting reservoir (S) along the $x$ direction. The S and S$^\prime$ leads are made from the same superconducting material. We assume the S$^\prime$-S interface to be fully transparent. We will consider the diffusive limit, in which the elastic scattering length $\ell$ is much smaller than the coherence length $\xi_0 = \sqrt{\mathcal{D}/2\Delta}$, where $\mathcal{D}$ is the diffusion coefficient (we assume $\hbar = k_B = 1$). The length $L$ of the S$^\prime$ lead is assumed to be much larger than $\xi_0$. The problem of current flow through diffusive N-I-S$^\prime$-S structures with a short S$^\prime$ superconducting lead was solved in Ref.~\cite{Zaitsev}. \begin{figure}[tb] \epsfxsize=7cm\epsffile{model.eps} \caption{Geometry of the considered system.} \label{model} \end{figure} Under the conditions described above, the calculation of the electric and heat currents requires the solution of the one-dimensional Keldysh-Usadel equations \cite{LOnoneq} (see also the review \cite{Belzig}) for the $4 \times 4$ matrix Keldysh-Green function $\check{G}(x, E)$ in the S$^\prime$ lead, \begin{align} &\left[\check{\sigma}_z E + \check{\Delta}, \; \check{G}\right] = i \mathcal{D} \partial \check{J}, \quad \check{J} = \check{G}\partial \check{G}, \quad \check{G}^2 = \check{1},\label{GK} \\ \label{G} &\check{G} = \begin{pmatrix} \hat{g}^R & \hat{G}^K \\ 0 & \hat{g}^A \end{pmatrix}, \quad \hat{G}^K = \hat{g}^R \hat{f} - \hat{f}\hat{g}^A. \end{align} Here \begin{equation}\nonumber \check{\sigma}_z = \begin{pmatrix} \sigma_z & 0 \\ 0 & \sigma_z \end{pmatrix}, \quad \check{\Delta} = \begin{pmatrix} \hat{\Delta} & 0 \\ 0 & \hat{\Delta} \end{pmatrix}, \end{equation} $\hat{g}^{R,A}$ are the $2 \times 2$ Nambu matrix retarded and advanced Green's functions, $\hat{f} = f_+ + \sigma_z f_-$ is the matrix distribution function (we use ``check'' for $4 \times 4$ and ``hat'' for $2 \times 2$ matrices), $\sigma_{y,z}$ are the Pauli matrices in Nambu space, $\hat{\Delta} = e^{i \sigma_z \chi} i \sigma_y \Delta$, $\Delta$ and $\chi$ are the modulus and the phase of the pair potential, and $\partial \equiv \partial/\partial x$. In \Eqs{GK} we neglect the inelastic collision term, which will be taken into account later. Since we are interested in small voltages $eV \lesssim \Delta$ (when the cooling of the normal metal occurs), we can neglect the effect of the suppression of the superconducting gap due to heating in the superconductor.
The boundary conditions for the function $\check{G}$ and the matrix current $\check{J}$ at the left normal ($x=-0$) and the right superconducting ($x=+0$) sides of the tunnel barrier are given by the relation \cite{KL} \begin{equation} \left( \sigma_N/g_N \right) \check{J}_{-0} = \check{J}_{+0} = (W/\xi_0) \bigl[\check{G}_{-0}, \check{G}_{+0}\bigr],\label{KL} \end{equation} where $\sigma_N$ and $g_N$ are the normal conductivities of the N and S$^\prime$ leads per unit length, respectively. In \Eq{KL}, the transparency parameter $W$ is defined as \begin{equation}\label{W} W = R(\xi_0)/2R = (3\xi_0/4\ell)\Gamma \gg \Gamma, \end{equation} where $R(\xi_0) = \xi_0/g_N$ is the normal resistance of the S$^\prime$ lead per length $\xi_0$ and $R$ is the junction resistance. It has been shown in Refs.~\cite{Kupriyanov, Bezuglyi} that this quantity, rather than the barrier transparency $\Gamma$, plays the role of a transparency parameter in the theory of diffusive tunnel junctions (see also the discussion in Ref.~\cite{paper_1}). In this paper, we will consider the limit $W \ll 1$, which corresponds to the conventional tunneling concept. At the right transparent S$^\prime$-S interface all functions and their first derivatives are to be continuous. We neglect the possible small resistance at this interface \cite{Nikolic}, since it is much smaller than the resistance of the S$^\prime$ lead in the normal state. The electric and heat currents are related to the Keldysh component of the matrix current $\check{J}$ respectively as \cite{LOnoneq, BA, Heikkila1, Vinokur, Golubov} \begin{align}\label{I} I &= - \frac{g_N}{4e} \int_0^{\infty} \tr \sigma_z \hat{J}^K dE, \\ \label{Q} P &= IV + \frac{g_N}{4 e^2} \int_0^{\infty} E \tr \hat{J}^K dE, \end{align} and thus they can be expressed through the boundary value $\check{J}_{+0}$ in \Eq{KL}, \begin{align}\label{Ib} I &= - \frac{1}{8 e R} \int_0^{\infty} \tr \sigma_z \bigl[\check{G}_{-0}, \check{G}_{+0}\bigr]^K dE, \\ \label{Qb} P &= IV + \frac{1}{8 e^2 R} \int_0^{\infty} E \tr \bigl[\check{G}_{-0}, \check{G}_{+0}\bigr]^K dE. \end{align} For further consideration it is convenient to write the Green function in the following standard way, \begin{equation}\label{g} \hat{g} = \sigma_z u + i \sigma_y v. \end{equation} Here we neglect the phase of the anomalous Green function, since it gives corrections of the next order in the diffusive barrier transparency parameter $W$. The functions $u$ and $v$ determine the spectral characteristics of the system. In particular, the quantity $N(E) = \left( u^R - u^A \right)/2$ is the density of states (DOS) normalized to its value $N_F$ in the normal state. In what follows, we will express the advanced Green functions through the retarded ones, $(u,v)^{A} = - (u,v)^{R\ast}$, using the general relation $\hat{g}^A = -\sigma_z \hat{g}^{R\dagger}\sigma_z$, and omit the superscript $R$, considering retarded Green's functions only. In this paper we will calculate the single-particle current; therefore we consider quasiparticle energies $E > \Delta$ from now on. We neglect the proximity effect, since proximity corrections to the spectral functions are of the order of $W$. Therefore, we use the BCS density of states in both the S$^\prime$ and S layers.
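Since the BCS density of states controls all the kinetic coefficients below, here is a minimal numerical sketch of $N(E)=\mathrm{Re}\,[(E+i0)/\sqrt{(E+i0)^2-\Delta^2}]$ (our own illustration; the small imaginary part $\gamma$ is purely a numerical regularization of the square-root singularity, not part of the model):
\begin{verbatim}
# Sketch: normalized BCS density of states N(E), energies in units of Delta.
import numpy as np

Delta, gamma = 1.0, 1e-9
E = np.array([0.5, 0.99, 1.01, 1.1, 2.0, 5.0])
Ez = E + 1j * gamma
N = np.real(Ez / np.sqrt(Ez**2 - Delta**2))

for e_, n_ in zip(E, N):
    print(f"E = {e_:4.2f} Delta   N = {n_:10.3f}")
# N = 0 inside the gap, diverges as E -> Delta from above, -> 1 for E >> Delta.
\end{verbatim}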
In the left voltage-biased normal metal reservoir (N) we have \begin{equation}\label{g_N} \hat{g}_N = \sigma_z, \quad f_{\pm N} = \frac{1}{2}\left[ \tanh\left(\frac{E+eV}{2T_N}\right) \pm \tanh\left(\frac{E-eV}{2T_N}\right) \right], \end{equation} where $T_N$ is the temperature of the normal metal reservoir. We assume the voltage $V$ to be directly applied to the tunnel barrier and neglect the small electric field ($\sim eVW$) penetrating the S$^\prime$ superconducting lead. In the right superconducting reservoir (S) the Green's and distribution functions are given by the relations \begin{align}\label{g_S} \hat{g}_S &= \sigma_z u_S + i \sigma_y v_S, \quad (u_S, v_S) = \frac{(E,\Delta)}{\sqrt{(E+i0)^2 - \Delta^2}}, \\ \label{f_S} f_{+S}\bigl|_{x>L} &\equiv f_{eq} = \tanh\left( \frac{E}{2T_S} \right), \quad f_{-S}\bigl|_{x>L} = 0, \end{align} where $T_S$ is the temperature of the S reservoir. In the S$^\prime$ layer, the Green's function is given by \Eq{g_S}, and the distribution functions $f_{\pm S}(x,E)$ should be found from the kinetic equations, which follow from the Keldysh component of \Eqs{GK} and for $E > 0$ have the simple form $\partial^2 f_{\pm S} = 0$ within our approximations. These equations have no bounded solutions: both distribution functions $f_{\pm S}(x,E)$ grow linearly with $x$ far from the junction. Such growth is limited in practice by inelastic collisions, which provide the spatial relaxation of $f_{\pm S}(x,E)$ to the equilibrium values at $x \sim l_\pm \gg \xi_0$, where $l_\pm = \sqrt{\mathcal{D} \tau_\pm}$ are the inelastic scattering lengths and $\tau_\pm$ are the inelastic scattering times. To simplify the problem, instead of including complicated inelastic collision integrals, we add collision terms in the relaxation time approximation to the kinetic equations, \begin{align}\label{f_+} l_+^2 \partial^2 f_{+S} &= (f_{+S} - f_{eq}) N(E), \\ \label{f_-} l_-^2 \partial^2 f_{-S} &= f_{-S} / N(E), \end{align} where $N(E) = \re (u_S)$ is the BCS DOS. We should supplement \Eqs{f_+}-\eqref{f_-} with proper boundary conditions at both the left and right interfaces. At the tunnel barrier ($x = 0$) they follow from the Keldysh component of \Eq{KL}, \begin{align}\label{KLf_+} \partial_x f_{+S}\bigl|_{x=0} &= \frac{N(E)}{g_N R} \left( f_{+S0} - f_{+N} \right), \\ \label{KLf_-} \partial_x f_{-S}\bigl|_{x=0} &= \frac{1}{g_N R N(E)} \left( f_{-S0} - f_{-N} \right), \end{align} where $f_{\pm S0}(E)$ are the boundary values of $f_{\pm S}(x,E)$ at $x = 0$. At the right transparent interface the distribution functions become the equilibrium functions of the right S reservoir, \begin{equation}\label{L} f_{+S}\bigl|_{x=L} = f_{eq}, \quad f_{-S}\bigl|_{x=L} = 0. \end{equation} \section{Single particle current}\label{Current} The equation for the electric current follows from \Eqs{Ib}, \eqref{KLf_-} and for the single-particle current reads \begin{equation}\label{I1} I = \frac{1}{e R} \int_{\Delta}^{\infty} N(E) \left( f_{-N} - f_{-S0} \right) dE. \end{equation} To obtain $f_{-S0}$ we should solve the boundary problem \eqref{f_-}, \eqref{KLf_-} and \eqref{L}. Doing this we get the following result, \begin{align}\label{f_-S0} f_{-S0} &= f_{-N} \frac{\alpha_-}{\alpha_- + \sqrt{N(E)}\coth(\beta_-/\sqrt{N(E)})}, \\ \label{alpha_minus} \alpha_- &= 2W l_-/\xi_0 = R(l_-)/R, \quad \beta_- = L/l_-, \end{align} where $R(l_-) = l_-/g_N$ is the normal resistance of the superconducting lead per length $l_-$.
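The magnitude of the nonequilibrium correction can be read off directly from \Eq{f_-S0}: the ratio $f_{-S0}/f_{-N} = \alpha_-/[\alpha_- + \sqrt{N}\coth(\beta_-/\sqrt{N})]$ measures the fraction of the injected charge imbalance that remains at the barrier and reduces the current. A small numerical sketch (our own, with parameter values of the order of those used in the figures below):
\begin{verbatim}
# Sketch: boundary ratio f_{-S0}/f_{-N} from Eq. (f_-S0); energies in
# units of Delta, lengths in units of xi_0.
import numpy as np

W = 0.01                      # transparency parameter
L, l_minus = 50.0, 20.0       # S' length and relaxation length
alpha = 2.0 * W * l_minus     # alpha_- = R(l_-)/R
beta = L / l_minus            # beta_-  = L/l_-

E = np.array([1.05, 1.2, 2.0, 5.0])
N = np.real((E + 1e-9j) / np.sqrt((E + 1e-9j)**2 - 1.0))

ratio = alpha / (alpha + np.sqrt(N) / np.tanh(beta / np.sqrt(N)))
for e_, r_ in zip(E, ratio):
    print(f"E = {e_:4.2f} Delta   f_-S0/f_-N = {r_:.3f}")
# The ratio is smallest right at the gap edge (where N diverges) and
# approaches ~0.3 at high energies for these parameters.
\end{verbatim}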
From \Eq{f_-S0} we see that the nonequilibrium correction to the current is of the order of $\alpha_- \gg W$, which justifies our neglect of terms of the order of $W$ in the kinetic equation. Substituting \Eq{f_-S0} into \Eq{I1} we finally obtain, \begin{equation}\label{I_in} I = \frac{1}{e R} \int_{\Delta}^{\infty} N(E) f_{-N} \frac{\coth(\beta_-/\sqrt{N(E)})}{\alpha_-/\sqrt{N(E)} + \coth(\beta_-/\sqrt{N(E)})} dE. \end{equation} For $L = 0$ or $l_- = 0$, the boundary value $f_{-S0}$ in \Eq{I1} is equal to zero, and \Eq{I_in} reduces to the well-known equation for the single-particle current between N and S reservoirs connected through an insulating layer, \begin{equation}\label{I_eq} I = \frac{1}{eR} \int_{-\infty}^{+\infty} N(E - eV) \left[ n_F(E-eV) - n_F(E) \right] dE, \end{equation} where $n_F(E) =[1 + \exp(E/T)]^{-1}$ is the Fermi function. \begin{figure}[tb] \epsfxsize=8.5cm\epsffile{curr_fin.eps} \caption{(Color online) IV characteristic of the NIS$^\prime$S junction. $T_N$ = $T_S$ = 0.3 $T_C$, $W$ = 0.01. Solid black line is calculated by use of \Eq{I_eq}. Other lines are calculated by use of \Eq{I_in}, $L$ = 50$\xi_0$: $l_- = 10\xi_0$ (red line); $l_- = 20 \xi_0$ (green line); $l_- = 50 \xi_0$ (blue line); $l_- = 100 \xi_0$ (dashed black line).} \label{curr_2} \end{figure} In the absence of inelastic relaxation in the S$^\prime$ lead ($\beta_- \ll 1$), we can approximate \Eq{I_in} by \begin{equation}\label{Iapp} I = \frac{1}{e R} \int_{\Delta}^{\infty} f_{-N} \frac{N^2(E)}{N(E) + \alpha_L} dE, \end{equation} where $\alpha_L = 2 W L/\xi_0 = R(L)/R$ and $R(L) = L/g_N$ is the resistance of the S$^\prime$ lead in the normal state. At zero temperature and for $eV \gg \Delta$, the current given by \Eq{Iapp} can be calculated to first order in the small parameter $\Delta/eV \ll 1$ as follows, \begin{equation}\label{assym} I = \frac{1}{eR}\int_\Delta^{eV} \frac{N^2(E) \; dE}{N(E) +\alpha_L} = \frac{\Delta}{eR}\int_1^{eV/\Delta} \frac{x^2 \; dx}{x\sqrt{x^2 - 1} + \alpha_L (x^2 - 1)} \approx \frac{V}{R_{tot}} + I_{exc}, \end{equation} where $R_{tot} = R + R(L)$ is the net normal resistance of the junction and the S$^\prime$ lead, and $I_{exc}$ is the excess current given by the relation, \begin{equation} I_{exc} = \frac{\Delta}{eR} \left[ \frac{\alpha_L}{1 - \alpha_L^2} - \frac{2 \alpha_L^2}{\left( 1 - \alpha_L^2 \right)^{3/2}} \arctan\left( \sqrt{\frac{1 - \alpha_L}{1 + \alpha_L}} \right)\right]. \end{equation} Thus the IV characteristic \Eq{Iapp} exhibits a voltage-independent excess current at large voltage, which is a manifestation of the nonequilibrium state in the S$^\prime$ lead. Here we should mention that the total excess current measured in experiment is known to consist of two contributions: one coming from the single-particle current at large voltage, just calculated above, and the other coming from the two-particle (Andreev) current. In this paper we do not calculate the latter contribution, since it is of the order of $W \ll \alpha_L$. In Fig.~\ref{curr_2} we plot the IV characteristic of the NIS$^\prime$S junction for different values of the $l_-$ parameter. $I(V)$ given by \Eqs{I_in}, \eqref{I_eq} is an odd function of voltage, and we plot it only for positive voltages. We fix $T_N = T_S = 0.3 T_C$, where $T_C$ is the critical temperature of the superconductor. For aluminum, $T_N = T_S \approx 360$ mK. We see that with the growth of the charge imbalance relaxation length $l_-$ the electric current decreases; this behavior can be reproduced with the short numerical sketch below.
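The curves of Fig.~\ref{curr_2} follow from a straightforward quadrature of \Eq{I_in}. A minimal sketch of this, with $\Delta = 1$ (so that $T = 0.3\,T_C \approx 0.17\,\Delta$ via the BCS relation $\Delta \approx 1.764\,T_C$) and the current measured in units of $\Delta/eR$, reads:
\begin{verbatim}
import numpy as np

def current_Iin(V, T, alpha_m, beta_m, n=4000, Emax=40.0):
    # Single-particle current, Eq. (I_in), by trapezoidal quadrature;
    # V and T are in units of Delta, the result in units of Delta/(e R).
    E = np.linspace(1.0 + 1e-6, Emax, n)
    Ec = E + 1e-6j
    N = np.real(Ec/np.sqrt(Ec**2 - 1.0))                    # BCS DOS
    f_mN = 0.5*(np.tanh((E + V)/(2*T)) - np.tanh((E - V)/(2*T)))
    s = np.sqrt(N)
    coth = 1.0/np.tanh(beta_m/s)
    g = N*f_mN*coth/(alpha_m/s + coth)
    return np.sum(0.5*(g[1:] + g[:-1])*np.diff(E))

# Red curve of Fig. (curr_2): W = 0.01, L = 50 xi_0, l_- = 10 xi_0
# => alpha_- = 2*W*l_-/xi_0 = 0.2 and beta_- = L/l_- = 5.
V_grid = np.linspace(0.0, 3.0, 31)
I_grid = [current_Iin(V, T=0.17, alpha_m=0.2, beta_m=5.0) for V in V_grid]
\end{verbatim}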
When $l_- > L$, the length $L$ of the S$^\prime$ lead plays the role of the characteristic relaxation length, and the current is almost independent of $l_-$. \section{Cooling power}\label{Heat} The equation for the cooling power follows from \Eqs{Qb}, \eqref{KLf_+} and reads \begin{equation}\label{Q1} P = - IV - \frac{1}{e^2 R} \int_{\Delta}^{\infty} E N(E) \left( f_{+N} - f_{+S0} \right) dE, \end{equation} where $I$ is given by \Eq{I_in}. To obtain $f_{+S0}$, we solve the boundary-value problem \eqref{f_+}, \eqref{KLf_+} and \eqref{L}, which yields \begin{align}\label{f_+S0} f_{+S0} &= \frac{f_{+N} \alpha_+ \sqrt{N(E)} + f_{eq} \coth(\beta_+ \sqrt{N(E)})}{\alpha_+ \sqrt{N(E)} + \coth(\beta_+\sqrt{N(E)})}, \\ \label{alpha_plus} \alpha_+ &= 2W l_+/\xi_0 = R(l_+)/R \gg W, \quad \beta_+ = L/l_+, \end{align} where $R(l_+) = l_+/g_N$ is the normal resistance of the superconducting lead per length $l_+$. Substituting \Eq{f_+S0} into \Eq{Q1} we finally obtain, \begin{equation}\label{Q_in} P = - IV - \frac{1}{e^2 R} \int_{\Delta}^{\infty} E N(E) \left( f_{+N} - f_{eq} \right) \frac{\coth(\beta_+\sqrt{N(E)}) dE}{\alpha_+\sqrt{N(E)} + \coth(\beta_+\sqrt{N(E)})}. \end{equation} For $L = 0$ or $l_\pm = 0$, the boundary value $f_{+S0}$ in \Eq{Q1} is equal to $f_{eq}$, and \Eq{Q_in} reduces to the well-known equation for the heat current between N and S reservoirs connected through an insulating layer \cite{LPA}, \begin{equation}\label{Q_eq} P = \frac{1}{e^2R} \int_{-\infty}^{+\infty} N(E) (E - eV) \left[ n_F(E-eV) - n_F(E) \right] dE. \end{equation} In the absence of inelastic relaxation in the S$^\prime$ lead ($\beta_\pm \ll 1$), we can approximate \Eq{Q_in} by \begin{equation}\label{Papp} P = -IV - \frac{1}{e^2 R} \int_{\Delta}^{\infty} \left( f_{+N} - f_{eq} \right) \frac{E N(E) dE}{1 + \alpha_L N(E)}, \end{equation} where the current $I$ is given by \Eq{Iapp}. This equation corresponds to the case when the length $L$ of the S$^\prime$ lead is smaller than the inelastic relaxation length and all relaxation occurs only in the S reservoir. \begin{figure}[tb] \epsfxsize=8.5cm\epsffile{P_1.eps} \caption{(Color online) P(V) dependence of the NIS$^\prime$S junction. $T_N$ = $T_S$ = 0.3 $T_C$, $W$ = 0.01. Solid black line is calculated by use of \Eq{Q_eq}. Other lines are calculated by use of \Eq{Q_in}, $L$ = 50$\xi_0$: $l_- = l_+ = 10\xi_0$ (red line); $l_- = l_+ = 20 \xi_0$ (green line); $l_- = l_+ = 50 \xi_0$ (blue line); $l_- = l_+ = 100 \xi_0$ (dashed black line).} \label{heat_1} \end{figure} \begin{figure}[tb] \epsfxsize=8.5cm\epsffile{P_2.eps} \caption{(Color online) P(V) dependence of the NIS$^\prime$S junction. $T_N$ = $T_S$ = 0.3 $T_C$, $W$ = 0.01. Solid black line is calculated by use of \Eq{Q_eq}. Other lines are calculated by use of \Eq{Q_in}, $L$ = 50$\xi_0$, $l_+ = 10\xi_0$: $l_- = 10\xi_0$ (red line); $l_- = 20 \xi_0$ (green line); $l_- = 50 \xi_0$ (blue line); $l_- = 100 \xi_0$ (dashed black line).} \label{heat_2} \end{figure} \begin{figure}[tb] \epsfxsize=8.5cm\epsffile{P_3.eps} \caption{(Color online) P(V) dependence of the NIS$^\prime$S junction. $T_N$ = $T_S$ = 0.3 $T_C$, $W$ = 0.001. Solid black line is calculated by use of \Eq{Q_eq}.
Other lines are calculated by use of \Eq{Q_in}, $L$ = 50$\xi_0$: $l_- = l_+ = 10\xi_0$ (red line); $l_- = l_+ = 20 \xi_0$ (green line); $l_- = l_+ = 50 \xi_0$ (blue line); $l_- = l_+ = 100 \xi_0$ (dashed black line).} \label{heat_3} \end{figure} In Figs.~\ref{heat_1}--\ref{heat_3} we plot the $P(V)$ dependence of the NIS$^\prime$S junction for different values of the $l_\pm$ parameters. $P(V)$ given by \Eqs{Q_in}, \eqref{Q_eq} is an even function of voltage, and we plot it only for positive voltages. From Fig.~\ref{heat_1} it can be seen that the cooling power decreases with the growth of the relaxation lengths $l_\pm$. Moreover, the maximum cooling power shifts to smaller voltages than in the equilibrium case. We ascribe this suppression to the effect of backscattering on impurities and tunneling of nonequilibrium quasiparticles back to the normal metal reservoir. We can see that for large values of $l_\pm$ the cooling power is negative for all voltages. In Fig.~\ref{heat_2} we plot $P(V)$ for different ratios $l_+/l_-$. For a fixed length $l_+$, we vary the charge imbalance relaxation length $l_-$. It can be seen that the cooling power increases with the growth of $l_-$. This happens because of the decrease of the term $IV$ in \Eq{Q_in} due to the suppression of the electric current (see Sec.~\ref{Current}). Finally, we want to stress the role of the transparency parameter $W$. In Fig.~\ref{heat_3} we plot the $P(V)$ dependence for a different value of the tunneling parameter, $W = 10^{-3}$, which is one order of magnitude smaller than $W$ in Figs.~\ref{heat_1} and \ref{heat_2}. Here we again see the decrease of the cooling power with the growth of $l_\pm$, but the effect is smaller than in Fig.~\ref{heat_1}. This is expected, since the amplitudes of the nonequilibrium distribution functions $f_{\pm S0}$ scale with the $W$ parameter. For very strong tunnel barriers, the nonequilibrium effect is therefore negligible. In order to estimate the characteristic parameters of the junctions for the transparency parameters $W = 10^{-2}$ and $W = 10^{-3}$ used in our numerical calculations, we assume the junction area to be $200 \times 200$ nm$^2$ and the thickness of the leads, as well as the mean free path, to be 50 nm. For Al leads, this results in the sheet resistance $R_{\Box} \approx 0.3$ $\Omega$ and $R(\xi_0)\approx 0.45$ $\Omega$ at $\xi_0 \approx 300$ nm. Then, according to \Eq{W}, the tunneling probability and the junction resistance approach the values $\Gamma \approx 2 \times 10^{-3}$, $R \approx 22.5$ $\Omega$ for $W = 10^{-2}$, and $\Gamma \approx 2 \times 10^{-4}$, $R \approx 225$ $\Omega$ for $W = 10^{-3}$. \section{Nonequilibrium quasiparticle distribution in S$^\prime$}\label{Distrib} \begin{figure}[tb] \epsfxsize=11cm\epsffile{DMP1.eps} \vspace{50mm} \caption{(Color online) Distribution functions $f_{-S}(E,x)$ (left plot) and $\delta f_{+S}(E,x) = f_{eq} - f_{+S}$ (right plot). Here $T_N$ = $T_S$ = 0.3 $T_C$, $eV = 0.8 \Delta$, $L = 50 \xi_0$, $l_- = l_+ = 10 \xi_0$, $W$ = 0.01.} \label{DMP} \end{figure} Solving \Eqs{f_-}, \eqref{KLf_-} and \eqref{L}, we can obtain the function $f_{-S}(x,E)$ in the S$^\prime$ lead. It reads \begin{equation}\label{f_-(x)} f_{-S} = \frac{f_{-N}\alpha_-}{\alpha_- + \sqrt{N(E)}\coth\left(\beta_-/\sqrt{N(E)}\right)} \frac{\sinh\left[ \left( \beta_- - x/l_- \right)/\sqrt{N(E)} \right]}{\sinh\left( \beta_-/\sqrt{N(E)} \right)}.
\end{equation} This equation describes the relaxation of the imbalance between the electron and hole branches in the S$^\prime$ lead (branch mixing). From \Eq{f_-(x)} we see the exponential decay of the imbalance function $f_-$ at the distance $x \sim l_-\sqrt{N(E)}$ from the tunnel barrier. At $E \rightarrow \Delta$ the decay length diverges. Similarly, from \Eqs{f_+}, \eqref{KLf_+} and \eqref{L} we obtain the function $f_{+S}(x,E)$ in the S$^\prime$ lead, \begin{equation} f_{+S} = f_{eq} - \frac{\left( f_{eq} - f_{+N} \right)\alpha_+ \sqrt{N(E)}}{\alpha_+ \sqrt{N(E)} + \coth\left( \beta_+ \sqrt{N(E)} \right)} \frac{\sinh\left[ \left( \beta_+ - x/l_+ \right)\sqrt{N(E)} \right]}{\sinh\left(\beta_+\sqrt{N(E)}\right)}. \end{equation} From this equation we see the exponential decay of the function $\delta f_{+S} = f_{eq} - f_{+S}$ at the distance $x \sim l_+/\sqrt{N(E)}$ from the tunnel barrier. We note that the charge imbalance function $f_-$ can be measured in experiment \cite{distrib}. In the adopted geometry, Fig.~\ref{model}, one can probe the $f_{-S}$ function at different locations in S$^\prime$ by small normal metal electrodes N$_i^\prime$, connected to S$^\prime$ through insulating barriers at different points $x_i$. Importantly, the area of these probing N$_i^\prime$IS$^\prime$ junctions should be much smaller than the area of the initial NIS$^\prime$ barrier. Driving a current through a probing junction, one can measure the corresponding IV curve and extract the function $f_{-S}$ in the S$^\prime$ lead. In Fig.~\ref{DMP} we plot both the $f_{-S}(E,x)$ and $\delta f_{+S}(E,x)$ functions. We can see that the nonequilibrium distributions are nonzero only at energies rather close to $\Delta$. For large energies $E \gg \Delta$, both functions approach zero. In this paper we used a simple relaxation-time approximation. At zero temperature, it gives a qualitatively correct estimate of the nonequilibrium properties of the considered systems (see, for example, Ref.~\cite{Golubov2}). We note that this approximation leads to an incorrect expression for the charge imbalance relaxation length near the critical temperature \cite{Volkov}. At intermediate temperatures $0 < T < T_C$, this approximation gives a result correct up to a numerical factor. Since in this paper we consider the temperatures $T_N = T_S$ equal to $0.3 T_C$, which is much smaller than $T_C$, our description is qualitatively correct. However, we should stress that for higher temperatures one should include the inelastic collision integrals in the kinetic equations \cite{Kaplan, Heikkila2, Kopnin}. \section{Conclusion}\label{Conclusion} We have investigated the electric and heat currents in an NIS$^\prime$S junction in the diffusive limit. We have developed a model which describes the nonequilibrium quasiparticle injection and relaxation in the superconducting lead. This model will be used as a tool to fit experimental data in various types of NIS tunnel junctions in non-reservoir geometries. These fits will be presented elsewhere. We showed that when the relaxation lengths in the superconductor are long compared to the coherence length, the electric current and the cooling power for $eV$ below $\Delta$ are suppressed. We ascribe this suppression to the backtunneling of nonequilibrium quasiparticles into the normal metal. The value of this suppression scales with the diffusive transparency parameter $W$. Finally, we calculated the nonequilibrium distribution functions in the superconducting lead.
\begin{acknowledgements} The authors thank D.~V.~Averin, E.~V.~Bezuglyi, H.~Courtois, D.~S.~Golubev, A.~A.~Golubov, T.~T.~Heikkil\"{a}, M.~Houzet, S.~Rajauria and A.~F.~Volkov for useful discussions. This work was supported by the Na\-no\-SciERA ``Na\-no\-fridge'' EU project. \end{acknowledgements}
\section{Introduction} An important invariant in graph theory is $\tau(G)$, the number of spanning trees of a graph $G$. The first result related to $\tau(G)$ dates back to 1847 and is attributed to Kirchhoff \cite{Kirchhoff}. In his celebrated theorem he showed that the number of spanning trees of a graph $G$ is closely related to the cofactor of a special matrix (the {\em Laplacian matrix}) that is obtained by subtracting the {\em adjacency matrix} from the respective {\em degree matrix} (a diagonal matrix with the vertex degrees on the diagonal). If by $Q(G)$ we denote the Laplacian matrix of a graph $G$ of order $n$ with eigenvalues $0 = \lambda_1 \leq \cdots \leq \lambda_n$, then a corollary of Kirchhoff's theorem can be stated as \begin{equation}\label{form} \tau(G) = \frac{1}{n} \lambda_2 \cdots \lambda_n. \end{equation} For example, as the eigenvalue $n$ of $Q(K_n)$ has multiplicity $(n-1)$, it follows that \begin{equation}\label{Cayley} \tau(K_n) = n^{n-2}. \end{equation} Equation (\ref{Cayley}) is also referred to as the {\em Cayley formula} as a tribute to its discoverer Arthur Cayley \cite{Cayley}. For a survey of known results related to the Laplacian spectrum of graphs we refer the reader to \cite{Mohar}. Since the result of Cayley, many interesting identities for the number of spanning trees of various classes of graphs have been derived. For example, Bogdanowicz \cite{Bog} showed that the number of spanning trees of the $n$-fan $F_{n+1}$ equals $f_{2n}$, where $f_n$ is the $n$-th {\em Fibonacci number}. A similar result relating the number of spanning trees of the wheel graph to {\em Lucas numbers} is also known \cite{IEE}. Counting the number of spanning trees is not only an area that is rich with surprising identities but also one that holds a fundamental role in other scientific areas such as physics \cite{Phy3,Phy1} and networking theory \cite{Net1}, and it also finds applications in the study of various electrical networks \cite{El}. Since {\em graph products} (as defined in \cite{Klavzar}) form a basis for many network topologies, it is natural to study the function $\tau$ in relation with various graph products. In this paper we study the number of spanning trees in the Cartesian product of graphs. For simple graphs $G_1$ and $G_2$, the Cartesian product $G_1 \square G_2$ is defined as the graph with vertex set $V(G_1) \times V(G_2)$ such that two vertices $(u,u')$ and $(v,v')$ are adjacent if and only if either $u = v$ and $u'$ is adjacent to $v'$ in $G_2$, or $u' = v'$ and $u$ is adjacent to $v$ in $G_1$. In what follows, $G_1$ and $G_2$ will denote simple graphs of order $n_1$ and $n_2$ such that $m_1 = |E(G_1)|$ and $m_2 = |E(G_2)|.$ Moreover, we will denote by $\lambda_1, \ldots, \lambda_{n_1}$ and $\mu_1, \ldots, \mu_{n_2}$ the eigenvalues of $Q(G_1)$ and $Q(G_2)$, respectively. Using this notation, we can state the well-known fact (see \cite{Mohar} for a survey of results related to the Laplacian spectrum) that the Laplacian eigenvalues of $G_1\square G_2$ are $$ \lambda_i + \mu_j \quad \hbox{for} \quad i = 1, \ldots, n_1 \quad \hbox{and} \quad j = 1, \ldots, n_2.$$ Applying the latter to identity (\ref{form}) and using the fact that $\lambda_1 = \mu_1 = 0$, one obtains the following formula for the number of spanning trees of the Cartesian product of $G_1$ and $G_2$: \begin{equation}\label{SPProd} \tau(G_1\square G_2) = \tau(G_1)\tau(G_2)\prod_{i = 2}^{n_1} \prod_{j=2}^{n_2} (\lambda_i + \mu_j).
\end{equation} \section{Upper and lower bounds for $\tau(G_1 \square G_2)$} We are going to simplify equation (\ref{SPProd}) so as to obtain upper and lower bounds for $\tau(G_1 \square G_2).$ Furthermore, we will characterize the graphs for which equality holds and derive a formula for the number of spanning trees of the {\em Rook's graph} $K_{n_1} \square K_{n_2}.$ \begin{theorem}\label{T1} $\tau(G_1 \square G_2) \geq \frac{2^{(n_1-1)(n_2-1)}}{n_1n_2} (\tau(G_1) n_1)^{\frac{n_2+1}{2}} (\tau(G_2)n_2)^{\frac{n_1+1}{2}}$ where equality holds if and only if $G_1$ or $G_2$ is not connected, or $n_1 = n_2$ and $G_1 \simeq G_2 \simeq K_{n_1}.$ \end{theorem} \begin{proof} Consider the expression: $$ \prod_{i=2}^{n_1} \prod_{j = 2}^{n_2} (\lambda_i+\mu_j).$$ By the inequality of arithmetic and geometric means, $\lambda_i + \mu_j \geq 2\sqrt{\lambda_i \mu_j}$ for every $i,j$; it therefore follows that $$ \prod_{i=2}^{n_1} \prod_{j = 2}^{n_2} (\lambda_i+\mu_j) \geq \prod_{i=2}^{n_1} \prod_{j = 2}^{n_2} 2\sqrt{\lambda_i \mu_j} = 2^{(n_1-1)(n_2-1)} \prod_{i=2}^{n_1} \prod_{j = 2}^{n_2} \sqrt{\lambda_i \mu_j}.$$ The last expression can also be written as: $$2^{(n_1-1)(n_2-1)} \prod_{i=2}^{n_1} \sqrt{\lambda_i^{n_2-1}} \prod_{j=2}^{n_2} \sqrt{\mu_j^{n_1-1}}. $$ We now multiply and divide the last expression by $\sqrt{{n_1}^{n_2-1}{n_2}^{n_1-1}}$ and obtain: $$2^{(n_1-1)(n_2-1)}\sqrt{{n_1}^{n_2-1} {n_2}^{n_1-1}} \frac{ \prod_{i=2}^{n_1} \sqrt{\lambda_i^{n_2-1}}}{\sqrt{{n_1}^{n_2-1}}} \frac{\prod_{j=2}^{n_2} \sqrt{\mu_j^{n_1-1}}}{\sqrt{{n_2}^{n_1-1}}}$$ which, according to (\ref{form}), equals $$2^{(n_1-1)(n_2-1)} (\tau(G_1)n_1)^{\frac{n_2-1}{2}} (\tau(G_2)n_2)^{\frac{n_1-1}{2}}\,. $$ The stated inequality now follows after combining the derived result with equation (\ref{SPProd}). We now examine the cases in which equality holds. If $G_1$ or $G_2$ is not connected, then equality clearly holds as $\tau(G_1\square G_2) = 0.$ Therefore, let us assume $G_1$ and $G_2$ are connected. As we derived our inequality using the inequality of arithmetic and geometric means, it follows that equality holds if and only if $\lambda_i = \mu_j$ for every $i = 2, \ldots, n_1$ and $j = 2, \ldots, n_2.$ The latter holds if and only if $$\lambda_2 = \cdots = \lambda_{n_1} = \mu_2 = \cdots = \mu_{n_2},$$ which means that $Q(G_1)$ and $Q(G_2)$ have eigenvalues of multiplicity $n_1-1$ and $n_2-1$, respectively. As the only graph of order $k$ whose Laplacian matrix has an eigenvalue of multiplicity $k-1$ is $K_k$, it follows that $n_1 = n_2$ and thus $G_1 \simeq K_{n_1} \simeq G_2.$ \end{proof} In the proof of Theorem \ref{T1} we applied the inequality of arithmetic and geometric means within each factor $\lambda_i + \mu_j$ of (\ref{SPProd}) individually. Observe that the same inequality can instead be applied across the factors of equation (\ref{SPProd}). In the next theorem we use this observation and the fact that $ \sum_{i=2}^{n_1} \lambda_i = 2m_1$ and $\sum_{i=2}^{n_2} \mu_i = 2m_2$ in order to derive an upper bound for $\tau(G_1 \square G_2)$. \begin{theorem}\label{T2} $ \tau(G_1\square G_2) \leq \tau(G_1)\tau(G_2)\left[ \frac{2m_1}{n_1-1} + \frac{2m_2}{n_2-1} \right]^{(n_1-1)(n_2-1)},$ where equality holds if and only if $G_1$ or $G_2$ is not connected, or $G_1 \simeq K_{n_1}$ and $G_2 \simeq K_{n_2}.$ \end{theorem} \begin{proof} As observed, we can bound equation (\ref{SPProd}) by applying the inequality of arithmetic and geometric means to its factors.
We then obtain $$ \tau(G_1\square G_2) = \tau(G_1)\tau(G_2)\prod_{i = 2}^{n_1} \prod_{j=2}^{n_2} (\lambda_i + \mu_j) \leq \tau(G_1)\tau(G_2)\left[\frac{\sum_{i=2}^{n_1} \sum_{j=2}^{n_2} (\lambda_i + \mu_j)}{(n_1-1)(n_2-1)}\right]^{(n_1-1)(n_2-1)}, $$ which we further simplify to $$ \tau(G_1)\tau(G_2) \left[ \frac{(n_2-1)\sum_{i=2}^{n_1} \lambda_i + (n_1-1)\sum_{j=2}^{n_2} \mu_j}{(n_1-1)(n_2-1)} \right]^{(n_1-1)(n_2-1)}. $$ Applying the identity for the sum of the eigenvalues of the Laplacian matrix, we obtain $$ \tau(G_1)\tau(G_2)\left[\frac{2m_1}{n_1-1}+\frac{2m_2}{n_2-1}\right]^{(n_1-1)(n_2-1)},$$ which is what we wanted to show. Observe now that if $G_1$ or $G_2$ is not connected, equality in the stated bound clearly holds. Thus, let us assume $G_1$ and $G_2$ are connected. Equality will then hold if and only if $$ \lambda_i + \mu_j = \lambda_{i'}+\mu_{j'} \quad \hbox{ for} \quad i,i' = 1,\ldots,n_1 \quad \hbox{and} \quad j,j' = 1,\ldots,n_2.$$ The latter holds if and only if $$\lambda_2 = \cdots = \lambda_{n_1} \quad \hbox{and} \quad \mu_2 = \cdots = \mu_{n_2},$$ which means $G_1 \simeq K_{n_1}$ and $G_2 \simeq K_{n_2}$, as these are the only graphs of order $n_1$ and $n_2$ having eigenvalues of multiplicity $n_1-1$ and $n_2-1$, respectively. \end{proof} The statements of Theorems \ref{T1} and \ref{T2} simplify substantially if $G_1$ and $G_2$ are trees. In this case we can write the implications of Theorem \ref{T1} and Theorem \ref{T2} as the following corollary. \begin{cor} If $G_1$ and $G_2$ are trees of order $n_1\geq 3$ and $n_2 \geq 3$ respectively, then $$ 2^{(n_1-1)(n_2-1)}{n_1}^{\frac{n_2-1}{2}} {n_2}^{\frac{n_1-1}{2}} < \tau(G_1 \square G_2) < 2^{2(n_1-1)(n_2-1)}.$$ \end{cor} As we saw in Theorem \ref{T2}, the derived bound for $\tau(G_1 \square G_2)$ is tight whenever $G_1 \simeq K_{n_1}$ and $G_2 \simeq K_{n_2}.$ This, in combination with equation (\ref{Cayley}), readily gives an exact formula for the number of spanning trees of $K_{n_1} \square K_{n_2}$ (a short numerical check of this formula is sketched at the end of this note): \begin{cor} $\tau(K_{n_1} \square K_{n_2}) = {n_1}^{n_1-2}{n_2}^{n_2-2}(n_1+n_2)^{(n_1-1)(n_2-1)}.$ \end{cor} Observe that the same argument as used in Theorems \ref{T1} and \ref{T2} could be applied to the other standard graph products, provided that a similar characterisation of their Laplacian spectrum is known. At present, no result of this type is known to the author; hence, we leave it as future work to investigate upper and lower bounds for the other graph products. \section{Acknowledgements} The author is thankful to Sandi Klav\v{z}ar for constructive discussions related to the problem.
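As referenced above, the Rook's graph formula admits a quick numerical sanity check via the matrix-tree theorem, i.e., by evaluating a cofactor of the Laplacian of $K_{n_1} \square K_{n_2}$. The following sketch (an added illustration in Python, using the fact that the Laplacian of a Cartesian product is $Q(G_1)\otimes I + I \otimes Q(G_2)$) compares both sides of the formula for small $n_1, n_2$:
\begin{verbatim}
import numpy as np

def tau(L):
    # Matrix-tree theorem: tau(G) equals any cofactor of the Laplacian.
    return round(np.linalg.det(L[1:, 1:]))

def laplacian_complete(n):
    return n*np.eye(n) - np.ones((n, n))

def laplacian_cartesian(L1, L2):
    # Laplacian of the Cartesian product: L1 (x) I + I (x) L2
    return np.kron(L1, np.eye(len(L2))) + np.kron(np.eye(len(L1)), L2)

n1, n2 = 3, 4
L = laplacian_cartesian(laplacian_complete(n1), laplacian_complete(n2))
lhs = tau(L)
rhs = n1**(n1-2) * n2**(n2-2) * (n1+n2)**((n1-1)*(n2-1))
print(lhs, rhs, lhs == rhs)   # 5647152 5647152 True
\end{verbatim}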
\section{Introduction} \IEEEPARstart{T}{he} LiDAR (light detection and ranging) sensor is mounted on mobile platforms for locating multiple targets and obstacles \cite{ Song2016, Kang2012, Ramasamy2016} for tasks such as collision avoidance and path planning \cite{Zeng2018}, and for generating 3D maps of the surroundings for localization \cite{Zhang2020,Wang2017}. To generate an accurate reconstruction and interpretation of the surroundings, precise assembly of the LiDAR on the mobile platform is critical. An alignment offset as small as a few degrees can cause a significant error in predicting the position of obstacles relative to the ego-vehicle and can result in dangerous situations. Thus, for large-scale production of vehicles with attached LiDAR for a high level of ADAS, sensor misalignment inspection is necessary, because an unexpected offset in the sensor assembly reduces the reliability and safety of the autonomous system. The sensor misalignment can be inspected by extrinsic parameter calibration, which estimates the pose of the sensor frame with respect to the reference frame of the mobile platform. Therefore, this paper aims to design an automatic and accurate LiDAR calibration system for practical application in the vehicle assembly process in the automobile industry. Most studies on LiDAR extrinsic calibration are focused on solving for the geometric relationship with respect to a camera. In contrast, there are only a few studies on estimating the LiDAR misalignment with respect to the vehicle body frame, as explained in Section II. These studies of LiDAR extrinsic calibration use the flat ground plane and a long pole observed from a series of poses \cite{Zhu2013}, the ground as the target object at various poses \cite{Zaiter2019}, or multiple planar objects (walls and floor) \cite{Atanacio-Jimenez2011} to estimate the rigid transformation between the LiDAR and the target objects. However, considering the actual environment of the automobile manufacturing process, the inspection area is confined to a limited space of a few meters that includes other machinery for various assembly tasks, such as rails on the ground. Therefore, to overcome these limitations, this paper proposes a new extrinsic calibration method that uses a single calibration target at close range and requires only one target pose. Most importantly, the misalignment pose is estimated with an accuracy within a few millimeters and at the subdegree level, even for a low-resolution mobile LiDAR. This paper designs a novel automatic calibration system composed of a simple planar target with embedded photodetector arrays, named the PD-target system. Modules of NIR photodetector 1D arrays are arranged near the corners of a planar mid-sized target to detect the position of the beam spot on the target surface. The positions of the corresponding beam points measured from the LiDAR point cloud data and the target photodetectors are used for the initial pose estimation. Then, an iterative nonlinear optimization method is applied to obtain the final pose of the LiDAR with respect to the target body frame, which, without loss of generality, is assumed to be the vehicle body frame. The contributions of this paper can be summarized as follows. First, the proposed system is a novel calibration method that combines a target board with photodetector arrays (PD-target), which has not been proposed in the literature.
Second, the system can estimate the sensor alignment with subdegree and subcentimeter accuracy with the help of the NIR photodetector sensors. Last, the system requires only one pose of a simple target board at a close range of a few meters, which makes it applicable to a large-scale automobile process. \section{Related Work} LiDAR calibration can be classified into intrinsic parameter calibration and extrinsic parameter calibration. Intrinsic calibration refers to optimizing the internal parameters to minimize the systematic error of the LiDAR measurements due to the optical parts and the conversion from raw data to 3D points \cite{Atanacio-Jimenez2011}, \cite{Muhammad2010}, \cite{Glennie2010}. This paper, however, focuses on the extrinsic calibration of LiDAR. Most research on LiDAR extrinsic calibration is focused on estimating the geometric relationship of the LiDAR with respect to the camera. The extrinsic parameters are estimated by extracting the 3D-2D correspondences from the point cloud of 3D data and 2D image pixels using one or multiple planar targets \cite{geiger2012automatic, kim2019extrinsic} of various shapes, such as circular \cite{fremont2008extrinsic, alismail2012automatic}, trihedral \cite{gong20133d}, and 3D cubic objects \cite{pusztai2017accurate}. A common method is using a chessboard at multiple poses to determine the correspondence of the planes and lines and applying an optimization method \cite{zhang2004extrinsic, verma2019automatic}, \cite{Mirzaei2012}. In contrast to the research on LiDAR extrinsic calibration with a camera, the LiDAR misalignment with respect to the vehicle body frame has not been investigated to that extent. Several studies proposed measuring systems that estimate the relative 6-DOF pose between a planar target and a sensor system composed of 1D or 2D laser scanners \cite{kim2012developing, kim2014design, kim2015portable, kim2014structural, kim2014note}. These systems detect the feature beam points by using a camera or by using the edges of the planar targets. A method of calibrating multiple 2D laser scanners with respect to the vehicle body was proposed in \cite{Underwood2010} that used a vertical pole on flat ground. An extrinsic calibration for a 3D LiDAR was proposed in \cite{Zhu2013} that first computes the pitch and roll angles from the flat ground plane and then computes the navigation angle by using a long pole observed from a series of poses with on-vehicle IMUs and GPS. A similar method was introduced to estimate the extrinsic LiDAR parameters with respect to the reference ground in \cite{Zaiter2019}. It used the best-fit plane model of the ground and a least squares conic algorithm to estimate the Euler angles and the sensor elevation above the ground. Another extrinsic parameter method for a 3D LiDAR used a calibration object consisting of four walls and the floor \cite{Atanacio-Jimenez2011}. After estimating the sensor intrinsic parameters, the extrinsic rigid transformation between the LiDAR and the target objects was estimated with an optimization method. Since these methods cannot satisfy the requirements of the aforementioned large-scale application, this paper proposes a new extrinsic calibration method using the PD-target system, which is composed of a planar target with photodetector arrays. \section{Analysis of LiDAR Measurement Error} \subsection{Resolution of 3D measurement} The LiDAR model used in this paper is the Velodyne VLP-16, which has 16 vertical channels of laser diodes emitting at the 903 nm near-infrared (NIR) wavelength.
The firing of beam pulses is a series of short bursts with a period of 2.3 $\mu s$. Therefore, the minimum sampling rate of an ADC for capturing a LiDAR pulse should be at least 1 MHz. From the raw LiDAR data, the beams on the target board can be defined to be bounded by the vertical angle ranging from $\omega_1$ to $\omega_n$ and the azimuth angle from $\alpha_1$ to $\alpha_m$, where $n$ and $m$ are the maximum number of scan channels and the number of beams per channel on the given target board, respectively, as described in Fig. \ref{Fig_TargetboardROI}. The VLP-16 has an azimuth and vertical angular resolution of 0.2 and 2 degrees, respectively, which corresponds to an $H_{res}$ of 9 mm and a $V_{res}$ of 87 mm on the target board located at a distance of 2.5 m. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Fig_TargetboardROI2.png} \caption{\label{Fig_TargetboardROI} The geometric relationship of the 3D rotation (R) and translation (T) between the target body frame and the LiDAR frame is shown. The number of scan channels of the LiDAR is n, and each beam with index (i,j) is measured in terms of the range (r), azimuth ($\alpha$) and vertical angle ($\omega$). } \end{figure} \subsection{Uncertainty of 3D measurement} The principle in estimating the relative pose between the LiDAR and the target is to find the correspondence of feature points, ${}^OP$ and ${}^LP$, which are measured in the two different coordinate frames of the target system, $\{O\}$, and the LiDAR body, $\{L\}$, respectively, as shown in Fig. \ref{Fig_TargetboardROI}. The relationship between the two groups of corresponding feature points, ${}^OP$ and ${}^LP$, describes the orientation matrix $\bf{R}$ and the translation vector $\bf{T}$ of the LiDAR with respect to the reference target board. \begin{equation} {}^{\bf{O}}{\bf{P}} = {}_{\bf{L}}^{\bf{O}}{\bf{M}}{}^{\bf{L}}{\bf{P}} = [{}_{\bf{L}}^{\bf{O}}{\bf{R}}|{}_{\bf{L}}^{\bf{O}}{\bf{T}}]{}^{\bf{L}}{\bf{P}} \label{eq:pose} \end{equation} Typically, corners and edges are used as the features of a target for LiDAR calibration. Let the corners of the target board, ${}^OP_{cor,i}$, be the features that need to be detected by the LiDAR, as indicated in Fig. \ref{Fig_TargetboardROI}. For the ideal measurement of such a feature point, ${}^{\bf{L}}{\overline {\bf{p}} _{i,k}}$, the LiDAR should project a beam spot directly on that corner, \begin{equation} {}^{\bf{L}}{\overline {\bf{p}} _{i,k}} = ({r_{i,k}})\left[ {\begin{array}{*{20}{c}} {\cos ({\omega _{i,k}})\sin ({\alpha _{i,k}})}\\ {\cos ({\omega _{i,k}})\cos ({\alpha _{i,k}})}\\ {\sin ({\omega _{i,k}})} \end{array}} \right] \end{equation} However, the actual position of the beam closest to the corner, ${}^{\bf{L}}{\widetilde {\bf{p}}}$, is restricted by the sensor resolution, \begin{equation} {}^{\bf{L}}{\widetilde {\bf{p}}_{i,k}} = ({\overline r _{i,k}} + {\varepsilon _k})\left[ {\begin{array}{*{20}{c}} {\cos ({{\overline \omega }_{i,k}} + \delta {\omega _k})\sin ({{\overline \alpha }_{i,k}} + \delta {\alpha _k})}\\ {\cos ({{\overline \omega }_{i,k}} + \delta {\omega _k})\cos ({{\overline \alpha }_{i,k}} + \delta {\alpha _k})}\\ {\sin ({{\overline \omega }_{i,k}} + \delta {\omega _k})} \end{array}} \right] \end{equation} where $\varepsilon _k$ is the depth measurement noise, and $\delta \omega$ and $\delta \alpha$ are the uncertainties of the vertical and azimuth angles bounded by the respective resolutions. The subscript $i$ denotes the point closest to the $i^{th}$ corner, and $k$ is the scan number.
The deviation of the beam point from the corner on the target surface is bounded by the azimuth and vertical angle resolutions as \begin{equation} {}^{\bf{L}}{\overline {\bf{p}} _{i,k}} - {}^{\bf{L}}{\widetilde {\bf{p}}_{i,k}} \le {{\bf{\varepsilon }}_{i,k}} = \left[ {\begin{array}{*{20}{c}} {{r_{i,k}}\tan \delta {\alpha _k}}\\ 0\\ {{r_{i,k}}\tan \delta {\omega _k}} \end{array}} \right] \end{equation} This correspondence uncertainty needs to be minimized to obtain higher accuracy in the relative pose estimation. Since the resolution of the given LiDAR cannot be improved unless a higher-performance LiDAR replaces it, the features on the target board need to be shifted from the corners (${}^OP_{cor}$) to the positions where the beams are actually projected (${}^OP_{i}$). However, this introduces the technical problem of measuring the position of ${}^OP_{i}$ with the target board. Thus, this paper proposes a method to measure the position ${}^OP_{i}$ with high precision by using photodetectors attached on the target surface. \section{Design of PD-Target System} \subsection{Photodetector array} Based on the analysis of the laser beams in Section III, the appropriate photodetector should be sensitive to the NIR spectrum and have a bandwidth higher than 1 MHz. In addition, the active sensor area of the photodetector should be large enough to capture the beam spot diameter. This paper selected a photodetector 1-D array of 16 diode elements (Hamamatsu S4111), which is sensitive from the UV to the NIR spectrum. The active sensor area is 16 mm by 1.45 mm; the overall dimensions are shown in Fig. \ref{Fig_PD}. \begin{figure}[!t] \centering \includegraphics[width=0.9\columnwidth]{Fig_PD2.png} \caption{Photodetector 1-D array of 16 diode elements (Hamamatsu S4111) and the PCB of the signal processing circuit designed in this study }\label{Fig_PD} \end{figure} \subsection{PD-Target board configuration} The photodetector (PD) 1-D arrays are positioned on the target board to capture the LiDAR beam points, which are near the target edges and on the first and the last vertical laser scans. Since each photodiode module is a 1D array, it can detect the center position of a single beam spot in its own coordinate frame, $\{D\}$. The beams detected by the photodiodes are referred to as the key beam points throughout the paper. Then, the position of the beam in the target coordinate frame $\{O\}$ can easily be converted from $\{D\}$ with the offset vector $P_{offset}=[X_{pd,offset},Z_{pd,offset}]$ that describes the location of each photodetector attachment relative to the center of the board. \begin{equation} {}^{\bf{O}}{\bf{P}} = {}^D{\bf{P}} - {{\bf{P}}_{offset}} \end{equation} The 1D PD could be attached to the target surface at any angle. However, we considered the horizontal and vertical orientations and compared the performance of these two configurations. \subsection{Signal Processing Circuit} The photodetector diode generates an electric current when the LiDAR beam is projected on it. The output of the PD should be a pre-amplified voltage to be used as the source for the ADC that is connected to the main processor. Thus, a trans-impedance amplifier (TIA) circuit is designed to convert the current to a voltage and amplify the output to a usable level. The designed TIA circuit consists of an OP-amplifier, a feedback resistor, and a capacitor, as shown in Fig. \ref{Fig_PD}.
\begin{table}[t] \caption{OP-AMP and photodetector parameters} \label{tab:opamp} \center \begin{tabular*}{\linewidth}{c|cc} \hline Symbol& Description& Value\\ \hline GBWP& Op-amp gain bandwidth product& 1 [MHz]\\ $A_{OL}$& Op-amp DC open-loop gain& 106 [dB]\\ $C_{i,amp}$& Op-amp input capacitance& 1.4 [pF]\\ $C_{pd}$& Photodiode junction capacitance& 200 [pF]\\ $R_{sh}$& Photodiode shunt resistance& 250 [G$\Omega$]\\ \hline \end{tabular*} \end{table} The value of the feedback resistor, $R_f$, is determined from the ratio of the maximum photodiode current and the preferred range of the output voltage. The electric current of the PD generated by the laser photons from 2.5 m is expected to be a maximum of 100 $\mu A$. Targeting the output voltage to be within the range of 0 V to 10 V, the $R_f$ value was designed to be 100 k$\Omega$. The output voltage of the TIA circuit can be expressed as a first-order system, \begin{equation} V_{out} = \frac{{{I_d}{R_f}}}{{({R_f}\;{C_f}\;s + 1)\;}}\ \end{equation} However, to avoid any unwanted oscillation in the signal, the stability of the circuit needs to be analyzed with the noise gain, which is the transfer function of a first-order system with one zero and one pole, composed of all the resistors and capacitors in the amplifier and the photodetector, \begin{equation} {G_{noise}}\;(s) = \frac{{({R_f} + {R_{sh}}\;)}}{{{R_{sh}}}}\frac{{\left( {\frac{{{R_f}{R_{sh}}}}{{{R_f} + {R_{sh}}\;}}} \right)({C_f} + {C_{in}})s + 1}}{{{R_f}\;{C_f}\;s + 1}} \end{equation} where $C_{in}$ is the combination of the capacitances at the input of the amplifier. All the parameters except the variables $R_f$ and $C_f$ are predetermined by the parts specification. The oscillation, or the stability of the signal, can be predicted with the circuit quality factor (Q), which can be calculated from the TIA circuit phase margin \cite{Fuada2017}, \begin{equation} Q = {\left( {{{\left( {\frac{1}{{{{(\tan \,\varphi )}^2}}}\; + 0.5} \right)}^2} - 0.25} \right)^{\frac{1}{4}}} \end{equation} A Q-factor value below 0.5 makes the system overdamped, avoiding any oscillation in the output voltage. In this application, a feedback capacitance of 68 pF was selected, resulting in a Q-value of 0.47 and thus an overdamped system. A circuit simulation of the designed TIA circuit was conducted by applying 100-ms (10 Hz) laser pulses with different power intensities to generate current inputs ranging from 100 to 400 $\mu A$. The results in Fig. \ref{Fig_CircuitSim} show that the output voltage can be as high as 4 V when a current of 400 $\mu A$ is applied by the laser beam. Moreover, there is no oscillation in the output signals, as required. The circuitry of the TIA was designed on a PCB that can amplify up to 4 diode channels of a PD array, as shown in Fig. \ref{Fig_PD}. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Fig_CircuitSim2.png} \caption{ Simulation of the designed TIA circuit with an input signal of 10-Hz laser pulses with different power intensities that generate currents of 100 to 400 $\mu A$ from the photodetector }\label{Fig_CircuitSim} \end{figure} The sampling rate for each ADC channel should be higher than 2 MHz to capture laser pulses of the 2.3-$\mu s$ period. In addition, all the ADC channels must be sampled concurrently to capture all the output signals of the PD simultaneously. Limited by the availability of ADC channels and DAQ boards, only two PDs and four out of the sixteen diodes of each PD were sampled simultaneously.
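The stability analysis above is easy to reproduce numerically. The sketch below is our own illustration: it combines the parameters of Table \ref{tab:opamp} with the chosen $R_f = 100$ k$\Omega$ and $C_f = 68$ pF, and it estimates the phase margin with the common design shortcut $f_c = \sqrt{\mathrm{GBWP}\, f_z}$ for the loop-gain crossover, which is an assumption on our part rather than the procedure of \cite{Fuada2017}; with it, the result lands close to the quoted $Q \approx 0.47$:
\begin{verbatim}
import math

Rf, Cf = 100e3, 68e-12          # feedback resistor [Ohm] and capacitor [F]
Rsh    = 250e9                  # photodiode shunt resistance [Ohm]
Cin    = 200e-12 + 1.4e-12      # photodiode junction + op-amp input cap. [F]
GBWP   = 1e6                    # op-amp gain-bandwidth product [Hz]

# Zero and pole of the noise gain, per the transfer function above
Rp  = Rf*Rsh/(Rf + Rsh)         # ~Rf, since Rsh >> Rf
f_z = 1.0/(2*math.pi*Rp*(Cf + Cin))
f_p = 1.0/(2*math.pi*Rf*Cf)

# Rough phase margin at the assumed crossover f_c = sqrt(GBWP*f_z)
f_c = math.sqrt(GBWP*f_z)
phi = math.pi/2 - math.atan(f_c/f_z) + math.atan(f_c/f_p)

# Quality factor from the phase margin; Q < 0.5 => overdamped
Q = ((1.0/math.tan(phi)**2 + 0.5)**2 - 0.25)**0.25
print(f"f_z = {f_z/1e3:.1f} kHz, f_p = {f_p/1e3:.1f} kHz, Q = {Q:.2f}")
\end{verbatim}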
\section{Algorithm of LiDAR Extrinsic Calibration} \subsection{Preprocessing} From the whole point cloud of one LiDAR scan, the target board is segmented as the region of interest (ROI). A Euclidean clustering algorithm is applied to segment the planar target automatically from the background. With the point cloud data of the ROI, the best-fit plane model of the target is derived using a least squares method. Then, the ROI beam points are projected onto the fitted plane surface to minimize the sensor measurement error. \subsection{Relative pose of LiDAR} The relative 6-DOF pose of the LiDAR from the reference target frame can be estimated by solving for the transformation matrix ${}_L^OM(\phi, \theta, \psi, \Delta x,\Delta y,\Delta z)$, as mentioned in Eq. \ref{eq:pose}. To solve for the transformation matrix ${}_{\bf{L}}^{\bf{O}}{\bf{M}}$, the 3D positions of the feature beam points measured in the target body frame (${}^{\bf{O}}{\bf{p_i}}$) and in the LiDAR frame (${}^{\bf{L}}{\bf{p_i}}$) are necessary. With the proposed PD-target system, Eq. \ref{eq:pose} is modified to include the photodetector coordinate frame, $\{D\}$. \begin{equation} {}^D{\bf{P}} - {{\bf{P}}_{offset}} ={}_{\bf{L}}^{\bf{O}}{\bf{M}}{}^{\bf{L}}{\bf{P}} = [{}_{\bf{L}}^{\bf{O}}{\bf{R}}|{}_{\bf{L}}^{\bf{O}}{\bf{T}}]{}^{\bf{L}}{\bf{P}} \end{equation} where \begin{equation*} {}_{\bf{L}}^{\bf{O}}{\bf{R}} = \bf{R_z}(\phi)\bf{R_y}(\theta)\bf{R_x}(\psi),~~ {}_{\bf{L}}^{\bf{O}}{\bf{T}} = [ \Delta X, \Delta Y, \Delta Z]^T \end{equation*} The processes of obtaining the positions of the corresponding feature points, ${}^D{\bf{P}}$ and ${}^L{\bf{P}}$, are described below. Note that since the signal processing for each PD alignment is similar, only the horizontally aligned photodetectors are explained in this section. \subsection{Estimating the position of the beam center} For each LiDAR scan, there are at most three beams projected on the same photodetector in the horizontal alignment, as shown in Fig. \ref{Fig_PD_H_response}. With the given configuration of the target board at a distance of 2.5 meters, the diameter of a beam spot would be 19.6 mm. The beams on the photosensitive area overlap with an interval of 9.7 mm but arrive at different times with a period of 55 $\mu s$. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Fig_PD_H_response.png} \caption{Three beams (A to C) projected on the horizontally aligned photodetector at consecutive times, generating signals from each diode }\label{Fig_PD_H_response} \end{figure} The diode channel closest to the beam center outputs the highest peak level because the beam power intensity is approximately bell-shaped. The first set of peaks is generated by $beam A$, and $CH1$ shows a higher level than the others because it is the closest to the beam center. After 55 $\mu s$ from the first peaks, $beam B$ is projected on the PD closest to $CH11$, which can be inferred by comparing the peak level among the other channel outputs. Thus, by analyzing the distribution of the peak values for multiple beam projections, the position and the time of the beam scanning over the PD can be derived. From the three beams on the PD, the feature beam spot is selected as the one closest to the PD center. The center position of each beam can be estimated by analyzing the output signals in the spatial domain. Starting at the origin of the PD coordinate frame at $CH1$, the center and the end ($CH16$) of the PD are at 7.5 mm and 15 mm, respectively.
In this PD coordinate frame, the output of each channel for a beam is distributed as plotted in Fig. \ref{Fig_PD_H_gauss}. Since the laser beam power profile is modeled as a Gaussian distribution \cite{laconte}, the beam center is estimated by applying a Gaussian fit to the signals of the PD channels. This paper used the iterative procedure of Gaussian fitting proposed in \cite{guo2011}, which is known to be a robust and fast method. If the natural logarithm is applied to both sides of the 1D Gaussian model, a second-order polynomial is obtained, \begin{equation} \begin{array}{l} \ln (y) = \ln (A) - \frac{{{{(x - \mu )}^2}}}{{2{\sigma ^2}}}\\ = - \frac{{{x^2}}}{{2{\sigma ^2}}} + \frac{{2\mu x}}{{2{\sigma ^2}}} + \left( { - \frac{{{\mu ^2}}}{{2{\sigma ^2}}} + \ln (A)} \right) = {a_2}{x^2} + {a_1}x + {a_0} \end{array} \end{equation} where $x$ is the position on the PD, $y$ is the corresponding voltage output of the PD channel, $\mu$ is the mean, i.e., the beam center position, $\sigma$ is the standard deviation, and $A$ is a weight value. However, the actual output voltage signal, $y$, is corrupted by sensor noise. Applying the iterative procedure of Gaussian fitting \cite{guo2011} to achieve more robustness to the noise, the equation is modified as \begin{equation} \begin{array}{l} \left[ {\begin{array}{*{20}{c}} {\sum\nolimits_{i = 1}^m {x_i^4y_{i,(k - 1)}^2} }&{\sum\nolimits_{i = 1}^m {x_i^3y_{i,(k - 1)}^2} }&{\sum\nolimits_{i = 1}^m {x_i^2y_{i,(k - 1)}^2} }\\ {\sum\nolimits_{i = 1}^m {x_i^3y_{i,(k - 1)}^2} }&{\sum\nolimits_{i = 1}^m {x_i^2y_{i,(k - 1)}^2} }&{\sum\nolimits_{i = 1}^m {{x_i}y_{i,(k - 1)}^2} }\\ {\sum\nolimits_{i = 1}^m {x_i^2y_{i,(k - 1)}^2} }&{\sum\nolimits_{i = 1}^m {{x_i}y_{i,(k - 1)}^2} }&{\sum\nolimits_{i = 1}^m {y_{i,(k - 1)}^2} } \end{array}} \right]*\\ \left\{ {\begin{array}{*{20}{c}} {{a_{2,(k)}}}\\ {{a_{1,(k)}}}\\ {{a_{0,(k)}}} \end{array}} \right\} = \left\{ {\begin{array}{*{20}{c}} {\sum\nolimits_{i = 1}^m {x_i^2y_{i,(k - 1)}^2\ln ({y_i})} }\\ {\sum\nolimits_{i = 1}^m {{x_i}y_{i,(k - 1)}^2\ln ({y_i})} }\\ {\sum\nolimits_{i = 1}^m {y_{i,(k - 1)}^2\ln ({y_i})} } \end{array}} \right\} \end{array} \end{equation} where $k$ is the iteration number, with $k=10$ used in this study, and $y_{i,(k)}$ is defined as \begin{equation} {y_{i,(k)}} = \left\{ {\begin{array}{*{20}{ll}} {{y_i}}&{{\rm{for}}\;\;k = 0}\\ {\exp \left( {{a_{2,(k)}}x_i^2 + {a_{1,(k)}}{x_i} + {a_{0,(k)}}} \right)}&{{\rm{for}}\;\;k > 0} \end{array}} \right. \end{equation} Finally, the model parameters $\mu$ and $\sigma$ are obtained as \begin{equation} {\sigma ^2} = - \frac{1}{{2{a_{2,(k)}}}},\,\,\,\,\,\,\,\,\mu = {a_{1,(k)}}{\sigma ^2} = - \frac{{{a_{1,(k)}}}}{{2{a_{2,(k)}}}} \end{equation} Since there were only four measurements from the four sampled diode channels, two more data points at -5 mm and 20 mm were augmented with the value of 0.1 V, which is approximately the noise level. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Fig_PD_H_gauss.png} \caption{ The position of the beam center is estimated by applying a Gaussian fit on the signals of each diode channel. }\label{Fig_PD_H_gauss} \end{figure} After estimating the center positions of all the beams on the PD, the feature beam point is selected as the one whose center is closest to the PD center at 7.5 mm.
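The iterative fit is straightforward to implement. The following self-contained sketch is our own illustration of the procedure; the channel positions and peak voltages in the example are hypothetical, since only four of the sixteen diodes were sampled:
\begin{verbatim}
import numpy as np

def iterative_gaussian_fit(x, y, n_iter=10):
    # Iterative Gaussian fit of Guo (2011): weighted least-squares fit of
    # ln(y) = a2*x^2 + a1*x + a0, with weights y_(k-1)^2.
    x = np.asarray(x, dtype=float)
    w = np.asarray(y, dtype=float)       # y_(k-1), initialized with the data
    lny = np.log(y)
    for _ in range(n_iter):
        W = w**2
        A = np.array([[np.sum(W*x**4), np.sum(W*x**3), np.sum(W*x**2)],
                      [np.sum(W*x**3), np.sum(W*x**2), np.sum(W*x)],
                      [np.sum(W*x**2), np.sum(W*x),    np.sum(W)]])
        b = np.array([np.sum(W*x**2*lny), np.sum(W*x*lny), np.sum(W*lny)])
        a2, a1, a0 = np.linalg.solve(A, b)
        w = np.exp(a2*x**2 + a1*x + a0)  # y_(k) for the next iteration
    sigma2 = -1.0/(2.0*a2)
    return a1*sigma2, np.sqrt(sigma2)    # (mu, sigma)

# Four sampled channels plus the two augmented points at -5 mm and 20 mm
# (0.1 V, roughly the noise floor); the voltages here are hypothetical.
x = np.array([-5.0, 0.0, 5.0, 10.0, 15.0, 20.0])   # positions [mm]
y = np.array([0.1, 0.8, 2.9, 2.4, 0.5, 0.1])       # peak voltages [V]
mu, sigma = iterative_gaussian_fit(x, y)
print(f"beam center = {mu:.2f} mm, sigma = {sigma:.2f} mm")
\end{verbatim}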
The next step is to match the corresponding beam point (${}^LP_i$) from the LiDAR raw data, which contains the 3D position of each beam in polar or Cartesian coordinates with respect to the sensor body frame. To find the corresponding beam, the photodetector needs to be detected and segmented from the target surface in the LiDAR measurement. However, using only the 3D point data, the PD is not distinguishable from the target surface because the PD is too flat to show any clear depth differences. Instead of using the LiDAR 3D data, this paper used the difference in reflectivity between the beams on the PD and their neighboring beams. The surroundings of the PD are covered with a black-colored background to force a lower reflectance than that of the PD sensor surface. The test results showed that the reflectivity of the beams projected on the PD area has relatively higher values than that of the other nearby beams, as shown in Fig. \ref{Fig_PD_H_reflectance}. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Fig_PD_H_reflectance2.png} \caption{ Reflectivity values of the beams in the same laser channel that scan over the photodetector. The beam point with the local maximum intensity near the edges is the feature point on the photodetector. }\label{Fig_PD_H_reflectance} \end{figure} Now, we have the matching pair of position data for each key beam point, ${}^LP_i(\omega,\alpha,r)$ and ${}^OP_i(x,y,z)$, which are obtained from the LiDAR data and the PD signals, respectively. The azimuth ($\alpha$) from the point cloud and the matching beam center from the Gaussian fit are plotted for 50 LiDAR scans to validate the correspondence matching. Given that the azimuth of the key beam point varies over the scans due to the LiDAR rotation fluctuations, the beam center should change proportionally. However, the results show some outliers that deviate from the expected linear relationship, as plotted in Fig. \ref{Fig_PD_H_postprocessing}. All outliers have an offset error of 0.2 degrees, which is the azimuth resolution, i.e., one beam index in the same vertical channel. The offset by one index can be caused by several factors, such as occasional data loss of the first or the last beam on the target surface at the sharp edge of the target. In the experimental configuration of the target position, such an index offset can cause up to a 9.7 mm center position error. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Fig_PD_H_postprocessing2.png} \caption{The azimuth ($\alpha$) from the point cloud data and the position of the beam center from the photodetector are plotted for 50 scans. The outliers are removed to obtain a linear relationship for correct correspondence matching. }\label{Fig_PD_H_postprocessing} \end{figure} The actual relationship between the azimuth and the center position is a function of the tangent, but we approximate it as a linear curve within the small range of 1 degree of azimuth. The linear fit after removing the outliers with RANSAC is shown as the solid red curve in Fig. \ref{Fig_PD_H_postprocessing}.
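The outlier rejection just described amounts to a textbook RANSAC line fit. A minimal sketch (our own illustration, written in the form $\widehat{\mu} = \nu + \tau\,\alpha$ used in the next equation) is:
\begin{verbatim}
import numpy as np

def ransac_line(alpha, mu, n_iter=200, tol=1.0, seed=0):
    # Robustly fit mu = nu + tau*alpha, rejecting the one-index
    # (0.2 deg / ~9.7 mm) azimuth outliers; tol is in mm.
    rng = np.random.default_rng(seed)
    best = np.zeros(len(alpha), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(alpha), size=2, replace=False)
        if alpha[i] == alpha[j]:
            continue
        tau = (mu[j] - mu[i])/(alpha[j] - alpha[i])
        nu = mu[i] - tau*alpha[i]
        inliers = np.abs(mu - (nu + tau*alpha)) < tol
        if inliers.sum() > best.sum():
            best = inliers
    tau, nu = np.polyfit(alpha[best], mu[best], 1)  # refit on the inliers
    return nu, tau, best
\end{verbatim}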
Thus, the pose estimation applies the corrected ${}^OP_x$ from this post-processing to all LiDAR scan data as \begin{equation} {}^D{{\bf{p}}_{i,k}} = \left[ {\begin{array}{*{20}{c}} {{{\widehat \mu }_{i,k}} = {\nu _i} + {\tau _i}{\alpha _{i,k}}}\\ 0\\ {{}^D{z_{i,k}}} \end{array}} \right] \end{equation} \subsection{Optimizing pose estimation} Now that the corresponding feature points have been obtained from the target board and the LiDAR, the relative pose of the LiDAR with respect to the target body frame can be estimated. This paper applies the Levenberg-Marquardt method for the iterative optimization that minimizes the cost function of the pose error to obtain the best-fit values of the pose variables $\beta^*$ (a short numerical sketch of this step is given after the experimental results below). \begin{equation} \begin{array}{l} {\beta ^*} = \arg {\min _\beta }\sum {\left\| {{}^{\bf{O}}{\bf{P}} - [{}_{\bf{L}}^{\bf{O}}{\bf{R}}|{}_{\bf{L}}^{\bf{O}}{\bf{T}}]{}^{\bf{L}}{\bf{P}}} \right\|} \\ = \beta - \eta {({{\bf{J}}^{\bf{T}}}{\bf{J}} + \lambda \,{\rm{diag}}({{\bf{J}}^{\bf{T}}}{\bf{J}}))^{ - 1}}{{\bf{J}}^{\bf{T}}}{\bf{F}}(\beta )\\ \end{array} \end{equation} where \begin{equation*} \beta = (\phi ,\theta ,\psi ,\Delta x,\Delta y,\Delta z) \end{equation*} Here, $\bf{J}$ is the Jacobian matrix of the cost function $\bf{F}$ in Eq.~\eqref{eq_Fi}, $\eta = 0.02$ is a constant, and the varying damping rate $\lambda$ is initially set to 0.3. \begin{figure*}[!t] \normalsize \begin{equation} F{(\beta )_i} = \left[ {\begin{array}{*{20}{c}} {{}^O{P_{1,i}} + {r_i}(c{\alpha _i}c{\omega _i})(c\psi s\phi - c\phi s\psi s\theta ) - {r_i}(s{\alpha _i})(s\psi s\phi + c\psi c\phi s\theta ) - {r_i}(c{\alpha _i}s{\omega _i})(c\theta c\phi ) - \Delta x}\\ {{r_i}(s{\alpha _i})(c\phi s\psi - c\psi s\theta s\phi ) - {r_i}(c{\alpha _i}c{\omega _i})(c\psi c\phi + s\psi s\theta s\phi ) - {r_i}(c{\alpha _i}s{\omega _i})c\theta s\phi - \Delta y}\\ {{}^O{P_{3,i}} + {r_i}(c{\alpha _i}s{\omega _i})s\theta - {r_i}(s{\alpha _i})c\psi c\theta - {r_i}(c{\alpha _i}c{\omega _i})c\theta s\psi - \Delta z} \end{array}} \right] \label{eq_Fi} \end{equation} \hrulefill \end{figure*} \section{Experiment} Experiments were conducted to evaluate the accuracy and precision of the proposed system with the designed test bench, which consists of the PD-target board and the LiDAR pose controller module, as shown in Fig. \ref{Fig_ExpSetup}. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Fig_ExpSetup2.png} \caption{Experimental setup consisting of the PD-target board and the 2-DOF motion stage to control the LiDAR reference pose within a range of 3 degrees and 30 mm. }\label{Fig_ExpSetup} \end{figure} \subsection{Test-bench Setup} The target is a flat aluminum board with dimensions of 1 m in width and 0.54 m in height and has 1D PD arrays attached near each board corner. The board surface surrounding the photodiodes was covered with black paper to reduce the surface reflectance below that of the photodiode surface. The photodiodes were aligned vertically on the top-left and bottom-right corners and horizontally on the two remaining corners. Two external DAQs with 4 ADC ports at a maximum sampling rate of 4 MHz were used to capture the photodiode output voltages. Note that only two photodetectors were used at a time due to the limited availability of the high-performance ADCs. The LiDAR (VLP-16) is placed on the 2-DOF pose controller module positioned at 2.5 meters from the PD-target board. The pose controller module, consisting of a rotation motor stage and a linear motor stage, can precisely control the yaw and x-position of the LiDAR.
The yaw angle is controlled within a range of 3 degrees with a precision of 0.01 degrees, and the horizontal position is controlled within a range of 30 mm with a precision of 0.01 mm. \subsection{Experimental Results} The initial pose of the LiDAR with respect to the target body frame is set as $X_O = -0.7$ m, $Y_O = -2.5$ m. The relative pose of the LiDAR from the target board was estimated by the proposed algorithm at various reference poses of the yaw angle and x-position. \subsubsection{Yaw estimation test} In the first test, the yaw rotation of the LiDAR was controlled from -3 to 3 degrees with a step of 0.5 degrees by the motor stage. For each reference yaw angle, 50 LiDAR scans were repeated to evaluate the accuracy and precision of the relative pose estimation. Fig. \ref{Fig_PD_H_case12}a) shows that the horizontally aligned PD-target system tracks the input yaw angle with high accuracy. The yaw estimation error plotted in Fig. \ref{Fig_PD_H_case12}b) indicates that the highest offset error is approximately 0.2 degrees at the reference angle of -3 degrees. The overall accuracy and precision of the yaw estimation are calculated to be 0.05 degrees. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Fig_PD_H_case12_mod.png} \caption{The yaw estimation: a) tracking from -3 to 3 degrees, and estimation errors b) for the horizontally aligned and c) for the vertically aligned PD }\label{Fig_PD_H_case12} \end{figure} The same test was repeated with the vertically aligned photodetector array, and the results are plotted in Fig. \ref{Fig_PD_H_case12}c). The overall accuracy is under 0.03 degrees and the precision is under 0.06 degrees. The highest offset error occurs at the angle of 3 degrees, with an error value of approximately 0.1 degrees. \subsubsection{Displacement estimation test} A similar test was conducted by varying the X-position of the LiDAR from -30 mm to 30 mm with a step of 5 mm. Throughout the test, all orientation angles were fixed at the initial values. The estimation errors of the sensor position for both the horizontally and vertically aligned photodetectors are plotted in Fig. \ref{Fig_PD_HV_case22}. As shown in the results, both alignment types can detect the LiDAR offset displacement with an error of less than 3 mm. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Fig_PD_H_case22_mod.png} \caption{ The displacement estimation error statistics from 50 scans for a) the horizontally aligned and b) the vertically aligned photodetector. }\label{Fig_PD_HV_case22} \end{figure} Fig. \ref{Fig_PD_comparison_case22} shows the statistics of the estimation errors for all orientations and the displacement in this test. The horizontally and vertically aligned photodetectors have similar accuracy and precision, except that the precision values of the roll and tilt for the horizontal alignment are slightly larger, by about 0.02 degrees. The overall evaluation of the proposed system for the horizontally and vertically aligned photodetectors, obtained by averaging all test results, is presented in Table \ref{tab:exp}. Considering the possibility of experimental error, both photodetector alignments are observed to yield a similar level of accuracy and precision for all orientations and displacements, which are within 0.1 degrees and 3 mm. Compared to the high depth measurement noise and low resolution of a mobile LiDAR, the 0.1-degree and 3-mm calibration errors are quite an improvement. If more photodetectors could be used, the performance of the pose estimation would improve further.
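For completeness, the following is the minimal numerical sketch of the pose-optimization step referenced earlier. It is our own illustration rather than the implementation used in the experiments: it relies on SciPy's Levenberg-Marquardt driver instead of the hand-written update with $\eta$ and $\lambda$, and the correspondences are synthetic, so a known pose is recovered from noiseless points:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(beta, P_L, P_O):
    # Pose residuals F(beta), beta = (phi, theta, psi, dx, dy, dz),
    # with R = Rz(phi) Ry(theta) Rx(psi) as in the text.
    phi, theta, psi, dx, dy, dz = beta
    R = Rotation.from_euler('ZYX', [phi, theta, psi]).as_matrix()
    return (P_O - (P_L @ R.T + np.array([dx, dy, dz]))).ravel()

# Synthetic correspondences: 8 hypothetical key beam points
rng = np.random.default_rng(1)
P_L = rng.uniform(-0.5, 0.5, size=(8, 3)) + np.array([0.0, 2.5, 0.0])
beta_true = np.array([0.03, -0.01, 0.02, 0.7, 2.5, 0.1])
R_true = Rotation.from_euler('ZYX', beta_true[:3]).as_matrix()
P_O = P_L @ R_true.T + beta_true[3:]

sol = least_squares(residuals, x0=np.zeros(6), args=(P_L, P_O), method='lm')
print(np.round(sol.x - beta_true, 6))   # ~0: the pose is recovered
\end{verbatim}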
\begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Fig_PD_comparison_case22_mod.png} \caption{ Statistics of the orientation angle and X-position errors for the translation experiment. The circle marks indicate the results of the vertically aligned photodetectors. }\label{Fig_PD_comparison_case22} \end{figure} \begin{table}[t] \caption{ Overall Experimental Result of Alignment Estimation} \label{tab:exp} \begin{tabular}{cc|cccc} \hline \multicolumn{2}{c|}{Estimation} & Tilt $(^{\circ})$& Roll $(^{\circ})$&Yaw $(^{\circ})$ & $\Delta X (mm)$\\ \hline \multirow{2}{*}{Horizontal PD} & Accuracy & 0.03 & 0.04 & 0.01 & 0.6 \\ & Precision & 0.11 & 0.10 & 0.05 & 2.3 \\ \hline \multirow{2}{*}{Vertical PD} & Accuracy & 0.01 & 0.01 & 0.02 & 0.3 \\ & Precision & 0.08 & 0.08 & 0.05 & 2.6 \\ \hline \end{tabular} \end{table} \section{Conclusion} This paper proposed a novel PD-target calibration system for the extrinsic parameter calibration of a LiDAR mounted on a mobile platform, enabling automatic sensor misalignment inspection. The PD-target calibration system is composed of a planar target board with modules of NIR photodetector arrays attached near the corners of the board. With the photodetector arrays, the precise positions of the laser beams on the board surface can be measured and used as correspondence feature points for estimating the relative pose between the LiDAR and the target body frame. The proposed system was evaluated in terms of the accuracy and precision of the pose estimation with the designed test bench, which can control the reference yaw and horizontal position of the LiDAR. Experimental results on a VLP-16 LiDAR showed that the system estimated the offset pose within 0.15 degrees and 3 mm of accuracy and precision combined. Some technical issues remain to be overcome. The projection of the laser beams can miss the photodiodes due to the low vertical resolution of the LiDAR. Also, the cost of high-speed ADC DAQ boards with many channels is another limitation, and this study could only test two photodetectors at a time. Adding more photodetectors would improve the overall performance. \bibliographystyle{IEEEtranTIE}
\section{Introduction} \label{sec:intro} After the Higgs discovery at the Large Hadron Collider (LHC)~\cite{Aad:2012tfa,Chatrchyan:2012xdj}, much attention has been devoted to measuring the mass and other properties of the Higgs boson as precisely as possible. These properties, within the current experimental and theoretical uncertainties, are in agreement with the Standard Model (SM) predictions~\cite{Sekmen:2022vzu}. Consequently, any model beyond the SM (BSM) should contain a state matching the LHC mass and rate measurements. This requirement imposes strong constraints on any BSM parameter space. On the other hand, the absence of a discovery of any (additional) BSM particles also imposes severe constraints on the new physics parameters. For these reasons the Minimal Supersymmetric Standard Model (MSSM)~\cite{mssm}, which is probably the best and most studied extension of the SM, is also facing severe constraints on its parameter space. For example, the present value of the Higgs boson mass $M^{\rm obs}_{H}=125.38 \pm 0.14\,\, \mathrm{GeV}$~\cite{CMS:2020xrn} requires the SUSY partners of the top quark, the scalar tops, to be either in the multi-TeV range, or that certain relations among their parameters are fulfilled. One way to deal with this problem would be to look for extra sources of radiative corrections while keeping the sfermion masses light. Left-right mixing and flavor mixing in the sfermions can serve this purpose to some extent~\cite{AranaCatania:2011ak,Gomez:2014uha}. See \citere{Slavich:2020zjv} for a recent review of SUSY Higgs-boson mass calculations. In this paper we take a different route. In the MSSM, the superpotential and the soft breaking terms are generally considered as holomorphic functions. While the superpotential must be holomorphic, the soft SUSY-breaking terms can be non-holomorphic (NH) in nature~\cite{Girardello:1981wz, Bagger:1993ji}. For further reference, we will call this setup the Non-Holomorphic Supersymmetric Standard Model (NHSSM). In general, the NH soft SUSY-breaking terms can contribute to the radiative corrections to the Higgs boson masses. The question arises whether this additional freedom in the form of NH soft SUSY-breaking terms has the potential to increase the mass of the light {\cal CP}-even Higgs boson in a relevant way. Some of the previous analyses of the NH effects, particularly in the Higgs boson sector, can be found in \citeres{Jack:1999ud, Jack:1999fa, Jack:2004dv, Cakir:2005hd, Sabanci:2008qp, Un:2014afa}. More recently, in \citere{Chattopadhyay:2016ivr} it was reported that the NH soft SUSY-breaking terms can enhance/decrease the light {\cal CP}-even Higgs-boson mass $M_h$ by up to $3\,\, \mathrm{GeV}$. The analyses in this regard focused only on the leading top-stop contributions to the Higgs sector. In \citere{Chattopadhyay:2016ivr} the effects of NH soft SUSY-breaking terms on $M_h$ via their effects on the scalar top masses were calculated. The NH terms enter the scalar top sector through the left-right mixing parameter $X_t$. However, the observed effects can possibly be mimicked by a change in the holomorphic soft SUSY-breaking terms, in particular the trilinear Higgs-stop coupling, $A_t$: for each choice of $A_t^\prime$ the parameter $A_t$ can be adjusted to yield the same scalar top masses and mixing angles. An observed scalar top mass spectrum thus corresponds to a continuous set of combinations of $A_t$ and $A_t^\prime$ (keeping the other soft SUSY-breaking parameters and $\mu$ fixed).
An analysis that simply varies $A_t^\prime$, resulting in shifts in the scalar top masses and mixing angle, can thus not be regarded as realistic. On the other hand, the analysis in \citere{Chattopadhyay:2016ivr} neglected the effects of the NH terms entering the Higgs-sfermion couplings. As NH soft SUSY-breaking terms also enter the couplings of the Higgs bosons to the scalar fermions, it is important to consider all possible effects simultaneously, while clearly working out the genuine NH effects. In this work, we have calculated the effects of the NH soft SUSY-breaking term $A_t^\prime$ on the Higgs boson mass spectrum at the one-loop level using the Feynman diagrammatic approach. These newly evaluated one-loop corrections, obtained with the {\em FeynArts}/{\em FormCalc}\ setup~\cite{Hahn:2000kx,Hahn:2001rv,Fritzsche:2013fta,Hahn:1998yk}, were then fed into {\tt FeynHiggs}~\cite{mhiggslong, mhiggsAEC, mhcMSSMlong, Frank:2006yh, Mh-logresum, Bahl:2016brp, Bahl:2017aev, Bahl:2018qog} such that all other known higher-order corrections can be taken over from the MSSM. We ensured that the stop spectrum does not change under the variation of $A_t^\prime$. This allows us to reliably estimate the effects of the NH soft SUSY-breaking terms on the Higgs boson masses at the one-loop level. The paper is organized as follows: first we present the main features of the NHSSM in \refse{sec:model_NHSSM}. The computational setup is given in \refse{sec:CompSetup}. The numerical results are presented in \refse{sec:NResults}. Our conclusions can be found in \refse{sec:conclusions}. \section{Model set-up} \label{sec:model_NHSSM} The MSSM is the simplest supersymmetric structure one can build from the SM particle content. The general set-up for the soft SUSY-breaking parameters is given by~\cite{mssm} \begin{eqnarray} \label{softbreaking} -{\cal L}_{\rm soft}&=&(m_{\tilde Q}^2)_i^j {\tilde q}_{L}^{\dagger i} {\tilde q}_{Lj} +(m_{\tilde u}^2)^i_j {\tilde u}_{Ri}^* {\tilde u}_{R}^j +(m_{\tilde d}^2)^i_j {\tilde d}_{Ri}^* {\tilde d}_{R}^j \nonumber \\ & &+(m_{\tilde L}^2)_i^j {\tilde l}_{L}^{\dagger i}{\tilde l}_{Lj} +(m_{\tilde e}^2)^i_j {\tilde e}_{Ri}^* {\tilde e}_{R}^j \nonumber \\ & &+{\tilde m}^2_{1}h_1^{\dagger} h_1 +{\tilde m}^2_{2}h_2^{\dagger} h_2 +(B \mu h_1 h_2 + {\rm h.c.}) \nonumber \\ & &+ ( A_d^{ij}h_1 {\tilde d}_{Ri}^*{\tilde q}_{Lj} +A_u^{ij}h_2 {\tilde u}_{Ri}^*{\tilde q}_{Lj} +A_l^{ij}h_1 {\tilde e}_{Ri}^*{\tilde l}_{Lj} \nonumber \\ & & +\frac{1}{2}M_1 {\tilde B}_L^0 {\tilde B}_L^0 +\frac{1}{2}M_2 {\tilde W}_L^a {\tilde W}_L^a +\frac{1}{2}M_3 {\tilde G}^a {\tilde G}^a + {\rm h.c.}). \end{eqnarray} Here $m_{\tilde Q}^2$ and $m_{\tilde L}^2$ are $3 \times 3$ matrices in family space (with $i,j$ being the generation indices) for the soft SUSY-breaking masses of the left-handed squark ${\tilde q}_{L}$ and slepton ${\tilde l}_{L}$ $SU(2)$ doublets, respectively. $m_{\tilde u}^2$, $m_{\tilde d}^2$ and $m_{\tilde e}^2$ contain the soft masses of the right-handed up-type squark ${\tilde u}_{R}$, down-type squark ${\tilde d}_{R}$ and charged slepton ${\tilde e}_{R}$ $SU(2)$ singlets, respectively. $A_u$, $A_d$ and $A_l$ are the $3 \times 3$ matrices of trilinear couplings for up-type squarks, down-type squarks and charged sleptons, respectively. $\mu$ is the Higgs mixing parameter; ${\tilde m}_1$, ${\tilde m}_2$ and $B$ are the soft SUSY-breaking parameters of the Higgs sector, where $h_1$ and $h_2$ denote the two Higgs doublets. In the last line $M_1$, $M_2$ and $M_3$ define the bino, wino and gluino mass terms, respectively.
The superpotential in the MSSM must be holomorphic, and consequently the soft SUSY-breaking sector is generally parameterized via holomorphic operators. However, the MSSM can be extended by introducing R-parity violating and/or non-holomorphic terms in the soft breaking sector~\cite{Girardello:1981wz, Bagger:1993ji, Chakrabortty:2011zz}. In its simplest form, the following terms can be introduced in the soft SUSY-breaking sector of the MSSM: \begin{eqnarray} \label{NonH-TrilinearTerms} -{\cal L}_{\rm soft}^{\rm NH}&=&A_d^{^\prime ij}h_2 {\tilde d}_{Ri}^*{\tilde q}_{Lj} +A_u^{^\prime ij}h_1 {\tilde u}_{Ri}^*{\tilde q}_{Lj} +A_l^{^\prime ij}h_2 {\tilde e}_{Ri}^*{\tilde l}_{Lj} +\mu^{\prime} {\tilde h}_1 {\tilde h}_2 \end{eqnarray} Here $\mu^{\prime}$ is the NH higgsino mass term, whereas $A_u^{^\prime}$, $A_d^{^\prime}$ and $A_l^{^\prime}$ denote the NH trilinear coupling matrices for up-type squarks, down-type squarks and charged sleptons, respectively. These terms do not necessarily have any relationship with the holomorphic trilinear soft terms given in \refeq{softbreaking}. One possibility is to assume them equal to the holomorphic trilinear couplings as a ``boundary condition'' at the GUT scale in models such as the Constrained MSSM. However, even in that case, renormalization group running effects will result in completely different non-holomorphic trilinear terms~\cite{Un:2014afa}. Therefore it is a sensible approach to treat the non-holomorphic trilinear terms as independent, but overall of the same order of magnitude as the usual trilinear couplings, when comparing NHSSM predictions with experimental results. In the presence of the non-holomorphic trilinear terms, the sfermion mass matrices are modified as \begin{equation} M_{\tilde{f}}^{2}=\left( \begin{array} [c]{cc}% m_{\tilde{f}LL}^{2} & m_{\tilde{f}LR}^{2}\\[.5em] m_{\tilde{f}LR}^{2\dag} & m_{\tilde{f}^{\prime}RR}^{2}% \end{array} \right) \label{fermion mass matrix}% \end{equation} with% \begin{align} m_{\tilde{f}LL}^{2} & =m_{\tilde{f}}^{2}+M_{Z}^{2}\cos2\beta\left( I_{3}^{f}-Q_{f}s_{W}^{2}\right) +m_{f}^{2}\nonumber\\ m_{\tilde{f}^{\prime}RR}^{2} & =m_{\tilde{f}^{\prime}}^{2}+M_{Z}^{2}\cos2\beta Q_{f^{\prime}}s_{W}^{2}+m_{f}^{2}\nonumber\\ m_{\tilde{f}LR}^{2} & =m_{f}X_{f}\text{ ; \ \ \ }X_{f}=A_{f}-(\mu+A_f^\prime)\left\{ \cot\beta;\tan\beta\right\} \label{mass terms}% \end{align} where $I_{3}^{f}$ is the weak isospin of the fermion, $Q_{f}$ is the EM charge, $m_{f}$ is the standard fermion mass, and $f$ and $f^{\prime}$ stand for the left- and right-handed sfermions (except for the neutrino), respectively. $M_Z$ and $M_W$ denote the masses of the $Z$~and the $W$~boson, and $s_W = \sqrt{1 - c_W^2}$ with $c_W = M_W/M_Z$. $A_{f}$ ($A^{\prime}_{f}$) is the holomorphic (non-holomorphic) trilinear coupling\footnote{ We neglect {\cal CP}~violation throughout the paper.}, $\mu$ is the Higgs mixing parameter, and $\cot\beta$ applies for up-type squarks while $\tan\beta$ applies for down-type squarks and charged sleptons ($\tan \beta := v_2/v_1$ is the ratio of the vacuum expectation values of the two Higgs doublets). The NH higgsino mass parameter $\mu^{\prime}$ mentioned in \refeq{NonH-TrilinearTerms} modifies the neutralino and chargino mass matrices, but it does not enter the modified sfermion mass matrix. Consequently, it will not be particularly relevant for our present analysis, as we focus on the top/stop contributions.
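To make the role of $X_f$ explicit, the following minimal Python sketch diagonalizes the $2\times 2$ stop mass matrix of \refeq{fermion mass matrix}; the numerical inputs are purely illustrative and the electroweak D-term contributions are omitted for brevity. It illustrates the point exploited below: the physical stop masses depend on $A_t$ and $A_t^\prime$ only through the combination $X_t = A_t - (\mu + A_t^\prime)\cot\beta$, so different $(A_t, A_t^\prime)$ pairs with the same $X_t$ yield an identical spectrum.
\begin{verbatim}
import numpy as np

# Illustrative inputs in GeV; D-terms omitted for brevity
m_t, mQ3, mU3 = 173.0, 1500.0, 1500.0
mu, tan_beta = 1000.0, 7.0

def stop_masses(A_t, A_t_prime):
    # Eigenvalues of the 2x2 stop mass matrix; A_t' enters only via X_t
    X_t = A_t - (mu + A_t_prime) / tan_beta
    M2 = np.array([[mQ3**2 + m_t**2, m_t * X_t],
                   [m_t * X_t,       mU3**2 + m_t**2]])
    return np.sqrt(np.linalg.eigvalsh(M2))

# Two different (A_t, A_t') pairs with the same X_t = 2800 GeV:
print(stop_masses(2800 + mu / tan_beta, 0.0))
print(stop_masses(2800 + (mu + 500.0) / tan_beta, 500.0))  # same masses
\end{verbatim}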
It should be noted that, because of the different combination of fields in ${\cal L}_{\rm soft}^{\rm NH}$ w.r.t.\ ${\cal L}_{\rm soft}$, the non-holomorphic trilinear couplings $A_f^\prime$ receive additional factors of $\tan \beta$ or $\cot \beta$. As discussed before, the NH trilinear terms also modify the Higgs-sfermion-sfermion couplings. Here we show the coupling of the lightest Higgs boson $h$ to up-type squarks. \begin{align} C(h,\tilde{u}_{i}^{s},\tilde{u}_{j}^{t}) & =\frac{-ie\delta_{ij}}{6M_{W}% c_{W}s_{W}s_{\beta}}\Big[3c_{W}m_{u_{i}}\{A_{ii}^{u}c_{\alpha}+(\mu +A_{ii}^{\prime u})s_{\alpha}\}U_{s,1}^{\tilde{u},i}U_{t,2}^{\tilde{u}% ,i}\nonumber\\ & +\{6c_{\alpha}c_{W}m_{u_{i}}^{2}-M_{W}M_{Z}s_{\alpha+\beta}s_{\beta }(3-4s_{W}^{2})\}U_{s,1}^{\tilde{u},i}U_{t,1}^{\tilde{u},i}\nonumber\\ & +\{6c_{\alpha}c_{W}m_{u_{i}}^{2}-4M_{W}M_{Z}s_{\alpha+\beta}s_{\beta}% s_{W}^{2}\}U_{s,2}^{\tilde{u},i}U_{t,2}^{\tilde{u},i}\nonumber\\ & +3c_{W}m_{u_{i}}\{A_{ii}^{u}c_{\alpha}+(\mu+A_{ii}^{\prime u})s_{\alpha }\}U_{s,2}^{\tilde{u},i}U_{t,1}^{\tilde{u},i}\Big]\label{ChSqSq}% \end{align} The coupling of the charged Higgs boson $H^{-}$ to up-type and down-type squarks is given by \begin{align} C(H^{-},\tilde{u}_{i}^{s},\tilde{d}_{j}^{t}) & =\frac{ieV_{ij}^{CKM}}% {2M_{W}s_{W}s_{\beta}}\Big[m_{u_{i}}U_{s,2}^{\tilde{u},i}U_{t,1}^{\tilde{d}% ,j}\{A_{ii}^{u}+(\mu+A_{ii}^{\prime u})t_{\beta}\}\nonumber\\ & +m_{u_{i}}m_{d_{j}}U_{s,2}^{\tilde{u},i}U_{t,2}^{\tilde{d},j}(1+t_{\beta }^{2})+U_{s,1}^{\tilde{u},i}U_{t,2}^{\tilde{d},j}m_{d_{j}}t_{\beta}\{A_{ii}% ^{d}t_{\beta}+(\mu+A_{ii}^{\prime d})\}\nonumber\\ & +U_{s,1}^{\tilde{u},i}U_{t,1}^{\tilde{d},j}\{m_{u_{i}}^{2}-t_{\beta}% (M_{W}^{2}s_{2\beta}-m_{d_{j}}^{2})t_{\beta}\}\Big] \label{CHpSqSq} \end{align} Here $i,j$ are the generation indices (we assume flavor conservation throughout the paper), $U_{s,s'}^{\tilde{u},i}$ ($U_{t,t'}^{\tilde{d},j}$) is the $2\times 2$ rotation matrix for up-type (down-type) squarks, and we use the shorthand notation $s_x, c_x, t_x$ for $\sin x$, $\cos x$, $\tan x$, respectively, where $\alpha$ is the {\cal CP}-even Higgs mixing angle. The couplings of the {\cal CP}-even heavy Higgs boson $H$ to the up-type squarks can be obtained by replacing $c_{\alpha} \rightarrow s_{\alpha}$, $s_{\alpha} \rightarrow -c_{\alpha}$ and $s_{\alpha+\beta} \rightarrow -c_{\alpha+\beta}$ in \refeq{ChSqSq}. It is interesting to observe that $A_t^\prime$ enters differently into the scalar top masses and into the trilinear Higgs-stop couplings. This will be crucial for our numerical analysis, see the discussion in \refse{sec:NResults}. \section{Higher order corrections in the NHSSM Higgs sector} \label{sec:CompSetup} \subsection{Tree-level structure and higher-order corrections} The MSSM (and thus the NHSSM) Higgs-boson sector consists of two Higgs doublets and predicts the existence of five physical Higgs bosons: the light and heavy ${\cal CP}$-even $h$ and $H$, the ${\cal CP}$-odd $A$, and a pair of charged Higgs bosons, $H^\pm$. At tree level the Higgs sector is described by two parameters: the mass of the $A$~boson, $M_A$, and $\tan \beta = v_2/v_1$, the ratio of the two vacuum expectation values. The tree-level relations, and in particular the tree-level masses, receive large higher-order corrections, see, e.g., \citeres{MHreviews, Draper:2016pys,Slavich:2020zjv} and references therein.
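The different way in which $A_t^\prime$ enters the masses and the couplings can be illustrated numerically. The following sketch evaluates only the left-right (trilinear) structure $A_{tt}^{u} c_\alpha + (\mu + A_{tt}^{\prime u}) s_\alpha$ of \refeq{ChSqSq}, readjusting $A_t$ such that $X_t$, and hence the stop spectrum, stays fixed; the residual variation is the genuine NH effect. All inputs are illustrative, and the decoupling-limit relation $\tan\alpha = -1/\tan\beta$ is assumed for the mixing angle.
\begin{verbatim}
import numpy as np

mu, tan_beta, X_t = 1000.0, 7.0, 2800.0   # illustrative values in GeV
alpha = np.arctan(-1.0 / tan_beta)        # decoupling-limit mixing angle

def trilinear_part(A_t_prime):
    # LR piece of the h-stop-stop coupling at fixed X_t (fixed stop masses)
    A_t = X_t + (mu + A_t_prime) / tan_beta
    return A_t * np.cos(alpha) + (mu + A_t_prime) * np.sin(alpha)

for Atp in (-3000.0, 0.0, 3000.0):
    print(f"A_t' = {Atp:7.1f} GeV:  LR part = {trilinear_part(Atp):9.1f} GeV")
\end{verbatim}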
The lightest MSSM Higgs boson, with mass $M_h$, can be interpreted as the new state discovered at the LHC at $\sim 125 \,\, \mathrm{GeV}$~\cite{Heinemeyer:2011aa}. The present experimental uncertainty at the LHC for $M_h$ is about~\cite{CMS:2020xrn} \begin{align} \deM_h^{\rm exp,today} \sim 140 \,\, \mathrm{MeV}~. \end{align} This can possibly be reduced below the level of \begin{align} \deM_h^{\rm exp,future} \lsim 50 \,\, \mathrm{MeV} \end{align} at future $e^+e^-$~colliders~\cite{dbd}. Similarly, for the mass of the heavy neutral Higgs boson, $M_H$, an uncertainty at the $1\%$ level can be expected at the LHC~\cite{cmsHiggs}. \medskip In the Feynman diagrammatic (FD) approach that we are following in our calculation, the higher-order corrected ${\cal CP}$-even Higgs boson masses are obtained by finding the poles of the $(h,H)$-propagator matrix. The inverse of this matrix is given by \begin{equation} \left(\Delta_{\rm Higgs}\right)^{-1} = - i \left( \begin{array}{cc} p^2 - m_{H,{\rm tree}}^2 + \hat{\Sigma}_{HH}(p^2) & \hat{\Sigma}_{hH}(p^2) \\ \hat{\Sigma}_{hH}(p^2) & p^2 - m_{h,{\rm tree}}^2 + \hat{\Sigma}_{hh}(p^2) \end{array} \right)~. \label{higgsmassmatrixnondiag} \end{equation} Determining the poles of the matrix $\Delta_{\rm Higgs}$ in \refeq{higgsmassmatrixnondiag} is equivalent to solving the equation \begin{equation} \left[p^2 - m_{h,{\rm tree}}^2 + \hat{\Sigma}_{hh}(p^2) \right] \left[p^2 - m_{H,{\rm tree}}^2 + \hat{\Sigma}_{HH}(p^2) \right] - \left[\hat{\Sigma}_{hH}(p^2)\right]^2 = 0\,. \label{eq:proppole} \end{equation} Similarly, in the case of the charged Higgs sector, the corrected Higgs mass is derived from the position of the pole in the charged Higgs propagator (for details see \citere{Frank:2013hba} and references therein), which is defined by: \noindent \begin{equation} p^{2}-m^{2}_{H^{\pm},{\rm tree}} + \hat{\Sigma}_{H^{-}H^{+}}\left(p^{2}\right)=0. \label{eq:proppolech} \end{equation} The (renormalized) Higgs-boson self-energies in \refeqs{eq:proppole} and \ref{eq:proppolech} can be evaluated at the $n$-loop level by an explicit (FD) calculation of the corresponding loop diagrams. As discussed above, in this work we concentrate on the one-loop corrections from the top/stop sector. The FD contributions to the Higgs-boson self-energies can be supplemented by a resummation of leading and subleading logarithmic contributions, which are relevant in the case of heavy scalar tops. For more details, see \citere{Slavich:2020zjv}. This will be relevant for the numerical evaluation presented below in \refse{sec:NResults}. \subsection{Non-holomorphic Contributions to the Higgs Sector} \label{sec:strategy0} The NH soft SUSY-breaking parameters enter into the one-loop prediction of the various (renormalized) Higgs-boson self-energies and tadpoles. As discussed above, they can enter into the scalar fermion masses, where, however, their effect can be compensated by a change in the corresponding holomorphic trilinear coupling. They also enter into the Higgs-sfermion-sfermion couplings, see \refeq{ChSqSq}, which will have the main effect in our analysis. Generic Feynman diagrams that involve non-holomorphic couplings are shown in \reffi{FeynDiagHSelf}. Here we restrict ourselves to quark/squark contributions only. \begin{figure}[htb!]
\begin{center} \unitlength=1.0bp% \begin{feynartspicture}(432,280)(3,2) \FADiagram{} \FAProp(0.,10.)(6.,10.)(0.,){/ScalarDash}{0} \FALabel(3.,8.93)[t]{$\phi$} \FAProp(20.,10.)(14.,10.)(0.,){/ScalarDash}{0} \FALabel(17.,11.07)[b]{$\phi$} \FAProp(6.,10.)(14.,10.)(0.8,){/ScalarDash}{-1} \FALabel(10.,5.73)[t]{$\tilde u_{t}$} \FAProp(6.,10.)(14.,10.)(-0.8,){/ScalarDash}{1} \FALabel(10.,14.27)[b]{$\tilde u_{s}$} \FAVert(6.,10.){0} \FAVert(14.,10.){0} \FADiagram{} \FAProp(0.,10.)(6.,10.)(0.,){/ScalarDash}{0} \FALabel(3.,8.93)[t]{$\phi$} \FAProp(20.,10.)(14.,10.)(0.,){/ScalarDash}{0} \FALabel(17.,11.07)[b]{$\phi$} \FAProp(6.,10.)(14.,10.)(0.8,){/ScalarDash}{-1} \FALabel(10.,5.73)[t]{$\tilde d_{t}$} \FAProp(6.,10.)(14.,10.)(-0.8,){/ScalarDash}{1} \FALabel(10.,14.27)[b]{$\tilde d_{s}$} \FAVert(6.,10.){0} \FAVert(14.,10.){0} \FADiagram{} \FAProp(0.,10.)(6.,10.)(0.,){/ScalarDash}{0} \FALabel(3.,8.93)[t]{$\phi$} \FAProp(20.,10.)(14.,10.)(0.,){/ScalarDash}{0} \FALabel(17.,11.07)[b]{$\phi$} \FAProp(6.,10.)(14.,10.)(0.8,){/ScalarDash}{-1} \FALabel(10.,5.73)[t]{$\tilde u_{t}$} \FAProp(6.,10.)(14.,10.)(-0.8,){/ScalarDash}{1} \FALabel(10.,14.27)[b]{$\tilde d_{s}$} \FAVert(6.,10.){0} \FAVert(14.,10.){0} \FADiagram{} \FAProp(0.,10.)(10.,10.)(0.,){/ScalarDash}{0} \FALabel(5.,8.93)[t]{$\phi$} \FAProp(20.,10.)(10.,10.)(0.,){/ScalarDash}{0} \FALabel(15.,8.93)[t]{$\phi$} \FAProp(10.,10.)(10.,10.)(10.,15.5){/ScalarDash}{-1} \FALabel(10.,16.57)[b]{$\tilde u_{s}, \tilde d_{s}$} \FAVert(10.,10.){0} \FADiagram{} \FAProp(0.,10.)(7.5,10.)(0.,){/ScalarDash}{0} \FALabel(5.,8.93)[t]{$\phi$} \FAProp(7.5,10.)(7.5,10.)(14.,10.){/ScalarDash}{-1} \FALabel(11.,16.)[]{$\tilde u_{s}$} \FAVert(7.5,10.){0} \FADiagram{} \FAProp(0.,10.)(7.5,10.)(0.,){/ScalarDash}{0} \FALabel(5.,8.93)[t]{$\phi$} \FAProp(7.5,10.)(7.5,10.)(14.,10.){/ScalarDash}{-1} \FALabel(11.,16.)[]{$\tilde d_{s}$} \FAVert(7.5,10.){0} \end{feynartspicture} \end{center} \caption{ Generic Feynman diagrams for the Higgs-boson self-energies and tadpoles. $\phi$ denotes any of the Higgs bosons, $h$, $H$, $A$ or $H^\pm$; $u$ stands for $u,c,t$; $d$ stands for $d,s,b$; $\tilde u_{s,t}$ and $\tilde d_{s,t}$ are the six mass eigenstates of up-type and down-type squarks, respectively.} \label{FeynDiagHSelf} \end{figure} In the following, we briefly describe our workflow for the calculation. To calculate the non-holomorphic contributions to the Higgs-boson self-energies, we first created an NHSSM model file for {\em FeynArts}\ using the Mathematica package SARAH~\cite{Staub:2009bi, Staub:2010jh, Staub:2012pb, Staub:2013tta, Staub:2015kfa}. The {\em FeynArts}/{\em FormCalc}~\cite{Hahn:2000kx,Hahn:2001rv,Fritzsche:2013fta,Hahn:1998yk} packages have then been used to analytically calculate the NHSSM contributions to the Higgs-boson self-energies, given as functions of $A_t$ and $A_t^\prime$. For the numerical evaluation with the {\em FeynArts}/{\em FormCalc}\ setup, the {\em FormCalc}\ driver files had to be adjusted from the MSSM to the NHSSM. Concerning the numerical evaluation, for a given value of $A_t^{\rm MSSM}$ in the MSSM and $A_t^\prime$ in the NHSSM, a new value of $A_t^{\rm NHSSM}$ is calculated such that $X_t^{\rm MSSM} = A_t^{\rm MSSM} - \mu\cot \beta$ and $X_t^{\rm NHSSM} = A_t^{\rm NHSSM} - (\mu + A_t^\prime)\cot \beta$ are identical (yielding the same values for the stop masses and mixing, see the discussion in the next section). Using $A_t^{\rm NHSSM}$ and $A_t^\prime$, the NH contribution to the Higgs-boson self-energies is calculated numerically.
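Once the renormalized self-energies are available numerically, the loop-corrected masses follow from locating the zeros of \refeq{eq:proppole}. The following Python fragment is a schematic sketch only: the toy, momentum-independent self-energies stand in for the actual $p^2$-dependent {\em FormCalc}\ output, and all numbers are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Toy inputs in GeV^2: tree-level masses and constant self-energies
mh2_tree, mH2_tree = 90.0**2, 1000.0**2
sig_hh = lambda p2: -4000.0     # placeholders for the FormCalc output
sig_HH = lambda p2: -90000.0
sig_hH = lambda p2: -1500.0

def pole_eq(p2):
    # Determinant condition of Eq. (eq:proppole)
    return ((p2 - mh2_tree + sig_hh(p2)) * (p2 - mH2_tree + sig_HH(p2))
            - sig_hH(p2)**2)

# Bracket and solve for the light CP-even pole
Mh = np.sqrt(brentq(pole_eq, 50.0**2, 200.0**2))
print(f"M_h = {Mh:.2f} GeV")
\end{verbatim}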
To avoid double counting, we subtracted the Higgs-boson self-energy values at $A_t^{\prime}=0$ (i.e.\ $A_t^{\rm MSSM} \equiv A_t^{\rm NHSSM}$) from the obtained results. These numerical values were fed to {\tt FeynHiggs}~\cite{feynhiggs,mhiggslong,mhiggsAEC,mhcMSSMlong,Mh-logresum,Bahl:2016brp, Bahl:2017aev,Bahl:2018qog} using the {\tt FeynHiggs}\ function {\tt FHAddSelf} (where in {\tt FeynHiggs}\ the value $A_t^{\rm MSSM}$ was used). The {\tt FeynHiggs}\ package already contains the complete set of one-loop corrections in the MSSM. Those are supplemented with leading and sub-leading two-loop corrections as well as a resummation of leading and sub-leading logarithmic contributions from the $t/\tilde{t}$ sector. In this way we include the NH contributions from $A_t^\prime$ in the most precise evaluation of the MSSM Higgs-boson masses available. This allows us to readily estimate the effect of the NH soft SUSY-breaking terms. \section{Numerical Results} \label{sec:NResults} \subsection{General strategy} \label{sec:strategy} The leading corrections to $M_h$ from the top/scalar top loops in the NHSSM have been calculated in \citere{Chattopadhyay:2016ivr} and are given by \begin{equation} \Delta m^{2}_{h,t/\tilde{t}}=\frac{3 g_2^2 m^4_t}{8 \pi^2 M^2_W } \left[\ln \left(\frac{m_{\tilde t_{1}} m_{\tilde t_{2} }}{m^2_{t}}\right)+\frac{X_t^2}{m_{\tilde t_{1}} m_{\tilde t_{2} }}\left(1-\frac{X_t^2}{12 m_{\tilde t_1} m_{\tilde t_2}}\right)\right] \label{deltaMh-NHSSM} \end{equation} where $X_t^{\rm NHSSM} =: X_t=A_t-(\mu+A_t^{\prime})\cot \beta$. The non-holomorphic trilinear coupling $A_t^{\prime}$ affects the $X_t$ parameter as well as the scalar top quark masses and mixing angle. A simple change in the value of $A_t^{\prime}$ with fixed $A_t$ will result in a change of $X_t$, which in turn will change $M_h$. However, with this approach we cannot distinguish the pure NHSSM contribution, as the same results can be obtained by a correspondingly changed value of $A_t$ in the MSSM. Moreover, (if SUSY is realized in nature) the scalar top masses and mixing will be known in the future, and the choice of the soft SUSY-breaking parameters has to reproduce their values. Therefore it makes sense to analyze the NH effects in a scheme that allows one to keep the two stop masses and the mixing angle fixed. On the other hand, in the FD approach $A_t^{\prime}$ appears also in the coupling of the Higgs bosons to the scalar top quarks. In order to estimate the contributions to the Higgs-boson mass spectrum coming purely from $A_t^{\prime}$, it is therefore important to fix the value of the $X_t$ parameter (see the discussion in the previous section), shifting the NH effects completely into the change in the Higgs-stop coupling. \subsection{Benchmark scenarios} \label{sec:bench} In our numerical analyses we have followed the approach described above. We evaluated the results in three benchmark scenarios defined in \citere{Bahl:2018zmf} that are used by the ATLAS and CMS collaborations for their interpretation of MSSM Higgs boson searches. These are the $M_h^{125}$ scenario (heavy SUSY particles, effectively the Two Higgs Doublet Model type~II with SUSY restrictions on Higgs-boson masses and couplings), the $M_h^{125}(\tilde\tau)$ scenario (featuring light scalar taus) and the $M_h^{125}(\tilde\chi)$ scenario (featuring light charginos and neutralinos).
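For orientation, the leading correction of \refeq{deltaMh-NHSSM} can be evaluated directly. The short sketch below implements the quoted formula with rounded, illustrative SM inputs and the stop masses of the $M_h^{125}$ scenario (see \refta{input-parameters} below), showing the dependence on $X_t$.
\begin{verbatim}
import numpy as np

# Rounded illustrative inputs in GeV
m_t, M_W, g2 = 173.0, 80.4, 0.65
mst1, mst2 = 1339.0, 1662.0        # stop masses of the Mh125 scenario

def delta_mh2(X_t):
    # Leading one-loop top/stop correction, Eq. (deltaMh-NHSSM)
    pref = 3 * g2**2 * m_t**4 / (8 * np.pi**2 * M_W**2)
    r = X_t**2 / (mst1 * mst2)
    return pref * (np.log(mst1 * mst2 / m_t**2) + r * (1 - r / 12))

for X_t in (0.0, 2500.0, 2800.0):
    print(f"X_t = {X_t:6.0f} GeV:  Delta m_h^2 = {delta_mh2(X_t):9.1f} GeV^2")
\end{verbatim}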
The three scenarios are compatible with the LHC searches for SUSY particles and yield a light {\cal CP}-even Higgs boson with mass around $125 \,\, \mathrm{GeV}$ with SM-like properties. For these scenarios, indirect constraints on the MSSM parameter space, such as the dark matter density, flavor observables and the anomalous magnetic moment of the muon, are deliberately not taken into account~\cite{Bahl:2018zmf}. These potential constraints mainly depend on parameters that are not important for Higgs-boson phenomenology. Alternatively, small variations in the MSSM can invalidate this type of constraint, while leaving the Higgs-boson phenomenology largely unaffected, see the discussion in \citere{Bahl:2018zmf}. We furthermore assume that there is no (relevant) flavor violation. Consequently, the first and second generation scalar fermions have a very mild effect on the predictions of the Higgs masses and mixing. Thus a common soft SUSY-breaking mass $M_{\tilde{f}}=2\,\, \mathrm{TeV}$ and corresponding Higgs-sfermion interaction terms $A_f=0$ are assumed for the first and second generation sfermions in the benchmark scenarios. This is in full agreement with the current exclusion bounds from CMS~\cite{CMS:2022goy} and ATLAS~\cite{ATLAS:2020zms, ATLAS:2021twp}. In \refta{input-parameters} we list the remaining soft SUSY-breaking input parameters, together with the corresponding scalar top masses, for the three scenarios considered in our numerical analysis. \begin{table}[h!] \renewcommand{\arraystretch}{1.2} \centerline{\begin{tabular}{|c||c|c|c|} \hline & $M_h^{125}$ & $M_h^{125} (\tilde{\tau})$ & $M_h^{125} (\tilde{\chi})$ \\\hline $m_{\tilde Q_{3}, \tilde U_{3}, \tilde D_{3}}$ & 1.5 $\,\, \mathrm{TeV}$ & 1.5 $\,\, \mathrm{TeV}$ & 1.5 $\,\, \mathrm{TeV}$ \\ $m_{\tilde L_{3},\tilde E_{3}}$ & 2 $\,\, \mathrm{TeV}$ & 350 $\,\, \mathrm{GeV}$ & 2 $\,\, \mathrm{TeV}$ \\ $\mu$ & 1 $\,\, \mathrm{TeV}$ & 1 $\,\, \mathrm{TeV}$ & 180 $\,\, \mathrm{GeV}$ \\ $M_1$ & 1 $\,\, \mathrm{TeV}$ & 180 $\,\, \mathrm{GeV}$ & 160 $\,\, \mathrm{GeV}$ \\ $M_2$ & 1 $\,\, \mathrm{TeV}$ & 300 $\,\, \mathrm{GeV}$ & 180 $\,\, \mathrm{GeV}$ \\ $M_3$ & 2.5 $\,\, \mathrm{TeV}$ & 2.5 $\,\, \mathrm{TeV}$ & 2.5 $\,\, \mathrm{TeV}$ \\ $X_t$ & 2.8 $\,\, \mathrm{TeV}$ & 2.8 $\,\, \mathrm{TeV}$ & 2.5 $\,\, \mathrm{TeV}$ \\ $A_\tau$ & 0 & 800 $\,\, \mathrm{GeV}$ & 0 \\ $A_b$ & 0 & 0 & 0 \\ \hline $m_{\tilde t_{1}},m_{\tilde t_{2}}$ & 1339,1662 $\,\, \mathrm{GeV}$ & 1339,1662 $\,\, \mathrm{GeV}$ & 1358,1646 $\,\, \mathrm{GeV}$ \\ \hline \end{tabular}} \caption{Selected scenarios in the MSSM parameter space, taken from \citere{Bahl:2018zmf}.} \label{input-parameters} \renewcommand{\arraystretch}{1.0} \end{table} For each scenario, we investigate three different combinations of $M_A$ and $\tan \beta$, taking into account the latest experimental limits from MSSM Higgs-boson searches~\cite{ATLAS:2020zms,CMS:2022goy}: \begin{itemize} \item[P1]: $M_A = 1000 \,\, \mathrm{GeV}$, $\tan \beta = 7$ \item[P2]: $M_A = 1500 \,\, \mathrm{GeV}$, $\tan \beta = 15$ \item[P3]: $M_A = 2000 \,\, \mathrm{GeV}$, $\tan \beta = 45$ \end{itemize} For our numerical analysis, the values of $A_t$ and $A_t^{\prime}$ have been chosen such that the value of $X_t$ remains constant, as given for the three scenarios in \refta{input-parameters}. However, in order to extract the pure NHSSM contributions, we treat $A_b$ and $A_{\tau}$ as independent of $A_t$ (contrary to the definition in \citere{Bahl:2018zmf}) and concentrate only on the top/stop sector.
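The readjustment of $A_t$ described above is elementary; for illustration, the following lines reproduce the $A_t^{\rm NHSSM}$ intervals quoted below from $A_t^{\rm NHSSM} = X_t + (\mu + A_t^\prime)\cot\beta$, for P3 ($\tan\beta = 45$) and $A_t^\prime$ between $-3000 \,\, \mathrm{GeV}$ and $+3000 \,\, \mathrm{GeV}$.
\begin{verbatim}
# A_t readjusted so that X_t (and thus the stop spectrum) is unchanged
def A_t_nhssm(X_t, mu, tan_beta, A_t_prime):
    return X_t + (mu + A_t_prime) / tan_beta

for name, X_t, mu in [("Mh125 / Mh125(stau)", 2800.0, 1000.0),
                      ("Mh125(chi)",          2500.0,  180.0)]:
    lo = A_t_nhssm(X_t, mu, 45.0, -3000.0)   # P3: tan(beta) = 45
    hi = A_t_nhssm(X_t, mu, 45.0, +3000.0)
    print(f"{name}: A_t in [{lo:.0f}, {hi:.0f}] GeV")
\end{verbatim}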
Here it should be noted that the bottom/sbottom and tau/stau contributions can also result in large radiative corrections to the renormalized Higgs-boson self-energies, due to the fact that the corresponding non-holomorphic trilinear couplings $A_b^\prime$ and $A_{\tau}^\prime$ are multiplied by $\tan \beta$. However, a fixed value of $X_b$ ($X_{\tau}$), as our strategy requires, can result in unrealistically large values of $A_b$ ($A_{\tau}$). Furthermore, this can lead to severe numerical instabilities in the evaluation of the Higgs-boson spectra, even for moderate values of $A_b^\prime$ and $A_{\tau}^\prime$, and special care has to be taken to remain in a perturbative and numerically stable regime of the model. Consequently, here we restrict ourselves to the corrections from the top/stop sector (as was done in \citere{Chattopadhyay:2016ivr}), allowing us to pin down the NH effects. We leave a corresponding analysis of the effects of $A_b^\prime$ and $A_\tau^\prime$ for future work. \subsection{NH contributions to renormalized Higgs-boson self-energies} In this subsection we present our results for the NH effects on the renormalized Higgs-boson self-energies in the scenarios defined in the previous subsection. To highlight the non-holomorphic contributions, we define \begin{align} \delta \hat{\Sigma}_{hh} &\eq \hat{\Sigma}_{hh} - \hat{\Sigma}_{hh}^{\rm MSSM}\,, \nonumber \\ \delta \hat{\Sigma}_{hH} &\eq \hat{\Sigma}_{hH} - \hat{\Sigma}_{hH}^{\rm MSSM}\,, \nonumber \\ \delta \hat{\Sigma}_{HH} &\eq \hat{\Sigma}_{HH} - \hat{\Sigma}_{HH}^{\rm MSSM}\,, \nonumber \\ \delta \hat{\Sigma}_{H^{\pm}} &\eq \hat{\Sigma}_{H^{\pm}} - \hat{\Sigma}_{H^{\pm}}^{\rm MSSM}\,, \end{align} and \begin{align} \delta M_h &\eq M_h - M_h^{\rm MSSM}\,, \nonumber \\ \delta M_H &\eq M_H - M_H^{\rm MSSM}\,, \nonumber \\ \delta M_{H^\pm} &\eq M_{H^\pm} - M_{H^\pm}^{\rm MSSM}\,, \end{align} where $\hat{\Sigma}_{hh}^{\rm MSSM}$, $\hat{\Sigma}_{hH}^{\rm MSSM}$, $\hat{\Sigma}_{HH}^{\rm MSSM}$, $\hat{\Sigma}_{H^{\pm}}^{\rm MSSM}$, $M_h^{\rm MSSM}$, $M_H^{\rm MSSM}$ and $M_{H^\pm}^{\rm MSSM}$ correspond to the renormalized Higgs-boson self-energies and Higgs-boson masses evaluated at $A_t^{\prime} = 0$. The contributions of the non-holomorphic trilinear coupling $A_t^{\prime}$ to the renormalized Higgs-boson self-energies, $\delta\hat{\Sigma}_{hh}$, $\delta\hat{\Sigma}_{hH}$, $\delta\hat{\Sigma}_{HH}$ and $\delta\hat{\Sigma}_{H^\pm}$, are shown as functions of $A_t^\prime$ in \reffis{fig:SEh0}, \ref{fig:SEh0HH}, \ref{fig:SEHH} and \ref{fig:SEHp}, respectively. We have varied $A_t^\prime$ in the interval $-3000 \,\, \mathrm{GeV}$ to $+3000 \,\, \mathrm{GeV}$. In each figure we show in the left (right) plot the results for the $M_h^{125}$ ($M_h^{125}(\tilde\chi)$) scenario, for P1 (P2, P3) in blue (orange, violet) dashed lines. The results in the $M_h^{125}(\tilde\tau)$ scenario are effectively identical to the ones obtained in the $M_h^{125}$ scenario, as could be expected from the identical parameter values in the scalar top sector. Consequently, we refrain from showing them separately. On the other hand, as can be seen from these figures, the results for the renormalized Higgs-boson self-energies differ slightly between $M_h^{125}$, $M_h^{125}(\tilde\tau)$ and $M_h^{125} (\tilde{\chi})$. This can be traced back to the different $A_t$ values found in the three scenarios, which in turn stem from the different baseline values of $X_t$ and in particular $\mu$ in $M_h^{125}$, $M_h^{125}(\tilde\tau)$ w.r.t.\ $M_h^{125}(\tilde\chi)$.
As an example, for $\tan \beta = 45$ we find an interval of $A_t^{\rm NHSSM} = 2755 \,\, \mathrm{GeV}$ to $2888 \,\, \mathrm{GeV}$ for the first two benchmarks, whereas $A_t^{\rm NHSSM} = 2437 \,\, \mathrm{GeV}$ to $2570 \,\, \mathrm{GeV}$ in the latter. \begin{figure}[ht!] \vspace{0.5em} \begin{center} \psfig{file=Plots/MH125/SEH0.eps ,scale=0.75,angle=0,clip=} \hspace{0.5cm} \psfig{file=Plots/MH125chi/SEH0.eps ,scale=0.75,angle=0,clip=} \end{center} \caption{ $\delta \hat{\Sigma}_{hh}$ as a function of $A^{\prime}_t$ for $M_h^{125}$ (left) and $M_h^{125} (\tilde{\chi})$ (right plot). The results in $M_h^{125}(\tilde{\tau})$ are effectively identical to $M_h^{125}$ and consequently not shown.} \label{fig:SEh0} \end{figure} \begin{figure}[ht!] \vspace{0.5em} \begin{center} \psfig{file=Plots/MH125/SEh0HH.eps ,scale=0.75,angle=0,clip=} \hspace{0.5cm} \psfig{file=Plots/MH125chi/SEh0HH.eps ,scale=0.75,angle=0,clip=} \end{center} \caption{ $\delta \hat{\Sigma}_{hH}$ as a function of $A^{\prime}_t$ for $M_h^{125}$ (left) and $M_h^{125} (\tilde{\chi})$ (right plot).} \label{fig:SEh0HH} \end{figure} \begin{figure}[ht!] \vspace{1em} \begin{center} \psfig{file=Plots/MH125/SEHH.eps ,scale=0.75,angle=0,clip=} \hspace{0.5cm} \psfig{file=Plots/MH125chi/SEHH.eps ,scale=0.75,angle=0,clip=} \end{center} \caption{ $\delta \hat{\Sigma}_{HH}$ as a function of $A^{\prime}_t$ for $M_h^{125}$ (left) and $M_h^{125} (\tilde{\chi})$ (right plot).} \label{fig:SEHH} \end{figure} \begin{figure}[ht!] \vspace{1em} \begin{center} \psfig{file=Plots/MH125/SEHp.eps ,scale=0.75,angle=0,clip=} \hspace{0.5cm} \psfig{file=Plots/MH125chi/SEHp.eps ,scale=0.75,angle=0,clip=} \end{center} \caption{ $\delta \hat{\Sigma}_{H^{\pm}}$ as a function of $A^{\prime}_t$ for $M_h^{125}$ (left) and $M_h^{125} (\tilde{\chi})$ (right plot).} \label{fig:SEHp} \end{figure} For the renormalized self-energies of the neutral ${\cal CP}$-even Higgs bosons we observe that $\delta\hat{\Sigma}_{HH} > \delta\hat{\Sigma}_{hH} > \delta\hat{\Sigma}_{hh}$. This can be understood from the fact that the new NH soft SUSY-breaking term $A_t^\prime$ couples predominantly to the first Higgs doublet, see \refeq{NonH-TrilinearTerms}. The light {\cal CP}-even Higgs, $h$, has a large contribution from the second Higgs doublet, whereas $H$ has a large component from the first Higgs doublet. Consequently, the largest effects are expected in the coupling of the heavy {\cal CP}-even Higgs to scalar tops. The largest effects on $\hat{\Sigma}_{hh}$ are found in P1 with $\sim \mp 6 \,\, \mathrm{GeV}^2$ for $A_t^\prime = \mp 3000 \,\, \mathrm{GeV}$, respectively, with only a small variation between the three benchmark scenarios. The effect increases to $\mp 1500 \,\, \mathrm{GeV}^2$ for $\hat{\Sigma}_{hH}$, nearly equal for all benchmarks and points. The largest effects are found for $\hat{\Sigma}_{HH}$, reaching up to $\sim -100000 \,\, \mathrm{GeV}^2$ for P3 in the $M_h^{125}$ and $M_h^{125}(\tilde\tau)$ scenarios for $A_t^\prime = 3000 \,\, \mathrm{GeV}$, and up to $\sim -70000 \,\, \mathrm{GeV}^2$ for P3 in the $M_h^{125}(\tilde\chi)$ scenario. For $\hat{\Sigma}_{HH}$ a strong variation between the three $M_A$-$\tan \beta$ combinations can be observed, where larger $M_A$, which in turn allows for larger $\tan \beta$, leads to the most sizable effect. This can be understood from the corresponding $\tan \beta$ enhancement of the $A_t^\prime$ contribution. Very similar effects can be observed for the renormalized charged Higgs-boson self-energy, as shown in \reffi{fig:SEHp}.
Also for the charged Higgs boson, residing largely in the first Higgs doublet, the $A_t^\prime$ coupling contribution is enhanced with $\tan \beta$, see \refeq{CHpSqSq}. \subsection{NH contributions to the Higgs-boson masses} We now turn to the numerical evaluation of the impact of the NH trilinear coupling $A_t^\prime$ on the higher-order corrected Higgs-boson masses themselves. The results shown in the previous subsection were obtained by subtracting the Higgs-boson self-energy values at $A_t^\prime=0$, i.e.\ the pure MSSM contribution. This allows us to directly add these new contributions to the full calculation of the renormalized Higgs-boson self-energies in the MSSM. In order to estimate their effects on the Higgs-boson masses, we fed these results to the code {\tt FeynHiggs}\ using the {\tt FeynHiggs}\ function {\tt FHAddSelf}. This function adds the NHSSM contributions to the renormalized Higgs-boson self-energies in the MSSM, evaluated at the highest level of precision. For details see the discussion at the end of \refse{sec:strategy0} and in \refse{sec:strategy}. The obtained results are shown as functions of $A_t^\prime$ in \reffis{fig:Mh0}, \ref{fig:MHH} and \ref{fig:MHp} for $\deM_h$, $\deM_H$ and $\deM_{H^\pm}$, respectively. As in the previous subsection, we use the interval $A_t^\prime = -3000 \,\, \mathrm{GeV}$ to $+3000 \,\, \mathrm{GeV}$. The order of the plots and the color coding are as in the previous subsection. In particular, we again do not show the results for $M_h^{125}(\tilde\tau)$, as they are effectively identical to the ones in the $M_h^{125}$ scenario. Since the effects on the renormalized Higgs-boson self-energies follow the pattern $\delta\hat{\Sigma}_{HH} > \delta\hat{\Sigma}_{hH} > \delta\hat{\Sigma}_{hh} \sim \delta\hat{\Sigma}_{H^\pm}$, one expects larger effects for the two heavy Higgs-boson masses than for the light {\cal CP}-even Higgs. Only for very large values of $M_A^2 \gg |\delta\hat{\Sigma}_{HH}|, |\delta\hat{\Sigma}_{H^\pm}|$ do the additional contributions from the NH terms become irrelevant for $M_H$ and $M_{H^\pm}$. For $\deM_h$, as shown in \reffi{fig:Mh0}, the NH contributions are in general found to yield very small corrections, as could be expected from the size of $\delta\hat{\Sigma}_{hh}$, see \reffi{fig:SEh0}. They reach up to $\sim -45 \,\, \mathrm{MeV}$ for $A_t^\prime = 3000 \,\, \mathrm{GeV}$ in the $M_h^{125}$ and $M_h^{125}(\tilde\tau)$ scenarios for P1, with negligible changes in P2 and P3. In these two benchmark scenarios the corrections for negative $A_t^\prime$ stay below $+30 \,\, \mathrm{MeV}$. In the $M_h^{125}(\tilde\chi)$ scenario the results look similar, with slightly larger corrections in P1. The fact that P1 exhibits the largest corrections corroborates that this effect on $M_h$, as expected, stems from the contribution of $\delta\hat{\Sigma}_{hh}$. The fact that the corrections turn out to be very small over the whole analyzed parameter space demonstrates that the NH terms {\em do not} alleviate the fact that large stop masses are needed to reach the value of $M_h \sim 125 \,\, \mathrm{GeV}$. On the other hand, effects from $A_b^\prime$ and/or $A_\tau^\prime$ could show a different behavior. We leave this analysis for future work. It should be noted that the size of the numerical effects on $M_h$ found here is substantially smaller than previously claimed in the literature~\cite{Chattopadhyay:2016ivr}.
This can be explained by the fact that we ensured that results including the NH effects are compared to the ``pure MSSM'' while leaving the physics scenario (the stop masses and mixing) unchanged. \begin{figure}[ht!] \begin{center} \psfig{file=Plots/MH125/MH0.eps ,scale=0.75,angle=0,clip=} \hspace{0.5cm} \psfig{file=Plots/MH125chi/MH0.eps ,scale=0.75,angle=0,clip=} \end{center} \caption{ $\delta M_h$ as a function of $A^{\prime}_t$ for $M_h^{125}$ (left) and $M_h^{125}(\tilde{\chi})$ (right plot). The results in $M_h^{125}(\tilde{\tau})$ are effectively identical to $M_h^{125}$ and consequently not shown.} \label{fig:Mh0} \end{figure} The changes in the heavy ${\cal CP}$-even Higgs-boson mass, $M_H$, are shown in \reffi{fig:MHH}. The general pattern follows the size of the corrections for $M_h$, as analyzed in \reffi{fig:Mh0}. However, for $M_H$ the corrections turn out to be in general positive. The largest values reached are $\sim +25 \,\, \mathrm{GeV}$ for $A_t^\prime = +3000 \,\, \mathrm{GeV}$ in P3 in the $M_h^{125}$ and the $M_h^{125}(\tilde\tau)$ scenario. In the $M_h^{125}(\tilde\chi)$ scenario the largest corresponding value is $\sim +18 \,\, \mathrm{GeV}$. For $A_t^\prime = -3000 \,\, \mathrm{GeV}$ the corrections reach up to $+5 \,\, \mathrm{GeV}$ in P3 for the first two benchmarks, and up to $+13 \,\, \mathrm{GeV}$ for the third, with correspondingly smaller values for P2 and P1. \begin{figure}[ht!] \begin{center} \psfig{file=Plots/MH125/MHH.eps ,scale=0.75,angle=0,clip=} \hspace{0.5cm} \psfig{file=Plots/MH125chi/MHH.eps ,scale=0.75,angle=0,clip=} \end{center} \caption{ $\delta M_H$ as a function of $A^{\prime}_t$ for $M_h^{125}$ (left), and $M_h^{125}(\tilde{\chi})$ (right plot).} \label{fig:MHH} \end{figure} As a last step, we show the changes in the charged Higgs-boson mass, $M_{H^\pm}$, in \reffi{fig:MHp}. As can be expected from the NH contributions to the renormalized Higgs-boson self-energies, which are similar for $\delta\hat{\Sigma}_{HH}$ and $\delta\hat{\Sigma}_{H^\pm}$, see \reffis{fig:SEHH} and \ref{fig:SEHp}, the corrections to the two heavy Higgs-boson masses themselves also turn out to be similar. $\deM_{H^\pm}$ follows in sign and size the corrections found for $M_H$. The NH contributions {\em do not} lead to an enhanced splitting between $M_H$ and $M_{H^\pm}$, but only to larger differences between $M_A$ (our input) and the other two heavy Higgs-boson masses. \begin{figure}[ht!] \begin{center} \psfig{file=Plots/MH125/MHp.eps ,scale=0.75,angle=0,clip=} \hspace{0.5cm} \psfig{file=Plots/MH125chi/MHp.eps ,scale=0.75,angle=0,clip=} \end{center} \caption{ $\delta M_{H^{\pm}}$ as a function of $A^{\prime}_t$ for $M_h^{125}$ (left) and $M_h^{125} (\tilde{\chi})$ (right plot).} \label{fig:MHp} \end{figure} \section{Conclusions} \label{sec:conclusions} In this paper we have investigated the effect of non-holomorphic soft SUSY-breaking terms on the Higgs-boson mass predictions in the MSSM, a model dubbed the NHSSM. In order to perform the calculations we generated the {\em FeynArts}\ model file using the Mathematica package SARAH. The model file was then used in the {\em FeynArts}/{\em FormCalc}\ setup (including modifications in the {\em FormCalc}\ driver files to adapt to the NHSSM-specific input) to generate analytical and numerical results for the various renormalized Higgs-boson self-energies. We concentrated on the contributions from the top/scalar top sector. The relevant NH term is the trilinear coupling $A_t^\prime$.
The results for the renormalized Higgs-boson self-energies were then fed into the code {\tt FeynHiggs}\ (using the {\tt FHAddSelf} subroutine) to calculate the predictions for the Higgs-boson masses. We took particular care to analyze the pure NH contribution. The $A_t^\prime$ contributions enter into the scalar top mass matrix via the non-diagonal entry $X_t$, as well as into the Higgs-stop couplings. An analysis simply varying $A_t^\prime$ thus leads to a shift in the scalar top masses, which should be considered a different physics scenario, as the stop masses and mixing angle are expected to be measured in the future (if SUSY is realized). Consequently, an observed effect from a naive variation of $A_t^\prime$ can possibly be mimicked by a change in the holomorphic soft SUSY-breaking terms, in particular the trilinear Higgs-stop coupling, $A_t$: for each choice of $A_t^\prime$ the parameter $A_t$ can be adjusted to yield the same scalar top masses. An observed scalar top mass spectrum thus corresponds to a continuous set of combinations of $A_t$ and $A_t^\prime$ (keeping the other soft SUSY-breaking parameters and $\mu$ fixed). An analysis that simply varies $A_t^\prime$, resulting in shifts in the scalar top masses, can thus not be regarded as realistic. Therefore, in our analysis we required $X_t$ to remain constant under a change of $A_t^\prime$ via an adjustment of $A_t$. In this way the effect of the NH contributions is shifted into the Higgs-stop couplings and can readily be analyzed. For the NH contributions to the renormalized Higgs-boson self-energies we find $\delta\hat{\Sigma}_{hh} < \delta\hat{\Sigma}_{hH} < \delta\hat{\Sigma}_{HH} \sim \delta\hat{\Sigma}_{H^\pm}$. This can be understood from the fact that the new NH soft SUSY-breaking term $A_t^\prime$ couples predominantly to the first Higgs doublet. The light {\cal CP}-even Higgs,~$h$, has a large contribution from the second Higgs doublet, whereas $H$, as well as the charged Higgs boson, has its largest component from the first Higgs doublet. Consequently, the largest effects are expected in the coupling of the heavy {\cal CP}-even Higgs, or the charged Higgs, to scalar tops. For the numerical analysis we chose three LHC benchmark scenarios ($M_h^{125}$, $M_h^{125}(\tilde\tau)$ and $M_h^{125}(\tilde\chi)$), and in each scenario three combinations of $(M_A, \tan \beta)$ that are allowed by current MSSM Higgs-boson searches at the LHC, $(1000 \,\, \mathrm{GeV}, 7), (1500 \,\, \mathrm{GeV}, 15), (2000 \,\, \mathrm{GeV}, 45)$, called P1, P2, P3, respectively. $A_t^\prime$ has been varied from $-3000 \,\, \mathrm{GeV}$ to $+3000 \,\, \mathrm{GeV}$. The results in the $M_h^{125}$ and the $M_h^{125}(\tilde\tau)$ scenario are effectively identical due to their identical settings in the scalar top sector. The results in the $M_h^{125}(\tilde\chi)$ scenario, however, can differ substantially from the other two scenarios. For $\deM_h$ the NH contributions are in general found to yield very small corrections, contrary to previous claims in the literature. They reach up to $\sim -60 \,\, \mathrm{MeV}$ in the analyzed parameter space, where P1 exhibits the largest corrections. Since the corrections turn out to be very small over the whole analyzed parameter space, we find that the NH terms {\em do not} alleviate the fact that large stop masses are needed to reach the value of $M_h \sim 125 \,\, \mathrm{GeV}$. The situation might change for the corrections involving $A_b^\prime$ and/or $A_\tau^\prime$, which we leave for future work.
The numerical effects for $M_H$ and $M_{H^\pm}$ were found to be in general positive, reaching values of up to $+25 \,\, \mathrm{GeV}$. Despite the fact that the NH contributions entering via $A_t^\prime$ are small for $M_h$, a full analysis of supersymmetric extensions of the SM should include the possibility of NH contributions. We aim for an inclusion of these effects into the code {\tt FeynHiggs}. \subsection*{Acknowledgments} We thank F.~Staub for helpful discussions on SARAH and the model file generation for {\em FeynArts}. The work of S.H.\ has received financial support from the grant PID2019-110058GB-C21, funded by MCIN/AEI/10.13039/501100011033 and by ``ERDF A way of making Europe'', and in part from the grant IFT Centro de Excelencia Severo Ochoa CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033.
\section{Introduction} Recently, the study of complex networks has emerged across a wide range of disciplines and research areas. The World Wide Web has revolutionized the way we deal with everything in daily life, and computer scientists have sought ways to control the complexity and enormous growth of the Internet. The scale of social network data has grown beyond what social scientists can predict or control. The biological interactions in a cell's metabolism are expected to define its pathways and could provide insights to biologists [13]. A new-born science is urgently needed so that we can manipulate networks before networks manipulate our needs [8]. \\ The study of complex networks has evolved since the study of randomly generated graphs by Erdős and Rényi [4], and the appearance of large-scale network data has unleashed tremendous work in multi-disciplinary areas covering both the real and the virtual world [13]. Efforts were put into describing the properties of random graphs in large networks, which raised more and more technical questions to be answered. To mimic real networks, a randomly produced stylized network model is adopted in order to generalize the resulting conclusions and properties onto real networks. Simple models fail to capture the full complexity of a realistic network's structure and features, yet they offer a strong mathematical basis upon which future investigations can be built. \\ In the next sections of this paper, we survey the ``small-world phenomenon'' and a few related problems. We start with the famous social experiment of the psychologist Stanley Milgram, which captures the main aspects of the phenomenon [11], [14]; we review a few of the models based on random graphs that try to explain the phenomenon [7], [9], [12], [15], [16]; and then we mention recent work that has applied the traditional insights of these models to large data sets extracted from well-known web applications [2], [10]. Lastly, some suggested further extensions to small-world networks are discussed, along with future works and their relevance to this field. \section{Small-world phenomenon} The small-world phenomenon has recently been a hot topic of both theoretical and practical research, and it has been given huge attention by researchers from most, if not all, disciplines. The term ``small world'', closely linked to ``short chains of acquaintances'' or the ``six degrees of separation'' [5][6][16], refers to the graph of the human social network, where nodes represent people and an edge between two nodes indicates that the two corresponding persons know each other on a first-name basis [8]. The graph is described as a ``small world'' because any random pair of nodes is separated by a relatively small number of intermediaries, generally fewer than six. Although the first-name basis rule is a bit naive as an edge definition, the resulting graph behaves like a real-world network. \\ Small-world networks are of great importance because they avoid the limitations of the two extreme network types: random networks and regular lattices. Small-world networks have proved useful as frameworks for the study of interaction networks of complex systems [1].\\ The most important goal of small-world studies is to test the hypothesis that a qualitatively similar structure is shared by a variety of networks across different fields.
A common property of large networks is the existence of short paths between most pairs of nodes, even though the nodes are highly clustered. Nodes can also be reached by navigation without any global understanding of the whole network. Such properties have contributed to describing the behavior of large-scale social networks and, additionally, have given important insights for the internal design of decentralized peer-to-peer systems. \\ \subsection{Milgram's Experiment} Stanley Milgram, the famous social psychologist, conducted an experiment in the 1960s to measure people's connectivity in the USA and to test the small-world property [11][14]. The experiment asked how probable it is that two arbitrarily selected individuals from a large population know each other in person. The target person was a Boston stockbroker in the state of Massachusetts, and 296 arbitrarily selected individuals from Nebraska and Boston were chosen as ``starting persons'' and asked to generate acquaintance chains to the stockbroker. The selected group was given a document describing the study, together with the target's name. Each sender was asked to choose a recipient whom they thought would help carry the message to the target person in the shortest way possible. The concept of a ``roster'' was introduced to prevent the message from looping back to a previous sender and to track the number of nodes the message reached. \\ The results of the experiment were quite astonishing. Sixty-four chains made their way to the target person. The mean number of intermediaries was 5.2, with a median of 6. Chains starting in Boston were shorter than those starting in Nebraska. Additional experiments by Korte and Milgram showed that these numbers are quite stable [14].\\ Some critiques of Milgram's experiments point to the inability of this setup to be generalized to larger networks. Varying the information given about the target person might affect the decisions taken by senders; here psychological and sociological factors come into play.\\ \section{Small-world based empirical models} \subsection{Watts and Strogatz's Model} Watts and Strogatz came up with a model that aims to explain the small-world property. After Bollobás and de la Vega [3] introduced the theorem proving that the path length grows only logarithmically with the number of nodes, \emph{O(log n)}, in small-world networks, Watts and Strogatz felt that something was still missing. Their model considers small-world networks to be highly structured, with a relatively small number of random links added within. Long-range connections in this model play a crucial role in creating short paths between all nodes [15]. \\ The model generates a graph by rewiring edges between nodes with a certain probability. This probability allows a transition between regular networks (p=0) and random networks (p=1). The model starts by generating a ring of n connected nodes (average degree k). Then each edge is rewired with probability p, with the new endpoint chosen at random. The clustering coefficient \emph{$(C_p)$} is a measure reflecting the fraction of a node's neighbours that are connected to each other, compared to all possible connections within the neighbourhood, averaged over all nodes [15].
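To make the construction concrete, the following Python sketch (using the \texttt{networkx} library; the parameter values are illustrative) generates Watts--Strogatz graphs across the rewiring probability $p$ and reports the clustering coefficient and the average path length, reproducing the qualitative picture described above.
\begin{verbatim}
import networkx as nx

n, k = 1000, 10                  # ring of n nodes, each joined to k neighbours
for p in (0.0, 0.01, 0.1, 1.0):  # regular ring -> small world -> random
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    print(f"p = {p:5.2f}:  C = {C:.3f},  L = {L:.2f}")
\end{verbatim}
Already for small $p$ the path length drops to nearly its random-graph value while the clustering stays high, which is precisely the small-world regime.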
\\ For a regular graph (p = 0), the model yields a highly clustered network (\emph{$C\sim$}3/4) with path length \emph{$L\sim n/2k$}, where \emph{$n \gg k \gg \ln(n) \gg 1$} should be chosen. For a random graph (p = 1), the resulting network is poorly clustered (\emph{$C\sim k/n$}) with a small path length \emph{$L\sim \ln(n)/\ln(k)$}. Their research also included three empirical real-world examples of small-world networks, and their main finding was that the average path length of the chosen examples was only slightly higher than for the random model, while the clustering coefficient was clearly much higher than for the random model. Using their results, they reasoned how infectious diseases spread rapidly in small-world societies.\\ A drawback of Watts and Strogatz's model is that it cannot be generalized to all small-world networks. Extended works by other scientists tried to fill in the gaps. \\ \subsection{Classes of Small-World Networks} Due to the limited scope of the Watts and Strogatz model, a new explanation was needed. Looking at the dilemma from another perspective, Amaral et al. classified small-world networks into three classes, reporting an empirical study of real-world networks [1]. The study covers mainly the statistical properties of real-world networks, and it was sufficient to establish the existence of three classes of small-world networks: scale-free, broad-scale, and single-scale [1].\\ \subsubsection{Scale-free} Networks characterized by a vertex connectivity distribution that decays as a power law.\\ \subsubsection{Broad-scale} Networks characterized by a connectivity distribution that has a power-law region followed by a sharp cut-off.\\ \subsubsection{Single-scale} Networks characterized by a connectivity distribution with a fast-decaying tail.\\ The research also answered why such a taxonomy exists, identifying two types of constraining factors. The first factor is the aging of the vertices: with time, old nodes stop being as effective in the network; an example is the actors network. The second factor is the cost of adding new links to a vertex, which is limited by the vertex capacity; an example is the airport network, where adding too many links is pricey and not practical.\\ \subsection{Kleinberg's Algorithmic Perspective} Kleinberg's way of explaining small-world properties was close to that of Watts and Strogatz, but with slight differences [7]. Kleinberg used an \emph{n x n} grid of nodes to represent the network, and to add the small-world flavour, a number of long-range connection edges were added rather than rewired. After adding the edges, the probability of connecting two random vertices (v,w) is proportional to \emph{$1/d(v,w)^{q}$}, where q is the clustering exponent [9].\\ Kleinberg proved theorems quantifying the delivery time of decentralized algorithms, which generalized the results of Bollobás and de la Vega [3] on the logarithmic behavior of short paths in networks. He proved that the time needed is not always logarithmic but depends on other parameters. A new parameter was introduced ($\alpha$ \textgreater= 0) that controls the long-range connections. Interestingly, the delivery time varies with $\alpha$ as follows:\\ \subsubsection{For 0 \textless $\alpha$ \textless 2} the delivery time of any decentralized algorithm in the grid-based model is $\Omega$ ($n^{(2-\alpha)/3}$).
\section{Recent real-world empirical experiments} \subsection{Dodds, Muhammad, and Watts Experiment} Dodds et al. tried to mimic Milgram's experiment with electronic messaging systems. Around 60,000 randomly selected e-mail users attempted to reach 18 target persons in 13 different countries.\\ The findings were quite unexpected. Successful social chains passed mostly through ties of intermediate to weak strength [12]. This finding indicates that the effect of highly connected hubs is negligible. The attrition of message chains showed that messages could make it to the target in a median of five to seven steps. A notable fact about the attrition rate was the constancy of its value for a certain period of time. The 384 completed chains (out of 24,163) had an average chain length of 4.05. This number was considered misleading by the authors, which led them to evaluate the experiment using new metrics. \\ The general results showed that network structure alone is not enough to explain the network: the actions and perceptions of the individuals are big contributors. \\ \subsection{Leskovec et al. on a Large Instant-Messaging Network} Leskovec et al. presented a study in 2008 that captured a month of communication activity within the Microsoft Messenger instant-messaging system [10]. The data set contained about 30 billion conversations among 240 million people, and a graph was constructed containing 180 million nodes and 1.3 billion undirected edges. The network represents accounts that were active during June 2006. \\ The resulting average path length among users was 6.6, with a median of 6. The results showed that users with similar age, language, and location tend to communicate with each other, while users of different genders tend to communicate more often and in longer conversations [10]. Communication tends to decrease as distance increases; however, chains spanning relatively long distances tend to carry longer conversations [10]. \\ \subsection{Bakhshandeh's Degrees of Separation in Twitter} Bakhshandeh et al. performed an interesting analysis to identify the degree of separation between two Twitter users. They used a new search technique that provides near-optimal solutions with far fewer requests than greedy approaches [2]. The average separation path length between any two random Twitter users was 3.43, which required 67 requests to Twitter, while the near-optimal method found paths of average length 3.88 using only 13.3 requests on average. Twitter's degree of separation of 3.43 is surprisingly small; the reason the authors claim is that it is indicative of changing social norms in the modern connected world. \\
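Since Bakhshandeh et al.'s request-efficient search is non-trivial, a brute-force reference point may be useful. The sketch below (ours; \texttt{networkx} is assumed, with a small built-in graph standing in for a real follower graph) computes the exact degree of separation by breadth-first search, which is the baseline their heuristics approximate cheaply:

\begin{verbatim}
from collections import deque
import networkx as nx

def degree_of_separation(G, source, target):
    # Exact shortest-path length via BFS.
    if source == target:
        return 0
    seen, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in G.neighbors(u):
            if v not in seen:
                seen[v] = seen[u] + 1
                if v == target:
                    return seen[v]
                queue.append(v)
    return None  # no connecting chain exists

G = nx.karate_club_graph()   # stand-in for a real social graph
print(degree_of_separation(G, 0, 33))
\end{verbatim}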
\section{Further Extensions and future works} There is no doubt that small-world networks are, and will remain, a hot research topic due to their nature. In this section, we propose some ideas for future extensions that might offer solutions to unanswered or vaguely answered questions. \\ Introducing machine learning techniques to small-world networks is, in our opinion, a promising direction. Network construction should be smart enough for the resulting networks to be not only interpretable but also controllable. Small-world networks could be built to mimic the brain's neural map, which might give us more insight into how the human brain works. ML techniques can also be used to preserve the ``six degrees of separation'' rule, or even to break it, depending entirely on the application. \\ Introducing local reference nodes in such networks could be another new idea to implement. Reference nodes would have some regional knowledge about the surrounding nodes. They could control the ``hubs'' and determine how new links are distributed among reference nodes; routers are one example. The assumed uniqueness of a node is somewhat unrealistic for some applications, which shows the need for introducing such a new concept. \\ \section{Conclusion} In this paper, we discussed the famous phenomenon of small-world networks and its importance in various areas. A few of the small-world driven models were surveyed. Then recent real-world experiments in the context of complex networks were described. Finally, further extensions and future works were proposed. In the future, we will try to implement the suggested ideas practically on a given data set. Taking into account their pros and cons, the ideas will then be evaluated against other state-of-the-art implementations.
\section{Introduction} The two-way relay network model is a cooperative communication network that consists of two nodes $1$ and $2$ that want to communicate with each other but have no direct link between them \cite{ZLWL,RW}. The intermediate relay node $R$ assists the communication between the two nodes. Communication takes place in two phases, a multiple access (MAC) phase and a broadcast phase, as in Fig. \ref{FF1}. Transmissions are assumed perfectly synchronised, and the communications in the MAC and broadcast phases are orthogonal. The relay node decodes the sum of the messages from the two nodes in the uplink and broadcasts it to the two nodes in the downlink. However, a part of the information of each message is leaked to the relay node. When the relay node is untrusted, it is necessary to keep both messages secret from the relay node. That is, the following task is required. When Nodes $1$ and $2$ have the messages $M_1$ and $M_2$, the relay node decodes the modulo sum $M_1+M_2$ without obtaining any information about $M_1$ and $M_2$. We call this task secure computation-and-forward. The preceding papers \cite{Ren,He1,He2,Vatedka,Zewail} discussed secure computation-and-forward by using lattice codes with computation-and-forward. However, a lattice code has a large implementation cost because the number of constellation points increases when the size of the code increases. Even though multilevel implementations have been proposed \cite{HY}, it is better to employ a linear code with fixed constellation points. Moreover, it is desirable that the employed code admit encoding and decoding with small computational complexity. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{SecureRepeaterFigure.png} \end{center} \caption{MAC phase and broadcast phase.} \label{FF1} \end{figure} For this aim, as a typical scenario, we focus on a multiple access channel and address using the channel $n$ times when the two users' input alphabets are given as $\mathbb{F}_q$ and their constellation points are fixed. Then, we fix a sequence of general linear codes in $\mathbb{F}_q^n$. Similarly to \cite{Expo,VH}, using the sequence of linear codes and attaching a universal2 hash function, we construct a sequence of codes with strong secrecy against the untrusted relay. For practical use, we can choose error correcting codes with an efficient decoder, e.g., LDPC codes, as the general linear codes. Then, we derive the amount of leaked information in the finite-length setting, which is required to guarantee secrecy in an implemented system. Recently, Takabe et al. \cite{Takabe} addressed this kind of Gaussian multiple access channel with $\mathbb{F}_2$ when the sum of the messages of both nodes is decoded. Using the density evolution method, they derived the threshold of the standard deviation of the noise for spatially coupled LDPC codes with belief propagation decoding, which implies a threshold for the decodable rate. Hence, it is useful to apply these error correcting codes to our secure code construction. In this paper, we derive the asymptotic transmission rate of this practical code. Then, we apply our finite-length secrecy evaluation to this practical code. As another application of secure computation-and-forward, we consider butterfly network coding, which is a coding method that efficiently transmits information in a crossing pattern, as in Fig. \ref{F8}. However, when the secrecy of the messages is required, conventional butterfly network coding has the following problem.
The intermediate node $V_2$ can obtain information about the messages. Also, the receiver nodes $V_5$ and $V_6$ can each obtain information about the other message. Although secure network coding is known, it cannot realize this kind of secrecy in the butterfly network. When we apply secure computation-and-forward to the communications to nodes $V_2, V_5,$ and $V_6$, the desired secrecy is realized. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7, angle=-90]{butter3} \end{center} \caption{Butterfly network coding.} \label{F8} \end{figure} The remainder of this paper is organized as follows. Using a linear code for computation-and-forward, Section \ref{S2} constructs our secure code that has no information leakage to the relay node. Section \ref{S3} gives security analyses in the finite-length setting. Section \ref{S5} numerically evaluates the asymptotic achievable rate in the cases of random coding and a spatially coupled LDPC code with the BPSK scheme. \section{Code Construction}\label{S2} When we use the channel $n$ times, we discuss a secure protocol to exchange the messages $M_1$ and $M_2$ without information leakage to the relay $R$. In the MAC phase protocol, given an arbitrary map $\sigma$ from $\mathbb{F}_q$ to $\mathbb{R}$ or $\mathbb{C}$, we assume the following MAC channel $W$ \begin{align} \textbf{Y}_R=h_1 \sigma (\mathbf{X}_1) + h_2 \sigma (\mathbf{X}_2) +\textbf{Z}_R, \label{23-1} \end{align} where $h_1, h_2 \in \mathbb{R}$ or $\mathbb{C}$ are the channel fading coefficients and $\textbf{Z}_R \sim \mathcal{N}(0, N_0\textbf{I}_n)$ is a vector of jointly Gaussian random variables. Here, $\mathbf{Y}_R$ is an $n$-dimensional real or complex vector, and $\mathbf{X}_1$ and $\mathbf{X}_2$ are $n$-dimensional vectors over $\mathbb{F}_q$. As a typical example, we often employ the BPSK scheme, i.e., $q=2$. Then, we fix the map $\sigma$ from $x\in \mathbb{F}_2$ to $\mathbb{R}$ as $(-1)^x$. Our multiple access channel is then given as the map $W: (x_1,x_2)\mapsto \phi_{h_1 \sigma(x_1)+h_2 \sigma(x_2) ,N_0}$, where $\phi_{a,N_0}$ is the Gaussian distribution with mean $a$ and variance $N_0$. Now, we assume that node $i$ encodes the information $\mathbf{V}_i \in \mathbb{F}_q^{k_n}$ instead of $M_i$ and that the relay $R$ recovers $\mathbf{V}_1+\mathbf{V}_2 \in \mathbb{F}_q^{k_n}$.
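To illustrate the channel model \eqref{23-1} with the BPSK map, the following Python sketch (ours; all parameter values are illustrative) simulates one block of the MAC phase:

\begin{verbatim}
import numpy as np

# Minimal sketch of the MAC phase with sigma(x) = (-1)^x (BPSK, q = 2).
# Assumed toy values: h1 = h2 = 1.0, N0 = 1.0, block length n = 8.
rng = np.random.default_rng(0)
n, h1, h2, N0 = 8, 1.0, 1.0, 1.0

x1 = rng.integers(0, 2, n)              # word of node 1 over F_2
x2 = rng.integers(0, 2, n)              # word of node 2 over F_2
sigma = lambda x: (-1.0) ** x           # fixed constellation points
z = rng.normal(0.0, np.sqrt(N0), n)     # Gaussian noise Z_R

y = h1 * sigma(x1) + h2 * sigma(x2) + z  # signal Y_R received at the relay
print((x1 + x2) % 2)                     # modulo sum the relay should decode
print(y.round(2))
\end{verbatim}

Note that with $h_1=h_2=h$, the noiseless received value $\pm 2h$ identifies $x_{1,i}+x_{2,i}=0$, while the value $0$ identifies $x_{1,i}+x_{2,i}=1$; this is why the relay can aim at the modulo sum directly.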
In this case, both nodes often employ the same linear map $G: \mathbb{F}_q^{k_n} \to \mathbb{F}_q^n$ with rank $k_n$ as an encoder, and relay $R$ employs a decoder $D$, which is a map from $\mathbb{R}^n$ or $\mathbb{C}^n$ to $\mathbb{F}_q^{k_n}$. Here, relay $R$ is assumed to know the coefficients $h_1$, $h_2$, the map $\sigma$, and $N_0$. Now, we discuss the scheme with shift vectors $e_1, e_2 \in \mathbb{F}_q^n$ as follows. The encoder $\Phi_{G,e_1,1}^n$ of node 1 maps $\mathbf{V}_1 \in \mathbb{F}_q^{k_n}$ to the element $G(\mathbf{V}_1)+e_1$ of the alphabet, and the encoder $\Phi_{G,e_2,2}^n$ of node 2 maps $\mathbf{V}_2 \in \mathbb{F}_q^{k_n}$ to the element $G(\mathbf{V}_2)+e_2$ of the alphabet. In the decoding process, the relay $R$ obtains $\mathbf{V}_R:=D(\mathbf{Y}_R -h_1\sigma (e_1)-h_2\sigma (e_2))$. In the broadcast phase protocol, the relay $R$ sends the information $\mathbf{V}_R$ to nodes 1 and 2, which can be achieved by conventional channel coding. Since node 1 has the information $\mathbf{V}_1$, node 1 recovers the information $\mathbf{V}_2$ as $\mathbf{V}_R- \mathbf{V}_1$. Similarly, node 2 recovers the information $\mathbf{V}_1$. Our interest is in the information leakage to the relay $R$. Now, we discuss a secure protocol to exchange the messages $M_1$ and $M_2$ without information leakage to $R$. To discuss this problem, we consider a slightly different protocol. When $\mathbf{V}_1$ and $\mathbf{V}_2$ are uniform random numbers, we have the relations $I(\mathbf{V}_R;\mathbf{V}_i)=0$ for $i=1,2$, i.e., $\mathbf{V}_R$ is independent of $\mathbf{V}_i$ for $i=1,2$. However, the relay $R$ obtains the information \begin{align} \textbf{Y}_R=h_1 \sigma (G (\mathbf{V}_1)+e_1) +h_2 \sigma (G (\mathbf{V}_2)+e_2)+\textbf{Z}_R, \end{align} which is more informative than $\mathbf{V}_R$. Further, the variable $\textbf{Y}_R$ is correlated with $\mathbf{V}_i$ for $i=1,2$. This information leakage can be removed when nodes 1 and 2 apply a linear hash function $F:\mathbb{F}_q^{k_n}\rightarrow\mathbb{F}_q^{k_n-\bar{k}_n}$ whose rank is $k_n-\bar{k}_n$. Now, we prepare the auxiliary random variable $L_i \in \mathbb{F}_q^{\bar{k}_n}$ for $i=1,2$. We choose linear functions $F_1:\mathbb{F}_q^{k_n-\bar{k}_n} \to \mathbb{F}_q^{k_n}$ and $F_2:\mathbb{F}_q^{\bar{k}_n} \to \mathbb{F}_q^{k_n}$ such that $F \circ F_1$ is the identity map on $\mathbb{F}_q^{k_n-\bar{k}_n}$ and the image of the map $(m_1,l_2)\in \mathbb{F}_q^{k_n-\bar{k}_n} \times \mathbb{F}_q^{\bar{k}_n} \mapsto F_1(m_1)+F_2(l_2)$ is $\mathbb{F}_q^{k_n}$. Then, the encoders are given as $\Phi_{G,e_1,1}^n(F_1(M_1)+ F_2(L_1))$ and $\Phi_{G,e_2,2}^n(F_1(M_2)+ F_2(L_2))$. That is, the random variable $\mathbf{V}_i$ is given as $F_1(M_i)+ F_2(L_i)$. The decoder is given as $F (D(\mathbf{Y}_R -h_1\sigma (e_1)-h_2\sigma (e_2)))$. The relay $R$ broadcasts it. Then, we denote the above protocol with a linear map $G$ and shift vectors $e_1, e_2$ of block length $n$ by $\Phi_{G ,e_1,e_2}^n$. In summary, the encoding and decoding processes are illustrated in Fig. \ref{FT}. \begin{figure}[h] \begin{center} \includegraphics[width=0.9\linewidth]{MHRepeater.png} \end{center} \caption{Encoding and decoding process.} \label{FT} \end{figure}
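Before turning to the secrecy analysis, the following small Python sketch (ours; the sizes $k_n=4$, $\bar{k}_n=2$ and the particular choice of $F$, $F_1$, $F_2$ are illustrative, not the randomly chosen universal2 hash used in the proofs) shows the algebra of the randomized message encoding $\mathbf{V}_i = F_1(M_i)+F_2(L_i)$ over $\mathbb{F}_2$:

\begin{verbatim}
import numpy as np

# Toy sketch over F_2 (q = 2) with assumed sizes k_n = 4, kbar_n = 2.
# Here F is a coordinate projection, F_1 the matching embedding, and
# F_2 fills the remaining coordinates, so that F o F_1 is the identity.
k, kbar = 4, 2
rng = np.random.default_rng(1)

F  = lambda v: v[: k - kbar]                             # F_2^k -> F_2^{k-kbar}
F1 = lambda m: np.concatenate([m, np.zeros(kbar, int)])  # embed the message
F2 = lambda l: np.concatenate([np.zeros(k - kbar, int), l])

m = rng.integers(0, 2, k - kbar)    # message M_i
l = rng.integers(0, 2, kbar)        # auxiliary randomness L_i
v = (F1(m) + F2(l)) % 2             # V_i = F_1(M_i) + F_2(L_i)

assert np.array_equal(F(v), m)      # F o F_1 is the identity map
print(m, l, v)
\end{verbatim}

In the actual construction, $F$ is drawn at random from a universal2 family, which is what removes the residual leakage; the fixed projection above only illustrates the algebra.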
\section{Secrecy Analysis with Transmission Rate}\label{S3} In this section, we derive a finite-length bound for the leaked information when ${\cal X}=\mathbb{F}_q$. To discuss the information leakage for $M_i$, we introduce the security criterion for $i=1,2$ \begin{align} & d(\Phi_{G,e_1,e_2}^n)_i \nonumber \\ :=& \| P_{M_i \mathbf{Y}| E_1=e_1, E_2=e_2 ,n}- P_{\mathbf{Y}| E_1=e_1, E_2=e_2 ,n}\times P_{M_i,\mathop{{\rm mix}}\nolimits}\|_1 \nonumber \\ =& \sum_{m_i} P_{M_i} (m_i) \int_{{\cal Y}^n} \Big| p_{\mathbf{Y}|M_i, E_1=e_1, E_2=e_2,n }(\mathbf{y}|m_i)\nonumber \\ &\quad - p_{\mathbf{Y}| E_1=e_1, E_2=e_2,n }(\mathbf{y}) \Big| d \mathbf{y}, \end{align} where $P_{M_i,\mathop{{\rm mix}}\nolimits}$ expresses the uniform distribution of $M_i$. In the following, for the security analysis, the shift vectors $e_1$ and $e_2$ are chosen randomly in advance. So, they are treated as random variables and are denoted by $E_1$ and $E_2$. We consider the case when $G$ is chosen as a code with an efficient decoder. For the finite-length analysis, we prepare additional notations and information quantities used in this paper. Given a joint distribution $P_{Y,Z_1, Z_2}$ over the product of a finite discrete set ${\cal Z}_1\times {\cal Z}_2$ and a continuous set ${\cal Y}$, we denote the conditional probability density function of $P_{Y|Z_1,Z_2}$ by $p_{Y|Z_1,Z_2}(y|z_1,z_2)$. Then, we define the conditional distribution $P_{Y| Z_1}$ over the continuous set ${\cal Y}$ conditioned on the discrete set ${\cal Z}_1$ by the conditional probability density function $p_{Y|Z_1}(y|z_1):= \sum_{z_2\in {\cal Z}_2} P_{Z_2}(z_2) p_{Y|Z_1,Z_2}(y|z_1,z_2)$. Then, we define the R\'{e}nyi conditional mutual information $I_{1+s}^{\downarrow}( Y;Z_1|Z_2)$ by \begin{align} & \frac{s}{1+s} I_{1+s}^{\downarrow}( Y;Z_1|Z_2)\nonumber \\ :=& \log \sum_{z_2} P_{Z_2}(z_2) \int_{{\cal Y}} \Big( \sum_{z_1} P_{Z_1|Z_2}(z_1|z_2) p_{Y|Z_1,Z_2}(y|z_1,z_2)^{1+s} \Big)^{\frac{1}{1+s}} dy \end{align} for $s>0$. Taking the limit $s \to 0$, we have \begin{align} \lim_{s \to 0} I_{1+s}^{\downarrow}( Y;Z_1|Z_2) = I( Y;Z_1|Z_2), \end{align} where $I( Y;Z_1|Z_2)$ expresses the conditional mutual information. The concavity of the function $x \mapsto x^{\frac{1}{1+s}}$ yields \begin{align} e^{\frac{s}{1+s} I_{1+s}^{\downarrow}( Y;Z_1|Z_2,Z_3)} \le e^{\frac{s}{1+s} I_{1+s}^{\downarrow}( Y;Z_1,Z_2|Z_3)}.\label{HY2} \end{align} Given a channel $P_{Y|Z_1, Z_2,Z_3}$ from the finite discrete set ${\cal Z}_1\times {\cal Z}_2\times {\cal Z}_3$ to a continuous set ${\cal Y}$, when the random variables $Z_1,Z_2,Z_3$ are generated subject to the uniform distributions, we have a joint distribution among $Y,Z_1,Z_2,Z_3$. In this case, we denote the mutual information by $I(Y;Z_1)[P_{Y|Z_1,Z_2,Z_3}]$. This rule is applied to the R\'{e}nyi conditional mutual information and the conditional mutual information as $I_{1+s}^{\downarrow}( Y;Z_1|Z_2)[P_{Y|Z_1,Z_2,Z_3}]$ and $I( Y;Z_1|Z_2)[P_{Y|Z_1,Z_2,Z_3}]$, respectively. In the following, we apply this notation to the channel $W$ defined by \begin{align} Y=h_1 \sigma (X_1) + h_2 \sigma (X_2) +{Z}_R, \label{23-1M} \end{align} where $Z_R$ is a Gaussian variable with mean $0$ and variance $N_0$ on $\mathbb{R}$ or $\mathbb{C}$. Here, the choice of the random variables $Z_1,Z_2,$ and $Z_3$ depends on the context.
\begin{theorem}\label{TTA} Given a map $G=g$ and using $B_{i,n,s,1}:= 3 q^{s (n-k_n-\bar{k}_n)} e^{s n I_{\frac{1}{1-s}}^{\downarrow} (Y; X_i )[W] }$, we have \begin{align} \mathbb{E}_{E_1,E_2} d(\Phi_{g,E_1,E_2}^n)_i \le \min_{s \in [0,\frac{1}{2}]} B_{i,n,s,1}. \label{LLLA} \end{align} \end{theorem} To improve the bound \eqref{LLLA}, we focus on an ensemble of injective linear codes $G: \mathbb{F}_q^{k_n} \to \mathbb{F}_q^{n}$. We consider the permutation-invariance of the ensemble as follows. We say that the ensemble $G$ is permutation-invariant when ${\rm Pr}( \mathbf{x} \in \mathop{{\rm Im}}\nolimits G )= {\rm Pr}( \pi(\mathbf{x}) \in \mathop{{\rm Im}}\nolimits G )$ for any $\mathbf{x} \in {\cal X}^n$ and any permutation $\pi$ of $\{1, \ldots, n\}$. In addition, we often consider the following condition. We say that the ensemble $G$ is universal2 when the ensemble $G$ satisfies the condition \begin{align} {\rm Pr}\{ x \in \mathop{{\rm Im}}\nolimits G\}\le q^{k_n-n}\label{GDE} \end{align} for any $x (\neq 0) \in \mathbb{F}_q^n$ \cite{Carter,Krawczyk}. Let $\vec{\lambda}$ be an integer-valued vector $(\lambda_{t})_{t \in \mathbb{F}_q}$ such that $\sum_{t \in \mathbb{F}_q} \lambda_t=n$, $\lambda_0 \neq n$, and $\lambda_t \ge 0$. We denote the set of such integer-valued vectors by ${\cal T}_n(\mathbb{F}_q)$. For a code $g$, we define \begin{align} N(\vec{\lambda}, g ):=| \{ \mathbf{x} \in \mathop{{\rm Im}}\nolimits g \mid \vec{n}(\mathbf{x}) = \vec{\lambda}\}|, \label{TYO4} \end{align} where $\vec{n}(\mathbf{x})=(n_t(\mathbf{x}))_{t\in\mathbb{F}_q}$ and $n_{t}(\mathbf{x})$ expresses the number of occurrences of $t$ in the vector $\mathbf{x}$. Then, using this number, we define the values \begin{align} A&:=\max_{\vec{\lambda} (\neq \vec{0}_n)\in {\cal T}_n(\mathbb{F}_q)} A(\vec{\lambda}),\label{TYO}\\ A(\vec{\lambda}) &:= \mathbb{E}_{G} \frac{N(\vec{\lambda},G)q^{n-k_n}}{ {n \choose \vec{\lambda}} },\label{TYO2} \end{align} where ${n \choose \vec{\lambda}}$ expresses the multinomial coefficient, and $\vec{0}_n$ expresses the vector satisfying $(\vec{0}_n)_0=n$ and $(\vec{0}_n)_t=0$ for $t (\neq 0)\in \mathbb{F}_q$. When the ensemble $G$ is universal2, that is, when the ensemble $G$ has no deviation, we have $A\le 1$. Hence, $A$ expresses the degree of deviation. \begin{theorem}\label{TTY} When the ensemble $G$ is permutation-invariant, \begin{align} \mathbb{E}_{E_1,E_2,G} d(\Phi_{G,E_1,E_2}^n)_i \le \min_{s \in [0,\frac{1}{2}]} B_{i,n,s,2} [A], \label{LLL} \end{align} where $B_{i,n,s,2}[A]$ is defined to be \begin{align} 3 q^{-s (k_n+\bar{k}_n)} e^{s n I_{\frac{1}{1-s}}^{\downarrow}(Y; X_1 ,X_2 )[W] } +3 A^s q^{-s \bar{k}_n} e^{s n I_{\frac{1}{1-s}}^{\downarrow} (Y; X_i )[W] }. \label{LLL7} \end{align} \end{theorem} Although Theorem \ref{TTY} assumes permutation-invariance, we do not actually need this condition, for the following reason. We focus on a code $g$. When the ensemble $G$ is given by the application of a random permutation to the code $g$, we have $\mathbb{E}_{G} d(\Phi_{G,E_1,E_2}^n)_i = d(\Phi_{g,E_1,E_2}^n)_i$ because the amount of leaked information $d(\Phi_{g,E_1,E_2}^n)_i$ does not change under the application of a permutation. As a comparison between \eqref{LLLA} and \eqref{LLL}, we have the following lemma, whose proof is given in Appendix \ref{AA}. \begin{lemma}\label{GUR} We have \begin{align} 2 B_{i,n,s,1} \ge B_{i,n,s,2}[A]. \label{LLL6} \end{align} That is, the bound \eqref{LLL} is smaller than twice the bound \eqref{LLLA}.
\end{lemma} In the following, we derive the achievable rates based on the upper bounds \eqref{LLLA} and \eqref{LLL}. For this aim, we introduce the parameter $r_2$ for our sequence of code ensembles, satisfying \begin{align} \lim_{n\to \infty}\frac{\log A}{n}= \frac{r_2}{\log q}. \end{align} Also, we introduce the parameter $r_1$ for the sacrifice rate as \begin{align} \lim_{n\to \infty}\frac{\bar{k}_n}{n} &= \frac{r_1}{\log q} . \end{align} Since $N(\vec{\lambda},G) \le {n \choose \vec{\lambda}}$, $r_2$ is bounded as \begin{align} r_2 \le (\log q)\lim_{n\to \infty} \frac{n-k_n}{n} =\log q - r_0. \label{ALO} \end{align} \begin{table}[htpb] \caption{Summary of rates} \label{T1} \begin{center} { \renewcommand\arraystretch{1.7} \begin{tabular}{|c|l|l|} \hline $r_0 $ & Rate of the error correcting code&$\lim_{n\to \infty}\frac{k_n}{n} \log q $\\ \hline $r_1$ & Sacrifice rate &$\lim_{n\to \infty}\frac{\bar{k}_n}{n} \log q $\\ \hline $r_2 $ & Rate of $A$ & $\lim_{n\to \infty}\frac{\log A}{n} \log q $\\ \hline \end{tabular}} \end{center} \end{table} Then, \eqref{LLLA} implies \begin{align} \lim_{n\to \infty}\frac{-1}{n}\log (\min_{s \in [0,\frac{1}{2}]} B_{i,n,s,1}) \ge \max_{s \in [0,\frac{1}{2}]} s(r_1+r_0 -\log q- I_{\frac{1}{1-s}}^{\downarrow}(Y; X_1 )[W]), \label{HTRE} \end{align} and \eqref{LLL} implies \begin{align} \lim_{n\to \infty}\frac{-1}{n}\log (\min_{s \in [0,\frac{1}{2}]} B_{i,n,s,2} [A]) \ge &\max_{s \in [0,\frac{1}{2}]} \min( s(r_0+r_1- I_{\frac{1}{1-s}}^{\downarrow}(Y; X_1 ,X_2 )[W]),\nonumber \\ & s(r_1-r_2- I_{\frac{1}{1-s}}^{\downarrow}(Y; X_1 )[W])). \label{HTYE} \end{align} Thus, the condition for the exponential decay of the upper bound $\min_{s \in [0,\frac{1}{2}]} B_{i,n,s,1}$ is the condition $r_1 > \log q +I(Y; X_1 )[W]- r_0$. That is, when we use the upper bound $\min_{s \in [0,\frac{1}{2}]} B_{i,n,s,1}$ and the rate of the error correcting code is fixed to $r_0$, the achievable rate is the value \begin{align} r_0-(\log q +I(Y; X_1 )[W]- r_0)= 2 r_0-\log q -I(Y; X_1 )[W], \label{NFR} \end{align} which is called the 1st type of rate. Also, the condition for the exponential decay of $\min_{s \in [0,\frac{1}{2}]} B_{i,n,s,2}[A]$ is the condition $r_1 > \max(I(Y; X_1,X_2 )[W]- r_0,I(Y; X_1 )[W]+r_2 )$. When we use the upper bound $\min_{s \in [0,\frac{1}{2}]} B_{i,n,s,2}[A]$ and the rate of the error correcting code is fixed to $r_0$, the achievable rate is the value \begin{align} r_0-\max(I(Y; X_1,X_2 )[W]- r_0,I(Y; X_1 )[W]+r_2 )= \min( 2 r_0-I(Y; X_1,X_2 )[W],r_0-r_2-I(Y; X_1 )[W]), \label{NFR2} \end{align} which is called the 2nd type of rate. Since it is not so easy to calculate $r_2$ in the general case, we have the following lower bound on \eqref{NFR2} by substituting $r_2=0$: \begin{align} \min( 2 r_0-I(Y; X_1,X_2 )[W],r_0-I(Y; X_1 )[W]), \label{Achi} \end{align} which is called the 3rd type of rate. In fact, when these rates are negative, the achievable rates are zero. However, to see the mathematical behavior of the above differences, we address these values directly in this paper. \section{Examples}\label{S5} \begin{figure}[t] \begin{center} \includegraphics[scale=0.9]{repeater7} \end{center} \caption{Achievable rates with BPSK when the variance $N_0=1$. The base of the logarithm is chosen to be $e$. The horizontal axis expresses the intensity $h$. The vertical axis expresses the transmission rate. The solid black line expresses the 2nd type of rate with random coding, given in \eqref{H13}.
This value is positive for $h \ge 2.443$ and approaches $\frac{1}{2}\log 2$. The dashed blue line expresses the 1st type of rate with random coding, given in \eqref{H17}. This value is positive for $h \ge 2.518$ and approaches $\frac{1}{2}\log 2$. The black points express the 3rd type of rate with the $(d_l,d_r,L)$ spatially coupled LDPC code with sufficiently large $L$, whose rate is \eqref{H14}. The blue points express the 1st type of rate with the $(d_l,d_r,L)$ spatially coupled LDPC code with sufficiently large $L$, whose rate is \eqref{H18}. According to these formulas, the value is negative when $h$ is less than a certain threshold. In this case, the secure transmission of $M_1+M_2$ is impossible with these methods.} \label{F1} \end{figure}% \subsection{Random coding with universal$2$ condition}\label{SeR} For simplicity, we ignore the decoding time and discuss the asymptotic transmission rate. Then, the generating matrix $G \in \mathbb{F}_q^{n \times k_n}$ is assumed to be generated subject to the universal$2$ condition \eqref{GDE}. We employ the channel decoding for the degraded channel \cite{Ullah2}. That is, the decoder is given as \begin{align} \mathop{{\rm argmax}}\limits_{\mathbf{v} \in \mathbb{F}_q^{k_n}} \sum_{i=1}^n \log \hat{\phi}_{(G(\mathbf{v})-e_1-e_2)_i,N_0}(Y_i), \end{align} where $\hat{\phi}_{x,N_0}(y):=\sum_{x'\in \mathbb{F}_q} \frac{1}{q}{\phi}_{ h_1 \sigma(x')+h_2 \sigma(x-x'),N_0}(y)$. Then, the optimal $k_n$ satisfies \cite{Ullah} \begin{align} r_0=\lim_{n\to \infty} \frac{k_n}{n}\log q =& I(Y;X_1+X_2)[W]. \end{align} Then, the 1st type of rate is \begin{align} 2 r_0-\log q -I(Y; X_1 )[W] =2 I(Y;X_1+X_2)[W]-\log q -I(Y; X_1 )[W].\label{former} \end{align} Since $r_2=0$, the 2nd type of rate equals the 3rd type of rate, which is calculated as \begin{align} 2 I(Y;X_1+X_2)[W] -I(Y; X_1,X_2 )[W]. \label{LODT} \end{align} See Appendix \ref{AC}. Next, we consider the BPSK scheme and assume that $h_1=h_2=h$. Then, $\hat{\phi}_{x,N_0}$ is simplified as $\hat{\phi}_{0,N_0}(y)= ({\phi}_{0,N_0}( y)+ {\phi}_{2h,N_0}( y ))/2$ and $\hat{\phi}_{1,N_0}(y)={\phi}_{h,N_0}( y)$. Then, by using the differential entropy $H$, $r_0$ is calculated to be $I(h):= H(\frac{\phi_{0,N_0}+2 \phi_{h,N_0}+\phi_{2h,N_0}}{4}) -\frac{1}{2} H(\frac{\phi_{0,N_0}+\phi_{2h,N_0}}{2})-\frac{1}{2} H(\phi_{h,N_0})$, and the 2nd type of rate \eqref{LODT} is \begin{align} 2I(Y;X_1+X_2)[W] -I(Y; X_1 ,X_2 )[W] =& H(\frac{\phi_{0,N_0}+2 \phi_{h,N_0}+\phi_{2h,N_0}}{4}) \nonumber \\ &- H(\frac{\phi_{0,N_0}+\phi_{2h,N_0}}{2}).\label{H13} \end{align} In this case, since $I(Y; X_1 )[W]= H(\frac{\phi_{0,N_0}+2 \phi_{h,N_0}+\phi_{2h,N_0}}{4})- H(\frac{\phi_{0,N_0}+\phi_{2h,N_0}}{2})$, due to \eqref{former}, the 1st type of rate \eqref{NFR} is \begin{align} H(\frac{\phi_{0,N_0}+2 \phi_{h,N_0}+\phi_{2h,N_0}}{4}) - H(\phi_{h,N_0})- \log 2.\label{H17} \end{align} That is, the difference between the 1st and 2nd types of rates is the value $\log 2 - ( H(\frac{\phi_{0,N_0}+\phi_{2h,N_0}}{2})- H(\phi_{h,N_0}))$. This value becomes very small when $h$ is sufficiently large in comparison with $N_0$, as shown in Fig. \ref{F1}, because the two distributions $\phi_{0,N_0}$ and $\phi_{2h,N_0}$ can be distinguished with high probability.
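The differential entropies appearing in \eqref{H13} and \eqref{H17} are one-dimensional integrals and are easy to evaluate numerically. The following Python sketch (ours; SciPy is assumed, and $h=3$, $N_0=1$ are illustrative values) computes the 1st and 2nd types of rates for the BPSK case:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

h, N0 = 3.0, 1.0   # assumed toy values
phi = lambda y, a: np.exp(-(y - a)**2 / (2*N0)) / np.sqrt(2*np.pi*N0)

def H(p):  # differential entropy -int p log p (natural logarithm)
    f = lambda y: -p(y) * np.log(p(y)) if p(y) > 0 else 0.0
    return quad(f, -30, 30, limit=200)[0]

p4 = lambda y: (phi(y, 0) + 2*phi(y, h) + phi(y, 2*h)) / 4  # density of Y
p2 = lambda y: (phi(y, 0) + phi(y, 2*h)) / 2
p1 = lambda y: phi(y, h)

rate_2nd = H(p4) - H(p2)               # 2nd type of rate
rate_1st = H(p4) - H(p1) - np.log(2)   # 1st type of rate
print(rate_1st, rate_2nd)              # both approach (1/2) log 2 for large h
\end{verbatim}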
\subsection{LDPC code}\label{SeL} In fact, it is not so easy to calculate the coefficient $A$ for a real code, e.g., an LDPC code. In this case, we employ the finite-length security formula \eqref{LLLA} of Theorem \ref{TTA}. Hence, we focus on the 1st type of rate \eqref{NFR} as the asymptotic transmission rate with the security guarantee in the finite-length setting. Since it is not so easy to calculate the 2nd type of rate, we address the 3rd type of rate as its lower bound. When the difference between the 1st and the 3rd types of rates is small, we can conclude that the 2nd type of rate is close to the 1st type of rate. Now, we consider the BPSK case with $h_1=h_2=h$ when $G$ is a $(d_l,d_r,L)$ spatially coupled LDPC code with large $L$. Following the preceding papers \cite{KRU,Takabe}, we employ belief propagation in the decoder. Applying density evolution to the channel $x \mapsto \hat{\phi}_{x,N_0}$, the paper \cite{Takabe} calculated the transmission rate $I_{sc}(h)$ of the code. By using the difference $\Delta I(h):=I(h)-I_{sc}(h)$, the 3rd type of rate \eqref{Achi} is calculated to be \begin{align} \lim_{n\to \infty} \frac{k_n-\bar{k}_n}{n}\log 2 =& H(\frac{\phi_{0,N_0}+2 \phi_{h,N_0}+\phi_{2h,N_0}}{4}) \nonumber \\ &- H(\frac{\phi_{0,N_0}+\phi_{2h,N_0}}{2})-2 \Delta I(h).\label{H14} \end{align} The 1st type of rate \eqref{NFR} is \begin{align} &2 I(Y;X_1+X_2)[W]-\log 2 -I(Y; X_1 )[W]-2 \Delta I(h)\nonumber\\ =& H(\frac{\phi_{0,N_0}+2 \phi_{h,N_0}+\phi_{2h,N_0}}{4}) - H(\phi_{h,N_0})- \log 2 -2 \Delta I(h).\label{H18} \end{align} Fig. \ref{F1} shows that the difference between the 1st and the 3rd types of rates is small. \section{Conclusion}\label{S8} In order to realize secure transmission via an untrusted relay, we have derived a code that securely transmits the modulo sum (the XOR when $q=2$) of the messages of two nodes via a multiple access channel. With this code, the relay cannot obtain any information about the message of either node and can decode only the modulo sum of the messages of the two nodes. Since our code is constructed by a simple combination of an existing linear code and a universal2 hash function, it is realizable in practice. To apply this system to real secure satellite communication, we need to study the following items. First, we need to evaluate the performance of the proposed LDPC codes in a finite-length setting, which requires computer simulation. Then, the result of this computer simulation needs to be combined with the security evaluation based on \eqref{LLLA} of Theorem \ref{TTA}. Further, we need to consider the case when the relay does not inform both nodes of the correct values of the strength $h$ and of $N_0$. That is, there is a possibility that the true value of $h/N_0$ is larger than the value reported by the relay. In this case, both nodes can estimate an upper bound on $h/N_0$ by using the spatial conditions. Then, for the evaluation of the decoding error probability, both nodes need to use the value of $h/N_0$ reported by the relay, while for the evaluation of the amount of leaked information, both nodes need to use the upper bound on $h/N_0$. For a real implementation, the security evaluation based on this observation needs to be simulated numerically. Finally, we should remark that Theorems \ref{TTA} and \ref{TTY} cannot be shown by a simple application of the result for the wire-tap channel \cite{Wyner}, for the following reason. Consider the secrecy of the message $M_1$ of node $1$. If node $2$ transmitted the elements of $\mathbb{F}_q^n$ with equal probability, the channel from node $1$ to relay $R$ would be given as the $n$-fold extension of the degraded channel $x \mapsto \sum_{x'\in \mathbb{F}_q}\frac{1}{q} \phi_{h_1 \sigma(x)+ h_2 \sigma(x-x'),N_0}$, which would enable us to directly apply the result for the wire-tap channel.
However, node $2$ transmits elements of the image of $G$, which is a proper subset of $\mathbb{F}_q^n$, with equal probability. Hence, the channel from node $1$ to relay $R$ does not have the above simple form. Therefore, we need a more careful discussion. Finally, we point out that our proofs of Theorems \ref{TTA} and \ref{TTY} are still valid even when the channel is a general multiple access channel whose input alphabet is given as $\mathbb{F}_q \times \mathbb{F}_q$, because our proofs employ only the properties of a general multiple access channel. \section*{Acknowledgments} We are grateful to Dr. Satoshi Takabe for giving the numerical values for Fig. \ref{F1}. The work reported here was supported in part by the JSPS Grant-in-Aid for Scientific Research (A) No.17H01280, (B) No. 16KT0017, (C) No. 16K00014, and Kayamori Foundation of Informational Science Advancement. \appendices \section{Proof of Lemma \ref{GUR}}\label{AA} For the proof of Lemma \ref{GUR}, we prepare the following lemma. \begin{lemma}\label{LOE} When the conditional distribution $P_{Z_2|Z_1}$ is the uniform distribution on ${\cal Z}_2$, we have \begin{align} e^{s I_{\frac{1}{1-s} }^{\downarrow}( Y;Z_1,Z_2)} \le |{\cal Z}_2|^s e^{s I_{\frac{1}{1-s} }^{\downarrow}( Y;Z_1)} \label{HY2L} \end{align} for $s \in [0, \infty)$. \end{lemma} \begin{proofof}{Lemma \ref{LOE}} We have \begin{align*} & e^{s I_{\frac{1}{1-s} }^{\downarrow}( Y;Z_1,Z_2)} = \int_{{\cal Y}} \Big( \sum_{z_1} P_{Z_1}(z_1) \sum_{z_2} P_{Z_2|Z_1}(z_2|z_1) p_{Y|Z_1,Z_2}(y|z_1,z_2)^{\frac{1}{1-s}} \Big)^{1-s} dy \\ = & \int_{{\cal Y}} \Big( \sum_{z_1} P_{Z_1}(z_1) |{\cal Z}_2|^{\frac{s}{1-s}} \sum_{z_2} P_{Z_2|Z_1}(z_2|z_1)^{\frac{1}{1-s}} p_{Y|Z_1,Z_2}(y|z_1,z_2)^{\frac{1}{1-s}} \Big)^{1-s} dy \\ \le & \int_{{\cal Y}} \Big( \sum_{z_1} P_{Z_1}(z_1) |{\cal Z}_2|^{\frac{s}{1-s}} \Big( \sum_{z_2} P_{Z_2|Z_1}(z_2|z_1) p_{Y|Z_1,Z_2}(y|z_1,z_2) \Big)^{\frac{1}{1-s}} \Big)^{1-s} dy \\ = & |{\cal Z}_2|^{s} \int_{{\cal Y}} \Big( \sum_{z_1} P_{Z_1}(z_1) p_{Y|Z_1}(y|z_1)^{\frac{1}{1-s}} \Big)^{1-s} dy \\ =& |{\cal Z}_2|^s e^{s I_{\frac{1}{1-s} }^{\downarrow}( Y;Z_1)}. \end{align*} \end{proofof} \begin{proofof}{Lemma \ref{GUR}} Since $N(\vec{\lambda},G) \le {n \choose \vec{\lambda}}$, $A$ is bounded as $A \le q^{n-k_n}$. Hence, \begin{align*} 3 q^{s (n-k_n-\bar{k}_n)} e^{s n I_{\frac{1}{1-s}}^{\downarrow} (Y; X_1 )[W] } \ge 3 A^s q^{-s \bar{k}_n} e^{s n I_{\frac{1}{1-s}}^{\downarrow} (Y; X_1 )[W] }. \end{align*} Also, Lemma \ref{LOE} guarantees that the first term of the RHS of \eqref{LLL7} is not greater than $3 q^{s (n-k_n-\bar{k}_n)} e^{s n I_{\frac{1}{1-s}}^{\downarrow} (Y; X_1 )[W] }$. Hence, we obtain \eqref{LLL6}. \end{proofof} \section{Proof of Theorems \ref{TTA} and \ref{TTY}}\label{S7} \noindent{\bf Step (1):}\quad First, we notice the relations \begin{align} \mathbf{V}_i= F_1(M_i)+F_2(L_i), \quad \mathbf{X}_i= G(\mathbf{V}_i)+E_i. \end{align} Since $(E_1,E_2)$ is subject to the uniform distribution on $\mathbb{F}_q^{2(n -k_n)}$, even when the map $G$ is fixed to be $g$, $(\mathbf{X}_1,\mathbf{X}_2)$ is subject to the uniform distribution on ${\cal X}^{2n}$. The relay receives the random variable $\mathbf{Y} \in {\cal Y}^n$, which depends only on $(\mathbf{X}_1,\mathbf{X}_2)$. Once $G$ is fixed, we have the Markov chain $(F, M_1,M_2,L_1,L_2,E_1,E_2)- (\mathbf{X}_1,\mathbf{X}_2) -\mathbf{Y}$. Due to \eqref{23-1}, the relay node can decode $M_1+M_2$ and $L_1+L_2$ from $\mathbf{Y}$ by using the knowledge of $E_1+E_2$ for the coset. Then, we focus on the randomness of the choice of $F$.
Then, for $s \in [0,\frac{1}{2}]$, we have \begin{align} & \mathbb{E}_{E_1, E_2,F} \| P_{M_1 \mathbf{Y}| E_1, E_2 ,n}- P_{\mathbf{Y}| E_1, E_2 ,n}\times P_{M_1,\mathop{{\rm mix}}\nolimits}\|_1 \nonumber \\ \stackrel{(a)}{\le} & 3 q^{-s \bar{k}_n} e^{s I_{\frac{1}{1-s}}^{\downarrow} (\mathbf{Y}; \mathbf{V}_1 | G=g, E_1, E_2 ) } \nonumber\\ \stackrel{(b)}{\le} & 3 q^{-s \bar{k}_n} e^{s I_{\frac{1}{1-s}}^{\downarrow} (\mathbf{Y}; \mathbf{V}_1, E_1 | G=g, E_2 ) } \nonumber\\ \stackrel{(c)}{=} & 3q^{-s \bar{k}_n} e^{s I_{\frac{1}{1-s}}^{\downarrow} (\mathbf{Y}; \mathbf{X}_1 | G=g, E_2 ) } \label{TY10} \\ \stackrel{(d)}{\le} & 3 q^{s(n-k_n- \bar{k}_n)} e^{s I_{\frac{1}{1-s}}^{\downarrow}(\mathbf{Y}; \mathbf{X}_1 ) } \nonumber\\ = & 3 q^{s(n-k_n- \bar{k}_n)} e^{sn I_{\frac{1}{1-s}}^{\downarrow}(Y; X_1 )[W] } , \label{TUY} \end{align} where $(a)$ follows from Theorem 6 of \cite{Hayashi2013} and the universal2 condition for $F$; $(b)$ follows from \eqref{HY2}; $(c)$ follows from the fact that the pair $(\mathbf{V}_i,E_i)$ and $\mathbf{X}_i$ uniquely determine each other; and $(d)$ will be shown in the next step. Hence, we obtain \eqref{LLLA} in Theorem \ref{TTA}. Now, we proceed to the proof of Theorem \ref{TTY}. \begin{align} &3q^{-s \bar{k}_n} e^{s I_{\frac{1}{1-s}}^{\downarrow} (\mathbf{Y}; \mathbf{X}_1 | G, E_2 ) } \nonumber\\ \stackrel{(e)}{\le} & 3q^{-s (k_n+\bar{k}_n)} e^{s I_{\frac{1}{1-s}}^{\downarrow}(\mathbf{Y}; \mathbf{X}_1 ,\mathbf{X}_2 ) } +3A^{s} q^{-s \bar{k}_n} e^{s I_{\frac{1}{1-s}}^{\downarrow}(\mathbf{Y}; \mathbf{X}_1 ) } \nonumber\\ = & 3q^{-s (k_n+\bar{k}_n)} e^{sn I_{\frac{1}{1-s}}^{\downarrow}(Y; X_1 ,X_2 )[W] } +3A^{s} q^{-s \bar{k}_n} e^{sn I_{\frac{1}{1-s}}^{\downarrow}(Y; X_1 )[W] } , \label{TUY2} \end{align} where $(e)$ will be shown in the next step. Combining \eqref{TY10} and \eqref{TUY2}, we obtain \eqref{LLL} in Theorem \ref{TTY}. \noindent{\bf Step (2):}\quad Now, we show $(d)$ in \eqref{TUY}. Given a code $g:\mathbb{F}_q^{k_n}\to \mathbb{F}_q^{n}$, for an element $\mathbf{x} \in \mathbb{F}_q^n$, we uniquely have a coset $[\mathbf{x}]$ and its representative $e \in \mathbb{F}_q^n$. We denote the map from an element $\mathbf{x} \in \mathbb{F}_q^n$ to the representative $e \in \mathbb{F}_q^n$ by $g_2$. We define the set ${\cal S}(g_2,e):=\{ \mathbf{x} \in \mathbb{F}_q^n \mid g_2(\mathbf{x})=e\}$. Enlarging the domain of the summation, we have \begin{align} & \sum_{ \mathbf{x}_2' \in {\cal S}(g_2,g_2(\mathbf{x}_2)) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \nonumber \\ \le & \sum_{ \mathbf{x}_2'} P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \nonumber \\ =& q^{n}P_{Y|\mathbf{X}_1=\mathbf{x}_1 } (\mathbf{y}).\label{HY12B} \end{align} Using \eqref{HY12B}, for $s \in [0,\frac{1}{2}]$, we have the following relations, where each step is explained below.
\begin{align} & e^{s I_{\frac{1}{1-s}}^{\downarrow} (\mathbf{Y}; \mathbf{X}_1 | G=g, E_2 ) } \nonumber\\ = & \mathbb{E}_{E_2} \int_{{\cal Y}^n} \Big( q^{-n} \sum_{\mathbf{x}_1} P_{Y|\mathbf{X}_1=\mathbf{x}_1 , E_2 ,G=g}(\mathbf{y})^{\frac{1}{1-s}} \Big)^{1-s} d \mathbf{y} \nonumber \\ = & \mathbb{E}_{E_2} \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} \Big( q^{-k_n} \sum_{ \mathbf{x}_2 \in {\cal S}(g_2,E_2) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big)^{\frac{1}{1-s}} \Big)^{1-s} d \mathbf{y} \nonumber \\ \stackrel{(a)}{\le} & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} \mathbb{E}_{ E_2} \Big( q^{-k_n} \sum_{ \mathbf{x}_2 \in {\cal S}(g_2,E_2) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big)^{\frac{1}{1-s}} \Big)^{1-s} d \mathbf{y} \nonumber \\ = & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} \mathbb{E}_{E_2} q^{-\frac{k_n}{1-s}} \Big( \sum_{ \mathbf{x}_2 \in {\cal S}(g_2,E_2) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big( \nonumber \\ &\sum_{ \mathbf{x}_2' \in {\cal S}(g_2,E_2) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \Big)^{\frac{s}{1-s}} \Big) \Big)^{1-s}d \mathbf{y} \nonumber \\ = & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} q^{-\frac{k_n}{1-s}} \Big( q^{k_n-n} \sum_{ \mathbf{x}_2 } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big( \sum_{ \mathbf{x}_2' \in {\cal S}(g_2,g_2(\mathbf{x}_2)) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \Big)^{\frac{s}{1-s}} \Big) \Big)^{1-s}d \mathbf{y} \nonumber \\ \stackrel{(b)}{\le} & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} q^{-\frac{k_n}{1-s}} \Big( q^{k_n-n} \sum_{ \mathbf{x}_2 } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big( q^{n} P_{Y|\mathbf{X}_1=\mathbf{x}_1 } (\mathbf{y}) \Big)^{\frac{s}{1-s}} \Big) \Big)^{1-s} d \mathbf{y} \nonumber \\ = & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} q^{-\frac{k_n}{1-s}} \Big( q^{k_n} P_{Y|\mathbf{X}_1=\mathbf{x}_1 } (\mathbf{y}) \Big( q^{n} P_{Y|\mathbf{X}_1=\mathbf{x}_1 } (\mathbf{y}) \Big)^{\frac{s}{1-s}} \Big) \Big)^{1-s} d \mathbf{y} \nonumber \\ = & q^{s (n-k_n)} \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} P_{Y|\mathbf{X}_1=\mathbf{x}_1 } (\mathbf{y})^{\frac{1}{1-s}} \Big)^{1-s}d \mathbf{y} \nonumber \\ =& q^{s (n-k_n)} e^{s I_{\frac{1}{1-s}}^{\downarrow} (\mathbf{Y}; \mathbf{X}_1 ) } , \end{align} where $(a)$ follows from the concavity of $x \mapsto x^{1-s}$ with $x\ge 0$, and $(b)$ follows from \eqref{HY12B} together with the monotonicity of $x \mapsto x^{\frac{s}{1-s}}$. \noindent{\bf Step (3):}\quad Now, we show $(e)$ in \eqref{TUY2}.
The definition of $A$ implies \begin{align} &\mathbb{E}_{G} \sum_{ \mathbf{x}_2' (\neq \mathbf{x}_2)\in {\cal S}(G_2,G_2(\mathbf{x}_2)) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \nonumber \\ =& \sum_{\vec{\lambda}\neq \vec{0}_n} \sum_{ \mathbf{x}_2': \vec{n}(\mathbf{x}_2'- \mathbf{x}_2)= \vec{\lambda}} P( \mathbf{x}_2'- \mathbf{x}_2 \in \mathop{{\rm Im}}\nolimits G) P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \nonumber \\ =& \sum_{\vec{\lambda}\neq \vec{0}_n} \sum_{ \mathbf{x}_2': \vec{n}(\mathbf{x}_2'- \mathbf{x}_2)= \vec{\lambda}} \mathbb{E}_{G} \frac{N(\vec{\lambda},G)}{ {n \choose \vec{\lambda}} } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \nonumber \\ \le & A q^{k_n-n} \sum_{\vec{\lambda}\neq \vec{0}_n} \sum_{ \mathbf{x}_2': \vec{n}(\mathbf{x}_2'- \mathbf{x}_2)= \vec{\lambda}} P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \nonumber \\ \le & A q^{k_n} P_{Y|\mathbf{X}_1=\mathbf{x}_1} (\mathbf{y}).\label{HY12} \end{align} Using \eqref{HY12}, for $s \in [0,\frac{1}{2}]$, we have the following relations, where each step is explained at the end. \begin{align} & e^{s I_{\frac{1}{1-s}}^{\downarrow} (\mathbf{Y}; \mathbf{X}_1 | G, E_2 ) } \nonumber\\ = & \mathbb{E}_{G, E_2} \int_{{\cal Y}^n}\Big( q^{-n}\sum_{\mathbf{x}_1} P_{Y|\mathbf{X}_1=\mathbf{x}_1 , E_2 ,G}(\mathbf{y})^{\frac{1}{1-s}} \Big)^{1-s} d \mathbf{y} \nonumber \\ = & \mathbb{E}_{G, E_2} \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} \Big( q^{-k_n} \sum_{ \mathbf{x}_2 \in {\cal S}(G_2,E_2) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big)^{\frac{1}{1-s}} \Big)^{1-s}d \mathbf{y} \nonumber \\ \stackrel{(a)}{\le} & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} \mathbb{E}_{G, E_2} \Big( q^{-k_n} \sum_{ \mathbf{x}_2 \in {\cal S}(G_2,E_2) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big)^{\frac{1}{1-s}} \Big)^{1-s}d \mathbf{y} \nonumber \\ = & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} \mathbb{E}_{G,E_2} q^{-\frac{k_n}{1-s}} \Big( \sum_{ \mathbf{x}_2 \in {\cal S}(G_2,E_2) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big( P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \nonumber \\ &+\sum_{ \mathbf{x}_2' (\neq \mathbf{x}_2)\in {\cal S}(G_2,E_2) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \Big)^{\frac{s}{1-s}} \Big) \Big)^{1-s}d \mathbf{y} \nonumber \\ = & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} \mathbb{E}_{G} q^{-\frac{k_n}{1-s}} \Big( q^{k_n-n} \sum_{ \mathbf{x}_2 } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big( P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \nonumber \\ &+\sum_{ \mathbf{x}_2' (\neq \mathbf{x}_2)\in {\cal S}(G_2,G_2(\mathbf{x}_2)) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \Big)^{\frac{s}{1-s}} \Big) \Big)^{1-s}d \mathbf{y} \label{GTR} \\ \stackrel{(b)}{\le} & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} q^{-\frac{k_n}{1-s}} \Big( q^{k_n-n} \sum_{ \mathbf{x}_2 } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y}) \Big( P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y})^{\frac{s}{1-s}} \nonumber \\ &+ \Big(\mathbb{E}_{G} \sum_{ \mathbf{x}_2' (\neq \mathbf{x}_2)\in {\cal S}(G_2,G_2(\mathbf{x}_2)) } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2'} (\mathbf{y}) \Big)^{\frac{s}{1-s}} \Big)
\Big) \Big)^{1-s}d \mathbf{y} \nonumber \\ \stackrel{(c)}{\le} & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} q^{-\frac{k_n}{1-s}} \Big( q^{k_n-n} \sum_{ \mathbf{x}_2 } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2}(\mathbf{y}) \Big( P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y})^{\frac{s}{1-s}} \nonumber \\ &+ \Big( A q^{k_n} P_{Y|\mathbf{X}_1=\mathbf{x}_1} (\mathbf{y}) \Big)^{\frac{s}{1-s}} \Big) \Big) \Big)^{1-s}d \mathbf{y} \nonumber \\ = & \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} q^{-\frac{k_n}{1-s}} \Big( q^{k_n-n} \sum_{ \mathbf{x}_2 } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2}(\mathbf{y})^{\frac{1}{1-s}} + A^{\frac{s}{1-s}} q^{\frac{sk_n }{1-s}+k_n} P_{Y|\mathbf{X}_1=\mathbf{x}_1} (\mathbf{y})^{\frac{1}{1-s}} \Big) \Big)^{1-s}d \mathbf{y} \nonumber \\ = & \int_{{\cal Y}^n} \Big( q^{-\frac{s k_n}{1-s}} q^{-2 n} \sum_{ \mathbf{x}_1,\mathbf{x}_2 } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2}(\mathbf{y})^{\frac{1}{1-s}} + A^{\frac{s}{1-s}} q^{-n}\sum_{\mathbf{x}_1} P_{Y|\mathbf{X}_1=\mathbf{x}_1} (\mathbf{y})^{\frac{1}{1-s}} \Big)^{1-s} d \mathbf{y} \nonumber \\ \stackrel{(d)}{\le} & \int_{{\cal Y}^n} \Big( q^{-\frac{s k_n}{1-s}} q^{-2 n} \sum_{ \mathbf{x}_1,\mathbf{x}_2 } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2} (\mathbf{y})^{\frac{1}{1-s}} \Big)^{1-s} d \mathbf{y} + \int_{{\cal Y}^n}\Big( A^{\frac{s}{1-s}} q^{-n}\sum_{\mathbf{x}_1} P_{Y|\mathbf{X}_1=\mathbf{x}_1} (\mathbf{y})^{\frac{1}{1-s}} \Big)^{1-s}d \mathbf{y} \nonumber \\ = & q^{-s k_n} \int_{{\cal Y}^n} \Big( q^{-2 n} \sum_{ \mathbf{x}_1,\mathbf{x}_2 } P_{Y|\mathbf{X}_1=\mathbf{x}_1 ,\mathbf{X}_2=\mathbf{x}_2}(\mathbf{y})^{\frac{1}{1-s}} \Big)^{1-s}d \mathbf{y} + A^{s} \int_{{\cal Y}^n} \Big( q^{-n}\sum_{\mathbf{x}_1} P_{Y|\mathbf{X}_1=\mathbf{x}_1} (\mathbf{y})^{\frac{1}{1-s}} \Big)^{1-s}d \mathbf{y} \nonumber \\ =& q^{-s k_n} e^{s I_{\frac{1}{1-s}}^{\downarrow}(\mathbf{Y}; \mathbf{X}_1 ,\mathbf{X}_2 ) } +A^s e^{s I_{\frac{1}{1-s}}^{\downarrow} (\mathbf{Y}; \mathbf{X}_1 ) } , \end{align} where each step can be shown as follows. $(a)$ follows from the concavity of $x \mapsto x^{1-s}$ with $x\ge 0$. $(b)$ follows from the concavity of $x \mapsto x^{\frac{s}{1-s}}$ with $x\ge 0$. $(c)$ follows from the inequality \eqref{HY12}. $(d)$ follows from the inequality $(x+y)^{1-s}\le x^{1-s}+y^{1-s}$ with $x,y\ge 0$. \section{Proof of \eqref{LODT}}\label{AC} \begin{align} &\min(2 r_0-I(Y; X_1,X_2 )[W],r_0-I(Y; X_1 )[W]) \nonumber \\ =& \min(2 I(Y;X_1+X_2)[W] -I(Y; X_1,X_2 )[W], I(Y;X_1+X_2)[W] -I(Y; X_1 )[W])\nonumber \\ =& 2 I(Y;X_1+X_2)[W] -I(Y; X_1,X_2 )[W], \label{LOD} \end{align} where \eqref{LOD} is shown as follows. Since $X_1+X_2$ and $X_1$ are independent of each other, we have $I(Y; X_1 ,X_2 )[W]-I(Y;X_1+X_2)[W] =I(Y; X_1 | X_1+X_2 )[W]$. Therefore, \begin{align} & I(Y;X_1+X_2)[W] -I(Y; X_1 )[W] \ge I(Y;X_1+X_2)[W] -I(Y; X_1 | X_1+X_2 )[W] \nonumber \\ =& 2I(Y;X_1+X_2)[W] -I(Y; X_1 ,X_2 )[W], \end{align} which implies \eqref{LOD}.
\section*{\hspace*{-0.72cm} \normalsize\bf\arabic{section}.$\;$#1}\vspace*{-0.3cm}} \def\subsec#1{\addtocounter{subsection}{1}\subsection*{\hspace*{-0.4cm} \normalsize\bf\arabic{section}.\arabic{subsection}.$\;$#1}\vspace*{-0.3cm}} \vspace{-0.7cm} \begin{flushright} $\vcenter{ { \hbox{{\footnotesize FUT and TOKUSHIMA Report}} } { \hbox{(arXiv:1011.2655)} } }$ \end{flushright} \vskip 0.8cm \begin{center} {\large\bf Addendum to: Search for anomalous top-gluon couplings} \vskip 0.15cm {\large\bf at LHC revisited} \end{center} \vspace{0.6cm} \begin{center} \renewcommand{\thefootnote}{\alph{footnote})} Zenr\=o HIOKI$^{\:1),\:}$\footnote{E-mail address: \tt [email protected]}\ and\ Kazumasa OHKUMA$^{\:2),\:}$\footnote{E-mail address: \tt [email protected]} \end{center} \vspace*{0.4cm} \centerline{\sl $1)$ Institute of Theoretical Physics,\ University of Tokushima} \centerline{\sl Tokushima 770-8502, Japan} \vskip 0.2cm \centerline{\sl $2)$ Department of Information Science,\ Fukui University of Technology} \centerline{\sl Fukui 910-8505, Japan} \vspace*{2.25cm} \centerline{ABSTRACT} \vspace*{0.2cm} \baselineskip=21pt plus 0.1pt minus 0.1pt In our latest paper ``Search for anomalous top-gluon couplings at LHC revisited'' in {\sl Eur. Phys. J.} {\bf C65} (2010), 127--135 (arXiv:0910.3049 [hep-ph]), we studied possible effects of nonstandard top-gluon couplings through the chromoelectric and chromomagnetic moments of the top quark using the total cross section of $p\bar{p}/pp\to t\bar{t}X$ at Tevatron/LHC. There we pointed out that LHC data could give a stronger constraint on them, which would be hard to obtain from Tevatron data alone. We show here that the first CMS measurement of this cross section actually makes it possible. \vfill PACS: 12.38.-t, 12.38.Bx, 12.38.Qk, 12.60.-i, 14.65.Ha, 14.70.Dj Keywords: anomalous top-gluon couplings, Tevatron, LHC, effective operators \\ \newpage \renewcommand{\thefootnote}{$\sharp$\arabic{footnote}} \pagestyle{plain} \setcounter{footnote}{0} In our latest paper \cite{Hioki:2009hm}, we studied possible effects of nonstandard top-gluon couplings through the chromoelectric and chromomagnetic moments of the top quark yielded by $SU(3)\times SU(2)\times U(1)$ invariant dimension-6 effective operators \cite{Buchmuller:1985jz,AguilarSaavedra:2008zc} (see also \cite{Grzadkowski:2010es}) via the total cross section of $p\bar{p}/pp\to t\bar{t}X$ at Tevatron/LHC. There we pointed out that future LHC data could give a stronger constraint on those two parameters, which would be hard to obtain from Tevatron data alone. This note is an addendum to that paper and the aim is to show that the recently reported first CMS measurements \cite{Khachatryan:2010ez} actually make it possible. In our framework the top-gluon interaction Lagrangian including the above operator contribution is given by \begin{eqnarray} &&{\cal L}_{t\bar{t}g,gg} =-\frac12 g_s \sum_a \Bigl[\, \bar{\psi}_t(x) \lambda^a \gamma^\mu \psi_t(x) G^a_\mu(x) \nonumber\\ &&\phantom{{\cal L}_{t\bar{t}g,gg}=-\frac12 g_s \sum_a} \ \ \ \ - \bar{\psi}_t(x) \lambda^a \frac{\sigma^{\mu\nu}}{m_t} (d_V+id_A \gamma_5) \psi_t(x) G^a_{\mu\nu}(x)\,\Bigr], \label{Lag} \end{eqnarray} where $g_s$ is the $SU(3)$ coupling constant, and $d_V$ and $d_A$ correspond to the top chromomagnetic and chromoelectric moments, respectively. 
It is straightforward, though a bit lengthy, to calculate various cross sections and distributions within the parton-model framework, so we do not repeat the description of those calculations here and refer the reader to \cite{Hioki:2009hm}. There we carried out the analysis just after LHC started to operate, and we had only CDF and D0 data from Tevatron available \cite{Teva-data}: \begin{eqnarray} &&\!\!\!\!\!\!\!\!\! \sigma_{\rm exp} = 7.02 \pm 0.63\ {\rm pb}\: \ \ ({\rm CDF}:\:m_t=175\:{\rm GeV}) \\ &&\!\!\!\!\!\!\!\!\! \phantom{\sigma_{\rm exp}} = 8.18^{\ +\ 0.98}_{\ -\ 0.87}\ {\rm pb}\ \ \ \ ({\rm D0}:\:m_t=170\:{\rm GeV}). \end{eqnarray} Comparing them with $\sigma_{\rm tot}(t\bar{t})$ computed in our framework as a function of $d_{V,A}$, we obtained an allowed region in the $d_V$-$d_A$ plane surrounded by two closed curves (see Fig.\ref{allowed} presented below). It is possible to narrow the region if we get data with smaller errors, but we will not be able to single out the standard model, i.e., the area around $d_V=d_A=0$, as long as we use $\sigma_{\rm exp}(t\bar{t})$ measured at Tevatron alone, even if these are the correct values of $d_{V,A}$. However, we showed in \cite{Hioki:2009hm} that it can be very effective to combine data from Tevatron and LHC together (see Fig.6 in \cite{Hioki:2009hm}). This is because the $q\bar{q} \to t\bar{t}$ process dominates at Tevatron, while $gg\to t\bar{t}$ becomes the main process at LHC, and therefore different parts of the cross section are enhanced at these two hadron colliders. Recently the CMS collaboration published their first data, \begin{equation} \sigma_{\rm exp} = 194 \pm 72\,({\rm stat.}) \pm 24\,({\rm syst.}) \pm 21\,({\rm lumi.}) \ {\rm pb}, \end{equation} for a top-quark mass of 172.5 GeV \cite{Khachatryan:2010ez}, and we found that this new information actually enabled us to carry out our analysis. Let us show our main result. As the standard-model total cross section, we take the NLO theoretical cross section \begin{equation} \sigma_{\rm QCD} = 157.5^{+23.2}_{-24.4}\ {\rm pb} \end{equation} for $m_t=172.5\:{\rm GeV}$ \cite{Campbell:2010ff,Kleiss:1988xr}, which is used in \cite{Khachatryan:2010ez}. Combining this theoretical error with the above experimental errors, we get \begin{equation} \sigma_{\rm exp} = 194 \pm 82 \ {\rm pb} \end{equation} \begin{figure}[h] \begin{minipage}{14.8cm} \begin{center} \begin{overpic}[width=11cm,clip]{allowed.eps} \put(8,120){\large $d_A$} \put(150,10){\large $d_V$} \end{overpic} \caption{The $d_{V,A}$ region allowed by Tevatron and LHC data (the shaded part). The solid curves are from CDF data, the dashed curves are from D0 data, and the dash-dotted curves are from CMS data.}\label{allowed} \end{center} \end{minipage} \end{figure} \vskip 0.7cm \noindent and use this value as our input, to be compared with the calculated total cross section. Superposing the new result thus obtained with the constraints from Tevatron which we already have from \cite{Hioki:2009hm}, we find that only a small region around $d_V=d_A =0$ survives, as in Fig.\ref{allowed}. There the solid curves are from CDF data, the dashed curves are from D0 data, and the dash-dotted curves are from CMS data. The shaded part is the new $d_{V,A}$ region allowed by both Tevatron and LHC data. This figure is quite similar to Fig.6 of \cite{Hioki:2009hm}, which, however, we drew assuming some plausible values for $\sigma(t\bar{t})$ at LHC energy. This is what we expected of LHC experiments in \cite{Hioki:2009hm}.
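The combined uncertainty of $\pm 82$ pb above is consistent with adding the individual errors in quadrature; the following short Python check (ours; we symmetrize the asymmetric theoretical error, which is a simplification) reproduces it:

\begin{verbatim}
import numpy as np

stat, syst, lumi = 72.0, 24.0, 21.0   # CMS errors in pb
theory = (23.2 + 24.4) / 2            # symmetrized +23.2/-24.4 pb
total = np.hypot.reduce([stat, syst, lumi, theory])
print(f"total = {total:.1f} pb")      # about 82 pb
\end{verbatim}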
In conclusion, we have shown here that combining the Tevatron and the latest LHC (CMS) data produces a stronger constraint on $d_V$ and $d_A$, based on our previous analysis. This analysis worked because Tevatron is a $p\bar{p}$ collider, where the $q\bar{q}\to t\bar{t}$ process dominates, while LHC is a $pp$ collider, where the $gg\to t\bar{t}$ process plays a much more important role. Although the precision is not sufficiently high yet, we expect that LHC will give us fruitful data and make much more precise analyses possible in the near future. \vspace{0.6cm} \centerline{ACKNOWLEDGMENTS} \vspace{0.3cm} This work is supported in part by the Grant-in-Aid for Scientific Research No.22540284 from the Japan Society for the Promotion of Science. \vspace*{0.8cm}
\section{Introduction} \input{sections/introduction} \section{Problem Definition} \label{sec: problem_definition} \input{sections/problem_definition} \section{SCalable Remembering and Unlearning unBound (SCRUB)} \input{sections/method} \section{Related work} \input{sections/related_work} \section{Experiments} \label{sec: experiments} \input{sections/experiments_new} \section{Conclusion} \input{sections/conclusion} \section*{Acknowledgements} We would like to thank Fabian Pedregosa for insightful discussions, especially on the topic of membership inference attacks. \subsection{The SCRUB} We consider the original model $w^o$ as the `teacher' and we formulate our goal as training a `student' $w^u$ that \textit{selectively} obeys that teacher. Intuitively, our goal for $w^u$ is twofold: unlearn $\mathcal{D}_f$ while still remembering $\mathcal{D}_r$. To that effect, $w^o$ should be obeyed when teaching about $\mathcal{D}_r$, but disobeyed when teaching about $\mathcal{D}_f$. Our code is available\footnote{https://github.com/Meghdad92/SCRUB}. En route to deriving our training objective, let us first define: \begin{equation*} d(x; w^u) = \displaystyle D_{\mathrm{KL}} ( p(f(x; w^o)) \Vert p(f(x; w^u)) ) \end{equation*} In words, $d(x; w^u)$ is the KL-divergence between the teacher and student distributions for the example $x$. We make the dependence of $d$ on $w^u$ explicit, since we will optimize the student weights $w^u$ while keeping the teacher weights $w^o$ frozen, treating them as a constant. Concretely, we begin by initializing the student $w^u$ to the weights of the teacher $w^o$. Since the teacher weights $w^o$ were trained on all of $\mathcal{D}$, we assume that the teacher performs well on both constituents of $\mathcal{D}$, namely both $\mathcal{D}_r$ and $\mathcal{D}_f$. Therefore, this initialization step ensures that the student performs well on $\mathcal{D}_r$ at initialization time, already fulfilling one of our desiderata. Can we then start from this solution and modify it to unlearn $\mathcal{D}_f$? In principle, one could do this by optimizing: \begin{equation} \label{eq:max_step} \min_{w^u} - \frac{1}{N_f} \sum_{x_f \in \mathcal{D}_f} d(x_f; w^u) \end{equation} However, in practice, when performing this maximization of the distance between the student and the teacher on the forget set, we noticed that, while the performance on $\mathcal{D}_f$ indeed degrades, as desired, it unfortunately also degrades the performance on $\mathcal{D}_r$. To amend that, we propose to simultaneously encourage the student to `stay close' to the teacher on retain examples, while encouraging it to `move away' from the teacher on forget examples, adding a contrastive flavour. Formally, the optimization objective becomes: \begin{equation} \label{eq:max_min_steps} \min_{w^u} \frac{1}{N_r} \sum_{x_r \in \mathcal{D}_r} d(x_r; w^u) - \frac{1}{N_f} \sum_{x_f \in \mathcal{D}_f} d(x_f; w^u) \end{equation} Furthermore, SCRUB also simultaneously optimizes the task loss on the retain set, to further strengthen the incentive to perform well there.
Our final training objective is then the following: \begin{equation} \label{eq:scrubs_objective} \begin{split} \min_{w^u}~~ & \frac{\alpha}{N_r} \sum_{x_r \in \mathcal{D}_r} d(x_r; w^u) \\& + \frac{\gamma}{N_r} \sum_{(x_r, y_r) \in \mathcal{D}_r} l(f(x_r; w^u), y_r) \\& - \frac{1}{N_f} \sum_{x_f \in \mathcal{D}_f} d(x_f; w^u) \end{split} \end{equation} where $l$ stands for the cross-entropy loss and $\alpha$ and $\gamma$ are scalars that we treat as hyperparameters. In practice, we found that optimizing the objective in Equation \ref{eq:scrubs_objective} is challenging, due to oscillations in the loss. Intuitively, this is due to trying to simultaneously satisfy two objectives, which may interfere with each other, namely moving close to the teacher on some data points while moving away from it on others. To address this, SCRUB provides a practical recipe for optimization, reminiscent of common `tricks' used in other min-max problems, such as Generative Adversarial Networks (GANs) \citep{goodfellow2014generative}, where, in each iteration, the discriminator is trained for several steps before performing a single update to the generator. Specifically, SCRUB alternates between performing an epoch of updates on the retain set (the \textit{min-step}) and an epoch of updates on the forget set (the \textit{max-step}). To guard against hurting retain performance due to this alternation of \textit{min-steps} and \textit{max-steps}, SCRUB also performs a sequence of additional \textit{min-steps} at the end of training to restore the retain performance in the event that it was harmed. SCRUB training stops when the forget error has increased without harming the retain set error. This point, we have found empirically, can be reached with a small number of epochs. In the Appendix, we discuss and ablate different design choices in Section \ref{sec:ablations}, show illustrations of the training dynamics of SCRUB and provide pseudocode in Algorithm \ref{algo:train} for clarity. \subsection{SCRUB and Rewind (SCRUB+R)} By construction, SCRUB encourages obtaining a high error on the forget set. This is desired for some application scenarios. However, an uncharacteristically high error on deleted examples can make them identifiable, causing vulnerability to membership inference attacks. SCRUB+R addresses this. To this end, we are after a principled procedure for `early stopping' SCRUB, at the point where the forget set error is `just high enough'. One could obtain a reference point for that by retraining from scratch without the forget set and recording the error of that model on the forget set. However, that defeats the purpose of unlearning, as we want to avoid that computation. Therefore, we propose a different way of establishing a reference point. First, we create a validation set of the same distribution as the forget set. For instance, if the forget set has only examples of class 0, we keep only the examples of class 0 in the validation set too (we assume the existence of a validation set; otherwise, one could hold out some data from the retain set). Next, we train SCRUB as described previously, storing a model checkpoint every epoch. At the end of its training, where the forget error is typically the highest, we measure the error on the constructed validation set. This will serve as the reference point for the desired forget set error.
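As an illustration, the bookkeeping for this procedure might look like the following sketch, where \texttt{unlearn\_one\_epoch} and \texttt{eval\_error} are hypothetical helpers standing in for one epoch of SCRUB updates and for error evaluation on a data loader; the rewinding step itself, sketched in the second helper, is described in detail next.

\begin{verbatim}
import copy

def scrub_with_checkpoints(model, unlearn_one_epoch, eval_error,
                           forget_loader, valid_loader, n_epochs):
    # Run SCRUB, storing a checkpoint and its forget error every epoch.
    checkpoints = []
    for _ in range(n_epochs):
        unlearn_one_epoch(model)
        checkpoints.append({
            "weights": copy.deepcopy(model.state_dict()),
            "forget_error": eval_error(model, forget_loader),
        })
    # Reference point: error on a validation set drawn from the same
    # distribution as the forget set, measured at the end of training.
    reference = eval_error(model, valid_loader)
    return checkpoints, reference

def rewind(model, checkpoints, reference):
    # Load the checkpoint whose forget error is closest to the reference.
    best = min(checkpoints, key=lambda c: abs(c["forget_error"] - reference))
    model.load_state_dict(best["weights"])
\end{verbatim}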
Lastly, if the forget set error at the endpoint is higher than the reference, we `rewind' to the checkpoint whose forget set error is closest to this reference point. The intuition is that the last step of unlearning approximates having `maximally forgotten' the forget set. Because of this, the error on the identically-distributed validation set approximates the error of a model that never learned about that distribution from the forget set: any correct predictions on the held-out examples are due only to the `generalization power' of a model that was trained only on the retain set. We show empirically in our experiments that this rewinding procedure can greatly increase the defense against membership inference attacks for SCRUB, making it a strong performer for applications where privacy is a consideration. \subsection{Experimental setup} \paragraph{Overview of metrics} We consider three different sets of metrics, reflecting different applications. First, we look at the error on each of the retain, forget and test sets, $\mathcal{D}_r$, $\mathcal{D}_f$ and $\mathcal{D}_t$, respectively (\textbf{M1}). Here, we consider that higher is better for the error on $\mathcal{D}_f$, as it suggests successful forgetting, but the goal is to avoid hurting the error on $\mathcal{D}_r$ and $\mathcal{D}_t$. Next, inspired by \citet{goel2022evaluating}, we consider a use-case where unlearning is used to resolve the model's `confusion' between two classes, which stemmed from a portion of its original training set being mislabelled (\textbf{M2}). Finally, we consider a scenario that captures the application of data deletion for user privacy (\textbf{M3}). In this case, we use a Membership Inference Attack (MIA) as the metric, where success of unlearning is defined in terms of the attacker not being able to tell apart examples that belong to the forget set from examples that were truly never seen. In all cases, aside from achieving unlearning, according to the application-specific definition, we also care about not degrading the quality of the model (performance on $\mathcal{D}_r$ and $\mathcal{D}_t$). \paragraph{Datasets and architectures} For all scenarios, we utilize the same two datasets that are used in previous work: CIFAR-10 \citep{krizhevsky2009learning} and Lacuna-10 \citep{golatkar2020eternal}, which is derived from VGG-Faces \citep{cao2015towards}, and the same two architectures: All-CNN \citep{springenberg2014striving} and ResNet-18 \citep{he2016deep}. For fair comparisons with previous work, we follow the standard setup of using a model that was pre-trained on CIFAR-100 / Lacuna-100 for the CIFAR-10 / Lacuna-10 experiments, respectively \citep{golatkar2020eternal,golatkar2020forgetting}. We run each experiment with 3 random seeds and report the mean and standard deviation in all cases. We utilize the public code for the NTK and Fisher methods to ensure correctness. We consider two settings throughout the paper: \textbf{small-scale} and \textbf{large-scale}. The former is in place to enable comparisons with NTK, which doesn't scale beyond it. For the small-scale experiments, we exactly follow the setup in \citep{golatkar2020forgetting} that uses only 5 classes from each of CIFAR and Lacuna (`CIFAR-5' / `Lacuna-5'), with 100 train, 25 validation and 100 test examples per class. The forget set contains 25 examples from class 0 (5\%).
For the large-scale experiments, we exactly follow the setup in \citep{golatkar2020eternal} that uses all 10 classes of each of CIFAR and Lacuna, and considers both a \textit{class unlearning} scenario where the forget set is the entire training set for class 5 (10\%), as well as a \textit{selective unlearning} one where 100 examples of class 5 are forgotten (0.25\% in CIFAR and 2\% in Lacuna). All details can be found in the Appendix. \paragraph{Unlearning algorithms} We compare against state-of-the-art approaches as well as various baselines and reference points: \textbf{Retrain}: Retraining from scratch without the forget set. This is assumed not to be viable in practice, but we include it as a reference point. \textbf{Original}: The `original' model trained on all of $\mathcal{D}$ without any unlearning intervention. \textbf{Finetuning}: Finetunes the original model on data from the retain set $\mathcal{D}_r$. \textbf{NegGrad}: Similar to Finetuning, starts from the original model and finetunes it both on data from the retain and forget sets, negating the gradient for the latter. Previous work considered a weaker baseline that only trains on the forget set, with a negative gradient. We tune this stronger baseline to achieve a good balance between the two objectives. \textbf{Fisher Forgetting} \citep{golatkar2020eternal}. \textbf{NTK Forgetting} \citep{golatkar2020forgetting}. Catastrophic Forgetting-k (\textbf{CF-k}) and Exact Unlearning-k (\textbf{EU-k}) \citep{goel2022evaluating}: they freeze the first (bottom-most) k layers of the original model and either finetune the remaining layers on $\mathcal{D}_r$ (CF-k) or train them from scratch on $\mathcal{D}_r$ (EU-k). \subsection{Forget, retain and test errors (M1)} \input{sections/figures_tex/counts_main} In this section, we consider an application scenario where we desire the unlearning algorithm to achieve the highest possible error on $\mathcal{D}_f$, indicating that it has successfully forgotten those examples, without hurting the error on $\mathcal{D}_r$ and $\mathcal{D}_t$. Some practical use-cases are removing biases from trained models and deleting out-of-date examples. We report the results for these metrics in Figures \ref{fig:main_paper_spiders} and \ref{fig:main_paper_counts} and more detailed results in Appendix Tables \ref{tab:small_scale_resnet}, \ref{tab:small_scale_allcnn}, \ref{tab:large_scale_class_resnet}, \ref{tab:large_scale_class_allcnn}, \ref{tab:large_scale_selective_resnet}, \ref{tab:large_scale_selective_allcnn}. As shown in Figure \ref{fig:main_paper_counts}, SCRUB is by far the most consistent method in terms of unlearning for this metric, while it doesn't hurt retain and test errors and also scales significantly better than some other methods (see Figure \ref{fig:main_paper_spiders}). \subsection{Resolving class confusion via unlearning (M2)} In this section, inspired by \cite{goel2022evaluating}, we explore an application where unlearning is used to resolve confusion between two classes that the original model suffers from due to a part of its training set being mislabelled. Specifically, all and only the mislabelled portion of the training set is placed in the forget set. Therefore, a successful unlearning algorithm would entirely resolve the model's confusion.
In more detail, the setup is: 1) Mislabel some portion of the training dataset (we mislabelled examples between classes 0 and 1 of each of CIFAR-5 and Lacuna-5), 2) Train the `original model' on the (partly mislabelled) training dataset, and 3) Unlearn with the confused examples as the forget set. We consider the following metrics inspired by \cite{goel2022evaluating} that are variants of the model's error on examples from the confused classes (lower is better for both). We outline them below and define them formally in the Appendix. \begin{itemize} \item \textbf{Interclass Confusion \textsc{IC-Err}} (e.g. IC test error, IC retain error). This counts mistakes where an example from either of the two confused classes was incorrectly predicted to belong to \textit{any} other class. \item \textbf{\textsc{Fgt-Err}} (e.g. Fgt test error, Fgt retain error). This metric counts only misclassifications \textit{between the confused classes}, i.e. an example of class 0 being predicted as belonging to class 1, or vice versa. \end{itemize} We present the results for this application in Table \ref{tab:confusion_main_paper}. Additional results for this scenario can be found in the Appendix in Tables \ref{tab:conf_cif5_res}, \ref{tab:conf_cif5_allcnn}, \ref{tab:conf_lac5_resnet}, \ref{tab:conf_lac5_allcnn}, \ref{tab:conf_cif10_resnet}, \ref{tab:conf_cif10_allcnn}, \ref{tab:conf_lac10_resnet}, \ref{tab:conf_lac10_allcnn}. We observe that across the board, SCRUB is by far the most consistent method in terms of eliminating class confusion without damaging the quality of the model (on retain and test sets). \subsection{Unlearning in privacy-critical applications (M3)} \label{sec:mia_results} \input{sections/tables/mia_main_paper} \input{sections/figures_tex/mia_main} Next, we consider a set of metrics targeting a privacy-critical application, for instance deletion of user data upon their request to exercise their right-to-be-forgotten. In this case, we use a Membership Inference Attack (MIA) as the metric, inspired by \citet{shokri2017membership}. At a high level, the goal is that, after applying unlearning, the model treats examples that were in the forget set similarly to those that were truly never seen, thus preventing an `attacker' from inferring which examples were in its original training set. Returning to our previous notation, let $l(f(x; w^u), y)$ denote the cross-entropy loss of the unlearned model (a deep network $f$ with weights $w^u$) on example $x$ with label $y$. We abbreviate this as $l(x, y)$ from now on, dropping the dependence on $f$ and $w^u$. The attacker is a binary classifier that takes as input loss values, coming from either the forget set $\mathcal{D}_f$ or a held-out test set $\mathcal{D}_t$, and predicts whether the example whose loss value was presented was in the training set of the original model. We train this attacker via supervised learning on a class-balanced labelled training set for this binary problem: $\mathcal{D}_{train}^b = \{ (l(x_i, y_i), y_i^b) \}$ where each $x_i$ is an example coming either from $\mathcal{D}_f$ or $\mathcal{D}_t$, and its binary label $y_i^b$ is defined to be 0 if $x_i \in \mathcal{D}_t$ and 1 if $x_i \in \mathcal{D}_f$. Once the binary classifier attacker is trained, we use it to make predictions for a held-out evaluation set of the binary problem: $\mathcal{D}_{eval}^b = \{ (l(x_i, y_i), y_i^b) \}$ that is also balanced between examples coming from $\mathcal{D}_f$ and $\mathcal{D}_t$, but is disjoint from $\mathcal{D}_{train}^b$.
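Concretely, a minimal version of this attack can be implemented with a linear classifier on loss values; the sketch below follows our notation but is illustrative rather than the exact protocol (see the Appendix for the details we use).

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def mia_score(forget_losses, test_losses):
    # forget_losses, test_losses: 1-D arrays of per-example losses
    # l(x, y) under the unlearned model.
    X = np.concatenate([forget_losses, test_losses]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(forget_losses)),   # 1: in D_f
                        np.zeros(len(test_losses))])   # 0: in D_t
    attacker = LogisticRegression()
    # Mean attack accuracy over 5 cross-validation folds;
    # ~0.5 indicates an optimal defense.
    return cross_val_score(attacker, X, y, cv=5).mean()
\end{verbatim}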
We provide full details in the Appendix. The attacker succeeds if it achieves high accuracy on $\mathcal{D}_{eval}^b$, meaning that it can tell apart examples that were part of the original training set from those that weren't. This marks a defeat for the unlearned model in terms of this metric, since it has `left traces behind' (in this case, in terms of loss values) and leaks information about membership in the forget set. We consider that an optimal defense against this MIA corresponds to a 50\% attack accuracy; that is, no better than randomly guessing whether an example had been trained on. In principle, the Retrain oracle should defend optimally: it in fact did not train on the forget set, so the forget and test sets are simply two different held-out sets for this model, whose loss values should generally be indistinguishable from each other if these sets are identically-distributed. We present our MIA results in Table \ref{tab:mia_main_paper} for CIFAR-10, and additional results in Tables \ref{tab:mia_allcnn_cifar_selective}, \ref{tab:mia_allcnn_cifar_class}, \ref{tab:mia_resnet_cifar_selective}, \ref{tab:mia_resnet_cifar_class}, \ref{tab:mia_allcnn_lacuna_selective}, \ref{tab:mia_allcnn_lacuna_class}, \ref{tab:mia_resnet_lacuna_selective}, \ref{tab:mia_resnet_lacuna_class} in the Appendix. We make the following observations: first, SCRUB, especially when equipped with its rewinding procedure, is a strong performer in terms of MIA. EU-k can also perform strongly in this metric, though not as consistently. Unsurprisingly, the original model performs poorly in this regard: it has not performed any unlearning operation, leaving the losses of the examples in the forget set to be very low (as low as the ones in the retain set), and thus easily distinguishable from the certainly-higher test set losses. We also observe that Finetune, NegGrad and CF-k all perform poorly. Between CF-k and EU-k, it is expected that the latter is stronger in this metric, as it can be viewed as an approximation of the Retrain oracle. We also observe from Table \ref{tab:mia_main_paper} that rewinding was only triggered once, for \textit{selective} unlearning. In both selective and class unlearning, the forget set contains examples coming from the same class, but in the former, it doesn't contain \textit{all} examples of that class; the rest are in the retain set. Thus, we indeed expect that rewinding would be more useful for selective unlearning: the smaller the portion of a class in the forget set, the larger the remainder of that class in the retain set and, consequently, the better we expect the model to generalize on held-out examples of that class (lower error), making it in turn more likely to need to rewind, in order to also lower the forget set error commensurately. In Figure \ref{fig:mia} we plot results for different variants of selective unlearning, with different numbers of examples in the forget set (from the same class), and indeed find that for smaller numbers, rewinding is triggered. Our take-away is that, when needed, rewinding can substantially improve SCRUB's MIA results. \subsection{Additional experiments and discussion} We discuss limitations and future work directions in Section \ref{sec:limitations}, ablate design choices and illustrate SCRUB's training dynamics in Section \ref{sec:ablations} and present more results in Sections \ref{sec:more_m1_results}, \ref{sec:more_m2_results} and \ref{sec:more_m3_results} for M1, M2 and M3, respectively.
\subsection{Overview of results and take-aways} \textbf{Finetuning} retains the performance of the original model for the most part, but is not good at unlearning in any of the three sets of metrics. The previous state-of-the-art \textbf{NTK} and \textbf{Fisher} models aren't among the top-performers either. The former performs poorly across the board in terms of unlearning and doesn't scale beyond small datasets, though at least it doesn't hurt the retain set performance. The latter can sometimes perform well in M1 in terms of the forget error, though not in all cases, and unfortunately it is also very slow (Figure \ref{fig:main_paper_spiders}), actually exceeding the runtime of Retrain; an observation also made in \citep{goel2022evaluating}. \textbf{NegGrad} turns out to be a strong baseline in terms of achieving a balance between unlearning and retaining performance (it performs well in several settings in terms of M1, albeit not as consistently as SCRUB) and we encourage future work to report this baseline, though SCRUB outperforms it in terms of both the confusion metrics and MIA, which demonstrates the benefit of `distillation' in SCRUB (using soft labels from the teacher) instead of using one-hot labels as in NegGrad. True to its nature, \textbf{CF-k} inherits the performance profile of Finetune: it avoids degrading the model but doesn't unlearn well. On the other hand, \textbf{EU-k} can more reliably unlearn (it performs especially strongly in MIA, and often in M1 metrics), which is expected relative to CF-k, since it trains part of the network from scratch, thus removing more information. But SCRUB outperforms it significantly in terms of resolving confusion via unlearning (M2) and is more consistently a top performer in M1. A notable failure case that we discovered for EU-k in M1 is \textit{selective} unlearning (notice the contrast in EU-k's forget error between e.g. Figures \ref{fig:cifar10-class-spider} and \ref{fig:cifar10-selective-spider}). This finding may speak to where class vs.\ instance information is stored in neural networks and warrants further investigation. Finally, SCRUB is by far the most consistent method in successfully unlearning under different metrics without sacrificing performance, while being significantly more scalable than the previous state-of-the-art. \subsection{Experimental setup} \paragraph{Methods} We compare against state-of-the-art approaches as well as various baselines: \begin{itemize} \item \textbf{Retrain}: Retraining from scratch without the cohort to forget. This method is assumed not to be viable in practice due to its prohibitive computational cost. \item \textbf{Original}: The `original' model trained on all of $\mathcal{D}$ without any unlearning intervention. \item \textbf{Finetuning}: Finetunes the original model on data from the retain set $\mathcal{D}_r$. \item \textbf{NegGrad}: Similar to Finetuning, starts from the original model and finetunes it both on data from the retain set and the forget set, negating the gradient for the latter. Previous work considered a weaker baseline that only trains on the forget set, with a negative gradient. We tune this stronger baseline to achieve a good balance between the two objectives. \item \textbf{Fisher}: The Fisher Forgetting method from \citep{golatkar2020eternal}. \item \textbf{NTK}: The NTK Forgetting method from \citep{golatkar2020forgetting}.
\item \textbf{CF-k}: Catastrophic Forgetting-k (CF-k) \citep{goel2022evaluating} freezes the first (bottom-most) k layers of the original model and finetunes the remaining (top-most) layers on $\mathcal{D}_r$. \item \textbf{EU-k}: Exact Unlearning-k (EU-k) \citep{goel2022evaluating} freezes the first (bottom-most) k layers of the original model and trains the remaining (top-most) layers from scratch on $\mathcal{D}_r$. \end{itemize} \input{sections/figures_tex/small_scale_spider} \paragraph{Metrics} We desire the model to have forgotten $\mathcal{D}_f$ without hurting its performance on the retain set or its generalization ability. We therefore report the \textbf{forget error}, where higher values are better (denoted $\uparrow$ in the tables), and the \textbf{retain error} and \textbf{test error}, where lower values are better (denoted $\downarrow$ in the tables). For larger-scale experiments, we also consider the \textbf{scale-up factor} as an additional metric: the ratio of the runtime of retraining from scratch to the runtime of the given unlearning algorithm. Finally, we posit that \textit{consistency} in good performance on the above metrics is crucial for practical applicability, so we experiment with a large number of settings, including different datasets, sizes of forget and training sets, architectures and selective or class forgetting. \paragraph{Experimental setup} We base our investigation on the same two datasets that are used in previous work: CIFAR-10 \citep{krizhevsky2009learning} and Lacuna-10 \citep{golatkar2020eternal}, which is a dataset derived from VGG-Faces \citep{cao2015towards}. We also use the same two architectures as in previous work: All-CNN \citep{springenberg2014striving} and ResNet-18 \citep{he2016deep}. For fair comparisons with previous work, we follow the standard setup of using a model that was pre-trained on CIFAR-100 / Lacuna-100 for the CIFAR-10 / Lacuna-10 experiments, respectively \citep{golatkar2020eternal,golatkar2020forgetting}. We run each experiment with 3 random seeds and report the mean and standard deviation. We utilize the public code for the NTK and Fisher methods in our experiments, to ensure correctness. We also plan to release our own code upon publication to facilitate future comparisons. \input{sections/figures_tex/figure_of_counts} We conduct our empirical investigation in 3 setups, described in further detail in the Appendix: \begin{itemize} \item \textbf{Large forget set sizes}: We significantly increase the \% of the training set that belongs to $\mathcal{D}_f$; an axis orthogonal to that of increasing the training set size. We investigate forgetting up to $\sim$ 67\% of the training set (for reference, standard benchmarks consider up to 10\%). For this, we train on 6 classes from each of CIFAR and Lacuna (`CIFAR-6' / `Lacuna-6') and perform both \textit{selective unlearning}, where we forget 50\% of the training examples of each class, and \textit{class unlearning}, where we forget either 3 (50\%) or 4 classes ($\sim$ 67\%). \item \textbf{Small-scale}: We exactly follow the setup in \citep{golatkar2020forgetting} that uses only 5 classes from each of CIFAR and Lacuna (`CIFAR-5' / `Lacuna-5'), with 100 train, 25 validation and 100 test examples per class. The forget set contains 25 examples from class 0 (5\%). This small-scale setting allows comparing to NTK, which doesn't scale to larger datasets.
\item \textbf{Large-scale}: We exactly follow the setup in \citep{golatkar2020eternal} that uses all 10 classes of each of CIFAR and Lacuna, and considers both a \textit{class unlearning} scenario where the forget set is the entire training set for class 5 (10\%), as well as a \textit{selective unlearning} one where 100 examples of class 5 are forgotten (0.25\% in CIFAR and 2\% in Lacuna). \end{itemize} \input{sections/figures_tex/large_scale_spider} \subsection{Findings and take-aways} We present our results on exploring larger forget set sizes (\textbf{Q1}) in Table \ref{tab:large_forget_set_cifar_resnet} (and Tables \ref{tab:large_forget_set_cifar_allcnn}, \ref{tab:large_forget_set_lacuna_resnet}, \ref{tab:large_forget_set_lacuna_allcnn}, \ref{tab:large_forget_set_selective_resnet}, \ref{tab:large_forget_set_selective_allcnn}), and Figure \ref{fig:large_forget_set_spiders} (and Figures \ref{fig:large_forget_set_spiders_class}, \ref{fig:large_forget_set_spiders_selective}). In investigating the competitiveness of SCRUB on previously-established benchmarks (\textbf{Q2}), we report results for the small-scale setting in Figure \ref{fig:small_scale_spiders} (and Tables \ref{tab:small_scale_resnet} and \ref{tab:small_scale_allcnn}) and for the large-scale setting in Figure \ref{fig:large_scale_spiders} (and Tables \ref{tab:large_scale_class_resnet}, \ref{tab:large_scale_class_allcnn}, \ref{tab:large_scale_selective_resnet}, \ref{tab:large_scale_selective_allcnn}). We showcase the consistency of each method's ability to unlearn (\textbf{Q3}) in Figure \ref{fig:forget_counts}, and we report results for the scale-up factor in Figure \ref{fig:large_scale_spiders} (and Figure \ref{fig:large_scale_spiders_lacuna}) (\textbf{Q4}). Tables and Figures mentioned in brackets can be found in the Appendix. We summarize our main findings below. \begin{itemize} \item \textbf{SCRUB} is the sole method that is \textit{consistently} a top-performer in terms of achieving unlearning (Figure \ref{fig:forget_counts}), while incurring a minimal performance degradation, if any (Table \ref{tab:large_forget_set_cifar_resnet}, Figures \ref{fig:large_forget_set_spiders}, \ref{fig:small_scale_spiders} and \ref{fig:large_scale_spiders}), and being very computationally efficient (Figure \ref{fig:large_scale_spiders}). \item \textbf{Original}, as expected, enjoys good performance on the retain and test sets, but it has low forget set error, reflecting that no effort was made to unlearn $\mathcal{D}_f$. \item \textbf{Finetuning} retains the performance of Original for the most part, but also fails at forgetting. \item The previous state-of-the-art \textbf{NTK} isn't among the top-performers in terms of forgetting, though it at least doesn't hurt the model's performance (Figures \ref{fig:large_forget_set_spiders} and \ref{fig:small_scale_spiders}). However, our thorough investigation of larger forget set sizes revealed a notable exception to its poor performance: selective unlearning with large forget sets (Figure \ref{fig:large_forget_set_spiders_selective}). This warrants further investigation to gain a deeper understanding of NTK, though it isn't viable in practice since it can't scale beyond very small datasets and performs poorly in several settings.
\item The previous state-of-the-art \textbf{Fisher} sometimes achieves forgetting (Figures \ref{fig:lacuna6-spider-combined}, \ref{fig:lacuna5-spider}, \ref{fig:cifar10-class-spider} and \ref{fig:large_forget_set_spiders_selective}), though not consistently (Figure \ref{fig:forget_counts}), and unfortunately usually degrades the model's performance substantially. It is also very slow (Figure \ref{fig:large_scale_spiders}), actually exceeding the runtime of retraining from scratch; an observation also made in \citep{goel2022evaluating}. \item The baseline of \textbf{NegGrad} turns out to be a strong one in terms of achieving a balance between unlearning and retaining performance, and we encourage future work to also report this baseline. However, SCRUB outperforms it in terms of forget error in several scenarios, as seen in Figure \ref{fig:forget_counts}, and is much more computationally efficient (see Figure \ref{fig:large_scale_spiders}). \item \textbf{CF-k} completely fails to forget in most cases, whereas \textbf{EU-k} is more successful in that regard (though not consistently; see Figure \ref{fig:forget_counts}). This is expected, since EU-k trains the top layers from scratch, thus increasing the chances of removing information about $\mathcal{D}_f$, whereas CF-k finetunes those layers instead. A notable failure case that we discovered for EU-k is \textit{selective} forgetting (notice the contrast in EU-k's forget error between Figures \ref{fig:cifar10-class-spider} and \ref{fig:cifar10-selective-spider} and between Figures \ref{fig:lacuna6-spider-class} and \ref{fig:lacuna6-spider-selective}). This interesting finding may speak to where class vs.\ instance information is stored in neural networks and warrants further investigation. \end{itemize} \input{sections/figures_tex/ablations} \paragraph{Analysis of Training Dynamics} Next, we illustrate the training dynamics of SCRUB and the importance of different design choices. As a reminder, the student is initialized from the teacher and subsequently undergoes an alternating sequence of \textit{max-steps} and \textit{min-steps}; the former encouraging the student to move far from the teacher on the forget set, and the latter encouraging it to stay close to the teacher on the retain set. We also found it useful to perform a sequence of additional \textit{min-steps} after the alternating sequence. We now explore the effect of these decisions. First, we show that performing only \textit{max-steps}, by optimizing Equation \ref{eq:max_step}, is not a good solution. Simply pushing the student away from the teacher on the forget set achieves forgetting but unfortunately also hurts the retain and validation set performance (Figure \ref{fig:only_max_dynamics}). Therefore, alternating between \textit{max-steps} and \textit{min-steps} is necessary. However, it is important to find the right balance. For instance, as seen in Figure \ref{fig:scrubs_insufficient_max}, performing too few \textit{max-steps} leads to the unwanted consequence of the forget error dropping. On the other hand, removing the final sequence of only \textit{min-steps} is also harmful, as shown in Figure \ref{fig:scrubs_uncontrolled}, which trains for a larger number of epochs with an equal number of (alternating) \textit{max-steps} and \textit{min-steps} without achieving a good balance at any point throughout the trajectory. In contrast, SCRUB (Figure \ref{fig:scrubs_dynamics}) achieves a good balance of high forget error and low retain and validation error simultaneously.
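For reference, the high-level schedule just described can be sketched as follows, with \texttt{max\_step\_epoch} and \texttt{min\_step\_epoch} as hypothetical helpers (one epoch of forget-set and retain-set updates, respectively); Algorithm \ref{algo:train} gives the precise procedure.

\begin{verbatim}
def scrub_schedule(model, max_step_epoch, min_step_epoch,
                   n_alternating_epochs, n_extra_min_epochs):
    # Alternate max-steps and min-steps, then run trailing min-steps.
    for _ in range(n_alternating_epochs):
        max_step_epoch(model)  # push away from the teacher on D_f
        min_step_epoch(model)  # stay close to the teacher on D_r
    # Trailing min-steps repair any damage to retain performance.
    for _ in range(n_extra_min_epochs):
        min_step_epoch(model)
\end{verbatim}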
The Appendix shows additional examples of training dynamics (Figures \ref{fig:train_dynamics_cifar5_allcnn}, \ref{fig:train_dynamics_lacuna5_resnet}, \ref{fig:train_dynamics_lacuna5_allcnn}), hyperparameter sensitivity (Table \ref{tab:scrubs_sens}), possible trade-offs of forgetting-versus-performance (Table \ref{tab:NG_tradeoff}) and an ablation of the cross-entropy term in Equation \ref{eq:scrubs_objective}, which provides a small but consistent added protection against degrading performance (Figure \ref{fig:effect_of_cross_entropy}). \subsection{Discussion of limitations and additional future work} \label{sec:limitations} SCRUB has shown impressive results, being consistently a top-performer in unlearning with a minimal drop in performance compared to previous works. However, SCRUB has limitations that we hope future work will address. Primarily, the lack of theoretical guarantees is a limitation of our method. While this is an important drawback, theoretical guarantees are challenging for deep neural networks. Previous works associated with guarantees, despite offering great insights, suffer from practical limitations. We aim to fill this gap instead. However, we look forward to future work that strives to strike a compromise between effective unlearning, good performance, scalability, and theoretical insights. Another limitation of SCRUB is the difficulty and instability associated with tuning the min-max objective, as documented in the literature, e.g.\ for GANs. For instance, this can lead to oscillating behaviour, as we show in Figure \ref{fig:ablations}. We remedy this to a large extent in practice by providing a practical algorithm that works well, showing consistently improved results over prior work, but there is room for improvement on this front in future work. SCRUB's rewinding procedure also has limitations. We find in practice throughout all of our experiments that it can substantially increase the success of SCRUB's defense on MIA in scenarios where the forget error obtained by SCRUB at the end of unlearning is `too high'. However, a different failure case which can also appear is that SCRUB's forget error at the end of training is `too low'. This can happen due to the way in which we tune hyperparameters, which is designed to not harm the retain performance too much, and thus can in some cases lead to `premature stopping' before the forget error reaches the same level as a reference point for how high it would be if the model had truly never seen those examples. We highlight that addressing all possible issues that can arise in all scenarios and providing an unlearning algorithm that performs strongly across the board is extremely challenging. The fact that we have observed failure cases for each algorithm, be it SCRUB or other baselines, is indicative of the extensiveness of the experimentation we conducted. Our work has made important strides in designing consistently strong-performing unlearning methods and we look forward to future contributions in this direction. We also hope that future work continues to push the limits of scalability. The datasets and models we consider aren't too large, in order to allow comparisons to previous works that would not be feasible to run for larger-scale experiments. An interesting topic of future work is investigating the interplay between SCRUB and other scalable algorithms like NegGrad with increasing amounts of scale.
Another interesting direction for future work is to investigate how different unlearning algorithms interact with different architectures, like Transformers, and loss functions, like self-supervised learning. \subsection{Pseudocode for SCRUB} For clarity, we provide pseudocode for training SCRUB in Algorithm \ref{algo:train}. \input{sections/algorithms} \subsection{More experimental details} \textbf{Datasets.} We have used the CIFAR-10 and Lacuna-10 datasets for evaluation purposes. CIFAR-10 consists of 10 classes with 60000 color images of size 32 $\times$ 32. In our experiments, the train, test, and validation sizes are 40000, 10000, and 10000, respectively. Lacuna-10 is a dataset derived from VGG-Faces \citep{cao2015towards}. We have followed the same procedure described in \citep{golatkar2020eternal} to build Lacuna. We randomly select 10 celebrities (classes) with at least 500 samples. We use 100 samples of each class to form the test set, and the rest make up the train set. All the images are resized to 32 $\times$ 32. We also use CIFAR-100 and Lacuna-100 to pretrain the models. Lacuna-100 is built in a similar way as Lacuna-10, and there is no overlap between the two datasets. We have not applied any data augmentation throughout the experiments. \textbf{Small-scale datasets.} We followed the same procedure as described in \citep{golatkar2020forgetting} to create the small versions of CIFAR-10 and Lacuna-10, namely CIFAR-5 and Lacuna-5. To this end, we take the first 5 classes of each dataset and randomly sample 100 images for each class. We make the train and test sets by sampling from the respective train and test sets of CIFAR-10 and Lacuna-10. We also take 25 samples from each class of the train set to create the validation sets. \textbf{Models.} We use the same models with the same architectural modifications as \citep{golatkar2020eternal, golatkar2020forgetting}. For All-CNN, the number of layers is reduced and batch normalization is added before each non-linearity. For ResNet, the ResNet-18 architecture is used. For small-scale experiments, the number of filters is reduced by 60\% in each block. For the large-scale experiments, the exact architecture is used. \textbf{Pretraining.} Following the previous work for consistency, we apply pretraining. Specifically, for the CIFAR datasets, we have pretrained the models on CIFAR-100. For Lacuna, we have pretrained the models on Lacuna-100. We pretrain the models for 30 epochs using SGD with a fixed learning rate of 0.1, the cross-entropy loss function, weight decay 0.0005, momentum 0.9, and batch size 128. \textbf{Baselines.} `Original' is the model trained on the entire dataset $\mathcal{D}$. For `Retrain', we train the same architecture on $\mathcal{D}_r$, with the same hyperparameters used during training of the original model. For `Finetune', we fine-tune the `original' model on $\mathcal{D}_r$ for 10 epochs, with a fixed learning rate of 0.01 and weight decay 0.0005. For `NegGrad', we fine-tune the `original' model using the following loss: \begin{equation}\label{eq:neggrad_loss} \mathcal{L}(w) = \beta \times \displaystyle\frac{1}{|\mathcal{D}_r|} \displaystyle\sum_{i=1}^{|\mathcal{D}_r|} l(f(x_i;w), y_i) - (1-\beta)\times \displaystyle\frac{1}{|\mathcal{D}_f|} \displaystyle\sum_{j=1}^{|\mathcal{D}_f|} l(f(x_j;w), y_j) \end{equation} where $\beta \in [0,1]$. We have tuned $\beta$ to get a high forget error while not destroying the retain error. For small-scale experiments, $\beta=0.95$ and we have trained for 10 epochs, with SGD, learning rate 0.01, and weight decay 0.1.
For large-scale experiments, $\beta = 0.9999$ and we have trained for 5 epochs, with SGD, learning rate 0.01, and weight decay 0.0005. Note that small values of $\beta$ quickly lead to divergence. For `CF-k', we freeze the first k layers of the network and finetune the remaining layers on $\mathcal{D}_r$. We use the same settings as the `Finetune' baseline. For `EU-k', we freeze the first k layers, re-initialize the weights of the remaining layers, and retrain them on $\mathcal{D}_r$. As all the models are pretrained on larger datasets, for re-initializing we use the weights of the pretrained models. For `EU-k' we use the same settings as the `Retrain' baseline. In both the `EU-k' and `CF-k' baselines, for both ResNet and All-CNN, we freeze all the layers except for the last block of the network. For ResNet, the last block is block4 and for All-CNN, the last block of layers is the 9th sequential block. \textbf{SCRUB parameters.} We train SCRUB using Algorithm \ref{algo:train}. Throughout the experiments, we tune the parameters to get a high forget error while preserving the retain and validation errors of the original model. We use the same optimizer for both min and max steps. We observed that for small-scale settings the `Adam' optimizer works better, while in large-scale settings both `Adam' and `SGD' could be used. For all experiments, we initialize the learning rate at 0.0005 and decay it by 0.1 after a number of min and max steps. Decaying the learning rate is crucial to control the oscillating behaviour of our min and max optimization. We apply a weight decay of 0.1 for the small-scale setting and 0.0005 for the large-scale experiments, with a momentum of 0.9. Finally, we use different batch sizes for the forget set and the retain set to control the number of iterations in each direction, i.e.\ the max and the min steps, respectively. We report these in \autoref{tab:param_choices}. \begin{table}[] \centering \begin{tabular}{c|c|c|c|c|c|c|} \toprule model&dataset&unlearning-type&forget-set bs& retain-set bs& max steps& min steps \\ \midrule \multirow{6}{*}{ResNet}&CIFAR-10&class&512&128&2&3 \\ &CIFAR-10&selective&16&64&5&5 \\ &Lacuna-10&class&128&128&5&5 \\ &Lacuna-10&selective&32&32&4&4 \\ &CIFAR-5&selective&32&32&10&10 \\ &Lacuna-5&selective&32&32&5&10 \\ \midrule \multirow{6}{*}{All-CNN}&CIFAR-10&class&512&256&3&4 \\ &CIFAR-10&selective&16&64&5&5 \\ &Lacuna-10&class&32&32&4&4 \\ &Lacuna-10&selective&8&32&2&4 \\ &CIFAR-5&selective&16&32&5&10 \\ &Lacuna-5&selective&32&32&5&10 \\ \midrule \end{tabular} \caption{SCRUB's hyperparameters for each experiment} \label{tab:param_choices} \end{table} \textbf{System specification.} For the scale-up experiments, all the code is written and executed in Python 3.8, on an Ubuntu 20 machine with 40 CPU cores, an Nvidia GTX 2080 GPU and 256GB memory. \subsection{Formally defining the metrics} In this section, we give more details and mathematical definitions of the metrics that we use throughout the paper. \textbf{M1 Metrics.} We define retain error, forget error and test error. Let $\mathcal{D}_r$, $\mathcal{D}_f$ and $\mathcal{D}_t$ denote the retain and forget portions of the training dataset, and a test dataset of held-out examples, respectively.
We define error ($Err$) as follows: \begin{equation} \label{eq:error} Err(\mathcal{D}) = 1 - \frac{1}{|\mathcal{D}|} \sum_{(x_i, y_i) \in \mathcal{D}} \mathbbm{1}[\argmax(f(x_i; w)) = y_i] \end{equation} where $f$, parameterized by $w$, is the neural network model (comprised of a feature extractor followed by a softmax classifier layer), $\argmax(f(x_i; w))$ is the label that the model thinks is most likely for example $x_i$, and $\mathbbm{1}[ x ]$ is the indicator function that returns 1 if $x$ is True and 0 otherwise. Based on the above, the retain error, forget error and test error are computed as $Err(\mathcal{D}_r)$, $Err(\mathcal{D}_f)$ and $Err(\mathcal{D}_t)$, respectively. \textbf{M2 Metrics: class confusion.} We now define the class confusion metrics inspired by \cite{goel2022evaluating}. Specifically, we explore a scenario where the forget set has confused labels (e.g. for two classes A and B, examples of A are labelled as B, and vice versa). The idea here is that, because mislabelled examples are only present in the forget set, successful unlearning (removing the influence of the forget set) would lead to a model that is not at all confused between classes A and B. In more detail, the setup we follow is: 1) We first mislabel some portion of the training dataset (we mislabelled examples between classes 0 and 1 of each of CIFAR-5 and Lacuna-5 in our experiments), 2) train the `original model' on the (partly mislabelled) training dataset (it has mislabelled examples for classes 0 and 1 but correct labels for the remaining classes), 3) perform unlearning where the forget set contains all and only the confused examples. Given this, the goal for the unlearning algorithm is to resolve the confusion of the original model. We consider the following metrics (using terminology consistent with \cite{goel2022evaluating}). They are presented in order of decreasing generality, and increasing focus on measuring degrees of confusion between the two classes considered. \begin{itemize} \item \textbf{Error} (e.g. test error, retain error, forget error). This counts all mistakes, so any time an example of some class is predicted to be in any other class, it will be counted. These are the same metrics that we use for the rest of the paper (see Equation \ref{eq:error}). For test and retain error, lower is better, whereas for forget error, higher is better. \item \textbf{Interclass Confusion \textsc{IC-Err}} (e.g. IC test error, IC retain error). This counts only mistakes that involve examples from the confused classes A and B. Specifically, it counts instances of any example of class A being predicted to be in \textit{any} other class, and analogously for class B. Compared to Error, this metric is more focused on understanding the result of the introduced confusion, since it only considers cases that relate to the confused classes. A successful unlearning method would make no such errors, so lower is better for each of IC test error and IC retain error. \item \textbf{\textsc{Fgt-Err}} (e.g. Fgt test error, Fgt retain error). This metric counts only misclassifications \textit{between the confused classes} A and B. Here, a mistake of an example of class A (or B) being predicted to be in a class other than A or B will not be counted. Only mistakes of an example of class A being predicted to be in class B, and vice versa, are counted. \textbf{This is the most focused metric that explicitly measures the amount of confusion remaining after unlearning}.
A successful unlearning method would make no such errors, so lower is better for each of Fgt test and Fgt retain. \end{itemize} More formally, Error is the same as defined in Equation \ref{eq:error}. Let us now mathematically define \textsc{IC-Err} and \textsc{Fgt-Err}. We denote by $C^{w, \mathcal{D}}$ the confusion matrix for the model parameterized by $w$ on the dataset $\mathcal{D}$, and let $\mathcal{D}_A$ denote the part of the dataset $\mathcal{D}$ that belongs to class $A$. So, for example, $\mathcal{D}_{r_A}$ denotes the part of the retain set $\mathcal{D}_r$ that belongs to class $A$, and the entry $C^{w, \mathcal{D}}_{A,B}$ of the confusion matrix stores the number of times that a sample belonging to class $A$ was (mis)classified as belonging to class $B$ in the dataset $\mathcal{D}$ by the model parameterized by $w$. Then, we have: \begin{equation} \label{eq:IC_error} \textsc{IC-Err}(\mathcal{D}, A, B; w) = \frac{\sum_k C^{w, \mathcal{D}}_{A,k} + \sum_{k'} C^{w, \mathcal{D}}_{B,k'} }{|\mathcal{D}_A| + |\mathcal{D}_B|} \end{equation} where $k \neq A, k' \neq B$. So, for example, the `IC test error' column in our tables is computed via \textsc{IC-Err}($\mathcal{D}_t, 0, 1; w$), where $\mathcal{D}_t$ denotes the test set, and 0 and 1 are the two classes confused in our experiments. Analogously, `IC retain error' is computed as \textsc{IC-Err}($\mathcal{D}_r, 0, 1; w$). Finally: \begin{equation} \label{eq:Fgt_error} \textsc{Fgt-Err}(\mathcal{D}, A, B; w) = C^{w, \mathcal{D}}_{A,B} + C^{w, \mathcal{D}}_{B,A} \end{equation} That is, \textsc{Fgt-Err} only measures the misclassification between the two confused classes A and B. So, for example, the `Fgt test error' in our tables is computed as \textsc{Fgt-Err}($\mathcal{D}_t, 0, 1; w$) and analogously `Fgt retain error' is computed as \textsc{Fgt-Err}($\mathcal{D}_r, 0, 1; w$). \textbf{M3 Metrics: MIA attacks.} We now give full details for the MIA attack that we use. As outlined in Section \ref{sec:mia_results}, the attacker is a binary classifier that takes the loss values of the forget set $\mathcal{D}_f$ or a held-out test set $\mathcal{D}_t$ as input. In practice, if the distributions of the forget set and the test set are very different from each other, their loss values will be easily distinguishable. This means that the binary classifier can tell them apart easily, but without having truly learned to infer membership in the training dataset. This makes the attacker's evaluation unreliable. To circumvent this problem, we ought to pick the held-out test set from the same distribution. More specifically, if the forget set consists of examples from the `cat' class of the CIFAR-10 dataset, we use the same class for our held-out test set. In our experiments, we clip the loss values to the range $[-400, +400]$ to remove anomalies. Also, we use the default LogisticRegression() classifier of Python's scikit-learn library as our attack model, and perform cross-validation with 5 random splits. We report the average accuracy over the evaluation part of each of the 5 folds as the MIA score. Ideally, this score is as close as possible to 50\%, indicating that the attacker fails to tell apart the forget set from the test set. \subsection{Ablations and illustration of training dynamics} \label{sec:ablations} \input{sections/figures_tex/ablations} In this section, we illustrate the training dynamics of SCRUB and the importance of different design choices.
As a reminder, the student is initialized from the teacher and subsequently undergoes an alternating sequence of \textit{max-steps} and \textit{min-steps}; the former encouraging the student to move far from the teacher on the forget set, and the latter encouraging it to stay close to the teacher on the retain set. We also found it useful to perform a sequence of additional \textit{min-steps} after the alternating sequence. We now explore the effect of these decisions. First, we show that performing only \textit{max-steps}, by optimizing Equation \ref{eq:max_step}, is not a good solution. Simply pushing the student away from the teacher on the forget set achieves forgetting but unfortunately also hurts the retain and validation set performance (Figure \ref{fig:only_max_dynamics}). Therefore, alternating between \textit{max-steps} and \textit{min-steps} is necessary. However, it is important to find the right balance. For instance, as seen in Figure \ref{fig:scrubs_insufficient_max}, performing too few \textit{max-steps} leads to the unwanted consequence of the forget error dropping. On the other hand, removing the final sequence of only \textit{min-steps} is also harmful, as shown in Figure \ref{fig:scrubs_uncontrolled}, which trains for a larger number of epochs with an equal number of (alternating) \textit{max-steps} and \textit{min-steps} without achieving a good balance at any point throughout the trajectory. In contrast, SCRUB (Figure \ref{fig:scrubs_dynamics}) achieves a good balance of high forget error and low retain and validation error simultaneously. We also ablate the cross-entropy term in Equation \ref{eq:scrubs_objective}, which, as shown in Figure \ref{fig:effect_of_cross_entropy}, provides a small but consistent added protection against degrading performance. We show additional examples of training dynamics in Figures \ref{fig:train_dynamics_cifar5_allcnn}, \ref{fig:train_dynamics_lacuna5_resnet} and \ref{fig:train_dynamics_lacuna5_allcnn}. \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_cifar5_allcnn_f25_errors_0nly_max_steps_10.png} \caption{Performing only max steps (Equation \ref{eq:max_step}).} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_cifar5_allcnn_f25_errors_min_3_max_10.png} \caption{SCRUB with insufficient max steps.} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[scale=0.4]{sections/figures/small_cifar5_allcnn_f25_errors_uncontrolled_both_20.png} \caption{SCRUB with too many max steps.} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[scale=0.4]{sections/figures/small_cifar5_allcnn_f25_errors_normal.png} \caption{SCRUB.} \end{subfigure} \caption{Illustration of training dynamics of SCRUB variants, on CIFAR-5 with All-CNN.
Performing the right interleaving of \textit{min-steps} and \textit{max-steps} is important for achieving a good balance between high forget error and low retain and validation errors.} \label{fig:train_dynamics_cifar5_allcnn} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_resnet_errors_only_max_10.png} \caption{Performing only max steps (Equation \ref{eq:max_step}).} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_resnet_errors_min_3_max_10.png} \caption{SCRUB with insufficient max steps.} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_resnet_errors_uncontrolled_both_20.png} \caption{SCRUB with too many max steps.} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_resnet_errors_normal.png} \caption{SCRUB.} \end{subfigure} \caption{Illustration of training dynamics of SCRUB variants, on Lacuna-5 with ResNet. Performing the right interleaving of \textit{min-steps} and \textit{max-steps} is important for achieving a good balance between high forget error and low retain and validation errors.} \label{fig:train_dynamics_lacuna5_resnet} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_allcnn_errors_only_max_10.png} \caption{Performing only max steps (Equation \ref{eq:max_step}).} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_allcnn_errors_min_3_max_10.png} \caption{SCRUB with insufficient max steps.} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_allcnn_errors_uncontrolled_both_20.png} \caption{SCRUB with too many max steps.} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_allcnn_errors_normal.png} \caption{SCRUB.} \end{subfigure} \caption{Illustration of training dynamics of SCRUB variants, on Lacuna-5 with All-CNN. Performing the right interleaving of \textit{min-steps} and \textit{max-steps} is important for achieving a good balance between high forget error and low retain and validation errors.} \label{fig:train_dynamics_lacuna5_allcnn} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_cifar5_resnet_f25_errors_without_ce.png} \caption{CIFAR-5 with ResNet.} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_cifar5_allcnn_f25_errors_without_ce.png} \caption{CIFAR-5 with All-CNN.} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_resnet_errors_no_ce.png} \caption{Lacuna-5 with ResNet.} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.4]{sections/figures/small_lacuna5_allcnn_errors_no_ce.png} \caption{Lacuna-5 with All-CNN.} \end{subfigure} \caption{Effect of adding the cross-entropy loss in Equation \ref{eq:scrubs_objective}. Dashed lines omit cross-entropy while solid lines use it. 
We find that the addition of cross-entropy offers additional protection for maintaining the model's performance during the unlearning procedure. This sometimes comes at the cost of a smaller forget set error, compared to the forget set error that would have been achieved if cross-entropy were omitted from the loss.} \label{fig:effect_of_cross_entropy} \end{figure} \subsection{Additional results for M1} \label{sec:more_m1_results} \begin{table}[] \centering \begin{tabular}{c | c c | c c | c c | c c | } \toprule & \multicolumn{4}{c|}{CIFAR-10} & \multicolumn{4}{c|}{Lacuna-10} \\ \multirow{2}{*}{Model} & \multicolumn{2}{c|}{ResNet} & \multicolumn{2}{c|}{All-CNN} & \multicolumn{2}{c|}{ResNet} & \multicolumn{2}{c|}{All-CNN} \\ &all&selective&all&selective&all&selective&all&selective \\ \midrule Finetune & 3.8 & 3.09 & 3.33 & 3.03 & 1.7 & 2.03 & 2.16 & 2.00 \\ Fisher & 0.08 & 0.07 & 0.16 & 0.14 & 0.08 & 0.07 & 0.16 & 0.15 \\ NegGrad & 3.4 & 2.96 & 2.30 & 2.97 & 1.66 & 1.5 & 2.41 & 2.27 \\ CF-k & 3.55 & 3.17 & 3.37 & 2.91 & 3.42 & 3.20 & 3.27 & 3.11 \\ EU-k & 1.41 & 1.26 & 1.34 & 1.20 & 1.39 & 1.28 & 1.32 & 1.26 \\ SCRUB & 7.84 & 7.41 & 6.36 & 5.33 & 2.17 & 1.95 & 2.81 & 2.48 \\ \midrule \end{tabular} \caption{ \textbf{Scale-up factor}: the ratio of the runtime of retraining from scratch to the runtime of each given unlearning algorithm. That is, a scale-up value of X for an unlearning algorithm means that the algorithm runs X times faster than retraining from scratch. } \label{tab:speedup} \end{table} \begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c | c c | c c | c c | c c | c c | c c |} \toprule & \multicolumn{6}{c|}{CIFAR-5} & \multicolumn{6}{c|}{Lacuna-5} \\ \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} \\ &mean&std&mean&std&mean&std &mean&std&mean&std&mean&std \\ \midrule Retrain & 24.9 & 2.5 & 0.0 & 0.0 & 28.8 & 5.9 & 5.8 & 0.4 & 0.0 & 0.0 & 4.8 & 3.4 \\ \midrule Original & 24.2 & 2.6 & 0.0 & 0.0 & 0.0 & 0.0 & 5.7 & 0.4 & 0.0 & 0.0 & 0.0 & 0.0 \\ Finetune & 24.3 & 2.4 & 0.0 & 0.0 & 0.0 & 0.0 & 5.6 & 0.3 & 0.0 & 0.0 & 0.0 & 0.0 \\ Fisher & 31.6 & 3.4 & 14.0 & 6.0 & 4.8 & 5.2 & 14.0 & 3.6 & 6.7 & 3.3 & 6.4 & 8.3 \\ NTK & 24.4 & 2.6 & 0.0 & 0.0 & 22.4 & 9.2 & 5.6 & 0.5 & 0.0 & 0.0 & 0.0 & 0.0 \\ NegGrad & 25.5 & 1.1 & 0.0 & 0.0 & 41.3 & 6.1 & 6.1 & 0.7 & 0.0 & 0.0 & 1.3 & 2.3 \\ CF-k & 22.6 & 1.9 & 0.0 & 0.0 & 0.0 & 0.0 & 5.8 & 0.4 & 0.0 & 0.0 & 0.0 & 0.0 \\ EU-k & 23.5 & 1.1 & 0.0 & 0.0 & 10.7 & 2.3 & 5.9 & 0.6 & 0.0 & 0.0 & 0.0 & 0.0 \\ SCRUB & 24.2 & 1.6 & 0.0 & 0.0 & 40.8 & 1.8 & 6.2 & 0.73 & 0.0 & 0.0 & 24.8 & 5.2 \\ \midrule \end{tabular} } \caption{ \textbf{Small-scale} results with ResNet. SCRUB is the top-performer in terms of forgetting with minimal performance degradation.
} \label{tab:small_scale_resnet} \end{table} \begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c | c c | c c | c c | c c | c c | c c |} \toprule & \multicolumn{6}{c|}{CIFAR-5} & \multicolumn{6}{c|}{Lacuna-5} \\ \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} \\ &mean&std&mean&std&mean&std &mean&std&mean&std&mean&std \\ \midrule Retrain & 24.36 & 1.61 & 0.13 & 0.28 & 28.8 & 9.12 & 4.6 & 0.38 & 0.0 & 0.0 & 4.67 & 6.41\\ \midrule Original & 24.08 & 1.86 & 0.17 & 0.38 & 0.0 & 0.0 & 4.53 & 0.47 & 0.0 & 0.0 & 0.0 & 0.0 \\ Finetune & 23.48 & 1.91 & 0.04 & 0.09 & 0.0 & 0.0 & 9.77 & 10.76 & 6.63 & 13.22 & 19.33 & 40.03 \\ Fisher & 42.64 & 6.56 & 31.83 & 10.47 & 15.2 & 16.83 & 52.53 & 13.87 & 51.09 & 14.54 & 39.33 & 40.43\\ NTK & 24.16 & 1.77 & 0.17 & 0.38 & 13.6 & 8.29 & 4.47 & 0.47 & 0.0 & 0.0 & 3.33 & 4.68 \\ NegGrad & 26.07 &1.21& 0.56& 0.49& 36.00& 10.58& 5.27& 0.76& 0.14& 0.12& 12.00& 13.86 \\ CF-k& 22.67 &1.55& 0.00& 0.00& 0.00& 0.00& 4.67 &0.70& 0.00& 0.00& 0.00& 0.00 \\ EU-k& 25.87& 0.64& 3.23& 1.69& 8.00& 6.93& 5.20& 0.20& 0.00& 0.00& 0.00& 0.00 \\ SCRUB & 23.88 & 1.78 & 0.08 & 0.12 & 40.8 & 8.2 & 3.87 & 0.72 & 0.0 & 0.0 & 25.33 & 4.13 \\ \midrule \end{tabular} } \caption{\textbf{Small-scale} results with All-CNN. SCRUB is the top-performer in terms of forgetting with minimal performance degradation.} \label{tab:small_scale_allcnn} \end{table} \begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c | c c | c c | c c | c c | c c | c c |} \toprule & \multicolumn{6}{c|}{CIFAR-10} & \multicolumn{6}{c|}{Lacuna-10} \\ \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} \\ &mean&std&mean&std&mean&std &mean&std&mean&std&mean&std \\ \midrule Retrain & 14.72 & 0.16 & 0.0 & 0.0 & 100.0 & 0.0 & 2.87 & 0.34 & 0.0 & 0.0 & 99.75 & 0.56 \\ \midrule Original & 16.56 & 0.1 & 0.0 & 0.0 & 0.0 & 0.0 & 3.07 & 0.26 & 0.0 & 0.0 & 0.0 & 0.0 \\ Finetune & 16.41 & 0.09 & 0.0 & 0.0 & 0.0 & 0.0 & 3.02 & 0.37 & 0.0 & 0.0 & 0.0 & 0.0 \\ Fisher & 26.42 & 1.41 & 2.45 & 0.84 & 100.0 & 0.0 & 3.33 & 0.54 & 0.0 & 0.0 & 100.0 & 0.0 \\ NegGrad & 17.84 & 1.46& 1.74& 2.55& 91.26& 7.73 & 3.41& 0.17& 0.00& 0.00& 14.90& 1.78 \\ CF-k& 15.31& 0.12& 0.00& 0.00& 0.03& 0.01& 2.89& 0.22& 0.00& 0.00& 0.00& 0.00 \\ EU-k& 18.73& 0.42& 0.00& 0.00& 98.79& 0.18& 3.19& 0.17& 0.01& 0.02& 4.06& 0.83 \\ SCRUB & 15.73 & 0.17 & 0.51 & 0.02 & 100.0 & 0.0 & 3.69 & 0.36 & 0.28 & 0.23 & 100.0 & 0.0 \\ \midrule \end{tabular} } \caption{\textbf{Large-scale, class unlearning} results with ResNet. SCRUB and EU-k are the top-performers in this setting in terms of forgetting with minimal performance degradation. 
Note, however, that EU-k doesn't perform strongly across the board and in particular performs very poorly in selective unlearning (notice the contrast in EU-k's forget error between Figures \ref{fig:cifar10-class-spider} and \ref{fig:cifar10-selective-spider}). Fisher is also a top-performer in terms of forget error in this setting, but on CIFAR it causes a large degradation in test error, as is often observed for this method.} \label{tab:large_scale_class_resnet} \end{table}
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c | c c | c c | c c | c c | c c | c c |} \toprule & \multicolumn{6}{c|}{CIFAR-10} & \multicolumn{6}{c|}{Lacuna-10} \\ \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} \\ &mean&std&mean&std&mean&std &mean&std&mean&std&mean&std \\ \midrule Retrain & 13.97 & 0.19 & 0.0 & 0.0 & 100.0 & 0.0 & 1.59 & 0.36 & 0.0 & 0.0 & 100.0 & 0.0 \\ \midrule Original & 15.56 & 0.25 & 0.0 & 0.0 & 0.0 & 0.0 & 1.56 & 0.33 & 0.0 & 0.0 & 0.0 & 0.0 \\ Finetune & 15.39 & 0.22 & 0.0 & 0.0 & 0.0 & 0.0 & 1.67 & 0.44 & 0.0 & 0.0 & 0.0 & 0.0 \\ Fisher & 27.4 & 2.28 & 3.66 & 1.03 & 99.0 & 0.0 & 1.78 & 0.29 & 0.0 & 0.0 & 89.0 & 0.0 \\ NegGrad & 17.87 & 0.31 & 0.58 & 0.13 & 87.22 & 1.67 & 1.63 & 0.17 & 0.00 & 0.00 & 6.56 & 1.13 \\ CF-k & 14.99 & 0.23 & 0.00 & 0.00 & 0.00 & 0.00 & 1.48 & 0.36 & 0.00 & 0.00 & 0.00 & 0.00 \\ EU-k & 15.30 & 0.69 & 0.13 & 0.14 & 100.00 & 0.00 & 1.74 & 0.45 & 0.00 & 0.00 & 77.19 & 39.51 \\ SCRUB & 15.06 & 0.14 & 0.12 & 0.03 & 100.0 & 0.0 & 2.0 & 0.4 & 0.0 & 0.0 & 100.0 & 0.0 \\ \midrule \end{tabular} } \caption{\textbf{Large-scale, class unlearning} results with All-CNN. SCRUB is the top-performer in terms of forgetting with minimal performance degradation.} \label{tab:large_scale_class_allcnn} \end{table}
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c | c c | c c | c c | c c | c c | c c |} \toprule & \multicolumn{6}{c|}{CIFAR-10} & \multicolumn{6}{c|}{Lacuna-10} \\ \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} \\ &mean&std&mean&std&mean&std &mean&std&mean&std&mean&std \\ \midrule Retrain & 17.4 & 0.14 & 0.0 & 0.0 & 29.67 & 3.21 & 2.7 & 0.2 & 0.0 & 0.0 & 1.0 & 1.0 \\ \midrule Original & 17.36 & 0.14 & 0.0 & 0.0 & 0.0 & 0.0 & 2.73 & 0.15 & 0.0 & 0.0 & 0.0 & 0.0 \\ Finetune & 17.37 & 0.11 & 0.0 & 0.0 & 0.0 & 0.0 & 2.63 & 0.12 & 0.0 & 0.0 & 0.0 & 0.0 \\ Fisher & 21.23 & 0.27 & 2.88 & 0.54 & 3.0 & 2.65 & 3.1 & 0.35 & 0.0 & 0.0 & 0.0 & 0.0 \\ NegGrad & 22.7 & 0.6 & 4.1 & 0.5 & 53.7 & 6.8 & 4.7 & 0.2 & 0.9 & 0.1 & 13.0 & 1.0\\ CF-k & 17.4 & 0.1 & 0.0 & 0.0 & 0.0 & 0.0 & 2.7 & 0.2 & 0.0 & 0.0 & 0.0 & 0.0\\ EU-k & 21.8 & 0.2 & 0.4 & 0.6 & 23.7 & 3.5 & 2.9 & 0.1 & 0.0 & 0.0 & 0.0 & 0.0 \\ SCRUB & 18.04 & 0.2 & 0.0 & 0.0 & 70.33 & 4.16 & 3.0 & 0.0 & 0.0 & 0.0 & 4.67 & 3.06\\ \midrule \end{tabular} } \caption{\textbf{Large-scale, selective unlearning} results with ResNet. SCRUB and NegGrad are the top-performers in terms of forgetting, though NegGrad has worse test performance than SCRUB in both cases.
Note also that NegGrad isn't as consistent at forgetting across settings as SCRUB.} \label{tab:large_scale_selective_resnet} \end{table}
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c | c c | c c | c c | c c | c c | c c |} \toprule & \multicolumn{6}{c|}{CIFAR-10} & \multicolumn{6}{c|}{Lacuna-10} \\ \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} & \multicolumn{2}{c|}{Test error ($\downarrow$)} & \multicolumn{2}{c|}{Retain error ($\downarrow$)} & \multicolumn{2}{c|}{Forget error ($\uparrow$)} \\ &mean&std&mean&std&mean&std &mean&std&mean&std&mean&std \\ \midrule Retrain & 16.47 & 0.21 & 0.0 & 0.0 & 25.67 & 2.31 & 1.6 & 0.44 & 0.0 & 0.0 & 0.67 & 0.58\\ \midrule Original & 16.43 & 0.08 & 0.0 & 0.0 & 0.0 & 0.0 & 1.53 & 0.31 & 0.0 & 0.0 & 0.0 & 0.0\\ Finetune & 16.5 & 0.18 & 0.0 & 0.0 & 0.0 & 0.0 & 1.43 & 0.21 & 0.0 & 0.0 & 0.0 & 0.0\\ Fisher & 21.39 & 1.22 & 4.0 & 1.44 & 13.0 & 11.27 & 1.87 & 0.21 & 0.01 & 0.02 & 0.0 & 0.0\\ NegGrad & 21.36 & 0.34 & 3.23 & 0.37 & 45.33 & 2.89 & 2.77 & 0.25 & 0.40 & 0.07 & 8.67 & 0.58\\ CF-k & 16.29 & 0.07 & 0.00 & 0.00 & 0.00 & 0.00 & 1.53 & 0.31 & 0.00 & 0.00 & 0.00 & 0.00\\ EU-k & 17.62 & 0.61 & 0.11 & 0.11 & 0.33 & 0.58 & 1.83 & 0.47 & 0.00 & 0.00 & 0.00 & 0.00\\ SCRUB & 16.55 & 0.11 & 0.0 & 0.0 & 29.33 & 3.21 & 2.07 & 0.31 & 0.0 & 0.0 & 1.67 & 0.58\\ \midrule \end{tabular} } \caption{\textbf{Large-scale, selective unlearning} results with All-CNN. SCRUB and NegGrad are the top-performers in terms of forgetting, though NegGrad has worse test performance than SCRUB in both cases. Note also that NegGrad isn't as consistent at forgetting across settings as SCRUB, as can be seen in Figure \ref{fig:main_paper_counts}.} \label{tab:large_scale_selective_allcnn} \end{table}
In this section, we provide the results for all scenarios we studied for M1 (ResNet and All-CNN, on both CIFAR and Lacuna, for both small-scale and large-scale), for completeness, in Tables \ref{tab:small_scale_resnet}, \ref{tab:small_scale_allcnn}, \ref{tab:large_scale_class_resnet}, \ref{tab:large_scale_class_allcnn}, \ref{tab:large_scale_selective_resnet}, \ref{tab:large_scale_selective_allcnn}.
\subsection{Additional results for M2: Resolving class confusion via unlearning} \label{sec:more_m2_results}
We show the full results in Tables \ref{tab:conf_cif5_res}, \ref{tab:conf_cif5_allcnn}, \ref{tab:conf_lac5_resnet}, \ref{tab:conf_lac5_allcnn}, \ref{tab:conf_cif10_resnet}, \ref{tab:conf_cif10_allcnn}, \ref{tab:conf_lac10_resnet}, \ref{tab:conf_lac10_allcnn} for all settings. We observe that across the board, SCRUB is a top-performer on this metric too (see the captions of the individual tables for more details on the performance profile).
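To make these metrics concrete, the following is a minimal sketch of how the interclass-confusion (IC) error and the Fgt error can be computed from model predictions. It assumes, as a simplification, that the IC error is the error rate restricted to examples of the two confused classes, and that the Fgt error counts examples of one confused class predicted as the other; the arrays \texttt{y\_true} and \texttt{y\_pred} are hypothetical stand-ins for ground-truth labels and model predictions, so this is an illustration rather than our exact evaluation code.
\begin{verbatim}
import numpy as np

def confusion_metrics(y_true, y_pred, confused=(0, 1)):
    # Sketch, assuming: IC error = error rate restricted to the two
    # confused classes; Fgt error = count of examples of one confused
    # class predicted as the other (residual inter-class confusion).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    a, b = confused
    ic_mask = np.isin(y_true, confused)
    ic_error = 100.0 * np.mean(y_pred[ic_mask] != y_true[ic_mask])
    fgt_error = int(np.sum((y_true == a) & (y_pred == b)) +
                    np.sum((y_true == b) & (y_pred == a)))
    return ic_error, fgt_error

# Toy usage: two of the six confused-class examples are swapped
# between the confused pair, giving IC error 33.3 and Fgt error 2.
y_true = [0, 0, 0, 1, 1, 1, 2, 3]
y_pred = [0, 1, 0, 1, 0, 1, 2, 3]
print(confusion_metrics(y_true, y_pred))
\end{verbatim}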
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c|} \toprule \multirow{2}{*}{model}&\multicolumn{2}{c|}{Test error ($\downarrow$)}&\multicolumn{2}{c|}{Retain error ($\downarrow$)}&\multicolumn{2}{c|}{Forget error ($\uparrow$)}&\multicolumn{2}{c|}{IC test error ($\downarrow$)}&\multicolumn{2}{c|}{IC retain error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt test error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt retain error ($\downarrow$)}\\ &mean&std&mean&std&mean&std&mean&std&mean&std&mean&std&mean&std \\ \midrule Retrain & 26.67 & 2.87 & 0.0 & 0.0 & 90.33 & 1.53 & 24.0 & 1.8 & 0.0 & 0.0 & 18.33 & 4.16 & 0.0 & 0.0 \\ \midrule Original & 41.0 & 2.09 & 0.0 & 0.0 & 0.0 & 0.0 & 56.0 & 3.04 & 0.0 & 0.0 & 92.0 & 7.94 & 0.0 & 0.0 \\ Finetune & 38.13 & 1.42 & 0.0 & 0.0 & 0.0 & 0.0 & 52.0 & 3.12 & 0.0 & 0.0 & 79.33 & 10.07 & 0.0 & 0.0 \\ NegGrad & 36.27 & 0.42 & 0.0 & 0.0 & 12.67 & 21.94 & 47.5 & 5.27 & 0.0 & 0.0 & 69.0 & 13.53 & 0.0 & 0.0 \\ CF-k & 39.6 & 1.64 & 0.0 & 0.0 & 0.0 & 0.0 & 54.83 & 2.02 & 0.0 & 0.0 & 85.33 & 7.02 & 0.0 & 0.0 \\ EU-k & 37.47 & 1.62 & 7.33 & 1.26 & 43.67 & 2.08 & 47.0 & 4.77 & 8.33 & 4.73 & 63.33 & 9.71 & 3.67 & 2.52 \\ Fisher & 44.8 & 2.36 & 21.33 & 3.45 & 32.0 & 11.53 & 51.5 & 7.47 & 26.33 & 9.5 & 79.0 & 3.61 & 20.0 & 7.94 \\ NTK & 32.6 & 2.51 & 0.0 & 0.0 & 60.33 & 0.58 & 37.5 & 4.0 & 0.0 & 0.0 & 52.0 & 10.58 & 0.0 & 0.0 \\ SCRUB & 25.93 & 3.13 & 1.08 & 0.52 & 96.0 & 1.73 & 19.0 & 3.91 & 0.0 & 0.0 & 19.67 & 7.51 & 0.0 & 0.0 \\ \midrule \end{tabular} } \caption{Confusion metrics on CIFAR-5 with ResNet. (Confused class 0,1; 50-50 samples). SCRUB is the best-performer by far in terms of eliminating the confusion via unlearning (see the IC error and Fgt error columns), while not hurting performance for other classes (see e.g. the usual Error metrics in the first 3 groups of columns).} \label{tab:conf_cif5_res} \end{table}
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c|} \toprule \multirow{2}{*}{model}&\multicolumn{2}{c|}{Test error ($\downarrow$)}&\multicolumn{2}{c|}{Retain error ($\downarrow$)}&\multicolumn{2}{c|}{Forget error ($\uparrow$)}&\multicolumn{2}{c|}{IC test error ($\downarrow$)}&\multicolumn{2}{c|}{IC retain error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt test error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt retain error ($\downarrow$)}\\ &mean&std&mean&std&mean&std&mean&std&mean&std&mean&std&mean&std \\ \midrule Retrain & 24.4 & 2.75 & 0.0 & 0.0 & 90.67 & 4.04 & 19.0 & 1.32 & 0.0 & 0.0 & 11.33 & 4.62 & 0.0 & 0.0 \\ \midrule Original & 37.07 & 4.67 & 1.5 & 2.6 & 5.67 & 9.81 & 49.0 & 4.77 & 6.0 & 10.39 & 80.67 & 12.58 & 6.0 & 10.39 \\ Finetune & 34.33 & 3.35 & 0.0 & 0.0 & 3.0 & 5.2 & 43.67 & 7.29 & 0.0 & 0.0 & 67.33 & 16.04 & 0.0 & 0.0 \\ NegGrad & 33.53 & 4.47 & 0.0 & 0.0 & 13.33 & 21.36 & 42.33 & 11.34 & 0.0 & 0.0 & 62.0 & 22.65 & 0.0 & 0.0 \\ CF-k & 36.13 & 4.21 & 0.0 & 0.0 & 0.33 & 0.58 & 47.83 & 5.8 & 0.0 & 0.0 & 76.33 & 14.43 & 0.0 & 0.0 \\ EU-k & 51.6 & 1.0 & 27.67 & 3.5 & 52.67 & 6.03 & 59.5 & 5.22 & 38.33 & 6.66 & 68.67 & 15.57 & 19.67 & 10.41 \\ Fisher & 51.93 & 2.95 & 35.17 & 3.92 & 31.0 & 11.53 & 56.83 & 8.69 & 31.67 & 14.01 & 78.33 & 15.53 & 17.67 & 11.5 \\ NTK & 32.2 & 2.84 & 0.75 & 1.3 & 43.33 & 14.15 & 36.67 & 4.07 & 3.0 & 5.2 & 54.33 & 9.02 & 3.0 & 5.2 \\ SCRUB & 25.0 & 3.14 & 0.0 & 0.0 & 93.33 & 2.52 & 26.0 & 4.44 & 0.0 & 0.0 & 18.0 & 11.14 & 0.0 & 0.0 \\ \midrule \end{tabular} } \caption{Confusion metrics on CIFAR-5 with All-CNN.
(Confused class 0,1; 50-50 samples). SCRUB is the best-performer by far in terms of eliminating the confusion via unlearning (see the IC error and Fgt error columns), while not hurting performance for other classes (see e.g. the usual Error metrics in the first 3 groups of columns).} \label{tab:conf_cif5_allcnn} \end{table} \begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c|} \toprule \multirow{2}{*}{model}&\multicolumn{2}{c|}{Test error ($\downarrow$)}&\multicolumn{2}{c|}{Retain error ($\downarrow$)}&\multicolumn{2}{c|}{Forget error ($\uparrow$)}&\multicolumn{2}{c|}{IC test error ($\downarrow$)}&\multicolumn{2}{c|}{IC retain error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt test error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt retain error ($\downarrow$)}\\ &mean&std&mean&std&mean&std&mean&std&mean&std&mean&std&mean&std \\ \midrule Retrain & 6.0 & 0.2 & 0.0 & 0.0 & 99.67 & 0.58 & 7.17 & 2.57 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \midrule Original & 27.07 & 3.33 & 1.67 & 0.88 & 4.33 & 1.53 & 57.5 & 6.26 & 6.67 & 3.51 & 108.0 & 14.18 & 6.67 & 3.51 \\ Finetune & 18.8 & 4.26 & 0.0 & 0.0 & 14.67 & 6.03 & 37.67 & 11.15 & 0.0 & 0.0 & 63.67 & 22.01 & 0.0 & 0.0 \\ NegGrad & 17.8 & 2.95 & 1.67 & 0.72 & 55.33 & 2.08 & 33.17 & 5.25 & 5.33 & 1.53 & 56.67 & 12.9 & 4.33 & 1.53 \\ CF-k & 22.27 & 4.31 & 0.08 & 0.14 & 10.67 & 5.03 & 46.33 & 10.97 & 0.33 & 0.58 & 81.67 & 23.01 & 0.33 & 0.58 \\ EU-k & 15.27 & 3.19 & 0.83 & 0.38 & 62.0 & 12.49 & 29.33 & 9.0 & 2.33 & 1.53 & 43.67 & 16.29 & 0.33 & 0.58 \\ Fisher & 35.87 & 3.33 & 17.75 & 3.78 & 27.33 & 3.79 & 60.0 & 5.27 & 31.0 & 7.94 & 109.0 & 14.53 & 30.0 & 7.0 \\ NTK & 14.53 & 5.22 & 0.0 & 0.0 & 51.67 & 23.18 & 27.17 & 11.3 & 0.0 & 0.0 & 43.33 & 25.32 & 0.0 & 0.0 \\ SCRUB & 8.47 & 1.17 & 0.33 & 0.14 & 96.0 & 1.0 & 11.33 & 3.82 & 1.33 & 0.58 & 9.33 & 1.53 & 1.33 & 0.58 \\ \midrule \end{tabular} } \caption{Confusion metrics on Lacuna-5 with ResNet. (Confused class 0,1; 50-50 samples). SCRUB is the best-performer by far in terms of eliminating the confusion via unlearning (see the IC error and Fgt error columns), while not hurting performance for other classes (see e.g. the usual Error metrics in the first 3 groups of columns). 
NTK is in some cases able to resolve confusion, but not consistently, and it also suffers from a higher test error.} \label{tab:conf_lac5_resnet} \end{table}
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c|} \toprule \multirow{2}{*}{model}&\multicolumn{2}{c|}{Test error ($\downarrow$)}&\multicolumn{2}{c|}{Retain error ($\downarrow$)}&\multicolumn{2}{c|}{Forget error ($\uparrow$)}&\multicolumn{2}{c|}{IC test error ($\downarrow$)}&\multicolumn{2}{c|}{IC retain error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt test error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt retain error ($\downarrow$)}\\ &mean&std&mean&std&mean&std&mean&std&mean&std&mean&std&mean&std \\ \midrule Retrain & 4.2 & 0.87 & 0.0 & 0.0 & 100.0 & 0.0 & 5.33 & 2.25 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \midrule Original & 25.47 & 2.32 & 5.75 & 5.63 & 20.33 & 25.74 & 56.17 & 4.93 & 23.0 & 22.54 & 105.67 & 8.08 & 23.0 & 22.54 \\ Finetune & 12.8 & 2.8 & 0.0 & 0.0 & 23.0 & 7.94 & 25.83 & 7.75 & 0.0 & 0.0 & 39.67 & 12.74 & 0.0 & 0.0 \\ NegGrad & 12.8 & 9.06 & 2.5 & 3.12 & 90.0 & 6.56 & 20.33 & 17.04 & 5.0 & 6.24 & 12.67 & 11.68 & 2.67 & 3.79 \\ CF-k & 21.27 & 1.63 & 0.58 & 0.8 & 9.33 & 0.58 & 47.0 & 4.58 & 2.33 & 3.21 & 82.67 & 10.12 & 2.33 & 3.21 \\ EU-k & 17.0 & 8.91 & 3.92 & 3.99 & 92.33 & 4.93 & 35.0 & 18.26 & 13.0 & 11.36 & 3.67 & 4.73 & 0.0 & 0.0 \\ Fisher & 49.6 & 4.73 & 39.25 & 7.45 & 40.0 & 9.54 & 57.67 & 10.79 & 42.33 & 11.59 & 88.67 & 11.68 & 29.67 & 16.86 \\ NTK & 12.87 & 6.63 & 2.83 & 4.91 & 72.33 & 12.06 & 25.5 & 17.88 & 11.33 & 19.63 & 35.67 & 24.03 & 10.0 & 17.32 \\ SCRUB & 3.87 & 0.7 & 0.0 & 0.0 & 100.0 & 0.0 & 4.33 & 1.26 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \midrule \end{tabular} } \caption{Confusion metrics on Lacuna-5 with All-CNN. (Confused class 0,1; 50-50 samples). SCRUB is the best-performer by far in terms of eliminating the confusion via unlearning (see the IC error and Fgt error columns), while not hurting performance for other classes (see e.g. the usual Error metrics in the first 3 groups of columns).} \label{tab:conf_lac5_allcnn} \end{table}
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c|} \toprule \multirow{2}{*}{model}&\multicolumn{2}{c|}{Test error ($\downarrow$)}&\multicolumn{2}{c|}{Retain error ($\downarrow$)}&\multicolumn{2}{c|}{Forget error ($\uparrow$)}&\multicolumn{2}{c|}{IC test error ($\downarrow$)}&\multicolumn{2}{c|}{IC retain error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt test error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt retain error ($\downarrow$)}\\ &mean&std&mean&std&mean&std&mean&std&mean&std&mean&std&mean&std \\ \midrule Retrain & 18.7 & 0.07 & 0.0 & 0.0 & 98.57 & 0.28 & 14.78 & 0.18 & 0.0 & 0.0 & 31.33 & 2.08 & 0.0 & 0.0 \\ \midrule Original & 21.86 & 0.37 & 0.0 & 0.0 & 0.0 & 0.0 & 31.23 & 0.45 & 0.0 & 0.0 & 356.0 & 11.53 & 0.0 & 0.0 \\ Finetune & 20.85 & 0.37 & 0.0 & 0.0 & 0.0 & 0.0 & 26.75 & 0.48 & 0.0 & 0.0 & 255.0 & 10.58 & 0.0 & 0.0 \\ NegGrad & 23.41 & 0.32 & 3.87 & 0.31 & 80.07 & 6.77 & 41.08 & 0.6 & 20.29 & 1.52 & 46.0 & 8.72 & 0.67 & 1.15 \\ CF-k & 20.93 & 0.38 & 0.0 & 0.0 & 0.0 & 0.0 & 27.27 & 0.76 & 0.0 & 0.0 & 267.33 & 16.17 & 0.0 & 0.0 \\ EU-k & 20.03 & 0.19 & 0.25 & 0.08 & 95.55 & 0.54 & 17.85 & 0.67 & 0.18 & 0.03 & 53.0 & 7.94 & 3.33 & 2.31 \\ SCRUB & 18.01 & 0.18 & 0.02 & 0.01 & 95.45 & 0.26 & 15.07 & 0.99 & 0.04 & 0.03 & 30.33 & 3.79 & 0.33 & 0.58 \\ \midrule \end{tabular} } \caption{Confusion metrics on CIFAR-10 with ResNet.
(Confused class 0,1; 2000-2000 samples).} \label{tab:conf_cif10_resnet} \end{table}
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c|} \toprule \multirow{2}{*}{model}&\multicolumn{2}{c|}{Test error ($\downarrow$)}&\multicolumn{2}{c|}{Retain error ($\downarrow$)}&\multicolumn{2}{c|}{Forget error ($\uparrow$)}&\multicolumn{2}{c|}{IC test error ($\downarrow$)}&\multicolumn{2}{c|}{IC retain error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt test error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt retain error ($\downarrow$)}\\ &mean&std&mean&std&mean&std&mean&std&mean&std&mean&std&mean&std \\ \midrule Retrain & 16.43 & 0.03 & 0.0 & 0.0 & 98.42 & 0.15 & 14.37 & 0.24 & 0.0 & 0.0 & 23.67 & 2.31 & 0.0 & 0.0 \\ \midrule Original & 19.95 & 0.23 & 0.0 & 0.0 & 0.0 & 0.0 & 30.18 & 0.66 & 0.0 & 0.0 & 348.67 & 13.58 & 0.0 & 0.0 \\ Finetune & 18.72 & 0.11 & 0.0 & 0.0 & 1.05 & 0.61 & 24.33 & 0.2 & 0.0 & 0.0 & 223.67 & 6.66 & 0.0 & 0.0 \\ NegGrad & 21.74 & 0.44 & 4.48 & 0.34 & 87.65 & 2.98 & 40.05 & 0.44 & 21.8 & 0.66 & 44.0 & 5.2 & 2.33 & 3.21 \\ CF-k & 19.31 & 0.23 & 0.0 & 0.0 & 0.0 & 0.0 & 27.45 & 0.61 & 0.0 & 0.0 & 294.0 & 4.36 & 0.0 & 0.0 \\ EU-k & 17.66 & 0.23 & 1.36 & 0.19 & 87.9 & 1.28 & 16.82 & 0.79 & 2.89 & 0.46 & 63.67 & 8.62 & 91.67 & 12.58 \\ SCRUB & 15.92 & 0.17 & 0.2 & 0.06 & 87.47 & 1.46 & 14.98 & 0.13 & 0.39 & 0.15 & 54.0 & 3.61 & 9.67 & 2.52 \\ \midrule \end{tabular} } \caption{Confusion metrics on CIFAR-10 with All-CNN. (Confused class 0,1; 2000-2000 samples).} \label{tab:conf_cif10_allcnn} \end{table}
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c|} \toprule \multirow{2}{*}{model}&\multicolumn{2}{c|}{Test error ($\downarrow$)}&\multicolumn{2}{c|}{Retain error ($\downarrow$)}&\multicolumn{2}{c|}{Forget error ($\uparrow$)}&\multicolumn{2}{c|}{IC test error ($\downarrow$)}&\multicolumn{2}{c|}{IC retain error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt test error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt retain error ($\downarrow$)}\\ &mean&std&mean&std&mean&std&mean&std&mean&std&mean&std&mean&std \\ \midrule Retrain & 2.43 & 0.32 & 0.0 & 0.0 & 99.83 & 0.29 & 3.33 & 0.58 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \midrule Original & 7.37 & 0.31 & 1.21 & 0.12 & 15.83 & 4.19 & 27.67 & 1.26 & 8.26 & 0.8 & 46.0 & 4.36 & 36.33 & 3.51 \\ Finetune & 4.17 & 0.5 & 0.0 & 0.0 & 56.83 & 9.44 & 11.17 & 1.76 & 0.0 & 0.0 & 15.67 & 3.51 & 0.0 & 0.0 \\ NegGrad & 5.63 & 0.38 & 0.31 & 0.22 & 71.33 & 9.88 & 19.33 & 1.04 & 2.12 & 1.51 & 8.33 & 2.31 & 0.0 & 0.0 \\ CF-k & 5.4 & 0.4 & 0.07 & 0.06 & 33.83 & 3.33 & 17.33 & 1.76 & 0.45 & 0.39 & 27.67 & 3.51 & 2.0 & 1.73 \\ EU-k & 3.0 & 0.26 & 0.0 & 0.0 & 90.17 & 4.65 & 6.0 & 1.8 & 0.0 & 0.0 & 2.0 & 2.65 & 0.0 & 0.0 \\ SCRUB & 3.07 & 0.59 & 0.0 & 0.0 & 98.5 & 0.5 & 6.83 & 1.26 & 0.0 & 0.0 & 0.67 & 0.58 & 0.0 & 0.0 \\ \midrule \end{tabular} } \caption{Confusion metrics on Lacuna-10 with ResNet.
(Confused class 0,1; 200-200 samples).} \label{tab:conf_lac10_resnet} \end{table}
\begin{table}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c|} \toprule \multirow{2}{*}{model}&\multicolumn{2}{c|}{Test error ($\downarrow$)}&\multicolumn{2}{c|}{Retain error ($\downarrow$)}&\multicolumn{2}{c|}{Forget error ($\uparrow$)}&\multicolumn{2}{c|}{IC test error ($\downarrow$)}&\multicolumn{2}{c|}{IC retain error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt test error ($\downarrow$)}&\multicolumn{2}{c|}{Fgt retain error ($\downarrow$)}\\ &mean&std&mean&std&mean&std&mean&std&mean&std&mean&std&mean&std \\ \midrule Retrain & 2.13 & 0.25 & 0.0 & 0.0 & 99.83 & 0.29 & 2.5 & 1.32 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \midrule Original & 7.83 & 0.55 & 1.21 & 0.52 & 16.0 & 4.92 & 31.33 & 1.76 & 8.26 & 3.53 & 56.33 & 3.79 & 36.33 & 15.53 \\ Finetune & 3.0 & 0.7 & 0.0 & 0.0 & 74.5 & 6.08 & 6.5 & 2.0 & 0.0 & 0.0 & 9.0 & 3.0 & 0.0 & 0.0 \\ NegGrad & 4.3 & 0.52 & 0.4 & 0.06 & 89.67 & 4.25 & 15.33 & 2.47 & 2.73 & 0.39 & 4.67 & 3.21 & 0.0 & 0.0 \\ CF-k & 5.27 & 0.47 & 0.11 & 0.07 & 33.33 & 1.61 & 18.5 & 2.29 & 0.76 & 0.47 & 31.33 & 3.51 & 3.33 & 2.08 \\ EU-k & 2.53 & 0.67 & 0.09 & 0.02 & 97.83 & 2.08 & 5.17 & 1.04 & 0.38 & 0.35 & 0.33 & 0.58 & 0.67 & 0.58 \\ SCRUB & 2.1 & 0.4 & 0.0 & 0.0 & 97.5 & 1.73 & 4.17 & 0.58 & 0.0 & 0.0 & 0.33 & 0.58 & 0.0 & 0.0 \\ \midrule \end{tabular} } \caption{Confusion metrics on Lacuna-10 with All-CNN. (Confused class 0,1; 200-200 samples).} \label{tab:conf_lac10_allcnn} \end{table}
\subsection{Additional results for M3} \label{sec:more_m3_results}
We present MIA results for all settings in Tables \ref{tab:mia_allcnn_cifar_selective}, \ref{tab:mia_allcnn_cifar_class}, \ref{tab:mia_resnet_cifar_selective}, \ref{tab:mia_resnet_cifar_class}, \ref{tab:mia_allcnn_lacuna_selective}, \ref{tab:mia_allcnn_lacuna_class}, \ref{tab:mia_resnet_lacuna_selective}, \ref{tab:mia_resnet_lacuna_class}. We find that SCRUB, especially when equipped with its rewinding procedure, consistently provides a strong defense against MIAs.
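As a rough illustration of this evaluation protocol, below is a minimal sketch of a loss-based membership-inference attack, assuming per-example losses have already been computed on the forget set and on held-out test examples: a binary attacker is trained to distinguish the two, and an accuracy near 50\% (chance level) indicates a successful defense. The attacker choice and the synthetic losses here are illustrative assumptions, not our exact attack code.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def mia_accuracy(forget_losses, test_losses):
    # Label forget-set losses as "member" (1) and test-set losses as
    # "non-member" (0); report the cross-validated accuracy of a
    # logistic-regression attacker. Near 50% = chance = well defended.
    X = np.concatenate([forget_losses, test_losses]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(forget_losses)),
                        np.zeros(len(test_losses))])
    scores = cross_val_score(LogisticRegression(), X, y, cv=5)
    return 100.0 * scores.mean()

# Hypothetical losses: a well-defended model yields similar loss
# distributions on forget and test examples, so MIA accuracy ~ 50%.
rng = np.random.default_rng(0)
forget_losses = rng.normal(1.0, 0.5, 300)
test_losses = rng.normal(1.0, 0.5, 300)
print(f"MIA accuracy: {mia_accuracy(forget_losses, test_losses):.1f}%")
\end{verbatim}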
\begin{table}[] \centering \begin{tabular}{c | c c | c c | c c | c c |} \toprule \multirow{2}{*}{method}&\multicolumn{2}{c}{Test error}&\multicolumn{2}{c}{Forget error}&\multicolumn{2}{c}{Retain error}&\multicolumn{2}{c}{MIA} \\ &mean&std&mean&std&mean&std&mean&std \\ \toprule Retrain&16.71&0.05&26.67&3.09&0.00&0.00&51.33&6.13 \\ \midrule Original&16.71&0.07&0.00&0.00&0.00&0.00&68.67&3.09 \\ Finetune&16.86&0.13&0.00&0.00&0.00&0.00&69.33&2.05\\ NegGrad&21.65&0.40&47.00&3.74&4.54&0.70&73.00&1.41\\ CF-k&16.82&0.03&0.00&0.00&0.00&0.00&69.67&1.89\\ EU-k&18.44&0.21&0.33&0.47&0.32&0.02&66.00&2.94\\ SCRUB&17.01&0.20&33.00&5.89&0.00&0.00&51.00&1.41\\ SCRUB+R&16.88&0.19&26.33&4.50&0.00&0.00&49.33&2.49\\ \end{tabular} \caption{MIA for All-CNN architecture on CIFAR-10 for selective unlearning.} \label{tab:mia_allcnn_cifar_selective} \end{table} \begin{table}[] \centering \begin{tabular}{c | c c | c c | c c | c c |} \toprule \multirow{2}{*}{method}&\multicolumn{2}{c}{Test error}&\multicolumn{2}{c}{Forget error}&\multicolumn{2}{c}{Retain error}&\multicolumn{2}{c}{MIA} \\ &mean&std&mean&std&mean&std&mean&std \\ \toprule Retrain&13.98&0.07&100.00&0.00&0.00&0.00&48.73&0.24\\ \midrule Original&15.70&0.09&0.00&0.00&0.00&0.00&71.40&0.70\\ Finetune&14.53&0.13&1.31&0.54&0.00&0.00&74.97&1.27\\ NegGrad&17.04&0.11&59.91&1.53&0.43&0.09&70.03&1.92\\ CF-k&15.72&0.06&0.00&0.00&0.00&0.00&72.93&1.06\\ EU-k&15.76&0.28&100.00&0.00&0.24&0.02&51.60&1.22\\ SCRUB&14.93&0.17&100.00&0.00&0.09&0.02&54.30&2.24\\ SCRUB+R&14.93&0.17&100.00&0.00&0.09&0.02&54.30&2.24\\ \end{tabular} \caption{MIA for All-CNN architecture on CIFAR-10 for class unlearning.} \label{tab:mia_allcnn_cifar_class} \end{table} \begin{table}[] \centering \begin{tabular}{c | c c | c c | c c | c c |} \toprule \multirow{2}{*}{method}&\multicolumn{2}{c}{Test error}&\multicolumn{2}{c}{Forget error}&\multicolumn{2}{c}{Retain error}&\multicolumn{2}{c}{MIA} \\ &mean&std&mean&std&mean&std&mean&std \\ \toprule Retrain&17.38&0.15&29.33&2.49&0.00&0.00&54.00&1.63\\ \midrule Original&17.41&0.15&0.00&0.00&0.00&0.00&65.33&0.47\\ Finetune&17.48&0.16&0.00&0.00&0.00&0.00&64.00&0.82\\ NegGrad&21.69&0.07&45.33&2.62&3.94&0.43&66.67&1.70\\ CF-k&17.53&0.19&0.00&0.00&0.00&0.00&65.00&0.00\\ EU-k&19.77&0.04&13.67&0.47&0.06&0.01&53.00&3.27\\ SCRUB&17.01&0.03&71.67&0.94&0.01&0.01&78.00&2.45\\ SCRUB+R&17.54&0.28&19.33&14.64&0.01&0.01&58.67&1.89\\ \end{tabular} \caption{MIA for ResNet architecture on CIFAR-10 for selective unlearning.} \label{tab:mia_resnet_cifar_selective} \end{table} \begin{table}[] \centering \begin{tabular}{c | c c | c c | c c | c c |} \toprule \multirow{2}{*}{method}&\multicolumn{2}{c}{Test error}&\multicolumn{2}{c}{Forget error}&\multicolumn{2}{c}{Retain error}&\multicolumn{2}{c}{MIA} \\ &mean&std&mean&std&mean&std&mean&std \\ \toprule Retrain&14.69&0.10&100.00&0.00&0.00&0.00&49.33&1.67\\ \midrule Original&16.33&0.14&0.00&0.00&0.00&0.00&71.10&0.67\\ Finetune&15.10&0.16&0.33&0.17&0.00&0.00&75.57&0.69\\ NegGrad&17.41&0.09&61.00&1.14&0.44&0.05&69.57&1.19\\ CF-k&15.29&0.02&0.04&0.04&0.00&0.00&75.73&0.34\\ EU-k&17.05&0.07&97.48&0.28&0.05&0.01&54.20&2.27\\ SCRUB&15.33&0.06&100.00&0.00&0.08&0.01&52.20&1.71\\ SCRUB+R&15.33&0.06&100.00&0.00&0.08&0.01&52.20&1.71\\ \end{tabular} \caption{MIA for ResNet architecture on CIFAR-10 for class unlearning.} \label{tab:mia_resnet_cifar_class} \end{table} \begin{table}[] \centering \begin{tabular}{c | c c | c c | c c | c c |} \toprule \multirow{2}{*}{method}&\multicolumn{2}{c}{Test error}&\multicolumn{2}{c}{Forget 
error}&\multicolumn{2}{c}{Retain error}&\multicolumn{2}{c}{MIA} \\ &mean&std&mean&std&mean&std&mean&std \\ \toprule Retrain&1.50&0.08&0.33&0.47&0.00&0.00&52.00&2.16\\ \midrule Original&1.57&0.24&0.00&0.00&0.00&0.00&59.00&2.16 \\ Finetune&1.40&0.16&0.00&0.00&0.00&0.00&57.33&3.30\\ NegGrad&3.60&0.14&14.33&1.25&0.87&0.07&51.00&1.63\\ CF-k&1.57&0.12&0.00&0.00&0.00&0.00&58.33&2.49\\ EU-k&3.90&1.47&0.00&0.00&0.76&0.63&52.00&3.56\\ SCRUB&1.67&0.19&0.00&0.00&0.00&0.00&57.67&0.94\\ SCRUB+R&1.67&0.19&0.00&0.00&0.00&0.00&57.67&0.94\\ \end{tabular} \caption{MIA for All-CNN architecture on Lacuna-10 for selective unlearning.} \label{tab:mia_allcnn_lacuna_selective} \end{table} \begin{table}[] \centering \begin{tabular}{c | c c | c c | c c | c c |} \toprule \multirow{2}{*}{method}&\multicolumn{2}{c}{Test error}&\multicolumn{2}{c}{Forget error}&\multicolumn{2}{c}{Retain error}&\multicolumn{2}{c}{MIA} \\ &mean&std&mean&std&mean&std&mean&std \\ \toprule Retrain&1.67&0.09&100.00&0.00&0.00&0.00&55.67&2.62\\ \midrule Original&1.70&0.21&0.00&0.00&0.00&0.00&58.00&1.63 \\ Finetune&1.67&0.27&0.00&0.00&0.00&0.00&56.33&1.25\\ NegGrad&2.00&0.00&14.27&0.74&0.00&0.00&54.33&2.05\\ CF-k&2.07&0.14&0.00&0.00&0.00&0.00&52.33&2.05\\ EU-k&4.15&1.22&62.08&44.26&0.81&0.53&52.67&3.68\\ SCRUB&1.96&0.34&100.00&0.00&0.00&0.00&50.33&2.62\\ SCRUB+R&1.96&0.34&100.00&0.00&0.00&0.00&50.33&2.62\\ \end{tabular} \caption{MIA for All-CNN architecture on Lacuna-10 for class unlearning.} \label{tab:mia_allcnn_lacuna_class} \end{table} \begin{table}[] \centering \begin{tabular}{c | c c | c c | c c | c c |} \toprule \multirow{2}{*}{method}&\multicolumn{2}{c}{Test error}&\multicolumn{2}{c}{Forget error}&\multicolumn{2}{c}{Retain error}&\multicolumn{2}{c}{MIA} \\ &mean&std&mean&std&mean&std&mean&std \\ \toprule Retrain&2.50&0.24&1.67&0.94&0.00&0.00&49.67&3.09\\ \midrule Original&2.53&0.25&0.00&0.00&0.00&0.00&56.67&1.70 \\ Finetune&2.67&0.05&0.00&0.00&0.00&0.00&53.67&0.94\\ NegGrad&4.30&0.43&12.67&3.30&0.95&0.08&54.00&2.16\\ CF-k&2.47&0.25&0.00&0.00&0.00&0.00&56.00&0.82\\ EU-k&2.60&0.00&0.00&0.00&0.03&0.00&56.00&2.83\\ SCRUB&2.97&0.25&6.00&3.27&0.00&0.00&50.67&4.03\\ SCRUB+R&2.97&0.25&6.00&3.27&0.00&0.00&50.67&4.03\\ \end{tabular} \caption{MIA for ResNet architecture on Lacuna-10 for selective unlearning.} \label{tab:mia_resnet_lacuna_selective} \end{table} \begin{table}[] \centering \begin{tabular}{c | c c | c c | c c | c c |} \toprule \multirow{2}{*}{method}&\multicolumn{2}{c}{Test error}&\multicolumn{2}{c}{Forget error}&\multicolumn{2}{c}{Retain error}&\multicolumn{2}{c}{MIA} \\ &mean&std&mean&std&mean&std&mean&std \\ \toprule Retrain&2.52&0.19&100.00&0.00&0.00&0.00&55.00&2.94\\ \midrule Original&2.81&0.28&0.00&0.00&0.00&0.00&56.00&2.45\\ Finetune&3.04&0.19&0.00&0.00&0.00&0.00&54.67&1.25\\ NegGrad&2.74&0.26&9.48&0.64&0.00&0.00&53.67&4.03\\ CF-k&2.81&0.28&0.00&0.00&0.00&0.00&56.00&2.45\\ EU-k&2.48&0.14&7.71&2.52&0.00&0.00&54.33&3.09\\ SCRUB&3.26&0.38&99.90&0.15&0.07&0.05&54.33&2.49\\ SCRUB+R&3.26&0.38&99.90&0.15&0.07&0.05&54.33&2.49\\ \end{tabular} \caption{MIA for ResNet architecture on Lacuna-10 for class unlearning.} \label{tab:mia_resnet_lacuna_class} \end{table}
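Finally, to complement the SCRUB+R rows in the tables above, below is a minimal sketch of the rewinding selection step, under the simplifying assumptions that checkpoints are saved during the unlearning run together with their forget-set errors, and that we return the checkpoint whose forget error is closest to a reference error estimated on held-out examples (standing in for the error a model retrained from scratch would make on the forget set). All names and values are illustrative, not our exact implementation.
\begin{verbatim}
def rewind(checkpoints, forget_errors, reference_error):
    # Among saved checkpoints, pick the one whose forget-set error is
    # closest to the reference, so the forget set looks neither
    # memorised (error too low) nor conspicuously erased (too high).
    gaps = [abs(e - reference_error) for e in forget_errors]
    best = min(range(len(checkpoints)), key=lambda i: gaps[i])
    return checkpoints[best]

# Hypothetical run: forget error (%) per saved checkpoint, with a
# reference error of 26% estimated on held-out validation examples.
ckpts = ["epoch1", "epoch2", "epoch3", "epoch4"]
errs = [5.0, 33.0, 71.7, 24.9]
print(rewind(ckpts, errs, reference_error=26.0))  # -> "epoch4"
\end{verbatim}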